CN114003961B - Deep neural network reasoning method with privacy protection - Google Patents


Info

Publication number: CN114003961B
Authority: CN (China)
Prior art keywords: matrix, layer, client, result, neural network
Legal status: Active (granted) (the legal status is an assumption by Google Patents, not a legal conclusion)
Application number: CN202111472835.3A
Other languages: Chinese (zh)
Other versions: CN114003961A
Inventors: 于佳 (Yu Jia), 郭丽 (Guo Li), 郝蓉 (Hao Rong)
Original and current assignee: Qingdao University
Priority and filing date: 2021-12-03; application filed by Qingdao University; priority to CN202111472835.3A
Publication of CN114003961A, application granted, publication of CN114003961B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models


Abstract

The invention discloses a deep neural network reasoning method with privacy protection, which comprises the following steps: the client generates a key; the client encrypts the input data matrix and the weight matrix of the trained deep neural network model with the key and sends the encrypted matrices to the edge servers; the edge servers perform the linear-layer calculation on the input data matrix using the received weight matrix of the deep neural network model and return the result to the client; the client verifies the returned result, accepting it if it is correct and rejecting it otherwise; for a result that passes verification, the client recovers the actual output of the linear layer using the locally stored key and the bias matrix; the client performs the nonlinear-layer calculation locally, takes the result as the input of the next linear layer, and repeats these steps until the final reasoning result is obtained. The invention saves the user's computation cost and guarantees the privacy of both the user data and the model.

Description

Deep neural network reasoning method with privacy protection
Technical Field
The invention relates to the technical field of information security, in particular to a deep neural network reasoning method with privacy protection.
Background
With the development of machine learning and the rise of artificial intelligence, many research fields have attempted to realize artificial intelligence with machine learning algorithms, for example generative adversarial networks for image restoration and deep learning frameworks for image recognition. However, the inference tasks of complex deep neural networks typically involve a huge number of computational operations: based on some popular deep neural network architectures, a single inference task for visual detection requires billions of operations, which makes executing these operations efficiently on resource-limited Internet of Things devices a challenge.
The rapid development of edge computing provides an effective way for resource-constrained devices to perform complex deep neural network inference. Outsourced computation is one of the most important applications of edge computing: it allows resource-constrained users to outsource complex computations to edge servers, paying only for the computing resources they use. According to who provides the deep neural network model, existing outsourced deep neural network inference works can be divided into two categories: 1) The user submits the data to be inferred and the cloud server/edge server provides a trained deep neural network model, a service known as "inference as a service". 2) The trained model and the data to be inferred are both provided by the same user, and the cloud server/edge server only provides computing resources. In both settings, a user with limited resources can exploit the computing power of the cloud server/edge server to complete the complex operations of the deep neural network inference phase.
While users benefit from outsourcing deep neural network inference to reduce their computational and storage burden, protecting the privacy of user data and the validity of the inference results is a challenging problem. Some of the data collected by terminal devices, such as medical diagnostic data, can be very sensitive; once such data is leaked, it causes serious trouble for the user. In addition, external factors, such as hacker attacks on cloud servers/edge servers, may invalidate the computed results. Making edge-computing-assisted deep neural network inference safer and more efficient has therefore become an urgent problem.
There are two common techniques for privacy-preserving deep neural network inference: homomorphic encryption and secure multi-party computation. Schemes built on these two techniques are strongly secure but computationally inefficient. To avoid the complexity and inefficiency of homomorphic encryption and secure multi-party computation operations, a new dual-edge-server framework has emerged that performs deep neural network inference efficiently under privacy protection using a lightweight encryption scheme. It greatly improves inference efficiency and significantly reduces the computing energy consumption of Internet of Things devices. However, it can only protect the privacy of the input data, not the privacy of the user's trained model. The deep neural network model is also a core asset of its supplier, because training an effective model requires a large investment in data sets, resources, and expertise. In short, existing schemes either require time-consuming cryptographic operations or fail to protect the privacy of the trained model. How to achieve safe and efficient deep neural network inference while protecting both input data and model privacy is therefore an important issue.
Disclosure of Invention
In view of these problems, the invention aims to provide a deep neural network reasoning method with privacy protection: the user sends the data to be inferred and the trained model to edge servers, the edge servers process the computationally heavy and time-consuming linear layers, and the user only handles the computationally light nonlinear layers and the encryption and decryption operations, which saves the user's computation cost while guaranteeing the privacy of both the user data and the model.
In order to solve the technical problems, the embodiment of the invention provides the following scheme:
A deep neural network reasoning method with privacy protection comprises the following steps:
The client generates a key;
the client encrypts the input data matrix and the weight matrix of the trained deep neural network model with the key, sends the encrypted matrices to the first edge server and the second edge server, and stores the bias matrix of the deep neural network model locally;
The first edge server and the second edge server perform linear layer calculation on the input data matrix by utilizing the weight matrix of the received deep neural network model, and return results to the client;
The client verifies the returned result, if the result is correct, the client receives the result, and if the result is incorrect, the client refuses to receive the result;
for verifying the correct result, the client recovers the actual output result of the linear layer by using the locally stored secret key and the bias matrix;
And the client locally calculates a nonlinear layer, takes the calculation result as the input of the next linear layer, and loops the steps until the final reasoning result of the deep neural network model is obtained.
Preferably, the client generates the key specifically including:
For a trained deep neural network model comprising Q linear layers in total, the input data matrix of the i-th linear layer of the model (1 ≤ i ≤ Q) is denoted X_i, the weight matrix W_i, and the bias matrix B_i;
the key is generated with the KeyGen key generation algorithm: given a security parameter k, it outputs a random number matrix R_i and a random number c_i as the key, where each element of R_i is a k-bit random number used to blind the weight matrix W_i (R_i has the same size as W_i), and c_i is a k-bit random number used to blind the input data matrix X_i of the i-th layer.
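The KeyGen step described above can be sketched in NumPy. This is an illustrative sketch, not the patent's implementation: the function name `key_gen`, the use of `numpy.random.default_rng`, and drawing k-bit values as integers in [0, 2^k) are all assumptions made for demonstration.

```python
import numpy as np

def key_gen(k, weight_shape, rng=None):
    """KeyGen sketch: output a random matrix R_i (same shape as the weight
    matrix W_i, every element a k-bit random number) and a single k-bit
    random number c_i, which together form the layer key."""
    rng = rng if rng is not None else np.random.default_rng()
    high = 2 ** k                      # k-bit values lie in [0, 2^k)
    R = rng.integers(0, high, size=weight_shape)
    c = int(rng.integers(0, high))
    return R, c
```

In use, one key pair (R_i, c_i) would be generated per linear layer and kept on the client.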
Preferably, encrypting the input data matrix and the weight matrix of the trained deep neural network model specifically includes:
The input data matrix and the weight matrix are encrypted with the Input Encryption algorithm: given the random number matrix R_i, the random number c_i, the input data matrix X_i, and the weight matrix W_i, it outputs four matrices X_{i,a}, X_{i,b}, W_{i,a}, and W_{i,b};
The encryption process is as follows: first, a matrix C_i is constructed from the random number c_i, every element of C_i being c_i and C_i having the same size as X_i; to blind X_i, it is split into two matrices X_{i,a} and X_{i,b}, and the weight matrix W_i is then blinded into two matrices W_{i,a} and W_{i,b} with the random number matrix R_i; after encryption is complete, X_{i,a} and W_{i,a} are sent to the first edge server ES_A, and X_{i,b} and W_{i,b} are sent to the second edge server ES_B.
Preferably, in the encryption process, the two matrices X_{i,a} and X_{i,b} satisfy the following conditions:
X_i = X_{i,a} + X_{i,b};
C_i = X_{i,a} - X_{i,b};
solving these gives:
X_{i,a} = 1/2 (X_i + C_i);
X_{i,b} = 1/2 (X_i - C_i);
the weight matrix W_i is then blinded with the random number matrix R_i into two matrices W_{i,a} and W_{i,b}:
W_{i,a} = W_i + R_i;
W_{i,b} = W_i - R_i.
Preferably, the linear-layer calculation performed by the first and second edge servers on the input data matrix using the received weight matrix of the deep neural network model specifically comprises:
the first and second edge servers perform the linear-layer calculation with the Privacy-Preserving Computation algorithm: after the first edge server ES_A receives X_{i,a} and W_{i,a}, it computes their convolution to obtain the result S_{i,a}; after the second edge server ES_B receives X_{i,b} and W_{i,b}, it computes their convolution to obtain the result S_{i,b}; the algorithm outputs are S_{i,a} and S_{i,b}.
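The server-side step can be sketched as follows. As an assumption for brevity, the sketch uses a fully connected layer (a matrix product) rather than a convolution; both are linear in each argument, so the share arithmetic behaves identically. The name `server_compute` is illustrative.

```python
import numpy as np

def server_compute(X_share, W_share):
    """Privacy-Preserving Computation sketch: each edge server evaluates
    the linear layer on the blinded shares it received.  For a fully
    connected layer this is a matrix product; a convolution behaves the
    same way because both operations are bilinear."""
    return X_share @ W_share
```

Because S_{i,a} + S_{i,b} = X_i·W_i + C_i·R_i, neither server alone sees X_i or W_i, yet their two results suffice for the client to recover the true output.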
Preferably, the verification of the returned result by the client specifically includes:
The client verifies the returned result with the Verification algorithm: the client randomly selects the value at an arbitrary position of S_{i,a} or S_{i,b}, then computes the convolution value at the corresponding position using X_i, W_i, and the locally stored key, i.e. the random number matrix R_i and the random number c_i; the client compares whether the two values are equal; if not, the client refuses to accept the returned result; if they are equal, the next step proceeds.
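A sketch of this spot-check, again assuming a fully connected layer so that one output entry is a dot product of a row of the input share and a column of the weight share. The function name `verify_share` and the `share` flag distinguishing ES_A's and ES_B's results are assumptions for illustration.

```python
import numpy as np

def verify_share(S, X, W, C, R, share='a', rng=None):
    """Verification sketch: pick one random entry of a returned share S and
    recompute it locally from X, W and the key material (C built from c_i,
    and R_i).  ES_A's share uses (X+C)/2 and W+R; ES_B's uses the minus
    signs."""
    rng = rng if rng is not None else np.random.default_rng()
    p = int(rng.integers(S.shape[0]))
    q = int(rng.integers(S.shape[1]))
    sign = 1.0 if share == 'a' else -1.0
    expected = 0.5 * (X[p, :] + sign * C[p, :]) @ (W[:, q] + sign * R[:, q])
    return bool(np.isclose(S[p, q], expected))
```

Checking a single random position keeps the client's verification cost far below recomputing the whole layer, at the price of only probabilistic detection of a cheating server.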
Preferably, the client recovering the actual output result of the linear layer using the locally stored key and the bias matrix specifically comprises:
the client recovers the encrypted result with the Recovery algorithm, whose inputs are the results S_{i,a} and S_{i,b} returned by the first and second edge servers; the client first constructs the matrix C_i from the random number c_i, every element of C_i being c_i and C_i having the same size as X_i; the client then recovers the actual output result O_i using C_i, the locally stored random number matrix R_i, and the bias matrix B_i: O_i = S_{i,a} + S_{i,b} - C_i · R_i + B_i, where O_i is the actual output result of the i-th linear layer.
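The recovery identity can be sketched as below, for the fully connected case. Since S_{i,a} + S_{i,b} = X_i·W_i + C_i·R_i, subtracting C_i·R_i and adding the locally kept bias yields the true output X_i·W_i + B_i. The function name `recover` is an assumption.

```python
import numpy as np

def recover(S_a, S_b, c, R, B):
    """Recovery sketch: S_a + S_b equals X·W + C·R, so subtracting C·R and
    adding the bias B (never sent to the servers) gives O = X·W + B."""
    C = np.full((S_a.shape[0], R.shape[0]), c)   # C has the shape of X
    return S_a + S_b - C @ R + B
```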
Preferably, the input of the (i+1)-th linear layer is X_{i+1} = NF(O_i), where NF is the activation function of the nonlinear layer; the above algorithms are executed cyclically until the final reasoning result Res = NF(O_Q) of the deep neural network model is obtained.
Preferably, the deep neural network model comprises an input layer, a hidden layer and an output layer, wherein the hidden layer comprises a convolution layer, an activation layer, a pooling layer and a full connection layer; the convolution layer and the full connection layer are linear layers, and the activation layer and the pooling layer are nonlinear layers.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
1) Resource-constrained users can implement efficient deep neural network reasoning at low cost.
2) The inefficiency of cumbersome homomorphic encryption techniques and secure multiparty computing techniques is avoided.
3) The privacy of input data to be inferred by the user can be guaranteed, and the privacy of a deep neural network model trained by the user can be guaranteed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a deep neural network reasoning method with privacy protection provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a deep neural network reasoning system with privacy protection provided by an embodiment of the present invention;
fig. 3 is a schematic diagram of a basic structure of a hidden layer of a deep neural network model according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
The embodiment of the invention provides a deep neural network reasoning method with privacy protection, the flow of which is shown in fig. 1, and a system model related to the method is shown in fig. 2, wherein the system model comprises a client (a data and deep neural network model owner) and two outsourced edge servers (a first edge server and a second edge server).
The method comprises the following steps:
The client generates a key;
the client encrypts the input data matrix and the weight matrix of the trained deep neural network model with the key, sends the encrypted matrices to the first edge server and the second edge server, and stores the bias matrix of the deep neural network model locally;
The first edge server and the second edge server perform linear layer calculation on the input data matrix by utilizing the weight matrix of the received deep neural network model, and return results to the client;
The client verifies the returned result, if the result is correct, the client receives the result, and if the result is incorrect, the client refuses to receive the result;
for verifying the correct result, the client recovers the actual output result of the linear layer by using the locally stored secret key and the bias matrix;
And the client locally calculates a nonlinear layer, takes the calculation result as the input of the next linear layer, and loops the steps until the final reasoning result of the deep neural network model is obtained.
In the embodiment of the invention, the user at the client can send the data to be inferred and the trained model to the edge servers; the edge servers process the computationally heavy and time-consuming linear layers, while the user only handles the computationally light nonlinear layers and the encryption and decryption operations. The method thus saves the user's computation cost while guaranteeing the privacy of both the user data and the model.
In the embodiment of the invention, the deep neural network model comprises an input layer, a hidden layer and an output layer, wherein the hidden layer comprises a convolution layer, an activation layer, a pooling layer and a full connection layer, as shown in fig. 3; the convolution layer and the full connection layer are linear layers, and the activation layer and the pooling layer are nonlinear layers.
The function of the convolution layer is to extract features from the input data matrix, and it typically comprises multiple convolution kernels. A convolution operation multiplies the convolution kernel element-wise with the corresponding input values and sums the products. The convolution window slides from the top-left corner of the input data matrix to the bottom-right corner. The matrix obtained by convolving the original matrix is called a feature map.
Typically, each convolution layer is followed by an activation layer. The activation layer typically enhances the ability of the model to handle non-linearity issues by using an activation function. The main activation functions are sigmoid, tanh and ReLU functions.
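The three activation functions named above can be sketched directly; these are standard definitions, not patent-specific code.

```python
import numpy as np

def relu(x):
    """ReLU: pass positive values through, clamp negatives to zero."""
    return np.maximum(0.0, x)

def sigmoid(x):
    """Sigmoid: squash values into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    """Tanh: squash values into (-1, 1)."""
    return np.tanh(x)
```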
The pooling layer is mainly used to reduce the dimension of each feature map while retaining its most important information. Pooling generally comes in two modes, maximum pooling and average pooling, which differ only in how the values inside the pooling window are processed: maximum pooling takes the largest value in the window, while average pooling takes the mean of the values in the window.
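A compact sketch of both pooling modes for non-overlapping windows; the function name `pool2d` and the reshape-based implementation are illustrative assumptions.

```python
import numpy as np

def pool2d(fmap, size=2, mode='max'):
    """Non-overlapping 2-D pooling over size x size windows: 'max' keeps
    the largest value in each window, 'avg' the mean.  Both feature-map
    sides must be divisible by size."""
    h, w = fmap.shape
    # group the map into (h//size, w//size) blocks of shape (size, size)
    blocks = fmap.reshape(h // size, size, w // size, size)
    reduce = np.max if mode == 'max' else np.mean
    return reduce(blocks, axis=(1, 3))
```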
The fully connected layer acts as a "classifier" throughout the convolutional neural network. In practical use, the input data of the full-connection layer needs to be preprocessed into a vector form, and the calculation mode is similar to that of a convolution layer.
Deep neural networks are essentially a mapping from input to output that can learn a large number of mappings between input and output without requiring any precise mathematical expression between input and output.
As a specific embodiment of the present invention, assume there is already a trained deep neural network model containing a total of Q linear layers (convolutional layers and fully connected layers). The input data matrix of the i-th linear layer of the model is denoted X_i (1 ≤ i ≤ Q), the weight matrix W_i (1 ≤ i ≤ Q), and the bias matrix B_i (1 ≤ i ≤ Q). In the following description, the index i refers to the i-th linear layer.
For the i-th linear layer, the client first generates a key with the KeyGen key generation algorithm: given a security parameter k, it outputs a random number matrix R_i and a random number c_i as the key, where each element of R_i is a k-bit random number used to blind the weight matrix W_i (R_i has the same size as W_i), and c_i is a k-bit random number used to blind the input data matrix X_i of the i-th layer.
Then the Input Encryption algorithm is used to encrypt the input data matrix and the weight matrix: given the random number matrix R_i, the random number c_i, the input data matrix X_i, and the weight matrix W_i, it outputs four matrices X_{i,a}, X_{i,b}, W_{i,a}, and W_{i,b}.
The encryption process is as follows: first, a matrix C_i is constructed from the random number c_i, every element of C_i being c_i and C_i having the same size as X_i; to blind X_i, it is split into two matrices X_{i,a} and X_{i,b} satisfying the following conditions:
X_i = X_{i,a} + X_{i,b};
C_i = X_{i,a} - X_{i,b};
solving these gives:
X_{i,a} = 1/2 (X_i + C_i);
X_{i,b} = 1/2 (X_i - C_i);
the weight matrix W_i is then blinded with the random number matrix R_i into two matrices W_{i,a} and W_{i,b}:
W_{i,a} = W_i + R_i;
W_{i,b} = W_i - R_i.
After encryption is complete, X_{i,a} and W_{i,a} are sent to the first edge server ES_A, and X_{i,b} and W_{i,b} are sent to the second edge server ES_B.
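The Input Encryption step can be sketched as follows; the function name `encrypt_inputs` and the grouping of the outputs into per-server pairs are assumptions for illustration.

```python
import numpy as np

def encrypt_inputs(X, W, c, R):
    """Input Encryption sketch: build C (every element c, same shape as X),
    split X into shares X_a = (X + C)/2 and X_b = (X - C)/2, and blind W
    into W_a = W + R and W_b = W - R."""
    C = np.full(X.shape, float(c))
    X_a, X_b = 0.5 * (X + C), 0.5 * (X - C)
    W_a, W_b = W + R, W - R
    return (X_a, W_a), (X_b, W_b)   # first pair goes to ES_A, second to ES_B
```

Note that X_a + X_b reconstructs X and X_a - X_b reconstructs C, matching the two conditions above.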
After the first and second edge servers receive the encrypted data, they perform the linear-layer calculation on the input data matrix with the Privacy-Preserving Computation algorithm: after the first edge server ES_A receives X_{i,a} and W_{i,a}, it computes their convolution to obtain the result S_{i,a}; after the second edge server ES_B receives X_{i,b} and W_{i,b}, it computes their convolution to obtain the result S_{i,b}; the algorithm outputs are S_{i,a} and S_{i,b}.
After both edge servers finish their computation, the results are returned to the client. The client verifies the returned result with the Verification algorithm: the client randomly selects the value at an arbitrary position of S_{i,a} or S_{i,b}, then computes the convolution value at the corresponding position using X_i, W_i, and the locally stored key, i.e. the random number matrix R_i and the random number c_i; the client compares whether the two values are equal; if not, the client refuses to accept the returned result; if they are equal, the next step proceeds.
For a result that passes verification, the client recovers the encrypted result with the Recovery algorithm, whose inputs are the results S_{i,a} and S_{i,b} returned by the first and second edge servers; the client first constructs the matrix C_i from the random number c_i, every element of C_i being c_i and C_i having the same size as X_i; the client then recovers the actual output result O_i using C_i, the locally stored random number matrix R_i, and the bias matrix B_i: O_i = S_{i,a} + S_{i,b} - C_i · R_i + B_i, where O_i is the actual output result of the i-th linear layer.
The client computes the nonlinear layer locally and uses the result as the input of the next linear layer: the input of the (i+1)-th linear layer is X_{i+1} = NF(O_i), where NF is the activation function of the nonlinear layer. The above algorithms are executed cyclically until the final reasoning result Res = NF(O_Q) of the deep neural network model is obtained.
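The whole per-layer round trip described in this embodiment can be sketched end to end, with both edge servers simulated locally. As before, the sketch assumes a fully connected layer (matrix product) in place of a convolution, and the names `private_linear_layer` and `relu` (standing in for NF) are illustrative.

```python
import numpy as np

def private_linear_layer(X, W, B, k=16, rng=None):
    """One round of the protocol: KeyGen, Input Encryption, the two
    server-side share products, and Recovery of O = X.W + B."""
    rng = rng if rng is not None else np.random.default_rng()
    R = rng.integers(0, 2 ** k, size=W.shape).astype(float)  # KeyGen: R_i
    c = float(rng.integers(0, 2 ** k))                       # KeyGen: c_i
    C = np.full(X.shape, c)
    S_a = (0.5 * (X + C)) @ (W + R)   # computed by edge server ES_A
    S_b = (0.5 * (X - C)) @ (W - R)   # computed by edge server ES_B
    return S_a + S_b - C @ R + B      # Recovery on the client

def relu(x):
    """NF: the nonlinear layer, kept local to the client."""
    return np.maximum(0.0, x)
```

Chaining layers is then simply X_next = relu(private_linear_layer(X, W, B)), repeated until the final layer Q.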
In summary, the deep neural network reasoning method provided by the invention effectively uses edge servers to process the computationally heavy and time-consuming linear layers, while the user only handles the computationally light nonlinear layers and the encryption and decryption operations. Resource-constrained users can thus implement efficient deep neural network reasoning at low cost, while the privacy of both the user's input data and the deep neural network model is guaranteed.
The foregoing description covers preferred embodiments of the invention and is not intended to limit it; any modifications, equivalents, and improvements made within the spirit and principles of the invention are intended to fall within its scope.

Claims (6)

1. The deep neural network reasoning method with privacy protection is characterized by comprising the following steps of:
The client generates a key;
the client encrypts the input data matrix and the weight matrix of the trained deep neural network model with the key, sends the encrypted matrices to the first edge server and the second edge server, and stores the bias matrix of the deep neural network model locally;
The first edge server and the second edge server perform linear layer calculation on the input data matrix by utilizing the weight matrix of the received deep neural network model, and return results to the client;
The client verifies the returned result, if the result is correct, the client receives the result, and if the result is incorrect, the client refuses to receive the result;
for verifying the correct result, the client recovers the actual output result of the linear layer by using the locally stored secret key and the bias matrix;
The client locally calculates a nonlinear layer, takes the calculation result as the input of the next linear layer, and loops the steps until the final reasoning result of the deep neural network model is obtained;
The first edge server and the second edge server perform linear layer calculation on the input data matrix by using the weight matrix of the received deep neural network model specifically comprises the following steps:
the first and second edge servers perform the linear-layer calculation with the Privacy-Preserving Computation algorithm: after the first edge server ES_A receives X_{i,a} and W_{i,a}, it computes their convolution to obtain the result S_{i,a}; after the second edge server ES_B receives X_{i,b} and W_{i,b}, it computes their convolution to obtain the result S_{i,b}; the algorithm outputs are S_{i,a} and S_{i,b};
the client side verifying the returned result specifically comprises the following steps:
the client verifies the returned result with the Verification algorithm: the client randomly selects the value at an arbitrary position of S_{i,a} or S_{i,b}, then computes the convolution value at the corresponding position using X_i, W_i, and the locally stored key, i.e. the random number matrix R_i and the random number c_i; the client compares whether the two values are equal; if not, the client refuses to accept the returned result; if they are equal, the next step proceeds;
the client-side recovering the actual output result of the linear layer by using the locally stored secret key and the bias matrix specifically comprises the following steps:
the client recovers the encrypted result with the Recovery algorithm, whose inputs are the results S_{i,a} and S_{i,b} returned by the first and second edge servers; the client first constructs the matrix C_i from the random number c_i, every element of C_i being c_i and C_i having the same size as X_i; the client then recovers the actual output result O_i using C_i, the locally stored random number matrix R_i, and the bias matrix B_i: O_i = S_{i,a} + S_{i,b} - C_i · R_i + B_i, where O_i is the actual output result of the i-th linear layer;
X_{i,a} and X_{i,b} are the two matrices obtained by blinding the input data matrix X_i of the i-th linear layer, and W_{i,a} and W_{i,b} are the two matrices obtained by blinding the weight matrix W_i of the i-th linear layer.
2. The deep neural network reasoning method of claim 1, wherein the client generating a key specifically comprises:
For a trained deep neural network model comprising Q linear layers in total, the input data matrix of the i-th linear layer of the model (1 ≤ i ≤ Q) is denoted X_i, the weight matrix W_i, and the bias matrix B_i;
the key is generated with the KeyGen key generation algorithm: given a security parameter k, it outputs a random number matrix R_i and a random number c_i as the key, where each element of R_i is a k-bit random number used to blind the weight matrix W_i (R_i has the same size as W_i), and c_i is a k-bit random number used to blind the input data matrix X_i of the i-th layer.
3. The deep neural network reasoning method of claim 2, wherein encrypting the input data matrix and the weight matrix of the trained deep neural network model specifically comprises:
the input data matrix and the weight matrix are encrypted with the Input Encryption algorithm: given the random number matrix R_i, the random number c_i, the input data matrix X_i, and the weight matrix W_i, it outputs four matrices X_{i,a}, X_{i,b}, W_{i,a}, and W_{i,b};
the encryption process is as follows: first, a matrix C_i is constructed from the random number c_i, every element of C_i being c_i and C_i having the same size as X_i; to blind X_i, it is split into two matrices X_{i,a} and X_{i,b}, and the weight matrix W_i is then blinded into two matrices W_{i,a} and W_{i,b} with the random number matrix R_i; after encryption is complete, X_{i,a} and W_{i,a} are sent to the first edge server ES_A, and X_{i,b} and W_{i,b} are sent to the second edge server ES_B.
4. The deep neural network reasoning method according to claim 3, wherein during encryption the two matrices X_i,a and X_i,b satisfy the following conditions:
X_i = X_i,a + X_i,b;
C_i = X_i,a - X_i,b;
which simplify to:
X_i,a = 1/2 (X_i + C_i);
X_i,b = 1/2 (X_i - C_i);
The weight matrix W_i is then blinded with the random number matrix R_i into the two matrices W_i,a and W_i,b:
W_i,a = W_i + R_i;
W_i,b = W_i - R_i.
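The splitting and blinding equations of claims 3 and 4 can be checked with a short sketch (a minimal illustration; function and variable names are assumed, not taken from the patent):

```python
import numpy as np

def input_encrypt(X_i, W_i, R_i, c_i):
    """Sketch of the Input Encryption algorithm: split X_i into
    X_i,a and X_i,b using the constant matrix C_i (every element
    equals c_i), and blind W_i into W_i,a and W_i,b with R_i."""
    C_i = np.full_like(X_i, c_i, dtype=float)  # same size as X_i
    X_a = 0.5 * (X_i + C_i)   # X_i,a = 1/2 (X_i + C_i)
    X_b = 0.5 * (X_i - C_i)   # X_i,b = 1/2 (X_i - C_i)
    W_a = W_i + R_i           # W_i,a = W_i + R_i
    W_b = W_i - R_i           # W_i,b = W_i - R_i
    return (X_a, W_a), (X_b, W_b)  # shares for ES_A and ES_B
```

By construction X_i,a + X_i,b = X_i and X_i,a - X_i,b = C_i, so neither edge server alone can reconstruct X_i or W_i.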
5. The deep neural network reasoning method of claim 1, wherein the input of the (i+1)-th linear layer is X_i+1 = NF(O_i), where NF is the activation function of the nonlinear layer; the above algorithms are executed cyclically until the final reasoning result Res = NF(O_Q) of the deep neural network model is obtained.
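Claim 1 itself is not reproduced in full in this excerpt, but the blinding algebra implies X_i,a·W_i,a + X_i,b·W_i,b = X_i·W_i + C_i·R_i, so a client holding C_i and R_i can recover the linear-layer output from the two servers' partial results. A hedged end-to-end sketch of one inference round (the recombination step is inferred from that algebra, ReLU stands in for NF, and all names and shapes are assumptions):

```python
import numpy as np

rng = rng_demo = np.random.default_rng(0)
X = rng.standard_normal((2, 4))            # layer input X_i
W = rng.standard_normal((4, 3))            # layer weights W_i
B = rng.standard_normal((2, 3))            # layer bias B_i
R = rng.integers(1, 2**8, size=W.shape).astype(float)
c = float(rng.integers(1, 2**8))
C = np.full_like(X, c)                     # constant matrix C_i

# Client-side blinding (claims 3-4)
X_a, X_b = 0.5 * (X + C), 0.5 * (X - C)
W_a, W_b = W + R, W - R

O_a = X_a @ W_a            # computed by edge server ES_A
O_b = X_b @ W_b            # computed by edge server ES_B

# X_a W_a + X_b W_b = X W + C R, so the client removes C @ R
# and adds the bias to obtain O_i = X_i W_i + B_i:
O = O_a + O_b - C @ R + B
X_next = np.maximum(O, 0)  # X_{i+1} = NF(O_i), ReLU as example NF

assert np.allclose(O, X @ W + B)
```

Repeating this round over all Q layers and applying NF to O_Q would yield the final reasoning result Res described in claim 5.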
6. The deep neural network reasoning method of any of claims 1-5, wherein the deep neural network model comprises an input layer, a hidden layer and an output layer, the hidden layer comprising a convolutional layer, an activation layer, a pooling layer and a fully-connected layer; the convolutional layer and the fully-connected layer are linear layers, and the activation layer and the pooling layer are nonlinear layers.
CN202111472835.3A 2021-12-03 2021-12-03 Deep neural network reasoning method with privacy protection Active CN114003961B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111472835.3A CN114003961B (en) 2021-12-03 2021-12-03 Deep neural network reasoning method with privacy protection


Publications (2)

Publication Number Publication Date
CN114003961A CN114003961A (en) 2022-02-01
CN114003961B true CN114003961B (en) 2024-04-26

Family

ID=79931306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111472835.3A Active CN114003961B (en) 2021-12-03 2021-12-03 Deep neural network reasoning method with privacy protection

Country Status (1)

Country Link
CN (1) CN114003961B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115001748B (en) * 2022-04-29 2023-11-03 北京奇艺世纪科技有限公司 Model processing method and device and computer readable storage medium
CN115345307B (en) * 2022-10-17 2023-02-14 杭州世平信息科技有限公司 Secure convolution neural network reasoning method and system on ciphertext image

Citations (5)

Publication number Priority date Publication date Assignee Title
CN108259158A (en) * 2018-01-11 2018-07-06 西安电子科技大学 Efficient and secret protection individual layer perceptron learning method under a kind of cloud computing environment
CN108647525A (en) * 2018-05-09 2018-10-12 西安电子科技大学 The secret protection single layer perceptron batch training method that can verify that
CN109194507A (en) * 2018-08-24 2019-01-11 曲阜师范大学 The protection privacy neural net prediction method of non-interactive type
CN111324870A (en) * 2020-01-22 2020-06-23 武汉大学 Outsourcing convolution neural network privacy protection system based on safe two-party calculation
CN112152806A (en) * 2020-09-25 2020-12-29 青岛大学 Cloud-assisted image identification method, device and equipment supporting privacy protection

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN110598438B (en) * 2019-07-19 2023-05-30 福州大学 Cloud protection outsourcing data privacy protection system based on deep convolutional neural network

Non-Patent Citations (1)

Title
Forward propagation method of convolutional neural network based on homomorphic encryption; Xie Sijiang; Xu Shicong; Zhang Le; Computer Applications and Software; 2020-02-12 (02); full text *

Also Published As

Publication number Publication date
CN114003961A (en) 2022-02-01

Similar Documents

Publication Publication Date Title
US11301571B2 (en) Neural-network training using secure data processing
CN109194507B (en) Non-interactive privacy protection neural network prediction method
CN114003961B (en) Deep neural network reasoning method with privacy protection
US11222138B2 (en) Privacy-preserving machine learning in the three-server model
CN114417414A (en) Privacy protection method based on edge calculation
Mendis et al. A blockchain-powered decentralized and secure computing paradigm
CN111107076A (en) Safe and efficient matrix multiplication outsourcing method
Tian et al. Low-latency privacy-preserving outsourcing of deep neural network inference
Li et al. Secure prediction of neural network in the cloud
Al Shahrani et al. An internet of things (IoT)-based optimization to enhance security in healthcare applications
CN117527223B (en) Distributed decryption method and system for quantum-password-resistant grid
CN117395067B (en) User data privacy protection system and method for Bayesian robust federal learning
CN105119929A (en) Safe mode index outsourcing method and system under single malicious cloud server
CN116684070A (en) Anti-quantum key encapsulation method and system for TLS protocol
CN116628504A (en) Trusted model training method based on federal learning
CN116132017A (en) Method and system for accelerating privacy protection machine learning reasoning
CN115550073A (en) Construction method capable of monitoring stealth address
CN115130568A (en) Longitudinal federated Softmax regression method and system supporting multiple parties
Huang et al. Encrypted domain secret medical-image sharing with secure outsourcing computation in iot environment
Zhou et al. Efficient privacy-preserving outsourced discrete wavelet transform in the encrypted domain
Rath et al. Privacy-Preserving Outsourcing Algorithm for Solving Large Systems of Linear Equations
CN114065193B (en) Deep learning security method applied to image task in edge cloud environment
Li et al. A secure and verifiable outsourcing scheme for assisting mobile device training machine learning model
CN113343277A (en) Safe and efficient method for entrusting private data category prediction
Cheng et al. A Secure and Verifiable Outsourcing Scheme for Assisting Mobile Device Training Machine Learning Model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant