CN112766495A - Deep learning model privacy protection method and device based on hybrid environment

Info

Publication number: CN112766495A
Authority: CN (China)
Prior art keywords: neural network, result, linear, TEE, network layer
Legal status: Pending
Application number: CN202110104463.2A
Other languages: Chinese (zh)
Inventors: 曹佳炯, 丁菁汀
Current and original assignee: Alipay Hangzhou Information Technology Co Ltd
Priority and filing date: 2021-01-26 (application CN202110104463.2A)
Publication date: 2021-05-07 (CN112766495A)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Abstract

The embodiments of this specification provide a deep learning model privacy protection method and device based on a hybrid environment. The hybrid environment comprises a TEE and a non-TEE, and the deep learning model comprises N sequentially connected neural network layers, including a first neural network layer. The method is executed in the TEE and comprises sequentially performing computation processing for the N neural network layers, wherein the computation processing for the first neural network layer comprises: acquiring a first parameter set for linear operations and a second parameter set for nonlinear operations, obtained by dividing the network parameters of the layer; acquiring input data of the neural network layer; homomorphically encrypting the first parameter set and the input data, and sending the encryption result to the non-TEE; obtaining, from the non-TEE, a first linear computation result produced by a homomorphic linear operation based on the encryption result; and determining an output result of the first neural network layer according to at least the first linear computation result and the second parameter set.

Description

Deep learning model privacy protection method and device based on hybrid environment
Technical Field
One or more embodiments of the present disclosure relate to the fields of machine learning and data security, and in particular to a method and an apparatus for protecting the privacy of a deep learning model based on a hybrid environment.
Background
Artificial intelligence technology with machine learning models at its core has developed and been deployed rapidly in recent years, for example in face-scan payment, intelligent auditing, intelligent travel, and city-brain systems. Meanwhile, as the core of an artificial intelligence system, the privacy and security of the machine learning model itself have received wide attention. Once a machine learning model is cracked or stolen, the security of the overall system is compromised, which can trigger a chain reaction and a series of further security problems.
Therefore, privacy protection of machine learning models is an urgent problem to be solved.
Disclosure of Invention
The embodiments in this specification aim to provide a more effective privacy protection method for deep learning models, remedying the deficiencies of the prior art.
According to a first aspect, a deep learning model privacy protection method based on a hybrid environment is provided, the hybrid environment comprises a trusted execution environment TEE and an untrusted execution environment, the deep learning model comprises N sequentially connected neural network layers including a first neural network layer, the method is executed in the TEE, the method comprises sequentially performing computing processing on the N neural network layers, wherein the computing processing for the first neural network layer comprises:
acquiring a first parameter set and a second parameter set obtained by dividing network parameters of the first neural network layer, wherein the first parameter set is used for linear operation, and the second parameter set is used for nonlinear operation;
acquiring input data of a first neural network layer;
homomorphically encrypting the first parameter set and the input data, and sending an encryption result to an untrusted execution environment;
obtaining, from the untrusted execution environment, a first linear computation result produced by a homomorphic linear operation based on the encryption result;
and determining an output result of the first neural network layer according to at least the first linear calculation result and the second parameter set.
In one embodiment, the sending the encrypted result to the untrusted execution environment comprises:
compressing the encryption result to obtain first compressed data, and sending the first compressed data to an untrusted execution environment;
obtaining, from the untrusted execution environment, a first linear computation result produced by a homomorphic linear operation based on the encryption result, comprising:
and receiving second compressed data from the untrusted execution environment, decompressing the second compressed data, and obtaining a first linear calculation result of homomorphic linear operation based on the encryption result.
In one embodiment, the compressing is lossless compression.
In one embodiment, the method further comprises, prior to performing computational processing for the first neural network layer,
dividing the network parameters of the N neural network layers into a parameter part for linear operation and a parameter part for nonlinear operation respectively to obtain parameter division results;
the obtaining of the first parameter set and the second parameter set obtained by dividing the network parameters of the first neural network layer includes reading the first parameter set and the second parameter set obtained by dividing the first neural network layer from the parameter division result.
In one embodiment, the obtaining a first parameter set and a second parameter set obtained by dividing the network parameters of the first neural network layer includes:
acquiring network parameters of the first neural network layer;
the network parameters are divided into a first set of parameters for linear operations and a second set of parameters for non-linear operations.
In one embodiment, the first set of parameters includes one or more of a weight parameter, a bias parameter, and the numbers of input and output neurons.
In one embodiment, the second set of parameters includes parameters associated with an activation function.
In one embodiment, the method further includes, after performing calculation processing on the N neural network layers in sequence, using an output result of a last layer of the N neural network layers as an output result of the deep learning model.
According to a second aspect, a deep learning model privacy protection method based on a hybrid environment is provided, the hybrid environment comprises a trusted execution environment TEE and an untrusted execution environment, the deep learning model comprises N sequentially connected neural network layers, the N neural network layers comprise a first neural network layer, the method is executed in the untrusted execution environment, and the method comprises:
obtaining a first encryption result from the TEE; the first encryption result is obtained by homomorphic encryption based on a first parameter set used for linear operation in a first neural network layer and input data;
performing homomorphic linear operation on the first encryption result to obtain a first linear calculation result;
and sending the first linear calculation result to the TEE, so that the TEE determines an output result of the first neural network layer according to the first linear calculation result and a second parameter set used for nonlinear operation in the first neural network layer.
In one embodiment, obtaining a first encryption result from the TEE includes:
receiving first compressed data from the TEE, and decompressing the first compressed data to obtain a first encryption result;
the sending the first linear computation result to the TEE includes:
and compressing the first linear calculation result to obtain second compressed data, and sending the second compressed data to the TEE.
In one embodiment, the homomorphic linear operations include homomorphic addition operations and/or homomorphic multiplication operations.
In one embodiment, the method further comprises, before sending the first linear computation result to the TEE, performing a dimension check on the first linear computation result to determine whether it has a predetermined dimension, and determining whether to send the first linear computation result to the TEE according to the check result.
According to a third aspect, a deep learning model privacy protection device based on a hybrid environment is provided, the hybrid environment comprises a Trusted Execution Environment (TEE) and an untrusted execution environment, the deep learning model comprises N sequentially connected neural network layers including a first neural network layer, the device is implemented in the TEE, the device comprises,
a calculation processing unit configured to sequentially perform calculation processing on the N neural network layers, and including, for the calculation processing of the first neural network layer:
a dividing result obtaining subunit configured to obtain a first parameter set and a second parameter set obtained by dividing the network parameters of the first neural network layer, where the first parameter set is used for linear operation, and the second parameter set is used for nonlinear operation;
an input data acquisition subunit configured to acquire input data of the first neural network layer;
the sending subunit is configured to perform homomorphic encryption on the first parameter set and the input data, and send an encryption result to an untrusted execution environment;
a linear calculation result acquisition subunit configured to acquire, from the untrusted execution environment, a first linear computation result produced by a homomorphic linear operation based on the encryption result;
and the layer output determining subunit is configured to determine an output result of the first neural network layer according to at least the first linear calculation result and the second parameter set.
In one embodiment, the transmitting subunit is further configured to,
compressing the encryption result to obtain first compressed data, and sending the first compressed data to an untrusted execution environment;
and the linear calculation result acquisition subunit is further configured to receive second compressed data from the untrusted execution environment, and decompress the second compressed data to obtain a first linear calculation result of performing homomorphic linear operation based on the encryption result.
In one embodiment, the compressing is lossless compression.
In one embodiment, the computing processing unit further comprises:
the parameter dividing subunit is configured to divide the network parameters of the N neural network layers into a parameter part for linear operation and a parameter part for nonlinear operation respectively before calculation processing is performed on a first neural network layer, so as to obtain a parameter dividing result;
and the obtaining division result subunit is further configured to read the first parameter set and the second parameter set obtained by dividing the first neural network layer from the parameter division result.
In one embodiment, the obtaining the division result subunit is further configured to:
acquiring network parameters of the first neural network layer;
the network parameters are divided into a first set of parameters for linear operations and a second set of parameters for non-linear operations.
In one embodiment, the first set of parameters includes one or more of a weight parameter, a bias parameter, and the numbers of input and output neurons.
In one embodiment, the second set of parameters includes parameters associated with an activation function.
In one embodiment, the apparatus further comprises,
and the model output result determining unit is configured to take the output result of the last layer of the N neural network layers as the output result of the deep learning model after sequentially performing calculation processing on the N neural network layers.
According to a fourth aspect, there is provided a deep learning model privacy protection apparatus based on a hybrid environment, the hybrid environment including a trusted execution environment TEE and an untrusted execution environment, the deep learning model including N neural network layers connected in sequence, the N neural network layers including a first neural network layer, the apparatus being implemented in the untrusted execution environment, the apparatus including:
an acquisition unit configured to acquire a first encryption result from the TEE; the first encryption result is obtained by homomorphic encryption based on a first parameter set used for linear operation in a first neural network layer and input data;
the linear calculation unit is configured to perform homomorphic linear operation on the first encryption result to obtain a first linear calculation result;
and the result sending unit is configured to send the first linear calculation result to the TEE, so that the TEE determines an output result of the first neural network layer according to the first linear calculation result and a second parameter set used for nonlinear operation in the first neural network layer.
In one embodiment, the obtaining unit is further configured to,
receiving first compressed data from the TEE, and decompressing the first compressed data to obtain a first encryption result;
and the result sending unit is further configured to compress the first linear calculation result to obtain second compressed data and send the second compressed data to the TEE.
In one embodiment, the homomorphic linear operations include homomorphic addition operations and/or homomorphic multiplication operations.
In one embodiment, the apparatus further comprises,
and the dimension checking unit is configured to perform a dimension check on the first linear calculation result before it is sent to the TEE, determine whether it has a predetermined dimension, and determine whether to send the first linear calculation result to the TEE according to the check result.
According to a fifth aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first or second aspect.
According to a sixth aspect, there is provided a computing device comprising a memory and a processor, wherein the memory has stored therein executable code, and the processor, when executing the executable code, implements the method of the first or second aspect.
By using the methods, devices, computing equipment and storage media of one or more of the above aspects, the efficiency of model execution can be improved more effectively without affecting the privacy security of the deep learning model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic diagram illustrating a method for deep learning model privacy protection based on a hybrid environment according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow diagram of a method for deep learning model privacy protection based on a hybrid environment in accordance with an embodiment of the present description;
FIG. 3 illustrates a flow diagram of yet another hybrid environment-based deep learning model privacy preserving method in accordance with an embodiment of the present description;
FIG. 4 is a block diagram of a hybrid environment based deep learning model privacy preserving apparatus according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of another hybrid environment based deep learning model privacy preserving apparatus according to an embodiment of the present disclosure.
Detailed Description
The solution provided by the present specification will be described below with reference to the accompanying drawings.
As mentioned above, machine learning models are widely applied in many fields. If a machine learning model itself is cracked or stolen, the overall security of the application system is compromised, which can trigger a chain reaction and a series of further security problems. Privacy protection of machine learning models is therefore an urgent problem to be solved.
At present, two types of methods are mainly used to protect model privacy. The first type encrypts the model itself; common techniques are model encryption and model obfuscation. With this approach, an attacker cannot learn the specific structure and parameters of the model even after obtaining access to it. However, such methods generally use fixed encryption algorithms, so the security level is not high enough and they are vulnerable to brute-force cracking. The second type runs the model inside a Trusted Execution Environment (TEE), whose isolation itself ensures the security of the model. Although the security of this approach is guaranteed, the efficiency of model execution drops sharply: the running time can be more than ten times that of running in an ordinary external environment outside the TEE.
To resolve this contradiction between the privacy security of machine learning models and their execution efficiency, the inventors propose, in the embodiments of this specification, a deep learning model privacy protection method and device based on a hybrid environment. The inventors found that performing the linear operations of the deep learning model externally does not affect the security of the encryption scheme of the TEE environment. Exploiting this property, the linear-operation part of the machine learning model is placed in the external, non-TEE environment, which raises the model's running speed, while the nonlinear-operation part remains inside the TEE. In this way, the execution efficiency of the model is improved without affecting its privacy security.
The basic idea of the method is further explained below.
Fig. 1 is a schematic diagram illustrating the principle of a deep learning model privacy protection method based on a hybrid environment according to an embodiment of this specification. As shown in Fig. 1, the deep learning model comprises N sequentially connected neural network layers. The method loads the deep learning model in the trusted execution environment TEE and performs computation for the N neural network layers in their connection order. The computation for any one neural network layer (e.g., the first neural network layer in Fig. 1) follows this process. First, the parameters of the layer are divided, according to whether they participate in the linear computation, into parameters W1 for linear operations and parameters W2 for nonlinear operations. Then, the input data D1 of the layer and the linear-operation parameters W1 are homomorphically encrypted and sent to the untrusted execution environment for linear computation. A homomorphic encryption algorithm has the computational property that operating on data after encryption gives exactly the same result as encrypting after operating on the plaintext. Current homomorphic encryption algorithms generally possess this property only for linear operations, i.e., they support only linear homomorphic computation. Using this property, the untrusted execution environment can perform homomorphic linear computation on the ciphertext sent by the TEE, producing a linear computation result Enc(D2) in encrypted form. After obtaining Enc(D2), the untrusted execution environment sends it back to the TEE, where it is decrypted into the plaintext linear computation result D2, and the output result of the first neural network layer is obtained from the decrypted linear computation result and the nonlinear-operation parameter set.
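To make the data flow concrete, the following minimal sketch walks through this per-layer loop in Python. It is an illustration only: the helper names (he_encrypt, he_decrypt, untrusted_linear, run_layer_in_tee) are hypothetical rather than taken from the patent, and a transparent Ciphertext wrapper stands in for a real homomorphic scheme so that the control flow can be executed end to end.

```python
import numpy as np

class Ciphertext:
    """Toy stand-in for a homomorphic ciphertext. A real system would use an
    actual linearly homomorphic encryption scheme; here the value is merely
    wrapped so the untrusted side handles it opaquely."""
    def __init__(self, value):
        self._v = np.asarray(value, dtype=float)

def he_encrypt(x):            # hypothetical TEE-side encryption of D1 / W1
    return Ciphertext(x)

def he_decrypt(c):            # hypothetical TEE-side decryption of Enc(D2)
    return c._v

def untrusted_linear(enc_x, enc_w, enc_b):
    """Runs OUTSIDE the TEE: the homomorphic linear operation, producing the
    encrypted linear result Enc(D2) = Enc(D1 @ W1 + b1)."""
    return Ciphertext(enc_x._v @ enc_w._v + enc_b._v)

def run_layer_in_tee(x, layer):
    w1, b1 = layer["weight"], layer["bias"]   # first parameter set (linear, W1)
    act = layer["activation"]                 # second parameter set (nonlinear, W2)
    enc = (he_encrypt(x), he_encrypt(w1), he_encrypt(b1))  # encrypt, send out
    enc_d2 = untrusted_linear(*enc)           # linear part in the non-TEE environment
    d2 = he_decrypt(enc_d2)                   # back in the TEE: decrypt to D2
    return act(d2)                            # nonlinear part stays inside the TEE

relu = lambda v: np.maximum(v, 0.0)
layers = [
    {"weight": np.random.randn(4, 8), "bias": np.zeros(8), "activation": relu},
    {"weight": np.random.randn(8, 2), "bias": np.zeros(2), "activation": relu},
]
data = np.random.randn(1, 4)                  # input data of the model
for layer in layers:                          # process the N layers in order
    data = run_layer_in_tee(data, layer)
print(data)                                   # output of the last layer = model output
```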
Computing each neural network layer of the deep learning model in this way, on the one hand, follows the TEE's encryption mechanism for computation and data exchange, which guarantees the feasibility and security of the model's computation process. On the other hand, it exploits the richer computing resources and higher computing efficiency of the untrusted execution environment relative to the TEE environment, greatly improving the execution efficiency of the whole deep learning model.
The specific process of the method is further described below.
Fig. 2 illustrates a flow diagram of a deep learning model privacy protection method based on a hybrid environment according to an embodiment of this specification. The hybrid environment comprises a trusted execution environment TEE and an untrusted execution environment, and the deep learning model comprises N sequentially connected neural network layers, including a first neural network layer. The method is executed in the TEE and comprises at least the following steps:
in step 21, computation processing is performed for the N neural network layers in sequence; that is, the input data of each network layer is processed with that layer's network parameters to obtain the layer's output data.
In this step, a deep learning model is loaded in the trusted execution environment TEE, and the N neural network layers included in the model are sequentially subjected to calculation processing according to the connection order of the N neural network layers. In one embodiment, the entire deep learning model may be loaded at once in the TEE. In another embodiment, the deep learning model may also be loaded in the TEE multiple times according to a predetermined rule. For example, in one example, the N neural network layers may be loaded multiple times in the order of their connections, with one or more of the neural network layers being loaded at a time.
A Trusted Execution Environment (TEE) is a secure area in the main processor (CPU) that acts as an isolated execution environment, ensuring that internally loaded code and data are protected with respect to confidentiality and integrity. Through this isolation, the TEE guarantees the integrity of the applications executing within it and the confidentiality of their resources. In other words, the TEE provides trusted applications running on the device with an execution space of higher security than the operating system (OS).
In different embodiments, the TEE may be implemented on different host processors, each with its own implementation scheme: for example, the SGX scheme implements a TEE on Intel CPUs, and the TrustZone scheme implements a TEE on ARM CPUs. This specification does not limit the particular implementation of the TEE.
In the TEE environment described above, the processing for any first neural network layer among the N neural network layers follows steps 211 to 215, which specifically include:
in step 211, a first parameter set and a second parameter set obtained by dividing the network parameters of the first neural network layer are obtained, where the first parameter set is used for linear operation, and the second parameter set is used for nonlinear operation.
According to an embodiment, before performing calculation processing on the N neural network layers, the network parameters of the N neural network layers are divided into a parameter part for linear operation and a parameter part for nonlinear operation in advance, so as to obtain a parameter division result. In the process of sequentially performing calculation processing on each neural network layer, for each current neural network layer to be processed currently, the linear parameter part and the nonlinear parameter part corresponding to each current neural network layer can be read from the parameter division result. Accordingly, when the first neural network layer is taken as the current neural network layer, in this step 211, the first parameter set and the second parameter set obtained by dividing the first neural network layer may be read from the pre-formed parameter division result.
According to another embodiment, the parameter division can also be performed when the calculation processing is performed for the first neural network layer. Specifically, the step 211 may include obtaining a network parameter of a first neural network layer; the network parameters are divided into a first set of parameters for linear operations and a second set of parameters for non-linear operations.
It can be seen that the two embodiments differ mainly in whether the parameters are divided at the time each neural network layer is computed or before computation of the layers begins. Either approach, as well as a mixture of the two (some layers divided before computation and some divided during computation), falls within the protection scope of the present invention.
In the computation of a neural network layer, the parameters involved in the linear operations cover most kinds of layer parameters. Thus, in one embodiment, the first parameter set may include one or more of the weight parameters, the bias parameters, and the numbers of input and output neurons.
In the computation of a neural network layer, the main factor associated with the nonlinear operation is the set of parameters related to the activation function, for example a parameter specifying the specific kind of activation function of the neurons. Thus, in one embodiment, the second parameter set may include parameters related to the activation function. In one specific example, the activation function may be the ReLU function; in another specific example, it may be the Sigmoid function.
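As an illustration of the division itself, the sketch below assumes the layer parameters arrive as a Python dict; the key names are hypothetical, chosen only to mirror the weight/bias/neuron-count examples and the activation parameter named above.

```python
def split_layer_params(layer_params):
    """Divide one layer's parameters into a first set (linear operations)
    and a second set (nonlinear operations, e.g. the activation kind)."""
    linear_keys = {"weight", "bias", "in_features", "out_features"}
    first_set = {k: v for k, v in layer_params.items() if k in linear_keys}
    second_set = {k: v for k, v in layer_params.items() if k not in linear_keys}
    return first_set, second_set

first, second = split_layer_params({
    "weight": [[0.5, -1.0]], "bias": [0.1],
    "in_features": 1, "out_features": 2,
    "activation": "relu",
})
# first  -> sent to the untrusted environment after homomorphic encryption
# second -> kept inside the TEE for the nonlinear computation
```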
In step 212, the input data of the first neural network layer is acquired.
Depending on the position of the first neural network layer among the N neural network layers, its input data may be the input data of the deep learning model or the output data of the preceding neural network layer in the connection order. In one example, the first neural network layer is the initial layer of the N neural network layers, so its input data is the input data of the deep learning model; this may be the feature data of a business sample to be processed, where the business sample may be, for example, a picture, text, or audio, or a business object such as a user, a merchant, or a commodity. In another example, the first neural network layer is a layer after the initial layer, and its input data is the output data of the preceding neural network layer.
In step 213, the first set of parameters and the input data are homomorphically encrypted, and the encrypted result is sent to the untrusted execution environment.
As discussed above, the first parameter set of the neural network layer relates to the linear computation, which also involves the layer's input data. Therefore, in this step, the first parameter set and the input data are homomorphically encrypted, and the encryption result is sent to the untrusted execution environment so that the linear computation can be performed there in the subsequent steps.
The TEE environment performs encryption/decryption whenever it exchanges data with the external environment (the untrusted execution environment). To allow the untrusted execution environment to operate directly on the ciphertext, in the embodiments of this specification the TEE uses a homomorphic encryption algorithm to encrypt the first parameter set and the input data. A homomorphic encryption algorithm has the following operational properties:
Enc(x)+Enc(y)=Enc(x+y) (1)
Enc(x)*Enc(y)=Enc(x*y) (2)
wherein, Enc () is a homomorphic encryption function, and x and y are data to be encrypted.
It should be noted that the "+" and "*" in formulas (1) and (2) may also denote algorithms, i.e., procedures that, through a finite number of operations on Enc(x) and Enc(y), obtain the computation results Enc(x+y) and Enc(x*y), respectively.
As can be seen from formulas (1) and (2), performing linear operations (including addition and multiplication) after homomorphic encryption yields exactly the same result as performing homomorphic encryption after the linear operations. Encrypted data can therefore be operated on linearly in the external environment (the untrusted execution environment) without decryption, which preserves the security of the private data in the TEE environment while guaranteeing the correctness of the computation performed externally.
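For concreteness, these properties can be checked with an off-the-shelf linearly homomorphic scheme. The patent does not name a specific algorithm; the sketch below assumes the third-party python-paillier package (phe) purely as an example. Note that Paillier offers ciphertext-plus-ciphertext addition and ciphertext-times-plaintext multiplication, which suffices for linear layer computations; realizing formula (2) with both factors encrypted would require a scheme supporting ciphertext-ciphertext multiplication.

```python
from phe import paillier  # pip install phe

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

x, y = 3.5, 2.25
enc_x, enc_y = public_key.encrypt(x), public_key.encrypt(y)

# Property (1): Enc(x) + Enc(y) decrypts to x + y
assert abs(private_key.decrypt(enc_x + enc_y) - (x + y)) < 1e-9

# Linear combination with plaintext weight and bias, as in w * x + b
w, b = 4.0, 0.5
assert abs(private_key.decrypt(enc_x * w + b) - (w * x + b)) < 1e-9
```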
To reduce the data transmission time between the TEE and the external environment and improve transmission efficiency, in one embodiment the encryption result may be further compressed to obtain first compressed data, which is then sent to the untrusted execution environment. To avoid any damage to the data characteristics of the compressed object, in one embodiment the compression may be lossless; in one example, the lossless compression may be ZIP compression.
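A minimal sketch of this transport step, assuming the encryption result has already been serialized to bytes (the serialization format is not specified in the patent); Python's standard zlib module provides the lossless DEFLATE compression used by the ZIP family.

```python
import zlib

def pack_for_untrusted_env(ciphertext_bytes: bytes) -> bytes:
    """TEE side: losslessly compress the serialized encryption result."""
    return zlib.compress(ciphertext_bytes, 6)

def unpack_in_untrusted_env(first_compressed_data: bytes) -> bytes:
    """Untrusted side: decompress to recover the exact ciphertext bytes."""
    return zlib.decompress(first_compressed_data)

payload = b"\x01\x02\x03\x04" * 1000     # stand-in for serialized ciphertext
wire = pack_for_untrusted_env(payload)
assert unpack_in_untrusted_env(wire) == payload   # lossless round trip
```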
At step 214, a first linear computation result that performs a homomorphic linear operation based on the cryptographic result is obtained from the untrusted execution environment.
In this step, the first linear computation result obtained from the untrusted execution environment is the result of performing, in the untrusted execution environment, a homomorphic linear operation on the encryption result that the TEE sent out in step 213. As discussed above, this computation conforms to the operational properties of homomorphic linear operations, so the correctness of the result obtained from the untrusted execution environment is guaranteed.
Similarly, to reduce data transmission time, the external environment may also compress the data it sends to the TEE. Thus, in one embodiment, second compressed data may be received from the untrusted execution environment and decompressed to obtain the first linear computation result of the homomorphic linear operation on the encryption result. In one embodiment, the second compressed data may be generated by lossless compression; in one example, the lossless compression may be ZIP compression.
At step 215, an output of the first neural network layer is determined based at least on the first linear computation and the second set of parameters.
In this step, a nonlinear calculation may be performed according to the first linear calculation result and the second parameter set, so as to determine an output result of the first neural network layer.
Since the first linear computation result is homomorphically encrypted data, in one embodiment it may be decrypted with the key corresponding to the homomorphic encryption algorithm used earlier, yielding a second linear computation result, i.e., the plaintext linear result determined by the layer's input data and its linear-operation parameters. Nonlinear computation is then performed on the second linear computation result with the second parameter set to determine the output result of the first neural network layer. In one example, the second parameter set includes a parameter specifying the activation function of the layer's neurons, and the output result of the first neural network layer is determined by applying that activation function to the second linear computation result.
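Continuing the illustrative Paillier assumption from above (a sketch, not the patent's prescribed implementation), the TEE-side conclusion of a layer might look as follows, where enc_linear_result is a list of ciphertexts returned by the untrusted environment and the activation choice comes from the second parameter set.

```python
import numpy as np

def finish_layer_in_tee(private_key, enc_linear_result, activation="relu"):
    """Decrypt the first linear computation result inside the TEE, then
    apply the nonlinear operation to obtain the layer output."""
    d2 = np.array([private_key.decrypt(c) for c in enc_linear_result])
    if activation == "relu":
        return np.maximum(d2, 0.0)
    if activation == "sigmoid":
        return 1.0 / (1.0 + np.exp(-d2))
    raise ValueError(f"unsupported activation: {activation}")
```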
In the embodiments of the present invention, the nonlinear operations are performed inside the TEE because homomorphic encryption algorithms do not possess, for nonlinear operations (such as evaluating the activation functions ReLU or Sigmoid), the homomorphic properties they have for linear operations (as shown in formulas (1) and (2)). It is therefore not possible to send encrypted data from the TEE to the external environment, obtain the encrypted result of a nonlinear operation there, and send it back to the TEE for decryption into the desired result; the correctness and the security of the computation could not both be ensured. Consequently, the nonlinear operations cannot be performed in the external environment.
The output result of the deep learning model is usually the output result of the last layer of the plurality of neural network layers included in the deep learning model. Therefore, according to one embodiment, after the calculation processing is sequentially performed on the N neural network layers, the output result of the last layer of the N neural network layers may be used as the output result of the deep learning model.
Fig. 3 shows a flowchart of a deep learning model privacy protection method based on a hybrid environment according to an embodiment of this specification, the hybrid environment comprising a trusted execution environment TEE and an untrusted execution environment. The deep learning model comprises N sequentially connected neural network layers, including a first neural network layer, and the method, executed in the untrusted execution environment, comprises:
in step 31, a first encryption result is received from the TEE; the first encryption result is obtained by homomorphic encryption based on the input data and a first parameter set used for linear operations in the first neural network layer.
In this step, the untrusted execution environment receives the first encryption result from the TEE, where a specific implementation of the first encryption result generated in the TEE is substantially the same as the above description of the encryption result obtained in step 213 in fig. 2, and is not described herein again.
In step 32, performing homomorphic linear operation on the first encryption result to obtain a first linear calculation result;
in this step, the specific implementation of obtaining the first linear computation result is substantially the same as the above description of the specific implementation of obtaining the first linear computation result in the untrusted execution environment in step 214 in fig. 2, and is not described herein again.
In one embodiment, performing homomorphic linear operations may include homomorphic addition operations and/or homomorphic multiplication operations.
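Under the same Paillier assumption, a sketch of the untrusted-side homomorphic linear operation follows. Because Paillier multiplies ciphertexts only by plaintext values, the weights are treated as plaintext here; in the patent's scheme the weights arrive encrypted as well, which would call for a scheme supporting ciphertext-ciphertext multiplication.

```python
import numpy as np

def untrusted_homomorphic_linear(enc_inputs, weights, bias):
    """Runs in the untrusted environment. enc_inputs is a list of Paillier
    ciphertexts; weights (n_in x n_out) and bias are plaintext arrays in
    this sketch. Computes Enc(x @ W + b) using only homomorphic addition
    and ciphertext-by-plaintext multiplication."""
    n_out = weights.shape[1]
    result = []
    for j in range(n_out):
        acc = float(bias[j])               # plaintext addend
        for enc_x, w in zip(enc_inputs, weights[:, j]):
            acc = enc_x * float(w) + acc   # homomorphic multiply-accumulate
        result.append(acc)
    return result  # the first linear computation result, still encrypted
```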
At step 33, the first linear calculation result is sent to the TEE.
As mentioned above, data compression can be used to increase the efficiency of data transmission between the TEE and the external environment. Thus, in one embodiment, the first compressed data may be received from the TEE and decompressed to obtain the first encryption result. In another embodiment, the first linear computation result may be compressed to obtain second compressed data, which is then sent to the TEE.
To further ensure the correctness of data transmission, in one embodiment the first linear computation result may be subjected to a dimension check before being sent to the trusted execution environment, determining whether it has a predetermined dimension, and whether to send the first linear computation result to the trusted execution environment is decided according to the check result.
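One way such a check might look is sketched below; keeping an expected-shape value on the untrusted side is an assumption, since the patent only requires comparison against a predetermined dimension.

```python
def check_and_send(first_linear_result, expected_dim, send_to_tee):
    """Untrusted side: forward the result to the TEE only if its dimension
    matches the predetermined one; otherwise refuse to send."""
    actual_dim = len(first_linear_result)
    if actual_dim != expected_dim:
        raise ValueError(
            f"dimension check failed: got {actual_dim}, expected {expected_dim}")
    send_to_tee(first_linear_result)
```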
Fig. 4 is a block diagram illustrating a hybrid environment-based deep learning model privacy protection apparatus according to an embodiment of the present disclosure. The hybrid environment includes a trusted execution environment TEE and an untrusted execution environment, the deep learning model includes N neural network layers connected in sequence, including a first neural network layer, the apparatus is implemented in the TEE, as shown in fig. 4, the apparatus 400 includes:
a calculation processing unit 41 configured to sequentially perform calculation processing on the N neural network layers, and including, for the calculation processing of the first neural network layer:
an obtaining division result subunit 411 configured to obtain a first parameter set and a second parameter set obtained by dividing the network parameters of the first neural network layer, where the first parameter set is used for linear operation, and the second parameter set is used for nonlinear operation;
an input data acquisition subunit 412 configured to acquire input data of the first neural network layer;
a sending subunit 413, configured to perform homomorphic encryption on the first parameter set and the input data, and send an encryption result to an untrusted execution environment;
a linear computation result obtaining subunit 414 configured to obtain, from the untrusted execution environment, a first linear computation result produced by a homomorphic linear operation based on the encryption result;
a layer output determination subunit 415 configured to determine an output result of the first neural network layer at least according to the first linear calculation result and the second parameter set.
In one embodiment, the transmitting subunit may be further configured to,
compressing the encryption result to obtain first compressed data, and sending the first compressed data to an untrusted execution environment;
the linear computation result obtaining subunit may be further configured to receive second compressed data from the untrusted execution environment, and decompress the second compressed data to obtain a first linear computation result of performing homomorphic linear operation based on the encryption result.
In one embodiment, the compressing may be lossless compression.
In one embodiment, the calculation processing unit may further include:
the parameter dividing subunit is configured to divide the network parameters of the N neural network layers into a parameter part for linear operation and a parameter part for nonlinear operation respectively before calculation processing is performed on a first neural network layer, so as to obtain a parameter dividing result;
the obtaining of the division result subunit may be further configured to read the first parameter set and the second parameter set obtained by dividing the first neural network layer from the parameter division result.
In one embodiment, the obtaining of the division result subunit may be further configured to:
acquiring network parameters of the first neural network layer;
the network parameters are divided into a first set of parameters for linear operations and a second set of parameters for non-linear operations.
In one embodiment, the first set of parameters may include one or more of a weight parameter, a bias parameter, and the numbers of input and output neurons.
In one embodiment, the second set of parameters may include parameters associated with the activation function.
In one embodiment, the apparatus may further comprise,
and the model output result determining unit 42 is configured to take the output result of the last layer of the N neural network layers as the output result of the deep learning model after sequentially performing calculation processing on the N neural network layers.
Fig. 5 is a block diagram illustrating a hybrid environment-based deep learning model privacy protection apparatus according to an embodiment of the present disclosure. The hybrid environment includes a trusted execution environment TEE and an untrusted execution environment, the deep learning model includes N neural network layers connected in sequence, the N neural network layers include a first neural network layer, the apparatus is implemented in the untrusted execution environment, as shown in fig. 5, the apparatus 500 includes:
an obtaining unit 51 configured to obtain a first encryption result from the TEE; the first encryption result is obtained by homomorphic encryption based on a first parameter set used for linear operation in a first neural network layer and input data;
a linear computing unit 52, configured to perform homomorphic linear operation on the first encryption result to obtain a first linear computing result;
the result sending unit 53 is configured to send the first linear calculation result to the TEE, so that the TEE determines an output result of the first neural network layer according to the first linear calculation result and the second parameter set for the nonlinear operation in the first neural network layer.
In one embodiment, the obtaining unit may be further configured to,
receiving first compressed data from the TEE, and decompressing the first compressed data to obtain a first encryption result;
the result sending unit may be further configured to compress the first linear calculation result to obtain second compressed data, and send the second compressed data to the TEE.
In one embodiment, the homomorphic linear operations may include homomorphic addition operations and/or homomorphic multiplication operations.
In one embodiment, the apparatus may further comprise,
and the dimension checking unit is configured to perform a dimension check on the first linear calculation result before it is sent to the TEE, determine whether it has a predetermined dimension, and determine whether to send the first linear calculation result to the TEE according to the check result.
Another aspect of the present specification provides a computer readable storage medium having a computer program stored thereon, which, when executed in a computer, causes the computer to perform any one of the above methods.
Another aspect of the present specification provides a computing device comprising a memory having stored therein executable code, and a processor that, when executing the executable code, implements any of the methods described above.
It is to be understood that the terms "first," "second," and the like, herein are used for descriptive purposes only and not for purposes of limitation, to distinguish between similar concepts.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The above-mentioned embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present invention should be included in the scope of the present invention.

Claims (26)

1. A deep learning model privacy protection method based on a hybrid environment, wherein the hybrid environment comprises a Trusted Execution Environment (TEE) and an untrusted execution environment, the deep learning model comprises N neural network layers which are sequentially connected, wherein the N neural network layers comprise a first neural network layer, the method is executed in the TEE, the method comprises the following steps of sequentially carrying out calculation processing on the N neural network layers, wherein the calculation processing aiming at the first neural network layer comprises the following steps:
acquiring a first parameter set and a second parameter set obtained by dividing network parameters of the first neural network layer, wherein the first parameter set is used for linear operation, and the second parameter set is used for nonlinear operation;
acquiring input data of a first neural network layer;
homomorphically encrypting the first parameter set and the input data, and sending an encryption result to an untrusted execution environment;
obtaining, from the untrusted execution environment, a first linear computation result produced by a homomorphic linear operation based on the encryption result;
and determining an output result of the first neural network layer according to at least the first linear calculation result and the second parameter set.
2. The method of claim 1, wherein sending the encrypted result to the untrusted execution environment comprises:
compressing the encryption result to obtain first compressed data, and sending the first compressed data to an untrusted execution environment;
obtaining, from the untrusted execution environment, a first linear computation result produced by a homomorphic linear operation based on the encryption result, comprising:
and receiving second compressed data from the untrusted execution environment, decompressing the second compressed data, and obtaining a first linear calculation result of homomorphic linear operation based on the encryption result.
3. The method of claim 2, wherein said compressing is lossless compression.
4. The method as claimed in any one of claims 1-3, wherein prior to performing computational processing for the first neural network layer, the method further comprises:
dividing the network parameters of the N neural network layers into a parameter part for linear operation and a parameter part for nonlinear operation respectively to obtain parameter division results;
the obtaining of the first parameter set and the second parameter set obtained by dividing the network parameters of the first neural network layer includes reading the first parameter set and the second parameter set obtained by dividing the first neural network layer from the parameter division result.
5. The method according to any one of claims 1 to 3, wherein the obtaining a first parameter set and a second parameter set obtained by dividing network parameters of the first neural network layer comprises:
acquiring network parameters of the first neural network layer;
the network parameters are divided into a first set of parameters for linear operations and a second set of parameters for non-linear operations.
6. The method of claim 1, wherein the first set of parameters includes one or more of a weight parameter, a bias parameter, and the numbers of input and output neurons.
7. The method of claim 1, wherein the second set of parameters includes parameters related to an activation function.
8. The method according to claim 1, further comprising, after the calculation processing is sequentially performed on the N neural network layers, taking an output result of a last layer of the N neural network layers as an output result of the deep learning model.
9. A deep learning model privacy protection method based on a hybrid environment, wherein the hybrid environment comprises a Trusted Execution Environment (TEE) and an untrusted execution environment, the deep learning model comprises N neural network layers which are connected in sequence, the N neural network layers comprise a first neural network layer, the method is executed in the untrusted execution environment, and the method comprises the following steps:
obtaining a first encryption result from the TEE; the first encryption result is obtained by homomorphic encryption based on a first parameter set used for linear operation in a first neural network layer and input data;
performing homomorphic linear operation on the first encryption result to obtain a first linear calculation result;
and sending the first linear calculation result to the TEE, so that the TEE determines an output result of the first neural network layer according to the first linear calculation result and a second parameter set used for nonlinear operation in the first neural network layer.
10. The method of claim 9, wherein obtaining the first encryption result from the TEE comprises:
receiving first compressed data from the TEE, and decompressing the first compressed data to obtain a first encryption result;
the sending the first linear computation result to the TEE includes:
and compressing the first linear calculation result to obtain second compressed data, and sending the second compressed data to the TEE.
11. The method of claim 9, wherein said performing homomorphic linear operations comprises homomorphic addition operations and/or homomorphic multiplication operations.
12. The method of claim 9, further comprising, prior to sending the first linear computation result to the TEE, performing a dimension check on the first linear computation result to determine whether it has a predetermined dimension, and determining whether to send the first linear computation result to the TEE based on the check result.
13. An apparatus for protecting the privacy of a deep learning model based on a hybrid environment, the hybrid environment comprising a Trusted Execution Environment (TEE) and an untrusted execution environment, the deep learning model comprising N sequentially connected neural network layers including a first neural network layer, the apparatus being implemented in the TEE, the apparatus comprising,
a calculation processing unit configured to sequentially perform calculation processing on the N neural network layers, and including, for the calculation processing of the first neural network layer:
a dividing result obtaining subunit configured to obtain a first parameter set and a second parameter set obtained by dividing the network parameters of the first neural network layer, where the first parameter set is used for linear operation, and the second parameter set is used for nonlinear operation;
an input data acquisition subunit configured to acquire input data of the first neural network layer;
the sending subunit is configured to perform homomorphic encryption on the first parameter set and the input data, and send an encryption result to an untrusted execution environment;
a linear calculation result acquisition subunit configured to acquire, from the untrusted execution environment, a first linear computation result produced by a homomorphic linear operation based on the encryption result;
and the layer output determining subunit is configured to determine an output result of the first neural network layer according to at least the first linear calculation result and the second parameter set.
14. The apparatus of claim 13, wherein the transmitting subunit is further configured to,
compressing the encryption result to obtain first compressed data, and sending the first compressed data to an untrusted execution environment;
and the linear calculation result acquisition subunit is further configured to receive second compressed data from the untrusted execution environment, and decompress the second compressed data to obtain a first linear calculation result of performing homomorphic linear operation based on the encryption result.
15. The apparatus of claim 14, wherein the compressing is lossless compression.
16. The apparatus according to any one of claims 13-15, wherein the computing processing unit further comprises:
the parameter dividing subunit is configured to divide the network parameters of the N neural network layers into a parameter part for linear operation and a parameter part for nonlinear operation respectively before calculation processing is performed on a first neural network layer, so as to obtain a parameter dividing result;
and the obtaining division result subunit is further configured to read the first parameter set and the second parameter set obtained by dividing the first neural network layer from the parameter division result.
17. The apparatus of any of claims 13-15, wherein the obtain division result subunit is further configured to:
acquiring network parameters of the first neural network layer;
the network parameters are divided into a first set of parameters for linear operations and a second set of parameters for non-linear operations.
18. The apparatus of claim 13, wherein the first set of parameters includes one or more of a weight parameter, a bias parameter, and the numbers of input and output neurons.
19. The apparatus of claim 13, wherein the second set of parameters comprises parameters related to an activation function.
20. The apparatus of claim 13, further comprising,
and the model output result determining unit is configured to take the output result of the last layer of the N neural network layers as the output result of the deep learning model after sequentially performing calculation processing on the N neural network layers.
21. A deep learning model privacy protection apparatus based on a hybrid environment, the hybrid environment including a trusted execution environment (TEE) and an untrusted execution environment, the deep learning model including N neural network layers connected in sequence, the N neural network layers including a first neural network layer, the apparatus being implemented in the untrusted execution environment and comprising:
an acquisition unit configured to acquire a first encryption result from the TEE, the first encryption result being obtained by homomorphic encryption based on a first parameter set used for linear operations in the first neural network layer and on input data;
a linear calculation unit configured to perform a homomorphic linear operation on the first encryption result to obtain a first linear calculation result;
a result sending unit configured to send the first linear calculation result to the TEE, so that the TEE determines the output result of the first neural network layer according to the first linear calculation result and a second parameter set used for nonlinear operations in the first neural network layer.
22. The apparatus of claim 21, wherein the acquisition unit is further configured to:
receive first compressed data from the TEE, and decompress the first compressed data to obtain the first encryption result;
and the result sending unit is further configured to compress the first linear calculation result to obtain second compressed data, and send the second compressed data to the TEE.
23. The apparatus of claim 21, wherein performing the homomorphic linear operation comprises performing a homomorphic addition operation and/or a homomorphic multiplication operation.
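The two operations named in claim 23 can be demonstrated with python-paillier. Note that this particular scheme offers homomorphic addition and multiplication by a plaintext scalar only, so it covers the claim's "and/or" partially; ciphertext-by-ciphertext multiplication requires a leveled or fully homomorphic scheme.

```python
# Demonstration only -- not the patent's prescribed scheme.
from phe import paillier

pk, sk = paillier.generate_paillier_keypair(n_length=1024)
a, b = pk.encrypt(3.0), pk.encrypt(4.0)

assert abs(sk.decrypt(a + b) - 7.0) < 1e-6    # homomorphic addition
assert abs(sk.decrypt(a * 2.5) - 7.5) < 1e-6  # homomorphic multiplication (plaintext scalar)
```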
24. The apparatus of claim 21, further comprising:
a dimension checking unit configured to, before the first linear calculation result is sent to the TEE, check whether the first linear calculation result has a predetermined dimension, and determine, according to the checking result, whether to send the first linear calculation result to the TEE.
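Claim 24's dimension check might look like the sketch below; `expected_dim` (e.g. the layer's output neuron count) and the transport callback are assumed to be agreed between the two environments, and all names are illustrative:

```python
def check_and_send(linear_result, expected_dim, send_to_tee):
    """Verify the result has the predetermined dimension before it is
    returned to the TEE (claim 24)."""
    if len(linear_result) != expected_dim:
        # Check failed: do not forward a malformed result into the TEE.
        return False
    send_to_tee(linear_result)
    return True
```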
25. A computer-readable storage medium having a computer program stored thereon which, when executed in a computer, causes the computer to carry out the method of any one of claims 1-12.
26. A computing device comprising a memory and a processor, wherein the memory has stored therein executable code that, when executed by the processor, implements the method of any one of claims 1-12.
CN202110104463.2A 2021-01-26 2021-01-26 Deep learning model privacy protection method and device based on mixed environment Pending CN112766495A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110104463.2A CN112766495A (en) 2021-01-26 2021-01-26 Deep learning model privacy protection method and device based on mixed environment

Publications (1)

Publication Number Publication Date
CN112766495A 2021-05-07

Family

ID=75705772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110104463.2A Pending CN112766495A (en) 2021-01-26 2021-01-26 Deep learning model privacy protection method and device based on mixed environment

Country Status (1)

Country Link
CN (1) CN112766495A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070028088A1 (en) * 2005-08-01 2007-02-01 Coskun Bayrak Polymorphic encryption method and system
CN104346363A (en) * 2013-07-30 2015-02-11 贵州电网公司信息通信分公司 Method for improving storage and transmission efficiency of power grid database
US20150312031A1 (en) * 2014-04-23 2015-10-29 Samsung Electronics Co., Ltd. Encryption apparatus, method for encryption and computer-readable recording medium
CN104657494A (en) * 2015-03-06 2015-05-27 四川智羽软件有限公司 Access method for website database
CN105956840A (en) * 2016-05-30 2016-09-21 广东电网有限责任公司 Electricity charge payment method and device, and bank and power supply enterprise networking system
CN107135004A (en) * 2017-04-20 2017-09-05 中国科学技术大学 A kind of adaptive real-time lossless compression method to earthquake data flow
CN112106076A (en) * 2018-06-25 2020-12-18 国际商业机器公司 Privacy-enhanced deep learning cloud service using trusted execution environments
US20200403781A1 (en) * 2019-06-18 2020-12-24 International Business Machines Corporation Compressible (F)HE with Applications to PIR
CN110704850A (en) * 2019-09-03 2020-01-17 华为技术有限公司 Artificial intelligence AI model operation method and device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569265A (en) * 2021-09-16 2021-10-29 支付宝(杭州)信息技术有限公司 Data processing method, system and device
CN113569265B (en) * 2021-09-16 2021-12-24 支付宝(杭州)信息技术有限公司 Data processing method, system and device
WO2023115996A1 (en) * 2021-12-24 2023-06-29 中国银联股份有限公司 Model protection method and apparatus, data processing method and apparatus, and device and medium
CN116701831A (en) * 2023-02-28 2023-09-05 华为云计算技术有限公司 Method, device and storage medium for processing data
CN117688595A (en) * 2024-02-04 2024-03-12 南湖实验室 Homomorphic encryption performance improving method and system based on trusted execution environment

Similar Documents

Publication Publication Date Title
Vellela et al. Strategic Survey on Security and Privacy Methods of Cloud Computing Environment
CN112766495A (en) Deep learning model privacy protection method and device based on mixed environment
Yang et al. Provable data possession of resource-constrained mobile devices in cloud computing
CN112395643B (en) Data privacy protection method and system for neural network
CN111417121A (en) Multi-malware hybrid detection method, system and device with privacy protection function
CN111950030A (en) Data sharing storage method based on block chain, terminal equipment and storage medium
US9251325B2 (en) Verifying passwords on a mobile device
CN111783129A (en) Data processing method and system for protecting privacy
Chou et al. Privacy-preserving phishing web page classification via fully homomorphic encryption
CN111259440B (en) Privacy protection decision tree classification method for cloud outsourcing data
Mohamed et al. Cryptography concepts: integrity, authentication, availability, access control, and non-repudiation
CN113055153B (en) Data encryption method, system and medium based on fully homomorphic encryption algorithm
JP2014137474A (en) Tamper detection device, tamper detection method, and program
Singh et al. Security enhancement of the cloud paradigm using a novel optimized crypto mechanism
CN108449317B (en) Access control system for security verification based on SGX and homomorphic encryption and implementation method thereof
Santos et al. Enhancing medical data security on public cloud
CN111475690B (en) Character string matching method and device, data detection method and server
US20220172647A1 (en) System and method for processing boolean and garbled circuits in memory-limited environments
Zhao Improvement of cloud computing medical data protection technology based on symmetric encryption algorithm
Saxena et al. Collaborative approach for data integrity verification in cloud computing
CN113051587B (en) Privacy protection intelligent transaction recommendation method, system and readable medium
Youn et al. Design of additive homomorphic encryption with multiple message spaces for secure and practical storage services over encrypted data
KR20150002821A (en) Method for protecting confidentiality of a file distributed and stored at a plurality of storage service providers
Yuan et al. Secure integrated circuit design via hybrid cloud
CN111931204A (en) Encryption and de-duplication storage method and terminal equipment for distributed system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210507