CN111144547A - Neural network model prediction method and device based on trusted execution environment - Google Patents
Neural network model prediction method and device based on trusted execution environment
- Publication number
- CN111144547A (application CN201911264309.0A)
- Authority
- CN
- China
- Prior art keywords
- model prediction
- data
- model
- predicted
- execution environment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioethics (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Computer Hardware Design (AREA)
- Computer Security & Cryptography (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The embodiments of the specification provide a model prediction method and a model prediction device based on a trusted execution environment. In the model prediction method, encrypted private data to be predicted provided by a data provider is received; the received encrypted private data to be predicted is decrypted in the trusted execution environment; the decrypted data to be predicted is provided to a neural network model in the trusted execution environment for multiple model predictions, wherein the neural network model is provided with a Dropout layer and the Dropout layer is opened at each model prediction; a model prediction result of the data to be predicted is determined based on the multiple model prediction results; and the model prediction result is sent to the data provider. With this model prediction method, the privacy of the private data of the data provider and the model security of the neural network model at the model prediction device can both be ensured, and adversarial sample attacks can be better resisted.
Description
Technical Field
Embodiments of the present disclosure relate generally to the field of computers, and more particularly, to a method and apparatus for neural network model prediction based on a trusted execution environment.
Background
When a model service provider provides a model prediction service, data of a data provider must be fed to a neural network model hosted at the model service provider to perform model prediction. Here, the data provider may be a company or enterprise, or an individual user. The data provided by the data provider may be uniformly collected customer data, such as user data and business data. The user data may include, for example, user identity data. The business data may include, for example, data generated on business applications operated by a company, such as commodity transaction data on Taobao. The data provided by the data provider may also be individual user data. Data is an important and private asset for its owner, and its privacy needs to be protected.
In addition, a machine learning model typically embodies business policies of a company or enterprise, such as business operation policies and business risk identification policies. Once the machine learning model is leaked, these business strategies can be reverse-deduced from it. Moreover, the machine learning model is the most important asset of the model service provider, so the model service provider also needs to protect the security of the machine learning model.
In view of such circumstances, a model prediction method capable of protecting data privacy of a data provider and model security of a model service provider has been proposed.
Disclosure of Invention
In view of the foregoing problems, embodiments of the present specification provide a method and an apparatus for neural network model prediction based on a trusted execution environment, which can provide a model prediction service while ensuring data privacy of a data provider and security of a neural network model.
According to an aspect of embodiments of the present specification, there is provided a model prediction method based on a trusted execution environment, including: receiving encrypted private data to be predicted provided by a data provider; decrypting the received encrypted private data to be predicted in the trusted execution environment; providing the decrypted data to be predicted to a neural network model located in the trusted execution environment for multiple model predictions, wherein the neural network model is provided with a Dropout layer, and the Dropout layer is opened during each model prediction; determining a model prediction result of the data to be predicted based on multiple model prediction results; and sending the model prediction result to the data provider.
Optionally, in an example of the above aspect, determining the model prediction result of the data to be predicted based on the multiple model prediction results may include: calculating the average of the multiple model prediction results as the model prediction result of the data to be predicted.
Optionally, in an example of the above aspect, the model prediction method may further include: calculating the variance of the multiple model prediction results, and sending the model prediction result to the data provider may include: sending the model prediction result to the data provider when the calculated variance is not greater than a predetermined threshold.
Optionally, in an example of the above aspect, the trusted execution environment includes an SGX (Software Guard Extensions)-based trusted execution environment or a TrustZone-based trusted execution environment.
Optionally, in an example of the above aspect, the number of model predictions may be determined according to computational power for model prediction, prediction timeliness and/or prediction accuracy requirements required by an application scenario.
Optionally, in an example of the above aspect, the private data to be predicted includes image data, voice data, or text data, or the private data to be predicted includes user feature data.
According to another aspect of embodiments of the present specification, there is provided a model prediction apparatus based on a trusted execution environment, including: the data receiving unit is used for receiving the encrypted private data to be predicted, which is provided by the data provider; the data decryption unit is used for decrypting the received encrypted private data to be predicted in the trusted execution environment; the model prediction unit is used for providing the decrypted private data to be predicted to a neural network model positioned in the trusted execution environment for multiple times of model prediction, the neural network model is provided with a Dropout layer, and the Dropout layer is opened at each time of model prediction; a model prediction result determination unit that determines a model prediction result of the data to be predicted based on a plurality of times of model prediction results; and a model prediction result transmission unit that transmits the model prediction result to the data provider.
Optionally, in an example of the above aspect, the model predicting device may further include: a variance calculation unit that calculates a variance of the plurality of times of model prediction results, and the model prediction result transmission unit transmits the model prediction result to the data provider when the calculated variance is not more than a predetermined threshold.
Optionally, in an example of the above aspect, the model predicting device may further include: a model prediction number determining unit for determining the number of model predictions according to the computational power available for model prediction and the prediction timeliness and/or prediction accuracy required by the application scenario.
According to another aspect of embodiments of the present specification, there is provided a model prediction system based on a trusted execution environment, including: the data provider device provides private data to be predicted; and a model prediction device having a trusted execution environment including the model prediction means as described above, the trusted execution environment having therein a neural network model provided with a Dropout layer, and the Dropout layer being opened at each model prediction.
According to another aspect of embodiments of the present specification, there is provided an electronic apparatus including: one or more processors, and a memory coupled with the one or more processors, the memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform a neural network model prediction method as described above.
According to another aspect of embodiments herein, there is provided a machine-readable storage medium storing executable instructions that, when executed, cause the machine to perform a neural network model prediction method as described above.
Drawings
A further understanding of the nature and advantages of the contents of the embodiments of the specification may be realized by reference to the following drawings. In the drawings, similar components or features may have the same reference numerals.
FIG. 1 illustrates a block diagram of a trusted execution environment based neural network model prediction system in accordance with embodiments of the present description;
FIG. 2 illustrates an example schematic diagram of a neural network model in accordance with embodiments of the present description;
FIG. 3 illustrates a flow diagram of one example of a method for neural network model prediction based on a trusted execution environment in accordance with embodiments of the present description;
FIG. 4 illustrates a flow diagram of another example of a method of neural network model prediction based on a trusted execution environment in accordance with an embodiment of the present description;
FIG. 5 illustrates a block diagram of one example of a model prediction apparatus in accordance with embodiments of the present description;
FIG. 6 illustrates another example block diagram of a model prediction apparatus in accordance with an embodiment of this specification; and
FIG. 7 illustrates a block diagram of an electronic device for implementing trusted execution environment based neural network model prediction in accordance with embodiments of the present description.
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It should be understood that these embodiments are discussed only to enable those skilled in the art to better understand and thereby implement the subject matter described herein, and are not intended to limit the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the embodiments of the disclosure. Various examples may omit, substitute, or add various procedures or components as needed. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with respect to some examples may also be combined in other examples.
As used herein, the term "include" and its variants are open-ended terms meaning "including, but not limited to". The term "based on" means "based at least in part on". The terms "one embodiment" and "an embodiment" mean "at least one embodiment". The term "another embodiment" means "at least one other embodiment". The terms "first," "second," and the like may refer to different or the same object. Other definitions, whether explicit or implicit, may be included below. The definition of a term is consistent throughout the specification unless the context clearly dictates otherwise.
Fig. 1 shows a block diagram of a trusted execution environment based neural network model prediction system 1 according to an embodiment of the present description.
As shown in fig. 1, the neural network model prediction system 1 includes a data provider device 10 and a model prediction device 20.
The data provider device 10 is used to provide the private data to be predicted that is needed by the model prediction device 20. The data provider device 10 may be any data provider device provided at a model user, such as an internet of things device in an internet of things to which a neural network model is applied, a client device installed with a business application client, and the like. Data provider device 10 may include, but is not limited to: personal computers, server computers, workstations, desktop computers, laptop computers, notebook computers, mobile computing devices, smart phones, tablet computers, cellular phones, Personal Digital Assistants (PDAs), handheld devices, wearable computing devices, consumer electronics, and so forth.
The model prediction device 20 has a trusted execution environment 210 disposed thereon. A Trusted Execution Environment (TEE) is a secure area within the main processor. It runs in an isolated environment, in parallel with the operating system. By using both hardware and software to protect data and code, it ensures the confidentiality and integrity of the code and data loaded into the TEE. Trusted applications running in the TEE can access the full power of the device's main processor and memory, while hardware isolation protects these components from user-installed applications running in the main operating system. Software and cryptographic isolation inside the TEE protect the trusted applications from one another.
In this description, the trusted execution environment 210 may include, for example, an SGX-based trusted execution environment or a TrustZone-based trusted execution environment. For example, the model prediction device 20 may comprise an SGX device or a TrustZone device.
An SGX device is a trusted computing device to which the Intel SGX architecture is applied. Intel SGX is an extension of the Intel architecture that adds a new set of instructions and memory access mechanisms. These extensions allow an application to set up a container called an enclave, which partitions off a protected area in the application's address space and provides confidentiality and integrity protection for the code and data in the enclave against malware with special privileges. The SGX architecture takes hardware security as its mandatory guarantee and does not depend on the security state of firmware or software, so it can provide a trusted execution environment in user space. Unlike other trusted computing technologies, the SGX-based Trusted Computing Base (TCB) includes only hardware, which avoids the software vulnerabilities and threats inherent in a software-based TCB and greatly improves computing security. In addition, the SGX architecture guarantees a trusted execution environment at run time: malicious code cannot access or tamper with the protected content while other programs are running, which further enhances system security.
Furthermore, a TrustZone device is a trusted computing device that is capable of supporting ARM TrustZone technology.
The trusted execution environment 210 has a neural network model therein. Upon receiving the private data to be predicted from the data provider device 10, the model prediction device 20 supplies the received private data to be predicted to the neural network model in the trusted execution environment 210 to perform model prediction in the trusted execution environment 210. Then, the model prediction device 20 supplies the obtained model prediction result to the data provider device 10. The structure and operation of the model prediction device 20 will be described in detail below with reference to the accompanying drawings.
In this specification, the neural network model is provided with a Dropout layer. In addition, in both the training process and the prediction process of the neural network model according to the embodiments of the present specification, the Dropout layer is not closed; it remains open.
With the Dropout layer, some of the neural network units in the model (for example, some hidden layer nodes) are randomly selected with a certain probability and ignored during prediction. In this way, a different subset of hidden layer nodes is ignored each time the neural network model performs a prediction, so the neural network model cannot be reverse-derived from the prediction results, which effectively prevents leakage of the neural network model.
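To make this behavior concrete, the following is a minimal sketch assuming PyTorch; the layer sizes, dropout probability, and number of passes are illustrative assumptions rather than values taken from this disclosure.

```python
import torch
import torch.nn as nn

# Illustrative network with a Dropout layer applied to the inputs
# (sizes and p are assumptions for demonstration only).
model = nn.Sequential(
    nn.Dropout(p=0.5),   # the Dropout layer stays "open" at prediction time
    nn.Linear(3, 2),
    nn.ReLU(),
    nn.Linear(2, 1),
)

x = torch.randn(1, 3)  # one sample with three input features

# Keeping the model in train() mode leaves Dropout active, so each forward
# pass randomly ignores a different subset of units and yields a different output.
model.train()
with torch.no_grad():
    predictions = [model(x).item() for _ in range(10)]
print(predictions)  # ten generally different values for the same input
```

Because each pass sees a different random mask, an observer holding only the outputs cannot fit a single fixed set of weights to the input-output pairs.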
Fig. 2 shows a schematic diagram of an example of the neural network model 2.
As shown in fig. 2, the neural network model 2 includes an input layer 201, a first hidden layer 202, a second hidden layer 203, a third hidden layer 204, and an output layer 205.
The input layer 201 includes three input nodes N1, N2, and N3 and a bias term b1. The three input nodes N1, N2, and N3 receive data from three different data owners, respectively. In this specification, the first hidden layer 202 includes two hidden layer nodes N4 and N5 and a bias term b2. Hidden layer nodes N4 and N5 are each fully connected to the three input nodes N1, N2, and N3 of the input layer 201 and to the bias term b1. The weights between input node N1 and hidden layer nodes N4 and N5 are W1,4 and W1,5, respectively. The weights between input node N2 and hidden layer nodes N4 and N5 are W2,4 and W2,5, respectively. The weights between input node N3 and hidden layer nodes N4 and N5 are W3,4 and W3,5, respectively.
The second hidden layer 203 includes two hidden layer nodes N6 and N7 and a bias term b3. Hidden layer nodes N6 and N7 are each fully connected to the two hidden layer nodes N4 and N5 of the first hidden layer 202 and to the bias term b2. The weights between hidden layer node N4 and hidden layer nodes N6 and N7 are W4,6 and W4,7, respectively. The weights between hidden layer node N5 and hidden layer nodes N6 and N7 are W5,6 and W5,7, respectively.
The third hidden layer 204 includes two hidden layer nodes N8 and N9 and a bias term b4. Hidden layer nodes N8 and N9 are each fully connected to the two hidden layer nodes N6 and N7 of the second hidden layer 203 and to the bias term b3. The weights between hidden layer node N6 and hidden layer nodes N8 and N9 are W6,8 and W6,9, respectively. The weights between hidden layer node N7 and hidden layer nodes N8 and N9 are W7,8 and W7,9, respectively.
The output layer 205 includes an output node N10. Output node N10 is fully connected to the two hidden layer nodes N8 and N9 of the third hidden layer 204 and to the bias term b4. The weight between hidden layer node N8 and output node N10 is W8,10. The weight between hidden layer node N9 and output node N10 is W9,10.
In the neural network model shown in fig. 2, the weights W1,4, W1,5, W2,4, W2,5, W3,4, W3,5, W4,6, W4,7, W5,6, W5,7, W6,8, W6,9, W7,8, W7,9, W8,10, and W9,10 are the model parameters of the respective layers. Further, Dropout layers (not shown) are provided before some of the layers shown in fig. 2, for example before the input layer 201 and before the first hidden layer 202. It is noted that the manner of arranging the Dropout layers in the neural network model may be determined according to the specific structure of the neural network model.
When the feedforward calculation is performed, because the Dropout layer before the input layer 201 is open, some of the input nodes N1, N2, and N3 of the input layer 201 (for example, N1) are ignored. The inputs Z1 and Z2 of the hidden layer nodes N4 and N5 of the first hidden layer 202 are then computed as Z1 = W2,4·X2 + W3,4·X3 + b1 and Z2 = W2,5·X2 + W3,5·X3 + b1. An activation function is then applied to Z1 and Z2, respectively, to obtain the outputs a1 and a2 of hidden layer nodes N4 and N5. The feedforward calculation proceeds layer by layer in this manner, as shown in fig. 2, and finally the output a7 of the neural network model is obtained.
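The masked feedforward step above can be written as a short numerical sketch. The input values, weights, keep probability, and the ReLU activation below are illustrative assumptions; the disclosure does not fix these values.

```python
import numpy as np

rng = np.random.default_rng()

x = np.array([0.2, 0.5, 0.1])        # input values X1, X2, X3 (illustrative)
W = np.array([[0.1, 0.4],            # W1,4  W1,5
              [0.3, 0.2],            # W2,4  W2,5
              [0.5, 0.6]])           # W3,4  W3,5  (illustrative weights)
b1 = 0.1                             # bias term b1

# Dropout before the input layer: keep each input node with probability p_keep.
p_keep = 2.0 / 3.0
mask = rng.binomial(1, p_keep, size=x.shape)   # e.g. [0, 1, 1] ignores node N1

# Inputs Z1, Z2 of hidden layer nodes N4 and N5; with N1 masked out this
# reduces to Z1 = W2,4*X2 + W3,4*X3 + b1 and Z2 = W2,5*X2 + W3,5*X3 + b1.
z = (mask * x) @ W + b1
a = np.maximum(z, 0)                 # activation (ReLU assumed for illustration)
print(z, a)
```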
Further, in the present specification, the private data to be predicted may include image data, voice data, or text data. Accordingly, the neural network model may be applied to business risk recognition, business classification, or business decision based on image data, voice data, or text data, respectively. Alternatively, the private data to be predicted may include user characteristic data. Accordingly, the neural network model may be applied to business risk identification, business classification, business recommendation or business decision, etc. based on user feature data.
FIG. 3 illustrates a flow diagram of one example of a method 300 for neural network model prediction based on a trusted execution environment in accordance with embodiments of the present description.
As shown in fig. 3, the data provider device 10 encrypts its private data to be predicted at 310, and transmits the encrypted private data to be predicted to the model prediction device 20 at 320.
Upon receiving the encrypted private data to be predicted, the model prediction device 20 decrypts the received encrypted private data to be predicted in the trusted execution environment 210 at 330. Here, it is noted that the encryption/decryption methods used in 310 and 330 may employ any encryption/decryption method that is applicable to the trusted execution environment 210.
At block 340, the model prediction device 20 provides the decrypted private data to be predicted to the neural network model in the trusted execution environment 210, which is provided with a Dropout layer, for multiple model predictions. During each model prediction, the Dropout layer in the neural network model is open; because the Dropout layer randomly selects and ignores a different subset of neural network units each time, multiple different model prediction results are obtained.
Next, at 350, a model prediction result of the data to be predicted is determined based on the obtained multiple model prediction results. For example, in one example, determining the model prediction result of the data to be predicted based on the obtained multiple model prediction results may include: calculating the average of the multiple model prediction results as the model prediction result of the data to be predicted.
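A minimal sketch of steps 340 and 350 follows, under the assumption of a hypothetical helper predict_once that performs one forward pass with the Dropout layer open inside the trusted execution environment; the stub body and the pass count T are illustrative only.

```python
import numpy as np

rng = np.random.default_rng()

def predict_once(x):
    """Hypothetical stand-in for one forward pass of the neural network with
    the Dropout layer open; here it just returns a noisy score for illustration."""
    return float(0.8 + 0.05 * rng.standard_normal())

x = None          # placeholder for the decrypted data to be predicted
T = 20            # illustrative number of stochastic model predictions (step 340)

results = np.array([predict_once(x) for _ in range(T)])

# Step 350: the average of the multiple prediction results serves as the
# model prediction result of the data to be predicted.
model_prediction = results.mean()
```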
The model prediction results are then sent to the data provider at 360.
Further, it is to be noted that the number of model predictions performed in 340 may be a predetermined number (an empirical value) set in advance. Alternatively, the number of model predictions can be determined according to the computational power available for model prediction and the prediction timeliness and/or prediction accuracy required by the application scenario.
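The disclosure leaves the exact choice open; the following heuristic is a purely illustrative assumption of how a latency budget and per-pass cost might be turned into a prediction count.

```python
def choose_num_predictions(latency_budget_ms: float, per_pass_ms: float,
                           min_passes: int = 5, max_passes: int = 50) -> int:
    """Illustrative heuristic: run as many stochastic passes as the latency
    budget allows, bounded below for accuracy and above for cost."""
    affordable = int(latency_budget_ms // max(per_pass_ms, 1e-6))
    return max(min_passes, min(affordable, max_passes))

# Example: a 200 ms budget with ~8 ms per forward pass allows 25 passes.
print(choose_num_predictions(200.0, 8.0))  # -> 25
```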
With the model prediction method shown in fig. 3, the private data of the data provider is encrypted, provided to the trusted execution environment at the model prediction device, and only decrypted inside the trusted execution environment before being fed to the neural network model, so the private data of the data provider is protected from leakage. In addition, when the neural network model with the Dropout layer is used for model prediction, some of the neural network units (for example, some hidden layer nodes) are randomly selected with a certain probability and ignored. In this way, a different subset of hidden layer nodes is ignored at each prediction, so the client device cannot reverse-derive the neural network model from the prediction results, which prevents the neural network model from being leaked. Moreover, by performing multiple model predictions and deriving the final model prediction result from the multiple results, the model prediction accuracy can be effectively improved.
FIG. 4 shows a flowchart of another example of a method 400 for neural network model prediction based on a trusted execution environment according to an embodiment of the present description.
As shown in fig. 4, the data provider device 10 encrypts its private data to be predicted at 410, and transmits the encrypted private data to be predicted to the model prediction device 20 at 420.
Upon receiving the encrypted private data to be predicted, the model prediction device 20 decrypts 430 the received encrypted private data to be predicted in the trusted execution environment 210. Here, it is noted that the encryption/decryption methods used in 410 and 430 may employ any encryption/decryption method that is applicable to the trusted execution environment 210.
At block 440, the model prediction device 20 provides the decrypted private data to be predicted to the neural network model in the trusted execution environment 210 for multiple model predictions. At each model prediction, the Dropout layer in the neural network model is open, so multiple model prediction results are obtained.
Next, at 450, a model prediction result and a model prediction variance of the data to be predicted are determined based on the obtained multiple model prediction results. For example, in one example, determining the model prediction result of the data to be predicted based on the obtained multiple model prediction results may include: calculating the average of the multiple model prediction results as the model prediction result of the data to be predicted.
At 460, it is determined whether the determined model prediction variance is greater than a predetermined threshold. If greater than the predetermined threshold, the model prediction results are discarded at 480.
If not, the model prediction results are sent to the data provider at 470.
Also, it is noted that the number of model predictions performed in 440 may be a predetermined number (an empirical value) set in advance. Alternatively, the number of model predictions can be determined according to the computational power available for model prediction and the prediction timeliness and/or prediction accuracy required by the application scenario.
With the model prediction method shown in fig. 4, the model prediction variance over the multiple model predictions is determined, and the model prediction result is provided to the data provider only when the variance does not exceed a predetermined threshold. Since the prediction results for an adversarial sample often carry a large error (and hence a large spread across the stochastic predictions), adversarial samples can be identified by this scheme, so adversarial sample attacks can be effectively resisted.
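A minimal sketch of the variance check in steps 450 to 480, again using a hypothetical predict_once stub; the threshold value is an assumption, since the disclosure only requires a predetermined threshold.

```python
import numpy as np

rng = np.random.default_rng()

def predict_once(x):
    """Hypothetical stand-in for one forward pass with the Dropout layer open."""
    return float(0.8 + 0.05 * rng.standard_normal())

T = 20                      # illustrative number of stochastic predictions
VARIANCE_THRESHOLD = 0.05   # illustrative predetermined threshold

x = None  # placeholder for the decrypted data to be predicted
results = np.array([predict_once(x) for _ in range(T)])
model_prediction = results.mean()

if results.var() <= VARIANCE_THRESHOLD:
    # Step 470: the result is stable, so send it to the data provider.
    print("send", model_prediction)
else:
    # Step 480: a large spread across passes suggests a possible adversarial
    # sample, so the prediction result is discarded.
    print("discard")
```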
Fig. 5 illustrates a block diagram of a model prediction apparatus 500 according to an embodiment of the present description. The model prediction means 500 is provided at the model service provider. As shown in fig. 5, the model prediction apparatus 500 includes a data receiving unit 510, a data decrypting unit 520, a model prediction unit 530, a model prediction result determining unit 540, and a model prediction result transmitting unit 550.
The data receiving unit 510 is configured to receive encrypted private data to be predicted provided by a data provider.
The data decryption unit 520 is configured to decrypt the received encrypted private data to be predicted in the trusted execution environment.
The model prediction unit 530 is configured to provide the decrypted private data to be predicted to a neural network model located in the trusted execution environment, which is provided with a Dropout layer, for multiple model predictions. Further, at each model prediction, the Dropout layer is opened.
The model prediction result determination unit 540 is configured to determine a model prediction result of the data to be predicted based on the multiple times of model prediction results. For example, in one example, the model prediction result determination unit 540 may calculate an average of the plurality of times of model prediction results as the model prediction result of the data to be predicted.
The model prediction result transmission unit 550 is configured to transmit the model prediction result to the data provider.
FIG. 6 illustrates another example block diagram of a model prediction apparatus 600 in accordance with embodiments of this disclosure. As shown in fig. 6, the model prediction apparatus 600 includes a data reception unit 610, a data decryption unit 620, a prediction number determination unit 630, a model prediction unit 640, a model prediction result determination unit 650, a variance calculation unit 660, and a model prediction result transmission unit 670.
The data receiving unit 610 is configured to receive encrypted private data to be predicted provided by a data provider.
The data decryption unit 620 is configured to decrypt the received encrypted private data to be predicted in the trusted execution environment.
The prediction number determination unit 630 is configured to determine the number of model predictions according to the calculation power for model prediction, prediction timeliness and/or prediction accuracy requirements required for an application scenario.
The model prediction unit 640 is configured to provide the decrypted private data to be predicted to a neural network model located in the trusted execution environment, which is provided with a Dropout layer, for multiple model predictions. Further, at each model prediction, the Dropout layer is opened.
The model prediction result determination unit 650 is configured to determine a model prediction result of the data to be predicted based on the multiple model prediction results. For example, in one example, the model prediction result determination unit 650 may calculate an average of the multiple model prediction results as the model prediction result of the data to be predicted.
The variance calculation unit 660 is configured to calculate a variance of the multiple times model prediction result.
The model prediction result transmission unit 670 is configured to transmit the model prediction result to the data provider when the calculated variance is not greater than a predetermined threshold. Further, when the calculated variance is greater than the predetermined threshold, the calculated model prediction result is discarded without being transmitted to the data provider.
As described above with reference to fig. 1 to 6, embodiments of a model prediction method and a model prediction apparatus according to embodiments of the present specification are described. The above model prediction means may be implemented by hardware, or may be implemented by software, or a combination of hardware and software.
FIG. 7 illustrates a block diagram of an electronic device 700 for implementing trusted execution environment based neural network model prediction, in accordance with embodiments of the present description.
As shown in fig. 7, electronic device 700 may include at least one processor 710, storage (e.g., non-volatile storage) 720, memory 730, communication interface 740, and internal bus 760, with at least one processor 710, storage 720, memory 730, and communication interface 740 connected together via bus 760. The at least one processor 710 executes at least one computer-readable instruction (i.e., an element described above as being implemented in software) stored or encoded in a computer-readable storage medium.
In one embodiment, stored in the memory are computer-executable instructions that, when executed, cause the at least one processor 710 to: receiving encrypted private data to be predicted provided by a data provider; decrypting the received encrypted private data to be predicted in the trusted execution environment; providing the decrypted data to be predicted to a neural network model in a trusted execution environment for multiple model predictions, wherein the neural network model is provided with a Dropout layer, and the Dropout layer is opened during each model prediction; determining a model prediction result of data to be predicted based on the multiple model prediction results; and sending the model prediction result to the data provider.
It should be appreciated that the computer-executable instructions stored in the memory, when executed, cause the at least one processor 710 to perform the various operations and functions described above in connection with fig. 1-6 in the various embodiments of the present description.
In embodiments of the present description, the electronic device 700 may include, but is not limited to: personal computers, server computers, workstations, desktop computers, laptop computers, notebook computers, mobile computing devices, smart phones, tablet computers, cellular phones, Personal Digital Assistants (PDAs), handheld devices, wearable computing devices, consumer electronics, and so forth.
According to one embodiment, a program product, such as a non-transitory machine-readable medium, is provided. A non-transitory machine-readable medium may have instructions (i.e., elements described above as being implemented in software) that, when executed by a machine, cause the machine to perform various operations and functions as described above in connection with fig. 1-6 in various embodiments of the present specification.
Specifically, a system or apparatus may be provided which is provided with a readable storage medium on which software program code implementing the functions of any of the above embodiments is stored, and causes a computer or processor of the system or apparatus to read out and execute instructions stored in the readable storage medium.
In this case, the program code itself read from the readable medium can realize the functions of any of the above-described embodiments, and thus the machine-readable code and the readable storage medium storing the machine-readable code form part of the present invention.
Examples of the readable storage medium include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROMs, CD-R, CD-RWs, DVD-ROMs, DVD-RAMs, DVD-RWs), magnetic tapes, nonvolatile memory cards, and ROMs. Alternatively, the program code may be downloaded from a server computer or from the cloud via a communications network.
It will be understood by those skilled in the art that various changes and modifications may be made in the above-disclosed embodiments without departing from the spirit of the invention. Accordingly, the scope of the invention should be determined from the following claims.
It should be noted that not all steps and units in the above flows and system structure diagrams are necessary, and some steps or units may be omitted according to actual needs. The execution order of the steps is not fixed, and can be determined as required. The apparatus structures described in the above embodiments may be physical structures or logical structures, that is, some units may be implemented by the same physical entity, or some units may be implemented by a plurality of physical entities, or some units may be implemented by some components in a plurality of independent devices.
In the above embodiments, the hardware units or modules may be implemented mechanically or electrically. For example, a hardware unit, module or processor may comprise permanently dedicated circuitry or logic (such as a dedicated processor, FPGA or ASIC) to perform the corresponding operations. The hardware units or processors may also include programmable logic or circuitry (e.g., a general purpose processor or other programmable processor) that may be temporarily configured by software to perform the corresponding operations. The specific implementation (mechanical, or dedicated permanent circuit, or temporarily set circuit) may be determined based on cost and time considerations.
The detailed description set forth above in connection with the appended drawings describes exemplary embodiments but does not represent all embodiments that may be practiced or fall within the scope of the claims. The term "exemplary" used throughout this specification means "serving as an example, instance, or illustration," and does not mean "preferred" or "advantageous" over other embodiments. The detailed description includes specific details for the purpose of providing an understanding of the described technology. However, the techniques may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described embodiments.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (12)
1. A trusted execution environment based model prediction method, comprising:
receiving encrypted private data to be predicted provided by a data provider;
decrypting the received encrypted private data to be predicted in the trusted execution environment;
providing the decrypted data to be predicted to a neural network model located in the trusted execution environment for multiple model predictions, wherein the neural network model is provided with a Dropout layer, and the Dropout layer is opened during each model prediction;
determining a model prediction result of the data to be predicted based on multiple model prediction results; and
sending the model prediction result to the data provider.
2. The method of claim 1, wherein determining a model prediction result for the data to be predicted based on a plurality of model prediction results comprises:
calculating the average value of the multiple model prediction results as the model prediction result of the data to be predicted.
3. The model prediction method of claim 1, further comprising:
calculating a variance of the multiple model prediction results, and
wherein sending the model prediction result to the data provider includes:
sending the model prediction result to the data provider when the calculated variance is not greater than a predetermined threshold.
4. The model prediction method of claim 1, wherein the trusted execution environment comprises an SGX-based trusted execution environment or a TrustZone-based trusted execution environment.
5. The model prediction method of claim 1, wherein the number of model predictions is determined according to computational power for model prediction, prediction timeliness and/or prediction accuracy requirements required by an application scenario.
6. The model prediction method according to any one of claims 1 to 5, wherein the private data to be predicted includes image data, voice data, or text data, or the private data to be predicted includes user feature data.
7. A trusted execution environment based model prediction apparatus, comprising:
the data receiving unit is used for receiving the encrypted private data to be predicted, which is provided by the data provider;
the data decryption unit is used for decrypting the received encrypted private data to be predicted in the trusted execution environment;
the model prediction unit is used for providing the decrypted private data to be predicted to a neural network model positioned in the trusted execution environment for multiple times of model prediction, the neural network model is provided with a Dropout layer, and the Dropout layer is opened at each time of model prediction;
a model prediction result determination unit that determines a model prediction result of the data to be predicted based on a plurality of times of model prediction results; and
a model prediction result transmitting unit that transmits the model prediction result to the data provider.
8. The model prediction device of claim 7, further comprising:
a variance calculating unit that calculates a variance of the multiple model prediction results, and
wherein the model prediction result transmission unit transmits the model prediction result to the data provider when the calculated variance is not greater than a predetermined threshold.
9. The model prediction device of claim 7, further comprising:
a model prediction number determining unit for determining the number of model predictions according to the computational power available for model prediction and the prediction timeliness and/or prediction accuracy required by the application scenario.
10. A trusted execution environment based model prediction system, comprising:
the data provider device provides private data to be predicted; and
a model prediction device having a trusted execution environment, the trusted execution environment comprising the model prediction apparatus according to any one of claims 7 to 9 and having therein a neural network model, the neural network model being provided with a Dropout layer, and the Dropout layer being opened at each model prediction.
11. An electronic device, comprising:
one or more processors, and
a memory coupled with the one or more processors, the memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-6.
12. A machine-readable storage medium storing executable instructions that, when executed, cause the machine to perform the method of any of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911264309.0A CN111144547A (en) | 2019-12-11 | 2019-12-11 | Neural network model prediction method and device based on trusted execution environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911264309.0A CN111144547A (en) | 2019-12-11 | 2019-12-11 | Neural network model prediction method and device based on trusted execution environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111144547A true CN111144547A (en) | 2020-05-12 |
Family
ID=70518001
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911264309.0A Pending CN111144547A (en) | 2019-12-11 | 2019-12-11 | Neural network model prediction method and device based on trusted execution environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111144547A (en) |
Citations (3)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109416721A (en) * | 2016-06-22 | 2019-03-01 | 微软技术许可有限责任公司 | Secret protection machine learning |
CN107229944A (en) * | 2017-05-04 | 2017-10-03 | 青岛科技大学 | Semi-supervised active identification method based on cognitive information particle |
CN108960036A (en) * | 2018-04-27 | 2018-12-07 | 北京市商汤科技开发有限公司 | 3 D human body attitude prediction method, apparatus, medium and equipment |
Non-Patent Citations (1)
Title |
---|
SYBILW: "Dropout总结 (Dropout Summary)", https://zhuanlan.zhihu.com/p/53936240 *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113095507A (en) * | 2021-04-02 | 2021-07-09 | 支付宝(杭州)信息技术有限公司 | Method, device, equipment and medium for training and predicting machine learning model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111061963B (en) | Machine learning model training and predicting method and device based on multi-party safety calculation | |
WO2021143466A1 (en) | Method and device for using trusted execution environment to train neural network model | |
US10958678B2 (en) | Identity based behavior measurement architecture | |
US12041067B2 (en) | Behavior detection and verification | |
US9514317B2 (en) | Policy-based trusted inspection of rights managed content | |
US9917817B1 (en) | Selective encryption of outgoing data | |
US20150381370A1 (en) | Systems and methods for validated secure data access | |
CN106980793B (en) | TrustZone-based universal password storage and reading method, device and terminal equipment | |
Dinadayalan et al. | Data security issues in cloud environment and solutions | |
Bhatia et al. | Growing aspects of cyber security in e-commerce | |
Ismail et al. | Mobile cloud database security: problems and solutions | |
Asif et al. | Cloud computing in healthcare-investigation of threats, vulnerabilities, future challenges and counter measure | |
Goel et al. | Security issues in cloud computing | |
CN111144547A (en) | Neural network model prediction method and device based on trusted execution environment | |
Xie et al. | Network security analysis for cloud computing environment | |
Mowbray et al. | Protecting personal information in cloud computing | |
US11216565B1 (en) | Systems and methods for selectively encrypting controlled information for viewing by an augmented reality device | |
GR et al. | Investigational analysis of security measures effectiveness in cloud computing: A study | |
Begna et al. | Security Analysis in Context-Aware Distributed Storage and Query Processing in Hybrid Cloud Framework | |
Latha et al. | Secure cloud web application in an industrial environment: a study | |
Arogundade | Addressing Cloud Computing Security and Visibility Issues | |
US20240205249A1 (en) | Protection of cloud storage devices from anomalous encryption operations | |
Sen et al. | Security and privacy issues for cloud computing and its challenges | |
Paudyal et al. | Secure Data Mobility in Cloud Computing for e-Governance Application | |
Manjith et al. | A framework for data and device protection on mobile devices using logic encryption |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200512 |