CN112508200A - Method, apparatus, device, medium, and program for processing machine learning model file - Google Patents
Method, apparatus, device, medium, and program for processing machine learning model file
- Publication number: CN112508200A
- Application number: CN202011511171.2A
- Authority: CN (China)
- Prior art keywords: machine learning, learning model, file, encrypted file, parameter values
- Legal status: Granted
Classifications
- G06N20/00 — Machine learning
- G06F21/602 — Protecting data; providing cryptographic facilities or services
- G06F21/6218 — Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. a local or distributed file system or database
Abstract
The present disclosure provides a method, apparatus, device, medium, and program for processing a file of a machine learning model, and relates to the field of artificial intelligence, in particular to the field of deep learning. The specific implementation scheme is as follows: obtaining a file of a machine learning model, the file including a set of parameter values available for the machine learning model; adjusting at least one parameter value of the set of parameter values based on a predetermined rule to obtain an adjusted file of the machine learning model; and encrypting the adjusted file to obtain an encrypted file of the machine learning model. In this way, the security of the machine learning model can be effectively ensured, and the model is prevented from being obtained by unauthorized users.
Description
Technical Field
The present disclosure relates to the field of artificial intelligence, and more particularly, to a method, apparatus, device, medium, and program for processing a machine learning model file in the field of deep learning.
Background
A neural network model is a mathematical model inspired by the working principles of biological neural networks. Neural network models can process large amounts of data. Neural networks offer large-scale parallelism, distributed storage and processing, self-organization, self-adaptation, and self-learning, which makes them particularly suitable for imprecise and fuzzy information-processing problems in which many factors and conditions must be considered simultaneously. Neural network models are therefore increasingly used in the computer field.
Disclosure of Invention
The present disclosure provides a method, apparatus, device, medium, and program for processing files of a machine learning model.
According to a first aspect of the present disclosure, a method for processing files of a machine learning model is provided. The method includes obtaining a file of a machine learning model, the file of the machine learning model including a set of parameter values available for the machine learning model; adjusting at least one parameter value of a set of parameter values based on a predetermined rule to obtain an adjusted file of a machine learning model; and encrypting the adjusted file of the machine learning model to obtain an encrypted file of the machine learning model.
According to a second aspect of the present disclosure, a method for processing files of a machine learning model is provided. The method includes obtaining an encrypted file of a machine learning model; decrypting the encrypted file of the machine learning model to obtain an adjusted file of the machine learning model, the adjusted file of the machine learning model comprising a set of parameter values; and performing a restoration operation on at least one parameter value in the set of parameter values based on a predetermined rule to obtain a file of the machine learning model.
According to a third aspect of the present disclosure, there is provided an apparatus for processing files of a machine learning model. The apparatus includes a file acquisition module configured to acquire a file of a machine learning model, the file of the machine learning model including a set of parameter values available for the machine learning model; a first adjustment module configured to adjust at least one parameter value of a set of parameter values based on a predetermined rule to obtain an adjusted file of a machine learning model; and a file encryption module configured to encrypt the adjusted file of the machine learning model to obtain an encrypted file of the machine learning model.
According to a fourth aspect of the present disclosure, there is provided an apparatus for processing files of a machine learning model. The apparatus includes a file acquisition module configured to acquire an encrypted file of a machine learning model; a first file decryption module configured to decrypt an encrypted file of the machine learning model to obtain an adjusted file of the machine learning model, the adjusted file of the machine learning model comprising a set of parameter values; and a restoration module configured to perform a restoration operation on at least one parameter value of the set of parameter values based on a predetermined rule to obtain a file of the machine learning model.
According to a fifth aspect of the present disclosure, an electronic device is provided. The electronic device includes at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to the first aspect of the disclosure.
According to a sixth aspect of the present disclosure, an electronic device is provided. The electronic device includes at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to the second aspect of the disclosure.
According to a seventh aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method according to the first aspect of the present disclosure.
According to an eighth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method according to the second aspect of the present disclosure.
According to a ninth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method according to the first aspect of the present disclosure.
According to a tenth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method according to the second aspect of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 illustrates a schematic diagram of an environment 100 in which embodiments of the present disclosure can be implemented;
FIG. 2 illustrates a flow diagram of a method 200 for processing files of a machine learning model according to some embodiments of the present disclosure;
FIG. 3 illustrates a flow diagram of a method 300 for processing files of a machine learning model according to some embodiments of the present disclosure;
FIG. 4 illustrates a flow diagram of a method 400 for processing a file of a machine learning model according to some embodiments of the present disclosure;
FIG. 5 illustrates a block diagram of an apparatus 500 for processing files of a machine learning model according to some embodiments of the present disclosure;
FIG. 6 illustrates a block diagram of an apparatus 600 for processing files of a machine learning model according to some embodiments of the present disclosure; and
FIG. 7 illustrates a block diagram of a device 700 capable of implementing multiple embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In describing embodiments of the present disclosure, the term "include" and its derivatives should be interpreted as open-ended, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like may refer to different or the same objects. Other explicit and implicit definitions may also appear below.
Neural network frameworks for neural network models fall into static graph frameworks and dynamic graph frameworks. When a static graph framework is used, the graph structure of the network must be defined first: the computation graph is constructed on the first run and reused on subsequent runs without being rebuilt, so execution is fast.
A pre-trained model is a model created to solve a class of similar problems. When such a problem is to be solved, the model does not need to be trained from scratch; instead, the pre-trained model can be retrained directly to quickly obtain a corresponding inference model. Most open-source pre-trained models are unencrypted, but they are generally not the best-performing models in the industry. A well-performing pre-trained model typically has a deep network structure and a huge number of network parameters, and requires massive data for training. Producing a good pre-trained model therefore consumes considerable cost, including large amounts of computing resources. If such a model were provided directly to a user, the user could in turn pass it on to the user's own customers, leaving the pre-trained model unprotected.
In addition, the inference model obtained by training a pre-trained model generally needs to be encrypted. When the model performs inference, a decryption operation writes a temporary plaintext model file to disk, and that file is then loaded from disk into memory to complete inference. However, this encryption scheme is relatively simple, and the generated plaintext model file is at risk of leakage. Furthermore, an inference model protected in this way cannot support fine-tuning by a user on the user's own data.
To address at least the above problems, an improved scheme is proposed according to embodiments of the present disclosure. In this scheme, a computing device obtains a file of a machine learning model that includes a set of parameter values available for the machine learning model. The computing device then adjusts at least one parameter value of the set of parameter values based on a predetermined rule to obtain an adjusted file of the machine learning model. The computing device obtains an encrypted file of the machine learning model by encrypting the adjusted file. In this way, the security of the machine learning model can be effectively ensured, and the model is prevented from being obtained by unauthorized users.
FIG. 1 illustrates a schematic diagram of an environment 100 in which various embodiments of the present disclosure can be implemented. The example environment 100 includes a computing device 104.
The computing device 104 can encrypt the file 102 of the machine learning model to generate an encrypted file 110 of the machine learning model, or decrypt the generated encrypted file 110 to recover the file 102 of the machine learning model.
The file 102 of the machine learning model is a file that stores the machine learning model. It contains the program of the machine learning model and a set of parameter values for the parameters of the machine learning model. In some embodiments, the machine learning model is a pre-trained model. In some embodiments, the machine learning model is an inference model. In some embodiments, the set of parameters are weight parameters. In some embodiments, the set of parameters may be any parameters of a machine learning model. The above examples are intended to illustrate the present disclosure and are not intended to limit it.
FIG. 1 shows the computing device 104 receiving the file 102 of the machine learning model, for example from another computing device or from storage connected to the computing device 104. This is merely an example: the computing device 104 may also retrieve the file 102 of the machine learning model from its internal storage.
The computing device 104 adjusts at least one value of the set of parameter values in the file 102 of the machine learning model to generate adjusted parameter values 106, forming an adjusted file of the machine learning model that includes the adjusted parameter values 106.
The computing device 104 then performs an encryption operation on the adjusted file of the machine learning model to generate the encrypted file 110 of the machine learning model. In some embodiments, the encryption algorithm applied to the adjusted file is a symmetric encryption algorithm; in other embodiments, it is an asymmetric encryption algorithm. The above examples are intended to illustrate the present disclosure and are not intended to limit it. One skilled in the art may encrypt the adjusted file of the machine learning model using any suitable encryption algorithm.
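As a concrete illustration of this encryption step, the sketch below encrypts an adjusted model file with a symmetric algorithm. It is a minimal sketch under stated assumptions: the Fernet scheme (from the third-party Python `cryptography` package) and the file names are choices of the example, not part of the disclosure.

```python
# Minimal sketch: symmetric encryption of the adjusted model file.
# Fernet and the file names are illustrative assumptions.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared secret; must be available for decryption
cipher = Fernet(key)

with open("model_adjusted.bin", "rb") as f:
    adjusted_bytes = f.read()

encrypted_bytes = cipher.encrypt(adjusted_bytes)

with open("model_encrypted.bin", "wb") as f:
    f.write(encrypted_bytes)

# The corresponding decryption (here or on another device holding the key):
# adjusted_bytes = cipher.decrypt(encrypted_bytes)
```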
The computing device 104 may also decrypt the encrypted file 110 of the machine learning model to obtain a decrypted file of the machine learning model, which is the adjusted file. After obtaining the encrypted file 110, the computing device 104 decrypts it using the decryption algorithm corresponding to the encryption algorithm that generated the encrypted file, for example the symmetric or asymmetric decryption algorithm corresponding to the symmetric or asymmetric encryption algorithm used.
The computing device 104 then performs a recovery operation on certain parameter values in the decrypted file of the machine learning model. The computing device 104 selects the at least one parameter value in the same manner in which it was selected when the encrypted file 110 was generated, and then performs on it the operation inverse to the adjustment, obtaining recovered parameter values 108. For example, if a predetermined value was added to a parameter value during adjustment, that value is subtracted during recovery; if the parameter value was multiplied by a predetermined factor, it is divided by that factor during recovery. The above examples are intended to illustrate the present disclosure and are not intended to limit it.
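As a hypothetical instance of such a rule pair, the sketch below adds a fixed offset and multiplies by a fixed factor during adjustment, and inverts both during recovery; the constants are illustrative only.

```python
# Sketch of a predetermined rule and its inverse. OFFSET and FACTOR are
# illustrative; a real deployment would keep the rule secret.
OFFSET = 0.5
FACTOR = 3.0

def adjust(value: float) -> float:
    return (value + OFFSET) * FACTOR   # producing adjusted parameter values 106

def recover(value: float) -> float:
    return value / FACTOR - OFFSET     # producing recovered parameter values 108

w = 0.1234
assert abs(recover(adjust(w)) - w) < 1e-12   # round trip restores the weight
```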
After the adjusted parameter values are restored, the file 102 of the machine learning model is regenerated for use by the user.
FIG. 1 shows the encryption and decryption operations on the machine learning model file being performed by the same computing device 104; this is only an example and not a limitation of the present disclosure. Those skilled in the art may implement the above functions on different computing devices as desired. In some embodiments, the computing device that encrypts the file 102 of the machine learning model and the computing device that decrypts the encrypted file 110 are two different computing devices.
In some implementations, after generating the encrypted file 110 of the machine learning model, the computing device 104 also determines an identification value of the encrypted file. The identification value may be used to verify the encrypted file and determine whether it has been modified. In some embodiments, the identification value is determined using a message digest ("MD") algorithm, such as the MD5 or MD4 algorithm. In some embodiments, the identification value may be determined using any suitable generation method that uniquely identifies the encrypted machine learning model file. The above examples are intended to illustrate the present disclosure and are not intended to limit it.
In some embodiments, the computing device also verifies the encrypted machine learning model file 110 before decrypting it. For example, the identification value obtained when the encrypted file 110 was generated is compared with an identification value recalculated over the encrypted file to determine whether the file has been altered. If it has been altered, the encrypted machine learning model file is not processed; if it has not, it is decrypted.
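A minimal sketch of this verify-then-decrypt gate follows, using the standard-library hashlib. The helper names are illustrative, and `cipher` is assumed to be the decryption object for whatever algorithm was used (e.g., the Fernet object from the earlier sketch).

```python
# Sketch: recompute the MD5 identification value of the encrypted file
# and decrypt only if it matches the value stored at encryption time.
import hashlib

def md5_of_file(path: str) -> str:
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_then_decrypt(path: str, stored_id: str, cipher):
    if md5_of_file(path) != stored_id:
        return None                      # modified: do not process the file
    with open(path, "rb") as f:
        return cipher.decrypt(f.read())  # unmodified: proceed to decrypt
```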
In some embodiments, after obtaining the file 102 of the machine learning model, the computing device 104 may train it further with sample data to obtain an inference model. The inference model may then be encrypted using a symmetric or asymmetric encryption algorithm, and an identification value usable for verification may be generated for the encrypted file and stored at the head of the encrypted inference-model file to facilitate verification of the encrypted inference model. Version information of the encryption may also be stored in the file header. Upon obtaining the encrypted inference model, the identification value is read from the file header to verify the model, which is then decrypted using a decryption process that is the reverse of the encryption process.
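One possible header layout for such a file is sketched below: a fixed-size version field and the 16-byte MD5 digest precede the ciphertext. The field sizes and ordering are assumptions of the example, not specified by the disclosure.

```python
# Sketch: store version info and the identification value at the head of
# the encrypted inference-model file. The layout is illustrative.
import hashlib
import struct

HEADER_FMT = "!I16s"                      # 4-byte version + 16-byte MD5 digest
HEADER_LEN = struct.calcsize(HEADER_FMT)  # 20 bytes

def pack_model(ciphertext: bytes, version: int = 1) -> bytes:
    digest = hashlib.md5(ciphertext).digest()
    return struct.pack(HEADER_FMT, version, digest) + ciphertext

def unpack_model(blob: bytes) -> bytes:
    version, digest = struct.unpack(HEADER_FMT, blob[:HEADER_LEN])
    # version is available here for compatibility checks
    ciphertext = blob[HEADER_LEN:]
    if hashlib.md5(ciphertext).digest() != digest:
        raise ValueError("encrypted inference model was modified")
    return ciphertext   # verified; ready for the reverse (decryption) step
```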
In this way, the security of the machine learning model can be effectively ensured, the encrypted model can be trained to obtain an encrypted model with updated parameters, and both the security and the accuracy of the model are improved.
FIG. 1 above illustrates a schematic diagram of the environment 100 in which various embodiments of the present disclosure can be implemented. A flow diagram of a method 200 for processing files of a machine learning model according to some embodiments of the present disclosure is described below in conjunction with FIG. 2. Method 200 in FIG. 2 is performed by the computing device 104 in FIG. 1 or any suitable computing device.
At block 202, a file of a machine learning model is obtained that includes a set of parameter values that are available for the machine learning model. For example, the computing device 104 obtains a file 102 of the machine learning model. The file 102 of the machine learning model includes a program of the machine learning model and a set of parameter values for parameters of the machine learning model. In some embodiments, the machine learning model is a pre-trained model. In some embodiments, the machine learning model is an inference model. The above examples are intended to be illustrative of the present disclosure, and are not intended to be limiting of the present disclosure.
At block 204, at least one parameter value in a set of parameter values is adjusted based on a predetermined rule to obtain an adjusted file of the machine learning model. For example, the computing device 104 adjusts at least one parameter value of a set of parameter values based on a predetermined rule to obtain an adjusted file of the machine learning model.
In some embodiments, the computing device 104 determines the at least one parameter value from the set of parameter values based on the positional order of the parameter values in the set, for example selecting the 1st and/or 3rd parameter value. The computing device 104 then adjusts the size of the at least one parameter value based on the predetermined rule. In one example, the predetermined rule is an obfuscation algorithm, which the computing device 104 uses to adjust the size of the parameter values. In another example, the predetermined rule increases or decreases the parameter value by a predetermined value, or scales it up or down by a predetermined factor. In some embodiments, each of the at least one parameter values is changed by the same amount; in other embodiments, by different amounts. Alternatively or additionally, the predetermined rule is known to both the encrypting side and the decrypting side, so that upon decryption the parameter values can be recovered based on the predetermined rule. The above examples are intended to illustrate the present disclosure and are not intended to limit it. Those skilled in the art can adjust the sizes of the parameters as desired. In this way, the machine learning model file is made more secure and file leakage is prevented.
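For illustration, the sketch below picks the 1st and 3rd values of the flattened parameter set (matching the example above) and applies a simple offset-and-scale rule; the positions and the rule are assumptions of the example, not fixed by the method.

```python
# Sketch: adjust parameter values selected by their position in the set,
# then recover them with the inverse rule. Positions/constants illustrative.
SELECTED = (0, 2)            # the 1st and 3rd parameter values
OFFSET, FACTOR = 0.5, 3.0    # same style of rule as the earlier sketch

def adjust_parameters(values: list[float]) -> list[float]:
    out = list(values)
    for i in SELECTED:
        out[i] = (out[i] + OFFSET) * FACTOR
    return out

def recover_parameters(values: list[float]) -> list[float]:
    out = list(values)
    for i in SELECTED:
        out[i] = out[i] / FACTOR - OFFSET
    return out

params = [0.12, -0.58, 0.93, 0.04]
restored = recover_parameters(adjust_parameters(params))
assert all(abs(a - b) < 1e-12 for a, b in zip(restored, params))
```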
In some embodiments, a suitable algorithm may be employed to select which parameter values of the set to adjust, and the selected values are then adjusted. The above examples are intended to illustrate the present disclosure and are not intended to limit it. Those skilled in the art may select the at least one parameter value using any suitable selection method as required.
At block 206, the adjusted file of the machine learning model is encrypted to obtain an encrypted file of the machine learning model. For example, the computing device 104 encrypts the adjusted file of the machine learning model to obtain an encrypted file 110 of the machine learning model.
In some embodiments, the adjusted file of the machine learning model is encrypted using a symmetric encryption algorithm or an asymmetric encryption algorithm. In some embodiments, the adjusted file may be encrypted using any suitable encryption algorithm. The above examples are intended to illustrate the present disclosure and are not intended to limit it. In this way, the security level of the machine learning model file can be improved, ensuring that the file is not obtained by unauthorized users.
In some embodiments, an identification value of the encrypted file is generated from the encrypted file of the machine learning model for use in verifying the encrypted file. For example, the MD5 value of the encrypted file is calculated using the MD5 algorithm, and this value can later be used to check whether the encrypted file has been modified. In this way, whether the encrypted file has been modified can be determined quickly.
In this way, the security of the machine learning model can be effectively ensured, and the model is prevented from being obtained by unauthorized users.
A flowchart of a method 200 for processing files of a machine learning model according to some embodiments of the present disclosure is described above in connection with FIG. 2. A flow diagram of a method 300 for processing files of a machine learning model is described below in conjunction with FIG. 3. Method 300 in FIG. 3 is performed by the computing device 104 in FIG. 1 or any suitable computing device.
FIG. 2 above describes the process of encrypting the file of the machine learning model; FIG. 3 below describes the process of decrypting the encrypted file of the machine learning model.
At block 302, an encrypted file of a machine learning model is obtained. For example, the computing device 104 obtains an encrypted file 110 of the machine learning model. In some embodiments, the encrypted file 110 of the machine learning model may be obtained by the computing device 104 from other computing devices. In some embodiments, the computing device 104 retrieves the encrypted file 110 of the machine learning model from its storage. The above examples are intended to be illustrative of the present disclosure, and are not intended to be limiting of the present disclosure.
At block 304, the encrypted file of the machine learning model is decrypted to obtain an adjusted file of the machine learning model, the adjusted file of the machine learning model including a set of parameter values. For example, the computing device 104 decrypts the encrypted file 110 of the machine learning model, the decrypted file being an adjusted file of the machine learning model, the adjusted file of the machine learning model including a set of parameter values.
In some embodiments, the computing device 104 obtains a first identification value corresponding to the encrypted file of the machine learning model, for example the MD5 value obtained when the encrypted file was generated. The computing device 104 then generates a second identification value from the encrypted file 110, for example by reprocessing the encrypted file with the MD5 algorithm to produce a new MD5 value, and compares the first identification value with the second. If they are equal, indicating that the encrypted file has not been modified, the encrypted file 110 of the machine learning model is decrypted; if not, indicating that the file has been modified, no operation is performed on it. In this way, whether the encrypted file has been modified can be verified quickly.
In some embodiments, the encrypted file of the machine learning model is decrypted using a symmetric decryption algorithm or an asymmetric decryption algorithm. Alternatively or additionally, the symmetric or asymmetric decryption algorithm corresponds to the symmetric or asymmetric encryption algorithm used when encrypting the file. In this way, the security of the files of the machine learning model can be ensured.
At block 306, a recovery operation is performed on at least one parameter value in the set of parameter values based on a predetermined rule to obtain a file of the machine learning model. For example, the computing device 104 performs a recovery operation on at least one parameter value in a set of parameter values based on a predetermined rule to obtain a file of the machine learning model.
In some embodiments, the computing device 104 determines the at least one parameter value from the set of parameter values based on the positional order of the parameter values in the set, selecting them according to the same selection method used during encryption so as to find the at least one parameter value that was modified during encryption. The computing device then performs the recovery operation on the at least one parameter value based on the predetermined rule, which corresponds to the rule used during encryption: the operation inverse to the adjustment is performed. For example, if a predetermined value was added to a parameter value during encryption, that value is now subtracted. In this way, the adjusted parameters can be located quickly, improving data-processing efficiency.
In some embodiments, the computing device 104 may also train the machine learning model corresponding to the file with sample data to obtain a target machine learning model. For example, when the machine learning model is a pre-trained model, an inference model can be obtained by training on samples. The computing device 104 then determines the file of the target machine learning model, which includes at least the program and the parameter values of the target machine learning model. The computing device 104 encrypts this file to generate an encrypted file of the target machine learning model, and then computes over the encrypted file to generate an identification value used to verify it, for example its MD5 value. Alternatively or additionally, the security of the target machine learning model can be increased by adjusting parameters during its encryption and decryption, as described above. In this way, the security of the trained target machine learning model can be ensured.
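A hypothetical end-to-end flow for this embodiment is sketched below. Everything here is illustrative: `fine_tune` stands in for whatever training routine the framework provides, the model "file" is just a pickled list of weights, and Fernet again plays the role of the symmetric cipher.

```python
# Hypothetical flow: fine-tune a recovered pre-trained model, then encrypt
# the resulting target (inference) model and compute its identification value.
import hashlib
import pickle
from cryptography.fernet import Fernet

cipher = Fernet(Fernet.generate_key())

def fine_tune(weights, samples):
    # placeholder for a real training loop over the sample data
    return [w + 0.001 * len(samples) for w in weights]

pretrained = [0.12, -0.58, 0.93]            # recovered parameter values
target = fine_tune(pretrained, samples=[1, 2, 3])

target_file = pickle.dumps(target)           # file of the target model
encrypted_file = cipher.encrypt(target_file)
identification = hashlib.md5(encrypted_file).hexdigest()  # for verification
```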
In some embodiments, the computing device 104 encrypts the file of the target machine learning model using a symmetric encryption algorithm or an asymmetric encryption algorithm. In this way, the security of the target machine learning model can be ensured.
In some embodiments, the computing device 104 may also decrypt the encrypted file of the target machine learning model, e.g., using a symmetric or asymmetric decryption algorithm. Alternatively or additionally, before decryption, the identification value generated when the encrypted file was created is compared with an identification value recomputed over the encrypted file of the target machine learning model to verify whether it has been modified. If the two identification values are the same, the encrypted file has not been modified and the decryption operation is performed; otherwise, the encrypted file is not operated on.
In this way, the security of the machine learning model can be effectively ensured, the encrypted model can be trained to obtain an encrypted model with updated parameters, and both the security and the accuracy of the model are improved.
A method 300 for processing files of a machine learning model is described above in connection with FIG. 3. An example method 400 for processing a machine learning model file is described below in conjunction with FIG. 4. Method 400 in FIG. 4 is performed by the computing device 104 in FIG. 1 or any suitable computing device.
In FIG. 4, the file of the machine learning model is a file of a pre-trained model. At 402, the computing device 104 obtains the file of the pre-trained model. At 404, the computing device 104 performs data-content obfuscation on the model parameters in the file, for example adjusting the size of at least one model parameter value so that the pre-trained model does not function properly with the adjusted parameter values. At block 406, the obfuscated file of the pre-trained model is encrypted using a symmetric or asymmetric encryption algorithm. At block 408, the encrypted pre-trained-model file undergoes a model-verification process that generates an identification value of the encrypted file, which can be used to verify it; for example, the identification value is the MD5 value of the encrypted file of the pre-trained model. The encrypted file of the pre-trained model is then obtained at block 410.
As shown in FIG. 4, the encrypted file of the pre-trained model may be decrypted by reversing the encryption process. After the encrypted file is obtained at block 410, the model check at block 408 may be used to verify whether it has been modified: the identification value recorded when the encrypted file was produced is compared with an identification value recalculated over the file. If the file has been modified, it is not processed. If it has not, a decryption operation is performed at block 406 using the corresponding symmetric or asymmetric decryption algorithm. The adjusted parameter values in the decrypted file are then de-obfuscated (the inverse of the data-content obfuscation at block 404), restoring the parameter values changed by the obfuscation to their original values, and the file of the pre-trained model is obtained at block 402.
After the file of the pre-trained model is obtained, it may be trained to obtain a file of an inference model, as shown at block 412. The computing device 104 then encrypts the file of the inference model using a symmetric or asymmetric encryption algorithm at block 414. At block 416, the computing device 104 performs a model-verification process on the encrypted file of the inference model, generating an identification value that may be used to check whether the encrypted file has been modified; for example, the MD5 value of the encrypted file of the inference model is generated. The encrypted file of the inference model is then obtained at block 418. The encrypted file 410 of the pre-trained model and the encrypted file 418 of the inference model are then available to a deep learning framework 420.
The computing device 104 may also decrypt the encrypted inference-model file by reversing its encryption process: verification is performed first, and decryption is performed after the verification succeeds, yielding a usable inference-model file.
In this way, the security of the machine learning model can be effectively ensured, the encrypted model can be trained to obtain an encrypted model with updated parameters, and both the security and the accuracy of the model are improved.
FIG. 5 shows a schematic block diagram of an apparatus 500 for processing files of a machine learning model according to an embodiment of the present disclosure. As shown in FIG. 5, the apparatus 500 includes a file acquisition module 502 configured to acquire a file of a machine learning model, the file including a set of parameter values available for the machine learning model. The apparatus 500 further comprises a first adjustment module 504 configured to adjust at least one parameter value of the set of parameter values based on a predetermined rule to obtain an adjusted file of the machine learning model. The apparatus 500 also includes a file encryption module 506 configured to encrypt the adjusted file of the machine learning model to obtain an encrypted file of the machine learning model.
In some embodiments, the first adjustment module 504 includes a parameter value determination module configured to determine at least one parameter value from a set of parameter values based on a positional order of the parameter values in the set of parameter values; and a second adjustment module configured to adjust a size of the at least one parameter value based on a predetermined rule.
In some embodiments, the file encryption module 506 comprises an adjusted-file encryption module configured to encrypt the adjusted file of the machine learning model using a symmetric encryption algorithm or an asymmetric encryption algorithm.
In some embodiments, the apparatus 500 further comprises an identification value generation module configured to generate an identification value of the encrypted file for use in verifying the encrypted file based on the encrypted file of the machine learning model.
FIG. 6 shows a schematic block diagram of an apparatus 600 for processing files of a machine learning model according to an embodiment of the present disclosure. As shown in fig. 6, the apparatus 600 includes a file acquisition module 602 configured to acquire an encrypted file of a machine learning model. The apparatus 600 further includes a first file decryption module 604 configured to decrypt the encrypted file of the machine learning model to obtain an adjusted file of the machine learning model, the adjusted file of the machine learning model including a set of parameter values. The apparatus 600 further comprises a restoration module 606 configured to perform a restoration operation on at least one parameter value of the set of parameter values based on a predetermined rule to obtain a file of the machine learning model.
In some embodiments, the first file decryption module 604 includes a first identification value acquisition module configured to acquire a first identification value corresponding to an encrypted file of the machine learning model; a second identification value generation module configured to generate a second identification value of the encrypted file of the machine learning model based on the encrypted file of the machine learning model; and a comparison determination module configured to decrypt the encrypted file of the machine learning model if it is determined that the first identification value is equal to the second identification value.
In some embodiments, the first file decryption module 604 comprises a second file decryption module configured to decrypt the encrypted file of the machine learning model using a symmetric decryption algorithm or an asymmetric decryption algorithm.
In some embodiments, the recovery module 606 includes a parameter value determination module configured to determine at least one parameter value from a set of parameter values based on a positional order of the parameter values in the set of parameter values; a restoration operation execution module configured to execute a restoration operation on the at least one parameter value based on a predetermined rule.
In some embodiments, the apparatus 600 further comprises a training module configured to train a machine learning model corresponding to a file of the machine learning model with sample data to obtain a target machine learning model; a file determination module configured to determine a file of the target machine learning model; a first file encryption module configured to encrypt the file of the target machine learning model to generate an encrypted file of the target machine learning model; and an identification value generation module configured to generate, based on the encrypted file of the target machine learning model, an identification value used to verify the encrypted file of the target machine learning model.
In some embodiments, the first file encryption module includes a second file encryption module configured to encrypt the file of the target machine learning model using a symmetric encryption algorithm or an asymmetric encryption algorithm.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Device 700 may be used to implement the computing device 104 in FIG. 1. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in FIG. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a read-only memory (ROM) 702 or loaded from a storage unit 708 into a random access memory (RAM) 703. The RAM 703 may also store various programs and data required for the operation of the device 700. The computing unit 701, the ROM 702, and the RAM 703 are connected to one another by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may be implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (26)
1. A method for processing files of a machine learning model, comprising:
obtaining a file of the machine learning model, the file of the machine learning model including a set of parameter values available for the machine learning model;
adjusting at least one parameter value of the set of parameter values based on a predetermined rule to obtain an adjusted file of the machine learning model; and
encrypting the adjusted file of the machine learning model to obtain an encrypted file of the machine learning model.
2. The method of claim 1, wherein adjusting the at least one parameter value comprises:
determining the at least one parameter value from the set of parameter values based on an order of positions of the parameter values in the set of parameter values; and
adjusting a size of the at least one parameter value based on a predetermined rule.
3. The method of claim 1, wherein encrypting the adapted file of the machine learning model comprises:
encrypting the adjusted file of the machine learning model using a symmetric encryption algorithm or an asymmetric encryption algorithm.
4. The method of claim 1, further comprising:
based on an encrypted file of the machine learning model, generating an identification value of the encrypted file for use in verifying the encrypted file.
5. A method for processing a machine learning model, comprising:
obtaining an encrypted file of a machine learning model;
decrypting an encrypted file of the machine learning model to obtain an adjusted file of the machine learning model, the adjusted file of the machine learning model comprising a set of parameter values; and
performing a recovery operation on at least one parameter value of the set of parameter values based on a predetermined rule to obtain a file of the machine learning model.
6. The method of claim 5, wherein decrypting an encrypted file of the machine learning model comprises:
obtaining a first identification value corresponding to an encrypted file of the machine learning model;
generating a second identification value of the encrypted file of the machine learning model based on the encrypted file of the machine learning model; and
decrypting the encrypted file of the machine learning model if it is determined that the first identification value is equal to the second identification value.
7. The method of claim 5, wherein decrypting an encrypted file of the machine learning model comprises:
decrypting the encrypted file of the machine learning model using a symmetric decryption algorithm or an asymmetric decryption algorithm.
8. The method of claim 5, wherein performing a recovery operation on the at least one parameter value comprises:
determining the at least one parameter value from the set of parameter values based on an order of positions of the parameter values in the set of parameter values; and
performing a restoration operation on the at least one parameter value based on a predetermined rule.
9. The method of claim 5, further comprising:
training a machine learning model corresponding to a file of the machine learning model using sample data to obtain a target machine learning model;
determining a file of the target machine learning model;
encrypting the file of the target machine learning model to generate an encrypted file of the target machine learning model; and
based on the encrypted file of the target machine learning model, generating an identification value of the encrypted file of the target machine learning model for use in verifying the encrypted file of the target machine learning model.
10. The method of claim 9, wherein encrypting the file of the target machine learning model comprises:
encrypting the file of the target machine learning model using a symmetric encryption algorithm or an asymmetric encryption algorithm.
11. An apparatus for processing files of a machine learning model, comprising:
a file acquisition module configured to acquire a file of the machine learning model, the file of the machine learning model including a set of parameter values available to the machine learning model;
a first adjustment module configured to adjust at least one parameter value of the set of parameter values based on a predetermined rule to obtain an adjusted file of the machine learning model; and
a file encryption module configured to encrypt the adjusted file of the machine learning model to obtain an encrypted file of the machine learning model.
12. The apparatus of claim 11, wherein the first adjustment module comprises:
a parameter value determination module configured to determine the at least one parameter value from the set of parameter values based on a positional order of the parameter values in the set of parameter values; and
a second adjustment module configured to adjust a size of the at least one parameter value based on a predetermined rule.
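For symmetry with the recovery sketch under claim 8, here is what the claim-12 pair of modules could do on the encryption side under the same invented STEP/OFFSET rule: select parameter values by position, then adjust their size.

```python
STEP = 7       # assumed positional rule (must match the recovery side)
OFFSET = 0.5   # assumed size adjustment

def adjust_parameter_values(values: list[float]) -> list[float]:
    """Select every STEP-th value by position and adjust its size."""
    adjusted = list(values)
    for i in range(0, len(adjusted), STEP):
        adjusted[i] += OFFSET   # forward adjustment, inverted at recovery time
    return adjusted
```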
13. The apparatus of claim 11, wherein the file encryption module comprises:
an adjusted file encryption module configured to encrypt an adjusted file of the machine learning model using a symmetric encryption algorithm or an asymmetric encryption algorithm.
14. The apparatus of claim 11, further comprising:
an identification value generation module configured to generate, based on the encrypted file of the machine learning model, an identification value of the encrypted file for use in verifying the encrypted file.
15. An apparatus for processing a machine learning model, comprising:
a file acquisition module configured to acquire an encrypted file of a machine learning model;
a first file decryption module configured to decrypt an encrypted file of the machine learning model to obtain an adjusted file of the machine learning model, the adjusted file of the machine learning model comprising a set of parameter values; and
a recovery module configured to perform a recovery operation on at least one parameter value of the set of parameter values based on a predetermined rule to obtain a file of the machine learning model.
16. The apparatus of claim 15, wherein the first file decryption module comprises:
a first identification value acquisition module configured to acquire a first identification value corresponding to an encrypted file of the machine learning model;
a second identification value generation module configured to generate a second identification value of the encrypted file of the machine learning model based on the encrypted file of the machine learning model; and
a comparison determination module configured to decrypt the encrypted file of the machine learning model if it is determined that the first identification value is equal to the second identification value.
17. The apparatus of claim 15, wherein the first file decryption module comprises:
a second file decryption module configured to decrypt the encrypted file of the machine learning model using a symmetric decryption algorithm or an asymmetric decryption algorithm.
18. The apparatus of claim 15, wherein the recovery module comprises:
a parameter value determination module configured to determine the at least one parameter value from the set of parameter values based on a positional order of the parameter values in the set of parameter values; and
a recovery operation execution module configured to perform a recovery operation on the at least one parameter value based on the predetermined rule.
19. The apparatus of claim 15, further comprising:
a training module configured to train a machine learning model corresponding to a file of the machine learning model with sample data to obtain a target machine learning model;
a file determination module configured to determine a file of the target machine learning model;
a first file encryption module configured to encrypt a file of the target machine learning model to generate an encrypted file of the target machine learning model; and
an identification value generation module configured to generate an identification value of the encrypted file of the target machine learning model based on the encrypted file of the target machine learning model for use in verifying the encrypted file of the target machine learning model.
20. The apparatus of claim 19, wherein the first file encryption module comprises:
a second file encryption module configured to encrypt a file of the target machine learning model using a symmetric encryption algorithm or an asymmetric encryption algorithm.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.
22. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 5-10.
23. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-4.
24. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 5-10.
25. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-4.
26. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 5-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011511171.2A CN112508200B (en) | 2020-12-18 | 2020-12-18 | Method, apparatus, device, medium, and program for processing machine learning model file |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112508200A true CN112508200A (en) | 2021-03-16 |
CN112508200B CN112508200B (en) | 2024-01-16 |
Family
ID=74922613
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011511171.2A Active CN112508200B (en) | 2020-12-18 | 2020-12-18 | Method, apparatus, device, medium, and program for processing machine learning model file |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112508200B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2018104438A3 (en) * | 2018-02-06 | 2019-08-06 | ||
CN109815710A (en) * | 2018-12-14 | 2019-05-28 | 开放智能机器(上海)有限公司 | A kind of guard method of intelligent algorithm model file |
CN110062014A (en) * | 2019-06-11 | 2019-07-26 | 苏州思必驰信息科技有限公司 | The encryption and decryption method and system of network model |
CN110619220A (en) * | 2019-08-09 | 2019-12-27 | 北京小米移动软件有限公司 | Method and device for encrypting neural network model and storage medium |
CN111241559A (en) * | 2020-01-07 | 2020-06-05 | 深圳壹账通智能科技有限公司 | Training model protection method, device, system, equipment and computer storage medium |
CN111898135A (en) * | 2020-02-12 | 2020-11-06 | 北京京东尚科信息技术有限公司 | Data processing method, data processing apparatus, computer device, and medium |
CN111859415A (en) * | 2020-06-18 | 2020-10-30 | 上海艾麒信息科技有限公司 | Neural network model encryption system and method |
Non-Patent Citations (1)
Title |
---|
曹贤龙 (Cao Xianlong): "Method for resisting power-analysis attacks on cryptographic chips based on an AlexNet convolutional neural network", 电子制作 *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113448545A (en) * | 2021-06-23 | 2021-09-28 | 北京百度网讯科技有限公司 | Method, apparatus, storage medium, and program product for machine learning model servitization |
CN113448545B (en) * | 2021-06-23 | 2023-08-08 | 北京百度网讯科技有限公司 | Method, apparatus, storage medium and program product for machine learning model servitization |
CN114218166A (en) * | 2021-11-04 | 2022-03-22 | 北京百度网讯科技有限公司 | Data processing method and device, electronic equipment and readable storage medium |
CN115344886A (en) * | 2022-07-22 | 2022-11-15 | 西安深信科创信息技术有限公司 | Model encryption method, model decryption method and model decryption device |
CN115344886B (en) * | 2022-07-22 | 2023-11-24 | 安徽深信科创信息技术有限公司 | Model encryption method, model decryption method and device |
CN115146237A (en) * | 2022-09-05 | 2022-10-04 | 南湖实验室 | Deep learning model protection method based on confidential calculation |
Also Published As
Publication number | Publication date |
---|---|
CN112508200B (en) | 2024-01-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112508200B (en) | Method, apparatus, device, medium, and program for processing machine learning model file | |
US20160359921A1 (en) | Secure local web application data manager | |
AU2019232799A1 (en) | Service processing method and apparatus | |
WO2023065632A1 (en) | Data desensitization method, data desensitization apparatus, device, and storage medium | |
CN111783124B (en) | Data processing method, device and server based on privacy protection | |
CN112394974B (en) | Annotation generation method and device for code change, electronic equipment and storage medium | |
CN113435583A (en) | Countermeasure generation network model training method based on federal learning and related equipment thereof | |
CN110266484B (en) | Data encryption method, device, equipment and medium | |
CN111783038A (en) | Risk assessment method, device, equipment, system and medium based on intelligent learning | |
CN113794706B (en) | Data processing method and device, electronic equipment and readable storage medium | |
US11366893B1 (en) | Systems and methods for secure processing of data streams having differing security level classifications | |
US11290475B2 (en) | System for technology resource centric rapid resiliency modeling | |
US11394733B2 (en) | System for generation and implementation of resiliency controls for securing technology resources | |
CN114884714B (en) | Task processing method, device, equipment and storage medium | |
CN115442164A (en) | Multi-user log encryption and decryption method, device, equipment and storage medium | |
CN113992345B (en) | Webpage sensitive data encryption and decryption method and device, electronic equipment and storage medium | |
CN113609156B (en) | Data query and write method and device, electronic equipment and readable storage medium | |
CN112559497B (en) | Data processing method, information transmission method, device and electronic equipment | |
CN114398678A (en) | Registration verification method and device for preventing electronic file from being tampered, electronic equipment and medium | |
CN113761576A (en) | Privacy protection method and device, storage medium and electronic equipment | |
CN113591127B (en) | Data desensitization method and device | |
CN115758368B (en) | Prediction method and device for malicious cracking software, electronic equipment and storage medium | |
CN115150196A (en) | Ciphertext data-based anomaly detection method, device and equipment under normal distribution | |
CN118503695A (en) | Secondary number processing method and device based on improved naive Bayes classifier | |
CN118194039A (en) | Training method, data processing method, device, electronic equipment, storage medium and program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |