WO2023040390A1 - A model protection method and device - Google Patents
A model protection method and device
- Publication number
- WO2023040390A1, PCT/CN2022/099851
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- operator
- data
- execution
- processing logic
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/08—Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
- H04L9/088—Usage controlling of secret information, e.g. techniques for restricting cryptographic keys to pre-authorized uses, different access levels, validity of crypto-period, different key- or password length, or different strong and weak cryptographic algorithms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6209—Protecting access to data via a platform, e.g. using keys or access control rules to a single file or object, e.g. in a secure envelope, encrypted and accessed using a key, or with access control rules appended to the object itself
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
Definitions
- the present application relates to the field of artificial intelligence, in particular to a model protection method and device.
- AI: artificial intelligence
- DNN: Deep Neural Network
- Machine learning service providers offer training platforms and query interfaces for using models, and users can query instances through these interfaces.
- FIG. 1 shows a schematic diagram of a model protection method in the related art.
- when the model owner provides the AI application to the user, it also provides a hardware dongle.
- the AI application first obtains the encrypted AI model (which can be stored in the form of a file) and reads the key and authorization information from the hardware dongle. After the authorization information passes authentication, the AI application can use the key to decrypt the encrypted AI model, obtain the decrypted AI model, and store it in memory.
- the AI accelerator card loads the decrypted AI model from the memory for inference or model incremental training.
- the hardware dongle is deployed on the host, which increases the cost and complexity of deployment. How to achieve model protection and reduce system cost without adding additional components has become an urgent problem to be solved.
- the embodiment of the present application provides a model protection method. The method includes: obtaining a plurality of execution operators from a first model, where the plurality of execution operators include a first operator, and the first operator is used to indicate decryption processing logic; and executing the plurality of execution operators in sequence according to their hierarchical relationship, which includes: when execution reaches the first operator, decrypting first data under the first operator based on the decryption processing logic to obtain second data, and executing one or more execution operators arranged after the first operator based on the second data.
- data decryption is implemented in a purely software manner; that is, the model is protected without adding additional components. This not only reduces hardware cost but also lowers the requirements on the scale of the operating environment and on the algorithm.
- the second data is: at least one weight; or at least one execution operator; or at least one weight and at least one execution operator.
- the parameters of the model to be protected can be flexibly selected, which improves flexibility and user-friendliness.
- the decrypting of the first data based on the decryption processing logic to obtain the second data includes: in response to a key returned based on a key acquisition request, using the key to decrypt the first data to obtain the second data.
- the key is acquired in an interactive manner to realize model protection.
- the first operator is used to indicate the address of the storage space where the first data is located. In this case, decrypting the first data under the first operator based on the decryption processing logic when execution reaches the first operator includes: when execution reaches the address, decrypting the data stored at the address based on the decryption processing logic to obtain the second data.
- the stored data is directly decrypted, which improves efficiency.
- the method further includes: after the plurality of execution operators have been executed, deleting the first model, the plurality of execution operators, and the second data.
- the first model is a training model or an inference model.
- the method further includes: returning an inference result if the first model is an inference model; or returning the trained model if the first model is a training model.
- the decryption processing logic is symmetric decryption processing logic or asymmetric decryption processing logic.
- the user can choose the appropriate decryption processing logic according to the actual needs, which improves the flexibility and user-friendliness, realizes personalized customization, and further improves the security of the model.
- the embodiment of the present application provides a model protection method, including: encrypting a first area in a second model; adding a first operator to the calculation graph of the second model according to the first area to obtain a first model, where the first operator is used to indicate decryption processing logic; and sending the first model.
- the data in the first area is: at least one weight; or at least one execution operator; or at least one weight and at least one execution operator.
- the method further includes: in response to a key acquisition request from the device processor, authenticating the device processor; and returning a key to the device processor if the authentication is passed.
- the key acquisition request includes the identifier of the first model and the identifier of the device processor
- authenticating the device processor includes: authenticating the device processor based on the identifier of the first model and the identifier of the device processor.
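The authentication step described above might look like the following sketch. The class name, registry layout, and key values are hypothetical illustrations, not part of the application:

```python
# Hypothetical sketch of the key-acquisition flow: the host side authenticates
# a device processor by the (model id, device id) pair before releasing the key.
# All names and values here are illustrative assumptions.

class KeyServer:
    def __init__(self):
        # model_id -> (set of authorized device ids, decryption key bytes)
        self.registry = {
            "model-001": ({"device-A"}, b"secret-key-001"),
        }

    def handle_key_request(self, model_id, device_id):
        entry = self.registry.get(model_id)
        if entry is None:
            return None            # unknown model: authentication fails
        allowed_devices, key = entry
        if device_id not in allowed_devices:
            return None            # device not authorized for this model
        return key                 # authentication passed: return the key

server = KeyServer()
assert server.handle_key_request("model-001", "device-A") == b"secret-key-001"
assert server.handle_key_request("model-001", "device-B") is None
```

The device processor would send such a request containing both identifiers and, on success, use the returned key in the first operator's decryption logic.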
- the encrypting of the first area in the second model includes: encrypting the first area by using a second operator, where the second operator is used to indicate encryption processing logic.
- the embodiment of the present application provides a model protection device, including: an acquisition module, configured to obtain a plurality of execution operators from a first model, where the plurality of execution operators include a first operator, and the first operator is used to indicate decryption processing logic; and an execution module, configured to execute the plurality of execution operators acquired by the acquisition module according to their hierarchical relationship. The execution module is specifically configured to: when execution reaches the first operator, decrypt the first data under the first operator based on the decryption processing logic to obtain second data, and execute one or more execution operators arranged after the first operator based on the second data.
- the second data is: at least one weight; or at least one execution operator; or at least one weight and at least one execution operator.
- the execution module is further configured to: in response to a key returned based on a key acquisition request, use the key to decrypt the first data to obtain the second data.
- the first operator is used to indicate the address of the storage space where the first data is located, and when the first operator is executed, the execution module is further used to:
- the data stored in the address is decrypted based on the decryption processing logic to obtain the second data.
- the device further includes:
- a deletion module configured to delete the first model, the multiple execution operators, and the second data when the execution of the plurality of execution operators is completed.
- the first model is a training model or an inference model.
- the device further includes:
- a returning module configured to return an inference result if the first model is an inference model; and return a trained model if the first model is a training model.
- the decryption processing logic is symmetric decryption processing logic or asymmetric decryption processing logic.
- the embodiment of the present application provides a model protection device, including:
- An encryption module configured to encrypt the first area in the second model
- An adding module configured to add a first operator to the calculation graph of the second model according to the first area encrypted by the encryption module to obtain a first model, and the first operator is used to indicate the decryption processing logic
- a sending module configured to send the first model obtained after the adding module adds the first operator.
- the data in the first area is: at least one weight; or at least one execution operator; or at least one weight and at least one execution operator.
- the device further includes:
- an authentication module configured to authenticate the device processor in response to a key acquisition request from the device processor, and to return a key to the device processor if the authentication is passed.
- the key acquisition request includes the identifier of the first model and the identifier of the device processor, and the authentication module is further configured to:
- the device processor is authenticated based on the identification of the first model and the identification of the device processor.
- the encryption module is also used for:
- the first area is encrypted by using a second operator, and the second operator is used to indicate an encryption processing logic.
- the embodiments of the present application provide an electronic device.
- the terminal device can execute the model protection method of the above first aspect or of one or more of the multiple possible implementations of the first aspect, or execute the model protection method of the above second aspect or of one or more of the multiple possible implementations of the second aspect.
- the embodiments of the present application provide a processor, which can execute the model protection method of the above first aspect or of one or more of the multiple possible implementations of the first aspect, or execute the model protection method of the above second aspect or of one or more of the multiple possible implementations of the second aspect.
- the embodiments of the present application provide a chip, which can implement the model protection method of the above first aspect or of one or more of the multiple possible implementations of the first aspect, or execute the model protection method of the above second aspect or of one or more of the multiple possible implementations of the second aspect.
- the embodiment of the present application provides a model protection system. The system includes a host processor, a storage unit, and a device processor. The host processor is configured to: encrypt a first area in a second model; add a first operator to the calculation graph of the second model according to the first area to obtain a first model, where the first operator is used to indicate decryption processing logic; and send the first model. The storage unit is configured to store the first model. The device processor is configured to: obtain a plurality of execution operators from the first model, where the plurality of execution operators include the first operator; and execute the plurality of execution operators in sequence according to their hierarchical relationship, including: when execution reaches the first operator, decrypting first data under the first operator based on the decryption processing logic to obtain second data, and executing one or more execution operators arranged after the first operator based on the second data.
- the embodiments of the present application provide a readable storage medium, on which computer program instructions are stored. When the computer program instructions are executed by a processor, the model protection method of the above first aspect or of one or more of the multiple possible implementations of the first aspect can be realized.
- the embodiments of the present application provide a computer program product, including computer readable code, or a non-volatile computer readable storage medium bearing computer readable code. When the computer readable code runs in an electronic device, the processor in the electronic device executes the model protection method of the above first aspect or of one or more of the multiple possible implementations of the first aspect, or executes the model protection method of the above second aspect or of one or more of the multiple possible implementations of the second aspect.
- FIG. 1 shows a schematic diagram of a model protection method in the related art
- Figure 2 shows a schematic diagram of the architecture of the model protection system provided by the embodiment of the present application
- Figure 3 shows an exemplary schematic diagram of a calculation graph
- Fig. 4 shows a flow chart of the model protection method provided by the embodiment of the present application
- Fig. 5 shows a flow chart of the model protection method provided by the embodiment of the present application.
- Fig. 6 shows an interactive schematic diagram of the model protection method provided by the embodiment of the present application.
- Fig. 7 shows a schematic structural diagram of a model protection device provided by the embodiment of the present application.
- Fig. 8 shows a schematic structural diagram of a model protection device provided by an embodiment of the present application.
- the embodiment of the present application provides a model protection method, which realizes the protection of the model through pure software, does not add additional components, and has low requirements on the scale of the operating environment and algorithms, thereby reducing the system cost.
- the model protection method provided by the embodiment of the present application can be applied to the model calculation process in the end, edge, and cloud scenarios.
- the end refers to the client or device end, such as a mobile phone or computer
- the edge refers to an edge device, such as a router, switch, etc.
- the cloud refers to the cloud, such as a server cluster.
- the model protection method provided by the embodiment of the present application can be used in model inference scenarios and model incremental training scenarios. The embodiment of this application does not limit the application scenario.
- FIG. 2 shows a schematic diagram of the architecture of the model protection system provided by the embodiment of the present application.
- the model protection system includes a host processor 21 , a storage unit 22 and a device processor 23 .
- the host processor 21 is the control center of the host (host), and is used to run AI applications.
- the storage unit 22 may be used to save data such as AI models involved in AI applications.
- the device processor 23 is the control center of the device, and is used to process the AI model involved in the AI application, for example, use the AI model involved in the AI application to perform inference, or perform incremental training on the AI model involved in the AI application.
- the AI model can be used for target detection, image processing, signal control, etc., and the embodiments of the present application do not limit the function of the AI model.
- the AI model can be a convolutional neural network (Convolutional Neural Network, CNN) model, a recurrent neural network (Recurrent Neural Network, RNN) model, a deep neural network (Deep Neural Network, DNN) model, and so on; the embodiments of the present application do not limit the category of the AI model.
- the management module of the host processor 21 can load and execute the AI model by calling the interface provided by the graph executor (Graph Engine, GE), and can manage the storage unit 22, the device processor 23, and so on by calling the interface provided by the runtime management module (Runtime), so as to use the AI model computing power provided by the device processor 23 to complete the business.
- the host processor 21 may be a processor, or may be a general term for multiple processing elements.
- the host processor 21 can be a central processing unit (Central Processing Unit, CPU), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA), a graphics processing unit (Graphics Processing Unit, GPU), or one or more integrated circuits configured to implement the disclosed embodiments, for example one or more digital signal processors (Digital Signal Processor, DSP).
- for the device processor 23, refer to the description of the host processor 21, which will not be repeated here. It should be noted that the device processor 23 has strong model computing capabilities.
- the storage unit 22 may include a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories.
- the non-volatile memory can be read-only memory (read-only memory, ROM), programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or flash memory.
- the volatile memory can be random access memory (RAM), which acts as external cache memory.
- random access memory (RAM)
- static random access memory (SRAM)
- dynamic random access memory (DRAM)
- synchronous dynamic random access memory (SDRAM)
- double data rate synchronous dynamic random access memory (DDR SDRAM)
- enhanced synchronous dynamic random access memory (ESDRAM)
- synchlink dynamic random access memory (SLDRAM)
- direct rambus random access memory (DR RAM)
- the host processor 21 and the device processor 23 may be located in different devices.
- the host processor 21 may be located in a host device such as an X86 server, an ARM server, or a Windows PC
- the device processor 23 may be installed in a hardware device that can be connected to the host device.
- the host processor 21 and the storage unit 22 are located in a host device, and the device processor 23 is located in a hardware device.
- the host processor 21 and the storage unit 22 can be connected by a bus, where the bus can be an industry standard architecture (Industry Standard Architecture, ISA) bus, a peripheral component interconnect (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like.
- the bus can be divided into address bus, data bus, control bus and so on.
- a high-speed serial computer expansion bus standard (Peripheral Component Interconnect Express, PCIe) interface can be set on the host device, and the hardware device can be connected to the host device through the PCIe interface.
- the foregoing host device and the foregoing hardware device may be co-located, and are collectively referred to as a host device.
- Fig. 2 shows only an exemplary architecture diagram of the model protection system provided by the embodiment of the present application, and does not constitute a limitation on the model protection system.
- the model protection system may include more or less components, or combinations of certain components, or different arrangements of components.
- an AI model refers to a structure obtained by fixing a neural network according to a certain algorithm.
- the AI model includes calculation graphs and weights. Among them, the calculation graph is used to represent the operation process of the algorithm, which is a method to formalize the algorithm.
- the weight is used to represent the data that the operator needs to use during execution.
- the calculation graph includes multiple nodes, which are connected by directed edges, and each node represents an execution operator.
- the input edge entering a node represents the input data of the node corresponding to the execution operator, and the output edge leaving the node represents the output data of the node corresponding to the execution operator.
- the calculation process represented by the calculation graph can be a model reasoning process or a model training process.
- FIG. 3 shows an exemplary schematic diagram of a computation graph.
- a and B are input data
- C and D are execution operators
- E is a weight.
- C means multiplication
- D means addition
- E means constant.
- This calculation graph shows the operation process of outputting A*B+E.
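The Fig. 3 graph above can be sketched as a tiny executable structure. The tuple-per-node representation below is an illustrative assumption for demonstration, not a format defined by the application:

```python
# Minimal sketch of the Fig. 3 computation graph (output = A*B + E):
# each node names an execution operator and the edges feeding it.
import operator

graph = [
    # (node name, operator function, input names)
    ("C", operator.mul, ("A", "B")),   # node C multiplies inputs A and B
    ("D", operator.add, ("C", "E")),   # node D adds C's output and weight E
]

def execute(graph, inputs):
    values = dict(inputs)              # input data and weights
    for name, op, args in graph:       # nodes listed in execution order
        values[name] = op(*(values[a] for a in args))
    return values[graph[-1][0]]        # output of the last node

assert execute(graph, {"A": 3, "B": 4, "E": 5}) == 17   # 3*4 + 5
```

Executing the nodes in list order here plays the role of executing the operators according to their hierarchical relationship.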
- the calculation graph can be serialized into a model file readable by the device processor (for example, the device processor 23 shown in FIG. 2) for saving, so that the device processor can run the model file to realize the calculation.
- the model files readable by the device processor include but are not limited to MindIR format files, AIR (Ascend Intermediate Representation) format files, ONNX (Open Neural Network Exchange) format files, and so on. That is to say, TensorFlow, PyTorch, or other AI frameworks on the host processor can save the calculation graph and weights of the AI model according to a certain data structure, so as to facilitate subsequent model inference or model incremental training on the device processor.
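As a rough illustration of saving a calculation graph and weights to a model file and loading it back, the sketch below uses JSON purely as an assumption for demonstration; real deployments would use MindIR, AIR, or ONNX as noted above:

```python
# Illustrative sketch only: serialize a graph + weights, then reload them.
# The JSON layout is an assumption, not any of the real model-file formats.
import json, os, tempfile

model = {
    "graph": [["C", "mul", ["A", "B"]], ["D", "add", ["C", "E"]]],
    "weights": {"E": 5},
}

path = os.path.join(tempfile.mkdtemp(), "model.json")
with open(path, "w") as f:
    json.dump(model, f)        # host side: save graph and weights

with open(path) as f:
    loaded = json.load(f)      # device side: load the model file

assert loaded == model         # round-trip preserves graph and weights
```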
- FIG. 4 shows a flowchart of a model protection method provided by an embodiment of the present application. This method can be applied to the device processor 23 shown in FIG. 2 . As shown in Figure 4, the method may include:
- Step S401: a plurality of execution operators are obtained from a first model, where the plurality of execution operators include a first operator, and the first operator is used to indicate decryption processing logic.
- the first model may represent an encrypted AI model.
- the first model may be a training model or an inference model.
- the first model can be used for image classification, speech recognition, object detection or object tracking, and the like.
- the embodiment of the present application does not limit the type and function of the first model.
- the first model includes a computation graph and weights, and the computation graph of the first model includes multiple execution operators.
- the device processor may decompress the first model layer by layer to obtain the multiple execution operators.
- the first operator may represent an execution operator for instructing decryption processing logic.
- the multiple execution operators of the first model may include one or more first operators.
- the embodiment of the present application does not limit the number of first operators.
- the device processor may decrypt the input data based on the decryption processing logic indicated by the first operator, and then output the decrypted data.
- the data that needs to be input to the first operator is called the first data under the first operator, and the output data after being operated by the first operator is called the second data. It can be understood that the first data is data that needs to be decrypted, and the second data is decrypted data.
- the second data may be at least one weight; or, at least one execution operator; or at least one weight and at least one execution operator. That is to say, the first data may be at least one encrypted weight, or at least one encrypted execution operator, or at least one encrypted weight and at least one encrypted execution operator.
- for example, the weight M1 is input into the first operator for decryption to obtain the weight M2; the execution operator N1 is input into the first operator for decryption to obtain the execution operator N2; or the weight M1 and the execution operator N1 are input into the first operator for decryption to obtain the weight M2 and the execution operator N2.
- the decryption processing logic may be symmetric decryption processing logic or asymmetric decryption processing logic.
- symmetric decryption processing logic includes but is not limited to the DES algorithm, the TripleDES algorithm, the BlowFish algorithm, RC algorithms, and so on.
- asymmetric decryption processing logic includes but is not limited to RSA, Elgamal, knapsack algorithms, the Rabin algorithm, ECC algorithms, and so on.
- the embodiment of the present application does not limit the decryption processing logic. Users can flexibly choose encryption and decryption algorithms to encrypt and decrypt data.
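To make the role of a decryption operator concrete, the sketch below uses a toy XOR cipher as a stand-in for the real symmetric or asymmetric algorithms listed above. XOR is chosen only so the example is self-contained; it is not secure and is not the application's scheme:

```python
# Toy illustration: a "first operator" whose processing logic is decryption.
# XOR is a stand-in for DES/TripleDES/BlowFish/RC or RSA/ECC; never use it
# for actual protection.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def first_operator(first_data: bytes, key: bytes) -> bytes:
    """Decrypt the first data under the first operator, yielding second data."""
    return xor_bytes(first_data, key)

key = b"k3y"
plaintext = b"weight-E=5"
ciphertext = xor_bytes(plaintext, key)                # owner side: encrypt (second operator)
assert first_operator(ciphertext, key) == plaintext   # device side: decrypt during execution
```

Swapping in a real cipher only changes the body of `first_operator`; the graph-level mechanism stays the same.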
- the first model may be provided by the host processor.
- the host processor 21 can generate the first model and save it in the storage unit 22 while running the AI application, and the device processor 23 can load the first model from the storage unit 22.
- the transmission of the first model may be implemented by means of memory copy.
- the host processor 21 and the device processor 23 may apply for storage space in the storage unit 22 respectively.
- the host processor 21 stores the first model in its corresponding storage space, realizing the storage of the first model.
- the device processor 23 copies the data in the storage space where the first model is located to the corresponding storage space of the device processor 23 to implement loading of the first model.
- Step S402: execute the multiple execution operators in sequence according to the hierarchical relationship of the multiple execution operators.
- the device processor may execute these execution operators in sequence according to the hierarchical relationship of the multiple execution operators. Since the multiple execution operators of the first model include one or more first operators, when any first operator is executed, the device processor may perform step S403.
- Step S403: when execution reaches the first operator, decrypt the first data under the first operator based on the decryption processing logic to obtain second data, and execute, based on the second data, one or more execution operators arranged after the first operator.
- the first data (including the encrypted execution operator and/or the encrypted weight value) can be used as the input of the first operator.
- the device processor can use the decryption processing logic indicated by the first operator to decrypt the data input to the first operator, so as to obtain the second data (including the decrypted execution operator and/or the decrypted weights).
- the weight E shown in FIG. 3 is an encrypted weight. If the device processor directly adopts the weight E or decrypts the weight E incorrectly, the output result will be wrong.
- the weight E is the input of the first operator, and the first operator is arranged before the execution operator D; the device processor needs to execute the first operator first and then the execution operator D.
- when the device processor executes the first operator, it inputs the weight E into the first operator and outputs the decryption result of the weight E; it then executes the execution operator D, that is, adds the result of A*B to the decryption result of the weight E.
- the execution operator C shown in FIG. 3 is an encrypted execution operator. If the device processor directly uses the execution operator C or the decryption processing of the execution operator C is incorrect, the output result is wrong.
- the execution operator C is the input of the first operator, and the first operator is arranged before the execution operator D, and the application processor needs to execute the first operator before executing the execution operator D.
- when the application processor executes the first operator, it inputs the execution operator C into the first operator to obtain the decryption result, the execution operator "*".
- the application processor then multiplies the input data A by the input data B to obtain the result of A*B; it then executes the execution operator D, that is, adds the result of A*B to the decryption result of the weight E.
- the weight E shown in FIG. 3 is an encrypted weight
- the execution operator C is an encrypted execution operator.
- the first operator 1 is arranged before the first operator 2
- the first operator 2 is arranged before the execution operator D.
- when the application processor executes the first operator 1, it inputs the execution operator C into the first operator 1 to obtain the decryption result, the execution operator "*", and then multiplies the input data A by the input data B to obtain the result of A*B.
- when the application processor executes the first operator 2, it inputs the weight E into the first operator 2 and outputs the decryption result of the weight E.
- the application processor executes the execution operator D, that is, adds the result of A*B to the decryption result of the weight E.
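For illustration, the FIG. 3 examples above can be sketched in Python. The XOR cipher, the key value, and the concrete data are assumptions made for this sketch, not the actual encryption scheme of the embodiment:

```python
# Toy sketch of the FIG. 3 graph: first operator 1 decrypts the encrypted
# execution operator C (yielding "*"), first operator 2 decrypts the encrypted
# weight E, and execution operator D adds the results. All values are
# illustrative assumptions.
KEY = 0x5A

def xor_decrypt(encrypted: bytes) -> bytes:
    """Stand-in for the decryption processing logic indicated by a first operator."""
    return bytes(b ^ KEY for b in encrypted)

# First data: the encrypted execution operator C and the encrypted weight E.
encrypted_op_c = bytes(b ^ KEY for b in b"*")   # execution operator C, encrypted
encrypted_weight_e = bytes([7 ^ KEY])           # weight E (plaintext value 7)

def run_graph(a: int, b: int) -> int:
    op_symbol = xor_decrypt(encrypted_op_c).decode()  # first operator 1
    assert op_symbol == "*"
    product = a * b                                   # decrypted operator: A*B
    weight_e = xor_decrypt(encrypted_weight_e)[0]     # first operator 2
    return product + weight_e                         # execution operator D

print(run_graph(3, 4))  # 3*4 + 7 = 19
```

Running the graph with wrong or missing decryption would, as the text notes, operate on ciphertext and produce a wrong output.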
- the first operator can be ranked first among all execution operators of the first model, in which case the entire calculation graph and all weights can be protected; the first operator can also be placed at other positions.
- the user can flexibly set the position and number of first operators, and can also set first operators consecutively (to realize multiple encryption), thereby improving data security.
- the first operator may also be used to indicate the address of the storage space where the first data is located.
- decrypting the first data under the first operator based on the decryption processing logic to obtain the second data may include: when execution reaches the address, decrypting the data stored at the address based on the decryption processing logic to obtain the second data.
- the calculation graph and weights obtained after decompression of the first model are stored in the storage space (for example, the storage unit 22 shown in FIG. 2 ) in the form of a data sequence.
- the application processor reads data in the storage space and executes the first operator when the data corresponding to the first operator is read. Since the first operator indicates the address of the first data, when the application processor executes the first operator it first reads the first data from the address indicated by the first operator, and then decrypts the first data based on the decryption processing logic to obtain the second data.
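As a hedged illustration of an address-indicating first operator, the sketch below keeps the model's data sequence in a flat buffer (standing in for the storage unit) and records the offset and length of the encrypted first data in the operator itself. The offsets, the buffer layout, and the XOR logic are assumptions of this sketch:

```python
# The decompressed model's data sequence lives in a flat buffer; the first
# operator records where its encrypted first data starts and how long it is.
# All offsets and byte values are illustrative.
buffer = bytearray(b"\x00\x00" + bytes(b ^ 0x07 for b in b"\x09\x0c") + b"\x00")

first_operator = {
    "decrypt": lambda data: bytes(b ^ 0x07 for b in data),  # decryption logic
    "addr": 2,       # address of the first data inside the buffer
    "length": 2,
}

def execute_first_operator(op):
    """Read the first data from the indicated address, decrypt it, and write
    the second data back so later execution operators can use it."""
    start, end = op["addr"], op["addr"] + op["length"]
    second_data = op["decrypt"](bytes(buffer[start:end]))
    buffer[start:end] = second_data
    return second_data

print(list(execute_first_operator(first_operator)))  # [9, 12]
```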
- decrypting the first data based on the decryption processing logic to obtain the second data may include: in response to the key returned based on a key acquisition request, using the key to decrypt the first data to obtain the second data.
- the device processor may send a key acquisition request to the host processor, so as to acquire the key to decrypt the first data.
- the host processor may authenticate the device processor, and return the key to the device processor if the authentication is passed.
- the key acquisition request is used to acquire the key.
- the key acquisition request may include an identifier of the first model and an identifier of the device processor.
- the host processor may obtain the identifier of the first model and the identifier of the device processor from the key acquisition request, and then authenticate the device processor based on the identifier of the first model and the identifier of the device processor.
- a permission table may be maintained in the host processor, and the permission table may be used to store the identifier of the model, the identifier of the processor, and the association relationship of permissions.
- the model identification can be the name of the model, the number of the model or the category of the model, etc.
- the identification of the processor can be the name of the processor, the number of the processor, the model of the processor, etc.
- the permission can be decryption permission or no decryption permission.
- if the host processor finds in the permission table that the permission associated with the identifier of the first model and the identifier of the device processor is decryption permission, it determines that the authentication is passed and returns the key; if the host processor does not find the identifier of the first model or the identifier of the device processor in the permission table, or finds that the associated permission is no decryption permission, it determines that the authentication has not passed and does not return the key.
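The permission-table check described above can be sketched as follows; the table keys (model identifier, processor identifier), the permission names, and the key material are assumptions made for this example:

```python
# Hedged sketch of the host-side authentication: the permission table maps
# (model identifier, processor identifier) to a permission, and the key is
# returned only when that permission is "decrypt".
PERMISSION_TABLE = {
    ("model-1", "npu-0"): "decrypt",
    ("model-1", "npu-1"): "no-decrypt",
}
KEYS = {"model-1": b"illustrative-key"}

def handle_key_request(model_id: str, processor_id: str):
    """Authenticate the device processor; return the key only on success."""
    permission = PERMISSION_TABLE.get((model_id, processor_id))
    if permission != "decrypt":
        # Unknown identifiers and an explicit "no-decrypt" entry both fail.
        return None
    return KEYS[model_id]
```

Either an absent entry or a no-decrypt entry leads to the same outcome: authentication fails and no key is returned, matching the two failure cases described above.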
- authentication may also be performed in other ways, for example, according to the process number, according to the interface number, etc., and the embodiment of the present application does not limit the authentication method.
- the key exchange between the device processor and the host processor may be implemented through an asynchronous API or a proprietary interface, which is not limited in this embodiment of the present application.
- after the decryption processing logic decrypts the first data under the first operator, execution continues with the one or more execution operators ranked after the first operator, so as to realize the operation of the first model.
- data decryption is realized in a pure software manner; that is, the model is protected without adding additional components, which not only reduces hardware cost but also reduces the requirements on the operating environment and the scale of the algorithm.
- users can choose the appropriate decryption processing logic according to actual needs, which improves flexibility, user-friendliness, and realizes personalized customization, thereby further improving the security of the model.
- the model protection method may further include: after executing the multiple execution operators, deleting the first model, the multiple execution operators, and the second data.
- the device processor deletes the second data after executing the multiple execution operators of the first model, which can further guarantee the security of the model.
- the application processor may return an inference result.
- AI applications in the host processor can use the inference results during runtime.
- the application processor may return the trained model.
- the AI application in the host processor can use the trained model during operation. It can be understood that the trained model returned by the application processor includes a calculation graph and weights of the trained model.
- FIG. 5 shows a flow chart of the model protection method provided by the embodiment of the present application. This method can be applied to the host processor 21 shown in FIG. 2 . As shown in Figure 5, the method may include:
- Step S501 encrypting the first area in the second model.
- the second model represents a model that needs to be protected by encryption.
- the first area may represent an area in the second model that needs to be encrypted.
- the data in the first area is: at least one weight; or, at least one execution operator; or, at least one weight and at least one execution operator.
- the user can flexibly select the area to be encrypted, which improves flexibility and user-friendliness. For example, users can choose to encrypt key data and sensitive data, users can choose to encrypt key operators, or users can encrypt the entire calculation graph.
- the first area is not limited.
- step S501 may include: encrypting the first area by using a second operator, where the second operator is used to indicate an encryption processing logic.
- the host processor may choose to encrypt the first region using the second operator. It can be understood that, in the embodiment of the present application, the encryption processing logic indicated by the second operator corresponds to the decryption processing logic indicated by the first operator, and the encryption algorithm and decryption algorithm used are matched. Of course, the host processor may also choose other methods, such as manual editing, to encrypt the first area, which is not limited in this embodiment of the present application.
- Step S502 adding a first operator to the calculation graph of the second model according to the first region to obtain a first model, where the first operator is used to indicate decryption processing logic.
- Step S503 sending the first model.
- the host processor may directly send the first model to the device processor for processing.
- the host processor may send the first model to the storage unit for storage, and then the device processor may load the first model from the storage unit for processing.
- the first operator can be inserted into any layer of the calculation graph of the second model to obtain the first model, and the user can also flexibly use the first operator so that the weights of one or more layers are randomly encrypted, to avoid leakage of core data.
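As a sketch of steps S501 and S502, the snippet below encrypts the weights of randomly chosen layers and inserts a decryption ("first") operator in front of each. The layer representation and the XOR `encrypt` helper are assumptions of this example, not the embodiment's actual format:

```python
import random

def encrypt(weights, key=0x2F):
    """Toy encryption of a layer's weights (XOR is its own inverse)."""
    return [w ^ key for w in weights]

def protect_model(layers, n_encrypted=1, seed=0):
    """Encrypt the weights of randomly chosen layers and insert a first
    operator (indicating decryption logic) immediately before each."""
    rng = random.Random(seed)
    chosen = set(rng.sample(range(len(layers)), n_encrypted))
    protected = []
    for i, layer in enumerate(layers):
        if i in chosen:
            protected.append({"op": "decrypt"})                  # first operator
            layer = dict(layer, weights=encrypt(layer["weights"]))
        protected.append(layer)
    return protected

model = [{"op": "conv", "weights": [1, 2]}, {"op": "fc", "weights": [3]}]
print(protect_model(model))
```

Because the matching decryption logic sits directly before the encrypted layer, the device side can recover the plaintext weights just before they are needed.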
- Fig. 6 shows a schematic diagram of the interaction of the model protection method provided by the embodiment of the present application. This approach can be applied to the system shown in Figure 2. As shown in Figure 6, the method may include:
- Step S601 the host processor encrypts the first area in the second model.
- step S602 the host processor adds the first operator used to indicate the decryption processing logic to the calculation graph of the second model according to the first region to obtain the first model.
- Step S603 the host processor stores the first model in the storage unit.
- Step S604 the device processor loads the first model from the storage unit.
- Step S605 the device processor obtains multiple execution operators from the first model.
- Step S606 the device processor executes the multiple execution operators in sequence according to the hierarchical relationship of the multiple execution operators.
- Step S6061 when the device processor executes the first operator, it sends a key acquisition request to the host processor.
- step S6062 the host processor authenticates the device processor in response to the key acquisition request from the device processor.
- the key acquisition request includes an identifier of the first model and an identifier of the device processor.
- Step S6062 may include: the host processor authenticating the device processor based on the identifier of the first model and the identifier of the device processor.
- Step S6063 if the authentication is passed, the host processor returns the key to the device processor.
- Step S6064 in response to the key returned based on the key acquisition request, the device processor uses the key to decrypt the first data to obtain the second data, and executes, based on the second data, the one or more execution operators ranked after the first operator.
- Step S607 the device processor returns the inference result or the trained model.
- Step S608 after executing the multiple execution operators, the device processor deletes the first model, the multiple execution operators, and the second data.
- after the decryption processing logic decrypts the first data under the first operator, execution continues with the one or more execution operators ranked after the first operator, so as to realize the operation of the first model.
- data decryption is realized in a pure software manner; that is, the model is protected without adding additional components, which not only reduces hardware cost but also reduces the requirements on the operating environment and the scale of the algorithm.
- users can choose the appropriate decryption processing logic according to actual needs, which improves flexibility, user-friendliness, and realizes personalized customization, thereby further improving the security of the model.
- the decryption of data is performed by the device processor, and it is difficult for an attacker to enter the device side to attack, which improves security. Since all the implementation logic of the first operator is controlled by the user, the user can choose the encryption and decryption algorithm and the key exchange method more flexibly. For example, the user can encrypt the data content with a symmetric method and transmit the key in a public network environment with an asymmetric, public-key-based method; the embodiment of the present application does not limit the encryption and decryption algorithm.
- the key protected area of the model only needs to be temporarily stored in the storage unit (such as memory) without being parsed, and dynamic parsing of the second data is supported after decryption is completed.
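Putting the pieces together, the Fig. 6 interaction (steps S601 to S608) can be condensed into the single-process sketch below; the XOR cipher, the identifiers, and the data layout are all illustrative assumptions rather than the embodiment's actual protocol:

```python
KEY = 0x42
PERMISSIONS = {("m1", "dev1"): True}   # host-side permission table (assumed)

def xor(data, key):
    return [d ^ key for d in data]

# S601/S602: the host encrypts the first area (here, the weights) and stores
# the first model, whose first operator indicates the decryption logic.
storage = {"m1": {"first_op": "xor-decrypt", "enc_weights": xor([5, 6], KEY)}}

def get_key(model_id, device_id):
    """S6061-S6063: host authenticates the device and returns the key."""
    return KEY if PERMISSIONS.get((model_id, device_id)) else None

def device_run(model_id, device_id, inputs):
    model = dict(storage[model_id])            # S604: load the first model
    key = get_key(model_id, device_id)         # S6061: key acquisition request
    if key is None:
        return None                            # authentication failed, no key
    weights = xor(model["enc_weights"], key)   # S6064: decrypt the first data
    result = sum(i * w for i, w in zip(inputs, weights))  # execute operators
    model.clear()                              # S608: delete decrypted data
    return result

print(device_run("m1", "dev1", [1, 1]))  # 1*5 + 1*6 = 11
```

An unauthorized device identifier fails authentication, receives no key, and therefore cannot recover the plaintext weights.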
- Fig. 7 shows a schematic structural diagram of a model protection device provided by an embodiment of the present application. This means can be applied to the device processor 23 shown in FIG. 2 . As shown in Figure 7, the device 70 may include:
- An obtaining module 71 configured to obtain a plurality of execution operators from the first model, the plurality of execution operators include a first operator, and the first operator is used to indicate the decryption processing logic;
- An execution module 72, configured to execute, in sequence according to the hierarchical relationship of the multiple execution operators, the multiple execution operators obtained by the obtaining module 71;
- the execution module 72 is specifically used for:
- when the first operator is executed, decrypting the first data under the first operator based on the decryption processing logic to obtain second data, and executing, based on the second data, the one or more execution operators ranked after the first operator.
- the second data is:
- at least one weight; or, at least one execution operator; or, at least one weight and at least one execution operator.
- the execution module is further configured to:
- in response to the key returned based on the key acquisition request, decrypt the first data with the key to obtain the second data.
- the first operator is used to indicate the address of the storage space where the first data is located, and when the first operator is executed, the execution module is further used to:
- when execution reaches the address, decrypt the data stored at the address based on the decryption processing logic to obtain the second data.
- the device further includes:
- a deletion module configured to delete the first model, the multiple execution operators, and the second data when the execution of the plurality of execution operators is completed.
- the first model is a training model or an inference model.
- the device further includes:
- a returning module configured to return an inference result if the first model is an inference model; and return a trained model if the first model is a training model.
- the decryption processing logic is symmetric decryption processing logic or asymmetric decryption processing logic.
- after the decryption processing logic decrypts the first data under the first operator, execution continues with the one or more execution operators ranked after the first operator, so as to realize the operation of the first model.
- data decryption is realized in a pure software manner; that is, the model is protected without adding additional components, which not only reduces hardware cost but also reduces the requirements on the operating environment and the scale of the algorithm.
- users can choose the appropriate decryption processing logic according to actual needs, which improves flexibility, user-friendliness, and realizes personalized customization, thereby further improving the security of the model.
- Fig. 8 shows a schematic structural diagram of a model protection device provided by an embodiment of the present application. This device can be applied to the host processor 21 shown in FIG. 2 . As shown in Figure 8, the device 80 may include:
- An encryption module 81 configured to encrypt the first area in the second model
- An adding module 82, configured to add a first operator to the calculation graph of the second model according to the first area encrypted by the encryption module 81 to obtain a first model, where the first operator is used to indicate the decryption processing logic;
- the sending module 83 is configured to send the first model obtained after the adding module 82 adds the first operator.
- the data in the first area is:
- at least one weight; or, at least one execution operator; or, at least one weight and at least one execution operator.
- the device further includes:
- an authentication module, configured to authenticate the device processor in response to a key acquisition request from the device processor, and to return the key to the device processor if the authentication is passed.
- the key acquisition request includes the identifier of the first model and the identifier of the device processor, and the authentication module is further configured to:
- the device processor is authenticated based on the identification of the first model and the identification of the device processor.
- the encryption module is also used for:
- the first area is encrypted by using a second operator, and the second operator is used to indicate an encryption processing logic.
- An embodiment of the present application provides an electronic device, including: a processor and a memory for storing instructions executable by the processor; wherein the processor is configured to implement the above method when executing the instructions.
- An embodiment of the present application provides a processor, and the processor is configured to execute the above method.
- Embodiments of the present application provide a chip that can execute the above method.
- The embodiment of the present application provides a model protection system, the architecture of which is shown in Figure 2. The system includes a host processor, a storage unit and a device processor. The host processor is configured to encrypt the first area in the second model; add a first operator to the calculation graph of the second model according to the first area to obtain the first model, where the first operator is used to indicate the decryption processing logic; and send the first model. The storage unit is configured to store the first model. The device processor is configured to obtain multiple execution operators from the first model, where the multiple execution operators include the first operator; and execute the multiple execution operators in sequence according to their hierarchical relationship, including: when the first operator is executed, decrypting the first data under the first operator based on the decryption processing logic to obtain second data, and executing, based on the second data, the one or more execution operators ranked after the first operator.
- An embodiment of the present application provides a non-volatile computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is realized.
- An embodiment of the present application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes the above method.
- a computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
- a computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- A non-exhaustive list of computer-readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random-access memory (SRAM), compact disc read-only memory (CD-ROM), digital video discs (DVD), memory sticks, floppy disks, mechanically encoded devices such as punched cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the foregoing.
- the computer-readable program instructions or codes described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- a network adapter card or a network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium in the respective computing/processing device.
- Computer program instructions for performing the operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, via the Internet using an Internet service provider).
- electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), can execute computer-readable program instructions, thereby realizing various aspects of the present application.
- These computer-readable program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- These computer-readable program instructions may also be stored in a computer-readable storage medium and cause computers, programmable data processing devices, and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in a flowchart or block diagram may represent a module, a program segment, or a portion of instructions that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with hardware (such as circuits or an application-specific integrated circuit (ASIC)), or with a combination of hardware and software, such as firmware.
Claims (30)
- A model protection method, characterized in that the method comprises: obtaining multiple execution operators from a first model, where the multiple execution operators include a first operator and the first operator is used to indicate decryption processing logic; and executing the multiple execution operators in sequence according to the hierarchical relationship of the multiple execution operators, including: when the first operator is executed, decrypting first data under the first operator based on the decryption processing logic to obtain second data, and executing, based on the second data, one or more execution operators ranked after the first operator.
- The method according to claim 1, characterized in that the second data is: at least one weight; or at least one execution operator; or at least one weight and at least one execution operator.
- The method according to claim 1 or 2, characterized in that decrypting the first data based on the decryption processing logic to obtain the second data comprises: in response to a key returned based on a key acquisition request, decrypting the first data with the key to obtain the second data.
- The method according to claim 1 or 2, characterized in that the first operator is used to indicate an address of a storage space where the first data is located, and when the first operator is executed, decrypting the first data under the first operator based on the decryption processing logic to obtain the second data comprises: when execution reaches the address, decrypting the data stored at the address based on the decryption processing logic to obtain the second data.
- The method according to any one of claims 1 to 4, characterized in that the method further comprises: after executing the multiple execution operators, deleting the first model, the multiple execution operators, and the second data.
- The method according to any one of claims 1 to 4, characterized in that the first model is a training model or an inference model.
- The method according to claim 6, characterized in that the method further comprises: returning an inference result if the first model is an inference model; and returning a trained model if the first model is a training model.
- The method according to any one of claims 1 to 7, characterized in that the decryption processing logic is symmetric decryption processing logic or asymmetric decryption processing logic.
- A model protection method, characterized in that the method comprises: encrypting a first area in a second model; adding a first operator to a calculation graph of the second model according to the first area to obtain a first model, where the first operator is used to indicate decryption processing logic; and sending the first model.
- The method according to claim 9, characterized in that the data in the first area is: at least one weight; or at least one execution operator; or at least one weight and at least one execution operator.
- The method according to claim 9 or 10, characterized in that the method further comprises: authenticating a device processor in response to a key acquisition request from the device processor; and returning a key to the device processor if the authentication is passed.
- The method according to claim 11, characterized in that the key acquisition request includes an identifier of the first model and an identifier of the device processor, and authenticating the device processor comprises: authenticating the device processor based on the identifier of the first model and the identifier of the device processor.
- The method according to any one of claims 9 to 12, characterized in that encrypting the first area in the second model comprises: encrypting the first area with a second operator, where the second operator is used to indicate encryption processing logic.
- A model protection device, characterized in that the device comprises: an obtaining module, configured to obtain multiple execution operators from a first model, where the multiple execution operators include a first operator and the first operator is used to indicate decryption processing logic; and an execution module, configured to execute, in sequence according to the hierarchical relationship of the multiple execution operators, the multiple execution operators obtained by the obtaining module; the execution module being specifically configured to: when the first operator is executed, decrypt first data under the first operator based on the decryption processing logic to obtain second data, and execute, based on the second data, one or more execution operators ranked after the first operator.
- The device according to claim 14, characterized in that the second data is: at least one weight; or at least one execution operator; or at least one weight and at least one execution operator.
- The device according to claim 14 or 15, characterized in that the execution module is further configured to: in response to a key returned based on a key acquisition request, decrypt the first data with the key to obtain the second data.
- The device according to claim 14 or 15, characterized in that the first operator is used to indicate an address of a storage space where the first data is located, and when the first operator is executed, the execution module is further configured to: when execution reaches the address, decrypt the data stored at the address based on the decryption processing logic to obtain the second data.
- The device according to any one of claims 14 to 17, characterized in that the device further comprises: a deletion module, configured to delete the first model, the multiple execution operators, and the second data after the multiple execution operators have been executed.
- The device according to any one of claims 14 to 17, characterized in that the first model is a training model or an inference model.
- The device according to claim 19, characterized in that the device further comprises: a returning module, configured to return an inference result if the first model is an inference model, and to return a trained model if the first model is a training model.
- The device according to any one of claims 14 to 20, characterized in that the decryption processing logic is symmetric decryption processing logic or asymmetric decryption processing logic.
- A model protection device, characterized in that the device comprises: an encryption module, configured to encrypt a first area in a second model; an adding module, configured to add a first operator to a calculation graph of the second model according to the first area encrypted by the encryption module to obtain a first model, where the first operator is used to indicate decryption processing logic; and a sending module, configured to send the first model obtained after the adding module adds the first operator.
- The device according to claim 22, characterized in that the data in the first area is: at least one weight; or at least one execution operator; or at least one weight and at least one execution operator.
- The device according to claim 22 or 23, characterized in that the device further comprises: an authentication module, configured to authenticate a device processor in response to a key acquisition request from the device processor, and to return a key to the device processor if the authentication is passed.
- The device according to claim 24, characterized in that the key acquisition request includes an identifier of the first model and an identifier of the device processor, and the authentication module is further configured to: authenticate the device processor based on the identifier of the first model and the identifier of the device processor.
- The device according to any one of claims 22 to 25, characterized in that the encryption module is further configured to: encrypt the first area with a second operator, where the second operator is used to indicate encryption processing logic.
- An electronic device, characterized by comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to implement, when executing the instructions, the method according to any one of claims 1 to 8, or the method according to any one of claims 9 to 13.
- A model protection system, characterized in that the system includes a host processor, a storage unit, and a device processor, wherein the host processor is configured to encrypt a first area in a second model, add a first operator to a calculation graph of the second model according to the first area to obtain a first model, where the first operator is used to indicate decryption processing logic, and send the first model; the storage unit is configured to store the first model; and the device processor is configured to obtain multiple execution operators from the first model, where the multiple execution operators include the first operator, and execute the multiple execution operators in sequence according to the hierarchical relationship of the multiple execution operators, including: when the first operator is executed, decrypting first data under the first operator based on the decryption processing logic to obtain second data, and executing, based on the second data, one or more execution operators ranked after the first operator.
- A computer-readable storage medium on which computer program instructions are stored, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 8, or the method according to any one of claims 9 to 13.
- A computer program product, including computer-readable code, or a computer-readable storage medium carrying computer-readable code, characterized in that the computer-readable code, when executed by a processor, implements the method according to any one of claims 1 to 8, or the method according to any one of claims 9 to 13.
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| EP22868767.9A EP4339819A1 (en) | 2021-09-16 | 2022-06-20 | Model protection method and apparatus |
| US18/415,995 US20240154802A1 (en) | 2021-09-16 | 2024-01-18 | Model protection method and apparatus |
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202111086393.9A CN115828271A (zh) | 2021-09-16 | 2021-09-16 | 一种模型保护方法及装置 |
| CN202111086393.9 | 2021-09-16 | | |
Related Child Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| US18/415,995 Continuation US20240154802A1 (en) | Model protection method and apparatus | 2021-09-16 | 2024-01-18 |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| WO2023040390A1 (zh) | 2023-03-23 |
Family
ID=85515039
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| PCT/CN2022/099851 WO2023040390A1 (zh) | 一种模型保护方法及装置 | 2021-09-16 | 2022-06-20 |
Country Status (4)

| Country | Link |
| --- | --- |
| US | US20240154802A1 (zh) |
| EP | EP4339819A1 (zh) |
| CN | CN115828271A (zh) |
| WO | WO2023040390A1 (zh) |
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN109687952A | 2018-11-16 | 2019-04-26 | AInnovation (Chongqing) Technology Co., Ltd. | Data processing method and apparatus, electronic device, and storage medium |
| US20190334716A1 | 2018-04-27 | 2019-10-31 | The University Of Akron | Blockchain-empowered crowdsourced computing system |
| CN111428887A | 2020-03-19 | 2020-07-17 | Tencent Cloud Computing (Beijing) Co., Ltd. | Model training control method, apparatus and system based on multiple computing nodes |
| CN112749780A | 2019-10-31 | 2021-05-04 | Alibaba Group Holding Limited | Data processing method, apparatus and device |
| CN112804184A | 2019-11-13 | 2021-05-14 | Alibaba Group Holding Limited | Data obfuscation method, apparatus and device |
Also Published As
Publication number | Publication date |
---|---|
EP4339819A1 (en) | 2024-03-20 |
CN115828271A (zh) | 2023-03-21 |
US20240154802A1 (en) | 2024-05-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11196541B2 (en) | Secure machine learning analytics using homomorphic encryption | |
EP3916604B1 (en) | Method and apparatus for processing privacy data of block chain, device, storage medium and computer program product | |
US9020149B1 (en) | Protected storage for cryptographic materials | |
US11546348B2 (en) | Data service system | |
CN107612683B (zh) | Encryption and decryption method, apparatus, system, device, and storage medium | |
JP2014126865A (ja) | Cryptographic processing apparatus and method | |
US11676011B2 (en) | Private transfer learning | |
EP2778953A1 (en) | Encoded-search database device, method for adding and deleting data for encoded search, and addition/deletion program | |
US20170103083A1 (en) | System and method for searching distributed files across a plurality of clients | |
US9755832B2 (en) | Password-authenticated public key encryption and decryption | |
KR20220092811A (ko) | Method and apparatus for storing encrypted data | |
Sharma | ENHANCE DATA SECURITY IN CLOUD COMPUTING USING MACHINE LEARNING AND HYBRID CRYPTOGRAPHY TECHNIQUES. | |
US20220271914A1 (en) | System and Method for Providing a Secure, Collaborative, and Distributed Computing Environment as well as a Repository for Secure Data Storage and Sharing | |
WO2023040390A1 (zh) | Model protection method and apparatus | |
US10693628B2 (en) | Enabling distance-based operations on data encrypted using a homomorphic encryption scheme with inefficient decryption | |
US20230344634A1 (en) | Gesture-based authentication tokens for information security within a metaverse | |
CN115766173A (zh) | Data processing method, system, and apparatus | |
CN107111635B (zh) | Content delivery method | |
US11455404B2 (en) | Deduplication in a trusted execution environment | |
Zhang et al. | Secure deduplication based on Rabin fingerprinting over wireless sensing data in cloud computing | |
CN115843359A (zh) | Management of computational secrets | |
US20220351074A1 (en) | Encrypting data in a machine learning model | |
Dwivedi et al. | Cloud Security Enhancement Using Modified Enhanced Homomorphic Cryptosystem | |
CN115333811B (zh) | Secure channel-free public-key authenticated searchable encryption method with multi-keyword search function, and related apparatus | |
Baligodugula et al. | A Comparative Study of Secure and Efficient Data Duplication Mechanisms for Cloud-Based IoT Applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22868767; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 22868767.9; Country of ref document: EP; Ref document number: 2022868767; Country of ref document: EP |
| ENP | Entry into the national phase | Ref document number: 2022868767; Country of ref document: EP; Effective date: 20231214 |
| NENP | Non-entry into the national phase | Ref country code: DE |