CN117093869A - Safe model multiplexing method and system - Google Patents

Safe model multiplexing method and system

Info

Publication number
CN117093869A
CN117093869A (application CN202311098702.3A)
Authority
CN
China
Prior art keywords
model
data
specific target
data center
ciphertext
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311098702.3A
Other languages
Chinese (zh)
Inventor
丁琦
张鲁国
何骏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Xinda Jiean Information Technology Co Ltd
Original Assignee
Zhengzhou Xinda Jiean Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Xinda Jiean Information Technology Co Ltd filed Critical Zhengzhou Xinda Jiean Information Technology Co Ltd
Priority to CN202311098702.3A priority Critical patent/CN117093869A/en
Publication of CN117093869A publication Critical patent/CN117093869A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
      • G06 — COMPUTING; CALCULATING OR COUNTING
        • G06F — ELECTRIC DIGITAL DATA PROCESSING
          • G06F18/00 — Pattern recognition
            • G06F18/20 — Analysing
              • G06F18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                • G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
              • G06F18/24 — Classification techniques
          • G06F21/00 — Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
            • G06F21/60 — Protecting data
              • G06F21/602 — Providing cryptographic facilities or services
              • G06F21/606 — Protecting data by securing the transmission between two devices or processes
        • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 — Computing arrangements based on biological models
            • G06N3/02 — Neural networks
              • G06N3/04 — Architecture, e.g. interconnection topology
                • G06N3/0464 — Convolutional networks [CNN, ConvNet]
              • G06N3/08 — Learning methods

Abstract

The application relates to a secure model multiplexing method and system. In the method, a data center fine-tunes a pre-training model to obtain a plurality of target models and discloses the pre-training model and the use information of each target model to the user side. A secure communication mechanism is established between the data center and the user side based on a public key cryptosystem and the associated cryptographic security policies; the data center encrypts the layer data and layer information of a requested target model that have changed relative to the pre-training model and securely transmits them to the user side, which decrypts the data when the model is needed and replaces the corresponding data of the pre-training model, thereby obtaining the specific target model. With this scheme, target models meeting various personalized requirements are obtained from the pre-training model, the key data of each target model are protected by the public key cryptosystem and the associated security policies, and the security of the models and of the whole model multiplexing system is improved while the models' utilization efficiency is preserved.

Description

Safe model multiplexing method and system
Technical Field
The application belongs to the field of artificial intelligence and information security, and particularly relates to a safe model multiplexing method and system.
Background
With the gradual normalization of artificial intelligence applications, AI security has drawn increasing attention. AI security mainly covers model security, data security, and the security of the systems that host AI. AI models may face destructive threats from attackers during the training, operation, and storage phases, and risk leakage during transmission.
Conventional AI models are mostly built with supervised learning. In essence, supervised learning builds a machine learning (ML) model from scratch; it has been the key method driving AI development to date, but it presumes access to large data sets and substantial computing power. Many AI models cannot be built for lack of such resources, so a more efficient way of building models is desirable. Transfer learning was developed for this purpose: it exploits the similarity among data, tasks, or models to apply a model learned in an old domain to a new domain, and thereby reconciles general-purpose models with personalized demands. The pre-training model is an application of transfer learning: starting from an already-trained model, a new model suited to specific requirements can be obtained quickly with little data and computing power.
Therefore, how to build a more secure model multiplexing system on the basis of a pre-training model, so as to improve the security and efficiency of model use, remains an open problem.
In addition, even though a new model is obtained merely by fine-tuning the pre-training model, its total data volume remains large, which limits the number of new models each user side can store and prevents the diverse demands of users from being met.
Disclosure of Invention
The application aims to improve the security and efficiency of AI model use and provides a secure model multiplexing method and system.
In the technical scheme of the application, the data center fine-tunes a pre-training model according to different purposes and data sets to obtain a plurality of target models, and discloses the pre-training model and the use information of each target model to the user side. A secure communication mechanism is established between the data center and the user side based on a public key cryptosystem and the associated cryptographic security policy; upon the user side's application, the data center encrypts the layer data and layer information of the requested specific target model that have changed relative to the pre-training model to form a model ciphertext, and securely transmits it to the user side.
Compared with the prior art, the application offers the following substantive features and advances:
1. The technical scheme reuses models on the basis of a pre-training model, so target models meeting various personalized requirements can be obtained with relatively little data and computing power. Moreover, since only some layers of a target model typically differ from the pre-training model, only the data and information of the changed layers need to be encrypted and transmitted when models are stored and distributed, which improves the transmission efficiency of the model.
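The changed-layer idea can be sketched as follows, assuming a hypothetical representation in which a model is a mapping from layer names to (layer information, layer data) pairs; this is an illustration only, not the patent's storage format.

```python
# Illustrative sketch (not the patent's implementation): a model is a dict
# mapping layer names to (layer_info, layer_data); only layers that differ
# from the pre-training model need to be encrypted and transmitted.

def extract_changed_layers(pretrained, target):
    """Return only the layers of `target` that differ from `pretrained`."""
    delta = {}
    for name, layer in target.items():
        if pretrained.get(name) != layer:
            delta[name] = layer
    return delta

pretrained = {
    "conv1": ({"type": "conv", "kernel": 3}, [0.1, 0.2]),
    "fc":    ({"type": "dense", "nodes": 10}, [0.5, 0.6]),
}
target_a = dict(pretrained)
target_a["fc"] = ({"type": "dense", "nodes": 4}, [0.7, 0.8])  # fine-tuned head

delta = extract_changed_layers(pretrained, target_a)
assert list(delta) == ["fc"]   # only the changed layer is transmitted
```

In this toy example only the classification head changed, so the transmitted delta is a small fraction of the full model, which is the source of the claimed transmission-efficiency gain.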
2. The technical scheme protects the key data of each target model with a public key cryptosystem, ensuring the security of the model during transmission and storage, preventing the model from being obtained, tampered with or damaged by an attacker, and improving the security of the model and of the whole model multiplexing system while preserving the model's utilization efficiency.
3. The user side stores the pre-training model together with the data and information of the layers of a specific target model that have changed relative to it; in use, replacing the corresponding layer data in the pre-training model with the changed layer data yields the specific target model. Because the changed layers amount to far less data than the full target model, a user side can store the changed-layer data and information of many target models in a small storage space, thereby indirectly storing many specific target models and meeting the diverse demands of users.
Drawings
Fig. 1 is a flow chart of a secure model multiplexing method according to embodiment 1 of the present application.
Fig. 2 is a flow chart of a secure model multiplexing method according to embodiment 2 of the present application.
Fig. 3 is a schematic diagram of a secure model multiplexing system according to embodiment 3 of the present application.
Fig. 4 is a schematic workflow diagram of the model multiplexing system according to fig. 3.
Detailed Description
The pre-training model is an application of transfer learning: it is obtained from large-scale data through self-supervised learning and is independent of any specific task. The pre-training-plus-fine-tuning mechanism extends well: supporting a new task only requires fine-tuning with that task's labeled data. Fine-tuning the same pre-training model with different data for different purposes yields a number of different target models, meeting the different personalized requirements of different users.
On this principle, the application provides a secure model multiplexing method and system that improve the transmission efficiency and security of models and can meet different personalized requirements of the same user.
Example 1:
This embodiment provides a secure model multiplexing method applied to the user side, as shown in fig. 1, comprising:
receiving the pre-training model disclosed by the data center and the use information of each target model, wherein the target models are obtained by the data center by fine-tuning the pre-training model according to purpose and data set. The use information includes one or more of the application domain, usage scenario, task goal, target users, input data format, and output data format.
Establishing a secure communication mechanism with the data center based on a public key cryptosystem and the associated cryptographic security policy, and applying to the data center, based on the secure communication mechanism, for a specific target model such as target model A.
Receiving a model ciphertext sent by the data center, wherein the model ciphertext is obtained by the data center, according to the associated cryptographic security policy, by encrypting with a session key K the layer data and layer information of target model A that have changed relative to the pre-training model. Layer information includes, but is not limited to, the type of each layer in the model, its structure, number of nodes, connections with other layers, size of the input and output matrices, linear and nonlinear functions, size and number of convolution kernels, convolution stride, edge-padding mode, pooling stride, and pooling mode. Layer data include, but are not limited to, weight values and function parameters.
Acquiring the session key K according to the associated cryptographic security policy, and receiving and storing the model ciphertext. When target model A is needed, the model ciphertext is decrypted with the session key K to obtain the layer data and layer information, and the changed layer data replace the corresponding data in the pre-training model according to the layer information, yielding target model A.
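A minimal sketch of this replacement step, assuming the same hypothetical layer-name-to-data mapping as a model representation (an illustration, not the patent's prescribed format):

```python
def assemble_target_model(pretrained, decrypted_delta):
    """Rebuild a specific target model by substituting the changed layers."""
    model = dict(pretrained)       # copy of the locally stored pre-training model
    model.update(decrypted_delta)  # replace changed layers per their layer info
    return model

pretrained = {"conv1": [0.1, 0.2], "fc": [0.5, 0.6]}
delta = {"fc": [0.7, 0.8]}                        # decrypted changed-layer data
target_a = assemble_target_model(pretrained, delta)
assert target_a == {"conv1": [0.1, 0.2], "fc": [0.7, 0.8]}
assert pretrained["fc"] == [0.5, 0.6]   # the stored pre-training model is untouched
```

Keeping the pre-training model intact means the same stored copy can be combined with many different decrypted deltas, one per target model.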
As noted above, this scheme reuses models on the basis of a pre-training model, so target models meeting various personalized requirements can be obtained with relatively little data and computing power; and since only some layers of a target model typically differ from the pre-training model, only the data and information of the changed layers need to be encrypted and transmitted when models are stored and distributed, improving the transmission efficiency of the model.
It should be noted that the model in this embodiment is mainly an AI model, including an artificial neural network model, and particularly a deep learning model.
When fine-tuning a reused pre-training model, the original classifier is typically removed first and a new classifier suited to the task is added; the model can then be fine-tuned according to one of the following strategies:
(1) Train the entire model. This requires a large data set and substantial computing resources.
(2) Train some layers while freezing the others. The lower layers capture general features, while the higher layers capture task-specific features. In general, with a smaller data set and a large number of parameters, more layers should be frozen to avoid overfitting; with a larger data set and fewer parameters, fewer layers need to be frozen.
(3) Freeze the convolutional base, i.e., use the pre-training model as a fixed feature extractor.
In this embodiment, different ones of the above strategies may be selected for different target models, according to the purpose and the data set, to fine-tune the pre-training model.
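Strategy (2) — retraining some layers while freezing the rest — can be sketched abstractly. The sketch below only marks which layers would be trainable and performs no actual training; the layer names are hypothetical.

```python
def apply_freeze_strategy(layer_names, n_frozen):
    """Mark the first n_frozen (lower) layers as frozen, the rest as trainable.
    Purely illustrative: a real framework would instead disable gradient
    updates for the frozen layers' parameters."""
    return {name: (i >= n_frozen) for i, name in enumerate(layer_names)}

layers = ["conv1", "conv2", "conv3", "fc"]
trainable = apply_freeze_strategy(layers, n_frozen=3)   # strategy (2)
assert trainable == {"conv1": False, "conv2": False, "conv3": False, "fc": True}
```

With a small data set one would raise `n_frozen` (more frozen layers, less overfitting); with a larger data set one would lower it.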
Note that fine-tuning may also change the structure of some layers; any changed layer structure is likewise recorded in the layer information.
Because the changed layers of a specific target model amount to far less data than the full model, a user side can store the changed-layer data and information of many target models in a small storage space and assemble them with the pre-stored pre-training model and the different model ciphertexts to obtain multiple specific target models. The user side thus indirectly stores multiple specific target models and can meet the diverse demands of users.
In use, the user side may apply to the data center for several specific target models at once over the secure communication mechanism, or apply for the corresponding specific target models one by one as needed.
Further, to ensure secure communication with the data center, the user side needs to establish a secure communication mechanism with it. Specifically, the application adopts a secure communication mechanism established on the basis of a public key cryptosystem and the associated cryptographic security policy. A public key cryptosystem, also called an asymmetric cryptosystem, uses different keys for encryption and decryption: the encryption key PK (public key) is public, while the decryption key SK (private key) must be kept secret. Public key cryptosystems are designed around trapdoor one-way functions. On this basis, a certificate-based authentication system and a key exchange protocol are established according to the associated cryptographic security policy within a public key infrastructure (PKI), realizing mutual trust and secure information interaction between the user side and the data center. The associated cryptographic security policy specifies the operating rules of the cryptographic system.
Therefore, on the basis of a PKI system, the user side and the data center can establish a two-party secure communication mechanism using digital certificates and the associated cryptographic security policy. Under the public key cryptosystem and the associated cryptographic security policy, the two communicating parties obtain a shared session key K through key negotiation.
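As a hedged illustration of the key-negotiation step, the toy sketch below derives a shared session key K with a plain Diffie-Hellman exchange over a fixed prime. It is not the patent's prescribed algorithm (which would follow the PKI certificates and the associated cryptographic security policy); the prime, hash, and key size are illustrative assumptions only.

```python
# Toy Diffie-Hellman key negotiation: both parties derive the same session
# key K. Real deployments would use standardized groups (or ECDH) and
# authenticate the exchange with the parties' certificates.
import hashlib
import secrets

P = 2**64 - 59   # a small prime, far too small for real use
G = 2

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def session_key(priv, other_pub):
    shared = pow(other_pub, priv, P)
    return hashlib.sha256(shared.to_bytes(8, "big")).digest()

dc_priv, dc_pub = dh_keypair()   # data center's key pair
us_priv, us_pub = dh_keypair()   # user side's key pair
K_dc = session_key(dc_priv, us_pub)
K_us = session_key(us_priv, dc_pub)
assert K_dc == K_us              # both sides hold the same session key K
```

Each side publishes only its public value; the shared secret never crosses the channel, which is what makes the negotiated K usable for the subsequent symmetric encryption.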
If a single-certificate system is adopted, the user performs signing, signature verification, encryption and decryption with one certificate and its corresponding public/private key pair. For signing, the private key is applied to the digest of the relevant information to produce a signature, and the public key is applied to the signature to verify it. For encryption and decryption, the public key encrypts the information to produce ciphertext, and the private key decrypts the ciphertext to recover the information.
If a double certificate system is employed, a signature certificate and an encryption certificate are used, respectively. The signature certificate is used for identity verification, and the encryption certificate is used for key agreement.
Furthermore, the data center uses a symmetric encryption algorithm to protect the layer data and layer information of the specific target model that have changed relative to the pre-training model.
Preferably, when encrypting the layer data and layer information of target model A that have changed relative to the pre-training model, the data center applies a session key and/or cryptographic algorithm chosen according to the purpose of target model A and/or the requesting user side. It will be appreciated that different purposes and/or user sides correspond to different session keys and/or cryptographic algorithms: for example, the same purpose at different user sides, or different purposes at the same user side, may use different session keys and/or cryptographic algorithms. This "one user side, one key" protection secures the changed layer data and layer information and thus indirectly improves the security of the target model.
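The "one user side, one key; one purpose, one key" idea can be sketched with a simple key-derivation function. The sketch below is an assumption-laden illustration, not the patent's specified mechanism: `master`, `user_id`, and `purpose` are hypothetical names, and the HMAC-SHA256 construction merely stands in for whatever derivation the cryptographic security policy prescribes.

```python
import hashlib
import hmac

def derive_session_key(master_key, user_id, purpose):
    """Derive a distinct key per (user side, purpose) pair (HKDF-like sketch)."""
    info = user_id.encode() + b"|" + purpose.encode()
    return hmac.new(master_key, info, hashlib.sha256).digest()

master = b"data-center master secret (illustrative only)"
k1 = derive_session_key(master, "user-A", "image-classification")
k2 = derive_session_key(master, "user-B", "image-classification")
k3 = derive_session_key(master, "user-A", "text-classification")
assert len({k1, k2, k3}) == 3   # different user/purpose -> different keys
```

Because each (user side, purpose) pair yields a distinct key, compromising one target model's ciphertext does not expose the ciphertexts held by other user sides or prepared for other purposes.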
When transmitting the model ciphertext, the data center also securely transmits to the user side the decryption method and/or cryptographic algorithm that the security policy prescribes for target model A, so that the user side can decrypt the model ciphertext with that method and/or algorithm, completing the symmetric encryption scheme.
It can be understood that the technical scheme protects the key data of the target model with the public key cryptosystem, ensures the security of the model during transmission and storage, prevents the model from being obtained, tampered with or damaged by an attacker, and improves the security of the model and of the whole model multiplexing system while preserving the model's utilization efficiency.
Example 2:
This embodiment provides a secure model multiplexing method applied to a data center, as shown in fig. 2, comprising:
fine-tuning the pre-training model according to different purposes and data sets to obtain a plurality of target models, and disclosing the pre-training model and the use information of each target model to the user side. Preferably, when fine-tuning the pre-training model according to different purposes and data sets, some layers of the pre-training model are selected for retraining while the other layers are frozen.
Establishing a secure communication mechanism with the user side based on a public key cryptosystem and the associated cryptographic security policy, and receiving, based on the secure communication mechanism, the application information for a specific target model sent by the user side. The secure communication mechanism with the user side can be established on the basis of a PKI system using digital certificates and the associated cryptographic security policy.
Acquiring a session key K according to the associated cryptographic security policy, encrypting with the session key K the layer data and layer information of the specific target model that have changed relative to the pre-training model, obtaining a model ciphertext, and sending it to the user side. The user side then acquires the session key K according to the associated cryptographic security policy and receives and stores the model ciphertext; when the specific target model needs to be used, it decrypts the model ciphertext with the session key K to obtain the layer data and layer information, and replaces the changed layer data in the pre-training model according to the layer information, obtaining the specific target model.
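For illustration only, the encrypt-then-decrypt round trip of the model ciphertext can be sketched as below. The XOR keystream built from HMAC-SHA256 is a toy stand-in for whatever symmetric algorithm the cryptographic security policy actually prescribes (e.g., a standard block cipher), and reusing one key per message this way would not be secure in practice.

```python
import hashlib
import hmac
import json

def keystream_xor(key, data):
    """Toy symmetric cipher: XOR the data with an HMAC-SHA256 counter
    keystream. The same call encrypts and decrypts. Stand-in only --
    a real system would use an authenticated block cipher."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

K = hashlib.sha256(b"negotiated session key K (illustrative)").digest()
delta = {"fc": {"info": {"nodes": 4}, "data": [0.7, 0.8]}}   # changed layer
plaintext = json.dumps(delta).encode()
model_ciphertext = keystream_xor(K, plaintext)               # data center encrypts
recovered = json.loads(keystream_xor(K, model_ciphertext))   # user side decrypts
assert recovered == delta
```

The user side stores only `model_ciphertext`; the changed-layer delta is recovered on demand with K and then merged into the locally stored pre-training model.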
In specific implementation, the data center uses a symmetric encryption algorithm to protect the layer data and layer information of the specific target model that have changed relative to the pre-training model.
Specifically, when encrypting these layer data and layer information, the data center applies a session key and/or cryptographic algorithm chosen according to the purpose of the specific target model and/or the corresponding user side, with different purposes and/or user sides corresponding to different session keys and/or cryptographic algorithms.
Example 3:
This embodiment provides a secure model multiplexing system, as shown in fig. 3, comprising a data center and a user side. The data center comprises a model generation module, a first security module and a first communication module; the user side comprises a model use module, a second security module and a second communication module; the data center is in communication connection with the user side through the first communication module and the second communication module.
the model generation module is used for fine tuning the pre-training model according to different purposes and data sets to obtain a plurality of target models;
the first security module is used for establishing a secure communication mechanism with the user side based on the public key cryptosystem and the associated cryptographic security policy; and, in response to the user side's application for a specific target model, acquiring a session key K according to the associated cryptographic security policy and encrypting with the session key K the layer data and layer information of the specific target model that have changed relative to the pre-training model, obtaining a model ciphertext;
the first communication module is used for disclosing a pre-training model and the use information of each target model to the user side; receiving an application of a user side to a specific target model; sending the model ciphertext and/or the key ciphertext of the specific target model to a user side;
the second communication module is used for receiving the pre-training model disclosed by the data center and the use information of each target model; transmitting specific target model application information to a data center; receiving a model ciphertext and/or a key ciphertext of a specific target model;
the second security module is used for establishing a secure communication mechanism with the data center based on the public key cryptosystem and the associated cryptographic security policy; acquiring the session key K according to the associated cryptographic security policy; storing the model ciphertext of the specific target model; and, when the specific target model needs to be used, decrypting the model ciphertext with the session key K;
the model use module is used for storing the pre-training model and the use information of each target model; generating the application information for a specific target model; and, according to the layer information decrypted from the model ciphertext of the specific target model, replacing the changed layer data in the pre-training model with the decrypted layer data, thereby obtaining the specific target model.
Specifically, as shown in fig. 4, the workflow of the secure model multiplexing system is as follows:
the data center fine-tunes the pre-training model according to different purposes and data sets to obtain a plurality of target models, and discloses the pre-training model and the use information of each target model to the user side;
the user side establishes a secure communication mechanism with the data center based on the public key cryptosystem and the associated cryptographic security policy, and applies to the data center for a specific target model;
the data center acquires a session key K according to the associated cryptographic security policy, encrypts with the session key K the layer data and layer information of the specific target model that have changed relative to the pre-training model, obtains the model ciphertext, and sends it to the user side;
the user side acquires the session key K according to the associated cryptographic security policy, and receives and stores the model ciphertext; when the specific target model needs to be used, it decrypts the model ciphertext with the session key K to obtain the layer data and layer information, and replaces the changed layer data in the pre-training model according to the layer information, obtaining the specific target model.
Preferably, the secure model multiplexing system comprises one or more independently operating data centers. Each data center has its own cryptographic services and associated cryptographic security policy system.
Preferably, the secure model multiplexing system includes one or more independent clients.
The above merely illustrates the technical idea of the present application and does not limit its protection scope; any modification made to the technical scheme on the basis of this technical idea falls within the protection scope of the claims of the present application.

Claims (10)

1. A secure model multiplexing method applied to a user terminal, comprising:
receiving a pre-training model disclosed by a data center and use information of each target model, wherein the target models are obtained by the data center after fine tuning the pre-training model according to purposes and data sets;
establishing a secure communication mechanism with the data center based on a public key cryptosystem and the associated cryptographic security policy, and applying to the data center for a specific target model based on the secure communication mechanism;
receiving a model ciphertext sent by the data center, wherein the model ciphertext is obtained by the data center, according to the associated cryptographic security policy, by encrypting with a session key K the layer data and layer information of the specific target model that have changed relative to the pre-training model;
acquiring the session key K according to the associated cryptographic security policy, and receiving and storing the model ciphertext; and, when the specific target model needs to be used, decrypting the model ciphertext with the session key K to obtain the layer data and layer information, and replacing the changed layer data in the pre-training model according to the layer information, thereby obtaining the specific target model.
2. A secure model multiplexing method according to claim 1, characterized in that: the layer information comprises one or more of the type, structure, node number, connection relation with other layers, size of an input/output matrix, linear function, nonlinear function, size and number of convolution kernels, convolution step length, edge filling mode, pooling step length and pooling mode of the layers in the pre-training model; the layer data includes one or more of a weight value, a function parameter.
3. A secure model multiplexing method applied to a data center, comprising:
fine tuning the pre-training model according to different purposes and data sets to obtain a plurality of target models, and disclosing the pre-training model and the use information of each target model to a user side;
establishing a secure communication mechanism with the user side based on a public key cryptosystem and the associated cryptographic security policy, and receiving, based on the secure communication mechanism, the application information for a specific target model sent by the user side;
acquiring a session key K according to the associated cryptographic security policy, encrypting with the session key K the layer data and layer information of the specific target model that have changed relative to the pre-training model, obtaining a model ciphertext, and sending it to the user side, so that the user side acquires the session key K according to the associated cryptographic security policy and receives and stores the model ciphertext; and, when the specific target model needs to be used, decrypting the model ciphertext with the session key K to obtain the layer data and layer information, and replacing the changed layer data in the pre-training model according to the layer information, thereby obtaining the specific target model.
4. A secure model multiplexing method according to claim 3, characterized in that: the data center uses a symmetric encryption algorithm to carry out security protection on the layer data and the layer information which are changed relative to the pre-training model in the specific target model.
5. The method for secure model multiplexing as defined in claim 4, wherein: when the data center encrypts the layer data and the layer information which are changed relative to the pre-training model in the specific target model, the data center uses corresponding session keys and/or cryptographic algorithms to carry out safety protection according to the purpose of the specific target model and/or the corresponding user end, wherein different purposes and/or user ends correspond to different session keys and/or cryptographic algorithms.
6. A secure model multiplexing method according to claim 3, characterized in that: a secure communication mechanism with the user side is established on the basis of a PKI system using digital certificates and the associated cryptographic security policy.
7. A secure model multiplexing method according to claim 3, characterized in that: when the pre-training model is fine-tuned according to different purposes and data sets, part of layers in the pre-training model are selected for retraining, and other layers are frozen.
8. A secure model multiplexing system, characterized by: comprising a data center and a user side; the data center comprises a model generation module, a first security module and a first communication module; the user side comprises a model use module, a second security module and a second communication module; the data center is in communication connection with the user side through the first communication module and the second communication module;
the model generation module is used for fine tuning the pre-training model according to different purposes and data sets to obtain a plurality of target models;
the first security module is used for establishing a secure communication mechanism with the user side based on the public key cryptosystem and the relevant cryptographic security policy; and, in response to the user side's application for the specific target model, obtaining a session key K according to the relevant cryptographic security policy and encrypting the layer data and layer information that have changed relative to the pre-training model in the specific target model with the session key K to obtain a model ciphertext;
the first communication module is used for disclosing a pre-training model and the use information of each target model to the user side; receiving an application of a user side to a specific target model; sending the model ciphertext and/or the key ciphertext of the specific target model to a user side;
the second communication module is used for receiving the pre-training model disclosed by the data center and the use information of each target model; transmitting specific target model application information to a data center; receiving a model ciphertext and/or a key ciphertext of a specific target model;
the second security module is used for establishing a secure communication mechanism with the data center based on the public key cryptosystem and the relevant cryptographic security policy; obtaining the session key K according to the relevant cryptographic security policy; storing the model ciphertext of the specific target model; and, when the specific target model needs to be used, decrypting the model ciphertext with the session key K;
the model use module is used for storing the pre-training model and the usage information of each target model; generating the specific target model application information; and replacing the changed layer data in the pre-training model with the decrypted layer data according to the layer information decrypted from the model ciphertext of the specific target model, so as to obtain the specific target model.
9. A secure model multiplexing system according to claim 8, characterized in that: the system includes one or more independently operating data centers.
10. A secure model multiplexing system according to claim 8, characterized in that: the system includes one or more independent user sides.
CN202311098702.3A 2023-08-29 2023-08-29 Safe model multiplexing method and system Pending CN117093869A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311098702.3A CN117093869A (en) 2023-08-29 2023-08-29 Safe model multiplexing method and system


Publications (1)

Publication Number Publication Date
CN117093869A true CN117093869A (en) 2023-11-21

Family

ID=88774897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311098702.3A Pending CN117093869A (en) 2023-08-29 2023-08-29 Safe model multiplexing method and system

Country Status (1)

Country Link
CN (1) CN117093869A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117349869A (en) * 2023-12-05 2024-01-05 深圳市智能派科技有限公司 Method and system for encryption processing of slice data based on model application
CN117349869B (en) * 2023-12-05 2024-04-09 深圳市智能派科技有限公司 Method and system for encryption processing of slice data based on model application

Similar Documents

Publication Publication Date Title
Zhang et al. PEFL: A privacy-enhanced federated learning scheme for big data analytics
CN110247767B (en) Revocable attribute-based outsourcing encryption method in fog calculation
CN104158880B (en) User-end cloud data sharing solution
US20230131071A1 (en) Lightweight attribute-based signcryption (absc) method for cloud-fog-assisted internet-of-things (iot)
CN112104454B (en) Data secure transmission method and system
CN110535626B (en) Secret communication method and system for identity-based quantum communication service station
KR20110129961A (en) A method for secure communication in a network, a communication device, a network and a computer program therefor
US20220021526A1 (en) Certificateless public key encryption using pairings
CN109194474A (en) A kind of data transmission method and device
Guo et al. A Secure and Efficient Mutual Authentication and Key Agreement Protocol with Smart Cards for Wireless Communications.
CN113961959A (en) Proxy re-encryption method and system for data sharing community
Senthilkumar et al. Asymmetric Key Blum-Goldwasser Cryptography for Cloud Services Communication Security
CN117093869A (en) Safe model multiplexing method and system
CN111581648B (en) Method of federal learning to preserve privacy in irregular users
Dong et al. Achieving secure and efficient data collaboration in cloud computing
Priyadharshini et al. Efficient Key Management System Based Lightweight Devices in IoT.
Pavani et al. Data Security and Privacy Issues in Cloud Environment
CN109981254B (en) Micro public key encryption and decryption method based on finite lie type group decomposition problem
CN116055152A (en) Grid-based access control encryption and decryption method and system
CN114244567B (en) CP-ABE method for supporting circuit structure in cloud environment
CN115776375A (en) Face information identification encryption authentication and data security transmission method based on Shamir threshold
CN109787772B (en) Anti-quantum computation signcryption method and system based on symmetric key pool
CN114362926B (en) Quantum secret communication network key management communication system and method based on key pool
Jeevitha et al. Data Storage Security and Privacy in Cloud Computing
AlDerai et al. A Study of Image Encryption/Decryption by Using Elliptic Curve Cryptography ECC

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination