CN113761557A - Multi-party deep learning privacy protection method based on fully homomorphic encryption algorithm - Google Patents

Multi-party deep learning privacy protection method based on fully homomorphic encryption algorithm

Info

Publication number
CN113761557A
CN113761557A
Authority
CN
China
Prior art keywords
server
deep learning
fully homomorphic
encryption algorithm
parameter server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111025100.6A
Other languages
Chinese (zh)
Inventor
郑超
窦凤虎
万俊平
殷丽华
孙哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jizhi Guangzhou Information Technology Co ltd
Original Assignee
Jizhi Guangzhou Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jizhi Guangzhou Information Technology Co ltd filed Critical Jizhi Guangzhou Information Technology Co ltd
Priority to CN202111025100.6A priority Critical patent/CN113761557A/en
Publication of CN113761557A publication Critical patent/CN113761557A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioethics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a multi-party deep learning privacy protection method based on a fully homomorphic encryption algorithm, comprising the following steps: S1, respectively constructing a parameter server, a decryption server and N users P = {P_i | 1 ≤ i ≤ N}. The method reduces the possibility of information leakage and, while making privacy safer, avoids the time consumption and excessive resource consumption of encrypting and decrypting a large amount of data.

Description

Multi-party deep learning privacy protection method based on fully homomorphic encryption algorithm
Technical Field
The invention relates to the field of deep learning and cryptography, in particular to a multi-party deep learning privacy protection method based on a fully homomorphic encryption algorithm.
Background
Homomorphic encryption is an encryption technique that can protect data privacy: computations are carried out directly on ciphertexts, and decrypting the result yields the same answer as computing on the plaintexts. An algorithm that satisfies both additive and multiplicative homomorphism without restriction is a fully homomorphic algorithm. Deep learning is a newer research direction in the field of machine learning; its ultimate goal is to give machines human-like analysis and learning ability, recognizing data such as text, images and sound. It is a complex family of machine learning algorithms whose performance in speech and image recognition far exceeds earlier related techniques.
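The homomorphic property described above can be made concrete with a toy Paillier scheme. This is only a sketch for intuition: Paillier is additively homomorphic (multiplying ciphertexts adds the plaintexts) but not fully homomorphic, unlike the CKKS/BFV schemes this method actually relies on, and the tiny hard-coded primes are completely insecure.

```python
import math
import random

def paillier_keygen(p=17, q=19):
    """Toy Paillier keypair over tiny primes (illustration only, insecure)."""
    n = p * q
    n2 = n * n
    lam = math.lcm(p - 1, q - 1)          # private exponent lambda
    g = n + 1                             # standard generator choice
    l_g = (pow(g, lam, n2) - 1) // n      # L(g^lambda mod n^2)
    mu = pow(l_g, -1, n)                  # modular inverse of L(...)
    return (n, g), (lam, mu, n)

def paillier_encrypt(pk, m):
    """Enc(m) = g^m * r^n mod n^2 with random r coprime to n."""
    n, g = pk
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def paillier_decrypt(sk, c):
    """Dec(c) = L(c^lambda mod n^2) * mu mod n."""
    lam, mu, n = sk
    n2 = n * n
    return (((pow(c, lam, n2) - 1) // n) * mu) % n
```

Multiplying two ciphertexts yields an encryption of the sum of the plaintexts, e.g. Dec(Enc(12) * Enc(30) mod n²) = 42; a fully homomorphic scheme additionally supports unbounded multiplication of plaintexts under encryption.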
With the continuous progress of science and technology, fully homomorphic encryption algorithms and deep learning are widely applied across industries. For example, medical institutions often need to collect data on patients' physiological characteristics and model and analyze them to obtain an optimal treatment plan. Such deep learning models rely on large amounts of training data: in a traditional centralized deep learning scenario, the service provider who trains the model must first collect large-scale, high-quality data from clients and then begin training. However, if the service provider does not effectively desensitize the data while gathering it, the data may reveal users' private information, so user privacy must be protected.
In the related art, the traditional approach is that users do not upload private data; instead, each user trains the model on a local device and submits the model gradients, and the service provider aggregates the gradients uploaded by all users and updates the model. However, if the service provider launches membership inference attacks, attribute inference attacks and the like against the uploaded gradients, the users' private information can still be stolen. In addition, in the prior art, the convolutional neural network image classification method based on homomorphic encryption of application No. 202110288782 solves the problems that private information easily leaks and collusion attacks cannot be resisted, but its parameter server and auxiliary server still need to perform a large amount of homomorphic computation, and when the model is large the encryption time is difficult to estimate. Existing schemes therefore usually apply a large amount of homomorphic computation to the model gradients to solve the privacy problem, which greatly increases time and resource consumption at scale and prevents further improvement of performance during actual operation.
Therefore, there is a need to provide a multi-party deep learning privacy protection method based on fully homomorphic encryption algorithm to solve the above technical problems.
Disclosure of Invention
The invention provides a multi-party deep learning privacy protection method based on a fully homomorphic encryption algorithm, which solves the problem that large-scale homomorphic operations greatly increase the time and resource consumption of existing privacy protection methods.
In order to solve the technical problem, the multi-party deep learning privacy protection method based on the fully homomorphic encryption algorithm provided by the invention comprises the following steps:
S1, respectively constructing a parameter server, a decryption server and N users P = {P_i | 1 ≤ i ≤ N}, wherein the parameter server constructs a model to be trained and divides the model into a feature extractor E, an intermediate layer M and a service classifier C;
S2, the parameter server initializes the model to be trained and distributes the model parameters to each participant P_i;
S3, the decryption server initializes the fully homomorphic encryption algorithm, generates a private key sk and a public key pk, and discloses the public key pk;
S4, each user P_i trains the model with local data according to the privacy target requirement, and records the data features feat_i output by the feature extractor E, the weight parameters W_i of the intermediate layer, and the training data label;
S5, each user P_i encrypts the data features feat_i, the intermediate-layer weight parameters W_i and the training data label with the fully homomorphic public key pk to obtain Enc(feat_i), Enc(W_i) and Enc(label), and uploads them to the parameter server;
S6, the parameter server aggregates the intermediate-layer parameters Enc(W_i) transmitted by all users P_i, feeds all the data feature ciphertexts Enc(feat_i) into the intermediate layer, and sends the intermediate-layer output Enc(output_M) to the decryption server;
S7, the decryption server decrypts the intermediate-layer output with the fully homomorphic private key sk to obtain output_M and returns it to the parameter server;
S8, the parameter server feeds output_M into the service classifier C to obtain output_C;
S9, parameter server encrypted outputCCalculating a loss function value of the service classifier C by using Enc (label), and sending the loss function value to a decryption server;
S10, the decryption server decrypts the loss function value with the fully homomorphic private key sk and returns it to the parameter server;
S11, the parameter server computes the gradient from the loss function value, updates the service classifier C and sends the updated model to each participant P_i.
Preferably, the parameter server in S1 is configured to construct a model to be trained and divide the model into a feature extractor E, an intermediate layer M, and a service classifier C; after the model is distributed to a plurality of users, collecting parameter ciphertexts and data feature ciphertexts uploaded by the users and carrying out operation; and requesting a decryption service from the decryption server to obtain a loss function value for updating the service classifier C.
Preferably, the decryption server in S1 is configured to initialize a fully homomorphic encryption algorithm, and disclose the encryption public key to the parameter server and the user, and further provide a decryption service for the parameter server.
Preferably, in S1, the user trains the deep learning model by using local data, sends the encrypted data features and model parameters to the parameter server, and updates the local model according to the result returned by the parameter server.
Preferably, the deep learning model has a specific structure as follows: convolutional layer 1-activation function 1-pooling layer 1-linear layer 1-convolutional layer 2-activation function 2-pooling layer 2-linear layer 2, where the middle layer M is composed of linear layer 1.
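The split of this structure into E, M and C can be sketched with hypothetical stand-in layer functions (toy arithmetic, not real convolutions; all numeric choices are illustrative assumptions) to show that the pipeline factors cleanly as C(M(E(x))):

```python
# Hypothetical stand-in layers following the patent's structure:
# E = conv1-act1-pool1, M = linear1, C = conv2-act2-pool2-linear2.
def conv1(x):   return [2.0 * v for v in x]                          # convolutional layer 1
def act1(x):    return [max(0.0, v) for v in x]                      # activation function 1
def pool1(x):   return [max(x[i:i + 2]) for i in range(0, len(x), 2)]  # pooling layer 1
def linear1(x): return [0.5 * sum(x)]                                # linear layer 1 = middle layer M
def conv2(x):   return [3.0 * v for v in x]                          # convolutional layer 2
def act2(x):    return [max(0.0, v) for v in x]                      # activation function 2
def pool2(x):   return [max(x)]                                      # pooling layer 2
def linear2(x): return [sum(x) - 1.0]                                # linear layer 2

def compose(*fs):
    """Chain layer functions left to right."""
    def run(x):
        for f in fs:
            x = f(x)
        return x
    return run

E = compose(conv1, act1, pool1)            # feature extractor, runs on the user side
M = linear1                                # intermediate layer, evaluated on ciphertexts
C = compose(conv2, act2, pool2, linear2)   # service classifier, runs on the server

full_model = compose(conv1, act1, pool1, linear1,
                     conv2, act2, pool2, linear2)
```

Splitting at linear layer 1 means only a single linear layer ever has to be evaluated under encryption, while E stays on the user's device and C runs in plaintext on the parameter server.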
Preferably, the fully homomorphic encryption algorithm in S3 uses a CKKS algorithm or a BFV algorithm, and an auxiliary key is generated during initialization of the CKKS algorithm for multiplication and is held by the parameter server.
Preferably, the loss function in S10 is evaluated on ciphertexts to obtain Enc(f(x)) (the concrete formula is given as an image in the original); the parameter server then sends Enc(f(x)) to the decryption server, and f(x) is obtained after decryption.
Preferably, when the parameter server updates the service classifier C in S11, the gradient is computed by a formula (given as an image in the original) in which f is the loss function.
Compared with the related technology, the multi-party deep learning privacy protection method based on the fully homomorphic encryption algorithm has the following beneficial effects:
the invention provides a multi-party deep learning privacy protection method based on an all homomorphic encryption algorithm, (1) a deep learning model is jointly trained by constructing ciphertext data uploaded by a plurality of participants, the output value of a neural network is calculated by using the ciphertext on a single-layer neural network by utilizing the all homomorphic encryption characteristic of a parameter server, and the deep learning model trained by a plurality of data sources is integrated, so that the large-scale operation is greatly reduced, the possibility of information leakage can be effectively reduced, the time consumption and excessive resource consumption of a large amount of data encryption and decryption are avoided under the condition of ensuring the privacy to be safer, and the performance in the actual operation process is further improved.
(2) When training on local data, the user sets an adjustable (e.g. dynamic) training target for the feature extractor and uses adversarial training, so that private data features are filtered out of the feature extractor and only data features relevant to the training target are uploaded. In addition, homomorphic encryption is still used to hide the data, providing dual privacy protection and greatly improving privacy protection strength.
Drawings
FIG. 1 is a flow chart of the multi-party deep learning privacy protection method based on a fully homomorphic encryption algorithm provided by the invention;
FIG. 2 shows the interaction among the user, the parameter server and the decryption server in the multi-party deep learning privacy protection method based on a fully homomorphic encryption algorithm provided by the invention.
Detailed Description
The invention is further described with reference to the following figures and embodiments.
Please refer to fig. 1 and fig. 2 in combination, wherein fig. 1 is a flowchart of the multi-party deep learning privacy protection method based on a fully homomorphic encryption algorithm provided by the present invention, and fig. 2 shows the interaction among the user, the parameter server and the decryption server. The method comprises the following steps:
S1, respectively constructing a parameter server, a decryption server and N users P = {P_i | 1 ≤ i ≤ N}. The parameter server constructs a model to be trained and divides it into a feature extractor E, an intermediate layer M and a service classifier C; after distributing the model to the users, it collects the parameter ciphertexts and data feature ciphertexts uploaded by the users, operates on them, and requests decryption service from the decryption server to obtain the loss function value for updating the service classifier C. The decryption server initializes the fully homomorphic encryption algorithm, discloses the encryption public key to the parameter server and the users, and provides decryption service for the parameter server. Each user trains the deep learning model with local data, sends the encrypted data features and model parameters to the parameter server, and updates the local model according to the result returned by the parameter server. The deep learning model has the specific structure: convolutional layer 1 - activation function 1 - pooling layer 1 - linear layer 1 - convolutional layer 2 - activation function 2 - pooling layer 2 - linear layer 2, wherein the intermediate layer M consists of linear layer 1;
S2, the parameter server initializes the model to be trained and distributes the model parameters to each participant P_i;
S3, the decryption server initializes the fully homomorphic encryption algorithm, generates a private key sk and a public key pk, and discloses the public key pk; the fully homomorphic encryption algorithm uses the CKKS algorithm or the BFV algorithm, and during initialization of the CKKS algorithm an auxiliary key is generated for multiplication and held by the parameter server;
S4, each user P_i trains the model with local data according to the privacy target requirement, and records the data features feat_i output by the feature extractor E, the weight parameters W_i of the intermediate layer, and the training data label;
S5, each user P_i encrypts the data features feat_i, the intermediate-layer weight parameters W_i and the training data label with the fully homomorphic public key pk to obtain Enc(feat_i), Enc(W_i) and Enc(label), and uploads them to the parameter server;
S6, the parameter server aggregates the intermediate-layer parameters Enc(W_i) transmitted by all users P_i, feeds all the data feature ciphertexts Enc(feat_i) into the intermediate layer, and sends the intermediate-layer output Enc(output_M) to the decryption server;
S7, the decryption server decrypts the intermediate-layer output with the fully homomorphic private key sk to obtain output_M and returns it to the parameter server;
S8, the parameter server feeds output_M into the service classifier C to obtain output_C;
S9, parameter server encrypted outputCCalculating a loss function value of the service classifier C by using Enc (label), and sending the loss function value to a decryption server;
S10, the decryption server decrypts the loss function value with the fully homomorphic private key sk and returns it to the parameter server; the loss function is evaluated on ciphertexts to obtain Enc(f(x)) (the concrete formula is given as an image in the original), the parameter server sends Enc(f(x)) to the decryption server, and f(x) is obtained after decryption;
S11, the parameter server computes the gradient from the loss function value, updates the service classifier C and sends the updated model to each participant P_i; when the parameter server updates the service classifier C, the gradient is computed by a formula (given as an image in the original) in which f is the loss function.
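The S1–S11 message flow above can be simulated end to end with a mock "ciphertext" wrapper. This is purely an illustration of which party computes on what: the wrapper provides no cryptographic security (it merely stands in for CKKS/BFV), and the scalar features, mean aggregation, single-weight classifier and squared-error loss are all assumptions chosen for brevity.

```python
class Ct:
    """Mock ciphertext: supports the ciphertext-side additions and
    multiplications the parameter server needs, without exposing _v."""
    def __init__(self, v): self._v = v
    def _val(self, o): return o._v if isinstance(o, Ct) else o
    def __add__(self, o): return Ct(self._v + self._val(o))
    __radd__ = __add__
    def __mul__(self, o): return Ct(self._v * self._val(o))
    __rmul__ = __mul__

class DecryptionServer:
    """Sole holder of sk; performs the decryptions of S7 and S10."""
    def decrypt(self, ct): return ct._v

dec_server = DecryptionServer()
enc = Ct  # encryption under the public key pk; in this mock, just wrapping

# S4-S5: three users upload encrypted features, intermediate-layer
# weights and a label (one scalar per user for brevity).
feat_cts = [enc(1.0), enc(2.0), enc(3.0)]
w_cts = [enc(0.5), enc(0.7), enc(0.6)]
label_ct = enc(4.0)

# S6: the parameter server aggregates the weight ciphertexts (mean here)
# and evaluates the intermediate layer M on the feature ciphertexts.
w_agg = sum(w_cts, enc(0.0)) * (1.0 / len(w_cts))
out_M_cts = [f * w_agg for f in feat_cts]

# S7: the decryption server returns the plaintext intermediate output.
out_M = [dec_server.decrypt(ct) for ct in out_M_cts]

# S8: the plaintext service classifier C (a single weight here).
c_weight = 1.0
output_C = c_weight * sum(out_M)

# S9: encrypted squared-error loss from output_C and Enc(label).
diff_ct = label_ct + (-output_C)
loss_ct = diff_ct * diff_ct

# S10: the decryption server opens the loss for the parameter server,
# which would then compute the gradient and update C (S11).
loss = dec_server.decrypt(loss_ct)
```

Note that the parameter server never unwraps a `Ct` itself: everything it touches before S7 and S10 stays inside the mock ciphertext, mirroring the division of roles in the protocol.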
Compared with the related technology, the multi-party deep learning privacy protection method based on the fully homomorphic encryption algorithm has the following beneficial effects:
(1) A deep learning model is jointly trained from ciphertext data uploaded by multiple participants: using the fully homomorphic property, the parameter server evaluates the output of a single-layer neural network directly on ciphertexts and integrates a deep learning model trained from multiple data sources. This greatly reduces large-scale homomorphic operations, effectively lowers the possibility of information leakage, avoids the time and resource cost of encrypting and decrypting large amounts of data while making privacy safer, and further improves performance in actual operation.
(2) When training on local data, the user sets an adjustable (e.g. dynamic) training target for the feature extractor and uses adversarial training, so that private data features are filtered out of the feature extractor and only data features relevant to the training target are uploaded. In addition, homomorphic encryption is still used to hide the data, providing dual privacy protection and greatly improving privacy protection strength.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. A multi-party deep learning privacy protection method based on a fully homomorphic encryption algorithm is characterized by comprising the following steps:
S1, respectively constructing a parameter server, a decryption server and N users P = {P_i | 1 ≤ i ≤ N}, wherein the parameter server constructs a model to be trained and divides the model into a feature extractor E, an intermediate layer M and a service classifier C;
S2, the parameter server initializes the model to be trained and distributes the model parameters to each participant P_i;
S3, the decryption server initializes a fully homomorphic encryption algorithm, generates a private key sk and a public key pk, and discloses the public key pk;
S4, each user P_i trains the model with local data according to the privacy target requirement, and records the data features feat_i output by the feature extractor E, the weight parameters W_i of the intermediate layer, and the training data label;
S5, each user P_i encrypts the data features feat_i, the intermediate-layer weight parameters W_i and the training data label with the fully homomorphic public key pk to obtain Enc(feat_i), Enc(W_i) and Enc(label), and uploads them to the parameter server;
S6, the parameter server aggregates the intermediate-layer parameters Enc(W_i) transmitted by all users P_i, feeds all the data feature ciphertexts Enc(feat_i) into the intermediate layer, and sends the intermediate-layer output Enc(output_M) to the decryption server;
S7, the decryption server decrypts the intermediate-layer output with the fully homomorphic private key sk to obtain output_M and returns it to the parameter server;
S8, the parameter server feeds output_M into the service classifier C to obtain output_C;
S9, the parameter server computes the encrypted loss function value of the service classifier C from output_C and Enc(label), and sends it to the decryption server;
S10, the decryption server decrypts the loss function value with the fully homomorphic private key sk and returns it to the parameter server;
S11, the parameter server computes the gradient from the loss function value, updates the service classifier C and sends the updated model to each participant P_i.
2. The privacy protection method for multi-party deep learning based on the fully homomorphic encryption algorithm according to claim 1, wherein the parameter server in S1 is used for constructing and dividing a model to be trained into a feature extractor E, an intermediate layer M and a service classifier C; after the model is distributed to a plurality of users, collecting parameter ciphertexts and data feature ciphertexts uploaded by the users and carrying out operation; and requesting a decryption service from the decryption server to obtain a loss function value for updating the service classifier C.
3. The privacy protection method for multi-party deep learning based on fully homomorphic encryption algorithm as claimed in claim 1, wherein the decryption server in S1 is configured to initialize the fully homomorphic encryption algorithm, and to disclose the encrypted public key to the parameter server and the user, and further to provide decryption service for the parameter server.
4. The multi-party deep learning privacy protection method based on a fully homomorphic encryption algorithm according to claim 1, wherein in S1 the user trains the deep learning model using local data, sends the encrypted data features and model parameters to the parameter server, and updates the local model according to the result returned by the parameter server.
5. The privacy protection method for multi-party deep learning based on fully homomorphic encryption algorithm according to claim 4, wherein the deep learning model has a specific structure: convolutional layer 1-activation function 1-pooling layer 1-linear layer 1-convolutional layer 2-activation function 2-pooling layer 2-linear layer 2, where the middle layer M is composed of linear layer 1.
6. The fully homomorphic encryption algorithm-based multi-party deep learning privacy protection method of claim 1, wherein the fully homomorphic encryption algorithm in S3 uses CKKS algorithm or BFV algorithm, and an auxiliary key is generated during initialization of CKKS algorithm for multiplication and held by the parameter server.
7. The multi-party deep learning privacy protection method based on a fully homomorphic encryption algorithm according to claim 1, wherein in S10 the loss function is evaluated on ciphertexts to obtain Enc(f(x)) (the concrete formula is given as an image in the original); the parameter server then sends Enc(f(x)) to the decryption server, and f(x) is obtained after decryption.
8. The multi-party deep learning privacy protection method based on a fully homomorphic encryption algorithm according to claim 1, wherein when the parameter server updates the service classifier C in S11, the gradient is computed by a formula (given as an image in the original) in which f is the loss function.
CN202111025100.6A 2021-09-02 2021-09-02 Multi-party deep learning privacy protection method based on fully homomorphic encryption algorithm Withdrawn CN113761557A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111025100.6A CN113761557A (en) 2021-09-02 2021-09-02 Multi-party deep learning privacy protection method based on fully homomorphic encryption algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111025100.6A CN113761557A (en) 2021-09-02 2021-09-02 Multi-party deep learning privacy protection method based on fully homomorphic encryption algorithm

Publications (1)

Publication Number Publication Date
CN113761557A true CN113761557A (en) 2021-12-07

Family

ID=78792660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111025100.6A Withdrawn CN113761557A (en) 2021-09-02 2021-09-02 Multi-party deep learning privacy protection method based on fully homomorphic encryption algorithm

Country Status (1)

Country Link
CN (1) CN113761557A (en)


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832729A (en) * 2020-07-06 2020-10-27 东南数字经济发展研究院 Distributed deep learning reasoning deployment method for protecting data privacy
CN114417427A (en) * 2022-03-30 2022-04-29 浙江大学 Deep learning-oriented data sensitivity attribute desensitization system and method
CN114417427B (en) * 2022-03-30 2022-08-02 浙江大学 Deep learning-oriented data sensitivity attribute desensitization system and method
CN115314211A (en) * 2022-08-08 2022-11-08 济南大学 Privacy protection machine learning training and reasoning method and system based on heterogeneous computing
CN115314211B (en) * 2022-08-08 2024-04-30 济南大学 Privacy protection machine learning training and reasoning method and system based on heterogeneous computing
TWI832627B (en) * 2022-08-16 2024-02-11 大陸商中國銀聯股份有限公司 A biological feature extraction method and device
CN115580390B (en) * 2022-08-24 2023-08-25 京信数据科技有限公司 Multi-scene mode calculation method and system under safe multi-party calculation
CN115580390A (en) * 2022-08-24 2023-01-06 京信数据科技有限公司 Multi-scene mode calculation method and system under safe multi-party calculation
CN115426206B (en) * 2022-11-07 2023-03-24 中邮消费金融有限公司 Graph anti-fraud capability enabling method and system based on homomorphic encryption technology
CN115426206A (en) * 2022-11-07 2022-12-02 中邮消费金融有限公司 Graph anti-fraud capability enabling method and system based on homomorphic encryption technology
CN116127371A (en) * 2022-12-06 2023-05-16 东北林业大学 Multi-user model joint iteration method integrating prior distribution and homomorphic chaotic encryption
CN116127371B (en) * 2022-12-06 2023-09-08 东北林业大学 Multi-user model joint iteration method integrating prior distribution and homomorphic chaotic encryption
CN116561813A (en) * 2023-07-12 2023-08-08 中汇丰(北京)科技有限公司 Safety management system applied to archive information
CN116561813B (en) * 2023-07-12 2023-09-26 中汇丰(北京)科技有限公司 Safety management system applied to archive information

Similar Documents

Publication Publication Date Title
CN113761557A (en) Multi-party deep learning privacy protection method based on fully homomorphic encryption algorithm
CN109684855B (en) Joint deep learning training method based on privacy protection technology
Zhang et al. Homomorphic encryption-based privacy-preserving federated learning in iot-enabled healthcare system
CN110572253B (en) Method and system for enhancing privacy of federated learning training data
Wu et al. An adaptive federated learning scheme with differential privacy preserving
CN111259443B (en) PSI (program specific information) technology-based method for protecting privacy of federal learning prediction stage
CN109194507B (en) Non-interactive privacy protection neural network prediction method
CN108712260A (en) The multi-party deep learning of privacy is protected to calculate Proxy Method under cloud environment
CN113836556B (en) Federal learning-oriented decentralized function encryption privacy protection method and system
CN113221105B (en) Robustness federated learning algorithm based on partial parameter aggregation
CN112118099B (en) Distributed multi-task learning privacy protection method and system for resisting inference attack
CN111581648B (en) Method of federal learning to preserve privacy in irregular users
Lyu et al. Towards fair and decentralized privacy-preserving deep learning with blockchain
CN111460478A (en) Privacy protection method for collaborative deep learning model training
CN115392487A (en) Privacy protection nonlinear federal support vector machine training method and system based on homomorphic encryption
CN115225405B (en) Matrix decomposition method based on security aggregation and key exchange under federal learning framework
CN113240129A (en) Multi-type task image analysis-oriented federal learning system
WO2023134070A1 (en) Decentralized federated clustering method and apparatus, and electronic device and storage medium
CN116167088A (en) Method, system and terminal for privacy protection in two-party federal learning
CN111901328B (en) Attribute-based encryption method based on prime order group
CN111159727B (en) Multi-party cooperation oriented Bayes classifier safety generation system and method
CN116865938A (en) Multi-server federation learning method based on secret sharing and homomorphic encryption
CN112101555A (en) Method and device for multi-party combined training model
CN116561799A (en) Multiparty privacy set operation method based on cloud server
CN111581663B (en) Federal deep learning method for protecting privacy and facing irregular users

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20211207