CN111260081B - Non-interactive privacy protection multi-party machine learning method - Google Patents

Non-interactive privacy protection multi-party machine learning method

Info

Publication number
CN111260081B
Authority
CN
China
Prior art keywords
data
training
trainer
machine learning
service provider
Prior art date
Legal status
Active
Application number
CN202010092237.2A
Other languages
Chinese (zh)
Other versions
CN111260081A (en)
Inventor
李进
李同
向晓宇
Current Assignee
Guangzhou University
Original Assignee
Guangzhou University
Priority date
Filing date
Publication date
Application filed by Guangzhou University
Priority to CN202010092237.2A
Publication of CN111260081A
Application granted
Publication of CN111260081B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/008: Cryptographic mechanisms or arrangements involving homomorphic encryption
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/602: Providing cryptographic facilities or services

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention belongs to the field of computer security and relates to a non-interactive privacy protection multi-party machine learning method. The method constructs an architecture comprising data owners, a data service provider, and trainers. The data service provider generates public parameters, initializes a public key encryption scheme with a security parameter, and generates a private/public key pair for each data owner; each trainer generates its own encryption key pair. Each data owner encrypts every record in its data set using the public key encryption scheme and uploads the encrypted data set to a trainer. The trainer and the data service provider then run a privacy protection training protocol over several training rounds, after which the trainer obtains a trained machine learning classifier model while the data service provider cannot learn the model in plaintext form. On the basis of guaranteeing the data privacy of the data owners, the invention reduces the communication and computation overhead that frequent interaction would otherwise impose on them.

Description

Non-interactive privacy protection multi-party machine learning method
Technical Field
The invention belongs to the field of computer security, and particularly relates to a non-interactive privacy protection multi-party machine learning method.
Background
Multi-party machine learning and federated learning over private data have shown good results in practical applications. Existing work on privacy-preserving machine learning covers many popular models, such as random decision trees, naive Bayes classification, k-means clustering, and neural networks. However, these solutions do not fully account for the security requirements of the data owner and the trainer. Although data owners (e.g., patients holding health records) do not necessarily have the need or the practical means to take part in training, these solutions require the data owner to participate interactively, which creates an efficiency problem: the data owner must bear the high computational and communication overhead caused by frequent interaction.
Invention patent CN109934004A, published on June 25, 2019, discloses a method for protecting privacy in a machine learning service system, comprising the following steps. Step 1, learn and express the raw data: express the high-dimensional raw data in a low-dimensional eigenspace. Step 2, learn and express the attacker data: express all query data that yield high-probability classification results in the low-dimensional eigenspace, treating them as attacker data. Step 3, compare and decide whether to answer the current query: compare the similarity between the attacker data and the raw data; if the similarity exceeds a preset threshold, conclude that answering the current query would leak privacy and refuse to answer it; otherwise, answer the query. The method protects against privacy leakage caused by repeated queries and can decide to answer or refuse a query by learning and modeling the attacker's knowledge, thereby addressing the privacy problem of machine learning query services under excessive querying; because the model itself is not changed, service quality is unaffected.
In general, although existing privacy protection machine learning methods solve their respective technical problems to some extent, they require the participation of the data owner to complete the multi-party machine learning task, which brings excessive communication and computation overhead.
Disclosure of Invention
To solve the problem in the prior art that requiring data owners to participate in a multi-party machine learning task incurs excessive communication and computation overhead, the invention provides a non-interactive privacy protection multi-party machine learning method.
The invention is realized by adopting the following technical scheme: a non-interactive privacy protection multi-party machine learning method constructs a non-interactive privacy protection machine learning architecture, and the architecture comprises three entities:
the data owner is an entity that owns a data set and provides training data for training the machine learning classifier, and does not need to obtain the training result;
the data service provider is an untrusted auxiliary server that performs the necessary encryption and computation operations during training, including issuing public parameters, issuing encryption keys to the data owners, and cooperating with the trainers in training;
the trainer collects data from the data owner as a training data set, and trains and establishes a machine learning classifier model by using the training data set;
the method comprises the following steps:
S1, the data service provider generates public parameters, initializes a public key encryption scheme with a security parameter, generates a private/public key pair for each data owner, and distributes the public key to that data owner; each trainer generates an encryption key pair and publishes its encryption public key;
S2, each data owner encrypts every record in its data set using the public key encryption scheme and uploads the encrypted data set to a trainer; after collecting the encrypted data sets, the trainer uses them as part of its training data set;
S3, the trainer and the data service provider run a privacy protection training protocol over several training rounds, after which the trainer obtains the trained machine learning classifier model while the data service provider cannot learn the content of the model in plaintext form.
Compared with the prior art, the invention has the following beneficial effects:
the architecture designed by the invention supports a multi-party machine learning task with privacy protection without participation of a data owner. By utilizing the privacy-protecting multi-party machine learning method constructed by the framework, a trainer can perform combined training on encrypted data to obtain a machine learning model under the condition of not needing participation of a data owner, so that the multi-party machine learning system has higher communication efficiency and more reliable safety. The invention also provides a multi-layer neural network based on the multi-party machine learning. The non-interactive privacy protection multi-party machine learning framework provided by the invention can effectively solve the problem of privacy security of sensitive data in the multi-party machine learning process, and reduces the communication and calculation cost of data owners in a non-interactive mode on the basis.
Drawings
FIG. 1 is a system architecture diagram of the present invention;
FIG. 2 is a flow diagram of the scheme of the present invention.
Detailed Description
In order to make the purpose and technical solution of the present invention more clearly understood, the present invention is described in detail below with reference to the accompanying drawings and embodiments; the described embodiments are only some embodiments of the invention, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
The non-interactive privacy protection multi-party machine learning method constructs a non-interactive privacy-preserving machine learning architecture, shown in FIG. 1. The architecture includes three entities, namely the data owner, the trainer, and the cryptographic service provider (also called the data service provider, e.g., a server):
(1) The data owner is an entity that owns a data set and provides training data for training the machine learning classifier; it does not need to obtain the training result. To protect its data privacy, the data owner must upload its data in an encrypted, protected form. The data owner goes offline after uploading its data and does not participate in the training of the machine learning classifier.
(2) The cryptographic service provider is an untrusted auxiliary server that undertakes some of the necessary encryption and computation operations during training, including issuing public parameters, issuing encryption keys to the data owners, and cooperating with the trainers in training.
(3) Unlike the data owner, the trainer has no data of its own. The trainer collects data from the data owners as a training data set and uses it to train and establish a machine learning classifier model. After several training rounds in cooperation with the cryptographic service provider, the trainer obtains the final model. Because the model may have commercial value, the privacy of the model must also be protected for the trainer.
That is, the data owners provide the training data for training the machine learning classifier, the trainer trains on these data to obtain a machine learning classifier model, and the cryptographic service provider assists the training. Within this architecture, a trainer can train on the data owners' encrypted data to obtain a machine learning classifier model without any online interaction with the data owners during the process. A privacy-preserving multi-party machine learning system based on this architecture is therefore more communication- and computation-efficient for data owners than one based on the conventional interactive architecture.
As shown in FIG. 2, the implementation process of the non-interactive privacy protection multi-party machine learning method of the present invention includes the following steps:
and S1, initializing. The cryptographic service provider firstly generates a public parameter n, initializes a public key encryption scheme PKE with a security parameter lambda, then generates a key pair { pk, sk } for each data owner, and distributes the public key pk to the data owner; each trainer generates a Paillier encryption key pair { pk p ,sk p And publishes its own encrypted public key pk p
The public key encryption scheme PKE has a special format and is used to encrypt the data before the data owner uploads it; within the architecture it protects the data privacy of the data owner. It is a CCA-2 secure public key encryption scheme PKE = {Gen, Enc, Dec} with modulus n. When a training sample instance x and its corresponding label y need to be encrypted, the data owner encrypts using the public key pk as follows:
(1) Randomly select a pair of mutually inverse nonsingular matrices A and A^{-1} of order a, with every element in a cyclic group of order n, and set z^{(1)} = A;
(2) Compute PKE.Enc_pk(A^{-1} · x) and set it as z^{(2)};
(3) Output z = z^{(1)} || z^{(2)} as the ciphertext.
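As a concrete illustration, the following Python sketch (a toy under stated assumptions, not the patent's reference implementation) shows the blinding half of this encryption for 2×2 matrices over Z_n with a small toy modulus. The function pke_encrypt and the key material pk are hypothetical placeholders for the CCA-2 secure PKE, which the patent does not instantiate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3233   # toy stand-in for the scheme's public modulus n

def sample_blinding_pair(n, rng):
    """Sample mutually inverse 2x2 nonsingular matrices A, A^{-1} over Z_n."""
    while True:
        A = rng.integers(0, n, size=(2, 2))
        det = int(A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]) % n
        try:
            det_inv = pow(det, -1, n)   # ValueError if det is not invertible mod n
        except ValueError:
            continue
        adj = np.array([[A[1, 1], -A[0, 1]], [-A[1, 0], A[0, 0]]])
        return A, (det_inv * adj) % n   # A * adj(A) = det(A) * I, so this inverts A mod n

def pke_encrypt(pk, message):
    # Hypothetical placeholder: stands in for the CCA-2 secure PKE.Enc of the
    # scheme, which the patent does not instantiate. Not real encryption.
    return ("pke-ciphertext", pk, tuple(int(v) for v in message))

pk = "owner-public-key"                  # placeholder key material
x = np.array([1234, 567]) % n            # a training sample instance
A, A_inv = sample_blinding_pair(n, rng)
assert np.array_equal((A @ A_inv) % n, np.array([[1, 0], [0, 1]]))

z1 = A                                   # z^(1): the blinding matrix A
z2 = pke_encrypt(pk, (A_inv @ x) % n)    # z^(2): PKE.Enc_pk(A^{-1} * x)
ciphertext = (z1, z2)                    # z = z^(1) || z^(2)
```

Consistent with the training protocol in step S3 below, only z^{(2)} is ever transmitted to the service provider: the service provider can decrypt z^{(2)} to A^{-1}·x but lacks A, while the trainer holds A but lacks the decryption key, so neither party alone recovers x.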
Paillier encryption is an additively homomorphic encryption scheme, i.e., a homomorphic encryption scheme realized over an additive group. Homomorphic encryption is a cryptographic technique based on the computational complexity of hard mathematical problems: processing homomorphically encrypted data produces an output which, when decrypted, matches the result of applying the same processing to the unencrypted original data.
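To make the additive homomorphism concrete, here is a minimal textbook Paillier sketch in Python (3.9+) with deliberately tiny toy primes; a real deployment would use large primes and a vetted library. It uses the standard choice g = n + 1, so the decryption constant is simply λ^{-1} mod n:

```python
import math, random

p, q = 17, 19                  # toy primes only; real Paillier uses large primes
n = p * q                      # n = 323
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael function lambda(n) = 144
mu = pow(lam, -1, n)           # valid because g = n + 1 is used below

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    u = pow(c, lam, n2)        # equals 1 + m*lam*n (mod n^2)
    return ((u - 1) // n) * mu % n

c1, c2 = encrypt(20), encrypt(35)
# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
assert decrypt((c1 * c2) % n2) == (20 + 35) % n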
S2, data encryption and collection. The data owner encrypts each record in its own data set using the public key encryption scheme PKE and uploads the encrypted data set to a trainer; after collecting the encrypted data sets, the trainer uses them as part of its training data set. For example, for a data set {{x_1, y_1}, ..., {x_m, y_m}} of size m, the data owner encrypts it and uploads the encrypted data set {{z_1, y_1}, ..., {z_m, y_m}} to the corresponding trainer over a secure channel.
In this step, a public key encryption scheme PKE is used to protect the data confidentiality of the data owners, and the key pairs used in the encryption scheme are generated by the cryptographic service provider and distributed to each data owner. That is, in the data encryption stage, before the data owner uploads its data to the trainer, each data record in its data set is encrypted using a public key encryption scheme PKE.
S3, privacy protection training. The trainer and the cryptographic service provider run a privacy protection training protocol over several training rounds; the trainer finally obtains the trained machine learning classifier model, while the cryptographic service provider cannot learn the content of the model in plaintext form.
The privacy-preserving training protocol is used within the non-interactive privacy-preserving machine learning architecture to complete the training of the machine learning classifier. Under this protocol, the trainer and the cryptographic service provider jointly execute a number of training rounds; in the end the trainer obtains a trained machine learning classifier model, while the cryptographic service provider cannot learn the content of the model in plaintext form.
In this embodiment, after the trainer has collected the training data in encrypted form {{z_1, y_1}, ...}, it executes the privacy-preserving training protocol together with the cryptographic service provider. The trainer initializes the classifier model θ_0 and starts a number of training rounds of interaction with the cryptographic service provider.
The invention designs a blinding algorithm so that, in each training round, the trainer selects part of the collected training set using stochastic gradient descent, hides the current model with the blinding algorithm to obtain a blinded model, and sends a training request to the cryptographic service provider; upon receiving the request, the cryptographic service provider performs the gradient computation on the blinded model and the encrypted training data, then returns the result to the trainer; the trainer de-blinds the returned result to obtain the current gradient and uses it to update the current machine learning classifier model. Once the maximum number of training rounds is reached or the machine learning classifier model converges, the privacy protection training protocol ends and the final machine learning classifier model is obtained. Specifically, in each training round t, the following steps are performed:
(1) The trainer blinds the current classifier model θ_{t-1} into a blinded model θ'_{t-1} to protect its privacy, selects a small batch of the encrypted data set z by stochastic gradient descent, and then transmits the ciphertext portion z^{(2)} together with the blinded model θ'_{t-1} to the cryptographic service provider as a request;
(2) The cryptographic service provider uses the request uploaded by the trainer to complete the gradient computation in a privacy-preserving manner, obtains the blinded gradient G', and returns G' to the trainer; the service provider itself cannot remove the blinding;
(3) The trainer de-blinds G' to obtain the gradient G of the current training round and updates the model of the current round by gradient descent: θ_t = θ_{t-1} - ηG, where η is the learning rate.
Once the maximum number of training rounds is reached or the machine learning classifier model converges, the training protocol ends and the trainer obtains the final classifier model θ.
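The patent does not spell out the concrete blinding algorithm, so the following Python sketch shows one consistent instantiation under stated assumptions, for a linear least-squares model: the trainer blinds the model with A^T while the service provider holds instances blinded with A^{-1} (as produced in step S2), so the inner products θ'^T(A^{-1}x) = θ^T x are preserved, the gradient the server computes comes back pre-blinded by A^{-1}, and the trainer de-blinds it with A. Real-valued matrices are used for the gradient arithmetic, and the full batch stands in for the sampled mini-batch:

```python
import numpy as np

rng = np.random.default_rng(42)
d, m, eta = 4, 64, 0.1
A = rng.normal(size=(d, d))            # trainer's secret blinding matrix
A_inv = np.linalg.inv(A)

# Service-provider view: blinded instances A^{-1} x_i (rows) and plaintext labels
X = rng.normal(size=(m, d))
true_theta = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_theta
X_blind = X @ A_inv.T                  # row i equals (A^{-1} x_i)^T

def server_gradient(theta_blind, X_blind, y):
    """Runs at the service provider on blinded data and a blinded model.
    Since theta_blind . (A^{-1} x) = theta . x, the residuals are exact,
    and the returned gradient equals A^{-1} times the true gradient."""
    residuals = X_blind @ theta_blind - y
    return X_blind.T @ residuals / len(y)

theta = np.zeros(d)
for t in range(2000):
    theta_blind = A.T @ theta                            # blind the model theta_{t-1}
    G_blind = server_gradient(theta_blind, X_blind, y)   # blinded gradient G'
    G = A @ G_blind                                      # de-blind: recover gradient G
    theta -= eta * G                                     # theta_t = theta_{t-1} - eta*G

assert np.allclose(theta, true_theta, atol=1e-6)
```

The loop is numerically identical to plain gradient descent on the unblinded data, which is exactly the property the protocol needs: blinding changes what each party sees, not the model the trainer ends up with.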
Here, stochastic gradient descent means that the model parameters are updated step by step along gradients computed from the training data. Common gradient descent algorithms include batch gradient descent and stochastic gradient descent. Batch gradient descent is inefficient when training on larger data sets, whereas the stochastic gradient descent algorithm needs only a small portion of the data set to complete each gradient step. Define E_S as the loss function over the training batch S, θ as the weight set of the multi-layer perceptron, η as the learning rate, and θ_{t-1} as the weight set after training round t-1. The stochastic gradient update is defined as:
θ_t = θ_{t-1} - η ∇_θ E_S(θ_{t-1})
Deep learning based on neural networks is one of the most popular machine learning techniques at present. This embodiment adopts an architecture oriented to a multi-layer perceptron neural network, and the classifier model is a neural network model. Neural network learning aims to extract features from high-dimensional data and use them to produce a model that maps inputs to outputs. The multi-layer perceptron is the most common neural network model. In a multi-layer perceptron, the input to each hidden-layer node is the output of the previous layer (with a bias applied); each hidden node computes a weighted sum of its inputs, and its output is the result of a nonlinear activation function applied to that sum. Weight learning for a neural network is a nonlinear optimization problem. In supervised learning, the objective function is the error of the forward-propagated output on the training examples, and gradient descent algorithms are commonly used to solve this optimization problem: in each training round, the trainer computes the gradient of the nonlinear objective function on the training data and updates the weights to reduce the loss. Over multiple rounds of training, the model reaches a local optimum.
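As a plaintext reference point for what one training round computes (no blinding or encryption is shown here; all names and sizes are illustrative assumptions), a minimal one-hidden-layer perceptron trained by gradient descent on the squared error might look like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))                            # mini-batch: 32 examples, 4 features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)    # toy binary labels

W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)   # hidden layer, 8 nodes
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)   # output layer
eta = 0.5

for t in range(500):
    # Forward pass: each hidden node applies the activation to its weighted input.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error through the sigmoids.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    # Gradient descent update: move the weights against the gradient.
    W2 -= eta * h.T @ d_out / len(X); b2 -= eta * d_out.mean(axis=0)
    W1 -= eta * X.T @ d_h / len(X);  b1 -= eta * d_h.mean(axis=0)
```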
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (4)

1. A non-interactive privacy protection multi-party machine learning method is characterized in that a non-interactive privacy protection machine learning framework is constructed, and the framework comprises three entities:
the data owner is an entity that owns a data set and provides training data for training the machine learning classifier, and does not need to obtain the training result;
the data service provider is an untrusted auxiliary server that performs the necessary encryption and computation operations during training, including issuing public parameters, issuing encryption keys to the data owners, and cooperating with the trainers in training;
a trainer collects data from a data owner as a training data set, trains by using the training data set and establishes a machine learning classifier model;
the method comprises the following steps:
S1, the data service provider generates public parameters, initializes a public key encryption scheme with a security parameter, generates a private/public key pair for each data owner, and distributes the public key to that data owner; each trainer generates an encryption key pair and publishes its encryption public key;
S2, each data owner encrypts every record in its data set using the public key encryption scheme and uploads the encrypted data set to a trainer; after collecting the encrypted data sets, the trainer uses them as part of its training data set;
S3, the trainer and the data service provider run a privacy protection training protocol over several training rounds, after which the trainer obtains the trained machine learning classifier model while the data service provider cannot learn the content of the model in plaintext form;
in step S1, the public key encryption scheme is a CCA-2 secure public key encryption scheme PKE = {Gen, Enc, Dec} with modulus n; when a training sample instance x and its corresponding label y need to be encrypted, the data owner encrypts using the public key pk as follows:
(1) Randomly select a pair of mutually inverse nonsingular matrices A and A^{-1} of order a, with every element in a cyclic group of order n, and set z^{(1)} = A;
(2) Compute PKE.Enc_pk(A^{-1} · x) and set it as z^{(2)};
(3) Output z = z^{(1)} || z^{(2)} as the ciphertext;
in step S3, in each training round, the trainer selects part of the collected training set using stochastic gradient descent, hides the current model with a blinding algorithm to obtain a blinded model, and sends a training request to the data service provider; upon receiving the request, the data service provider performs the gradient computation on the blinded model and the encrypted training data, then returns the result to the trainer; the trainer de-blinds the returned result to obtain the current gradient and uses it to update the current machine learning classifier model; once the maximum number of training rounds is reached or the machine learning classifier model converges, the privacy protection training protocol ends and the final machine learning classifier model is obtained.
2. The non-interactive privacy preserving multi-party machine learning method according to claim 1, wherein the encryption key pair generated by each trainer in step S1 is a Paillier encryption key pair.
3. The non-interactive privacy preserving multi-party machine learning method of claim 2, wherein the Paillier encryption is an additively homomorphic encryption scheme, i.e., a homomorphic encryption scheme realized over an additive group.
4. The non-interactive privacy preserving multi-party machine learning method of claim 1, wherein in each training round t, the following steps are performed:
(1) The trainer blinds the current classifier model θ_{t-1} into a blinded model θ'_{t-1} to protect its privacy, selects a small batch of the encrypted data set z by stochastic gradient descent, and then transmits the ciphertext portion z^{(2)} together with the blinded model θ'_{t-1} to the data service provider as a request;
(2) The data service provider uses the request uploaded by the trainer to complete the gradient computation in a privacy-preserving manner, obtains the blinded gradient G', and returns G' to the trainer;
(3) The trainer de-blinds the blinded gradient G' to obtain the gradient G of the current training round and updates the model of the current round by gradient descent: θ_t = θ_{t-1} - ηG, where η is the learning rate.
CN202010092237.2A 2020-02-14 2020-02-14 Non-interactive privacy protection multi-party machine learning method Active CN111260081B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010092237.2A CN111260081B (en) 2020-02-14 2020-02-14 Non-interactive privacy protection multi-party machine learning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010092237.2A CN111260081B (en) 2020-02-14 2020-02-14 Non-interactive privacy protection multi-party machine learning method

Publications (2)

Publication Number Publication Date
CN111260081A CN111260081A (en) 2020-06-09
CN111260081B (en) 2023-03-14

Family

ID=70949264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010092237.2A Active CN111260081B (en) 2020-02-14 2020-02-14 Non-interactive privacy protection multi-party machine learning method

Country Status (1)

Country Link
CN (1) CN111260081B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111859267B (en) * 2020-06-22 2024-04-26 复旦大学 Operation method of privacy protection machine learning activation function based on BGW protocol
CN111756848B (en) * 2020-06-28 2021-05-11 河海大学 QoS optimization method based on federal learning and mobile perception under mobile edge environment
US11847217B2 (en) * 2020-06-30 2023-12-19 Mcafee, Llc Methods and apparatus to provide and monitor efficacy of artificial intelligence models
CN111966875B (en) * 2020-08-18 2023-08-22 中国银行股份有限公司 Sensitive information identification method and device
CN112201342B (en) * 2020-09-27 2024-04-26 博雅正链(北京)科技有限公司 Medical auxiliary diagnosis method, device, equipment and storage medium based on federal learning
CN112270415B (en) * 2020-11-25 2024-03-22 矩阵元技术(深圳)有限公司 Training data preparation method, device and equipment for encryption machine learning
CN112487481B (en) * 2020-12-09 2022-06-10 重庆邮电大学 Verifiable multi-party k-means federal learning method with privacy protection
CN113810168A (en) * 2020-12-30 2021-12-17 京东科技控股股份有限公司 Training method of machine learning model, server and computer equipment
CN112906052B (en) * 2021-03-09 2022-12-23 西安电子科技大学 Aggregation method of multi-user gradient permutation in federated learning
CN112949741B (en) * 2021-03-18 2023-04-07 西安电子科技大学 Convolutional neural network image classification method based on homomorphic encryption
WO2024074226A1 (en) * 2022-10-06 2024-04-11 Telefonaktiebolaget Lm Ericsson (Publ) Training an ensemble of models

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108259158A (en) * 2018-01-11 2018-07-06 西安电子科技大学 Efficient and secret protection individual layer perceptron learning method under a kind of cloud computing environment
CN110011784A (en) * 2019-04-04 2019-07-12 东北大学 Support the KNN classified service system and method for secret protection
CN110059501A (en) * 2019-04-16 2019-07-26 广州大学 A kind of safely outsourced machine learning method based on difference privacy


Also Published As

Publication number Publication date
CN111260081A (en) 2020-06-09

Similar Documents

Publication Title
CN111260081B (en) Non-interactive privacy protection multi-party machine learning method
Liu et al. Blockchain and federated learning for collaborative intrusion detection in vehicular edge computing
CN112149160B (en) Homomorphic pseudo-random number-based federated learning privacy protection method and system
CN111600707B (en) Decentralized federal machine learning method under privacy protection
Chen et al. Impulsive synchronization of reaction–diffusion neural networks with mixed delays and its application to image encryption
CN113420232B (en) Privacy protection-oriented federated recommendation method for neural network of graph
CN113298268B (en) Vertical federal learning method and device based on anti-noise injection
Qin et al. Federated learning-based network intrusion detection with a feature selection approach
CN113128701A (en) Sample sparsity-oriented federal learning method and system
CN112862001A (en) Decentralized data modeling method under privacy protection
CN110209994B (en) Matrix decomposition recommendation method based on homomorphic encryption
CN111104968B (en) Safety SVM training method based on block chain
CN108959891B (en) Electroencephalogram identity authentication method based on secret sharing
CN114363043B (en) Asynchronous federal learning method based on verifiable aggregation and differential privacy in peer-to-peer network
CN113240129A (en) Multi-type task image analysis-oriented federal learning system
CN116708009A (en) Network intrusion detection method based on federal learning
Kumar Technique for security of multimedia using neural network
CN114978533A (en) Verifiable security aggregation method based on weighted layered asynchronous federated learning
CN112560059A (en) Vertical federal model stealing defense method based on neural pathway feature extraction
Cheng et al. SecureAD: A secure video anomaly detection framework on convolutional neural network in edge computing environment
CN111581648A (en) Method of federal learning to preserve privacy in irregular users
Asad et al. Secure and efficient blockchain-based federated learning approach for VANETs
CN113326947A (en) Joint learning model training method and system
CN117294469A (en) Privacy protection method for federal learning
He et al. Cryptoeyes: Privacy preserving classification over encrypted images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant