CN114363043B - Asynchronous federated learning method based on verifiable aggregation and differential privacy in a peer-to-peer network - Google Patents


Info

Publication number: CN114363043B
Application number: CN202111657350.1A
Authority: CN (China)
Legal status: Active (granted)
Inventors: 张磊, 高圆圆, 姚鑫
Current and original assignee: East China Normal University
Other versions: CN114363043A (application publication)
Other languages: Chinese (zh)
Events: application filed by East China Normal University; priority to CN202111657350.1A; publication of CN114363043A; application granted; publication of CN114363043B

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 — Reducing energy consumption in communication networks
    • Y02D30/50 — Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate


Abstract

The invention discloses an asynchronous federated learning method based on verifiable aggregation and differential privacy in a peer-to-peer network. The method comprises five stages: system initialization, registration, local model training, model distribution, and model aggregation. To address data privacy protection and model performance in asynchronous federated learning, the invention provides a verifiable federated learning scheme based on local test-set evaluation and cosine-value detection. Before a local model is updated, effective model updates are screened out by a model verification scheme, and updates that perform poorly in the verification test are discarded, thereby improving the performance of the aggregated model. At the same time, a privacy protection method combining local differential privacy is designed into the scheme to ensure the security of user data. In the asynchronous federated learning setting, the invention achieves the design goals of high reliability, high security, and high performance, and has strong practical application value.

Description

Asynchronous federated learning method based on verifiable aggregation and differential privacy in a peer-to-peer network
Technical Field
The invention relates to the field of information security, and in particular to privacy protection and model quality verification in asynchronous federated learning, including a scheme for data privacy protection, model-update verification, and aggregation in asynchronous federated learning; more particularly, it relates to an asynchronous federated learning method based on verifiable aggregation and differential privacy in peer-to-peer networks.
Background
With the rapid development of Internet of Things technology, a large amount of user application data is generated, and this data often contains private user information. To process and analyze network data of ever-increasing volume, machine learning methods have been widely adopted in many fields. However, due to requirements on data security, user privacy protection, and regulation, it has become difficult to collect data and perform computation with conventional centralized machine learning, i.e., on a central server. The European Union's General Data Protection Regulation stipulates that organizations may not share sensitive data with third parties for data training, placing higher requirements on the protection of data resources. At the same time, the data volume of any single device or organization is small, so the performance of a model trained in isolation is hard to guarantee.
Federated learning is a recently proposed distributed learning method that breaks down the barriers between data sets: data owners train on their data locally, their privacy is protected, and edge intelligence is achieved. In federated learning, the typical architecture is the client-server model, comprising a server (also called an aggregator) and a set of clients, each with its own data set. In client-server mode, the learning process involves multiple rounds of training: each client retrieves the global model from the server, trains it on its local data set, and sends the updated model back to the server. However, this mode faces several challenges. First, the reliability of the server is critical, because the server is the only node that performs aggregation; as the core of the system, once the server is paralyzed by an external attack, the learning process must be suspended. Second, this mode is unsuitable for some dynamic applications: in a vehicular ad hoc network, for example, vehicle movement makes it difficult to maintain continuous, stable communication between the server and the vehicles, and the exit of a client may interrupt the learning process. Third, a trusted third party that can act as the server sometimes does not exist.
To adapt federated learning to dynamic application scenarios, asynchronous federated learning over peer-to-peer networks has emerged. In contrast to the client-server model, this model allows each participant to exchange model updates directly, without resorting to a third party, and to complete aggregation locally at the client. It therefore requires no trusted third party and avoids suspending training when an individual client or aggregator fails. Likewise, a participant's exit does not interrupt the training process.
However, asynchronous federated learning still needs to address data privacy and model performance. Recent studies have found that the exchange of model updates among participants in federated learning remains vulnerable to attacks such as gradient analysis, membership inference, and reconstruction attacks, and that semi-honest or malicious users may recover private user data from the exchanged updates. A protection mechanism therefore needs to be established in federated learning to guarantee data privacy throughout the learning process. As for model performance, the accuracy of the aggregated model may suffer when locally trained models are of poor quality or when a malicious client deliberately distributes low-quality updated models. A verification method that vets received model updates can reject low-quality updates and effectively improve the accuracy of the final model.
To improve the security of federated learning, many privacy-protection schemes based on secure multiparty computation and differential privacy have been proposed. Secure multiparty computation schemes adopt secret sharing, homomorphic encryption, and related techniques, but they require multiple rounds of communication and incur high computational cost. In general, existing privacy-protection schemes are either computationally expensive, communication-heavy, limited in the number of clients they support, or dependent on additional trust assumptions. Likewise, since low-quality update models readily degrade the accuracy of the aggregated model, various approaches have been proposed to improve the performance of federated learning, such as asynchronous model updates to improve aggregation efficiency; however, they do not address the problem of low-quality model updates.
Disclosure of Invention
To solve the above problems, the invention aims to provide a privacy-preserving asynchronous federated learning method based on verifiable aggregation and differential privacy in a peer-to-peer network, addressing the data privacy and model accuracy problems of deploying asynchronous federated learning in distributed scenarios such as collaborative driving.
The specific technical scheme for realizing the invention is as follows:
an asynchronous federated learning method based on verifiable aggregation and differential privacy in a peer-to-peer network, the method comprising the steps of:
1) System initialization phase:
a) Initializing a client database: the ith client participating in asynchronous federated learning in the peer-to-peer network environment is denoted u_i, and each client u_i maintains its own local data set D_i = {(x_1, y_1), ..., (x_n, y_n)}, where x_j is the feature value of the jth record in the client's data set and y_j is its label value; in the system initialization phase, client u_i partitions its local data set D_i, with the training set used for model training and the validation set used in the model verification stage to screen received model updates;
b) Initializing communication parameters: model updates are transmitted over trusted channels; the CA is a trusted third-party authority that participates in establishing the trusted channels; in the initialization phase, the trusted third party CA generates the system parameters necessary for channel establishment;
2) Registration:
the trusted third party CA first generates a public-private key pair, in which the public key generated by the CA is denoted mpk and the private key msk, so the pair is written (msk, mpk), and specifies a signature scheme Σ; the CA uses this scheme to issue signature certificates to each client in the federated learning system, providing client identity authentication; the CA publishes both the signature scheme Σ and the public key mpk of its key pair; each client in the system generates its own public-private key pair, where client u_i's public key is denoted pk_i and its private key sk_i, so its key pair is (sk_i, pk_i), corresponding to the signature scheme Σ; client u_i sends its public key pk_i to the trusted third party CA, and the CA generates for each client a membership certificate signed with its private key msk, so that the client's identity in the system can be authenticated;
3) Local model training phase:
the client performs local model training on the training set partitioned in the initialization phase; the initial model generated for the first round of training is denoted w_i^0 and serves as the starting model; the maximum number of training rounds is set to T, the current round is denoted t, and the model obtained by client u_i after the t-th round of training is denoted w_i^t; the privacy budget parameter of the t-th round is set to ε_t, and the privacy budget ε_t controls the amount of noise interference added to achieve local differential privacy; noise is added by the Gaussian mechanism, the noise following a Gaussian distribution N(0, σ²) with mean 0 and standard deviation σ determined by ε_t; to resist inference attacks from semi-honest or malicious clients, a local differential privacy method is introduced to protect private data from curious or malicious clients, as follows: the client perturbs its model by adding noise, which protects data privacy effectively while preserving model accuracy; for the client u_i training the t-th round model, its local model is w_i^t, and before sending this local model to other clients, u_i controls the amount of added noise according to the configured privacy budget ε_t and computes the model for distribution w̃_i^t = w_i^t + N(0, σ²), thereby achieving local differential privacy;
4) Model distribution stage:
after finishing local model training, the client enters the model distribution and verification stage; in asynchronous federated learning, clients are allowed to be at different training rounds, and the number of models received from surrounding clients may also differ;
in the model distribution phase: client u_i selects m surrounding clients with good communication conditions and sends its locally trained model to the selected m clients; the trusted secure channel guarantees the security of the transmitted data, so an external adversary cannot obtain the model data in transit;
5) Model aggregation stage:
in the peer-to-peer network, clients update and aggregate models independently, without a central aggregation server; the update that client u_i receives from client u_j during the t-th round of training is w̃_j^t; each client itself verifies the accuracy of model updates from other clients, and updates that pass verification are used in model aggregation to generate a new model;
in the verification stage, client u_i sets an accuracy parameter β (0 < β ≤ 1), with reference value 0.5; the larger β is, the stricter the accuracy requirement; it then performs data-set quality verification first and model-update quality verification second, as follows:
a) Training data-set quality verification: the quality of other clients' data sets is evaluated from the similarity between their received model updates and the local model, to verify whether those data sets have suffered a poisoning attack; the similarity is computed as the cosine value of the model update and the local model: first compute the inner product between client u_i's local model w_i^t and the received update w̃_j^t, namely μ(u_i, u_j) = ⟨w_i^t, w̃_j^t⟩; then compute the cosine value of the local model and the updated model as cos(u_i, u_j) = μ(u_i, u_j) / (‖w_i^t‖ · ‖w̃_j^t‖);
each client independently decides, according to the verification result, whether to use the received model update; if the cosine value of the local and updated models is lower than the parameter β, the data set used to train the updated model is of low quality and may have suffered a poisoning attack;
b) Model-update quality verification: once a model update passes data-set quality verification, a temporary aggregation is performed and the accuracy of the temporary aggregated model is tested on the test-set data; if the accuracy of the temporary aggregated model is lower than that of the local model, the client discards the low-quality update and continues training with its local model; to prevent a server from obtaining original model updates and inferring clients' private information, an asynchronous aggregation method is adopted; for a set of k clients U = {u_1, u_2, ..., u_k}, each client u_i holds its own data set D_i, and D_i is a subset of the full user data set ∪_i D_i of the whole federated learning system; in asynchronous federated learning, the goal of client u_i is to obtain an optimal model M through model training, where M takes the feature value x as the argument and the model parameters w as the coefficients, i.e., M = h(w, x); let L_j(w) be the loss of the model with parameters w on the jth sample, and F_i(w) = (1/|D_i|) Σ_{j∈D_i} L_j(w) the loss of the model with parameters w on client u_i's data set D_i; the optimization objective of model training is min_w F_i(w);
the training objective of each client in asynchronous federated learning is to minimize its own loss function, and each client runs a gradient-descent algorithm locally for model aggregation training; in the invention, the client updates its model using the data transmitted by the m clients with good communication conditions to obtain the aggregated model w_i^{t+1}, where the w̃_j^t entering the aggregation are the model updates sent by other clients;
based on the model result of this iteration, the client continues to run the machine learning algorithm on its own data set, training a new local model by stochastic gradient descent; stages 3) to 5) above are repeated until the maximum number of training iterations T is reached.
Addressing the high computational cost, high communication cost, numerous restrictions, and unsuitability for peer-to-peer networks common to existing federated learning encryption methods, the invention provides a verifiable privacy-preserving federated learning scheme that solves the data privacy and model accuracy problems of deploying asynchronous federated learning in distributed scenarios such as collaborative driving. A new verifiable aggregation method is proposed, so that clients can exchange updated models directly, verify the quality of received model updates, discard low-quality models, and accelerate model convergence.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The federated learning scheme of the invention is described in detail below with reference to FIG. 1.
1) System initialization phase:
a) Initializing a client database: in the peer-to-peer network, client u_i maintains its own database D_i = {(x_1, y_1), ..., (x_n, y_n)}; client u_i partitions the data in this database into a training data set for local model training and a test data set for verifying the accuracy of received models.
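As a concrete illustration of this partition step, the following Python sketch splits a list of (x, y) pairs into a training set and a held-out test set. The 80/20 ratio and the helper name `split_dataset` are illustrative choices; the patent does not fix a split ratio.

```python
import random

def split_dataset(samples, train_fraction=0.8, seed=0):
    """Shuffle and split (x, y) pairs into a training set (for local model
    training) and a held-out test set (used later to verify the accuracy
    of received model updates)."""
    data = list(samples)
    random.Random(seed).shuffle(data)  # deterministic shuffle for the example
    cut = int(len(data) * train_fraction)
    return data[:cut], data[cut:]
```

In a real deployment each client would apply this to its own private D_i; no data leaves the client.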
b) Initializing communication parameters: model updates in the invention are transmitted over trusted channels, and in the initialization phase the trusted third party CA generates the system parameters necessary for channel establishment.
2) Registration:
Step 1: from the system parameters generated in the initialization phase, the trusted third party CA generates its public-private key pair (msk, mpk) and specifies a signature scheme Σ; under this scheme the CA issues signature certificates to clients, providing client identity authentication;
Step 2: the CA publishes the signature scheme Σ and the public key mpk of its own key pair (msk, mpk);
Step 3: each client u_i in the system generates its own public-private key pair (sk_i, pk_i) corresponding to the signature scheme Σ and sends its public key pk_i to the trusted third party CA; the CA generates for each client u_i a membership certificate signed with its private key msk, so that the client's identity in the system can be authenticated.
3) Local model training phase:
Step 1: in the first round of training, client u_i initializes its model to w_i^0 and performs model training on the training-set data; the model being trained by the client in the t-th round is w_i^t; the learning rate is set to α and the maximum number of iterations to T;
Step 2: the model is perturbed with added noise by the local differential privacy method; for the client u_i performing the t-th round of training with local model w_i^t, the privacy budget parameter of the t-th round is set to ε_t, and ε_t controls the amount of noise interference added to achieve local differential privacy; noise is added by the Gaussian mechanism, the noise following a Gaussian distribution N(0, σ²) with mean 0 and standard deviation σ determined by ε_t; controlling the amount of added noise according to the configured privacy budget ε_t protects data privacy effectively while preserving model accuracy; the scheme computes w̃_i^t = w_i^t + N(0, σ²) by the Gaussian mechanism, which achieves local differential privacy for the model.
4) Model distribution stage:
In asynchronous federated learning, clients may be at different training rounds, and the number of models received from surrounding clients may also differ, so a client can proceed with model training without waiting for all clients in the same round to finish:
Step 1: client u_i selects m surrounding clients with good communication conditions;
Step 2: u_i negotiates with the selected m clients using the identity credentials generated in the registration stage and establishes secure channels;
Step 3: u_i sends its locally trained model to the m clients. The trusted secure channel guarantees the security of the transmitted data, so an external adversary cannot obtain the model data in transit.
5) Model aggregation stage:
In the peer-to-peer network, clients update and aggregate models independently, with no central aggregation server; each client itself verifies the accuracy of models from other clients. The verification stage consists of three steps.
Step 1: in the verification stage, client u_i sets an accuracy parameter β (0 < β ≤ 1), with reference value 0.5, as the reference parameter for data-set accuracy verification; the larger β is, the stricter the accuracy requirement.
Step 2: training data-set quality verification: the quality of the training data set is evaluated from the directional similarity of the updated model and the local model. First compute the inner product between client u_i's local model w_i^t and the received update w̃_j^t, namely μ(u_i, u_j) = ⟨w_i^t, w̃_j^t⟩; then compute the cosine value of the local model and the updated model as cos(u_i, u_j) = μ(u_i, u_j) / (‖w_i^t‖ · ‖w̃_j^t‖).
each client independently selects whether to use the received model update according to the verification result. If the cosine values of the local model and the update model are smaller than the precision parameter beta, the update model is sentThe training data set of the client of (a) is of lower quality and the data set D j May be subject to a poisoning attack.
Step 3: model-update quality verification. Once a model update passes data-set quality verification, a temporary aggregation is performed and the accuracy of the temporary aggregated model is tested on the test-set data; if the accuracy of the temporary aggregated model is lower than that of the local model w_i^t, the client discards this low-quality update model. These two verification methods prevent low-quality models from being aggregated, effectively improving model performance.
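The update-quality test can be sketched as follows. The toy linear classifier and the simple mean of the two models as the "temporary aggregation" are assumptions for illustration; the patent fixes neither the model family nor the temporary-aggregation rule.

```python
import numpy as np

def accuracy(weights, X, y):
    """Toy linear classifier: predict 1 when X @ weights > 0."""
    preds = (X @ np.asarray(weights) > 0).astype(int)
    return float(np.mean(preds == y))

def update_quality_check(local_w, received_w, X_test, y_test):
    """Temporarily aggregate the received update with the local model and
    keep it only if the aggregate is at least as accurate on the local
    test set; otherwise discard the low-quality update."""
    # Temporary aggregation: simple mean of the two models (assumption).
    temp_w = (np.asarray(local_w) + np.asarray(received_w)) / 2.0
    if accuracy(temp_w, X_test, y_test) >= accuracy(local_w, X_test, y_test):
        return True, temp_w
    return False, np.asarray(local_w)
```

A helpful update survives the temporary aggregation; a harmful one drags the test accuracy below the local model's and is dropped.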
For a set of k clients U = {u_1, u_2, ..., u_k}, each client u_i holds its own data set D_i = {(x_1, y_1), ..., (x_n, y_n)}, and the full user data set of the federated learning system is ∪_i D_i. The client aggregates the models that pass verification; the aggregation process has two steps.
Step 1: defining the optimization objective. In asynchronous federated learning, the goal of client u_i is to obtain an optimal model M through model training, where M takes the feature value x as the argument and the model parameters w as the coefficients, i.e., M = h(w, x). The optimization objective of the model is defined as min_w F_i(w), with F_i(w) = (1/|D_i|) Σ_{j∈D_i} L_j(w), where L_j(w) is the loss of the model with parameters w on the jth sample; each client's goal is to minimize its own loss function.
Step 2: each client performs model aggregation training locally using a gradient-descent algorithm. The training minibatch size is set to B, and on each minibatch the client computes the gradient g = (1/B) Σ_{j∈B} ∇L_j(w). In the t-th round, the client updates its model using the data sent by the surrounding m clients, where the w̃_j^t entering the aggregation are the verified update models generated by other clients.
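One way to instantiate this aggregation step is to average the local model with the m verified updates and then take a gradient step on the local loss; the patent's exact update rule is not reproduced in the text, so the rule below is an assumed standard form, not the patented formula.

```python
import numpy as np

def aggregate_round(local_w, verified_updates, grad, lr=0.01):
    """One aggregation step (assumed form): average the local model with
    the m verified updates, then take a gradient-descent step with
    learning rate lr on the local minibatch gradient."""
    models = [np.asarray(local_w)] + [np.asarray(u) for u in verified_updates]
    averaged = np.mean(models, axis=0)     # average over m+1 models
    return averaged - lr * np.asarray(grad)
```

With `grad` set to the minibatch gradient g of the current round, the result is the new local model w_i^{t+1} for the next round.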
Stages 3) to 5) above are repeated until the number of training rounds reaches the initially set maximum number of iterations T, at which point model training ends.

Claims (1)

1. An asynchronous federated learning method based on verifiable aggregation and differential privacy in a peer-to-peer network, the method comprising the steps of:
1) System initialization phase:
a) Initializing a client database: the ith client participating in asynchronous federated learning in the peer-to-peer network environment is denoted u_i, and each client u_i maintains its own local data set D_i = {(x_1, y_1), ..., (x_n, y_n)}, where x_j is the feature value of the jth record in the client's data set and y_j is its label value; in the system initialization phase, client u_i partitions its local data set D_i, with the training set used for model training and the validation set used in the model verification stage to screen received model updates;
b) Initializing communication parameters: model updates are transmitted over trusted channels; the CA is a trusted third-party authority that participates in establishing the trusted channels; in the initialization phase, the trusted third party CA generates the system parameters necessary for channel establishment;
2) Registration:
the trusted third party CA first generates a public-private key pair, in which the public key generated by the CA is denoted mpk and the private key msk, so the pair is written (msk, mpk), and specifies a signature scheme Σ; the CA uses this scheme to issue signature certificates to each client in the federated learning system, providing client identity authentication; the CA publishes both the signature scheme Σ and the public key mpk of its key pair; each client in the system generates its own public-private key pair, where client u_i's public key is denoted pk_i and its private key sk_i, so its key pair is (sk_i, pk_i), corresponding to the signature scheme Σ; client u_i sends its public key pk_i to the trusted third party CA, and the CA generates for each client a membership certificate signed with its private key msk, so that the client's identity in the system can be authenticated;
3) Local model training phase:
the client performs local model training on the training set partitioned in the initialization phase; the initial model generated for the first round of training is denoted w_i^0 and serves as the starting model; the maximum number of training rounds is set to T, the current round is denoted t, and the model obtained by client u_i after the t-th round of training is denoted w_i^t; the privacy budget parameter of the t-th round is set to ε_t, and the privacy budget ε_t controls the amount of noise interference added to achieve local differential privacy; noise is added by the Gaussian mechanism, the noise following a Gaussian distribution N(0, σ²) with mean 0 and standard deviation σ determined by ε_t; to resist inference attacks from semi-honest or malicious clients, a local differential privacy method is introduced to protect private data from curious or malicious clients, as follows: the client perturbs its model by adding noise, which protects data privacy effectively while preserving model accuracy; for the client u_i training the t-th round model, its local model is w_i^t, and before sending this local model to other clients, u_i controls the amount of added noise according to the configured privacy budget ε_t and computes the model for distribution w̃_i^t = w_i^t + N(0, σ²), thereby achieving local differential privacy;
4) Model distribution stage:
after finishing local model training, the client enters the model distribution and verification stage; in asynchronous federated learning, clients are allowed to be at different training rounds, and the number of models accepted from surrounding clients is also allowed to differ;
in the model distribution phase: client u_i selects m surrounding clients with good communication conditions and sends its locally trained model to the selected m clients; the trusted secure channel guarantees the security of the transmitted data, so an external adversary cannot obtain the model data in transit;
5) Model polymerization stage:
in peer-to-peer network, clients update and aggregate models respectively, without setting a central aggregation server, client u i Received from client u at the time of model training of the t-th round j Is updated to beThe accuracy of model updates from other clients is determined by each guestThe client side verifies by itself, and the model is used for model aggregation to generate a new model through the verified model update;
client u i In the verification stage, a precision parameter beta (0)<Beta is less than or equal to 1), the reference value of beta is 0.5, and the larger the beta value is, the higher the precision requirement is; and then carrying out data set quality verification firstly, and then carrying out model updating quality verification, wherein the method comprises the following steps of:
a) Training data set quality verification: evaluating the quality of the data set of the other clients according to the similarity between the received model update from the other clients and the local model so as to verify whether the data set of the other clients encounters a poisoning attack; calculating the cosine value of the model update and the local model to calculate the similarity: first calculate client u i Is a local model of (a)With the received client u j Model update of->Inner product μ (u) i ,u j ) The method comprises the following steps:
the local model is then calculated byAnd update model->Cosine value cos (u) i ,u j ):
Each client will be tied according to the authenticationIndependently selecting whether to update the received model; if the cosine value cos (u i ,u j ) If the parameter is smaller than the parameter beta, the quality of the data set for training the updated model is lower, and the data set may be subjected to poisoning attack;
b) Model update quality verification: when a model update passes data set quality verification, it is temporarily aggregated, and the accuracy of the temporary aggregated model is tested on the test set data. If the accuracy of the temporary aggregated model is lower than that of the local model, the client discards the low-quality model update and continues training with the local model. To prevent a server from obtaining original model updates and inferring clients' private information, an asynchronous aggregation method is adopted. For a set of k clients U = {u_1, u_2, ..., u_k}, each client u_i holds its own data set D_i, where D_i is a subset of the whole user data set ∪_i D_i of the entire federated learning task. In asynchronous federated learning, the goal of client u_i is to obtain an optimal model M through model training, where M is a function h(w, x) with the feature value x as the independent variable and the model parameters w as its coefficients, i.e., M = h(w, x). Let L_j(w) be the loss of the model with parameters w on the j-th sample, and let F_i(w) be the loss of the model with parameters w on client u_i's data set D_i:

F_i(w) = (1/|D_i|) Σ_{j ∈ D_i} L_j(w)

The optimization objective of model training is:

min_w F_i(w)
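The temporary-aggregation check in step b) can be sketched as below. The element-wise average and the `evaluate` callback are illustrative assumptions: `evaluate` stands in for measuring test-set accuracy of a parameter vector, and here it is simulated with a simple stand-in function:

```python
# Sketch of step b): temporarily aggregate a received update with the local
# model, test the temporary model, and keep it only if accuracy does not drop.

def temp_aggregate(w_local, w_update):
    """Element-wise average of the local model and one received update (an assumption)."""
    return [(a + b) / 2.0 for a, b in zip(w_local, w_update)]

def accept_update(w_local, w_update, evaluate):
    """Keep the temporary aggregate only if it is at least as accurate as the local model."""
    w_tmp = temp_aggregate(w_local, w_update)
    return w_tmp if evaluate(w_tmp) >= evaluate(w_local) else w_local

evaluate = lambda w: sum(w)  # stand-in "accuracy": higher is better
print(accept_update([1.0, 1.0], [3.0, 3.0], evaluate))    # -> [2.0, 2.0]
print(accept_update([1.0, 1.0], [-3.0, -3.0], evaluate))  # -> [1.0, 1.0]
```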
the training target of each client in asynchronous federation learning is to minimize the own loss function, and each client locally uses a gradient descent algorithm to perform model aggregation training; the client updates the model by using the data sent by m clients with good communication conditions to obtain an aggregation modelThe polymerization method is as follows:
These two verification methods prevent low-quality models from being aggregated, effectively improving model performance. Based on the model result of this iteration, the client then continues to run the machine learning algorithm on its own data set, training a new local model using stochastic gradient descent.
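A toy illustration of the local training step: gradient descent on the objective F_i(w) = (1/|D_i|) Σ_j L_j(w), using squared loss on a one-feature linear model h(w, x) = w·x with a synthetic data set (all values are made up for illustration):

```python
# One gradient step on a mini-batch of (x, y) pairs for squared loss
# L_j(w) = (w*x_j - y_j)^2, whose gradient is 2*(w*x_j - y_j)*x_j.

def sgd_step(w, batch, lr=0.1):
    """One gradient-descent update averaged over the batch."""
    grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
    return w - lr * grad

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # y = 2x, so the optimum is w = 2
w = 0.0
for _ in range(50):
    w = sgd_step(w, data)
print(round(w, 3))  # converges toward 2.0
```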
Stages 3) to 5) above are repeated until the maximum number of iterations T is reached.
CN202111657350.1A 2021-12-30 2021-12-30 Asynchronous federal learning method based on verifiable aggregation and differential privacy in peer-to-peer network Active CN114363043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111657350.1A CN114363043B (en) 2021-12-30 2021-12-30 Asynchronous federal learning method based on verifiable aggregation and differential privacy in peer-to-peer network


Publications (2)

Publication Number Publication Date
CN114363043A CN114363043A (en) 2022-04-15
CN114363043B true CN114363043B (en) 2023-09-08

Family

ID=81105111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111657350.1A Active CN114363043B (en) 2021-12-30 2021-12-30 Asynchronous federal learning method based on verifiable aggregation and differential privacy in peer-to-peer network

Country Status (1)

Country Link
CN (1) CN114363043B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115828302B (en) * 2022-12-20 2023-07-07 华北电力大学 Micro-grid-connected control privacy protection method based on trusted privacy calculation
CN116720594B (en) * 2023-08-09 2023-11-28 中国科学技术大学 Decentralized hierarchical federal learning method
CN117436078B (en) * 2023-12-18 2024-03-12 烟台大学 Bidirectional model poisoning detection method and system in federal learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10873456B1 (en) * 2019-05-07 2020-12-22 LedgerDomain, LLC Neural network classifiers for block chain data structures
CN113010305A (en) * 2021-02-08 2021-06-22 北京邮电大学 Federal learning system deployed in edge computing network and learning method thereof
CN113407963A (en) * 2021-06-17 2021-09-17 北京工业大学 Federal learning gradient safety aggregation method based on SIGNSGD
CN113434873A (en) * 2021-06-01 2021-09-24 内蒙古大学 Federal learning privacy protection method based on homomorphic encryption


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Survey of Security and Privacy Protection Research in Federated Learning; Zhou Jun; Fang Guoying; Wu Nan; Journal of Xihua University (Natural Science Edition) (04); full text *

Also Published As

Publication number Publication date
CN114363043A (en) 2022-04-15

Similar Documents

Publication Publication Date Title
CN114363043B (en) Asynchronous federal learning method based on verifiable aggregation and differential privacy in peer-to-peer network
CN112749392B (en) Method and system for detecting abnormal nodes in federated learning
Chen et al. Privacy-preserving image multi-classification deep learning model in robot system of industrial IoT
CN114254386A (en) Federated learning privacy protection system and method based on hierarchical aggregation and block chain
US11170786B1 (en) Federated speaker verification method based on differential privacy
Lyu et al. Towards fair and decentralized privacy-preserving deep learning with blockchain
WO2021106077A1 (en) Update method for neural network, terminal device, calculation device, and program
CN116049897B (en) Verifiable privacy protection federal learning method based on linear homomorphic hash and signcryption
Kumar et al. Blockchain-based authentication and explainable ai for securing consumer iot applications
CN115238172A (en) Federal recommendation method based on generation of countermeasure network and social graph attention network
Zhou et al. Securing federated learning enabled NWDAF architecture with partial homomorphic encryption
Sun et al. Fed-DFE: A Decentralized Function Encryption-Based Privacy-Preserving Scheme for Federated Learning.
Smahi et al. BV-ICVs: A privacy-preserving and verifiable federated learning framework for V2X environments using blockchain and zkSNARKs
Wan et al. Towards privacy-preserving and verifiable federated matrix factorization
Zhang et al. Visual object detection for privacy-preserving federated learning
Malladi et al. Decentralized aggregation design and study of federated learning
Li et al. Catfl: Certificateless authentication-based trustworthy federated learning for 6g semantic communications
Asad et al. Secure and Efficient Blockchain-Based Federated Learning Approach for VANETs
Yang et al. Efficient and secure federated learning with verifiable weighted average aggregation
Zhong et al. MPC-based privacy-preserving serverless federated learning
Zhou et al. Personalized privacy-preserving federated learning: Optimized trade-off between utility and privacy
Arifeen et al. Autoencoder based consensus mechanism for blockchain-enabled industrial internet of things
Kong et al. Information encryption transmission method of automobile communication network based on neural network
Gao et al. Privacy-preserving verifiable asynchronous federated learning
Ren et al. BPFL: Blockchain-Based Privacy-Preserving Federated Learning against Poisoning Attack

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant