CN113449319B - Gradient descent method for protecting local privacy and oriented to cross-silo federated learning - Google Patents


Info

Publication number
CN113449319B
CN113449319B (application CN202110698626.4A)
Authority
CN
China
Prior art keywords
client
model
parameters
gradient descent
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110698626.4A
Other languages
Chinese (zh)
Other versions
CN113449319A (en)
Inventor
何道敬
陆欣彤
潘凯云
刘川意
田志宏
张宏莉
蒋琳
廖清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jingshan Technology Co ltd
East China Normal University
Original Assignee
Shanghai Jingshan Technology Co ltd
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jingshan Technology Co ltd, East China Normal University filed Critical Shanghai Jingshan Technology Co ltd
Priority to CN202110698626.4A priority Critical patent/CN113449319B/en
Publication of CN113449319A publication Critical patent/CN113449319A/en
Application granted granted Critical
Publication of CN113449319B publication Critical patent/CN113449319B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/602 Providing cryptographic facilities or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention discloses a gradient descent method for protecting local privacy in cross-silo federated learning. The specific implementation steps are: at initialization, each client randomly generates an initial value for its scalar parameter; each client executes a weight strategy to select its weight parameters; each client broadcasts its model parameters to its neighbors while receiving the model parameters broadcast by its in-neighbors, and then aggregates the model update parameters; each client updates its local model parameters; and each client updates its gradient descent parameters. This gradient descent method for protecting local privacy in cross-silo federated learning overcomes the privacy defects of several classical gradient descent methods for cross-silo/cross-device federated learning and better protects local model privacy when training a linear regression task.

Description

Gradient descent method for protecting local privacy and oriented to cross-silo federated learning
Technical Field
The invention relates to privacy protection methods in federated learning, and in particular to a gradient descent method for protecting local privacy in cross-silo federated learning.
Background
As the importance attached to protecting private information grows, data security and privacy-preserving analysis have become major research hotspots in many fields. Federated learning is an optimization paradigm for distributed machine learning that can reduce the privacy risks hidden in traditional machine learning methods. One major advantage of federated learning is that the raw data stays local to the client: only model-related parameters are exchanged between client and server, which prevents potential adversaries from eavesdropping on users' sensitive and private raw data during local training.
Although federated learning has significant advantages in protecting sensitive individual information, a risk of privacy leakage remains. While training a globally shared optimal model coordinated by federated learning, a communication channel carrying model updates and related parameters exists between the client and the server (or between clients). After a client receives the model parameters sent by the server, it updates the downloaded local model, for example optimizing it with stochastic gradient descent. In this process the client faces a potential privacy risk: a hidden adversary can infer the client's local model by stealing the information transmitted on the channel during training. Related research has shown that sensitive information hides inside the model. For example, in 2014 Fredrikson et al. introduced a model-inversion attack and applied it to linear classifiers in personalized medicine; the results show that user privacy can be obtained by abusing adversarial access to a machine learning model. In 2015 Fredrikson et al. proposed a novel model-inversion attack exploiting confidence information, showing that, with neural networks and machine learning offered as a service, an adversary able to query the model for predictions can recover a user's facial image from the confidence values, given only the user's name and access to the model.
Federated learning has two classical settings: cross-device and cross-silo. The former has a central server to which clients upload model parameters; the latter consists only of clients, which transmit peer to peer among themselves. Whereas the former allows clients to operate at massive scale in parallel, the clients in the latter are far fewer; when clients run over networks with low bandwidth and high latency, the cross-silo federated setting reduces the high communication cost and improves communication efficiency.
McMahan et al. proposed a gradient descent method for cross-device federated learning, the FedAvg method. FedAvg is based on iterative model averaging: training data are distributed over mobile devices (clients), and a server aggregates the local model updates to learn a shared model in cooperation with the devices. However, FedAvg introduces a communication bottleneck. In subsequent work on gradient descent for cross-silo federated learning, a gradient descent method named CBGD was first proposed for distributed (decentralized) optimization, with strong adaptability to the communication network between silos. However, CBGD requires transmitting the entire model parameters to neighboring clients, which is a privacy hazard: a malicious adversary with sufficient knowledge of the gradient descent procedure can decrypt a client's local model from the gradient update step. A PSGD method was therefore proposed in which a client sends only partial model parameters to its neighbors. It has nevertheless been proven that a curious adversary with eavesdropping capability and knowledge of these three gradient descent procedures can derive a client's local model while the client trains a linear regression task. Therefore, none of the classical FedAvg, CBGD, or PSGD methods protects client local privacy.
Disclosure of Invention
In view of the above, the present invention aims to provide a gradient descent method for protecting local privacy facing cross-silo federal learning.
The specific technical scheme for realizing the purpose of the invention is as follows:
a gradient descent method for protecting local privacy and oriented to cross-silo federated learning comprises the following steps:
step 1: client initialization, namely each client initializes its local model parameters and randomly generates an initial value for its scalar parameter;
step 2: each client executes a weight strategy to select its weight parameters;
step 3: each client broadcasts its trained model update parameters to the adjacent clients it can communicate with, simultaneously receives the model parameters broadcast to it by its in-neighbor clients, and aggregates its local model parameters with the received model update parameters of the neighbor clients;
step 4: each client updates its local model parameters;
step 5: each client executes the gradient descent operation and updates its gradient descent parameters;
step 6: repeat steps 3-5 until the model update parameters transmitted in the network no longer change, i.e. the clients have computed a consistent shared optimal model; training then stops and the loop exits.
Further, the role of the client initialization of step 1 is to prevent an adversary from decrypting more parameter information by knowing the initial value of the client's scalar parameter. The specific initialization method is as follows: suppose there are n clients in the network, and each client selects an algorithm to train its local model. Let x_i be the gradient descent parameter of client i, w_i its local model parameter, and y_i(0) the initial value of its scalar parameter. In the starting round, client i initializes its local model parameters and sets x_i(0) = w_i(0) together with a scalar value y_i(0) > 0. To prevent an adversary from decrypting more parameter information by knowing the initial value of the client scalar parameter y_i(0), client i randomly generates the initial value of y_i(0) from a distribution over a strictly positive range.
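As a minimal sketch, the randomized initialization can be written as follows (the function name and the U(1, 10) default, borrowed from the embodiment described later, are assumptions for illustration, not part of the patent text):

```python
import random

def init_client(w0, low=1.0, high=10.0, rng=random):
    """Step 1 sketch: the gradient descent parameter starts at the local
    model parameter, and the scalar parameter is drawn from a strictly
    positive range so an eavesdropper cannot assume a fixed y_i(0)."""
    x0 = float(w0)               # x_i(0) = w_i(0)
    y0 = rng.uniform(low, high)  # y_i(0) > 0, randomly generated
    return x0, y0
```

Drawing y_i(0) from a positive random range, rather than a conventional fixed start such as y_i(0) = 1, is what denies the adversary knowledge of the scalar parameter's starting point.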
Further, the role of the weight strategy of step 2 is to ensure that the adversary cannot deduce the values of the weight parameters even if it knows the number of out-neighbors of the client. The specific strategy is as follows: let p_j,i(t) be the model weight parameter of round t used when client i sends part of its model parameters to neighbor j, and let d_i(t) be the number of out-neighbors of client i, i.e. the number of nodes that can receive the messages client i sends. When the number of nodes that can receive the message sent by client i in round t satisfies d_i(t) >= 1, client i selects a weight parameter p_i,i(t) from a distribution over the range [ζ', β], computes (1 - p_i,i(t)) / d_i(t), and assigns this value to each p_j,i(t); note that the weights assigned by client i sum to 1. Otherwise, the weight parameter p_i,i(t) takes the value 1.
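Under one plausible reading of the strategy above (the exact formulas are rendered as images in the original filing), the self-weight is drawn from [ζ', β] and the leftover mass is split evenly over the out-neighbors:

```python
import random

def choose_weights(out_degree, zeta=0.1, beta=0.9, rng=random):
    """Step 2 sketch: returns (p_self, p_neighbor), i.e. p_ii(t) and the
    common weight p_ji(t) sent to each of the out-neighbors."""
    if out_degree < 1:
        return 1.0, 0.0                       # no out-neighbors: p_ii(t) = 1
    p_self = rng.uniform(zeta, beta)          # p_ii(t) drawn from [zeta', beta]
    p_neighbor = (1.0 - p_self) / out_degree  # split (1 - p_ii(t)) evenly
    return p_self, p_neighbor
```

Because p_ii(t) is random, knowing the out-degree alone no longer pins down the weights, yet the weights still sum to 1, which preserves the aggregate carried through the network.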
Further, in step 3 each client broadcasts its trained model update parameters to the adjacent clients it can communicate with while receiving the model parameters broadcast to it by its in-neighbors; specifically, after updating its model, client i uploads the trained model parameters p_j,i(t)x_i(t), p_j,i(t)y_i(t) to each neighbor client j in the network. Note that this process uploads only these weighted model parameters, not all parameters. At the same time, client i receives the model parameters p_i,j(t)x_j(t), p_i,j(t)y_j(t) from each in-neighbor client j.
Further, in step 3 the client aggregates model parameters, i.e. aggregates its local model parameters with the model update parameters sent by neighboring clients; the aim is to consider the model parameters of multiple clients jointly, which benefits model optimization. The aggregation results are computed as follows:

z_i(t+1) = Σ_{j ∈ N_i ∪ {i}} p_i,j(t) x_j(t)
y_i(t+1) = Σ_{j ∈ N_i ∪ {i}} p_i,j(t) y_j(t)

where N_i denotes the set of in-neighbors of client i, and z_i is the weighted aggregate of all received model parameters together with client i's own model parameters.
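A sketch of this aggregation (the names are assumptions; `inbox` holds the weighted pairs exactly as they arrive on the channel, so the neighbors' raw x_j, y_j never appear):

```python
def aggregate(p_self, x_i, y_i, inbox):
    """Step 3 sketch: z_i(t+1) and y_i(t+1) as weighted sums of the
    client's own state and the received weighted parameters.
    inbox is a list of pairs (p_ij * x_j, p_ij * y_j)."""
    z_next = p_self * x_i + sum(px for px, _ in inbox)  # z_i(t+1)
    y_next = p_self * y_i + sum(py for _, py in inbox)  # y_i(t+1)
    return z_next, y_next
```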
Further, in step 4 the client updates its local model parameters, i.e. derives the next round's model parameters from the aggregation result; the calculation formula is:

w_i(t+1) = z_i(t+1) / y_i(t+1)
further, the method for updating gradient descent parameters in step 5 updates gradient descent parameters by using the aggregated result and the model parameters of the next round, and the calculation formula is as follows:
Figure BDA00031288205400000310
wherein eta is the learning rate, and wherein eta is the learning rate,
Figure BDA00031288205400000311
l i for loss functions, the loss function is typically defined as an empirical risk to the local data model.
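Steps 4 and 5 can then be sketched together (a reconstruction following the push-sum pattern the text describes; the patent's own formulas are rendered as images, so the signatures here are assumptions):

```python
def local_update(z_next, y_next, grad_loss, eta=0.1):
    """Steps 4-5 sketch: recover the model parameter as the ratio of the
    aggregates, then take a gradient step on the x parameter."""
    w_next = z_next / y_next                   # step 4: w_i(t+1) = z_i(t+1) / y_i(t+1)
    x_next = z_next - eta * grad_loss(w_next)  # step 5: x_i(t+1) = z_i(t+1) - eta * grad l_i
    return w_next, x_next
```

At the optimum the gradient vanishes, so x is simply carried forward unchanged.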
The invention develops a federated learning scenario around clients locally training a linear regression task, optimizes the global model with a gradient descent method, and provides a gradient descent method for cross-silo federated machine learning that protects local privacy. By fixing the privacy vulnerabilities in the classical cross-silo/cross-device federated learning gradient descent methods, it becomes harder for an adversary to decrypt the local model, so local model privacy is better protected.
Drawings
FIG. 1 is a schematic diagram of cross-silo federated learning;
FIG. 2 is a flow chart of the present invention;
FIG. 3 is a schematic diagram of the convergence of the present invention;
FIG. 4 is a schematic diagram of neighboring data sets;
FIG. 5 is a graph of the weak adversary classification model's prediction results;
FIG. 6 is a graph of the strong adversary classification model's prediction results.
Detailed Description
The invention draws on the definition of differential privacy and introduces a model-privacy concept to measure how well the method protects local model privacy. If the method can ensure, with extremely high probability, that the difference between the adversary's decrypted model and the local model is large enough, local privacy is well protected. The invention provides a gradient descent method for cross-silo federated machine learning that protects local privacy, mainly by randomly generating scalar parameter initial values and setting a weight strategy to fix the privacy defects in classical federated learning gradient descent methods.
The present invention will be described in further detail below with reference to specific embodiments and the accompanying drawings. Except where specifically noted below, the procedures, conditions and experimental methods for carrying out the invention are common general knowledge in the field, and nothing herein should be construed as limiting the scope of the invention.
The parameters involved in the present invention are shown in the following table:
TABLE 1 description of the symbols
(The symbol table is rendered as an image in the original document.)
Examples
The embodiment takes a directed graph as the network setting of the invention; the experimental results show that the invention converges, and comparative experimental analysis verifies that the invention better protects local model privacy.
First consider a cross-silo federated learning network set on a directed graph with 5 nodes (clients), as shown in the left diagram of FIG. 4. Under this network, the clients aim to cooperate distributively to solve the following minimization problem:

min_w l(w) = Σ_{i=1}^{5} l_i(w)

This embodiment assumes that the clients' local data sets are in a linear relationship. Suppose client i has trained from its local data set a local loss function l_i that is convex, and that the intersection of the local model optimal solution sets is empty; applying the method then makes the clients' local model parameters w_i(t) converge over time to the same value, i.e. the global optimal model w*. The local models (loss functions) trained by the clients in the benchmark network setting of this embodiment are convex functions of w (rendered as images in the original filing); summing them gives the global model l(w) under the benchmark network. It can be observed that the minimum value of the global model l(w) under the benchmark network is 5, i.e. the optimal solution of the minimized global model is w* = 2. This optimal solution means that when the method solves the minimized global model, the local model parameters of every client converge to the global optimal solution w* = 2.
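Since the benchmark loss functions are rendered as images, here is one hypothetical stand-in consistent with the stated facts (convex local losses whose sum attains its minimum value 5 at w* = 2); the constants a_i and the offset 0.2 are assumptions, not the patent's actual losses:

```python
# Stand-in convex quadratics l_i(w) = (w - a_i)^2 + 0.2 for the 5 clients.
A = [1.0, 1.0, 2.0, 3.0, 3.0]  # mean(A) = 2, so the sum is minimized at w* = 2

def local_loss(i, w):
    return (w - A[i]) ** 2 + 0.2

def global_loss(w):
    # l(w) = sum_i l_i(w); l(2) = 1 + 1 + 0 + 1 + 1 + 5 * 0.2 = 5
    return sum(local_loss(i, w) for i in range(5))
```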
The specific process by which each client runs the method is as follows:
(1) First, when t = 0 (the initial round), the clients initialize their local model parameters x_i(0) = w_i(0), i ∈ {1, …, 5}, to {3, 1, 1, 4, 2}, and at the same time each randomly generates a scalar parameter initial value y_i(0) subject to a uniform distribution U; the experiment sets y_i(0) ~ U(1, 10);

(2) After initializing the parameters x_i(t) and y_i(t), each client executes the weight strategy to select the weight parameters p_j,i(t), whose values sum to 1. The weight strategy is: when the number d_i(t) of nodes that can receive the messages sent by client i in round t satisfies d_i(t) >= 1, client i selects a weight parameter p_i,i(t) from a distribution over the range [ζ', β] and assigns (1 - p_i,i(t)) / d_i(t) to each p_j,i(t); otherwise, the weight parameter p_i,i(t) takes the value 1.

(3) After selecting the weights, client i broadcasts the model parameters p_j,i(t)x_i(t), p_j,i(t)y_i(t) to each neighbor client j. Note that this process uploads only these weighted model parameters, not all parameters. Simultaneously, client i receives the model parameters p_i,j(t)x_j(t), p_i,j(t)y_j(t) from its in-neighbor clients. Client i then aggregates the received messages to obtain the parameters z_i(t+1), y_i(t+1); the calculation formulas are:

z_i(t+1) = Σ_{j ∈ N_i ∪ {i}} p_i,j(t) x_j(t)
y_i(t+1) = Σ_{j ∈ N_i ∪ {i}} p_i,j(t) y_j(t)

(4) Update the local parameter w_i(t+1); the calculation formula is:

w_i(t+1) = z_i(t+1) / y_i(t+1)

(5) Finally, execute the gradient descent operation and update the gradient descent parameter x_i(t) to x_i(t+1); the calculation formula is:

x_i(t+1) = z_i(t+1) - η ∇l_i(w_i(t+1))
the above process is a complete process of executing the present invention, and the process is circulated until the local model parameters of the client end converge to the global shared optimal model, and the circulation is skipped. The results of the experiment are shown in FIG. 2. Experiments prove that the method can ensure convergence, namely local model parameters of the client are converged to the optimal solution of the global model after multiple rounds of calculation, and the optimal solution is equal to an actual value, which means that the distributed optimization problem can be still solved on the premise of not sacrificing the accuracy of the optimal solution of the global model.
In addition, an adversary is designed based on the differential privacy concept; it learns a classification model by training on two data sets with a machine learning method. The adversary's task is as follows: when a malicious adversary only has the capability of monitoring the messages transmitted on client i's channel, determine whether it can detect that the local model of a client in the experimental benchmark setting has been changed. The adversary's task is illustrated in the left diagram of FIG. 4. Suppose data set D1 is collected by the adversary by monitoring the client channel under the experimental benchmark. Each row of records in the data set contains all messages transmitted on the network channel from the starting round to the final round of one run of the method (the total number of rounds is configurable); that is, 100 rows of records indicate the method was executed 100 separate times. Each row of data in D1 is labeled 0. Similarly, the right-hand side of FIG. 4 shows another data set D2, collected by the adversary eavesdropping on the transmission channel after the local data set of a certain client has been modified; its data label is 1. If an adversary that trains a learning model on these two data sets cannot perceive any difference in the classification output, then the adversary cannot detect that the local model of a client in the experimental benchmark has been changed.
Thus, if at each eavesdropping attempt the adversary cannot tell whether the collected data set was modified by the client, or which collected data set is the true one, it has little ability to continue decrypting the privacy of the client's local model in the distributed network.
The method of the present invention (PPSGD) and the classical PSGD method for cross-silo federated learning were tested and their results compared. Comparative analysis was carried out under two conditions: a weak adversary and a strong adversary. The two adversaries acquire their training and test sets by similar procedures. Tables 2 and 3 describe the scenarios generating the training and test data sets of the weak and strong adversary classification models during the federated learning run. When the training and test data acquired by an adversary come from the same example, i.e. example 0 and example 6, the adversary is defined as a strong adversary.
TABLE 2 Training/testing data sets for the weak adversary classification model
(Table 2 is rendered as an image in the original document.)
TABLE 3 Training/testing data sets for the strong adversary classification model
(Table 3 is rendered as an image in the original document.)
The comparison of experimental results is shown in Table 4, and the visual comparison in FIG. 5 and FIG. 6. It can be seen that, against both strong and weak adversaries, the adversary's classification model attacks the PSGD method more effectively than it attacks the method of the invention: its classification of the test data set is more accurate against PSGD. This means the invention makes it harder for an adversary to decrypt the local model without sacrificing the accuracy of the result, while still guaranteeing convergence. Adding Gaussian noise can also protect privacy to a certain extent, but it sacrifices the accuracy of the result. Taken together, the invention protects the local privacy of the client.
TABLE 4 comparison of the results
(Table 4 is rendered as an image in the original document.)

Claims (5)

1. A gradient descent method for protecting local privacy, oriented to cross-silo federated learning, characterized by comprising the following steps:
step 1: client initialization, namely each client initializes its local model parameters and randomly generates an initial value for its scalar parameter;
step 2: each client executes a weight strategy to select its weight parameters;
step 3: each client broadcasts its trained model update parameters to the adjacent clients it can communicate with, simultaneously receives the model parameters broadcast to it by its in-neighbor clients, and aggregates its local model parameters with the received model update parameters of the neighbor clients;
step 4: the client updates its local model parameters;
step 5: each client executes the gradient descent operation and updates its gradient descent parameters;
step 6: repeat steps 3-5 until the model update parameters transmitted in the network no longer change, i.e. the clients have computed a consistent shared optimal model; training then stops and the loop exits; wherein:
in step 3, each client broadcasts the trained model update parameters to the adjacent communicable clients while receiving the model parameters broadcast to it by its in-neighbors; specifically, after client i updates its model, it uploads the trained model parameters p_j,i(t)x_i(t), p_j,i(t)y_i(t) to each neighbor client j in the network; this process uploads only these weighted model parameters, not all parameters, while simultaneously receiving the model parameters p_i,j(t)x_j(t), p_i,j(t)y_j(t) from each in-neighbor client j;
in step 3, the local model parameters are aggregated with the received model update parameters of the neighbor clients in order to consider the model parameters of multiple clients jointly, which benefits model optimization; the aggregation results are computed as:
z_i(t+1) = Σ_{j ∈ N_i ∪ {i}} p_i,j(t) x_j(t)
y_i(t+1) = Σ_{j ∈ N_i ∪ {i}} p_i,j(t) y_j(t)
where N_i denotes the set of in-neighbors of client i, and z_i is the weighted aggregate of all received model parameters together with client i's own model parameters.
2. The gradient descent method according to claim 1, wherein each client in step 1 initializes local model parameters, specifically: if there are n clients in the network, each client selects an algorithm for training its local model; let x_i be the gradient descent parameter of client i, w_i its local model parameter, and y_i(0) the initial value of its scalar parameter; in the starting round, client i initializes its local model parameters and sets x_i(0) = w_i(0), and the scalar values ζ' and β are arbitrarily chosen in the interval (0, 1) so that 0 < ζ' <= β < 1; and, to prevent an adversary from decrypting more parameter information by knowing the initial value of the client scalar parameter y_i(0), client i randomly generates the initial value of y_i(0) from a distribution over a strictly positive range.
3. The gradient descent method according to claim 1, wherein the weight strategy of step 2 is: let p_j,i(t) be the model weight parameter of round t used when client i sends its partial model parameters to neighbor j, and let d_i(t) be the number of out-neighbors of client i; when the number of nodes that can receive the messages sent by client i in round t satisfies d_i(t) >= 1, client i selects a weight parameter p_i,i(t) from a distribution over the range [ζ', β], computes (1 - p_i,i(t)) / d_i(t), and assigns this value to p_j,i(t), the weight with which client i sends the message x_i(t), y_i(t) to neighbor j, noting that the weights assigned by client i sum to 1; otherwise, the weight parameter p_i,i(t) takes the value 1.
4. The gradient descent method according to claim 1, wherein step 4 is specifically: the next round's model parameters are obtained from the aggregation result; the calculation formula is:
w_i(t+1) = z_i(t+1) / y_i(t+1)
where w_i is the local model parameter, y_i is the client scalar parameter, and z_i is the weighted aggregate of all received model parameters together with client i's own model parameters.
5. The gradient descent method according to claim 1, wherein step 5 updates the gradient descent parameters using the aggregation result and the next round's model parameters; the calculation formula is:
x_i(t+1) = z_i(t+1) - η ∇l_i(w_i(t+1))
where η is the learning rate, ∇l_i denotes the gradient of the loss function, and l_i is the loss function.
CN202110698626.4A 2021-06-23 2021-06-23 Gradient descent method for protecting local privacy and oriented to cross-silo federated learning Active CN113449319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110698626.4A CN113449319B (en) 2021-06-23 2021-06-23 Gradient descent method for protecting local privacy and oriented to cross-silo federated learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110698626.4A CN113449319B (en) 2021-06-23 2021-06-23 Gradient descent method for protecting local privacy and oriented to cross-silo federated learning

Publications (2)

Publication Number Publication Date
CN113449319A CN113449319A (en) 2021-09-28
CN113449319B true CN113449319B (en) 2022-08-19

Family

ID=77812277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110698626.4A Active CN113449319B (en) 2021-06-23 2021-06-23 Gradient descent method for protecting local privacy and oriented to cross-silo federated learning

Country Status (1)

Country Link
CN (1) CN113449319B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115081024B (en) * 2022-08-16 2023-01-24 杭州金智塔科技有限公司 Decentralized business model training method and device based on privacy protection
CN116822647B (en) * 2023-05-25 2024-01-16 大连海事大学 Model interpretation method based on federal learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109711529A (en) * 2018-11-13 2019-05-03 中山大学 A kind of cross-cutting federal learning model and method based on value iterative network
CN111104731A (en) * 2019-11-19 2020-05-05 北京集奥聚合科技有限公司 Graphical model full-life-cycle modeling method for federal learning
CN111552986A (en) * 2020-07-10 2020-08-18 鹏城实验室 Block chain-based federal modeling method, device, equipment and storage medium
AU2021102261A4 (en) * 2021-04-29 2021-06-17 Southwest University Density-based distributed stochastic gradient federated learning algorithm to Byzantine attack

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109711529A (en) * 2018-11-13 2019-05-03 中山大学 A kind of cross-cutting federal learning model and method based on value iterative network
CN111104731A (en) * 2019-11-19 2020-05-05 北京集奥聚合科技有限公司 Graphical model full-life-cycle modeling method for federal learning
CN111552986A (en) * 2020-07-10 2020-08-18 鹏城实验室 Block chain-based federal modeling method, device, equipment and storage medium
AU2021102261A4 (en) * 2021-04-29 2021-06-17 Southwest University Density-based distributed stochastic gradient federated learning algorithm to Byzantine attack

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
An efficient asynchronous federated learning mechanism for edge computing; Lu Xiaofeng et al.; Journal of Computer Research and Development (计算机研究与发展); 2020-12-31; full text *

Also Published As

Publication number Publication date
CN113449319A (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN113449319B (en) Gradient descent method for protecting local privacy and oriented to cross-silo federated learning
CN105187200A (en) Method For Generating A Key In A Network And User On A Network And Network
US9565559B2 (en) Method and system for preserving privacy during data aggregation in a wireless sensor network
Liu et al. Identifying malicious nodes in multihop IoT networks using diversity and unsupervised learning
Shah et al. Shapely value perspective on adapting transmit power for periodic vehicular communications
He et al. Learning-based wireless powered secure transmission
CN107005378A (en) Technology for improving control channel capacity
Prathapchandran et al. A trust-based security model to detect misbehaving nodes in Internet of Things (IoT) environment using logistic regression
Chung et al. On the capacity of overlay cognitive radios with partial cognition
Xu et al. Detrust-fl: Privacy-preserving federated learning in decentralized trust setting
Elmahallawy et al. Secure and efficient federated learning in LEO constellations using decentralized key generation and on-orbit model aggregation
CN101478751A (en) Energy optimized safe routing method
Wang et al. Trust and independence aware decision fusion in distributed networks
CN105407090A (en) Sensing original data safety protection method supporting data processing
CN112073976B (en) User general grouping method in non-orthogonal multiple access based on machine learning
Liu et al. Artificial noise design for discriminatory channel estimation in wireless MIMO systems
Yan et al. Toward secure and private over-the-air federated learning
Sen et al. An attack on privacy preserving data aggregation protocol for wireless sensor networks
Balakrishnan et al. A novel anomaly detection algorithm for WSN
Yang et al. GA based user matching with optimal power allocation in D2D underlaying network
Jose et al. Integrity protecting and privacy preserving data aggregation protocols in wireless sensor networks: a survey
Akyol et al. Signaling games in networked cyber-physical systems with strategic elements
Balakrishnan et al. An enhanced iterative filtering technique for data aggregation in WSN
Zhao et al. Spectrum tomography attacks: Inferring spectrum allocation mechanisms in multicarrier systems
Restuccia et al. Generalized wireless adversarial deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant