CN116094993B - Federated learning secure aggregation method suitable for edge computing scenarios - Google Patents

Federated learning secure aggregation method suitable for edge computing scenarios

Info

Publication number
CN116094993B
CN116094993B (application CN202211657554.XA)
Authority
CN
China
Prior art keywords
terminal
model
edge
aggregation
terminals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211657554.XA
Other languages
Chinese (zh)
Other versions
CN116094993A (en)
Inventor
王瑞锦
李雄
张凤荔
周世杰
王金波
赖金山
周潼
程帆
李嘉坤
孙鹏钊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202211657554.XA
Publication of CN116094993A
Application granted
Publication of CN116094993B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/48: Routing tree calculation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/602: Providing cryptographic facilities or services
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218: Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/04: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Bioethics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a federated learning secure aggregation method suitable for edge computing scenarios, comprising the following steps: (1) the edge node modifies the communication topology between terminals from a fully connected graph to a terminal connectivity topology based on a minimum spanning tree; (2) each terminal trains a federated learning model with its local data and communicates with its neighbor terminals by broadcasting keys along the modified terminal connectivity topology; (3) each terminal computes a mask used to encrypt its model gradient; (4) the edge node receives the encrypted model gradients transmitted by the terminals and performs local aggregation; (5) the cloud computing processing center receives the locally aggregated model gradients, aggregates them again into a global aggregation model, and delivers the global model to the edge nodes to serve the terminals. The method addresses privacy leakage in federated learning, avoids a large amount of extra computation and communication overhead, and improves the convergence speed of the model.

Description

Federated learning secure aggregation method suitable for edge computing scenarios
Technical Field
The invention relates to the technical field of edge computing, and in particular to a federated learning secure aggregation method suitable for edge computing scenarios.
Background
With the popularity of mobile smart devices and the development of wireless communication technologies such as 5G, many computation-intensive, latency-sensitive applications have emerged, such as online immersive gaming, augmented reality, and video streaming analysis. Because conventional cloud computing cannot meet the low-latency requirements of these applications, Satyanarayanan et al. proposed a new computing model called edge computing, which offloads a large number of computing tasks from the cloud to edge nodes closer to the user, such as Wi-Fi access points and base stations, and can protect data privacy more effectively.
Federated learning (FL), proposed by Google, is an important technical approach to privacy protection in distributed machine learning. On the one hand, joint modeling should utilize as much data as possible; on the other hand, regulators and society place increasingly strict demands on privacy protection. Federated learning resolves this dilemma by keeping data in place and making it available without being visible. McMahan et al. proposed the federated averaging algorithm FedAvg (Federated Averaging), but it assigns the same workload to every terminal. In practical scenarios, however, different terminals have different available computing resources and highly heterogeneous data. To address this, T. Li et al. proposed the improved FedProx algorithm, which lets each terminal perform a variable amount of work according to its available computing resources, so that an overloaded terminal is not forced to drop out. Karimireddy et al. proposed the improved SCAFFOLD algorithm for the heterogeneous-data problem: when terminal data are highly heterogeneous, it prevents the global model from drifting toward local optima and converges faster than FedAvg.
However, federated learning does not completely solve the privacy-leakage problem. To address it, M. Abadi et al. proposed applying differential privacy to the stochastic gradient descent (SGD) algorithm of deep learning to protect user data. With the development of federated learning and rising privacy-protection requirements, Geyer et al. proposed applying differential privacy within federated learning so that terminal data are not leaked during training. Another line of work deploys secure multi-party computation (MPC) in federated learning. Song et al. formulated the problem of training a machine learning model with privacy protection over multiple data sets (abbreviated TMMPP) and used MPC to address security in distributed machine learning. G. Xu et al. proposed the verifiable framework VerifyNet to ensure the confidentiality and integrity of the model; Wang R. et al. proposed a privacy-preserving federated learning framework for the medical Internet of Things under edge computing; and Bonawitz et al. proposed the concept of secure aggregation (SA), which uses cryptographic primitives such as key sharing and encryption/decryption within the federated learning framework to keep terminal privacy from being violated.
However, because the computation and communication overheads introduced by key sharing and encryption/decryption in this framework are large, the global model often converges slowly. To this end, Bell et al. proposed using sparse graphs to reduce terminal connectivity and thereby the computation and communication overhead of secure aggregation; Choi et al. proposed the CCESA algorithm based on sparse Erdos-Renyi graphs, which lowers the degree of the communication graph so that a terminal shares its key only with its neighbor terminals, reducing communication and computation overhead.
Some shortcomings remain. First, both SA and CCESA broadcast public keys and key shares through the cloud server, which greatly increases its communication and computation overhead. Second, although CCESA changes the recipients of a terminal's key shares from all other terminals to its neighbors, each terminal still needs at least t neighbors, since key shares must be collected from neighbors to reconstruct the key. Third, in scenarios where the cloud server is far from the terminals, the communication delay between them is high, and broadcasting public keys and key shares through the cloud server costs much extra time, slowing down the model convergence of the whole system.
In summary, how to solve the privacy-leakage problem in federated learning while avoiding a large amount of extra computation and communication overhead and improving the convergence speed of the model is a problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a federated learning secure aggregation method suitable for edge computing scenarios, which solves the privacy-leakage problem in federated learning, avoids a large amount of extra computation and communication overhead, and improves the convergence speed of the model.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
the federal learning security aggregation method suitable for the edge computing scene comprises the following steps:
(1) The edge node in the middle layer of the edge computing architecture takes the communication delays between the terminals it serves as the weights of the edges of the terminals' fully connected topology graph, and modifies the communication topology among all terminals participating in federated learning from the fully connected graph to a terminal connectivity topology based on a minimum spanning tree;
(2) Each terminal trains a federated learning model with local data and communicates with its neighbor terminals by broadcasting keys along the modified terminal connectivity topology; the edge node then collects each terminal's key shares;
(3) Each terminal uses the symmetric keys it shares with its neighbor terminals to generate random vectors through a pseudo-random generator (PRG), uses them as masks to encrypt its model gradient, and then transmits the encrypted model gradient to the edge node;
(4) The edge node receives the encrypted model gradients transmitted by the terminals, eliminates the masks using the collected terminal key shares, and performs local aggregation to obtain a locally aggregated model gradient;
(5) The edge node sends the locally aggregated model gradient to the cloud computing processing center at the top layer of the edge computing architecture;
(6) The cloud computing processing center receives the locally aggregated model gradients from the edge nodes, aggregates them again into a global aggregation model, and delivers the global model to the edge nodes to serve the terminals.
In step (1), the terminal connectivity topology based on the minimum spanning tree is constructed as follows: the edge node repeatedly selects the edge with the minimum weight using a minimum spanning tree algorithm, while ensuring that the currently selected edge forms no loop with the already selected edges, until all terminals lie in one connected component; the communication topology among all terminals participating in federated learning is thus modified from a fully connected graph to a terminal connectivity topology based on the minimum spanning tree.
Further, in step (2), the training process updates parameters by stochastic gradient descent (SGD):

w_{t,k} = w_{t-1,k} − η·∇F_k(w_{t-1,k})

where w_{t,k} is the updated parameter of terminal k in round t, w_{t-1,k} is the parameter of round (t−1), η is the learning rate, and ∇F_k(w) is the gradient of the objective function F_k(w) with respect to the parameter w.
Specifically, in step (2), the terminal communicates with its neighbor terminals as follows:
(a) Using a t-out-of-n algorithm, split the private key s_i^sk and the random number bu_i into shares: SS.share(t, bu_i) → {bu_{i,j}}, SS.share(t, s_i^sk) → {s_{i,j}^sk}, j ∈ neighbor_i(j), where SS.share() denotes the key sharing protocol and neighbor_i(j) denotes the set of neighbor terminals of terminal i;
(b) Encrypt {bu_{i,j}} and {s_{i,j}^sk} with the public key s_j^pk of the other terminal: e_{i,j} = Enc_{s_j^pk}(bu_{i,j} ‖ s_{i,j}^sk), where e_{i,j} denotes the ciphertext generated by encrypting bu_{i,j} and s_{i,j}^sk, and Enc_{s_j^pk}() denotes the encryption algorithm under terminal j's key;
(c) According to the modified terminal connectivity topology, send the shares {i, j, e_{i,j}} and the public key s_i^pk to the neighbor terminals (Broadcast());
(d) On receiving the share set {j, i, e_{j,i}} from a neighbor terminal, store the share e_{j,i} addressed to itself, and forward {j, i, e_{j,i}} and the public key s_j^pk to the other neighbor terminals (Transmit()).
Further, in step (4), the masks are eliminated and local aggregation is performed with the following formula:

Θ_edge = Σ_{i∈V} (n_i/n) · (Θ_i′ − PRG(bu_i) − Σ_{j∈neighbor_i(j)} PRG(s_{i,j}))

where Θ_edge is the locally aggregated model gradient, Θ_i′ is the masked model gradient of terminal i, n_i is the amount of data local to the terminal, and n is the total amount of data of all terminals.
Still further, in step (4), the model gradients are locally aggregated with the following formula:

w_t = Σ_{k∈V} (n_k/n) · Θ_k′

where w_t is the masked model gradient of round t.
Preferably, the terminal is an Internet of Things device.
Preferably, the edge node is a base station or a Wi-Fi access point.
Compared with the prior art, the invention has the following beneficial effects:
In the invention, the edge node constructs a terminal connectivity topology based on a minimum spanning tree from the communication delays between terminals, greatly reducing the degree of the terminal communication graph; meanwhile, each terminal generates symmetric keys and computes masks only with its neighbor terminals, reducing the computation overhead of the system.
Furthermore, since the invention uses the minimum spanning tree as the terminal connectivity topology, both key distribution and key sharing are carried out by forwarding broadcasts through neighbor terminals rather than through the edge node, which greatly reduces the workload and communication overhead of the edge node.
Moreover, the minimum spanning tree structure minimizes the delay of key broadcasting, effectively improving the convergence speed of the model. Extensive experimental results show that, compared with conventional secure aggregation methods, when the number of terminals is 10 the proposed method reduces the federated learning running time by at least 28.2% without lowering the security level or the model accuracy.
Drawings
FIG. 1 is a diagram of a system architecture for implementing an embodiment of the present invention.
Fig. 2 is a schematic diagram of a process of modifying a minimum spanning tree based terminal connectivity topology in an embodiment of the present invention.
Fig. 3 is a schematic diagram of a process of broadcasting by a terminal in an embodiment of the present invention.
FIG. 4 compares the system running time of an embodiment of the present invention with the conventional secure aggregation schemes CCESA and SA.
FIG. 5 compares the model accuracy of an embodiment of the present invention with the conventional secure aggregation schemes CCESA and SA.
FIG. 6 compares the security of an embodiment of the present invention with the conventional secure aggregation schemes CCESA and SA.
FIG. 7 shows a terminal connectivity topology based on a minimum spanning tree generated from the communication delays between terminals in an embodiment of the present invention.
Detailed Description
The invention is further illustrated by the following description and embodiment, to which it is by no means limited.
Examples
This embodiment provides a federated learning secure aggregation method suitable for edge computing scenarios, which solves the privacy-leakage problem in federated learning, avoids a large amount of extra computation and communication overhead, and improves the convergence speed of the model.
The following defines the overall three-layer federated learning secure aggregation framework in the edge computing scenario of this embodiment, including the overall framework diagram of the system and the definitions of the entities participating in federated learning.
Federated learning is a paradigm of distributed machine learning. On this basis, this embodiment introduces federated learning into edge computing and defines a "cloud-edge-end" three-layer model. As a whole, each terminal trains the model locally and transmits its encrypted model gradient to the edge node, which performs secure aggregation on the encrypted gradients to obtain a locally aggregated model. The edge node then transmits the local aggregation to the cloud computing center for further aggregation into a global model. A schematic of federated learning in the "cloud-edge-end" three-layer architecture is shown in FIG. 1.
The three-layer edge computing architecture contains three types of entities. The first layer is the cloud computing processing center, which aggregates models from the edge nodes and issues the final aggregated model back to them. The second layer consists of the edge nodes, which aggregate the local models from the terminals. The third layer consists of the terminals, where the user data are generated; each terminal trains a local model on its user data and uploads the trained model gradient to the edge node for aggregation.
1. Terminal
Internet of Things devices such as mobile phones and personal computers, located at the end layer of the three-layer edge computing architecture. In this embodiment, the terminal's main functions are to train the model with local data, communicate with neighbor terminals by broadcasting keys, compute masks and encrypt the model gradient, and send the result to the edge node.
2. Edge node
An edge server in the middle layer of the three-layer edge computing architecture, such as a base station or Wi-Fi access point. In this embodiment, the edge node acts as the local aggregation center of federated learning: it receives the encrypted gradients transmitted by the terminals, removes the masks and performs local aggregation, and finally sends the local model to the cloud for further aggregation.
3. Cloud computing processing center
Located at the top layer of the three-layer edge computing architecture, its main functions are to receive the locally aggregated model gradients from the edge nodes, aggregate them again into a global model, and finally deliver the global model to the edge nodes to serve the terminals.
In the federated learning model of this embodiment, the proposed secure aggregation scheme is introduced into the "edge-end" layers of edge computing, so as to minimize the edge-node overhead while protecting the data privacy of the terminals. Let the model objective function of terminal k in edge-end federated learning be F_k(w); the objective function of the edge node is then

F(w) = Σ_{k∈V} (n_k/n) · F_k(w)   (1)

Meanwhile, under the constraints that the global model converges, accuracy remains high, and data privacy is well protected, the extra communication and computation overhead of the system must be kept small. Denoting the computation overhead by a and the communication overhead by b, the problem of the model is defined as:

min_w F(w), s.t. min(a + b)   (2)
Parameter definitions
The definitions of some parameters used in this embodiment are given in Tables 1 and 2.
Table 1 Parameter definitions
No. | Parameter | Description
1 | V | Set of terminals participating in federated learning
2 | neighbor_i(j) | Set of neighbor terminals of terminal i
3 | t_{i,j} | Transmission delay between nodes i and j
4 | s_i^pk | Public key of terminal i
5 | s_i^sk | Private key of terminal i
6 | bu_i | Random number of terminal i used for double masking
7 | bu_{i,j} | Share of bu_i generated by terminal i and sent to terminal j
8 | s_{i,j}^sk | Share of s_i^sk generated by terminal i and sent to terminal j
9 | Θ_i | Original model gradient of terminal i
10 | Θ_i′ | Masked model gradient of terminal i
11 | e_{i,j} | Ciphertext generated by encrypting bu_{i,j} and s_{i,j}^sk
Table 2 Algorithm definitions
Considering that in edge computing some computation-heavy tasks of different terminals, such as model training and inference, may be offloaded to the edge node, the overhead of the edge node is generally large. If the heavy task of broadcasting terminal information during secure aggregation were also placed on the edge node, the edge node might go down, affecting the training of the federated learning model across the whole three-layer edge computing architecture. This embodiment therefore keeps that communication and computation load local: the terminal's public key and key shares are not broadcast through the edge node but by the terminals themselves.
As can be seen from FIG. 1, terminals i and j participating in federated learning communicate with each other and report the communication delay t_{i,j} to the edge node, which uses a minimum spanning tree algorithm to modify the communication topology between terminals from the fully connected graph G to the terminal connectivity topology G′ based on the minimum spanning tree. Following this graph, terminals forward messages to one another to broadcast their keys. Each terminal uses the symmetric keys shared with its neighbor terminals to generate random vectors through a pseudo-random generator (PRG) as gradient masks, and transmits the masked gradient to the edge node for local model aggregation and decoding. Finally, the edge nodes transmit the aggregated model gradients to the cloud computing center for further aggregation into the global model, which is issued to the edge nodes in the next iteration and serves the terminals.
Each step of the flow of this embodiment is described in detail below.
First, the edge node repeatedly selects the edge with the minimum weight using a minimum spanning tree algorithm, ensuring that the currently selected edge forms no loop with the already selected edges, until all terminals lie in one connected component; the communication topology among all terminals participating in federated learning is thus modified from a fully connected graph to a terminal connectivity topology based on the minimum spanning tree.
As shown in FIG. 2, on the left is the connectivity graph between terminals in an edge computing scenario, with 7 terminals (the nodes of the graph), V = {a, b, c, d, e, f, g}. A line indicates that two terminals can communicate (the edges of the graph), E = {e_{a,b}, e_{a,d}, e_{a,f}, e_{b,c}, e_{b,d}, e_{c,e}, e_{d,e}, e_{d,f}, e_{e,f}, e_{e,g}, e_{f,g}}, and the number on an edge is the communication delay (the weight of the edge). On the right is the terminal connectivity topology selected by the minimum spanning tree algorithm. The algorithm starts by selecting the edge with the minimum weight in the left graph, which is e_{f,g} with weight 3 and endpoints f and g; e_{f,g} and the terminals f and g are added to the selected terminal set V′. It then keeps selecting the minimum-weight edge among the remaining edges, say e_{x,y} with endpoints x and y. As long as the two endpoints x and y lie in two different connected components (a maximal connected subgraph of an undirected graph is called a connected component), the edge can be selected into the minimum spanning tree:
if V has only one connected component, x, y need to satisfy one of three conditions:
If V′ contains more than one connected component, write V′ = V_1 ∪ V_2 ∪ … ∪ V_m, where V_i (i ∈ [1, m]) is a sub-connected component of V′ not connected to any other V_j (j ∈ [1, m], j ≠ i). Then x and y need to satisfy one of four conditions: x and y lie in two different sub-components V_i and V_j; x lies in some V_i and y ∉ V′; y lies in some V_j and x ∉ V′; or neither x nor y is in V′.
The minimum-weight edges of the original terminal connectivity graph are selected in turn according to this rule until all terminals lie in one connected component. In an edge computing scenario with n terminals, the edge node selects a suitable set of n−1 communication paths according to the communication delays between terminals to generate the terminal connectivity topology based on the minimum spanning tree.
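The selection rule above is Kruskal's algorithm. A minimal sketch (not the patented implementation) using the networkx library is given below; the delay values are hypothetical stand-ins for the measured t_{i,j} of the FIG. 2 example, of which only the weight 3 of e_{f,g} is stated in the text:

```python
import networkx as nx

# Edge set of the FIG. 2 example; weights are assumed delays except e_{f,g} = 3.
delays = {
    ("a", "b"): 7, ("a", "d"): 5, ("a", "f"): 9,
    ("b", "c"): 8, ("b", "d"): 9, ("c", "e"): 5,
    ("d", "e"): 15, ("d", "f"): 6, ("e", "f"): 8,
    ("e", "g"): 9, ("f", "g"): 3,
}

G = nx.Graph()
for (i, j), t in delays.items():
    G.add_edge(i, j, weight=t)  # communication delay t_{i,j} as edge weight

# Kruskal: repeatedly take the lightest edge whose endpoints lie in different
# connected components, until the kept n-1 edges connect all n terminals.
mst = nx.minimum_spanning_tree(G, algorithm="kruskal")
print(sorted(mst.edges(data="weight")))  # the n-1 communication paths kept
```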
Then, each terminal trains the federated learning model with its local data and communicates with its neighbor terminals by broadcasting keys along the modified terminal connectivity topology; the edge node then collects each terminal's key shares.
This embodiment uses FedAvg as the federated learning algorithm on the edge side. Specifically, each terminal node performs several local training iterations to form a local model; the training process updates parameters by stochastic gradient descent (SGD):

w_{t,k} = w_{t-1,k} − η·∇F_k(w_{t-1,k})

where w_{t,k} is the updated parameter of terminal k in round t, w_{t-1,k} is the parameter of round (t−1), η is the learning rate, and ∇F_k(w) is the gradient of the objective function F_k(w) with respect to the parameter w.
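The local phase can be sketched in PyTorch as follows; the model, data loader, and loss function are assumptions (the embodiment fixes only local_epochs = 3 and η = 0.001), so this is an illustrative sketch rather than the patented implementation:

```python
import torch

def local_training(model, loader, epochs=3, lr=0.001):
    """One terminal's local phase, applying w_{t,k} = w_{t-1,k} - eta*grad F_k(w)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()  # assumed loss; not fixed by the text
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()  # compute grad F_k(w)
            opt.step()                       # SGD update of the local model
    return model
```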
The key a terminal uses in the secure aggregation process is sent to its neighbor terminals, which forward it. During forwarding, a neighbor terminal never sends the key back to the terminal it received it from, which prevents the key from propagating indefinitely.
FIG. 3 shows the key broadcasting procedure of terminal d: d first sends its key to its neighbor terminals a and e; after receiving it, a and e help d forward the key to achieve the broadcast effect, i.e., a forwards it to b and e forwards it to g, but neither sends it back to d. The whole process needs no intervention from the edge node, which reduces the edge node's workload. A specific broadcast algorithm based on neighbor-terminal forwarding follows:
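Since Algorithms 1 and 2 themselves are not reproduced in this text, the following is a minimal sketch of the two routines under stated assumptions: `tree` maps each terminal to its neighbor set in G′, and `send(dst, msg)` is an assumed network primitive.

```python
def broadcast(tree, send, my_id, payload):
    """Algorithm 1 (sketch): in the initial stage, a terminal sends its
    public key and key shares to every neighbor in the spanning tree."""
    for nb in tree[my_id]:
        send(nb, (my_id, payload))

def transmit(tree, send, my_id, last_id, payload):
    """Algorithm 2 (sketch): on receiving data from neighbor `last_id`,
    keep what is addressed to this terminal, then forward to all other
    neighbors. Never sending back to `last_id` makes the broadcast
    terminate on the tree, with no edge-node involvement."""
    for nb in tree[my_id]:
        if nb != last_id:
            send(nb, payload)
```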
Algorithm 1 is used by a terminal to send broadcast data in the initial stage, i.e., the terminal sends its public key and key shares to all neighbor terminals; Algorithm 2 is used to receive data sent by a neighbor terminal (with index last_id) and to help forward the data to all of its other neighbor terminals excluding last_id. The steps of sharing the broadcast key are as follows:
For the terminal:
1. Split the private key s_i^sk and the random number bu_i into n shares using the t-out-of-n algorithm: SS.share(t, bu_i) → {bu_{i,j}}, SS.share(t, s_i^sk) → {s_{i,j}^sk}, j ∈ neighbor_i(j);
2. Encrypt {bu_{i,j}} and {s_{i,j}^sk} with the public keys of the other terminals: e_{i,j} = Enc_{s_j^pk}(bu_{i,j} ‖ s_{i,j}^sk);
3. According to G′, send the shares {i, j, e_{i,j}} and the public key s_i^pk to the neighbor terminals (Broadcast());
4. Receive the share set {j, i, e_{j,i}} from a neighbor terminal, store the share e_{j,i} addressed to itself, and forward {j, i, e_{j,i}} and the public key s_j^pk to the other neighbor terminals (Transmit()).
For the edge node:
1. Collect the public keys {s_i^pk, i ∈ V} from the terminals.
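The SS.share()/SS.recon() primitives are named but not reproduced here; one standard instantiation is Shamir secret sharing over a prime field, sketched below (the field size and the split into n numbered shares are assumptions):

```python
import random

P = 2**127 - 1  # prime field modulus (assumed; any prime larger than the secret works)

def ss_share(t, secret, n):
    """SS.share(t, secret) sketch: n shares, any t of which reconstruct."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(j, sum(c * pow(j, k, P) for k, c in enumerate(coeffs)) % P)
            for j in range(1, n + 1)]

def ss_recon(shares):
    """SS.recon sketch: Lagrange interpolation at x = 0 from t shares."""
    secret = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

# e.g. any 3 of 5 shares recover bu_i:
# shares = ss_share(3, 123456789, 5); assert ss_recon(shares[:3]) == 123456789
```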
Next, each terminal uses the symmetric keys shared with its neighbor terminals to generate random vectors through the pseudo-random generator (PRG), uses them as masks to encrypt its model gradient, and transmits the encrypted model gradient to the edge node. The edge node receives the encrypted model gradients, eliminates the masks using the collected terminal key shares, and performs local aggregation to obtain the locally aggregated model gradient.
After local training is completed, the terminal encrypts its model gradient with the mask and transmits the encrypted gradient to the edge node for local aggregation:

w_t = Σ_{k∈V} (n_k/n) · Θ_k′

where n_k is the amount of data local to terminal k and n is the total amount of data of all terminals. The resulting w_t is the masked model gradient of round t; in the next step the edge node obtains the key shares from the terminals and reconstructs the keys to remove the masks. The specific algorithm is as follows:
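Since that per-round algorithm is not reproduced in this text, the following numpy sketch illustrates the edge node's unmask-and-aggregate step under two assumptions: the PRG is a seeded generator, and every mask term enters with a plus sign, as in the simplified mask formulas (3) and (4) later in the description:

```python
import numpy as np

def prg(seed, m):
    """PRG(seed) sketch: length-m mask vector (concrete PRG not fixed by the text)."""
    return np.random.default_rng(seed).standard_normal(m)

def unmask_and_aggregate(masked, sizes, bu, pair_keys, neighbors, m):
    """Edge node: strip PRG(bu_i) and the pairwise masks PRG(s_{i,j})
    reconstructed from the collected shares, then FedAvg-weight by n_k/n."""
    n = sum(sizes.values())
    theta_edge = np.zeros(m)
    for i, theta_masked in masked.items():
        theta = theta_masked - prg(bu[i], m)           # remove self mask
        for j in neighbors[i]:
            theta = theta - prg(pair_keys[(i, j)], m)  # remove pairwise masks
        theta_edge += (sizes[i] / n) * theta           # weighted local FedAvg
    return theta_edge
```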
Finally, the cloud computing processing center receives the locally aggregated model gradients from the edge nodes, aggregates them again into a global aggregation model, and issues the global model to the edge nodes to serve the terminals.
In summary, the procedure can be stated as follows:
For the terminal:
1. Decrypt e_{j,i} with its own private key s_i^sk to obtain (bu_{j,i}, s_{j,i}^sk);
2. Send all shares {(bu_{j,i}, s_{j,i}^sk)} to the edge node.
For the edge node:
1. Collect all key shares {(bu_{j,i}, s_{j,i}^sk)};
2. Reconstruct with t shares: SS.recon({bu_{i,j}}) → bu_i, SS.recon({s_{i,j}^sk}) → s_i^sk;
3. Aggregate with the federated averaging algorithm FedAvg: w_t = Σ_{k∈V} (n_k/n) · Θ_k′;
4. Remove the masks: Θ_edge = w_t − Σ_{i∈V} (n_i/n) · (PRG(bu_i) + Σ_{j∈neighbor_i(j)} PRG(s_{i,j})).
For the cloud computing processing center:
1. Collect the model gradients from the edge nodes;
2. Compute the aggregated model gradient: Θ_cloud = Σ Θ_edge;
3. Issue the global model Θ_cloud to the edge side and perform the next iteration.
The scheme of this embodiment is experimentally compared with conventional schemes below, along three dimensions: running time, model accuracy, and security, against the conventional secure aggregation schemes CCESA and SA. The three secure aggregation schemes for federated learning were tested for running time, accuracy, and security using the ResNet and Vgg16 models on the CIFAR-10 and CIFAR-100 datasets with different numbers of terminals. All code is implemented in Python with the PyTorch framework and runs on GPUs.
1. Experimental configuration
The three secure aggregation schemes above were tested on the CIFAR-10 and CIFAR-100 datasets using the ResNet and Vgg16 models. Several experiments were performed with different numbers of terminals, n = 5, n = 7, and n = 10; the threshold for reconstructing the key t = n/2 + 1; the number of local training iterations in edge-end federated learning local_epochs = 3; the number of global iterations global_epochs = 60; the number of samples per training step batch_size = 32; and the learning rate η = 0.001.
2. Analysis of experimental results
(1) Run-time analysis
Table 3 compares, for n = 5, n = 7, and n = 10, the average global per-round training time of each secure aggregation scheme using the ResNet and Vgg16 models on the CIFAR-10 and CIFAR-100 datasets, including the broadcast shared-key time, the encryption/decryption time, and the local training time of a single terminal.
Table 3 Running time comparison
Table 3 shows the time of one round of global training for each secure aggregation scheme under different numbers of terminals, models, and datasets. From the perspective of the number of terminals, for the same secure aggregation scheme the global per-round training time grows as the number of terminals increases, because the overall mask computation of the terminals grows, and key broadcasting and key sharing take longer. From the perspective of the terminal topology, the global per-round training time of this embodiment is smaller than that of CCESA and SA whether the number of terminals is 5, 7, or 10, and the running-time gap widens as the number of terminals increases; this is especially evident on the CIFAR datasets. As can be seen from the table, on the CIFAR dataset EFLSAS reduces the average per-round running time by 16.24 and 17.42 seconds compared with CCESA and SA, respectively, with 5 terminals; by 35.29 and 39.63 seconds with 7 terminals; and by 47.73 and 77.07 seconds with 10 terminals.
FIG. 4 shows the running time of one round of global training on the CIFAR dataset using the Vgg16 model for each secure aggregation scheme when the number of edge-side terminals is 7 and 10.
As can be seen from FIG. 4, the system of this embodiment has the shortest running time, followed by the CCESA scheme and finally the SA scheme. Specifically, FIG. 4(a) shows the results of running the Vgg16 model on the CIFAR dataset with 7 terminals, and FIG. 4(b) the results with 10 terminals. In FIG. 4(a), this embodiment is clearly lower overall than the other two schemes, with the running time staying around 170 seconds; in FIG. 4(b) it stays around 196 seconds. The rise from 170 to 196 seconds is due to the larger number of terminals, which increases both computation and communication overhead. The running times of the CCESA and SA schemes do not differ much overall. By calculation, this embodiment reduces the running time on the CIFAR dataset by 18.9% and 28.2% compared with the SA scheme when the number of terminals is 7 and 10, respectively. The main reason is that the connectivity of the terminal topology based on the minimum spanning tree is low: the number of neighbor terminals is at the constant level O(1), and the overhead of computing the mask with the PRG is O(m), where m is the size of the vector data. In contrast, in SA the number of neighbors is at the O(n) level and the mask computation overhead is at the O(mn) level. Meanwhile, since the communication load is offloaded from the edge node to the terminals, the communication overhead of the edge node drops markedly, from O(n²+mn) in the SA scheme to O(n+mn). Moreover, the terminals broadcast keys along the minimum spanning tree structure, which greatly shortens the communication delay. As Table 3 shows, the larger the number of terminals, the larger the difference in system running time between the schemes.
(2) Accuracy analysis
FIG. 5 shows the model accuracy after each global round of the ResNet model on CIFAR-10 and of the Vgg16 model on CIFAR-100 when the number of terminals is 7.
As can be seen from FIG. 5, the model accuracies of the three secure aggregation schemes (this embodiment, CCESA, and SA) differ very little, i.e., the EFLSAS scheme proposed here preserves model accuracy while reducing the running time of federated learning. Specifically, each secure aggregation scheme essentially begins to converge to 85% within the first 20 rounds on CIFAR-10, and to 65% within the first 10 rounds on CIFAR-100.
(3) Security analysis
For security, the gradient leakage attack DLG proposed by L. Zhu et al. in 2019 was used to attack the different secure aggregation schemes as well as federated learning with FedAvg and no security measures. To perform the attack, a pair of pseudo-inputs and labels is first generated at random, followed by normal forward and backward propagation. After deriving the pseudo-gradients from the pseudo-data, the attack does not optimize the model weights as in typical training; instead it optimizes the pseudo-inputs and labels to minimize the distance between the pseudo-gradients and the real gradients, bringing the pseudo-data close to the original data by matching gradients. FIG. 6 shows the behavior of the different secure aggregation schemes and FedAvg in the face of the gradient leakage attack; the tests used the CIFAR dataset.
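A hedged PyTorch sketch of the DLG procedure just described follows (pseudo-data optimized to match a leaked gradient); the shapes, loss, and iteration count are assumptions, and soft-label cross entropy requires PyTorch 1.10+:

```python
import torch

def dlg_attack(model, true_grads, x_shape, n_classes, iters=300):
    """Optimize a dummy (input, label) pair so that its gradient matches the
    leaked one; with an unprotected gradient the dummy converges to the data."""
    dummy_x = torch.randn(x_shape, requires_grad=True)
    dummy_y = torch.randn(1, n_classes, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(
            model(dummy_x), torch.softmax(dummy_y, dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        diff = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
        diff.backward()  # gradient w.r.t. the dummy data, not the weights
        return diff

    for _ in range(iters):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```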
As can be seen from FIG. 6, once the gradient is revealed to an attacker, the attacker can essentially restore the original training data through the DLG gradient-leakage attack in about 30 iterations. In contrast, this embodiment, CCESA, and SA all remain equally secure through 270 iterations of the DLG attack. This demonstrates that, while guaranteeing model accuracy and security, this embodiment largely reduces the computation and communication costs of the terminals and edge nodes and shortens the overall convergence time of the model.
3. Performance analysis
Comparing the performance of this embodiment with the CCESA and SA schemes shows that it achieves reliable secure aggregation and outperforms the SA and CCESA algorithms in both computation and communication efficiency.
(1) System overhead analysis
FIG. 7 shows how the minimum connected subgraph is generated, i.e., the terminal connectivity topology based on the minimum spanning tree is generated from the communication delays between terminals.
According to FIG. 7, the mask formula of terminal a in the original SA scheme is:

Θ_a′ = Θ_a + PRG(bu_a) + PRG(s_{a,b}) + PRG(s_{a,c}) + PRG(s_{a,d}) + PRG(s_{a,e})   (3)

In this embodiment, the mask formula of terminal a is:

Θ_a′ = Θ_a + PRG(bu_a) + PRG(s_{a,d}) + PRG(s_{d,e})   (4)
According to equation (4), compared with federated learning without privacy protection measures, this embodiment achieves privacy protection with fewer resources than the SA scheme. Table 4 shows the computation and communication overhead of each algorithm derived through mathematical proof (n is the number of terminals and m is the size of the transmitted vector).
Table 4 Comparison of computation and communication overhead of the secure aggregation algorithms
(2) Terminal overhead analysis
Calculation overhead: o (n 2 +m). The terminal computing overhead is divided into the following: 1. since there are n-1 edges in the minimum spanning tree topology, all nodes need to perform 2 (n-1) key agreement protocols in total, on average each node performs 2 (n-1)/n key agreement protocols, and when n is large, this process is of a constant level, i.e., it takes O (1). 2. The use of the t-out-of-n algorithm to share the key and bu requires the expense of O (n 2). 3. K mask vectors of length m are generated with the key and bu as parameters of the randomizer PRG, k being a constant (number of neighbor terminals), and O (m) is consumed. O (n 2 +m) is required in total.
Communication overhead: o (m+n). The communication overhead of the terminal is divided into the following:
1) The terminal needs to transmit a vector of 1 communication delay with other terminals and receive the terminal topology.
2) In the worst case, n public keys are transmitted and n public keys are received.
3) In the worst case, 2 (n-1) shares of the key and bu are sent, and 2 (n-1) shares of the key and bu are received.
4) The edge node 2n keys and the shares of bu are sent.
5) An encryption gradient of length m is sent to the edge node. The total communication overhead is a 1+a2+2na3+2(2n-n)a4 +m, wherein a 1 is the bit number of the communication delay vector between the recording terminals, a 2 is the bit number of the topological graph, a 3 is the bit number of the public key, a 4 is the bit number of the key share, and m is the bit number of the gradient vector. The total communication overhead is O (m+n).
(3) Edge node overhead analysis
Calculation overhead: o (n 2 +m). The edge node computation overhead is divided into the following:
1) The edge node needs to reconstruct the key and bu for each end device, which requires the cost of O (n 2).
2) O (n 2) is required to generate the minimum spanning tree topology.
3) Generating a mask vector of length m with the key and bu as parameters for the randomizer PRG while eliminating the mask requires the expense of O (mn). O (n 2 +mn) is required in total.
Communication overhead: o (mn+n). The edge node communication overhead is divided into the following:
1) The edge node needs to receive n delay vectors, and needs to consume O (n).
2) N topologies are transmitted, which requires the expense of O (n).
3) Receiving 2n shares of keys and bu requires the expense of O (n).
4) Receiving n mask gradient vectors of length m requires the cost of O (mn). The total communication overhead is na 1+na2+2na3 +mn, wherein a 1 is the bit number of the communication delay vector between the recording terminals, a 2 is the bit number of the topological graph, a 3 is the bit number of the key share, and m is the bit number of the gradient vector. A total of O (mn+n) is required.
Through the reasonable design of the secure aggregation scheme, the invention not only minimizes the communication time of the system and the computation cost of the terminals, but also reduces the overall training time of the system while guaranteeing model accuracy and user data security. Compared with the prior art, the invention therefore represents a clear technical advance, with prominent substantive features and notable progress.
The above embodiment is only one of the preferred embodiments of the present invention and should not be used to limit its scope of protection; all insubstantial modifications or colorable variations made within the main design concept and spirit of the invention remain consistent with it, and the technical problems it addresses are all included within the scope of protection of the present invention.

Claims (3)

1. A federated learning secure aggregation method suitable for edge computing scenarios, characterized by comprising the following steps:
(1) The edge node in the middle layer of the edge computing architecture takes the communication delays between the terminals it serves as the weights of the edges of the terminals' fully connected topology graph, and modifies the communication topology among all terminals participating in federated learning from the fully connected graph to a terminal connectivity topology based on a minimum spanning tree; in this step, the terminal connectivity topology based on the minimum spanning tree is constructed as follows: the edge node repeatedly selects the edge with the minimum weight using a minimum spanning tree algorithm, while ensuring that the currently selected edge forms no loop with the already selected edges, until all terminals lie in one connected component, so that the communication topology among all terminals participating in federated learning is modified from the fully connected graph to the terminal connectivity topology based on the minimum spanning tree;
(2) Each terminal trains a federated learning model with local data and communicates with its neighbor terminals by broadcasting keys along the modified terminal connectivity topology; the edge node then collects each terminal's key shares; in this step, the training process updates parameters by stochastic gradient descent (SGD):

w_{t,k} = w_{t-1,k} − η·∇F_k(w_{t-1,k})

where w_{t,k} is the updated parameter of terminal k in round t, w_{t-1,k} is the parameter of round (t−1), η is the learning rate, and ∇F_k(w) is the gradient of the objective function F_k(w) with respect to the parameter w;
The terminal communicates with its neighbor terminals as follows:
(a) Using a t-out-of-n algorithm, split the private key s_i^sk and the random number bu_i into shares: SS.share(t, bu_i) → {bu_{i,j}}, SS.share(t, s_i^sk) → {s_{i,j}^sk}, j ∈ neighbor_i(j), where SS.share() denotes the key sharing protocol and neighbor_i(j) denotes the set of neighbor terminals of terminal i;
(b) Encrypt {bu_{i,j}} and {s_{i,j}^sk} with the public key s_j^pk of the other terminal: e_{i,j} = Enc_{s_j^pk}(bu_{i,j} ‖ s_{i,j}^sk), where e_{i,j} denotes the ciphertext generated by encrypting bu_{i,j} and s_{i,j}^sk, and Enc_{s_j^pk}() denotes the encryption algorithm under terminal j's key;
(c) According to the modified terminal connectivity topology, send the shares {i, j, e_{i,j}} and the public key s_i^pk to the neighbor terminals (Broadcast());
(d) On receiving the share set {j, i, e_{j,i}} from a neighbor terminal, the terminal stores the share e_{j,i} addressed to itself and forwards {j, i, e_{j,i}} and the public key s_j^pk to the other neighbor terminals (Transmit());
(3) Each terminal uses the symmetric keys it shares with its neighbor terminals to generate random vectors through a pseudo-random generator PRG, uses them as masks to encrypt its model gradient, and then transmits the encrypted model gradient to the edge node;
(4) The edge node receives the encrypted model gradients transmitted by the terminals, eliminates the masks using the collected terminal key shares, and performs local aggregation to obtain the locally aggregated model gradient; in this step, the masks are eliminated and local aggregation is performed with the following formula:

Θ_edge = Σ_{i∈V} (n_i/n) · (Θ_i′ − PRG(bu_i) − Σ_{j∈neighbor_i(j)} PRG(s_{i,j}))

where Θ_edge is the locally aggregated model gradient, Θ_i′ is the masked model gradient of terminal i, n_i is the amount of data local to the terminal, and n is the total amount of data of all terminals; PRG() denotes a pseudo-random generator whose output vector is determined by its input parameters;
Local aggregation of the model gradients is performed with the following formula:

w_t = Σ_{k∈V} (n_k/n) · Θ_k′

where w_t is the masked model gradient of round t, n_k is the amount of data local to terminal k, and n is the total amount of data of all terminals;
(5) The edge node sends the locally aggregated model gradient to the cloud computing processing center at the top layer of the edge computing architecture;
(6) The cloud computing processing center receives the locally aggregated model gradients from the edge nodes, aggregates them again into a global aggregation model, and delivers the global model to the edge nodes to serve the terminals.
2. The federated learning secure aggregation method suitable for edge computing scenarios according to claim 1, wherein the terminal is an Internet of Things device.
3. The federated learning secure aggregation method suitable for edge computing scenarios according to claim 2, wherein the edge node is a base station or a Wi-Fi access point.
CN202211657554.XA 2022-12-22 2022-12-22 Federated learning secure aggregation method suitable for edge computing scenarios Active CN116094993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211657554.XA CN116094993B (en) 2022-12-22 2022-12-22 Federated learning secure aggregation method suitable for edge computing scenarios

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211657554.XA CN116094993B (en) 2022-12-22 2022-12-22 Federated learning secure aggregation method suitable for edge computing scenarios

Publications (2)

Publication Number Publication Date
CN116094993A CN116094993A (en) 2023-05-09
CN116094993B 2024-05-31

Family

Family ID: 86198382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211657554.XA Active CN116094993B (en) 2022-12-22 2022-12-22 Federated learning secure aggregation method suitable for edge computing scenarios

Country Status (1)

Country Link
CN (1) CN116094993B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116596065B (en) * 2023-07-12 2023-11-28 支付宝(杭州)信息技术有限公司 Gradient calculation method and device, storage medium, product and electronic equipment
CN116720594B (en) * 2023-08-09 2023-11-28 中国科学技术大学 Decentralized hierarchical federal learning method
CN117196014B (en) * 2023-09-18 2024-05-10 深圳大学 Model training method and device based on federal learning, computer equipment and medium
CN117010485B (en) * 2023-10-08 2024-01-26 之江实验室 Distributed model training system and gradient protocol method in edge scene
CN117077186B (en) * 2023-10-18 2024-02-02 南方电网科学研究院有限责任公司 Power load prediction method for realizing privacy protection by federal learning

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112100659A (en) * 2020-09-14 2020-12-18 电子科技大学 Block chain federal learning system and Byzantine attack detection method
CN112565331A (en) * 2020-11-02 2021-03-26 中山大学 Edge calculation-based end-edge collaborative federated learning optimization method
CN113791895A (en) * 2021-08-20 2021-12-14 北京工业大学 Edge calculation and resource optimization method based on federal learning
CN114116198A (en) * 2021-10-21 2022-03-01 西安电子科技大学 Asynchronous federal learning method, system, equipment and terminal for mobile vehicle
CN114154646A (en) * 2021-12-07 2022-03-08 南京华苏科技有限公司 Efficiency optimization method for federal learning in mobile edge network
CN114298331A (en) * 2021-12-29 2022-04-08 中国电信股份有限公司 Data processing method and device, equipment and storage medium
CN114492739A (en) * 2022-01-04 2022-05-13 北京邮电大学 Federal learning method based on Internet of vehicles, roadside unit, vehicle node and base station
CN115017541A (en) * 2022-06-06 2022-09-06 电子科技大学 Cloud-side-end-collaborative ubiquitous intelligent federal learning privacy protection system and method
CN115277015A (en) * 2022-07-16 2022-11-01 西安邮电大学 Asynchronous federal learning privacy protection method, system, medium, equipment and terminal
EP4102351A1 (en) * 2021-06-11 2022-12-14 Mellanox Technologies, Ltd. Secure network access device
CN115484042A (en) * 2021-06-14 2022-12-16 迈络思科技有限公司 Machine learning assisted network device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11244242B2 (en) * 2018-09-07 2022-02-08 Intel Corporation Technologies for distributing gradient descent computation in a heterogeneous multi-access edge computing (MEC) networks
US20210406782A1 (en) * 2020-06-30 2021-12-30 TieSet, Inc. System and method for decentralized federated learning

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112100659A (en) * 2020-09-14 2020-12-18 电子科技大学 Block chain federal learning system and Byzantine attack detection method
CN112565331A (en) * 2020-11-02 2021-03-26 中山大学 Edge calculation-based end-edge collaborative federated learning optimization method
EP4102351A1 (en) * 2021-06-11 2022-12-14 Mellanox Technologies, Ltd. Secure network access device
CN115484042A (en) * 2021-06-14 2022-12-16 迈络思科技有限公司 Machine learning assisted network device
CN113791895A (en) * 2021-08-20 2021-12-14 北京工业大学 Edge calculation and resource optimization method based on federal learning
CN114116198A (en) * 2021-10-21 2022-03-01 西安电子科技大学 Asynchronous federal learning method, system, equipment and terminal for mobile vehicle
CN114154646A (en) * 2021-12-07 2022-03-08 南京华苏科技有限公司 Efficiency optimization method for federal learning in mobile edge network
CN114298331A (en) * 2021-12-29 2022-04-08 中国电信股份有限公司 Data processing method and device, equipment and storage medium
CN114492739A (en) * 2022-01-04 2022-05-13 北京邮电大学 Federal learning method based on Internet of vehicles, roadside unit, vehicle node and base station
CN115017541A (en) * 2022-06-06 2022-09-06 电子科技大学 Cloud-side-end-collaborative ubiquitous intelligent federal learning privacy protection system and method
CN115277015A (en) * 2022-07-16 2022-11-01 西安邮电大学 Asynchronous federal learning privacy protection method, system, medium, equipment and terminal

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Edge Intelligence: Empowering Intelligence to the Edge of Network";Dianlei Xu;《Proceedings of the IEEE》;20211101;全文 *
"边缘场景下动态权重的联邦学习优化方法";程帆;《计算机科学》;20221215;全文 *
基于联邦学习的边缘智能协同计算与隐私保护方法;刘庆祥;许小龙;张旭云;窦万春;计算机集成制造系统;20211231(第009期);全文 *
面向数据共享交换的联邦学习技术发展综述;王亚珅;;无人系统技术;20191115(第06期);全文 *

Also Published As

Publication number Publication date
CN116094993A (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN116094993B (en) Federated learning secure aggregation method suitable for edge computing scenarios
Wang et al. Privacy-preserving federated learning for internet of medical things under edge computing
Liu et al. Blockchain and federated learning for collaborative intrusion detection in vehicular edge computing
Chen et al. BDFL: A byzantine-fault-tolerance decentralized federated learning method for autonomous vehicle
Jung et al. Collusion-tolerable privacy-preserving sum and product calculation without secure channel
Liu et al. Privacy-preserving federated k-means for proactive caching in next generation cellular networks
Li et al. Scalable privacy-preserving participant selection for mobile crowdsensing systems: Participant grouping and secure group bidding
Hao et al. Efficient, private and robust federated learning
CN112668044B (en) Privacy protection method and device for federal learning
Qu et al. Generative adversarial networks enhanced location privacy in 5G networks
Fang et al. A privacy-preserving and verifiable federated learning method based on blockchain
Olowononi et al. Federated learning with differential privacy for resilient vehicular cyber physical systems
Jiang et al. Federated dynamic graph neural networks with secure aggregation for video-based distributed surveillance
CN116523074A (en) Dynamic fairness privacy protection federal deep learning method
CN116187482A (en) Lightweight trusted federation learning method under edge scene
Kanchan et al. An efficient and privacy-preserving federated learning scheme for flying ad hoc networks
Zhu et al. Enhanced federated learning for edge data security in intelligent transportation systems
Luo et al. RUAP: Random rearrangement block matrix-based ultra-lightweight RFID authentication protocol for end-edge-cloud collaborative environment
Rani et al. A probabilistic routing-based secure approach for opportunistic IoT network using blockchain
Zeng et al. Attribute‐Based Anonymous Handover Authentication Protocol for Wireless Networks
CN117540426A (en) Method and device for sharing energy power data based on homomorphic encryption and federal learning
Zhang et al. A security optimization scheme for data security transmission in UAV-assisted edge networks based on federal learning
Liang et al. Secure and efficient hierarchical Decentralized learning for Internet of Vehicles
Ergün et al. Communication-efficient secure aggregation for federated learning
CN116340986A (en) Block chain-based privacy protection method and system for resisting federal learning gradient attack

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant