CN116094993A - Federal learning security aggregation method suitable for edge computing scene - Google Patents

Federal learning security aggregation method suitable for edge computing scene

Info

Publication number
CN116094993A
CN116094993A (application CN202211657554.XA)
Authority
CN
China
Prior art keywords
terminal
model
edge
aggregation
federal learning
Prior art date
Legal status
Granted
Application number
CN202211657554.XA
Other languages
Chinese (zh)
Other versions
CN116094993B (en)
Inventor
王瑞锦
李雄
张凤荔
周世杰
王金波
赖金山
周潼
程帆
李嘉坤
孙鹏钊
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202211657554.XA priority Critical patent/CN116094993B/en
Publication of CN116094993A publication Critical patent/CN116094993A/en
Application granted granted Critical
Publication of CN116094993B publication Critical patent/CN116094993B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/48: Routing tree calculation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/602: Providing cryptographic facilities or services
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218: Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/04: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Bioethics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a federated learning secure aggregation method suitable for edge computing scenarios, comprising the following steps: (1) the edge node modifies the communication topology between terminals from a fully connected graph to a terminal connectivity topology based on a minimum spanning tree; (2) each terminal trains the federated learning model with its local data and communicates with its neighbor terminals by broadcasting keys along the modified terminal connectivity topology; (3) each terminal computes a mask and uses it to encrypt its model gradient; (4) the edge node receives the encrypted model gradients transmitted by the terminals and performs local aggregation; (5) the cloud computing processing center receives the locally aggregated model gradients, aggregates them again into a global aggregation model, and issues the global model to the edge nodes to serve the terminals. The method addresses the privacy leakage problem in federated learning while avoiding a large amount of additional computation and communication overhead, and improves the convergence speed of the model.

Description

Federal learning security aggregation method suitable for edge computing scene
Technical Field
The invention relates to the technical field of edge computing, in particular to a federated learning secure aggregation method suitable for edge computing scenarios.
Background
With the popularity of mobile smart devices and the development of wireless communication technologies such as 5G, many computation-intensive applications with low latency requirements have emerged, such as online immersive gaming, augmented reality, and video streaming analysis. Because conventional cloud computing cannot meet the low-latency requirements of these applications, Satyanarayanan et al. proposed a novel computing model, called edge computing, which offloads a large number of computing tasks from the cloud to edge nodes closer to the users, such as wifi wireless access points and base stations, and can also protect data privacy more effectively.
The federated learning (FL, Federated Learning) technique proposed by Google is an important technical method for privacy protection in distributed machine learning. On the one hand, joint modeling should utilize as much data as possible; on the other hand, regulators and society are increasingly demanding about privacy protection. Federated learning resolves this dilemma by keeping the data where it is and making it "available but invisible". McMahan et al. proposed the federated averaging algorithm FedAvg (Federated Averaging) for federated learning, but in that algorithm the workload of every terminal is the same. In practical scenarios, however, the available computing resources of different terminals differ and the data is highly heterogeneous. To address this problem, Tian et al. proposed the improved FedProx algorithm, which lets the system assign variable amounts of work according to the available computing resources of different terminals, so that overloaded terminals are not forced to drop out. Karimireddy et al. proposed the improved algorithm SCAFFOLD for the heterogeneous data problem. With highly heterogeneous terminal data, this algorithm can keep the global model from drifting toward local optima and accelerates convergence compared with FedAvg.
However, federated learning does not completely solve the privacy disclosure problem. To address it, M. Abadi et al. proposed applying differential privacy in the stochastic gradient descent (SGD) algorithm of deep learning to protect the privacy of user data. With the development of federated learning and rising privacy-protection requirements, Geyer et al. proposed applying differential privacy in federated learning to keep the terminals' data from leaking during training. Another solution is to deploy secure multi-party computation (MPC, secure multiparty computation) in federated learning. Song et al. posed the problem of training a machine learning model with privacy protection (TMMPP for short) over multiple data sets and offered an MPC-based solution to the security problem in distributed machine learning. G. Xu et al. proposed the verifiable framework VerifyNet to ensure confidentiality and integrity of the model; Wang R. et al. proposed a privacy-preserving federated learning framework for the medical Internet of Things under edge computing; and Bonawitz et al. proposed the concept of secure aggregation (SA, Secure Aggregation), using cryptographic primitives such as key sharing and encryption/decryption in the federated learning framework to keep the terminals' privacy from being violated.
However, since the computation and communication overhead caused by key sharing and encryption/decryption in this framework is large, the convergence of the global model is often slow. In this regard, Bell et al. proposed using sparse graphs to reduce the connectivity of the terminals and thereby the computation and communication overhead of secure aggregation; Choi et al. proposed the CCESA algorithm based on the sparse Erdős-Rényi graph, which reduces the connectivity of the graph so that a terminal shares its key only with its neighbor terminals, again reducing the communication and computation overhead.
Some disadvantages remain. First, both SA and CCESA broadcast public keys and key shares by means of a cloud server, which greatly increases the communication and computation overhead of the cloud server. Second, although the CCESA scheme reconstructs a key by changing the set of terminals a key share is shared with from all other terminals to the neighbor terminals, each terminal must still be guaranteed at least t neighbor terminals, since key shares must be obtained from the neighbors. Third, in scenarios where the cloud server is far from the terminals, the communication delay between them is high, and broadcasting the public keys and key shares through the cloud server costs much extra time, which slows down the model convergence of the whole system.
In summary, how to solve the privacy disclosure problem in federated learning while avoiding a large amount of additional computation and communication overhead and improving the convergence speed of the model is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a federated learning secure aggregation method suitable for edge computing scenarios, which solves the privacy leakage problem in federated learning, avoids a large amount of additional computation and communication overhead, and improves the convergence speed of the model.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
the federal learning security aggregation method suitable for the edge computing scene comprises the following steps:
(1) The edge node in the middle layer of the edge computing architecture takes the communication delays between the terminals as the weights of the edges of the terminals' fully connected topology graph, and modifies the communication topology between all terminals participating in federated learning from the fully connected graph to a terminal connectivity topology based on the minimum spanning tree;
(2) Each terminal trains the federated learning model with its local data and communicates with its neighbor terminals by broadcasting keys along the modified terminal connectivity topology; the edge node then collects the key shares of each terminal;
(3) Each terminal uses the symmetric keys shared with its neighbor terminals to generate random vectors with the pseudo-random generator PRG, uses them as masks to encrypt its model gradient, and then transmits the encrypted model gradient to the edge node;
(4) The edge node receives the encrypted model gradients transmitted by the terminals, eliminates the masks using the collected terminal key shares, and performs local aggregation to obtain a locally aggregated model gradient;
(5) The edge node sends the locally aggregated model gradient to the cloud computing processing center at the top layer of the edge computing architecture;
(6) The cloud computing center receives the locally aggregated model gradients from the edge nodes, aggregates them again into a global aggregation model, and issues the global model to the edge nodes to serve the terminals.
In step (1), the terminal connectivity topology based on the minimum spanning tree is constructed as follows: using a minimum spanning tree algorithm, the edge node repeatedly selects the edge with the smallest weight while ensuring that the currently selected edge forms no loop with the already selected edges, until all terminals lie in one connected component; the communication topology between all terminals participating in federated learning is thus modified from the fully connected graph to the terminal connectivity topology based on the minimum spanning tree.
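As an illustration, this construction might look as follows in Python; Kruskal's algorithm is one of several minimum spanning tree algorithms the edge node could use, and the terminal labels and delay values below are invented for the example (only the edge set and the weight 3 of e_{f,g} come from fig. 2):

```python
# Sketch: build the minimum-spanning-tree terminal topology from pairwise
# communication delays (Kruskal's algorithm with union-find).

def mst_topology(terminals, delays):
    """delays: dict mapping frozenset({u, v}) -> measured delay t_{u,v}."""
    parent = {t: t for t in terminals}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    edges = sorted(delays.items(), key=lambda kv: kv[1])  # lightest first
    tree = []
    for pair, _ in edges:
        u, v = tuple(pair)
        ru, rv = find(u), find(v)
        if ru != rv:  # no loop: endpoints lie in different connected components
            parent[ru] = rv
            tree.append((u, v))
        if len(tree) == len(terminals) - 1:  # n-1 edges connect all terminals
            break
    return tree

# Example with the 7 terminals of Fig. 2 (delays other than fg=3 are made up):
terminals = list("abcdefg")
delays = {frozenset(p): w for p, w in [
    ("ab", 7), ("ad", 5), ("af", 9), ("bc", 8), ("bd", 6),
    ("ce", 5), ("de", 15), ("df", 6), ("ef", 8), ("eg", 9), ("fg", 3)]}
print(mst_topology(terminals, delays))
```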
Further, in step (2), the model training updates the parameters with stochastic gradient descent (SGD):

w_{t,k} = w_{t-1,k} - η·∇F_k(w_{t-1,k})

where w_{t,k} is the updated parameter of terminal k at round t, w_{t-1,k} is the parameter of round t-1, η is the learning rate, and ∇F_k(w_{t-1,k}) is the gradient of the objective function F_k(w) with respect to the parameter w.
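As an illustration, one local round of this update might look as follows in PyTorch (the framework used in the experiments below); the model, data loader and loss function are placeholders, not part of the patent:

```python
import torch

# Sketch: one local SGD pass of terminal k, matching
# w_{t,k} = w_{t-1,k} - eta * grad F_k(w_{t-1,k}).
def local_sgd_round(model, loader, loss_fn, eta=0.001):
    opt = torch.optim.SGD(model.parameters(), lr=eta)
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()  # gradient of F_k with respect to w
        opt.step()                       # w <- w - eta * grad
    return {name: p.detach().clone() for name, p in model.named_parameters()}
```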
Specifically, in step (2), a terminal i communicates with its neighbor terminals as follows:
(a) Using the t-out-of-n algorithm, the private key s_i^{sk} and a random number bu_i are each split into shares: SS.share(t, bu_i) → {bu_{i,j}}, SS.share(t, s_i^{sk}) → {s_{i,j}^{sk}}, j ∈ neighbor_i(j), where SS.share() denotes the key sharing protocol and neighbor_i(j) is the set of neighbor terminals of terminal i;
(b) {bu_{i,j}} and {s_{i,j}^{sk}} are encrypted with the other terminals' public keys s_j^{pk}: e_{i,j} = Enc_{s_j^{pk}}(bu_{i,j} ‖ s_{i,j}^{sk}), where e_{i,j} is the ciphertext generated by encrypting bu_{i,j} and s_{i,j}^{sk}, and Enc_{s_j^{pk}}() denotes an encryption algorithm whose ciphertext can be decrypted only with the private key s_j^{sk};
(c) According to the modified terminal connectivity topology, the shares {i, j, e_{i,j}} and the public key s_i^{pk} are sent to the neighbor terminals (Advertise());
(d) On receiving a share set {j, i, e_{j,i}} from a neighbor terminal, the terminal stores the share e_{j,i} belonging to itself and forwards {j, i, e_{j,i}} together with the public key s_j^{pk} to its other neighbor terminals besides j (Transmit()).
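The patent does not spell out how SS.share()/SS.recon() are instantiated; a common choice is Shamir's t-out-of-n secret sharing, sketched below over a prime field (the field and the integer encoding of keys are our assumptions):

```python
import random

P = 2**127 - 1  # a Mersenne prime; the field choice is illustrative

def ss_share(t, secret, ids):
    """Split `secret` into one share per id; any t shares reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return {j: poly(j) for j in ids}  # ids must be nonzero and distinct

def ss_recon(shares):
    """Lagrange interpolation at x=0 from at least t shares {id: value}."""
    secret = 0
    for j, y in shares.items():
        num, den = 1, 1
        for m in shares:
            if m != j:
                num = num * (-m) % P
                den = den * (j - m) % P
        secret = (secret + y * num * pow(den, P - 2, P)) % P  # den^{-1} mod P
    return secret

shares = ss_share(3, 123456789, ids=[1, 2, 3, 4, 5])
subset = {j: shares[j] for j in (1, 3, 5)}
assert ss_recon(subset) == 123456789
```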
Further, in step (4), the mask is eliminated and local aggregation is performed with the following formulas:

Θ′_i = Θ_i + PRG(bu_i) + Σ_{j∈neighbor_i(j)} ±PRG(s_{i,j})

(the sign is + for i < j and - for i > j, so that the pairwise terms cancel in the sum)

Θ_edge = Σ_{i∈V} (n_i / n)·Θ_i

where Θ_edge is the locally aggregated model gradient, Θ′_i is the masked (encrypted) model gradient of terminal i, Θ_i is its original model gradient, n_i is the data volume of terminal i, and n is the total data volume of all terminals.

Still further, in step (4), the masked model gradients are locally aggregated with

w_t = Σ_k (n_k / n)·w′_{t,k}

where w_t is the masked model gradient of round t.
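The following sketch illustrates the double-masking construction these formulas follow (in the spirit of Bonawitz et al.'s secure aggregation, on which the patent builds); NumPy's seeded generator stands in for a cryptographic PRG, and all names and values are illustrative:

```python
import numpy as np

def prg(seed, m):
    # Illustrative PRG; a real deployment would use a cryptographic PRG.
    return np.random.default_rng(seed).integers(0, 2**31, size=m)

def mask_gradient(theta, bu_i, pair_keys, i, m):
    """pair_keys: {j: s_ij} symmetric seeds shared with neighbor terminals."""
    masked = theta + prg(bu_i, m)
    for j, s_ij in pair_keys.items():
        masked += prg(s_ij, m) if i < j else -prg(s_ij, m)  # cancels pairwise
    return masked

# Two neighbor terminals: pairwise terms cancel in the sum, and the edge node
# removes PRG(bu_i) after reconstructing bu_i from the collected shares.
m = 4
g1 = np.arange(m, dtype=np.int64)
g2 = 10 * np.ones(m, dtype=np.int64)
s12 = 42
y1 = mask_gradient(g1, bu_i=7, pair_keys={2: s12}, i=1, m=m)
y2 = mask_gradient(g2, bu_i=9, pair_keys={1: s12}, i=2, m=m)
unmasked = y1 + y2 - prg(7, m) - prg(9, m)
assert np.array_equal(unmasked, g1 + g2)
```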
Preferably, the terminal is an internet of things device.
Preferably, the edge node is a base station or a wifi access point.
Compared with the prior art, the invention has the following beneficial effects:
in the invention, the edge node constructs the terminal communication topological graph based on the minimum spanning tree according to the communication time delay among the terminals, so that the communication degree of the terminal communication graph is greatly reduced; meanwhile, the terminal only generates a symmetric key with the neighbor terminal and calculates a mask, so that the calculation overhead of the system is reduced.
Furthermore, the invention uses the minimum spanning tree as the terminal connected topological graph, and the distribution and sharing of the secret key are both carried out through the neighbor terminal for forwarding broadcast instead of the edge node, thereby well reducing the workload and communication overhead of the edge node.
And the minimum spanning tree structure can minimize the delay of key broadcasting, thereby effectively improving the convergence rate of the model. A large number of experimental results show that compared with the traditional safe polymerization method, when the number of terminals is 10, the safe polymerization method can at least reduce the federal learning operation time by 28.2 percent on the basis of not reducing the federal learning safety level and the model precision.
Drawings
FIG. 1 is a diagram of a system architecture for implementing an embodiment of the present invention.
Fig. 2 is a schematic diagram of a process of modifying a minimum spanning tree based terminal connectivity topology in an embodiment of the present invention.
Fig. 3 is a schematic diagram of a process of broadcasting by a terminal in an embodiment of the present invention.
FIG. 4 compares the system run time of the embodiment of the present invention with the conventional secure aggregation schemes CCESA and SA.
FIG. 5 compares the model accuracy of the embodiment of the present invention with the conventional secure aggregation schemes CCESA and SA.
FIG. 6 compares the security of the embodiment of the present invention with the conventional secure aggregation schemes CCESA and SA.
Fig. 7 is a diagram of a terminal connectivity topology based on a minimum spanning tree generated according to a communication latency between terminals in an embodiment of the present invention.
Detailed Description
The invention will be further illustrated by the following description and examples, which include but are not limited to the following examples.
Examples
This embodiment provides a federated learning secure aggregation method suitable for edge computing scenarios, which solves the privacy leakage problem in federated learning, avoids a large amount of additional computation and communication overhead, and improves the convergence speed of the model.
The following defines the overall three-layer federated learning secure aggregation framework in the edge computing scenario of this embodiment, including the overall framework diagram of the system and the definitions of the entities participating in federated learning.
Federated learning is a paradigm of distributed machine learning; on this basis, this embodiment introduces federated learning into edge computing and defines a "cloud-edge-end" three-layer model. Overall, a terminal trains the model locally and then transmits the encrypted model gradient to the edge node, which securely aggregates the encrypted gradients to obtain a locally aggregated model. The edge node then transmits the local aggregation model to the cloud computing center for further aggregation to obtain the global aggregation model. A schematic of federated learning with the "cloud-edge-end" three-layer architecture is shown in fig. 1.
The whole three-layer edge computing architecture comprises three types of entities. The first layer is the cloud computing processing center, which aggregates the models from the edge nodes and issues the final aggregated model back to them. The second layer consists of the edge nodes, which aggregate the local models from the terminals. The third layer consists of the terminals, where the user data is generated; a terminal trains the local model with its user data and uploads the trained model gradient to the edge node for aggregation.
1. Terminal
Internet of Things devices such as mobile phones and personal computers form the end (bottom) layer of the three-layer edge computing architecture. In this embodiment, the main function of a terminal is to train the model with local data, communicate with its neighbor terminals by broadcasting keys, compute the mask and encrypt the model gradient, and then send it to the edge node.
2. Edge node
Edge servers in the middle layer of the three-layer edge computing architecture, such as a base station or a wifi access point. In this embodiment, the edge node acts as the local aggregation center in federated learning: it receives the encrypted gradients transmitted by the terminals, performs local aggregation and removes the mask, and finally sends the local model to the cloud for further aggregation.
3. Cloud computing processing center
The cloud computing processing center is at the top layer of the three-layer edge computing architecture; its main function is to receive the locally aggregated model gradients from the edge nodes, aggregate them again into a global model, and finally transmit the global model to the edge nodes to serve the terminals.
In the federated learning model of this embodiment, the proposed secure aggregation scheme is introduced into the "edge-end" layer of edge computation, so as to minimize the overhead of the edge node while protecting the data privacy of the terminals. Assume the model objective function of terminal k in edge-end federated learning is F_k(w); the objective function of the edge node is then

F(w) = Σ_{k=1}^{K} (n_k / n)·F_k(w)

Meanwhile, while ensuring that the global model converges, that the accuracy is high, and that data privacy is well protected, the additional communication and computation overhead of the system must be kept small. Denoting the computation overhead by a and the communication overhead by b, the problem of the model is defined as:

min_w F(w)    (1)
s.t. min(a + b)    (2)
parameter definition
The definitions of some of the parameters used in this embodiment are described in detail herein, as shown in tables 1 and 2.
TABLE 1 Definition of parameters

No.  Parameter        Description
1    V                set of terminals participating in federated learning
2    neighbor_i(j)    set of neighbor terminals of terminal i
3    t_{i,j}          transmission delay between nodes i and j
4    s_i^{pk}         public key of terminal i
5    s_i^{sk}         private key of terminal i
6    bu_i             random number of terminal i used for double masking
7    bu_{i,j}         share of bu_i generated by terminal i and sent to terminal j
8    s_{i,j}^{sk}     share of s_i^{sk} generated by terminal i and sent to terminal j
9    Θ_i              original model gradient of terminal i
10   Θ′_i             masked model gradient of terminal i
11   e_{i,j}          ciphertext generated by encrypting bu_{i,j} and s_{i,j}^{sk}
Table 2 Algorithm definitions [provided as an image in the original; it defines the primitives SS.share(), SS.recon(), PRG(), Advertise() and Transmit() used below]
Considering that in edge computing some computation-heavy tasks of different terminals, such as model training and inference, may be offloaded to the edge node, the overhead of the edge node is generally large. If the large task of broadcasting terminal information during secure aggregation were also placed at the edge node, the edge node might go down, affecting the training of the federated learning model across the whole three-layer edge computing architecture. This embodiment therefore keeps this heavy communication and computation work local: the public keys and key shares of the terminals are not broadcast through the edge node but by the terminals themselves.
As can be seen from fig. 1, terminals i and j participating in federated learning measure the communication delay t_{i,j} between each other and transmit it to the edge node; using a minimum spanning tree algorithm, the edge node modifies the communication topology between terminals from the fully connected graph G to the terminal connectivity topology G′ based on the minimum spanning tree. According to this graph structure, terminals forward keys to each other via their neighbor terminals to achieve the broadcast. Using the symmetric keys established with its neighbor terminals, each terminal generates random vectors with the pseudo-random generator PRG as masks for its gradient, and transmits the masked gradient to the edge node for local model aggregation and unmasking. Finally, the edge nodes transmit the aggregated model gradients to the cloud computing center for further aggregation, yielding the global model. In the next iteration, the global model is issued to the edge nodes and serves the terminals.
Each step of the flow of the embodiment will be described in detail.
First, using the minimum spanning tree algorithm, the edge node repeatedly selects the edge with the smallest weight while ensuring that the currently selected edge forms no loop with the already selected edges, until all terminals lie in one connected component; the communication topology between all terminals participating in federated learning is thus modified from the fully connected graph to the terminal connectivity topology based on the minimum spanning tree.
As shown in fig. 2, on the left is the connectivity graph between terminals in an edge computing scenario, with 7 terminals (the nodes of the graph) V = {a, b, c, d, e, f, g}. The lines indicate that communication is possible between terminals (the edges of the graph), E = {e_{a,b}, e_{a,d}, e_{a,f}, e_{b,c}, e_{b,d}, e_{c,e}, e_{d,e}, e_{d,f}, e_{e,f}, e_{e,g}, e_{f,g}}, and the numbers are the communication delays (the edge weights). On the right is the terminal connectivity topology selected from it by the minimum spanning tree algorithm. The algorithm first selects the edge with the smallest weight in the left connectivity graph, namely e_{f,g} with weight 3 and endpoint terminals f and g, and adds e_{f,g} and the terminals f, g to the selected terminal set V′. It then keeps selecting the edge with the smallest weight among the remaining edges, say e_{x,y} with endpoint terminals x and y. This edge can join the minimum spanning tree as long as its two terminals x, y lie in two different connected components (a maximal connected subgraph of an undirected graph is called a connected component). With respect to V′, the following cases are distinguished:
If V′ has only one connected component, x, y need to satisfy one of three conditions:
x ∈ V′ and y ∉ V′; x ∉ V′ and y ∈ V′; or x ∉ V′ and y ∉ V′.
If V′ contains more than one connected component, assume V′ = V_1 ∪ V_2 ∪ … ∪ V_m, where each V_i (i ∈ [1, m]) is a sub-connected-component disconnected from every other V_j (j ∈ [1, m], j ≠ i). The terminals x, y need to satisfy one of four conditions:
x ∈ V_i and y ∈ V_j with i ≠ j; x ∈ V_i and y ∉ V′; x ∉ V′ and y ∈ V_j; or x ∉ V′ and y ∉ V′.
The minimum-weight edges of the original terminal connectivity graph are selected in turn according to this rule until all terminals lie in one connected component. In an edge computing scenario with n terminals, the edge node selects suitable n-1 communication paths according to the communication delays between terminals to generate the terminal connectivity topology based on the minimum spanning tree.
Next, each terminal trains the federated learning model with its local data and communicates with its neighbor terminals by broadcasting keys along the modified terminal connectivity topology; the edge node then collects the key shares of each terminal.
This embodiment uses FedAvg as the federated learning algorithm on the edge side. Specifically, a terminal node iterates training locally several times to form the local model; the training process updates the parameters with stochastic gradient descent (SGD):

w_{t,k} = w_{t-1,k} - η·∇F_k(w_{t-1,k})

where w_{t,k} is the updated parameter of terminal k at round t, w_{t-1,k} is the parameter of round t-1, η is the learning rate, and ∇F_k(w_{t-1,k}) is the gradient of the objective function F_k(w) with respect to the parameter w.
The key a terminal uses in the secure aggregation process is sent to its neighbor terminals, which forward it onward. During forwarding, a neighbor terminal never sends the key back to the terminal from which it came, avoiding unlimited propagation of the key.
Fig. 3 shows the procedure of key broadcasting by terminal d. First, d sends the key to its neighbor terminals a and e; after receiving the key from d, a and e help d forward it to achieve the broadcast effect, i.e. a forwards it to b and e forwards it to g, but neither sends it back to d. The whole process needs no intervention of the edge node, reducing the workload of the edge node. The following is the specific broadcast algorithm based on neighbor terminal forwarding:
[Algorithms 1 and 2 are provided as images in the original: Algorithm 1, Advertise(), and Algorithm 2, Transmit(), implement the neighbor-forwarding broadcast described below.]
the algorithm 1 is used for transmitting broadcast data in an initial stage of the terminal, namely, the terminal transmits the public key and the key share to all neighbor terminals; algorithm 2 is used to receive data sent by a neighbor terminal (with the index last_id) and at the same time help forward the data to all other neighbor terminals of the own that do not include the index last_id. The following is the step of the terminal sharing the broadcast key:
for the terminal:
1. using the t-out-of-n algorithm to transfer private key s i pk And bu i Divided into n parts, SS. Share (t, bu) i )→{bu i,j },
SS.share(t,s i sk )→{s i,j sk },j∈neighbor i (j);
2. Using other terminal public key pairs { bu i,j Sum { s } i,j sk The encryption is performed, and the data is encrypted,
Figure BDA0004012093370000092
3. according to G-the shares { i, j, e } i,j Sum of public key s i pk To the neighbor terminal (advertisement ()).
4. Receiving a set of shares { j, i, e j,i Store share e belonging to oneself j,i Forwarding { j, i, e simultaneously j,i Sum of public key s i pk To other neighbors outside j (Transmit ()).
For edge nodes:
1. collecting public key { s } from terminal i pk ,i∈V}。
Next, each terminal uses the symmetric keys shared with its neighbor terminals to generate random vectors with the pseudo-random generator PRG, uses them as masks to encrypt its model gradient, and transmits the encrypted model gradient to the edge node. The edge node receives the encrypted model gradients transmitted by the terminals, eliminates the masks using the collected terminal key shares, and performs local aggregation to obtain the locally aggregated model gradient.
After training completes, the terminal encrypts the model gradient with the mask and transmits the encrypted gradient to the edge node for local aggregation:

w_t = Σ_k (n_k / n)·w′_{t,k}

where n_k is the data volume of terminal k and n is the total data volume of all terminals. The w_t obtained at this point is the masked model gradient of round t; next, the edge node obtains the key shares from the terminals and reconstructs the keys, thereby removing the mask. The specific algorithm is as follows:
[Algorithm 3, the unmasking and local aggregation procedure of the edge node, is provided as an image in the original.]
and finally, the cloud computing processing center receives the local aggregation model gradient from the edge node, performs aggregation again to form a global aggregation model, and issues the global aggregation model to the edge node to provide services for the terminal.
In summary, the procedure is as follows (a sketch in code follows the list):

For the terminal:
1. Decrypt with one's own private key to obtain (bu_{j,i}, s_{j,i}^{sk}) = Dec_{s_i^{sk}}(e_{j,i});
2. Send all shares {(bu_{j,i}, s_{j,i}^{sk})} to the edge node.

For the edge node:
1. Collect all key shares {(bu_{j,i}, s_{j,i}^{sk})} from the terminals;
2. Reconstruct from t shares: SS.recon({bu_{i,j}}) → bu_i, SS.recon({s_{i,j}^{sk}}) → s_i^{sk};
3. Use FedAvg as the federated averaging aggregation function: w_t = Σ_k (n_k / n)·w′_{t,k};
4. Remove the mask with the reconstructed keys: Θ_edge = w_t - Σ_{i∈V} (n_i / n)·mask_i, where mask_i = PRG(bu_i) + Σ_{j∈neighbor_i(j)} ±PRG(s_{i,j}) is the mask of terminal i regenerated via the PRG.

For the cloud computing processing center:
1. Collect the model gradients from the edge nodes;
2. Compute the aggregated model gradient: Θ_cloud = Σ Θ_edge;
3. Issue the global model Θ_cloud to the edge side for the next iteration.
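A compact sketch tying these pieces together: the edge node applies the FedAvg weighting and subtracts the reconstructed masks, and the cloud sums the edge-level results. The function names and data layout are ours, under the same assumptions as the masking sketch above:

```python
import numpy as np

def edge_aggregate(masked_grads, data_sizes, masks):
    """masked_grads: {i: w'_{t,i}}; masks: {i: mask_i rebuilt from shares}."""
    n = sum(data_sizes.values())
    w_t = sum(data_sizes[i] / n * g for i, g in masked_grads.items())  # FedAvg
    return w_t - sum(data_sizes[i] / n * m for i, m in masks.items())  # unmask

def cloud_aggregate(edge_results):
    # Theta_cloud = sum of the local aggregation results of all edge nodes.
    return sum(edge_results)
```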
The scheme of this embodiment is experimentally compared with conventional schemes below. The experiments consider three dimensions, running time, model accuracy and security, comparing this embodiment with the conventional secure aggregation schemes CCESA and SA. The running time, accuracy and security of the three federated learning secure aggregation schemes were tested with the Resnet18 and Vgg16 models on the CIFAR10 and CIFAR100 datasets with different numbers of terminals. All code is implemented in Python with the PyTorch framework and runs on GPU.
1. Experimental configuration
The three federated learning secure aggregation schemes above were tested on the CIFAR10 and CIFAR100 datasets using the Resnet18 and Vgg16 models, respectively. We ran several experiments with different numbers of terminals, n = 5, n = 7, n = 10; the threshold for reconstructing a key is t = n/2 + 1; the number of local training iterations of a terminal in edge-side federated learning is local_epochs = 3; the number of global iterations is global_epochs = 60; the number of samples per training step is batch_size = 32; and the learning rate is η = 0.001.
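For reference, the same settings as a Python snippet (the variable names are ours; the values are those reported above):

```python
# Experimental settings reported above; names are ours, values from the text.
config = {
    "datasets": ["CIFAR10", "CIFAR100"],
    "models": ["ResNet18", "VGG16"],
    "num_terminals": [5, 7, 10],
    "threshold": lambda n: n // 2 + 1,  # t = n/2 + 1 shares rebuild a key
    "local_epochs": 3,    # local training iterations per terminal
    "global_epochs": 60,  # global aggregation rounds
    "batch_size": 32,
    "learning_rate": 0.001,
}
```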
2. Analysis of experimental results
(1) Run-time analysis
Table 3 compares the average global per-round training time of each secure aggregation scheme on the Cifar10 and Cifar100 datasets with the Resnet18 and Vgg16 models, for terminal numbers n = 5, n = 7 and n = 10; the time includes the broadcast shared-key time, the encryption and decryption time, and the local training time of an individual terminal.
Table 3 run time comparison
[Table 3 is provided as an image in the original.]
Table 3 shows the time of one round of global training for each secure aggregation scheme with different numbers of terminals, models and datasets. In terms of the number of terminals: for the same secure aggregation scheme, the global per-round training time grows as the number of terminals increases, because the overall mask computation of the terminals grows and the time for broadcasting keys and key sharing becomes longer. In terms of the terminal topology: whether the number of terminals is 5, 7 or 10, the global per-round training time of this embodiment is smaller than that of CCESA and SA, and the run-time gap widens as the number of terminals increases, especially on the Cifar100 dataset. As the table shows, with 5 terminals on Cifar100, EFLSAS reduces the average per-round run time by 16.24 and 17.42 seconds relative to CCESA and SA, respectively; with 7 terminals on Cifar100, this embodiment is 35.29 and 39.63 seconds faster than CCESA and SA, respectively; and with 10 terminals on Cifar100, it is 47.73 and 77.07 seconds faster, respectively.
Fig. 4 shows the run time of one round of global training on the CIFAR100 dataset using the Vgg16 model for each secure aggregation scheme, when the number of edge-side terminals is 7 and 10.
As can be seen from fig. 4, the system of this embodiment has the shortest running time, followed by the CCESA scheme and finally the SA scheme. Specifically, fig. 4(a) shows the results of running the Vgg16 model on the Cifar100 dataset when each scheme has 7 terminals, and fig. 4(b) the results with 10 terminals. In fig. 4(a), this embodiment is clearly lower overall than the other two schemes, with the run time always around 170 seconds; in fig. 4(b), the run time of this embodiment stays around 196 seconds. The rise from 170 to 196 seconds is due to the larger number of terminals, which increases both computation and communication overhead. The CCESA and SA schemes do not differ much in run time overall. By calculation, this embodiment reduces the run time on the Cifar100 dataset by 18.9% and 28.2% compared with the SA scheme for 7 and 10 terminals, respectively. The main reason the run time drops is that the connectivity of the minimum-spanning-tree terminal topology is low: the number of neighbor terminals of a terminal is constant level O(1), and the overhead of computing the mask with the PRG is O(m), where m is the vector data size. In contrast, in SA the number of neighbors is level O(n) and the overhead of computing the mask is level O(mn). Meanwhile, since communication is offloaded from the edge node to the terminals, the communication overhead of the edge node is also significantly reduced, from O(n² + mn) to O(n + mn). Moreover, because the terminals broadcast keys along the minimum spanning tree structure, the communication delay is greatly shortened. As table 3 shows, the greater the number of terminals, the greater the difference in system run time between the schemes.
(2) Accuracy analysis
Fig. 5 shows the model accuracy of the Resnet18 model after each global round on the CIFAR10 dataset and of the Vgg16 model after each global round on the CIFAR100 dataset, when the number of terminals is 7.
As can be seen from fig. 5, the model accuracy of the three different secure aggregation schemes, this embodiment, CCESA and SA, differs very little, i.e. the EFLSAS scheme proposed herein preserves the model accuracy while reducing the run time of federated learning. Specifically, each secure aggregation scheme converges to about 85% on the Cifar10 dataset and to about 65% on the Cifar100 dataset, essentially within the first 10 rounds.
(3) Security analysis
For security, the different secure aggregation schemes, together with federated learning FedAvg without any security measures, were attacked with the gradient leakage attack DLG proposed by L. Zhu et al. in 2019. To perform the attack, a pair of pseudo inputs and labels is first randomly generated, and then normal forward and backward propagation is performed. After deriving the pseudo gradients from the pseudo data, the attack does not optimize the model weights as in typical training; instead it optimizes the pseudo inputs and labels to minimize the distance between the pseudo gradients and the real gradients, bringing the pseudo data close to the original data by matching the gradients. Fig. 6 shows the behavior of the different secure aggregation schemes and FedAvg in the face of the gradient leakage attack; the dataset used for the test was Cifar10.
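A minimal PyTorch sketch of the DLG loop as described; the L-BFGS optimizer and soft-label trick follow the published DLG attack, while the loss function, shapes and step count are placeholders:

```python
import torch

def dlg_attack(model, loss_fn, true_grads, x_shape, n_classes, steps=300):
    # Randomly initialize a pseudo input and a (soft) pseudo label, then
    # optimize them so the gradient they induce matches the leaked one.
    x = torch.randn(1, *x_shape, requires_grad=True)
    y = torch.randn(1, n_classes, requires_grad=True)
    opt = torch.optim.LBFGS([x, y])

    for _ in range(steps):
        def closure():
            opt.zero_grad()
            loss = loss_fn(model(x), torch.softmax(y, dim=-1))
            grads = torch.autograd.grad(loss, model.parameters(),
                                        create_graph=True)
            # Distance between pseudo gradients and the leaked true gradients.
            dist = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
            dist.backward()
            return dist
        opt.step(closure)
    return x.detach(), y.detach()
```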
As can be seen from fig. 6, once the gradient is leaked to the attacker, the attacker can essentially restore the original training data with the DLG gradient leakage attack after about 30 rounds of iteration. In contrast, this embodiment exhibits the same security as CCESA and SA under 270 rounds of iterative DLG attack. This demonstrates that, while guaranteeing model accuracy and security, the embodiment greatly reduces the computation and communication costs of the terminals and the edge node and shortens the overall convergence time of the model.
3. Performance analysis
Comparing the performance of this embodiment with the CCESA and SA schemes shows that it achieves reliable secure aggregation and outperforms the SA and CCESA algorithms in both computation efficiency and communication efficiency.
(1) System overhead analysis
Fig. 7 shows how the minimum connected subgraph is generated, i.e. the terminal connectivity topology based on the minimum spanning tree is generated from the communication delays between terminals.
According to fig. 7, the mask formula of terminal a in the original SA scheme is:

Θ′_a = Θ_a + PRG(bu_a) + PRG(s_{a,b}) + PRG(s_{a,c}) + PRG(s_{a,d}) + PRG(s_{a,e})    (3)

In this embodiment, the mask formula of terminal a is:

Θ′_a = Θ_a + PRG(bu_a) + PRG(s_{a,d}) + PRG(s_{d,e})    (4)
according to equation (4), the present embodiment also achieves privacy protection with less resources than the SA scheme, compared to federal learning without privacy protection measures. Table 4 shows the calculation overhead and communication overhead (n is the number of terminals, and m is the vector data transmitted) of each algorithm deduced through mathematical proof.
Table 4 comparison of computational overhead and communication overhead for various security aggregation algorithms
[Table 4 is provided as an image in the original.]
(2) Terminal overhead analysis
Calculation overhead: o (n) 2 +m). The terminal computing overhead is divided into the following: 1. due to minimum spanning treeThere are n-1 edges in the topology, and all nodes need to perform 2 (n-1) times the key agreement protocol in total, and each node performs 2 (n-1)/n times the key agreement protocol on average, and when n is large, this process is of a constant level, i.e. it needs to consume O (1). 2. Sharing keys and bu using the t-out-of-n algorithm requires the expense of O (n 2 ). 3. K mask vectors of length m are generated with the key and bu as parameters of the randomizer PRG, k being a constant (number of neighbor terminals), and O (m) is consumed. In total, O (n) 2 +m)。
Communication overhead: o (m+n). The communication overhead of the terminal is divided into the following:
1) The terminal needs to transmit a vector of 1 communication delay with other terminals and receive the terminal topology.
2) In the worst case, n public keys are transmitted and n public keys are received.
3) In the worst case, 2 (n-1) shares of the key and bu are sent, and 2 (n-1) shares of the key and bu are received.
4) The edge node 2n keys and the shares of bu are sent.
5) An encryption gradient of length m is sent to the edge node. A total communication overhead 1 +a 2 +2na 3 +2(2n-n)a 4 +m, where a 1 To record the bit number of communication delay vector between terminals, a 2 A is the bit number of the topological graph 3 A is the bit number of the public key, a 4 The number of bits that are key shares, m is the number of bits of the gradient vector. The total communication overhead is O (m+n).
(3) Edge node overhead analysis
Calculation overhead: o (n) 2 +m). The edge node computation overhead is divided into the following:
1) The edge node needs to reconstruct the key and bu for each end device, which requires the cost of O (n 2 )。
2) Generating a minimum spanning tree topology requires O (n 2 )。
3) Generating a mask vector of length m with the key and bu as parameters for the randomizer PRG while eliminating the mask requires the expense of O (mn). In total, O (n) 2 +mn)。
Communication overhead: o (mn+n). The edge node communication overhead is divided into the following:
1) The edge node needs to receive n delay vectors, and needs to consume O (n).
2) N topologies are transmitted, which requires the expense of O (n).
3) Receiving 2n shares of keys and bu requires the expense of O (n).
4) Receiving n mask gradient vectors of length m requires the cost of O (mn). The total communication overhead is na 1 +na 2 +2na 3 +mn, where a 1 To record the bit number of communication delay vector between terminals, a 2 A is the bit number of the topological graph 3 The number of bits that are key shares, m is the number of bits of the gradient vector. A total of O (mn+n) is required.
Through the reasonable design of the secure aggregation scheme, the invention not only minimizes the communication time of the system and the computation cost of the terminals, but also reduces the overall training time of the system while guaranteeing model accuracy and the security of user data. Compared with the prior art, the invention therefore represents obvious technical progress, with outstanding substantive features and remarkable advancement.
The above embodiment is only one of the preferred embodiments of the present invention and should not be used to limit its scope of protection; any insubstantial modification or change made within the main design concept and spirit of the present invention that solves the same technical problem still falls within the scope of protection of the present invention.

Claims (8)

1. A federated learning secure aggregation method suitable for edge computing scenarios, characterized by comprising the following steps:
(1) The edge node in the middle layer of the edge computing architecture takes the communication delays between the terminals as the weights of the edges of the terminals' fully connected topology graph, and modifies the communication topology between all terminals participating in federated learning from the fully connected graph to a terminal connectivity topology based on the minimum spanning tree;
(2) Each terminal trains the federated learning model with its local data and communicates with its neighbor terminals by broadcasting keys along the modified terminal connectivity topology; the edge node then collects the key shares of each terminal;
(3) Each terminal uses the symmetric keys shared with its neighbor terminals to generate random vectors with the pseudo-random generator PRG, uses them as masks to encrypt its model gradient, and then transmits the encrypted model gradient to the edge node;
(4) The edge node receives the encrypted model gradients transmitted by the terminals, eliminates the masks using the collected terminal key shares, and performs local aggregation to obtain a locally aggregated model gradient;
(5) The edge node sends the locally aggregated model gradient to the cloud computing processing center at the top layer of the edge computing architecture;
(6) The cloud computing center receives the locally aggregated model gradients from the edge nodes, aggregates them again into a global aggregation model, and issues the global model to the edge nodes to serve the terminals.
2. The federated learning secure aggregation method suitable for edge computing scenarios according to claim 1, wherein in step (1) the terminal connectivity topology based on the minimum spanning tree is constructed as follows: using a minimum spanning tree algorithm, the edge node repeatedly selects the edge with the smallest weight while ensuring that the currently selected edge forms no loop with the already selected edges, until all terminals lie in one connected component; the communication topology between all terminals participating in federated learning is thus modified from the fully connected graph to the terminal connectivity topology based on the minimum spanning tree.
3. The federated learning secure aggregation method suitable for edge computing scenarios according to claim 2, wherein in step (2) the model training updates the parameters with stochastic gradient descent (SGD):

w_{t,k} = w_{t-1,k} - η·∇F_k(w_{t-1,k})

where w_{t,k} is the updated parameter of terminal k at round t, w_{t-1,k} is the parameter of round t-1, η is the learning rate, and ∇F_k(w_{t-1,k}) is the gradient of the objective function F_k(w) with respect to the parameter w.
4. The federated learning secure aggregation method suitable for edge computing scenarios according to claim 3, wherein in step (2) a terminal communicates with its neighbor terminals as follows:
(a) Using the t-out-of-n algorithm, the private key s_i^{sk} and a random number bu_i are each split into shares: SS.share(t, bu_i) → {bu_{i,j}}, SS.share(t, s_i^{sk}) → {s_{i,j}^{sk}}, j ∈ neighbor_i(j), where SS.share() denotes the key sharing protocol and neighbor_i(j) is the set of neighbor terminals of terminal i;
(b) {bu_{i,j}} and {s_{i,j}^{sk}} are encrypted with the other terminals' public keys s_j^{pk}: e_{i,j} = Enc_{s_j^{pk}}(bu_{i,j} ‖ s_{i,j}^{sk}), where e_{i,j} is the ciphertext generated by encrypting bu_{i,j} and s_{i,j}^{sk}, and Enc_{s_j^{pk}}() denotes an encryption algorithm whose ciphertext can be decrypted only with the private key s_j^{sk};
(c) According to the modified terminal connectivity topology, the shares {i, j, e_{i,j}} and the public key s_i^{pk} are sent to the neighbor terminals (Advertise());
(d) On receiving a share set {j, i, e_{j,i}} from a neighbor terminal, the terminal stores the share e_{j,i} belonging to itself and forwards {j, i, e_{j,i}} together with the public key s_j^{pk} to its other neighbor terminals besides j (Transmit()).
5. The federated learning secure aggregation method suitable for edge computing scenarios according to claim 4, wherein in step (4) the mask is eliminated and local aggregation is performed with the following formulas:

Θ′_i = Θ_i + PRG(bu_i) + Σ_{j∈neighbor_i(j)} ±PRG(s_{i,j})

(the sign is + for i < j and - for i > j, so that the pairwise terms cancel in the sum)

Θ_edge = Σ_{i∈V} (n_i / n)·Θ_i

where Θ_edge is the locally aggregated model gradient, Θ′_i is the masked (encrypted) model gradient of terminal i, n_i is the data volume of terminal i, and n is the total data volume of all terminals.
6. The federated learning secure aggregation method suitable for edge computing scenarios according to claim 5, wherein in step (4) the masked model gradients are locally aggregated with

w_t = Σ_k (n_k / n)·w′_{t,k}

where w_t is the masked model gradient of round t, n_k is the data volume of terminal k, and n is the total data volume of all terminals.
7. The federated learning secure aggregation method suitable for edge computing scenarios according to any one of claims 1 to 6, wherein the terminal is an Internet of Things device.
8. The federated learning secure aggregation method suitable for edge computing scenarios according to any one of claims 1 to 7, wherein the edge node is a base station or a wifi access point.
CN202211657554.XA 2022-12-22 2022-12-22 Federal learning security aggregation method suitable for edge computing scene Active CN116094993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211657554.XA CN116094993B (en) 2022-12-22 2022-12-22 Federal learning security aggregation method suitable for edge computing scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211657554.XA CN116094993B (en) 2022-12-22 2022-12-22 Federal learning security aggregation method suitable for edge computing scene

Publications (2)

Publication Number Publication Date
CN116094993A true CN116094993A (en) 2023-05-09
CN116094993B CN116094993B (en) 2024-05-31

Family

ID=86198382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211657554.XA Active CN116094993B (en) 2022-12-22 2022-12-22 Federal learning security aggregation method suitable for edge computing scene

Country Status (1)

Country Link
CN (1) CN116094993B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116596065A (en) * 2023-07-12 2023-08-15 支付宝(杭州)信息技术有限公司 Gradient calculation method and device, storage medium, product and electronic equipment
CN116720594A (en) * 2023-08-09 2023-09-08 中国科学技术大学 Decentralized hierarchical federal learning method
CN117010485A (en) * 2023-10-08 2023-11-07 之江实验室 Distributed model training system and gradient protocol method in edge scene
CN117077186A (en) * 2023-10-18 2023-11-17 南方电网科学研究院有限责任公司 Power load prediction method for realizing privacy protection by federal learning
CN117196014A (en) * 2023-09-18 2023-12-08 深圳大学 Model training method and device based on federal learning, computer equipment and medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190138934A1 (en) * 2018-09-07 2019-05-09 Saurav Prakash Technologies for distributing gradient descent computation in a heterogeneous multi-access edge computing (mec) networks
CN112100659A (en) * 2020-09-14 2020-12-18 电子科技大学 Block chain federal learning system and Byzantine attack detection method
CN112565331A (en) * 2020-11-02 2021-03-26 中山大学 Edge calculation-based end-edge collaborative federated learning optimization method
CN113791895A (en) * 2021-08-20 2021-12-14 北京工业大学 Edge calculation and resource optimization method based on federal learning
US20210406782A1 (en) * 2020-06-30 2021-12-30 TieSet, Inc. System and method for decentralized federated learning
CN114116198A (en) * 2021-10-21 2022-03-01 西安电子科技大学 Asynchronous federal learning method, system, equipment and terminal for mobile vehicle
CN114154646A (en) * 2021-12-07 2022-03-08 南京华苏科技有限公司 Efficiency optimization method for federal learning in mobile edge network
CN114298331A (en) * 2021-12-29 2022-04-08 中国电信股份有限公司 Data processing method and device, equipment and storage medium
CN114492739A (en) * 2022-01-04 2022-05-13 北京邮电大学 Federal learning method based on Internet of vehicles, roadside unit, vehicle node and base station
CN115017541A (en) * 2022-06-06 2022-09-06 电子科技大学 Cloud-side-end-collaborative ubiquitous intelligent federal learning privacy protection system and method
CN115277015A (en) * 2022-07-16 2022-11-01 西安邮电大学 Asynchronous federal learning privacy protection method, system, medium, equipment and terminal
EP4102351A1 (en) * 2021-06-11 2022-12-14 Mellanox Technologies, Ltd. Secure network access device
CN115484042A (en) * 2021-06-14 2022-12-16 迈络思科技有限公司 Machine learning assisted network device

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190138934A1 (en) * 2018-09-07 2019-05-09 Saurav Prakash Technologies for distributing gradient descent computation in a heterogeneous multi-access edge computing (mec) networks
US20210406782A1 (en) * 2020-06-30 2021-12-30 TieSet, Inc. System and method for decentralized federated learning
CN112100659A (en) * 2020-09-14 2020-12-18 电子科技大学 Block chain federal learning system and Byzantine attack detection method
CN112565331A (en) * 2020-11-02 2021-03-26 中山大学 Edge calculation-based end-edge collaborative federated learning optimization method
EP4102351A1 (en) * 2021-06-11 2022-12-14 Mellanox Technologies, Ltd. Secure network access device
CN115484042A (en) * 2021-06-14 2022-12-16 迈络思科技有限公司 Machine learning assisted network device
CN113791895A (en) * 2021-08-20 2021-12-14 北京工业大学 Edge calculation and resource optimization method based on federal learning
CN114116198A (en) * 2021-10-21 2022-03-01 西安电子科技大学 Asynchronous federal learning method, system, equipment and terminal for mobile vehicle
CN114154646A (en) * 2021-12-07 2022-03-08 南京华苏科技有限公司 Efficiency optimization method for federal learning in mobile edge network
CN114298331A (en) * 2021-12-29 2022-04-08 中国电信股份有限公司 Data processing method and device, equipment and storage medium
CN114492739A (en) * 2022-01-04 2022-05-13 北京邮电大学 Federal learning method based on Internet of vehicles, roadside unit, vehicle node and base station
CN115017541A (en) * 2022-06-06 2022-09-06 电子科技大学 Cloud-side-end-collaborative ubiquitous intelligent federal learning privacy protection system and method
CN115277015A (en) * 2022-07-16 2022-11-01 西安邮电大学 Asynchronous federal learning privacy protection method, system, medium, equipment and terminal

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Dianlei Xu: "Edge Intelligence: Empowering Intelligence to the Edge of Network", Proceedings of the IEEE, 1 November 2021 (2021-11-01) *
Liu Qingxiang; Xu Xiaolong; Zhang Xuyun; Dou Wanchun: "Edge intelligence collaborative computing and privacy protection method based on federated learning", Computer Integrated Manufacturing Systems, no. 009, 31 December 2021 (2021-12-31) *
Wang Yashen: "Survey of federated learning technology for data sharing and exchange", Unmanned Systems Technology, no. 06, 15 November 2019 (2019-11-15) *
Cheng Fan: "Federated learning optimization method with dynamic weights in edge scenarios", Computer Science, 15 December 2022 (2022-12-15) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116596065A (en) * 2023-07-12 2023-08-15 支付宝(杭州)信息技术有限公司 Gradient calculation method and device, storage medium, product and electronic equipment
CN116596065B (en) * 2023-07-12 2023-11-28 支付宝(杭州)信息技术有限公司 Gradient calculation method and device, storage medium, product and electronic equipment
CN116720594A (en) * 2023-08-09 2023-09-08 中国科学技术大学 Decentralized hierarchical federal learning method
CN116720594B (en) * 2023-08-09 2023-11-28 中国科学技术大学 Decentralized hierarchical federal learning method
CN117196014A (en) * 2023-09-18 2023-12-08 深圳大学 Model training method and device based on federal learning, computer equipment and medium
CN117196014B (en) * 2023-09-18 2024-05-10 深圳大学 Model training method and device based on federal learning, computer equipment and medium
CN117010485A (en) * 2023-10-08 2023-11-07 之江实验室 Distributed model training system and gradient protocol method in edge scene
CN117010485B (en) * 2023-10-08 2024-01-26 之江实验室 Distributed model training system and gradient protocol method in edge scene
CN117077186A (en) * 2023-10-18 2023-11-17 南方电网科学研究院有限责任公司 Power load prediction method for realizing privacy protection by federal learning
CN117077186B (en) * 2023-10-18 2024-02-02 南方电网科学研究院有限责任公司 Power load prediction method for realizing privacy protection by federal learning

Also Published As

Publication number Publication date
CN116094993B (en) 2024-05-31

Similar Documents

Publication Publication Date Title
CN116094993B (en) Federal learning security aggregation method suitable for edge computing scene
Wang et al. Privacy-preserving federated learning for internet of medical things under edge computing
Liu et al. Privacy-preserving federated k-means for proactive caching in next generation cellular networks
Li et al. Scalable privacy-preserving participant selection for mobile crowdsensing systems: Participant grouping and secure group bidding
Jiang et al. Federated dynamic gnn with secure aggregation
CN109347829B (en) Group intelligence perception network truth value discovery method based on privacy protection
Li et al. Practical privacy-preserving federated learning in vehicular fog computing
CN115310121B (en) Real-time reinforced federal learning data privacy security method based on MePC-F model in Internet of vehicles
CN111581648B (en) Method of federal learning to preserve privacy in irregular users
Shen et al. Ringsfl: An adaptive split federated learning towards taming client heterogeneity
CN110730064A (en) Data fusion method based on privacy protection in crowd sensing network
CN116340986A (en) Block chain-based privacy protection method and system for resisting federal learning gradient attack
CN116187482A (en) Lightweight trusted federation learning method under edge scene
Zhu et al. Enhanced federated learning for edge data security in intelligent transportation systems
Ergün et al. Communication-efficient secure aggregation for federated learning
CN113204788B (en) Fine granularity attribute matching privacy protection method
Fan et al. Id-based multireceiver homomorphic proxy re-encryption in federated learning
Liang et al. Secure and efficient hierarchical Decentralized learning for Internet of Vehicles
Liu et al. ESA-FedGNN: Efficient secure aggregation for federated graph neural networks
Xu et al. Efficient and privacy-preserving federated learning with irregular users
CN116502223A (en) Method for protecting user data privacy and resisting malicious attacker under federal learning
Liu et al. PPEFL: An Edge Federated Learning Architecture with Privacy‐Preserving Mechanism
CN111581663B (en) Federal deep learning method for protecting privacy and facing irregular users
Sutradhar et al. An efficient simulation of quantum secret sharing
Choudhary Optimized security algorithm for connected vehicular network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant