CN114595830A - Privacy-preserving federated learning method for edge computing scenarios - Google Patents
Privacy-preserving federated learning method for edge computing scenarios
- Publication number: CN114595830A (application CN202210157685.5A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06N 20/00 — Machine learning
- G06F 18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F 21/6245 — Protecting personal data, e.g. for financial or medical purposes
- Y02D 30/50 — Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate
Abstract
The invention provides a privacy-preserving federated learning method for edge computing scenarios that uses a dual-server architecture to achieve model aggregation with Byzantine robustness. First, a server issues initial model parameters to the clients; second, each client performs several iterations of training using its local dataset and the initial parameters to obtain its training result for the current round; then each client secret-shares its training result and uploads the shares to the two different servers; finally, the two servers cooperatively perform Byzantine node detection to obtain the quasi-aggregation parameters and, on that basis, cooperatively aggregate the model to obtain the current round's global model. This process is iterated until an optimal solution is trained. Through the dual-server architecture, the method defends against Byzantine nodes while protecting data privacy, incurs low computation and communication overhead, and addresses the problem of collaborative training in edge computing scenarios.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a privacy-preserving federated learning method for edge computing scenarios.
Background
In recent years, with the development of edge computing, more and more edge nodes have data collection and computing capabilities. How to utilize these massive edge nodes has become a research hotspot in academia and industry. Many service providers wish to use their own edge nodes for machine learning training. For example, an autonomous-driving service provider with a trained model may want to keep optimizing it using the computing power and data of vehicles it has already sold. Consumers, however, may not want to reveal private data to the service provider. To address this, the service provider can use federated learning to train the model jointly with consumers without ever collecting their local raw training data.
Federated learning, as a privacy-preserving distributed learning paradigm, has been widely applied in edge computing scenarios such as the Internet of Vehicles (IoV), smart homes, smartphones, and the Internet of Medical Things (IoMT). In particular, to effectively utilize the computing resources of smartphones, Google first proposed the FedAvg federated learning scheme, which uses a server to aggregate the model parameters of the training participants and thereby enables efficient collaborative training. Building on this, researchers have proposed a federated learning method for mobile keyboard input prediction that trains a model without directly revealing user data to the server. Because federated learning itself can still leak privacy, many schemes employ cryptographic primitives or differential privacy to achieve privacy-preserving federated learning.
However, these solutions face major limitations. On the one hand, for privacy-preserving federated learning, cryptography-based schemes place a significant burden on the federated learning system: schemes based on homomorphic encryption or secret sharing add computational complexity for the participants and reduce training efficiency, and edge nodes, whose computing power is generally limited, support complex cryptographic algorithms poorly. On the other hand, Byzantine-robust schemes require access to the participants' model updates in order to compare them, and thus lack privacy protection for the participants. Moreover, in edge computing scenarios the edge nodes are fragile and may drop out frequently, so robustness is hard to maintain.
Disclosure of Invention
The invention provides a privacy-preserving federated learning method for edge computing scenarios, solving the technical problem that existing schemes cannot protect privacy while simultaneously defending against Byzantine nodes.
To solve this technical problem, the invention provides a privacy-preserving federated learning method for edge computing scenarios, applied in a framework comprising an aggregation server SP, a Byzantine detection server TP, and federated learning clients, the method comprising the following steps:
s1: the aggregation server SP and the federated learning clients synchronize the initial global model parameters;
s2: the federated learning client performs iterative training by using the corresponding local data set and the initial global model parameters to obtain new local model parameters;
s3: the federated learning client obtains a first local model parameter and a second local model parameter based on the new local model parameters and, by way of secret sharing, uploads the first local model parameter to the Byzantine detection server TP and the second local model parameter to the aggregation server SP, wherein the first and second local model parameters are a secret-shared form of the new local model parameters, which can be recovered from the first and second local model parameters together;
s4: the aggregation server SP and the Byzantine detection server TP perform cooperative Byzantine node detection based on the first and second local model parameters to obtain the quasi-aggregation parameters;
s5: the aggregation server SP and the Byzantine detection server TP perform cooperative model aggregation based on the quasi-aggregation parameters to obtain the global model training result.
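Steps s1–s5 can be sketched end-to-end as a minimal illustrative simulation (not the patented implementation): local training is abstracted to one gradient step, and the additive share split w_i^(1) = w'_i − r_i, w_i^(2) = r_i together with the offsets A_i = w_i^(2) − w̄ are plausible reconstructions of the formulas elided from the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def training_round(w_global, client_grads, lr, k):
    # s2: each client takes one local gradient step (local training abstracted).
    local_params = [w_global - lr * g for g in client_grads]
    # s3: each client splits w'_i into two additive shares with fresh noise r_i.
    noises = [rng.standard_normal(w_global.shape) for _ in local_params]
    tp_shares = [w - r for w, r in zip(local_params, noises)]  # sent to TP
    sp_shares = noises                                         # sent to SP
    # s4: cooperative Byzantine node detection.
    n = len(local_params)
    z1, z2 = sum(tp_shares), sum(sp_shares)          # TP and SP each sum their shares
    w_bar = (z1 + z2) / n                            # SP: this round's mean parameters
    offsets = [s - w_bar for s in sp_shares]         # SP -> TP (intermediate values)
    dists = np.array([np.linalg.norm(t + a)          # TP: ||w'_i - w_bar|| per client
                      for t, a in zip(tp_shares, offsets)])
    keep = np.argsort(np.abs(dists - np.median(dists)))[:k]  # quasi-aggregation set
    # s5: W = W1 + W2; the blinding noise cancels over the selected set.
    W1 = sum(tp_shares[i] for i in keep)             # computed by TP
    W2 = sum(sp_shares[i] for i in keep)             # computed by SP
    return W1 + W2

# Three honest clients and one Byzantine client with a wildly scaled gradient.
grads = [np.array([0.1, 0.1])] * 3 + [np.array([10.0, 10.0])]
W = training_round(np.zeros(2), grads, lr=1.0, k=3)
print(W)  # ≈ [-0.3, -0.3]: the sum of the three honest updates
```

The Byzantine client's update is filtered out by the median-distance rule, and the two servers' partial sums reconstruct the honest aggregate exactly.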
In one embodiment, step S1 includes:
s1.1: federal learning client PiAccessing a federated learning training network and sending a corresponding identity id to an aggregation server SP, wherein PiThe client-side is used for representing the ith federated learning client-side, and i represents the number of the federated learning client-side;
s1.2: the aggregation server SP issues the initial global model parameters to the corresponding federal learning client according to the identity id;
s1.3: and the federated learning client receives the corresponding initial global model parameters to realize parameter synchronization.
In one embodiment, step S2 includes:
s2.1: federal learning client PiCalculating a gradient using the received initial global model parameters and the local data set, the calculation formula beingWherein D(i)Is a federal learning client PiW denotes the initial global model parameters, giRepresenting federated learning client PiOn the data set D(i)The resulting gradient is trained to be a gradient,representing a calculated gradient;
s2.2: federal learning client PiUpdating original local model parameters according to the learning rate and the gradient, wherein the formula is w'i=wi-ηgiWherein w isiRepresenting original local model parameters, eta is learning rate, w'iAre new local model parameters.
In one embodiment, step S3 includes:
s3.1: the federated learning client obtains the first and second local model parameters as additive secret shares of the new local model parameters, e.g. w_i^(1) = w'_i − r_i and w_i^(2) = r_i,
where w_i^(1) denotes the first local model parameter, w_i^(2) denotes the second local model parameter, w'_i denotes the new local model parameters, and r_i is random noise generated by federated learning client P_i itself;
s3.2: the first local model parameter is uploaded to the Byzantine detection server TP and the second local model parameter is uploaded to the aggregation server SP.
In one embodiment, step S4 includes:
s4.1: w received by Byzantine detection Server TPi (1)Summing to obtain z1The aggregation server SP pair received wi (2)The values are summed to obtain z2Then Byzantine detection Server TP will z1Transmitting to the aggregation server SP;
s4.2: the aggregation server SP calculates by formulaObtaining model parameters of the round, calculating an intermediate value A, and transmitting the intermediate value to a Byzantine detection server TP, wherein,the model parameters of the round are n, and the number of the clients selected in the round is n;
s4.3: byzantine detection server TP pass formulaCalculating the distance between the uploaded model parameters of each client and the model parameters of the current round, and calculating the median of the distance on the basis;
s4.4: and selecting k number of model parameters as quasi-aggregation parameters according to the difference value of the s and the median value and the sequence from small to large by the Byzantine detection server TP, wherein k is a preset integer.
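The detection step can be sketched as below. The intermediate values A_i = w_i^(2) − w̄ are a reconstruction of the formula elided from the text; they let TP evaluate ‖w'_i − w̄‖ without ever seeing a plaintext w'_i.

```python
import numpy as np

def byzantine_detect(tp_shares, sp_shares, k):
    """Cooperative Byzantine node detection (s4.1-s4.4) on additive shares."""
    n = len(tp_shares)
    z1 = sum(tp_shares)                      # s4.1: TP sums its shares
    z2 = sum(sp_shares)                      #       SP sums its shares
    w_bar = (z1 + z2) / n                    # s4.2: SP computes round parameters
    offsets = [s - w_bar for s in sp_shares] #       SP sends A_i to TP
    dists = np.array([np.linalg.norm(t + a)  # s4.3: per-client distances s_i
                      for t, a in zip(tp_shares, offsets)])
    med = np.median(dists)
    # s4.4: keep the k clients whose distances are closest to the median.
    return sorted(np.argsort(np.abs(dists - med))[:k].tolist())

# Four clients, one of which uploads a poisoned (far-off) model.
params = [np.array([1.0, 1.0])] * 3 + [np.array([50.0, 50.0])]
rng = np.random.default_rng(7)
noise = [rng.standard_normal(2) for _ in params]
tp = [p - r for p, r in zip(params, noise)]
sp = noise
selected = byzantine_detect(tp, sp, k=3)
print(selected)  # the three honest clients: [0, 1, 2]
```

Note that t + a = (w'_i − r_i) + (r_i − w̄) = w'_i − w̄ per client, so the blinding noise cancels inside the distance computation itself.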
In one embodiment, step S5 includes:
s5.1: the Byzantine detection server TP computes a first global parameter from the selected quasi-aggregation parameters as W_1 = Σ_{i∈S} w_i^(1),
where S is the set of the k selected clients, k is a preset integer, the summation runs over the first secret shares of the selected k quasi-aggregation parameters, and W_1 is the first global parameter;
s5.2: the aggregation server SP computes a second global parameter from the selected quasi-aggregation parameters as W_2 = Σ_{i∈S} w_i^(2),
where W_2 is the second global parameter and the summation runs over the second secret shares of the selected k quasi-aggregation parameters;
s5.3: the Byzantine detection server TP sends W_1 to the aggregation server SP, and SP computes W = W_1 + W_2, where W is the global model training result, i.e., the model parameters obtained in this round of training.
The technical solutions in the embodiments of the present application have at least the following technical effects:
First, the aggregation server issues the initial global model parameters to the federated learning clients; each client then performs several iterations of training with its local dataset and the initial global model parameters to obtain its result for the current round; each client then secret-shares its result and uploads the shares to the two different servers; finally, the two servers cooperatively perform Byzantine node detection to obtain the quasi-aggregation parameters and, on that basis, cooperatively aggregate the model to obtain the current round's global model. This process is iterated until an optimal solution is trained. Through the dual-server architecture, the method defends against Byzantine nodes while protecting data privacy, incurs low computation and communication overhead, and provides a lightweight, secure solution to collaborative training in edge computing scenarios.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of the privacy-preserving federated learning method for edge computing scenarios according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a scenario of the federated learning method according to an embodiment of the present invention.
Detailed Description
The invention provides a lightweight privacy-preserving federated learning method for edge computing scenarios that uses a dual-server architecture (an aggregation server SP and a Byzantine detection server TP) to perform model aggregation with Byzantine robustness, defending against Byzantine nodes while protecting data privacy, with the technical effect of low computation and communication overhead.
To achieve these technical effects, the main idea of the invention is as follows:
First, the aggregation server issues initial model parameters to the federated learning clients; second, each client performs several iterations of training using its local dataset and the initial parameters to obtain its result for the current round; then each client secret-shares its result and uploads the shares to the two different servers; finally, the two servers cooperatively perform Byzantine node detection to obtain the quasi-aggregation parameters and, on that basis, cooperatively aggregate the model to obtain the current round's global model. This process is iterated until an optimal solution is trained.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the invention provides a privacy-preserving federated learning method for edge computing scenarios, applied in a framework comprising an aggregation server SP, a Byzantine detection server TP, and federated learning clients, the method comprising the following steps:
s1: the aggregation server SP and the federated learning clients synchronize the initial global model parameters;
s2: the federated learning client performs iterative training by using the corresponding local data set and the initial global model parameters to obtain new local model parameters;
s3: the federated learning client obtains a first local model parameter and a second local model parameter based on the new local model parameters and, by way of secret sharing, uploads the first local model parameter to the Byzantine detection server TP and the second local model parameter to the aggregation server SP, wherein the first and second local model parameters are a secret-shared form of the new local model parameters, which can be recovered from the first and second local model parameters together;
s4: the aggregation server SP and the Byzantine detection server TP perform cooperative Byzantine node detection based on the first and second local model parameters to obtain the quasi-aggregation parameters;
s5: the aggregation server SP and the Byzantine detection server TP perform cooperative model aggregation based on the quasi-aggregation parameters to obtain the global model training result.
Referring to fig. 1 and fig. 2: fig. 1 is a flowchart of the privacy-preserving federated learning method for edge computing scenarios provided in this embodiment, and fig. 2 is a schematic diagram of a scenario of the federated learning method in an embodiment of the present invention.
In federated learning, there are multiple clients (e.g., edge nodes, Internet-of-Things devices, and smartphones) and one service provider called the aggregator. Each participant holds a local dataset and trains a model locally, then exchanges only intermediate parameters with the aggregator, never privacy-sensitive training data. The aggregator then aggregates the model parameters of the different participants. In this way, the service provider can train a model without the training data being transmitted.
In fig. 2, each federated learning client may be a smartphone, an automobile, or the like; the locally trained model is the local model, and the aggregation server SP and the Byzantine detection server TP aggregate the local models to obtain the global model.
Specifically, in step S1 the initial global model parameters are synchronized: the aggregation server issues them to the federated learning clients. In step S2 each client performs several iterations of training with its local dataset and the initial global model parameters to obtain its result for the current round. In step S3 each client secret-shares its training result and uploads the shares to the two different servers (SP and TP). Finally, the two servers cooperatively perform Byzantine node detection to obtain the quasi-aggregation parameters and, on that basis, cooperatively aggregate the model to obtain the current round's global model.
The above process (steps S1 to S5) is iterated until an optimal solution is trained, i.e., the model is saved after all training rounds are completed. Through the dual-server architecture, the method overcomes the problems of most existing federated learning schemes, which are too costly, insufficiently practical, and unable to achieve model robustness and privacy at the same time.
In one embodiment, step S1 includes:
s1.1: federal learning client PiAccessing a federated learning training network and sending a corresponding identity id to an aggregation server SP, wherein PiThe client-side is used for representing the ith federated learning client-side, and i represents the number of the federated learning client-side;
s1.2: the aggregation server SP issues the initial global model parameters to the corresponding federal learning client according to the identity id;
s1.3: and the federated learning client receives the corresponding initial global model parameters to realize parameter synchronization.
In one embodiment, step S2 includes:
s2.1: federal learning client PiCalculating a gradient using the received initial global model parameters and the local data set, the calculation formula beingWherein D(i)Is a federal learning client PiW denotes the initial global model parameters, giRepresenting federated learning client PiOn the data set D(i)The resulting gradient is trained to be a gradient,representing a calculated gradient;
s2.2: federal learning client PiUpdating original local model parameters according to the learning rate and the gradient, wherein the formula is w'i=wi-ηgiWherein w isiRepresenting original local model parameters, eta is learning rate, w'iAre new local model parameters.
In one embodiment, step S3 includes:
s3.1: the federated learning client obtains the first and second local model parameters as additive secret shares of the new local model parameters, e.g. w_i^(1) = w'_i − r_i and w_i^(2) = r_i,
where w_i^(1) denotes the first local model parameter, w_i^(2) denotes the second local model parameter, w'_i denotes the new local model parameters, and r_i is random noise generated by federated learning client P_i itself;
s3.2: the first local model parameter is uploaded to the Byzantine detection server TP and the second local model parameter is uploaded to the aggregation server SP.
Specifically, w_i^(1) and w_i^(2) are secret shares of w'_i, and w'_i can be recovered from w_i^(1) and w_i^(2) together.
In this embodiment, the federated learning client generates a random number (the random noise r_i), and the two shares of w'_i are then computed from the two equations above (for w_i^(1) and w_i^(2)). This guarantees that when the servers receive the w_i^(1) and w_i^(2) values and aggregate them (by direct summation), the random numbers cancel, yielding the sum of the local parameters uploaded by the clients (i.e., the global parameter).
Compared with prior-art hierarchical federated learning methods that apply differential privacy, this method protects data privacy mainly through blinding: a blinding factor (random noise) is added to the uploaded parameters and is then cancelled out during dual-server aggregation. Because the local training parameters are not perturbed with a differential privacy mechanism, no noise is introduced and the accuracy of the model is unaffected. Differential privacy, by contrast, has the drawback of degrading the model's prediction accuracy, so training precision cannot be guaranteed.
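The contrast with differential privacy can be made concrete: blinding noise cancels exactly in the aggregate, so no accuracy is lost. A minimal numeric check with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Five clients' local parameters w'_i and their blinding factors r_i.
params = [rng.standard_normal(3) for _ in range(5)]
blinds = [rng.standard_normal(3) for _ in range(5)]

z1 = sum(p - r for p, r in zip(params, blinds))  # TP's sum of first shares
z2 = sum(blinds)                                 # SP's sum of second shares

blinded_sum = z1 + z2        # dual-server aggregate: blinding cancels exactly
true_sum = sum(params)       # what plain (non-private) aggregation would give
print(np.allclose(blinded_sum, true_sum))  # True: zero accuracy loss
```

A differentially private scheme would instead add noise that does not cancel, trading accuracy for privacy; blinding avoids that trade-off at the cost of requiring two non-colluding servers.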
In addition, besides robustness against data pollution, the technical solution of the invention focuses on protecting privacy in federated learning. The prior art offers no scheme that considers data privacy and model robustness simultaneously during training; the invention proposes a dual-server architecture to manage the model's training process, ensuring the participants' data privacy while preventing the global model from being affected by malicious attacks.
In one embodiment, step S4 includes:
s4.1: w received by Byzantine detection Server TPi (1)Summing to obtain z1The aggregation server SP pair received wi (2)The values are summed to obtain z2Then Byzantine detection Server TP will z1Transmitting to the aggregation server SP;
s4.2: the aggregation server SP calculates by formulaObtaining model parameters of the round, calculating an intermediate value A, and transmitting the intermediate value to a Byzantine detection server TP, wherein,n is the number of clients selected in the round as the model parameter of the round
S4.3: byzantine detection server TP pass formulaCalculating the distance between the uploaded model parameters of each client and the model parameters of the current round, and calculating the median of the distance on the basis;
s4.4: and selecting k number of model parameters as quasi-aggregation parameters according to the difference value of the s and the median value and the sequence from small to large by the Byzantine detection server TP, wherein k is a preset integer.
Specifically, the purpose of the intermediate value A is to enable the distance computation in step s4.3: SP subtracts this round's model parameters w̄ from each received w_i^(2) to obtain the intermediate values, and sends them to TP, so that TP can compute each distance directly from its own shares. The value of k can be preset according to the actual situation. In step s4.3, the distance refers to the distance between each client's uploaded model parameters and this round's model parameters.
It should be noted that the above process (steps S1 to S5) is iterated continuously until an optimal solution is trained; training ends when a preset number of rounds has been completed or the accuracy reaches a given threshold, either of which can be set according to the actual situation.
In one embodiment, step S5 includes:
s5.1: the Byzantine detection server TP computes a first global parameter from the selected quasi-aggregation parameters as W_1 = Σ_{i∈S} w_i^(1),
where S is the set of the k selected clients, k is a preset integer, the summation runs over the first secret shares of the selected k quasi-aggregation parameters, and W_1 is the first global parameter;
s5.2: the aggregation server SP computes a second global parameter from the selected quasi-aggregation parameters as W_2 = Σ_{i∈S} w_i^(2),
where W_2 is the second global parameter and the summation runs over the second secret shares of the selected k quasi-aggregation parameters;
s5.3: the Byzantine detection server TP sends W_1 to the aggregation server SP, and SP computes W = W_1 + W_2, where W is the global model training result, i.e., the model parameters obtained in this round of training.
Compared with the prior art, the invention has the following beneficial effects:
(1) Unlike other prior-art federated learning methods, this method does not use complex techniques such as homomorphic encryption, secure multi-party computation, or differential privacy, and thus effectively reduces the computation and communication overhead of a secure federated learning scheme.
(2) Existing federated learning methods rarely consider data privacy and model robustness in the training process at the same time; the invention proposes a dual-server architecture to manage the model's training process, preventing the global model from being affected by malicious attacks while ensuring the participants' data privacy.
(3) The method can be used for collaborative training in edge computing scenarios, protects the privacy of edge computing nodes, supports the dynamic exit of edge nodes, and meets practical usage requirements.
It should be understood that the above description of the preferred embodiments is given for clarity and not by way of limitation, and that various changes, substitutions, and alterations can be made without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (6)
1. A privacy-preserving federated learning method for edge computing scenarios, applied in a framework comprising an aggregation server SP, a Byzantine detection server TP, and federated learning clients, the method comprising the following steps:
s1: the aggregation server SP and the federated learning clients synchronize the initial global model parameters;
s2: the federated learning client performs iterative training by using the corresponding local data set and the initial global model parameters to obtain new local model parameters;
s3: the federated learning client obtains a first local model parameter and a second local model parameter based on the new local model parameters and, by way of secret sharing, uploads the first local model parameter to the Byzantine detection server TP and the second local model parameter to the aggregation server SP, wherein the first and second local model parameters are a secret-shared form of the new local model parameters, which can be recovered from the first and second local model parameters together;
s4: the aggregation server SP and the Byzantine detection server TP perform cooperative Byzantine node detection based on the first and second local model parameters to obtain the quasi-aggregation parameters;
s5: the aggregation server SP and the Byzantine detection server TP perform cooperative model aggregation based on the quasi-aggregation parameters to obtain the global model training result.
2. The privacy-preserving federated learning method of claim 1, wherein step S1 comprises:
s1.1: a federated learning client P_i accesses the federated learning training network and sends its identity id to the aggregation server SP, where P_i denotes the i-th federated learning client and i is the client index;
s1.2: the aggregation server SP issues the initial global model parameters to the corresponding federated learning client according to its identity id;
s1.3: the federated learning client receives the corresponding initial global model parameters, completing parameter synchronization.
3. The privacy-preserving federated learning method as claimed in claim 1, wherein step S2 comprises:
S2.1: the federated learning client P_i calculates a gradient using the received initial global model parameters and its local data set, the calculation formula being g_i = ∇L(w, D^(i)), where D^(i) is the local data set of the federated learning client P_i, w denotes the initial global model parameters, g_i denotes the gradient obtained by P_i training on the data set D^(i), and ∇ denotes the gradient operator;
S2.2: the federated learning client P_i updates the original local model parameters according to the learning rate and the gradient by the formula w'_i = w_i − η·g_i, where w_i denotes the original local model parameters, η is the learning rate, and w'_i are the new local model parameters.
4. The privacy-preserving federated learning method as claimed in claim 1, wherein step S3 comprises:
S3.1: the federated learning client splits the new local model parameters into a first local model parameter and a second local model parameter according to the formulas w_i^(1) = w'_i − r_i and w_i^(2) = r_i, where w_i^(1) denotes the first local model parameter, w_i^(2) denotes the second local model parameter, w'_i denotes the new local model parameters, and r_i is random noise generated by the federated learning client P_i itself, so that w'_i = w_i^(1) + w_i^(2);
S3.2: the first local model parameter is uploaded to the Byzantine detection server TP and the second local model parameter is uploaded to the aggregation server SP.
5. The privacy-preserving federated learning method as claimed in claim 1, wherein step S4 comprises:
S4.1: the Byzantine detection server TP sums the received w_i^(1) values to obtain z_1, and the aggregation server SP sums the received w_i^(2) values to obtain z_2; the Byzantine detection server TP then transmits z_1 to the aggregation server SP;
S4.2: the aggregation server SP obtains the model parameters of the current round by the formula w̄ = (z_1 + z_2)/n, calculates an intermediate value A, and transmits the intermediate value to the Byzantine detection server TP, where w̄ denotes the model parameters of the current round and n is the number of clients selected in the current round;
S4.3: the Byzantine detection server TP calculates the distance s_i between the model parameters uploaded by each client and the model parameters of the current round, and on this basis calculates the median of these distances;
S4.4: the Byzantine detection server TP selects, in increasing order of the difference between s_i and the median, k model parameters as the quasi-aggregation parameters, where k is a preset integer.
6. The privacy-preserving federated learning method as claimed in claim 1, wherein step S5 comprises:
S5.1: the Byzantine detection server TP calculates a first global parameter from the selected quasi-aggregation parameters by the formula W_1 = (1/k)·Σ w_i^(1), where k is a preset integer, Σ w_i^(1) sums the first secret shares of the selected k quasi-aggregation parameters, and W_1 is the first global parameter;
S5.2: the aggregation server SP calculates a second global parameter from the selected quasi-aggregation parameters by the formula W_2 = (1/k)·Σ w_i^(2), where Σ w_i^(2) sums the second secret shares of the selected k quasi-aggregation parameters, and W_2 is the second global parameter;
S5.3: the Byzantine detection server TP sends W_1 to the aggregation server SP, and the SP calculates W = W_1 + W_2, where W is the global model training result, i.e. the model parameters obtained by the current round of training.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210157685.5A CN114595830B (en) | 2022-02-21 | 2022-02-21 | Privacy protection federation learning method oriented to edge computing scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114595830A true CN114595830A (en) | 2022-06-07 |
CN114595830B CN114595830B (en) | 2024-07-05 |
Family
ID=81805930
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210157685.5A Active CN114595830B (en) | 2022-02-21 | 2022-02-21 | Privacy protection federation learning method oriented to edge computing scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114595830B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111600707A (en) * | 2020-05-15 | 2020-08-28 | 华南师范大学 | Decentralized federal machine learning method under privacy protection |
CN113361694A (en) * | 2021-06-30 | 2021-09-07 | 哈尔滨工业大学 | Layered federated learning method and system applying differential privacy protection |
CN113806768A (en) * | 2021-08-23 | 2021-12-17 | 北京理工大学 | Lightweight federated learning privacy protection method based on decentralized security aggregation |
Non-Patent Citations (1)
Title |
---|
Fang Junjie; Lei Kai: "Survey of blockchain technology for edge artificial intelligence computing", Journal of Applied Sciences, no. 01, 30 January 2020 (2020-01-30) *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117808082A (en) * | 2024-02-29 | 2024-04-02 | 华侨大学 | Privacy-preserving federated learning method, device, equipment and medium against Byzantine attacks |
CN117808082B (en) * | 2024-02-29 | 2024-05-14 | 华侨大学 | Privacy-preserving federated learning method, device, equipment and medium against Byzantine attacks |
CN118133328A (en) * | 2024-05-10 | 2024-06-04 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Decentralized learning method, system and related equipment |
CN118468041A (en) * | 2024-07-11 | 2024-08-09 | 齐鲁工业大学(山东省科学院) | Federated learning Byzantine node detection method and device based on generative adversarial networks, and computer-readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114595830B (en) | 2024-07-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114595830B (en) | Privacy protection federation learning method oriented to edge computing scene | |
Nguyen et al. | Federated learning for industrial internet of things in future industries | |
Xiong et al. | Toward lightweight, privacy-preserving cooperative object classification for connected autonomous vehicles | |
Wan et al. | Reinforcement learning based mobile offloading for cloud-based malware detection | |
Liu et al. | Privacy-preserving federated k-means for proactive caching in next generation cellular networks | |
CN112714106A (en) | Blockchain-based defense method against free-rider attacks in federated learning | |
CN109347829B (en) | Group intelligence perception network truth value discovery method based on privacy protection | |
CN113065866B (en) | Internet of things edge computing system and method based on block chain | |
Kefayati et al. | Secure consensus averaging in sensor networks using random offsets | |
CN115841133A (en) | Method, device and equipment for federated learning and storage medium | |
Yang et al. | Efficient and secure federated learning with verifiable weighted average aggregation | |
CN112954680A (en) | Tracing attack resistant lightweight access authentication method and system for wireless sensor network | |
CN113792890B (en) | Model training method based on federal learning and related equipment | |
CN116340986A (en) | Block chain-based privacy protection method and system for resisting federal learning gradient attack | |
Li et al. | An Adaptive Communication‐Efficient Federated Learning to Resist Gradient‐Based Reconstruction Attacks | |
CN114760023A (en) | Model training method and device based on federal learning and storage medium | |
CN117171814B (en) | Federal learning model integrity verification method, system, equipment and medium based on differential privacy | |
Lyu et al. | Secure and efficient federated learning with provable performance guarantees via stochastic quantization | |
Yin et al. | Ginver: Generative model inversion attacks against collaborative inference | |
CN117609621A (en) | Method for resource recommendation in multiple nodes | |
CN117113413A (en) | Robust federal learning privacy protection system based on block chain | |
CN117349685A (en) | Clustering method, system, terminal and medium for communication data | |
CN115510472B (en) | Multi-difference privacy protection method and system for cloud edge aggregation system | |
CN116305186A (en) | Security aggregation method with low communication overhead and decentralization | |
CN112183612B (en) | Joint learning method, device and system based on parameter expansion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||