CN113467928A - Block chain decentralization-based federated learning member reasoning attack defense method and device - Google Patents

Block chain decentralization-based federated learning member reasoning attack defense method and device Download PDF

Info

Publication number
CN113467928A
Authority
CN
China
Prior art keywords
model
gradient
block chain
edge
compressed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110553163.2A
Other languages
Chinese (zh)
Inventor
李伟
邱炜伟
蔡亮
匡立中
张帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Qulian Technology Co Ltd
Original Assignee
Hangzhou Qulian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Qulian Technology Co Ltd filed Critical Hangzhou Qulian Technology Co Ltd
Priority to CN202110553163.2A priority Critical patent/CN113467928A/en
Publication of CN113467928A publication Critical patent/CN113467928A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/554Detecting local intrusion or implementing counter-measures involving event detection and direct action
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computer Hardware Design (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a blockchain decentralization-based federated learning membership inference attack defense method and device, comprising the following steps: the edge nodes participating in federated learning train models on their local data to obtain local model gradients; each edge node performs proof of work using its computing power, obtains the accounting right for a ledger block according to the proof of work, compresses its local model gradient and uploads the compressed gradient to the ledger block, and, after the ledger block is linked onto the blockchain as a new block, broadcasts the block containing the compressed gradient to the edge nodes corresponding to the other ledger blocks on the blockchain; in each round, one edge node on the blockchain is randomly selected as a temporary central server, which aggregates the compressed model gradients of all edge nodes on the chain to obtain that round's aggregated model and issues the aggregated model back to the edge nodes for the next round of training. The method and device thereby defend against membership inference attacks on the data.

Description

Block chain decentralization-based federated learning member reasoning attack defense method and device
Technical Field
The invention belongs to the technical field of federated learning security, and in particular relates to a blockchain decentralization-based method and device for defending against membership inference attacks in federated learning.
Background
With the continuous development of artificial intelligence, people enjoy the convenience technology brings while their demand for privacy protection steadily grows, and as deep learning is deployed in an ever wider range of commercial settings, serious concerns have arisen about the privacy of valuable data. Data leakage incidents occur endlessly during data storage, transmission, and sharing, causing serious economic and security problems for data owners and providers.
Federated learning is a distributed learning framework that has emerged in recent years. It allows a model to be trained among multiple participants without compromising their data privacy: it is an efficient machine learning framework carried out among multiple parties or computing nodes whose design goal is to guarantee information security and protect terminal data and personal privacy during big-data exchange. However, this novel learning mechanism may still be compromised by security and privacy threats from various attackers.
Federated learning allows multiple participants to jointly train a model. Each participant trains on its local data set, so the local data itself is never revealed, and the shared model is updated by periodically exchanging model gradients with the server. However, information about the participants' training data leaks during the transfer and update of these model gradients.
Each data point in the training set updates the model parameters through a gradient-based algorithm so that its contribution to the training loss is minimized. The local gradient of the loss with respect to a given parameter indicates the magnitude and direction by which that parameter must change to fit the data record. To minimize the expected loss of the model, gradient-based algorithms continually update the parameters so that the loss gradient over the entire training set approaches zero. As a result, each training sample leaves a distinguishable signature on the gradient of the loss with respect to the model parameters. In the model training and updating process of federated learning, once an attacker obtains an intermediate gradient, the attacker can infer the membership of a data record, i.e., whether the target data participated in the training of the model. Under a federated learning framework, gradients must be exchanged and updated between the edge nodes and the server, so the server must be an institution trusted by every edge node; otherwise, once the server is compromised or controlled by a malicious attacker, the data privacy of every edge node is gravely threatened.
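The membership signal described above can be illustrated with a minimal sketch of the simplest loss-threshold variant of the attack; the threshold and the loss values below are illustrative assumptions, not data from the patent:

```python
import numpy as np

def loss_threshold_membership_attack(per_sample_loss, tau):
    """Guess that samples whose loss under the model falls below a
    threshold tau are training members, since training drives the
    loss of member samples toward zero."""
    return per_sample_loss < tau

# Illustrative losses: members were fit tightly, non-members were not.
member_losses = np.array([0.02, 0.05, 0.01])
nonmember_losses = np.array([0.9, 1.4, 0.7])
guesses = loss_threshold_membership_attack(
    np.concatenate([member_losses, nonmember_losses]), tau=0.2)
print(guesses)  # members flagged True, non-members False
```

In practice the attacker would recover per-sample losses from intercepted gradients, which is exactly the leakage channel the invention seeks to close.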
Blockchain technology arose together with the digital currency Bitcoin. By virtue of its anonymity, tamper-resistance, and distributed nature, blockchain provides a secure and reliable solution among multiple mutually untrusting participants, and it has already found wide application in many frontier areas. In essence, a blockchain is a distributed ledger whose defining characteristics are these: it replaces the traditional centralized scheme with a distributed network structure; it guarantees the security of on-chain data with cryptographic techniques such as asymmetric encryption; and it guarantees the reliability of on-chain data among multiple untrusting distributed participants through consensus mechanisms, smart contracts, and the like.
Disclosure of Invention
In view of the above, the present invention provides a method and an apparatus for defending against membership inference attacks in federated learning based on blockchain decentralization. The decentralization of a blockchain replaces the traditional server side of federated learning, effectively preventing the membership inference attacks enabled by parameter leakage during server-side aggregation, while the blockchain's identity verification and consensus mechanisms authenticate and manage the different edge nodes participating in model training, thereby defending against membership inference attacks on the data.
In a first aspect, an embodiment provides a blockchain decentralization-based federated learning membership inference attack defense method, comprising the following steps:
the edge nodes participating in federated learning train models on their local data to obtain local model gradients;
each edge node performs proof of work using its computing power, obtains the accounting right for a ledger block according to the proof of work, compresses its local model gradient and uploads the compressed gradient to the ledger block, and, after the ledger block is linked onto the blockchain as a new block, broadcasts the block containing the compressed gradient to the edge nodes corresponding to the other ledger blocks on the blockchain;
in each round, one edge node on the blockchain is randomly selected as a temporary central server, which aggregates the compressed model gradients of all edge nodes on the chain to obtain that round's aggregated model and issues the aggregated model back to the edge nodes for the next round of training.
In a second aspect, an embodiment provides a blockchain decentralization-based federated learning membership inference attack defense device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the blockchain decentralization-based federated learning membership inference attack defense method according to the first aspect.
The technical scheme provided above has at least the following beneficial effects:
Through blockchain decentralization, a proof-of-work mechanism replaces the central server, preventing the data security of the edge nodes from being threatened by central-server leakage during model aggregation. To further protect the security of the edge-model gradients, the method uses deep gradient sparsification to protect the gradient upload process, hardening the federated learning framework from both the decentralization angle and the model-gradient-compression angle and safeguarding the privacy of edge-node data.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow diagram of a federated learning membership inference attack defense method based on blockchain decentralization in an embodiment.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
In a federated scenario, the edge nodes and the server are vulnerable, during their information exchange, to data membership inference by a malicious attacker: the attacker intercepts the gradient information passed from an edge node to the server, completes the membership inference attack on the edge node's data through reverse gradient computation and loss-based gradient feature extraction, and thereby steals the privacy of the edge node's data.
To improve data privacy in the federated setting and prevent the local model of an edge node from suffering a membership inference attack while gradients are transmitted to and updated by the server, which would leak local training data, this embodiment provides a blockchain-decentralized federated learning membership inference attack defense method and device. It can be applied, for example, in the field of medical health: hospitals train local models on their respective patient treatment records and, by aggregating the parties' parameters, jointly train a more effective disease prediction model.
FIG. 1 is a flow diagram of the federated learning membership inference attack defense method based on blockchain decentralization in an embodiment. As shown in FIG. 1, the method provided by the embodiment includes:
step 1, preprocessing the local data of the edge terminal.
In federated learning, before each participating edge node trains a model on its local data, the sample data are preprocessed: class labels are assigned, the data are resized, and the data are divided into a training set and a test set.
When the method is used to build an aggregated model for face recognition from image data, the CIFAR-10 and ImageNet data sets can serve as sample data sets. CIFAR-10 contains 60000 RGB color images of size 32 × 32, divided into 10 classes of 6000 samples each, of which 50000 are used for training and 10000 for testing. ImageNet comprises 1000 classes of 1000 samples each, every picture being an RGB color image of size 224 × 224; 30% of the pictures in each class are randomly extracted as the test set and the rest serve as the training set.
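The per-class train/test division described above can be sketched as follows; the array shapes and the 30% test fraction follow the ImageNet example, while the toy data stand in for a node's local samples:

```python
import numpy as np

def split_train_test(images, labels, test_fraction=0.3, seed=0):
    """Randomly split a node's local samples into training and test sets,
    drawing the test fraction from each class separately."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        n_test = int(len(idx) * test_fraction)
        test_idx.extend(idx[:n_test])
        train_idx.extend(idx[n_test:])
    return (images[train_idx], labels[train_idx]), (images[test_idx], labels[test_idx])

# Toy stand-in for local data: 100 samples of 32x32 RGB images, 10 classes.
X = np.zeros((100, 32, 32, 3), dtype=np.uint8)
y = np.repeat(np.arange(10), 10)
(train_X, train_y), (test_X, test_y) = split_train_test(X, y)
print(len(train_y), len(test_y))  # 70 30
```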
Step 2: edge-node model training.
Before local training, a uniform configuration is set for every edge node: the initial model, the overall number of training rounds NUM_epoch, the number M of edge devices participating in federated learning, and the initialized model weight and bias parameters (W, b). Training uses stochastic gradient descent with the Adam optimizer at learning rate η, a cross-entropy loss function, and an added regularization parameter λ.
Under this uniform configuration, each edge node trains the initial model (or the aggregated model issued in the previous round) on its local data with the following loss function:

Loss = -(1/n) Σ_{i=1}^{n} p(x_i) log q(x_i) + λ‖w‖²

where p(x_i) is the true label of the ith sample, q(x_i) is the model's predicted probability for the ith sample, w denotes the model parameters, λ is the regularization coefficient, and n is the total number of samples.
Step 3: the edge node performs proof of work through its computing power and obtains the accounting right for the ledger block according to the proof of work.
In the embodiment, the edge nodes select the packing node of the blockchain through a local computing-power competition. Specifically, the proof of work consists of the edge nodes competing with their respective computing power to solve a SHA-256 hash puzzle; the winner obtains the accounting right for the next ledger block. After obtaining the accounting right, the edge node uploads its local model gradient information to the ledger block.
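A minimal sketch of such a SHA-256 hash puzzle is shown below; the header contents and the hex-digit difficulty measure are illustrative assumptions, not details fixed by the patent:

```python
import hashlib

def proof_of_work(block_header: bytes, difficulty: int) -> int:
    """Search for a nonce such that SHA-256(header || nonce) starts with
    `difficulty` zero hex digits -- the puzzle the edge nodes race to
    solve for the accounting (block-packing) right."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

header = b"round-3|node-7|gradient-merkle-root"
nonce = proof_of_work(header, difficulty=3)
digest = hashlib.sha256(header + str(nonce).encode()).hexdigest()
print(digest[:3])  # 000
```

Raising the difficulty by one hex digit multiplies the expected search work by 16, which is how the competition consumes real computing power.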
In the embodiment, while the edge node performs proof of work, the locally trained model is also subjected to a quality check using test samples. The mean absolute error can be used to measure local model quality:

MAE = (1/N) Σ_{i=1}^{N} | y_i − f(x_i) |

where MAE denotes the mean absolute error, N the number of test samples, x_i the input sample, y_i the sample label value, and f(x_i) the model's predicted value.
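The quality measure above reduces to a few lines; the sample values are illustrative:

```python
def mean_absolute_error(y_true, y_pred):
    """MAE = (1/N) * sum_i |y_i - f(x_i)| over the test samples."""
    assert len(y_true) == len(y_pred)
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

print(mean_absolute_error([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]))  # 0.5
```

A node whose MAE exceeds some agreed bound could then have its block rejected by the others; the bound itself is not specified in the description.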
Step 4: the edge node compresses its local model gradient and uploads the compressed gradient to the ledger block.
After each round of local training, every edge node's local model gradient is uploaded to the ledger block and stored on each blockchain node in the form of a transaction. Because the ledger blocks are tamper-proof, once an edge node leaks data, the abnormal node can be found by examining the ledger records of each edge node, which improves the trustworthiness of every edge node under the federated learning framework.
To improve the utility of the edge nodes' local model gradients and reduce the chance that a malicious attacker infers membership from the excess local-data information carried by redundant gradients, this embodiment adds a gradient compression operation to the gradient upload process.
Gradient sparsification is adopted to compress the local model gradient: only important gradient information is sent, i.e., the compressed model gradient consists of the important gradient entries. Importance is decided by a gradient threshold; entries larger than the threshold are considered important.
Meanwhile, to avoid losing training information, the remaining unimportant gradient entries screened out by the threshold are accumulated in real time during local compression and uploaded to the ledger block with a delay once the accumulation reaches a preset threshold.
Thus, after sparsification, large gradient entries are uploaded to the ledger block immediately, while small entries are uploaded with a delay after accumulation, which serves to protect the gradient.
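The sparsify-and-accumulate scheme above can be sketched as follows; the threshold value, and the choice to fold the residual back in before thresholding each round, are assumptions about details the description leaves open:

```python
import numpy as np

def sparsify_gradient(grad, residual, threshold):
    """Keep only 'important' entries (|g| > threshold) for immediate upload;
    accumulate the rest into a residual that is uploaded later, so no
    training information is lost."""
    total = grad + residual                    # fold in previously withheld gradient
    mask = np.abs(total) > threshold
    upload = np.where(mask, total, 0.0)        # sparse gradient sent this round
    new_residual = np.where(mask, 0.0, total)  # small entries kept locally
    return upload, new_residual

g = np.array([0.50, 0.01, -0.30, 0.02])
r = np.zeros(4)
up1, r1 = sparsify_gradient(g, r, threshold=0.1)  # indices 0 and 2 uploaded
# In later rounds the withheld small entries accumulate and eventually
# cross the threshold, at which point they are uploaded with a delay.
up2, r2 = sparsify_gradient(g, r1, threshold=0.1)
```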
Each edge node determines ownership of the block-packing right through the proof of work; the edge node that wins the right then broadcasts its ledger block to the edge nodes corresponding to the other ledger blocks on the blockchain.
Step 5: select a temporary central server to perform model aggregation.
In the embodiment, one edge node on the blockchain is randomly selected as the temporary central server in each round; it aggregates the compressed model gradients of all edge nodes on the chain to obtain that round's aggregated model and issues the aggregated model back to the edge nodes for the next round of training.
In an embodiment, the temporary central server aggregates the compressed model gradients by average aggregation or weighted aggregation to obtain the aggregated model.
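Both aggregation modes can be sketched in one routine; weighting by per-node sample counts is an illustrative assumption, since the description does not fix the weights:

```python
import numpy as np

def aggregate(gradients, weights=None):
    """Average (or weighted-average) the compressed gradients collected
    from the ledger blocks; the temporary central server runs this step."""
    grads = np.stack(gradients)
    if weights is None:                # plain average aggregation
        return grads.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # normalize, e.g. by sample counts
    return (grads * w[:, None]).sum(axis=0)

g1 = np.array([1.0, 2.0])
g2 = np.array([3.0, 4.0])
print(aggregate([g1, g2]))                  # [2. 3.]
print(aggregate([g1, g2], weights=[1, 3]))  # [2.5 3.5]
```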
Steps 2 to 5 are repeated until the overall number of training rounds NUM_epoch is reached, at which point the edge models have been updated and the aggregation and uploading of the model are complete.
In this blockchain-decentralized federated learning membership inference attack defense method, decentralized model training is realized through blockchain ledger technology, which replaces the traditional central aggregation server, thereby protecting the data privacy of the edge nodes and resisting model membership inference attacks.
The embodiment also provides a blockchain decentralization-based federated learning membership inference attack defense device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the blockchain decentralization-based federated learning membership inference attack defense method described above.
In specific applications, the memory may be local volatile memory such as RAM, non-volatile memory such as ROM, FLASH, a floppy disk, or a mechanical hard disk, or remote cloud storage. The processor may be a central processing unit (CPU), a microprocessor unit (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA); that is, the steps of the blockchain decentralization-based federated learning membership inference attack defense can be realized by such a processor.
The above embodiments illustrate the technical solutions and advantages of the present invention in detail. It should be understood that they are only preferred embodiments of the present invention and are not intended to limit it; any modifications, additions, equivalent substitutions, and the like made within the scope of the principles of the present invention shall be included in the protection scope of the present invention.

Claims (8)

1. A blockchain decentralization-based federated learning membership inference attack defense method, characterized by comprising the following steps:
the edge nodes participating in federated learning train models on their local data to obtain local model gradients;
each edge node performs proof of work using its computing power, obtains the accounting right for a ledger block according to the proof of work, compresses its local model gradient and uploads the compressed gradient to the ledger block, and, after the ledger block is linked onto the blockchain as a new block, broadcasts the block containing the compressed gradient to the edge nodes corresponding to the other ledger blocks on the blockchain;
in each round, one edge node on the blockchain is randomly selected as a temporary central server, which aggregates the compressed model gradients of all edge nodes on the chain to obtain that round's aggregated model and issues the aggregated model back to the edge nodes for the next round of training.
2. The blockchain decentralization-based federated learning membership inference attack defense method of claim 1, characterized in that gradient sparsification is used to compress the local model gradient: only important gradient information is sent, i.e., the compressed model gradient consists of the important gradient entries, where importance is decided by a gradient threshold and entries larger than the threshold are considered important.
3. The blockchain decentralization-based federated learning membership inference attack defense method of claim 2, characterized in that when an edge node performs local model compression, the remaining unimportant gradient entries are screened out by the gradient threshold and accumulated in real time, and the accumulated unimportant gradients are uploaded to the ledger block with a delay once the accumulation reaches a certain degree.
4. The blockchain decentralization-based federated learning membership inference attack defense method of claim 2, characterized in that before local training, a uniform configuration is set for every edge node: the initial model, the overall number of training rounds, the number of edge devices participating in federated learning, and the initialized model weight and bias parameters; training uses stochastic gradient descent with the Adam optimizer at learning rate η, a cross-entropy loss function, and an added regularization parameter λ;
under this uniform configuration, each edge node trains the initial model or the aggregated model issued in the previous round on its local data with the following loss function:

Loss = -(1/n) Σ_{i=1}^{n} p(x_i) log q(x_i) + λ‖w‖²

where p(x_i) is the true label of the ith sample, q(x_i) is the model's predicted probability for the ith sample, w denotes the model parameters, λ is the regularization coefficient, and n is the total number of samples.
5. The blockchain decentralization-based federated learning membership inference attack defense method of claim 2, characterized in that the proof of work performed by an edge node through its computing power comprises: the edge nodes competing with their respective computing power to solve a SHA-256 hash puzzle, thereby performing the proof of work.
6. The blockchain decentralization-based federated learning membership inference attack defense method of claim 2, characterized in that while an edge node performs proof of work, the locally trained model is also subjected to a quality check using test samples.
7. The blockchain decentralization-based federated learning membership inference attack defense method of claim 2, characterized in that the temporary central server aggregates the compressed model gradients by average aggregation or weighted aggregation to obtain the aggregated model.
8. A blockchain decentralization-based federated learning membership inference attack defense device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that when the computer program is executed by the processor, the blockchain decentralization-based federated learning membership inference attack defense method according to any one of claims 1 to 7 is realized.
CN202110553163.2A 2021-05-20 2021-05-20 Block chain decentralization-based federated learning member reasoning attack defense method and device Pending CN113467928A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110553163.2A CN113467928A (en) 2021-05-20 2021-05-20 Block chain decentralization-based federated learning member reasoning attack defense method and device


Publications (1)

Publication Number Publication Date
CN113467928A true CN113467928A (en) 2021-10-01

Family

ID=77871102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110553163.2A Pending CN113467928A (en) 2021-05-20 2021-05-20 Block chain decentralization-based federated learning member reasoning attack defense method and device

Country Status (1)

Country Link
CN (1) CN113467928A (en)


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114154392A (en) * 2021-10-15 2022-03-08 海南火链科技有限公司 Model co-construction method, device and equipment based on block chain and federal learning
CN113934578A (en) * 2021-10-28 2022-01-14 电子科技大学 Method for data recovery attack in federated learning scene
CN114048515A (en) * 2022-01-11 2022-02-15 四川大学 Medical big data sharing method based on federal learning and block chain
CN114372581A (en) * 2022-02-25 2022-04-19 中国人民解放军国防科技大学 Block chain-based federal learning method and device and computer equipment
CN114372581B (en) * 2022-02-25 2024-03-19 中国人民解放军国防科技大学 Federal learning method and device based on block chain and computer equipment
CN114978893A (en) * 2022-04-18 2022-08-30 西安交通大学 Decentralized federal learning method and system based on block chain
CN114978893B (en) * 2022-04-18 2024-04-12 西安交通大学 Block chain-based decentralization federation learning method and system
CN114785608A (en) * 2022-05-09 2022-07-22 中国石油大学(华东) Industrial control network intrusion detection method based on decentralized federal learning
CN114785608B (en) * 2022-05-09 2023-08-15 中国石油大学(华东) Industrial control network intrusion detection method based on decentralised federal learning
CN116109608A (en) * 2023-02-23 2023-05-12 智慧眼科技股份有限公司 Tumor segmentation method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN113467928A (en) Block chain decentralization-based federated learning member reasoning attack defense method and device
CN111600707B (en) Decentralized federal machine learning method under privacy protection
CN110035066B (en) Attack and defense behavior quantitative evaluation method and system based on game theory
CN112714106A (en) Block chain-based federal learning casual vehicle carrying attack defense method
CN112906903A (en) Network security risk prediction method and device, storage medium and computer equipment
Zhang et al. Dubhe: Towards data unbiasedness with homomorphic encryption in federated learning client selection
Lycklama et al. Rofl: Robustness of secure federated learning
CN113468264B (en) Block chain-based federal learning method and device for poisoning defense and poisoning traceability
CN116405187A (en) Distributed node intrusion situation sensing method based on block chain
CN115481431A (en) Dual-disturbance-based privacy protection method for federated learning counterreasoning attack
Li et al. An adaptive communication-efficient federated learning to resist gradient-based reconstruction attacks
CN112560059B (en) Vertical federal model stealing defense method based on neural pathway feature extraction
Deng et al. NVAS: a non-interactive verifiable federated learning aggregation scheme for COVID-19 based on game theory
CN117216788A (en) Video scene identification method based on federal learning privacy protection of block chain
CN115329388B (en) Privacy enhancement method for federally generated countermeasure network
Li et al. Privacy-Preserving and Poisoning-Defending Federated Learning in Fog Computing
CN116050546A (en) Federal learning method of Bayesian robustness under data dependent identical distribution
Kargupta et al. A game theoretic approach toward multi-party privacy-preserving distributed data mining
Masuda et al. Model fragmentation, shuffle and aggregation to mitigate model inversion in federated learning
Liu et al. Federated Learning with Anomaly Client Detection and Decentralized Parameter Aggregation
CN115983389A (en) Attack and defense game decision method based on reinforcement learning
Mozaffari et al. Fedperm: Private and robust federated learning by parameter permutation
Li et al. Research on the application of data encryption technology in communication security
CN112422552B (en) Attack and defense evolution method under DoS attack of uplink channel in micro-grid secondary control
CN111581663B (en) Federal deep learning method for protecting privacy and facing irregular users

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination