CN115758350B - Aggregation defense method and device for resisting poisoning attack and electronic equipment - Google Patents


Info

Publication number
CN115758350B
CN115758350B (application CN202211397225.6A)
Authority
CN
China
Prior art keywords
client
model
model parameters
parameters
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211397225.6A
Other languages
Chinese (zh)
Other versions
CN115758350A (en
Inventor
高胜
胡乘源
朱建明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central University of Finance and Economics
Original Assignee
Central University of Finance and Economics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central University of Finance and Economics
Priority to CN202211397225.6A
Publication of CN115758350A
Application granted
Publication of CN115758350B
Legal status: Active

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Computer And Data Communications (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides an aggregation defense method and apparatus for resisting poisoning attacks, and an electronic device. The method mainly comprises the following steps: the client downloads the global model and trains on local data; the client compresses the model parameters using a top-k algorithm and uploads them, and the server receives the model parameters uploaded by the clients; data dimensionality reduction is performed on the model parameters using principal component analysis and core parameters are extracted, and the server judges the nature of each client with an aggregation algorithm; the server prunes the model parameters uploaded by malicious clients, and the pruned model parameters are ignored in this round of model parameter updating; the server aggregates the gradients to update the global model and returns the updated global model to the clients; and the clients update their local models with the updated global model. Compared with the prior art, the invention achieves a general defense against poisoning attacks and protects personal privacy more safely and efficiently.

Description

Aggregation defense method and device for resisting poisoning attack and electronic equipment
Technical Field
The invention relates to an aggregation defense method and apparatus for resisting poisoning attacks, and an electronic device, and belongs to the technical field of machine learning.
Background
Traditional joint modeling suffers from problems such as at least one party's data having to leave its premises, the weak enforceability of confidentiality agreements, the low cost of leakage, and homogeneous solutions that are difficult to improve upon. When personal or enterprise data is authorized for external use, a safe, reliable and interpretable technical guarantee system is needed, so that the maximum value of the data can be exploited safely and in compliance.
Federated Learning, as a distributed machine learning framework that enables data to be utilized by multiple parties without leaving its local premises, provides a novel approach to the problems of data silos and privacy protection, and has broad application prospects in fields such as healthcare, finance and the Internet of Things.
At present, aggregation-based defense remains the most promising approach to defending against federated learning poisoning attacks, but current aggregation defense methods still suffer from problems such as low accuracy, poor robustness, limited defense coverage, and susceptibility to privacy leakage.
In view of the foregoing, it is necessary to provide an aggregation defense method, an aggregation device and an electronic device for resisting a poisoning attack, so as to solve the above-mentioned problems.
Disclosure of Invention
The invention aims to provide an aggregation defense method, an aggregation apparatus and an electronic device for resisting poisoning attacks, which can protect personal privacy more safely and efficiently.
In order to achieve the above purpose, the present invention provides an aggregation defense method for resisting poisoning attacks, which mainly comprises the following steps:
S101, a client downloads the global model and trains on local data;
S102, the client compresses the model parameters using a top-k algorithm and uploads them, and the server receives the model parameters uploaded by the client;
S103, performing data dimensionality reduction on the model parameters using principal component analysis, extracting core parameters, and the server judging the nature of each client with an aggregation algorithm;
S104, the server prunes the model parameters uploaded by malicious clients, and the pruned model parameters are ignored in this round of model parameter updating;
S105, the server aggregates the gradients to update the global model and returns the updated global model to the client;
S106, the client updates the local model with the updated global model.
As a further improvement of the present invention, in step S101, the parameters of the global model are θ_0, the total number of clients is N, the number of malicious clients is M, and N > 2M.
As a further development of the invention, in step S102, in training round r a subset of k clients is selected according to availability, where k ≤ N; each participant P_i ∈ P_r in the subset uses its local dataset D_i to execute a training algorithm, obtaining updated parameters θ_{r,i}; the model parameters are compressed with the top-k algorithm and sent to the server.
As a further improvement of the present invention, in step S103, for the model parameter updates after compression, the server receives the parameters θ_{r,i} uploaded by the k clients, i.e., k n-dimensional samples X = {X_1, X_2, ..., X_k}, where X_j denotes the model parameters uploaded by the j-th of the k clients received by the server.
As a further improvement of the present invention, in step S103, the aggregation algorithm specifically includes:
S1, computing the mean value X̄ = (1/k)·Σ_{j=1}^{k} X_j of the model parameters uploaded by the k clients in this round of model parameter updating;
S2, performing data dimensionality reduction on the model parameters using principal component analysis and extracting core parameters, then computing the Mahalanobis distance D_j between each client's dimension-reduced model parameters X_j and the mean value X̄, obtaining D = {D_1, D_2, ..., D_k};
S3, sorting D = {D_1, D_2, ..., D_k} in ascending order to obtain d = {d_1, d_2, ..., d_k};
S4, by the basic property of the Mahalanobis distance, a smaller d_j indicates that the model parameter update X_j of client j lies closer to the principal components of the model parameter updates; conversely, a larger d_j indicates a greater likelihood that client j is a malicious client;
S5, in this round of updating, the clients corresponding to d = {d_1, d_2, ..., d_{k/2}} are provisionally judged honest, and the clients corresponding to d = {d_{k/2+1}, d_{k/2+2}, ..., d_k} are marked as malicious; the model parameter updates of the clients judged honest are averaged to obtain the model parameters θ_r, which serve as the global model parameter update and are issued by the server to each client in round r+1 of the iteration for training on local data.
As a further improvement of the present invention, in step S105, the server aggregates the model parameters remaining after pruning, using the average aggregation most common in federated learning, to update the global model parameters to θ_r; the updated global parameters θ_r are returned to each client for the next round of training, round r+1.
As a further improvement of the present invention, in step S106, each client downloads the updated global parameters θ_r from the server and trains on its local data D_i, starting a new round (r+1) of iteration; the updated model parameters of client i are denoted θ_{r+1,i}. After R rounds of global training, the parameters θ_R determine the final model M.
In order to achieve the above purpose, the invention also provides an aggregation apparatus, which applies the above aggregation defense method for resisting poisoning attacks.
As a further improvement of the present invention, the aggregation apparatus comprises: a training module, an uploading module, a pruning module, an aggregation module and an updating module, wherein,
the training module generates the global model at the server and distributes it to all clients, and the clients train on local data using the global model;
the uploading module is used to upload the model parameter updates of the clients to the server;
the pruning module judges the model parameters uploaded by the clients using an aggregation algorithm based on the Mahalanobis distance;
the aggregation module is used to aggregate the model parameters of clients determined to be honest, update the global parameters, and return them to the honest clients;
the updating module is used to download the updated global parameters from the server to the clients, train on the local data, and start a new round of iteration.
In order to achieve the above purpose, the invention also provides an electronic device, which applies the aggregation defense method for resisting the poisoning attack.
The beneficial effects of the invention are as follows: the invention can be applied to scenarios involving data security and privacy protection in fields such as healthcare, finance and the Internet of Things; it realizes a general defense against poisoning attacks such as data poisoning, model poisoning and backdoor attacks; it overcomes the limitation of existing federated learning models in which malicious adversaries damage the accuracy and performance of the global model through poisoning attacks and cause privacy leakage; and it provides a federated learning aggregation algorithm that effectively prunes abnormal model parameters based on the Mahalanobis distance, balancing accuracy, efficiency and robustness.
Drawings
FIG. 1 is a schematic diagram of the model structure of the aggregation defense method against poisoning attacks according to the present invention.
Fig. 2 is a schematic flow chart of the aggregation defense method against poisoning attacks according to the present invention.
Fig. 3 is a flowchart of the aggregation algorithm in the pruning stage of the aggregation defense method against poisoning attacks according to the present invention.
Fig. 4 is a block diagram of an aggregation apparatus applying the aggregation defense method against poisoning attacks according to the present invention.
Fig. 5 is a schematic structural diagram of an electronic device applying the aggregation defense method against poisoning attacks according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
It should be noted that, in order to avoid obscuring the present invention with unnecessary details, only the structures and/or processing steps closely related to the solution of the present invention are shown in the drawings, and other details of little relevance to the present invention are omitted.
In addition, it should be further noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in figs. 1 to 5, the invention provides an aggregation defense method against poisoning attacks based on federated learning. The entities involved in the scenario include honest clients, malicious clients, a server and an aggregation rule, where (1)-(6) denote the logical structure of the model formed by the aggregation defense method against poisoning attacks.
Specifically, as shown in fig. 2, the aggregation defense method for resisting the poisoning attack mainly includes the following steps:
S101, a client downloads the global model and trains on local data;
S102, the client compresses the model parameters using a top-k algorithm and uploads them, and the server receives the model parameters uploaded by the client;
S103, performing data dimensionality reduction on the model parameters using principal component analysis, extracting core parameters, and the server judging the nature of each client with an aggregation algorithm;
S104, the server prunes the model parameters uploaded by malicious clients, and the pruned model parameters are ignored in this round of model parameter updating;
S105, the server aggregates the gradients to update the global model and returns the updated global model to the client;
S106, the client updates the local model with the updated global model.
Steps S101 to S106 will be specifically described below.
In step S101, assume the total number of clients is N, the number of malicious clients is M, and N > 2M. In the federated learning environment, the N participants each hold their own private training dataset D_1, ..., D_N. First, the server generates a global model with parameters θ_0 and publishes it to all clients. Rather than sharing their private raw data, the clients execute the training algorithm locally and then upload the updated model parameters to the server. In this process, owing to the nature of federated learning, the presence of malicious clients is unavoidable; they may tamper with local data or model update parameters, damaging the accuracy and performance of the global model.
In step S102, in training round r, a subset of k clients (k ≤ N) is selected based on availability. Each participant P_i ∈ P_r in the subset uses the local dataset D_i from step S101 to execute a training algorithm, obtaining updated parameters θ_{r,i}; the model parameters are compressed with the top-k algorithm and sent to the server. This reduces the communication overhead of federated learning and improves the efficiency of the federated learning model. After the model parameter vector is obtained, its elements are sorted by absolute value; the gradient values with the largest absolute values are selected and uploaded to the server. The subset may include some malicious clients. The server receives the parameters θ_{r,i} uploaded by the clients.
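As a concrete illustration of this compression step, the following is a minimal Python/NumPy sketch; the function names, the sparse (index, value) representation, and the symbol k_top (used to distinguish the number of retained values from the number k of selected clients) are illustrative assumptions, not interfaces defined by the patent:

    import numpy as np

    def topk_compress(update: np.ndarray, k_top: int):
        # Keep only the k_top entries of the flattened update with the
        # largest absolute values; all other entries are treated as zero.
        flat = update.ravel()
        idx = np.argpartition(np.abs(flat), -k_top)[-k_top:]  # largest magnitudes
        return idx, flat[idx]  # sparse (index, value) pair sent to the server

    def topk_decompress(idx: np.ndarray, vals: np.ndarray, shape) -> np.ndarray:
        # Server-side reconstruction of the sparse update as a dense vector.
        flat = np.zeros(int(np.prod(shape)))
        flat[idx] = vals
        return flat.reshape(shape)

In round r, each client would call topk_compress on its update θ_{r,i} and upload only the (index, value) pair, which the server expands back to a dense n-dimensional sample with topk_decompress before the aggregation stage.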
In step S103, unlike FedAvg, currently the most common aggregation rule in federated learning, a Mahalanobis-distance-based aggregation algorithm is selected here, based on the assumption that the distance between parameter updates uploaded by honest participants and those uploaded by malicious participants is greater than the distance among parameter updates uploaded by honest participants; abnormal parameter updates are pruned and malicious clients identified. The invention adopts the Mahalanobis-distance-based aggregation algorithm to find the abnormal parameter updates of malicious clients.
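For reference, the Mahalanobis distance used here is the standard one (this formula is background, not additional matter from the patent): with X̄ the mean of the dimension-reduced updates and S their covariance matrix,

    D_j = sqrt( (X_j - X̄)^T · S^(-1) · (X_j - X̄) )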
The model update parameters θ_{r,i} are often very high-dimensional. In order to reduce the communication overhead of federated learning while guaranteeing that the subsequent Mahalanobis-distance-based aggregation algorithm can be carried out, the dimensionality must be reduced as reasonably as possible while preserving the accuracy of the model.
As shown in fig. 3, for the model parameter updates after compression, the server receives the parameters θ_{r,i} uploaded by the k clients, i.e., k n-dimensional samples X = {X_1, X_2, ..., X_k}, where X_j denotes the model parameters uploaded by the j-th of the k clients received by the server. The aggregation algorithm specifically comprises the following steps:
S1, computing the mean value X̄ = (1/k)·Σ_{j=1}^{k} X_j of the model parameters uploaded by the k clients in this round of model parameter updating;
S2, performing data dimensionality reduction on the model parameters using principal component analysis and extracting core parameters, so as to satisfy the computation conditions of the Mahalanobis distance, improve computational efficiency, and ensure the efficient realization of the subsequent aggregation algorithm; then computing the Mahalanobis distance D_j between each client's dimension-reduced model parameters X_j and the mean value X̄, obtaining D = {D_1, D_2, ..., D_k};
S3, sorting D = {D_1, D_2, ..., D_k} in ascending order to obtain d = {d_1, d_2, ..., d_k};
S4, by the basic property of the Mahalanobis distance, a smaller d_j indicates that the model parameter update X_j of client j lies closer to the principal components of the model parameter updates; conversely, a larger d_j indicates a greater likelihood that client j is a malicious client;
S5, in this round of updating, the clients corresponding to d = {d_1, d_2, ..., d_{k/2}} are provisionally judged honest, and the clients corresponding to d = {d_{k/2+1}, d_{k/2+2}, ..., d_k} are marked as malicious; the model parameter updates of the clients judged honest are averaged to obtain the model parameters θ_r, which serve as the global model parameter update and are issued by the server to each client in round r+1 of the iteration for training on local data.
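A compact sketch of steps S1-S5 in Python follows, assuming the k client updates have already been decompressed into a k×n matrix; the use of scikit-learn's PCA, the choice of n_components, and the pseudo-inverse for the covariance are implementation assumptions, since the patent specifies only principal component analysis and the Mahalanobis distance:

    import numpy as np
    from sklearn.decomposition import PCA

    def mahalanobis_aggregate(X: np.ndarray, n_components: int = 5):
        # X: (k, n) matrix, one row per client update.
        k = X.shape[0]
        # S2: reduce the n-dimensional updates to a few core components
        # (n_components must be <= min(k, n)).
        Z = PCA(n_components=n_components).fit_transform(X)
        # S1/S2: Mahalanobis distance of each reduced update to the mean.
        mu = Z.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(Z, rowvar=False))
        diff = Z - mu
        d = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
        # S3/S5: the half of the clients with the smaller distances are
        # provisionally judged honest; the rest are marked malicious.
        honest = np.argsort(d)[: k // 2]
        # S5: average the honest (full-dimensional) updates to obtain theta_r.
        theta_r = X[honest].mean(axis=0)
        return theta_r, honest

Note that the distances are computed in the reduced space, but the averaging in S5 is done over the original full-dimensional updates, matching the pruning-then-aggregation order described above.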
Through the above steps, malicious clients can be effectively identified and their influence on the model update is limited before the model parameters are updated, while the information loss caused by dimensionality reduction is minimized, providing a strong defense for the global model at the model aggregation step.
In step S104, after the server has processed the updates with the Mahalanobis-distance-based aggregation algorithm, the clients corresponding to d = {d_{k/2+1}, d_{k/2+2}, ..., d_k} are marked as malicious, and the model parameter updates of these clients are ignored in this round of model aggregation.
In step S105, the server aggregates the model parameters remaining after pruning, using the average aggregation most common in federated learning, to update the global model parameters to θ_r; the updated global parameters θ_r are returned to each client for the next round of training, round r+1. Experiments show that the aggregation defense method provides a good defense against poisoning attacks by malicious clients, including data poisoning, model poisoning and backdoor attacks.
In step S106, each client downloads the updated global parameters θ_r from the server and trains on its local data D_i, starting a new round (r+1) of iteration; the updated model parameters of client i are denoted θ_{r+1,i}. After R rounds of global training, the parameters θ_R determine the final model M.
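Tying steps S101-S106 together, one round of the defended training loop might look as follows; this is a schematic under the assumptions of the earlier sketches, and client_train (local training on D_i) is an illustrative stand-in, not a function defined by the patent:

    import numpy as np

    def federated_round(theta, clients, k_top, shape):
        # One training round r: local training, top-k upload,
        # Mahalanobis-based pruning, and average aggregation.
        uploads = []
        for client in clients:                        # S101-S102
            theta_i = client_train(theta, client)     # hypothetical local training
            uploads.append(topk_compress(theta_i, k_top))
        # The server reconstructs the k updates into a (k, n) matrix.
        X = np.stack([topk_decompress(idx, vals, shape).ravel()
                      for idx, vals in uploads])
        # S103-S105: prune suspected-malicious updates, aggregate the rest.
        theta_r, honest = mahalanobis_aggregate(X)
        return theta_r.reshape(shape)                 # returned to clients (S106)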
As the above flow shows, the core stage of the aggregation defense method against poisoning attacks, namely the federated learning pruning stage, uses the Mahalanobis-distance-based aggregation algorithm to prune abnormal parameters, effectively eliminating the influence of malicious clients' model updates on model aggregation.
As shown in fig. 4, other embodiments of the present invention further provide an aggregation apparatus applying the aggregation defense method against poisoning attacks, where the aggregation apparatus includes a training module 201, an uploading module 202, a pruning module 203, an aggregation module 204 and an updating module 205.
The training module 201 is configured to generate the parameters θ_0 at the server and publish them to all clients; each client trains on local data using the federated learning global model to obtain its model parameter update;
the uploading module 202 is configured to upload the model parameter updates of the k clients to the server, the parameters being compressed with the top-k algorithm before upload;
the pruning module 203 is configured for the server to judge the model parameters θ_{r,i} uploaded by the k clients through the Mahalanobis-distance-based aggregation algorithm, the clients ranked in the last 50% being more likely to be malicious;
the aggregation module 204 is configured to aggregate the model parameters of clients determined to be honest, update the global parameters θ_r, and return them to the honest clients;
the updating module 205 is configured to download the updated global parameters θ_r from the server to the clients, train on the local data D_i, and start a new round (r+1) of iteration. The global model is finally trained, effectively realizing the defense against poisoning attacks and thereby a safer and more efficient method of personal privacy protection.
As shown in fig. 5, based on the above aggregation defense method against poisoning attacks and the aggregation apparatus, the present invention further provides an electronic device 500 applying the aggregation defense method against poisoning attacks, where the electronic device 500 includes a memory 501, a processor 502 and a communication interface 503, interconnected via a bus.
In summary, the invention can be applied to scenarios involving data security and privacy protection in fields such as healthcare, finance and the Internet of Things; it realizes a general defense against poisoning attacks such as data poisoning, model poisoning and backdoor attacks; it overcomes the limitation of existing federated learning models in which malicious adversaries damage the accuracy and performance of the global model through poisoning attacks and cause privacy leakage; and it provides a federated learning aggregation algorithm that effectively prunes abnormal model parameters based on the Mahalanobis distance, balancing accuracy, efficiency and robustness.
The above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the technical solution of the present invention.

Claims (6)

1. An aggregation defense method for resisting poisoning attacks, characterized by mainly comprising the following steps:
S101, a client downloads the global model and trains on local data; in step S101, the parameters of the global model are θ_0, the total number of clients is N, the number of malicious clients is M, and N > 2M;
S102, the client compresses the model parameters using a top-k algorithm and uploads them, and the server receives the model parameters uploaded by the client; in step S102, in training round r, a subset of k clients is selected according to availability, where k ≤ N; each participant P_i ∈ P_r uses its local dataset D_i to execute a training algorithm, obtaining updated parameters θ_{r,i}, which are compressed with the top-k algorithm and sent to the server;
S103, performing data dimensionality reduction on the model parameters using principal component analysis, extracting core parameters, and the server judging the nature of each client with an aggregation algorithm; in step S103, for the model parameter updates after compression, the server receives the parameters θ_{r,i} uploaded by the k clients, i.e., k n-dimensional samples X = {X_1, X_2, ..., X_k}, where X_j denotes the model parameters uploaded by the j-th of the k clients received by the server;
in step S103, the aggregation algorithm specifically comprises:
S1, computing the mean value X̄ = (1/k)·Σ_{j=1}^{k} X_j of the model parameters uploaded by the k clients in this round of model parameter updating;
S2, performing data dimensionality reduction on the model parameters using principal component analysis and extracting core parameters, then computing the Mahalanobis distance D_j between each client's dimension-reduced model parameters X_j and the mean value X̄, obtaining D = {D_1, D_2, ..., D_k};
S3, sorting D = {D_1, D_2, ..., D_k} in ascending order to obtain d = {d_1, d_2, ..., d_k};
S4, by the basic property of the Mahalanobis distance, a smaller d_j indicates that the model parameter update X_j of client j lies closer to the principal components of the model parameter updates; conversely, a larger d_j indicates a greater likelihood that client j is a malicious client;
S5, in this round of updating, the clients corresponding to d = {d_1, d_2, ..., d_{k/2}} are provisionally judged honest, and the clients corresponding to d = {d_{k/2+1}, d_{k/2+2}, ..., d_k} are marked as malicious; the model parameter updates of the clients judged honest are averaged to obtain the model parameters θ_r, which serve as the global model parameter update and are issued by the server to each client in round r+1 of the iteration for training on local data;
S104, the server prunes the model parameters uploaded by malicious clients, and the pruned model parameters are ignored in this round of model parameter updating;
S105, the server aggregates the gradients to update the global model and returns the updated global model to the clients;
S106, the clients update their local models with the updated global model.
2. The aggregation defense method against poisoning attacks according to claim 1, characterized in that: in step S105, the server aggregates the model parameters remaining after pruning, using the average aggregation most common in federated learning, to update the global model parameters to θ_r; the updated global parameters θ_r are returned to each client for the next round of training, round r+1.
3. The aggregation defense method against poisoning attacks according to claim 2, characterized in that: in step S106, each client downloads the updated global parameters θ_r from the server and trains on its local data D_i, starting a new round (r+1) of iteration; the updated model parameters of client i are denoted θ_{r+1,i}; after R rounds of global training, the parameters θ_R determine the final model M.
4. An aggregation apparatus, characterized in that it applies the aggregation defense method against poisoning attacks according to any one of claims 1-3.
5. The aggregation apparatus of claim 4, wherein the aggregation apparatus comprises: a training module, an uploading module, a pruning module, an aggregation module and an updating module, wherein,
the training module generates the global model at the server and distributes it to all clients, and the clients train on local data using the global model;
the uploading module is used to upload the model parameter updates of the clients to the server;
the pruning module judges the model parameters uploaded by the clients using an aggregation algorithm based on the Mahalanobis distance;
the aggregation module is used to aggregate the model parameters of clients determined to be honest, update the global parameters, and return them to the honest clients;
the updating module is used to download the updated global parameters from the server to the clients, train on the local data, and start a new round of iteration.
6. An electronic device, characterized in that it applies the aggregation defense method against poisoning attacks according to any one of claims 1-3.
CN202211397225.6A 2022-11-09 2022-11-09 Aggregation defense method and device for resisting poisoning attack and electronic equipment Active CN115758350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211397225.6A CN115758350B (en) 2022-11-09 2022-11-09 Aggregation defense method and device for resisting poisoning attack and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211397225.6A CN115758350B (en) 2022-11-09 2022-11-09 Aggregation defense method and device for resisting poisoning attack and electronic equipment

Publications (2)

Publication Number Publication Date
CN115758350A CN115758350A (en) 2023-03-07
CN115758350B (en) 2023-10-24

Family

ID=85368484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211397225.6A Active CN115758350B (en) 2022-11-09 2022-11-09 Aggregation defense method and device for resisting poisoning attack and electronic equipment

Country Status (1)

Country Link
CN (1) CN115758350B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11616804B2 (en) * 2019-08-15 2023-03-28 Nec Corporation Thwarting model poisoning in federated learning
US11651292B2 (en) * 2020-06-03 2023-05-16 Huawei Technologies Co., Ltd. Methods and apparatuses for defense against adversarial attacks on federated learning systems

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113468264A (en) * 2021-05-20 2021-10-01 杭州趣链科技有限公司 Block chain based poisoning defense and poisoning source tracing federal learning method and device
CN113965359A (en) * 2021-09-29 2022-01-21 哈尔滨工业大学(深圳) Defense method and device for federal learning data virus attack
CN113919507A (en) * 2021-10-12 2022-01-11 重庆邮电大学 Federal learning method based on DAG block chain
CN114494771A (en) * 2022-01-10 2022-05-13 北京理工大学 Federal learning image classification method capable of defending backdoor attacks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Tolpegin, V., et al., "Data Poisoning Attacks Against Federated Learning Systems," European Symposium on Research in Computer Security, pp. 1-20 *
Zhou Zhenyu et al., Communication and Information Security Technologies for the Power Internet of Things, China Machine Press, 2020, pp. 105-106 *

Also Published As

Publication number Publication date
CN115758350A (en) 2023-03-07

Similar Documents

Publication Publication Date Title
He et al. Attacking and protecting data privacy in edge–cloud collaborative inference systems
CN110084365B (en) Service providing system and method based on deep learning
CN109951444B (en) Encrypted anonymous network traffic identification method
CN112906903A (en) Network security risk prediction method and device, storage medium and computer equipment
Zhu et al. An efficient and privacy-preserving biometric identification scheme in cloud computing
CN106657049A (en) System and method for real-time collection and fixing of electronic evidence
CN115017541A (en) Cloud-side-end-collaborative ubiquitous intelligent federal learning privacy protection system and method
CN111181930A (en) DDoS attack detection method, device, computer equipment and storage medium
CN115037556B (en) Authorized sharing method for encrypted data in smart city system
WO2021233183A1 (en) Neural network verification method, apparatus and device, and readable storage medium
Zheng et al. A novel video copyright protection scheme based on blockchain and double watermarking
Han et al. Zt-bds: A secure blockchain-based zero-trust data storage scheme in 6g edge iot
Firdaus et al. A secure federated learning framework using blockchain and differential privacy
CN115758350B (en) Aggregation defense method and device for resisting poisoning attack and electronic equipment
CN111832661B (en) Classification model construction method, device, computer equipment and readable storage medium
CN116825259B (en) Medical data management method based on Internet of things
CN117648994A (en) Efficient heterogeneous longitudinal federal learning method based on unsupervised learning
CN111553693A (en) Associated certificate storage method and system based on secondary hash
Shibly et al. Personalized federated learning for automotive intrusion detection systems
Zhao et al. PriFace: a privacy-preserving face recognition framework under untrusted server
CN116233844A (en) Physical layer equipment identity authentication method and system based on channel prediction
CN115643105A (en) Federal learning method and device based on homomorphic encryption and depth gradient compression
CN115695002A (en) Traffic intrusion detection method, apparatus, device, storage medium, and program product
CN111835720B (en) VPN flow WEB fingerprint identification method based on feature enhancement
CN114866310A (en) Malicious encrypted flow detection method, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant