CN114169010A - Edge privacy protection method based on federated learning - Google Patents

Edge privacy protection method based on federated learning

Info

Publication number
CN114169010A
CN114169010A (application CN202111514239.7A)
Authority
CN
China
Prior art keywords
privacy
model
local
edge
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111514239.7A
Other languages
Chinese (zh)
Inventor
吴彩
葛斌
张天浩
沐里亭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University of Science and Technology
Original Assignee
Anhui University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University of Science and Technology filed Critical Anhui University of Science and Technology
Priority to CN202111514239.7A priority Critical patent/CN114169010A/en
Publication of CN114169010A publication Critical patent/CN114169010A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 - Protecting data
    • G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 - Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 - Protecting personal data, e.g. for financial or medical purposes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to a federated learning-based edge privacy protection method comprising the following steps: the edge device downloads the model from the server, initializes the federated training model, and trains the model on local data; perturbation is added to the gradient parameters of the local model using localized differential privacy, and the perturbed parameters are uploaded to the server; the server receives the locally differentially private model parameters, aggregates them, and updates the global model parameters. Applying local differential privacy to federated learning allows local training and model aggregation to be completed on the edge devices while edge users share data safely. Raw data never needs to be uploaded, so the privacy of edge user data is protected, and the model parameters are protected while model accuracy is preserved.

Description

Edge privacy protection method based on federated learning
Technical Field
The invention relates to the technical field of data security and privacy protection, and in particular to an edge privacy protection method based on federated learning.
Background
In the big data era, Internet-of-Things devices are proliferating and the data in the network is growing explosively. Against this background, edge computing for distributed learning emerged when traditional centralized cloud computing could no longer meet the demands of networked devices. The combination of edge computing and machine learning has greatly promoted the application and development of edge computing. As machine learning services become an indispensable part of daily life, ever more user data must be collected, and the security of users' private data can no longer be guaranteed. Corresponding measures must therefore be taken to protect user privacy. However, standard machine learning methods rarely consider privacy, and anonymization alone offers only weak protection.
Edge computing, as a new data service platform, brings a certain amount of storage and computing capability to devices in the Internet of Things. Leveraging this capability, edge computing can be combined with federated learning to protect the security of edge data. Throughout federated learning, data always remains on the local edge device and is never shared with other edge nodes, making federated learning a powerful way to secure edge data and protect privacy. However, federated learning algorithms based on differential privacy often do not consider privacy protection during the parameter upload phase; that is, a client's gradient information may be intercepted by a hidden adversary when the local model is uploaded to the server. In the edge computing scenario, an edge user outsources private data through an edge server and therefore cannot control how the outsourced data is used. Meanwhile, the edge devices storing the outsourced data cannot provide reliable security guarantees. Federated learning can address these threats: it not only provides strict privacy assurance but also offers high communication efficiency, low computation and development cost, and high device tolerance. Accordingly, randomized response is introduced from the perspectives of model-parameter upload security and local differential privacy protection to ensure the security and reliability of the model.
Disclosure of Invention
The invention aims to provide an edge privacy protection method based on federated learning, which uses local differential privacy to perturb the model parameters of edge devices in federated learning and controls the privacy loss by adjusting the privacy parameter ε. The method completes local training and model aggregation of data on the edge devices while edge users share data safely; raw data never needs to be uploaded, the privacy and security of edge user data are guaranteed, and the model parameters are protected while model accuracy is preserved.
In order to achieve the above object, with reference to fig. 1, the present invention provides a federated-learning-based edge privacy protection method, including:
the edge device downloads the model from the server, initializes the federated training model, and trains the model on local data; perturbation is added to the gradient parameters of the local model using localized differential privacy, and the perturbed parameters are uploaded to the server; the server receives the locally differentially private model parameters, aggregates them, and updates the global model parameters.
Further, the method further comprises:
S1, the edge device downloads the model from the server, receives the initial parameters, initializes the federated training model, and trains the model on local data;
S2, perturbation is added to the gradient parameters of the local model using localized differential privacy, obfuscating the local model parameters; the privacy loss is controlled by adjusting the privacy parameter ε, and the perturbed parameters are uploaded to the server;
S3, the server receives the locally differentially private model parameters, aggregates them, and updates the global model parameters;
S4, the updated model parameters are sent to the edge devices participating in training, and each edge device updates its local model, repeating until the loss function converges.
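A minimal sketch of steps S1-S4 follows. The linear model, the clipping bound, the Laplace-noise perturbation, and the unweighted-mean aggregation are illustrative assumptions, not the patent's prescribed choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.1, epochs=5):
    # S1: local SGD on a linear least-squares model (illustrative)
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)      # gradient of the mean-squared loss
        w -= lr * grad
    return w

def perturb(w, epsilon, clip=1.0):
    # S2: clip each weight, then add Laplace noise scaled to the clipped range
    w = np.clip(w, -clip, clip)
    return w + rng.laplace(0.0, 2.0 * clip / epsilon, size=w.shape)

def federated_round(w_global, clients, epsilon):
    # S2-S3: each client trains and uploads a perturbed model;
    # the server aggregates with a simple unweighted mean
    uploads = [perturb(local_train(w_global, X, y), epsilon) for X, y in clients]
    return np.mean(uploads, axis=0)

# synthetic clients whose data share a true weight vector
w_true = np.array([0.5, -0.3, 0.2])
clients = []
for _ in range(8):
    X = rng.normal(size=(64, 3))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=64)))

w = np.zeros(3)
for t in range(20):                             # S4: iterate until convergence
    w = federated_round(w, clients, epsilon=4.0)
```

Raw client data never leaves `local_train`; only the perturbed weights are "uploaded" to the aggregation step, mirroring the S1-S4 flow.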
Furthermore, the method never requires the raw data to be uploaded, protecting the privacy and security of edge user data.
Further, in step S2, the privacy parameter ε of local differential privacy satisfies Pr[S(v) = v*] ≤ e^ε · Pr[S(v′) = v*] for any two input records v, v′ and any output v*, which bounds how similar the output distributions of any two input records must be.
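Claim 7 instantiates this guarantee with a randomized response mechanism. A minimal single-bit sketch follows; the function name and binary encoding are illustrative assumptions:

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit with probability p = e^eps / (1 + e^eps),
    otherwise report its flip. For any output v*, the ratio
    Pr[S(v) = v*] / Pr[S(v') = v*] is at most p / (1 - p) = e^eps,
    so the mechanism satisfies eps-local differential privacy."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p else 1 - bit

# the LDP bound holds exactly: p / (1 - p) == e^eps
eps = 1.0
p = math.exp(eps) / (1.0 + math.exp(eps))
```

A smaller ε pushes p toward 1/2 (more flips, stronger privacy); a larger ε pushes p toward 1 (fewer flips, more accurate reports).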
Further, the method aims to train a global model F(w) in the edge network scenario:

F(w) = (1/N) Σ_{i=1}^{N} F_i(w)

minimizing the objective function F*(w) to provide a strong privacy guarantee, where F*(w) is:

F*(w) = min_w (1/N) Σ_{i=1}^{N} F_i(w)

For each user U_i with data set D_i, the loss function is:

F_i(w) = (1/|D_i|) Σ_{j=1}^{|D_i|} f_j(w, D_i)

where f_j(w, D_i) is the loss of the model parameters w on the j-th data sample (x_j, y_j).
Further, the method adopts the stochastic gradient descent (SGD) algorithm: a new round of weights is computed from the gradient of the local client's training loss scaled by a fixed learning rate ρ. Thus, the model weights of local client i are updated as follows:

w_i^{t+1} = w_i^t - ρ ∇F_i(w_i^t)

The model aggregation at the central server in communication round t is updated as follows:

w^{t+1} = (1/N) Σ_{i=1}^{N} w_i^{t+1}
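The two updates can be checked numerically. Under illustrative assumptions (per-client quadratic losses F_i(w) = ½‖w - c_i‖², N = 5 clients, ρ = 0.05), iterating the local step followed by the unweighted average drives the global weights to the minimizer of (1/N) Σ_i F_i, which is the mean of the centers c_i:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, rho = 5, 3, 0.05

# each client i holds a quadratic loss F_i(w) = 0.5 * ||w - c_i||^2
centers = rng.normal(size=(N, d))

def grad_F(i, w):
    return w - centers[i]                  # gradient of F_i at w

w_global = np.zeros(d)
for t in range(200):
    # local update: w_i^{t+1} = w_i^t - rho * grad F_i(w_i^t)
    locals_ = [w_global - rho * grad_F(i, w_global) for i in range(N)]
    # aggregation: w^{t+1} = (1/N) * sum_i w_i^{t+1}
    w_global = np.mean(locals_, axis=0)
```

The fixed point of the averaged update is exactly the minimizer of the global objective, so `w_global` converges to `centers.mean(axis=0)`.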
advantageous effects
Compared with the prior art, the invention has the beneficial effects that:
(1) the invention combines federated learning and edge computing to realize edge data sharing;
(2) the method applies local differential privacy to federated learning; local training and model aggregation can be completed on the edge devices while edge users share data safely, raw data never needs to be uploaded, and the privacy and security of edge user data are guaranteed.
Drawings
The drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. Embodiments of various aspects of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of the federated-learning-based edge privacy protection method of the present invention.
FIG. 2 is a diagram of federated learning parameter protection.
FIG. 3 is a schematic diagram of one procedure for implementing the federated learning method of the present invention.
Detailed Description
In order to better understand the technical content of the present invention, specific embodiments are described below with reference to the accompanying drawings.
In this disclosure, aspects of the present invention are described with reference to the accompanying drawings, in which a number of illustrative embodiments are shown.
Embodiments of the present disclosure are not necessarily defined to include all aspects of the invention. It should be appreciated that the various concepts and embodiments described above, as well as those described in greater detail below, may be implemented in any of numerous ways, as the disclosed concepts and embodiments are not limited to any one implementation. In addition, some aspects of the present disclosure may be used alone, or in any suitable combination with other aspects of the present disclosure. The invention or inventions are further illustrated by the following specific examples.
With reference to fig. 1, the present invention provides an edge-computing-oriented federated learning method, which includes:
the edge device downloads the model from the server, initializes the federated training model, and trains the model on local data; perturbation is added to the gradient parameters of the local model using localized differential privacy, and the perturbed parameters are uploaded to the server; the server receives the locally differentially private model parameters, aggregates them, and updates the global model parameters.
The whole edge privacy protection framework can be divided into three layers: a terminal device layer, an edge layer, and a cloud server layer.
(1) Terminal device layer: users collect raw data through edge devices (such as gateways and surveillance cameras) and upload the data to the edge server.
(2) Edge layer: the collected data are trained on the edge server into a local model, and perturbation is added to the local model.
(3) Cloud server layer: model aggregation is completed here; meanwhile, edge server nodes obtain the model from the cloud server layer according to task requirements.
Assume that N devices participate in a round of global model aggregation in the edge computing environment. The N devices train models locally on the data collected by the terminal devices, adopting the stochastic gradient descent (SGD) algorithm during the upload process: a new round of weights is computed from the gradient of the local client's training loss scaled by a fixed learning rate ρ. Thus, the model weights of local client i are updated as follows:

w_i^{t+1} = w_i^t - ρ ∇F_i(w_i^t)

The model aggregation at the central server in communication round t is updated as follows:

w^{t+1} = (1/N) Σ_{i=1}^{N} w_i^{t+1}
in the uploading process of the local model, firstly, random disturbance is added to the parameters of the local model; then the model is polymerized; finally, the global model broadcasts the edge devices that participate in the training. The encryption process of the federally learned model parameters is shown in fig. 2.
In summary, the federated-learning-based edge privacy protection method provided by the invention protects data privacy under edge computing by combining federated learning with localized differential privacy. Localized differential privacy is better suited to the federated environment of edge computing than centralized differential privacy. The method strengthens the protection of edge data privacy, protects the security of the model parameters during upload to a certain extent, and is applicable to scenarios where the third-party server is untrusted.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (7)

1. A federated learning-based edge privacy protection method, the method comprising:
the edge device downloads the model from the server, initializes the federated training model, and trains the model on local data; perturbation is added to the gradient parameters of the local model using localized differential privacy, and the perturbed parameters are uploaded to the server; the server receives the locally differentially private model parameters, aggregates them, and updates the global model parameters.
2. The federated learning-based edge privacy protection method of claim 1, further comprising:
S1, the edge device downloads the model from the server, receives the initial parameters, initializes the federated training model, and trains the model on local data;
S2, perturbation is added to the gradient parameters of the local model using localized differential privacy, obfuscating the local model parameters; the privacy loss is controlled by adjusting the privacy parameter ε, and the perturbed parameters are uploaded to the server;
S3, the server receives the locally differentially private model parameters, aggregates them, and updates the global model parameters;
S4, the updated model parameters are sent to the edge devices participating in training, and each edge device updates its local model until the loss function converges.
3. The federated learning-based edge privacy protection method of claim 2, wherein the method never requires the raw data to be uploaded, protecting the privacy and security of edge user data.
4. The federated learning-based edge privacy protection method of claim 2, wherein in step S2 the privacy parameter ε of local differential privacy satisfies Pr[S(v) = v*] ≤ e^ε · Pr[S(v′) = v*] for any two input records v, v′ and any output v*, bounding the similarity of the output distributions of any two input records.
5. The federated learning-based edge privacy protection method of claim 2, wherein the method aims to train a global model F(w) in the edge network scenario:

F(w) = (1/N) Σ_{i=1}^{N} F_i(w)

minimizing the objective function F*(w) to provide a strong privacy guarantee, where F*(w) is:

F*(w) = min_w (1/N) Σ_{i=1}^{N} F_i(w)

For each user U_i with data set D_i, the loss function is:

F_i(w) = (1/|D_i|) Σ_{j=1}^{|D_i|} f_j(w, D_i)

where f_j(w, D_i) is the loss of the model parameters w on the j-th data sample (x_j, y_j).
6. The federated learning-based edge privacy protection method of claim 1, wherein the method adopts the stochastic gradient descent (SGD) algorithm: a new round of weights is computed from the gradient of the local client's training loss scaled by a fixed learning rate ρ, such that the model weights of local client i are updated as follows:

w_i^{t+1} = w_i^t - ρ ∇F_i(w_i^t)

The model aggregation at the central server in communication round t is updated as follows:

w^{t+1} = (1/N) Σ_{i=1}^{N} w_i^{t+1}
7. The federated learning-based edge privacy protection method of claim 1, further comprising:
a randomized response mechanism is adopted for perturbation in local differential privacy, and the degree of privacy protection is controlled through the randomization probability P. To satisfy ε-localized differential privacy, the randomization probability is:

P = e^ε / (1 + e^ε)

ε is called the privacy budget; the smaller the value of ε, the stronger the algorithm's protection of user privacy. ε measures the privacy loss of the mechanism: as the privacy budget increases, the accuracy of the perturbed output increases, but the privacy loss of the protected object increases accordingly.
CN202111514239.7A 2021-12-13 2021-12-13 Edge privacy protection method based on federal learning Pending CN114169010A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111514239.7A CN114169010A (en) 2021-12-13 2021-12-13 Edge privacy protection method based on federal learning

Publications (1)

Publication Number Publication Date
CN114169010A true CN114169010A (en) 2022-03-11

Family

ID=80485822

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111514239.7A Pending CN114169010A (en) 2021-12-13 2021-12-13 Edge privacy protection method based on federal learning

Country Status (1)

Country Link
CN (1) CN114169010A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111866869A (en) * 2020-07-07 2020-10-30 兰州交通大学 Federal learning indoor positioning privacy protection method facing edge calculation
CN113361694A (en) * 2021-06-30 2021-09-07 哈尔滨工业大学 Layered federated learning method and system applying differential privacy protection
CN113434873A (en) * 2021-06-01 2021-09-24 内蒙古大学 Federal learning privacy protection method based on homomorphic encryption
CN113591145A (en) * 2021-07-28 2021-11-02 西安电子科技大学 Federal learning global model training method based on difference privacy and quantification
CN113762525A (en) * 2021-09-07 2021-12-07 桂林理工大学 Federal learning model training method with differential privacy protection

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114357526A (en) * 2022-03-15 2022-04-15 中电云数智科技有限公司 Differential privacy joint training method for medical diagnosis model for resisting inference attack
CN114969503A (en) * 2022-03-30 2022-08-30 贵州大学 Multi-data user portrait implementation method based on federal learning
CN114969503B (en) * 2022-03-30 2024-04-02 贵州大学 Multi-data user portrait realizing method based on federal learning
CN114707606A (en) * 2022-04-11 2022-07-05 中国电信股份有限公司 Data processing method and device based on federal learning, equipment and storage medium
CN114707606B (en) * 2022-04-11 2023-12-22 中国电信股份有限公司 Data processing method and device based on federal learning, equipment and storage medium
WO2023216899A1 (en) * 2022-05-13 2023-11-16 北京字节跳动网络技术有限公司 Model performance evaluation method and apparatus, device and medium
CN114997420A (en) * 2022-08-03 2022-09-02 广州中平智能科技有限公司 Federal learning system and method based on segmentation learning and differential privacy fusion
CN114997420B (en) * 2022-08-03 2022-12-16 广州中平智能科技有限公司 Federal learning system and method based on segmentation learning and differential privacy fusion
CN115270001A (en) * 2022-09-23 2022-11-01 宁波大学 Privacy protection recommendation method and system based on cloud collaborative learning
CN115270001B (en) * 2022-09-23 2022-12-23 宁波大学 Privacy protection recommendation method and system based on cloud collaborative learning
CN115329388A (en) * 2022-10-17 2022-11-11 南京信息工程大学 Privacy enhancement method for federally generated countermeasure network
CN115840965A (en) * 2022-12-27 2023-03-24 光谷技术有限公司 Information security guarantee model training method and system
CN115840965B (en) * 2022-12-27 2023-08-08 光谷技术有限公司 Information security guarantee model training method and system
CN116148193B (en) * 2023-04-18 2023-07-18 天津中科谱光信息技术有限公司 Water quality monitoring method, device, equipment and storage medium
CN116148193A (en) * 2023-04-18 2023-05-23 天津中科谱光信息技术有限公司 Water quality monitoring method, device, equipment and storage medium
CN116611115A (en) * 2023-07-20 2023-08-18 数据空间研究院 Medical data diagnosis model, method, system and memory based on federal learning
CN116611118A (en) * 2023-07-21 2023-08-18 北京智芯微电子科技有限公司 Construction method and device of data privacy protection model based on improved differential privacy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination