CN116226897A - Improved Prim block chain network transmission optimization method combining training loss and privacy loss - Google Patents

Improved Prim block chain network transmission optimization method combining training loss and privacy loss

Info

Publication number
CN116226897A
CN116226897A (application CN202211008333.XA)
Authority
CN
China
Prior art keywords
loss
privacy
training
node
prim
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211008333.XA
Other languages
Chinese (zh)
Inventor
柏粉花
沈韬
刘英莉
张弛
杨俊�
曾凯
王青旺
宋健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunming University of Science and Technology
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology filed Critical Kunming University of Science and Technology
Priority to CN202211008333.XA priority Critical patent/CN116226897A/en
Publication of CN116226897A publication Critical patent/CN116226897A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6236 Protecting access to data via a platform, between heterogeneous systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/04 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0407 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the identity of one or more communicating identities is hidden

Abstract

The invention relates to an improved Prim blockchain network transmission optimization method combining training loss and privacy loss, and belongs to the technical field of blockchain and privacy computation. Firstly, Laplacian noise is added to the data set to be trained, local training is performed, and the training loss L_f is obtained from the training results; the privacy loss L_p is calculated from the noise mechanism added to the data set. Next, the comprehensive loss value is calculated according to L_i = λL_f + ηL_p. Finally, a minimum spanning tree is constructed with the Prim algorithm according to the comprehensive loss evaluation values of the nodes, and the nodes on the branch with the minimum loss are selected as consensus nodes. The invention can optimize communication among nodes in the blockchain network, improve the transmission scalability of the blockchain network, and balance the contradiction between data training loss and privacy loss.

Description

Improved Prim block chain network transmission optimization method combining training loss and privacy loss
Technical Field
The invention relates to an improved Prim block chain network transmission optimization method combining training loss and privacy loss, and belongs to the technical field of block chain and privacy calculation.
Background
Privacy-preserving computation can break down data silos while ensuring privacy security, and is currently applied at scale in government affairs, finance, and medical treatment. However, in the industrial field, privacy computation cannot meet the requirements of securely processing massive industrial data. On the one hand, privacy computation relies on complex algorithms such as machine learning and cryptography and lacks dedicated acceleration chips, so its operation efficiency and communication speed are low when facing massive cross-industry, cross-domain industrial data. For example, a single secure multiparty computation operation currently takes on the order of milliseconds, far from sufficient to support massive data processing. On the other hand, encrypted data is essentially a mapping of the original data under certain rules, and an attacker can invert part or all of the original data, so a great potential security hazard exists. For example, federated learning faces multiple types of attacks, such as reconstruction attacks, inference attacks, and theft attacks.
Disclosure of Invention
The invention aims to solve the technical problem of providing an improved Prim blockchain network transmission optimization method combining training loss and privacy loss, so as to optimize communication among nodes in the blockchain network, improve the scalability of blockchain network transmission, and balance the contradiction between data training loss and privacy loss. In addition, to ensure the quality of the training model, a loss threshold is set and unreliable models are filtered out.
The technical scheme of the invention is as follows: an improved Prim blockchain network transmission optimization method combining training loss and privacy loss first adds Laplacian noise to the data set to be trained, then performs local training, obtains the training loss L_f and the privacy loss L_p from the training results, then calculates the comprehensive loss value, and finally constructs a minimum spanning tree with the Prim algorithm according to the comprehensive loss evaluation values of the nodes, selecting the nodes on the branch with the minimum loss as consensus nodes.
The method comprises the following specific steps:
step1: at the local data set D to be trained i ={(x 1 ,y 1 ),...,(x i ,y i ),...,(x n ,y n ) Adding Laplacian noise Lap (b) into the model, and training the data set after adding the noise to obtain model parameters W i According to training results, obtaining training loss L f . Calculating privacy loss L from noise mechanism added to data set p
Step2: calculating a comprehensive loss evaluation value according to formula (1):
L_i = λL_f + ηL_p    (1)
wherein L_f represents the local training loss of the i-th FL mobile edge device, and λ and η represent constants.
The constants λ and η are adjustable to meet different requirements for accuracy and privacy loss. By setting a loss threshold l and discarding models whose loss exceeds the threshold, the quality of the model is ensured and the requirement of privacy protection is met.
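The comprehensive loss of formula (1) and the threshold filtering can be sketched as follows; the (L_f, L_p) pairs and the threshold value are hypothetical:

```python
def composite_loss(l_f, l_p, lam=0.9, eta=0.1):
    """Comprehensive loss evaluation value of formula (1): L_i = lambda*L_f + eta*L_p."""
    return lam * l_f + eta * l_p

def filter_reliable(losses, threshold):
    """Keep the indices of models whose comprehensive loss stays within the threshold l."""
    return [i for i, loss in enumerate(losses) if loss <= threshold]

# Hypothetical (training loss, privacy loss) pairs for three edge devices.
pairs = [(0.20, 0.50), (0.80, 0.30), (0.35, 0.40)]
losses = [composite_loss(lf, lp) for lf, lp in pairs]
reliable = filter_reliable(losses, threshold=0.40)   # device 1 is filtered out
```

Raising η penalizes privacy leakage more heavily, so the same threshold then admits fewer models.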
Step3: and constructing a minimum spanning tree by utilizing a Prim algorithm according to the comprehensive loss evaluation value of the node, and selecting the node on the branch with the minimum loss as a consensus node. The efficiency of block verification is improved.
Step4: and verifying and storing the training result through the consensus node.
After the Laplace noise is added in Step 1, the model vector of the i-th trainer is updated in the k-th round as:

W_i^(k+1) = W_i^k − α(∇L_f(W_i^k) + Lap(b))    (2)

where α represents the learning rate and ∇L_f(·) represents the gradient of the training loss function.
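The noisy update rule above can be sketched on a toy objective L_f(w) = 0.5·||w||², whose gradient is w itself; the learning rate, noise scale, and iteration count are hypothetical:

```python
import math
import random

def sample_laplace(b, rng):
    """Draw one sample from Lap(b) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_step(w, grad, alpha, b, rng):
    """One round of the update: w <- w - alpha * (grad(w) + Lap(b)), per coordinate."""
    return [wi - alpha * (gi + sample_laplace(b, rng))
            for wi, gi in zip(w, grad(w))]

# Toy quadratic objective: gradient of 0.5*||w||^2 is w.
rng = random.Random(1)
w = [1.0, -2.0]
for _ in range(100):
    w = noisy_step(w, lambda v: v, alpha=0.1, b=0.01, rng=rng)
```

With a small noise scale b the iterates still contract toward the minimizer; the residual fluctuation around it is the price paid for differential privacy.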
The privacy loss L_p in Step 1 is calculated as follows:
Select a random algorithm R: χ → Y and ε, δ > 0.
The algorithm R satisfies (ε, δ)-local differential privacy if and only if, for all x, x' ∈ χ and all outputs y ∈ Y, the following inequality is satisfied:

Pr[R(x) = y] ≤ e^ε · Pr[R(x') = y] + δ    (3)

where ε represents the privacy protection budget and δ represents the failure probability.
The privacy loss can be expressed as:

L_p = ln( Pr[R(x) = y] / Pr[R(x') = y] )    (4)
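For the Laplace mechanism the privacy loss defined above can be evaluated in closed form, since the output densities are explicit; the inputs x, x' and the scale below are toy values, and the helper names are hypothetical:

```python
import math

def laplace_pdf(y, mu, b):
    """Density of Lap(b) centred at mu."""
    return math.exp(-abs(y - mu) / b) / (2.0 * b)

def privacy_loss(y, x, x_prime, b):
    """Log-ratio of output likelihoods, ln(Pr[R(x)=y] / Pr[R(x')=y])."""
    return math.log(laplace_pdf(y, x, b) / laplace_pdf(y, x_prime, b))

# With sensitivity |x - x'| = 1 and scale b = 1/epsilon, |L_p| is bounded by epsilon.
b = 2.0                                  # i.e. epsilon = 0.5
lp = privacy_loss(y=0.3, x=0.0, x_prime=1.0, b=b)
```

The bound |L_p| ≤ 1/b for unit sensitivity is exactly the pure-DP guarantee (δ = 0) of the Laplace mechanism.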
the Step3 specifically comprises the following steps:
step3.1: the nodes of the block chain are connected into an undirected weighted graph, and the weights represent the comprehensive evaluation value L of training loss and privacy loss i
Step3.2: the tree T is initialized to a null tree and then n-1 edges are added to the tree T until the minimum spanning tree generates n-1 edges.
Step3.3: after the minimum spanning tree is obtained, the node with the smallest weight and the smallest weight is selected from the initial node 0 to be used as the consensus node of the blockchain.
Aiming at the problem of privacy leakage during data transmission and sharing in industrial edge network scenarios, Laplacian noise is added to the gradients using a differential privacy protection method, ensuring the security of the transmission process. Secondly, based on model training loss and privacy loss, a new evaluation mechanism for filtering unreliable nodes and unqualified models is proposed, so as to balance the contradiction between federated learning model loss and data privacy protection and further improve blockchain consensus efficiency. Thirdly, according to the training loss evaluation and the degree of privacy disclosure of the nodes, blockchain consensus nodes are selected with the Prim minimum spanning tree algorithm to optimize the blockchain network, improving the transmission scalability of the blockchain network, completing consensus and on-chain storage of the learning parameters, and ensuring the security of the model parameters.
The beneficial effects of the invention are as follows: the invention can optimize communication among nodes in the blockchain network, improve the transmission scalability of the blockchain network, and balance the contradiction between data training loss and privacy loss. Furthermore, the proposed algorithm can guarantee the quality of the training model, since models whose loss exceeds the threshold are discarded.
Drawings
FIG. 1 is a flow chart of the steps of the present invention;
FIG. 2 is a block chain consensus node selection diagram based on Prim in accordance with the present invention;
fig. 3 is a diagram of the communication overhead of the present invention.
Detailed Description
In order to make the objectives, technical scheme, and advantages of the present invention clearer and more understandable, the present invention will be further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The illustrations provided in the following examples and the specific parameter values set in the models mainly serve to illustrate the basic idea of the invention and to perform simulation verification; in a specific application environment, they can be appropriately adjusted to the actual scene and requirements.
Example 1: as shown in fig. 1, an improved Prim blockchain network transmission optimization method combining training loss and privacy loss includes the following steps:
step1: at the local data set D to be trained i ={(x 1 ,y 1 ),...,(x i ,y i ),...,(x n ,y n ) Adding Laplacian noise Lap (b) into the model, and training the data set after adding the noise to obtain model parameters W i . According to training result, obtaining training loss L f . Calculating privacy loss L from noise mechanism added to data set p
Step2: calculating a comprehensive loss evaluation value according to formula (1):
L_i = λL_f + ηL_p    (1)
wherein L_f represents the local training loss of the i-th FL mobile edge device. The constants λ and η are adjustable to meet different requirements for accuracy and privacy loss. By setting a loss threshold l and discarding models whose loss exceeds the threshold, the quality of the model is ensured and the requirement of privacy protection is met.
Step3: and constructing a minimum spanning tree by utilizing a Prim algorithm according to the comprehensive loss evaluation value of the node, and selecting the node on the branch with the minimum loss as a consensus node, thereby improving the efficiency of block verification.
Step4: and the consensus node verifies and stores the training result.
After the Laplace noise is added in Step1, the model vector of the i-th trainer is updated in the k-th round as:

W_i^(k+1) = W_i^k − α(∇L_f(W_i^k) + Lap(b))    (2)

where α represents the learning rate and ∇L_f(·) represents the gradient of the training loss function.
The privacy loss in Step1 is calculated as follows:
Select a random algorithm R: χ → Y and ε, δ > 0.
The algorithm R satisfies (ε, δ)-local differential privacy if and only if, for all x, x' ∈ χ and all outputs y ∈ Y, the following inequality is satisfied:

Pr[R(x) = y] ≤ e^ε · Pr[R(x') = y] + δ    (3)

where ε represents the privacy protection budget and δ represents the failure probability.
The privacy loss can be expressed as:

L_p = ln( Pr[R(x) = y] / Pr[R(x') = y] )    (4)
as shown in fig. 2, the relationship between training loss and different privacy losses is shown and a trade-off is found between model convergence and privacy protection level. Since the loss assessment is related to training loss and privacy loss, by adjusting the parameters λ and η, different requirements of different users on the loss can be achieved. It can be seen from the figure that the greater the weight of the training loss (taking λ=0.9), the smaller the overall loss L after 30 iterations. In other words, the lower the degree of privacy protection (taking η=0.1, the more privacy information is compromised), the higher the accuracy of the FL model, and the smaller the overall loss L. As the degree of privacy protection η increases, the total loss L increases. From λ=0.9, η=0.1 to λ=0.4, η=0.6, different weight parameter settings will lead to a difference in losses of 18% and 23% respectively at the maximum loss of the initial round iteration and the minimum loss of the final round iteration. Different users can set different loss parameters according to own requirements.
The block chain network optimization process in Step3 is shown in fig. 3:
step3.1: the nodes of the block chain are connected into an undirected weighted graph, and the weights represent the comprehensive evaluation value L of training loss and privacy loss i
Step3.2: the tree T is initialized to a null tree and then n-1 edges are added to the tree T until the minimum spanning tree generates n-1 edges.
Step3.3: after the minimum spanning tree is obtained, the node with the smallest weight and the smallest weight is selected from the initial node 0 to be used as the consensus node of the blockchain.
As shown in fig. 4, the transmission overhead differs under the different processing schemes. As the privacy loss increases, the Prim optimization scheme incurs a much lower communication overhead than the non-optimized scheme (the overhead of the non-optimized scheme is almost 5 times that of the Prim scheme). When variable Laplace noise is added, the privacy loss L_p is calculated based on formula (4) above. If models exceeding the set threshold are pruned, the bandwidth overhead is reduced compared with the non-optimized case, but it is still higher than that of the proposed Prim optimization scheme.
While the present invention has been described in detail with reference to the drawings, the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention.

Claims (4)

1. An improved Prim blockchain network transmission optimization method combining training loss and privacy loss, characterized in that: firstly, Laplacian noise is added to the data set to be trained and local training is performed, and the training loss L_f and privacy loss L_p are obtained from the training results; then a comprehensive loss value is calculated; finally, a minimum spanning tree is constructed with the Prim algorithm according to the comprehensive loss evaluation values of the nodes, and the nodes on the branch with the minimum loss are selected as consensus nodes.
2. The improved Prim blockchain network transmission optimization method combining training loss and privacy loss according to claim 1, characterized by the specific steps of:
step1: at the local data set D to be trained i ={(x 1 ,y 1 ),...,(x i ,y i ),...,(x n ,y n ) Adding Laplacian noise Lap (b) to the mixture, and then addingTraining the noisy data set to obtain model parameters W i According to training results, obtaining training loss L f The method comprises the steps of carrying out a first treatment on the surface of the Calculating privacy loss L from noise mechanism added to data set p
Step2: calculating a comprehensive loss evaluation value according to formula (1):
L_i = λL_f + ηL_p    (1)
wherein L_f represents the local training loss of the i-th FL mobile edge device, and λ and η represent constants;
step3: constructing a minimum spanning tree by utilizing a Prim algorithm according to the comprehensive loss evaluation value of the node, and selecting the node on the branch with the minimum loss as a consensus node;
step4: and verifying and storing the training result through the consensus node.
3. The improved Prim blockchain network transmission optimization method combining training loss and privacy loss according to claim 2, wherein the privacy loss L_p is calculated in Step1 as follows:
select a random algorithm R: χ → Y and ε, δ > 0;
the algorithm R satisfies (ε, δ)-local differential privacy if and only if, for all x, x' ∈ χ and all outputs y ∈ Y, the following inequality is satisfied:

Pr[R(x) = y] ≤ e^ε · Pr[R(x') = y] + δ    (3)

wherein ε represents the privacy protection budget and δ represents the failure probability;
the privacy loss can be expressed as:

L_p = ln( Pr[R(x) = y] / Pr[R(x') = y] )    (4)
4. the improved Prim blockchain network transmission optimization method combining training loss and privacy loss according to claim 2, wherein Step3 is specifically:
step3.1: the nodes of the blockchain are connected into an undirected weighted graph, where the edge weights represent the comprehensive evaluation value L_i of training loss and privacy loss;
Step3.2: initializing a tree T as an empty tree, then repeatedly adding the minimum-weight edge connecting T to a node outside T, until the minimum spanning tree contains n-1 edges;
Step3.3: after the minimum spanning tree is obtained, starting from the initial node 0, the nodes on the branch with the smallest total weight are selected as the consensus nodes of the blockchain.
CN202211008333.XA 2022-08-22 2022-08-22 Improved Prim block chain network transmission optimization method combining training loss and privacy loss Pending CN116226897A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211008333.XA CN116226897A (en) 2022-08-22 2022-08-22 Improved Prim block chain network transmission optimization method combining training loss and privacy loss

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211008333.XA CN116226897A (en) 2022-08-22 2022-08-22 Improved Prim block chain network transmission optimization method combining training loss and privacy loss

Publications (1)

Publication Number Publication Date
CN116226897A true CN116226897A (en) 2023-06-06

Family

ID=86589784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211008333.XA Pending CN116226897A (en) 2022-08-22 2022-08-22 Improved Prim block chain network transmission optimization method combining training loss and privacy loss

Country Status (1)

Country Link
CN (1) CN116226897A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117034356A (en) * 2023-10-09 2023-11-10 成都乐超人科技有限公司 Privacy protection method and device for multi-operation flow based on hybrid chain
CN117034356B (en) * 2023-10-09 2024-01-05 成都乐超人科技有限公司 Privacy protection method and device for multi-operation flow based on hybrid chain

Similar Documents

Publication Publication Date Title
US7333923B1 (en) Degree of outlier calculation device, and probability density estimation device and forgetful histogram calculation device for use therein
CN111625816A (en) Intrusion detection method and device
Prakash et al. IoT device friendly and communication-efficient federated learning via joint model pruning and quantization
CN114510652B (en) Social collaborative filtering recommendation method based on federal learning
Rahnamayan et al. Center-based sampling for population-based algorithms
CN111340493B (en) Multi-dimensional distributed abnormal transaction behavior detection method
CN112884237A (en) Power distribution network prediction auxiliary state estimation method and system
CN115378813B (en) Distributed online optimization method based on differential privacy mechanism
CN116226897A (en) Improved Prim block chain network transmission optimization method combining training loss and privacy loss
CN113642715A (en) Differential privacy protection deep learning algorithm for self-adaptive distribution of dynamic privacy budget
CN115481441A (en) Difference privacy protection method and device for federal learning
CN109492816B (en) Coal and gas outburst dynamic prediction method based on hybrid intelligence
CN114708479A (en) Self-adaptive defense method based on graph structure and characteristics
Yang et al. Model optimization method based on vertical federated learning
CN117150566B (en) Robust training method and device for collaborative learning
Arapostathis et al. Convergence of the relative value iteration for the ergodic control problem of nondegenerate diffusions under near-monotone costs
KR20220027155A (en) Devices and methods for enumeration of grid points
CN115935436A (en) Deep learning model privacy protection method based on differential privacy
CN116996272A (en) Network security situation prediction method based on improved sparrow search algorithm
Gao et al. Zero‐sum game‐based security control of unknown nonlinear Markov jump systems under false data injection attacks
CN115994481A (en) Ship motion attitude prediction method based on multiple combinations
Sun et al. Neural‐network‐based event‐triggered adaptive security path following control of autonomous ground vehicles subject to abnormal actuator signal
Münker et al. Hierarchical model predictive control for local model networks
Zhang et al. Neural cryptography based on quaternion-valued neural network
Shioda et al. Adaptive weighted aggregation with step size control weight adaptation for multiobjective continuous function optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination