CN116451805A - Privacy-preserving federated learning method based on blockchain for resisting poisoning attacks - Google Patents

Privacy-preserving federated learning method based on blockchain for resisting poisoning attacks

Info

Publication number
CN116451805A (application CN202310354892.4A)
Authority
CN
China
Prior art keywords
ciphertext, gradient, transaction, model, blockchain
Legal status: Pending
Application number
CN202310354892.4A
Other languages
Chinese (zh)
Inventor
马海英
杨天玲
黄双龙
杨及坤
Current Assignee
Nantong University
Original Assignee
Nantong University
Priority date
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN202310354892.4A
Publication of CN116451805A

Classifications

    • G — PHYSICS; G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N 20/00 — Machine learning
    • G06F 16/27 — Information retrieval: replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
    • G06F 18/22 — Pattern recognition: matching criteria, e.g. proximity measures
    • G06F 21/602 — Security arrangements: providing cryptographic facilities or services
    • G06F 21/6245 — Security arrangements: protecting personal data, e.g. for financial or medical purposes
    • G06F 21/64 — Security arrangements: protecting data integrity, e.g. using checksums, certificates or signatures
    • G06Q 40/04 — Finance: trading; exchange, e.g. stocks, commodities, derivatives or currency exchange
    • Y02D 30/50 — Climate change mitigation in ICT: reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate

Abstract

The invention provides a privacy-preserving federated learning method resistant to poisoning attacks based on blockchain, belonging to the technical fields of federated learning, privacy protection, and blockchain. It solves the technical problems of poisoning attacks and malicious aggregation in privacy-preserving federated learning. The technical scheme comprises the following steps: S10, global system initialization; S20, local model training; S30, malicious participant identification, in which the MO and SM cooperate to determine whether each participant performed the normalization operation; S40, gradient ciphertext aggregation; S50, model updating by the participants and the MO. The beneficial effects of the invention are as follows: the invention protects the privacy of participants' model gradients with the fully homomorphic encryption algorithm CKKS, identifies malicious gradients within the gradient ciphertexts using a cosine similarity method, and computes and verifies the aggregation result using a blockchain consensus mechanism.

Description

Privacy-preserving federated learning method based on blockchain for resisting poisoning attacks
Technical Field
The invention relates to the technical fields of federated learning, privacy protection, and blockchain, and in particular to a privacy-preserving federated learning method resistant to poisoning attacks based on blockchain.
Background
In the Industrial Internet of Things (IIoT), the number of connected devices keeps growing and the amount of sensed and collected data gradually increases. This growth of data can improve the service quality of emerging applications, but traditional machine learning requires uploading the original data to a central server that performs model training on it. Privacy leakage during data sharing can easily cause great loss to users, so the security and privacy protection of data providers must be taken seriously.
Federated Learning (FL) lets multiple clients cooperate in model training without sharing local data. Participants hold much data collected from the IIoT; they keep the data in their own hands for local training and submit only local models, so user privacy and data security can be guaranteed. Each participant trains a local model on local data and then uploads the local gradient and model parameters to an aggregation server, which aggregates the local gradients of all participants and sends model updates back to them. Studies have shown that a client's original data can be derived from a gradient in the plaintext state, so privacy protection techniques are combined with federated learning to keep user privacy from leaking. In Privacy-Preserving Federated Learning (PPFL), the privacy protection techniques used include differential privacy, secure multi-party computation, homomorphic encryption, and so on; cryptography or noise hides the true value of the local gradient, model updates can still be shared between the parties, and the aggregation server performs the aggregation without access to the true values of the local gradients, thereby improving the accuracy and performance of the model.
Homomorphic encryption allows data to be computed on in ciphertext without decryption, but the volume of homomorphically encrypted data grows, causing huge computation cost and data transmission. Traditional privacy-preserving federated learning usually uses the homomorphic encryption algorithm Paillier, but Paillier can only encrypt integers while gradients are floating-point numbers, so the gradients must first be quantized and then encrypted one by one, increasing computation and communication overhead.
The fully homomorphic encryption algorithm CKKS realizes approximate computation on floating-point plaintexts. Homomorphic multiplication enlarges the scale of ciphertext data, and CKKS uses a homomorphic multiplication key to rescale multiplication ciphertexts so as to keep the data scale within a small range. CKKS can encrypt floating-point numbers and encrypt vectors directly, thereby reducing computation cost.
The main algorithms of CKKS are introduced below (a short library sketch follows the list):

(1) $\mathrm{KeyGen}(1^\lambda) \to (pk, sk, evk, rk)$: the key generation algorithm takes a security parameter $\lambda$, chooses integers $p$ and $q_0$, and sets $q_l = p^l \cdot q_0$ for $0 < l \le L$. It selects a power of two $M = M(\lambda, q_L)$, an integer $h = h(\lambda, q_L)$, an integer $P = P(\lambda, q_L)$, and a real value $\sigma = \sigma(\lambda, q_L)$. It samples $s \leftarrow \mathrm{HWT}(h)$, $a$ uniformly from $R_{q_L}$, and $e \leftarrow \mathrm{DG}(\sigma^2)$, then sets the private key $sk = (1, s)$ and the public key $pk = (b, a) \in R_{q_L}^2$, where $b = -as + e \pmod{q_L}$. It further samples $a' \leftarrow R_{P \cdot q_L}$ and $e' \leftarrow \mathrm{DG}(\sigma^2)$ and computes the auxiliary computation key $evk = (b', a')$, where $b' = -a's + e' + P s^2 \pmod{P \cdot q_L}$. For the Galois automorphisms $\kappa_k$ of $\Phi_M$ with $k$ coprime to $M$, $\kappa_k(c)$ denotes applying $\kappa_k$ to each component of a ciphertext; using key-switching techniques, $\kappa_k(c)$ can be decrypted with the original private key $sk$ to obtain $\kappa_k(m)$. Thus for $k = 5^i \bmod 2N$, a rotation key $rk = (-as + e + P \cdot s(X^k), a) \pmod{P \cdot q_L}$ is generated that rotates the plaintext slots $i$ positions to the left. Finally a key set $(pk, sk, evk, rk)$ is output.

(2) $\mathrm{Ecd}(z, \Delta) \to m$: the encoding algorithm takes an $N/2$-dimensional complex vector $z$ and a scaling factor $\Delta$, and outputs a plaintext polynomial $m(X)$.

(3) $\mathrm{Dcd}(m, \Delta) \to z$: the decoding algorithm takes a plaintext polynomial $m(X)$ and a scaling factor $\Delta$, and computes and outputs the vector $z$, i.e. the inverse of the encoding.

(4) $\mathrm{Enc}(pk, m) \to c$: the encryption algorithm takes a plaintext polynomial $m(X)$ and a public key $pk$, samples $v \leftarrow \mathrm{ZO}(0.5)$ and $e_0, e_1 \leftarrow \mathrm{DG}(\sigma^2)$, and outputs the ciphertext $c = v \cdot pk + (m + e_0, e_1) \pmod{q_L}$.

(5) $\mathrm{Dec}(sk, c) \to m$: the decryption algorithm takes a ciphertext $c = (b, a)$ and a private key $sk$, and outputs the plaintext polynomial $m = b + a \cdot s \pmod{q_l}$.

(6) $\mathrm{Add}(c_1, c_2) \to c_{add}$: the ciphertext addition algorithm takes two ciphertexts $c_1, c_2$ and outputs $c_{add} = c_1 + c_2 \pmod{q_l}$.

(7) $\mathrm{Mult}(c_1, c_2, evk) \to c_{mult}$: the ciphertext multiplication algorithm takes the auxiliary computation key $evk$ and two ciphertexts $c_1 = (b_1, a_1)$, $c_2 = (b_2, a_2)$, computes $(d_0, d_1, d_2) = (b_1 b_2, a_1 b_2 + a_2 b_1, a_1 a_2) \pmod{q_l}$, and finally outputs the ciphertext $c_{mult} = (d_0, d_1) + \lfloor P^{-1} \cdot d_2 \cdot evk \rceil \pmod{q_l}$.

(8) $\mathrm{RS}(c) \to c'$: the rescaling algorithm takes a ciphertext $c$ at level $l$ and a lower level $l' < l$, and outputs the ciphertext $c' = \lfloor (q_{l'}/q_l) \cdot c \rceil \pmod{q_{l'}}$. This operation addresses the scale growth and error problems of multiplication homomorphism in the CKKS scheme.

(9) $\mathrm{Rot}(c, rk, i) \to c'$: the rotation algorithm takes a ciphertext $c$ and a rotation key $rk$, and outputs a ciphertext $c'$ whose plaintext vector is that of $c$ rotated (i.e. shifted left) by $i$ positions.
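For a concrete feel of this workflow, the following is a minimal sketch using the open-source TenSEAL library (the patent names no implementation, so the library choice and all parameter values are assumptions):

```python
# Minimal CKKS sketch with TenSEAL (assumed library; parameters illustrative).
import tenseal as ts

# KeyGen: the context holds pk/sk, the auxiliary (relinearization) keys,
# and, after generate_galois_keys(), the rotation keys rk.
ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40          # the scaling factor Delta
ctx.generate_galois_keys()

# Ecd + Enc: encode a float vector and encrypt it in one call.
x = ts.ckks_vector(ctx, [0.5, -1.2, 3.0])
y = ts.ckks_vector(ctx, [2.0, 0.1, -0.7])

c_add = x + y        # Add: slot-wise ciphertext addition
c_mul = x * y        # Mult, with internal relinearization and rescaling
c_dot = x.dot(y)     # inner product via multiply-rotate-add

print(c_dot.decrypt())   # Dec + Dcd: approximately [0.5*2.0 - 1.2*0.1 - 3.0*0.7]
```

The context object bundles pk, sk, evk, and rk; in the patent's setting the SM would keep sk_s and distribute only the public material.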
Since CKKS supports addition, multiplication, and rotation on ciphertexts, the inner product of two $n$-dimensional vectors $x = (x_1, \dots, x_n)$ and $y = (y_1, \dots, y_n)$, encrypted as $c_1$ and $c_2$, can be computed under ciphertext as follows:

1) Use the $\mathrm{Mult}(c_1, c_2, evk)$ function to multiply the two $n$-dimensional vector ciphertexts slot-wise, obtaining a ciphertext of $(x_1 y_1, x_2 y_2, \dots, x_n y_n)$, and rescale the result using the $\mathrm{RS}(c)$ function;

2) Rotate the product ciphertext by one position using the $\mathrm{Rot}(c, rk, 1)$ function;

3) Use the $\mathrm{Add}(c_1, c_2)$ function to add the rotated ciphertext to the accumulated one;

4) Repeat the rotation and addition of steps 2) and 3) $n-1$ times in total, obtaining a ciphertext whose first slot holds $z_1 = x_1 y_1 + x_2 y_2 + \dots + x_n y_n$ (for example, when $y = x$, $z_1 = x_1^2 + x_2^2 + \dots + x_n^2$);

5) Use the $\mathrm{Mult}(c_1, c_2, evk)$ function to multiply this ciphertext by the ciphertext of the unit vector $(1, 0, \dots, 0)$, leaving only the inner product result $z_1 = x \odot y$ in the first slot, where $\odot$ denotes the inner product, and rescale the result using $\mathrm{RS}(c)$.

Accordingly, an inner product algorithm under ciphertext is defined here as $\mathrm{Dot}(c_1, c_2, evk, rk)$: it takes the auxiliary computation key $evk$, two ciphertexts $c_1, c_2$, and the rotation key $rk$, and outputs the result ciphertext $c'$ of their inner product.
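The rotate-and-sum procedure can be checked in plaintext; the sketch below models ciphertext slots as a NumPy array and the Rot operation as a cyclic left shift (a plaintext simulation under those assumptions, not an encrypted computation):

```python
# Plaintext simulation of the Dot(c1, c2, evk, rk) rotate-and-sum procedure.
import numpy as np

def dot_rotate_and_sum(x: np.ndarray, y: np.ndarray) -> float:
    p = x * y                        # step 1: slot-wise Mult (then RS in the real scheme)
    acc, rot = p.copy(), p.copy()
    for _ in range(len(x) - 1):      # steps 2-4: n-1 rotations and additions
        rot = np.roll(rot, -1)       # np.roll(.., -1) plays the role of Rot(c, rk, 1)
        acc = acc + rot              # Add
    mask = np.zeros_like(p)
    mask[0] = 1.0                    # unit vector (1, 0, ..., 0)
    return float((acc * mask)[0])    # step 5: keep only slot 1, which holds the sum

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
assert np.isclose(dot_rotate_and_sum(x, y), x @ y)   # 32.0
```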
Algorand consensus mechanism: Algorand is a consensus mechanism based on Proof of Stake (PoS) and BFT; it generates blocks with a committee-based PoS and then reaches consensus on the blocks through BFT. Algorand achieves consensus through the following steps (a toy sketch follows the list):

(1) Nodes in the blockchain voluntarily apply to participate in consensus and become consensus nodes once approved;

(2) An aggregator and committee members are selected at random by combining a Verifiable Random Function (VRF) with token holdings; the number of tokens held by a consensus node determines its probability of being selected as the aggregator, and the selected aggregator and committee members form the consensus committee;

(3) The aggregator packages all transactions within a unit of time into a new block;

(4) Committee members vote on the block produced by the aggregator; the block is accepted onto the main chain when more than 2/3 of the members approve it. Otherwise, the next committee member in the block-proposal sequence becomes the aggregator, and the block generation and voting process is repeated;

(5) After the committee reaches consensus, all members execute a broadcast protocol to propagate the new block to neighboring nodes, thereby achieving blockchain consensus.
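A toy simulation of this selection-and-voting loop is sketched below; it is illustrative only — real Algorand derives randomness from a VRF, which is replaced here by seeded pseudo-randomness:

```python
# Toy stand-in for Algorand committee selection and 2/3 voting (illustrative).
import random

nodes = {"n1": 50, "n2": 30, "n3": 15, "n4": 5}   # consensus node -> token stake

def select_committee(stakes, size, seed):
    rng = random.Random(seed)                     # VRF replaced by seeded randomness
    members = rng.choices(list(stakes), weights=list(stakes.values()), k=size)
    return list(dict.fromkeys(members))           # dedupe, keep selection order

def run_round(stakes, votes_for_block):
    committee = select_committee(stakes, size=3, seed=42)
    aggregator = committee[0]                     # first selected member proposes
    approvals = sum(votes_for_block(m) for m in committee)
    accepted = approvals > (2 / 3) * len(committee)
    return aggregator, accepted

agg, ok = run_round(nodes, votes_for_block=lambda m: True)
print(f"aggregator={agg}, block accepted={ok}")
```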
Compared with PoW, Algorand saves computing and power resources and improves consensus efficiency; compared with DPoS, Algorand improves the degree of decentralization and the security of the blockchain system; compared with PBFT, Algorand improves consensus efficiency and decentralization. Algorand is therefore chosen here as the consensus mechanism of the blockchain.
In the federated learning process, malicious participants often upload malicious gradient vectors of large magnitude, amplifying the influence of malicious gradients on the global model. Normalizing the local gradients sets the magnitude of every local gradient vector to 1, reducing the influence of malicious gradients. The local gradient is normalized as follows:

$$\hat{g} = \frac{g}{\lVert g \rVert}$$

where $\lVert \cdot \rVert$ is the modulus of the vector and $\hat{g}$ is the normalized gradient vector, i.e. the unit gradient vector.
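In code the normalization step is a one-liner; a NumPy sketch (function name assumed):

```python
# Normalize a local gradient to a unit vector (NumPy sketch).
import numpy as np

def normalize_gradient(g: np.ndarray) -> np.ndarray:
    norm = np.linalg.norm(g)          # ||g||, the modulus of the vector
    return g / norm                   # unit gradient vector, magnitude 1

g = np.array([3.0, 4.0])
print(normalize_gradient(g))          # [0.6 0.8], norm 1
```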
Cosine similarity measures the similarity between vectors by computing the cosine of the angle between two vectors: it is 1 when the two vectors point in the same direction, -1 when they point in opposite directions, and the larger the value, the closer the two vector directions. The general formula is:

$$\cos(A, B) = \frac{A \cdot B}{\lVert A \rVert \, \lVert B \rVert}$$

where $A, B$ are vectors and $\cdot$ denotes the inner product. In federated learning model training, some entity provides a root dataset and trains on it to obtain a baseline gradient, which determines the update direction of the global model; the closer the direction of a participant's uploaded local gradient is to that of the baseline gradient, the higher the weight that local gradient receives during aggregation. A malicious gradient uploaded by a malicious client points opposite to the baseline gradient, so its cosine similarity with the baseline gradient is below 0; the cosine similarity method is therefore used to measure the similarity of two gradient vectors and identify malicious participants.
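A plaintext sketch of this identification rule (the patent performs it under encryption; vectors and names here are illustrative):

```python
# Cosine-similarity check of local gradients against the baseline gradient.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

baseline = np.array([1.0, 0.5, -0.2])     # from training on the root dataset
honest   = np.array([0.9, 0.6, -0.1])
poisoned = -baseline                       # opposite direction

for g in (honest, poisoned):
    cos = cosine_similarity(baseline, g)
    verdict = "malicious" if cos <= 0 else f"weight {cos:.2f}"
    print(f"cos={cos:+.2f} -> {verdict}")
```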
The cosine similarity is computed in the ciphertext state, so its value cannot be compared with 0 directly. Cheon et al. proposed a CKKS homomorphic ciphertext comparison function Max(a, b, d), where d = 2α - 3 and α is the number of binary bits of the ciphertext; Max(a, b, d) takes two CKKS ciphertexts a and b whose plaintexts lie in [0, 1] and returns, by an approximate iterative method, the ciphertext with the larger plaintext value.
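The iteration behind such a comparison can be sketched in plaintext. The version below follows the square-root-based Max construction from Cheon et al.'s work on homomorphic comparison, as we understand it (an assumption — the patent does not reproduce the algorithm); it uses only additions and multiplications, which is what lets it run under CKKS:

```python
# Plaintext sketch of an approximate Max(a, b) built from +, -, * only,
# after Cheon et al.'s Sqrt iteration (assumed form; inputs in [0, 1]).
def approx_sqrt(x: float, d: int) -> float:
    a, b = x, x - 1.0
    for _ in range(d):                 # converges to sqrt(x) for x in [0, 1]
        a = a * (1.0 - b / 2.0)
        b = (b * b) * ((b - 3.0) / 4.0)
    return a

def approx_max(a: float, b: float, d: int = 10) -> float:
    # max(a, b) = (a + b)/2 + |a - b|/2, with |t| computed as sqrt(t^2)
    return (a + b) / 2.0 + approx_sqrt((a - b) ** 2, d) / 2.0

print(approx_max(0.3, 0.8))            # ~0.8
print(approx_max(0.55, 0.5))           # ~0.55
```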
During the first iteration of federated learning, each participant obtains the initial model and trains it on the local dataset. The model here is a neural network denoted f(x, W), where x is the input and W is the model parameter; the model is used for a classification task, the error between the predicted class and the true class is measured with a loss function, and the model parameter W is then updated by back-propagating the error until the model accuracy exceeds a predetermined value, finally yielding a suitable model parameter W.
In federated learning, the aggregation server may behave maliciously. FedAvg, the federated averaging algorithm commonly adopted in federated learning, computes a weighted average by the number of training samples, so the aggregation server has no capability to resist malicious attacks. To construct a trusted aggregation node in an untrusted distributed IIoT environment, some research applies blockchain to FL to verify the aggregation results. Blockchain integrates distributed storage, peer-to-peer networking, cryptographic algorithms, consensus mechanisms, and other technologies; time-ordered transaction data is stored in a distributed database in the form of a chain of blocks, and blockchain's decentralization, openness and transparency, tamper resistance, and traceability can establish trust between nodes in an untrusted environment, which is why blockchain technology is combined with federated learning.
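For contrast, the sample-count weighting of FedAvg mentioned above amounts to the following sketch (names assumed); nothing in it inspects the update itself, which is why a single poisoned update can steer the average:

```python
# FedAvg: aggregate local updates weighted by local sample counts (sketch).
import numpy as np

def fedavg(updates: list[np.ndarray], sample_counts: list[int]) -> np.ndarray:
    weights = np.array(sample_counts, dtype=float)
    weights /= weights.sum()                      # normalize by total samples
    return sum(w * u for w, u in zip(weights, updates))

updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(fedavg(updates, sample_counts=[300, 100]))  # [0.75 0.25]
```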
Since PPFL uses cryptography to protect user privacy, all participants upload local gradient ciphertexts, making malicious gradients difficult to detect. Malicious participants launch poisoning attacks that bypass existing defense mechanisms: they modify the labels of the dataset during model training and upload the resulting malicious gradients, or upload carefully crafted malicious gradients directly. The poisoned local models submitted by malicious participants make the prediction results deviate from the correct trend and reduce the accuracy of federated learning, finally yielding a global model of extremely low accuracy, so that the model training of honest participants is wasted and federated learning performance degrades.

How to solve the above technical problems is the subject of the present invention.
Disclosure of Invention
The invention aims to provide a privacy-preserving federated learning method resistant to poisoning attacks based on blockchain, which uses the fully homomorphic encryption algorithm CKKS to protect the privacy of participants' model gradients, identifies malicious gradients within the gradient ciphertexts using a cosine similarity method, removes the malicious gradients using a homomorphic ciphertext comparison algorithm, records the federated learning process on the blockchain's distributed ledger, and computes and verifies the aggregation result using the consensus mechanism.
The invention is characterized in that: first, the system administrator SM builds the blockchain system, and the model owner MO and the participants register on the blockchain; the MO publishes a federated learning task, which participants join voluntarily; the SM uses the key generation algorithm of the fully homomorphic encryption scheme CKKS to generate a public/private key set for the task, the MO likewise generates a key set, and the security parameters are sent to all participants over a secret channel; the participants train the model to obtain local gradients, normalize and encrypt them, and upload the model gradient ciphertexts to the blockchain; the MO and SM cooperatively remove malicious gradients, identifying them with cosine similarity and eliminating them with the homomorphic ciphertext comparison method; the consensus mechanism selects a blockchain node as the aggregator, which aggregates the gradient ciphertexts of the honest participants received in each round by weight; the SM computes the aggregated gradient and re-encrypts it; the participants and the MO download the aggregated gradient from the blockchain and update the model; the blockchain records the federated learning process, and the DFS stores the related data files.
The specific contents are as follows: the system administrator SM builds a blockchain; the participants and the model owner register on the blockchain and, after registration, obtain an account, a public/private key pair of their own, a wallet address, an identity number, and a deposit; the wallet address is used to generate transactions, and the deposit is locked on the blockchain as security. The model owner MO publishes a federated learning task by issuing an asset declaration transaction, and participants join the task by issuing asset declaration transactions; according to the participants' transactions, the SM and the MO each generate a key set for the task using the key generation algorithm of the fully homomorphic encryption scheme CKKS, and the MO sends the security parameters to all participants over a secret channel. Each participant trains on its local dataset to obtain a local gradient, normalizes and encrypts it, uploads the normalized local gradient ciphertext to the DFS to generate a hash address, and packages the hash address into a transaction uploaded to the blockchain. The MO and SM cooperate to remove malicious participants: the MO computes the cosine similarity between the baseline gradient ciphertext and each honestly normalized local gradient ciphertext, a negative cosine similarity marks a malicious gradient uploaded by a malicious participant, and the SM eliminates the malicious gradients using the homomorphic ciphertext comparison algorithm. The blockchain uses the Algorand consensus mechanism to select a subset of nodes to form a committee and selects one committee member as the aggregator; the aggregator aggregates the gradient ciphertexts of all honestly normalized participants by weight, generates a transaction containing the hash address of the aggregate ciphertext, and packages all transactions of the unit time into a new block, whose correctness the committee members verify. When more than 2/3 of the members approve the block, the block is accepted on-chain, the aggregator is rewarded, and all committee members broadcast the block to the blockchain; otherwise the aggregator's deposit is confiscated and rewarded to the committee members, and a new aggregator is selected. The SM queries the aggregate ciphertext from the blockchain, decrypts it using the CKKS private key sk_s to obtain the plaintext aggregation result and weight sum, computes the aggregated gradient, encrypts it using the CKKS public key pk_x, and uploads it to the blockchain. The MO and the participants download the aggregated gradient ciphertext from the blockchain, decrypt it using the CKKS private key sk_x to obtain the aggregated gradient, and update the local model. After updating the model, the MO tests the prediction accuracy; if the accuracy reaches the expected value the task ends, otherwise the next round of model training continues.
In order to achieve the aim of the invention, the invention adopts the following technical scheme: a privacy-preserving federated learning method resistant to poisoning attacks based on blockchain, mainly comprising five entities — a system administrator SM, a model owner MO, a blockchain, participants, and a distributed file system DFS — and comprising the following steps:
s10, global initialization of the system is carried out, a system administrator SM is responsible for constructing a blockchain system, participants and a model owner MO register in the blockchain system to obtain an account number, and a pair of public and private keys of the participants and the model owner MO are generated. MO issues a federal learning task that participants voluntarily join, and SM uses the key generation algorithm of the homomorphic encryption algorithm CKS to generate a set of public and private key sets (pk) for this federal learning task s ,sk s ,evk s ,rk s ) Will (pk) through the secret channel s ,evk s ,rk s ) And sent to the MO. The MO uses the key generation algorithm of CKKS to generate a set of public and private key sets (pk) for this federal learning task x ,sk x ,evk x ,rk x ) Secure parameter sp= { (pk) over secret channel x ,sk x ,evk x ,rk x ),(pk s ,evk s ,rk s ) Delta is sent to all participants, where delta is the scaling factor in CKKS encoding;
s20, training a local model, wherein a participant obtains a security parameter SP through a secret channel with an MO, the training model obtains a local gradient, the local gradient is normalized and encrypted, a local gradient ciphertext is stored in a distributed file system DFS, and finally, the address of the ciphertext in the DFS is packed into a transaction and uploaded to a blockchain;
s30, malicious participant identification, MO and SM cooperate to determine whether the participants execute normalization operation, and a correct normalized participant list L is generated. MO firstly trains a global model to obtain a baseline gradient, normalizes and encrypts the baseline gradient to obtain ciphertext, then calculates cosine similarity ciphertext between the baseline gradient ciphertext and local gradient ciphertext of each participant in L, and uses homomorphic ciphertext comparison algorithm; when the cosine similarity value is less than or equal to 0, the weight of the participant gradient is set to 0, otherwise, the weight of the participant gradient is set to the cosine similarity value, and the process is operated in a ciphertext state. Finally, storing the gradient weight ciphertext in the DFS to generate a transaction and uploading the transaction to the blockchain;
S40, gradient ciphertext aggregation: the blockchain invokes the Algorand consensus mechanism to select an aggregator, which computes the round's aggregate ciphertext using the participants' gradient weights, stores the aggregate ciphertext in the distributed file system DFS, generates a transaction for the aggregation, and packages the transactions into a new block. The SM obtains the aggregate ciphertext by querying transactions on the blockchain, decrypts it using its own private key sk_s to obtain the plaintext aggregation result and the weight sum of the participants in L, computes the aggregated gradient using the weight sum, encrypts the aggregated gradient using the public key pk_x shared by the MO and the participants, stores the aggregated gradient ciphertext in the DFS, generates a transaction, and uploads it to the blockchain;
s50, updating a model by the participant and the MO, obtaining DFS addresses of the aggregation gradient ciphertext from the blockchain by the participant and the MO, downloading the aggregation gradient ciphertext from the DFS according to the addresses, and using a private key sk shared by the MO and the participant x And decrypting to obtain an aggregation gradient, and updating the local model. If the accuracy of the MO test model reaches a preset value, the federal learning task is ended, otherwise, the participant performs the next round of training, and the steps S20, S30, S40 and S50 are repeatedly executed;
Further, the step S10 includes the steps of:
s101, constructing a blockchain by a system administrator SM, wherein the traditional federal learning has a plurality of threats and challenges, such as imperfect protection of privacy security, and a blockchain network can be used to replace a central server for improving the security. The security can be improved through the non-tamperable block ledger, the damage of malicious data to a system can be avoided to a certain extent through the non-tamperable and traceability of information on the block chain, the risk of single-point faults can be reduced through the use of the block chain, the source of model parameters can be modified or updated in the block chain searching training, and each device can be used as a client to update a local model in the block chain federal learning.
The SM decides to adopt the Algorand consensus protocol. The participants and the model owner MO register on the blockchain, each owning an account, and generate their own public/private key pair (the key pair can create a secret channel between sender and receiver), a wallet address wa, a unique identity id, and a deposit; the wallet address is used to generate transactions. All participants, the MO, and the consensus nodes need to lock a portion of the deposit on the blockchain as security and create a block recording their deposit-ownership declaration transactions;
S102, the MO publishes a federated learning task by issuing an asset declaration transaction. The task comprises the initial model W_0, the model number mid, and the learning rate η. The model owner MO_j issues an asset declaration transaction TX_MO containing {mid_j, H(W_{j,0}), η_j, Sig_{MO_j}, "keywords"}, where H(W_{j,0}) is the hash address of the initial model stored in the DFS, Sig_{MO_j} is a digital signature generated with MO_j's private key sk_{MO_j} to prove that MO_j indeed owns the model, and "keywords" is the model description of the FL task. For ease of notation and description, the subscript j of {mid_j, W_{j,0}, η_j, MO_j} is dropped, and only one representative MO is described in what follows;
s103, SM trade TX according to MO MO Obtaining an initial model W 0 SM generates a set of CKKS public-private key sets (pk) for the FL task using a key generation algorithm that homomorphic encrypts CKKS s ,sk s ,evk s ,rk s ) Wherein pk is s Is a public key used for encrypting the plaintext polynomial; sk (sk) s The SM is used for secret storage for a private key and decrypting ciphertext of each round aggregation result; evk s The secret key is used for auxiliary calculation and is used for CKS homomorphic multiplication calculation to obtain a ciphertext product; rk s For rotating the key, for shifting the vector ciphertext to the left. The SM then transmits the security parameters (pk over the secret channel s ,evk s ,rk s ) Sending to MO;
s104, the participants voluntarily join in the FL task by adopting a mode of issuing data asset statement transaction, and one participant P is assumed i (i=1, 2, …) issue a data asset declaration transaction, FL task to join MO: where sil is P i Data set D of (2) i The number of the code is given, the code,is P i Is a private key of H (D i ) Is the data set D i Hash value of ++>Is used to prove P i Does have D i Digital signature of (1), H (TX MO ) Representing P i The FL task with MO added is transaction TX MO Is used to generate the hash value of (a). MO generates a set of CKKS public-private key sets (pk) using a key generation algorithm that homomorphic encrypts CKKS x ,sk x ,evk x ,rk x ) According to the transaction TX of the participants Di Secure parameter sp= { (pk) over secret channel x ,sk x ,evk x ,rk x ),(pk s ,evk s ,rk s ) Delta, which is the scaling factor in CKKS encoding, is sent to all participants.
Further, the step S20 includes the steps of:
S201, P_i obtains the learning rate η and the DFS address of the initial model W_0 through the MO's transaction TX_MO and downloads the initial model W_0 from the DFS by the address. Then P_i uses its private key sk_{P_i} to decrypt the security parameters SP sent by the MO over the secret channel, obtaining the CKKS information required for encryption and decryption: (pk_x, sk_x, evk_x, rk_x), (pk_s, evk_s, rk_s), and the scaling factor Δ;
S202, P_i trains the model using the local dataset D_i. Let f(x, W) be the neural network model, where x is the input and W is the model parameter, and use the cross-entropy function as the loss function:

$$L_f(W) = -\frac{1}{n} \sum_{\langle x_k, y_k \rangle \in D_i} y_k \cdot \log f(x_k, W)$$

where ⟨x_k, y_k⟩ ∈ D_i, x_k is an input, y_k is its label, and n is the size of the dataset D_i. For multi-classification problems, a cross-entropy function combined with a softmax function is used, and the output probability of each class is obtained by the softmax function.
In round r, P_i trains the model using the model parameters W_{r-1} of the previous round to obtain the gradient g_{i,r}. The goal of training a neural network model is to find the model parameters minimizing L_f(·); gradient descent can be used to obtain the optimal solution, computing the gradient at each iteration and then updating the model parameters by back-propagating the error. P_i therefore computes the gradient of the loss function in round r:

$$g_{i,r} = \nabla L_f(W_{r-1}, D_i^*)$$

where ∇L_f(·) is the gradient of the loss function L_f(·) and D_i^* is a subset of the dataset D_i;
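A minimal sketch of this local step — one mini-batch gradient of the softmax cross-entropy loss; the linear model and random data are stand-ins, not the patent's network:

```python
# One local training round: mini-batch gradient of softmax cross-entropy (sketch).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def local_gradient(W, X, Y):
    """Gradient of L_f(W) = -1/n * sum y_k . log f(x_k, W) for f = softmax(X W)."""
    probs = softmax(X @ W)
    return X.T @ (probs - Y) / len(X)       # d L_f / d W

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))                # mini-batch D_i* of 32 samples
Y = np.eye(3)[rng.integers(0, 3, size=32)]  # one-hot labels, 3 classes
W = np.zeros((4, 3))                        # model parameters W_{r-1}
g = local_gradient(W, X, Y)                 # the local gradient g_{i,r}
W_next = W - 0.1 * g                        # plain gradient-descent update
```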
s203, in order to cause larger influence on the model, the malicious client generally uploads a larger local gradient, normalizes the malicious gradient, and can reduce the influence of the malicious gradient. The gradient is regarded as a direction vector, the normalization method is used to make the local gradient mode be 1, the local gradient is converted into a unit vector, the unit vector is the same in size, and due to normalization, the cosine similarity of the gradient vector can be converted into the inner product of the vector to calculate cos=A+B.
The normalized local gradient is based on cosine similarity, and the calculation formula is as follows:
wherein A, B is a vector, by which is meant the inner product, the cos has a value of [ -1,1 []. The invention regards the gradient as a direction vector, and in order to calculate cosine similarity between gradients in a ciphertext state, the gradient needs to be normalized before encryption. P (P) i Local gradients were normalized using equation (8):
wherein|| I is the modulus of the vector is that,is the normalized gradient vector, i.e., the unit gradient vector;
s204, the gradient is used as a mapping of the local data set of the client, and the gradient in the plaintext state has been proved to be able to deduce the original data of the client. To protect participant local gradient privacy, participant P i The invention realizes the detection of malicious data in a ciphertext state by encrypting the local gradient by using a privacy protection technology and uploading the blockchain. The homomorphic encryption technology enables data to be calculated in a ciphertext state, the volume of the homomorphic encrypted data is increased, huge calculation cost and data transmission capacity are caused, the traditional privacy protection federal learning generally uses Paillier, but only can encrypt integers, and gradients are floating points, so that the gradients are required to be quantized first and then encrypted one by one, and the calculation and communication cost is increased. The CKS realizes the approximate calculation that the plaintext is the floating point number, homomorphic multiplication of the ciphertext makes the data scale larger, the CKS method uses homomorphic multiplication keys to realize the scaling of the multiplication ciphertext so as to control the data scale to be in a smaller range, and the CKS method can encrypt the floating point number and directly encrypt the vector, thereby reducing the calculation cost.
To protect privacy of local gradients, P i And taking the gradient of each layer of the neural network as a vector, and encrypting the vector layer by layer to obtain a gradient ciphertext of each layer. For convenience of description, the present invention will be described in detail with respect to only one layer of gradient vector ciphertext. P (P) i First call the coding function of CKSGet polynomial g (X) and then use SM's own public key pk s Calling the encryption function Enc (pk of CKKS s G (X)) to obtain a local gradient ciphertext +.>The above process can be described by equation (9):
wherein the method comprises the steps ofRepresenting the ciphertext of the vector x after encoding and encrypting the vector x using the public key pk;
S205, P_i stores the local gradient ciphertext ⟦ĝ_{i,r}⟧_{pk_s} in the DFS and packages its hash address H(⟦ĝ_{i,r}⟧_{pk_s}) into a transaction TX_{i,r}, which is uploaded to the blockchain.
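Putting S201-S205 together, the participant side of a round looks roughly as follows (a sketch reusing local_gradient from the earlier training sketch; dfs, encrypt_ckks, and the transaction layout are hypothetical placeholders):

```python
# Participant round (sketch; dfs, encrypt_ckks, and the TX layout are hypothetical).
import hashlib, json
import numpy as np

def participant_round(r, W_prev, batch, encrypt_ckks, dfs):
    X, Y = batch
    g = local_gradient(W_prev, X, Y)            # S202: local gradient g_{i,r}
    g_hat = g / np.linalg.norm(g)               # S203: normalize to a unit vector
    ct = encrypt_ckks(g_hat.ravel())            # S204: Enc(pk_s, Ecd(g_hat, Delta))
    addr = dfs.put(ct)                          # S205: ciphertext into the DFS
    tx = {"type": "gradient", "round": r, "addr": addr}
    tx["hash"] = hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()
    return tx                                   # TX_{i,r}, uploaded to the blockchain
```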
Further, the step S30 includes the steps of:
s301, MO according to transaction TX i.r Obtaining local gradient densityHash address of a textObtaining gradient ciphertext of participants from DFS according to address>
S302, the MO calls the CKKS inner product function Dot(⟦ĝ_{i,r}⟧_{pk_s}, ⟦ĝ_{i,r}⟧_{pk_s}, evk_s, rk_s) to obtain the modulus ciphertext ⟦m_{i,r}⟧ of each P_i's gradient vector, stores the computation result in the DFS, packages the generated hash address into a transaction, and uploads the transaction to the blockchain;
s303, SM obtains P from block chain i Modulus of gradient vector of (2)Using its own private key sk s Decryption to obtain m i,r If m is i,r =1, handle P i Added to the participant list L. The MO stores the honest normalized participant list L in the DFS, generating a transaction containing the participant list L hash address H (L): /> Uploading to a blockchain;
s304 MO use root dataset D 0 Training neural network model f (x, W r-1 ) Obtaining a baseline gradient g 0,r . MO normalized local gradient derivationFirst call the coding function of CKS +.>Get polynomial g (X) and then use SM's own public key pk s Calling the encryption function Enc (pk of CKKS s G (X)) to obtain a local gradient ciphertext +.>The above process can be described by equation (10):
CKS assisted computation key evk by MO using SM s And a rotation key rk s Calling the inner product function of CKS Calculating gradient vector ciphertext of each participant in the list L>And MO gradient vector ciphertext->Cosine similarity ciphertext between->
S305, for each P_j (P_j ∈ L), the weight of P_j is determined from the cosine similarity value cos_{j,r} between P_j's gradient vector and the MO's gradient vector: when cos_{j,r} ≤ 0, the gradient weight of P_j is set to 0; otherwise it is set to cos_{j,r}.

Because the comparison must be carried out under encryption, the value domain is first converted to [0, 1]. For each P_j (P_j ∈ L), the MO calls the CKKS Add function to add ⟦cos_{j,r}⟧ and the ciphertext of 1, recording the result as res_{j,r} with value range [0, 2]; it then calls the CKKS Mult function to multiply res_{j,r} by the ciphertext of 1/2, recording the result as re_{j,r}, and calls the rescaling function RS(re_{j,r}) to obtain the converted ciphertext rt_{j,r} with value range [0, 1]; when cos_{j,r} ≤ 0, the converted value satisfies rt_{j,r} ≤ 1/2. Finally the MO calls the CKKS homomorphic ciphertext comparison function Max(rt_{j,r}, ⟦1/2⟧, d), where d = 2α - 3 and α is the number of binary bits of the ciphertext rt_{j,r}; the Max function returns whichever of rt_{j,r} and ⟦1/2⟧ has the larger plaintext value. This process is described by equation (12):

$$[\![mt_{j,r}]\!] = \mathrm{Max}\Big(\mathrm{RS}\big(\mathrm{Mult}(\mathrm{Add}([\![\cos_{j,r}]\!], [\![1]\!]), [\![1/2]\!], evk_s)\big), [\![1/2]\!], d\Big) \tag{12}$$
s306, MO invoking CKKSFunction, will->And->The result of the multiplication is denoted ret j,r Then call +.>Function, ret j,r And->The result of the addition is recorded asThe above process can be described by equation (13):
wherein the method comprises the steps ofRepresenting participant P i Weight ciphertext of (a). MO uses formula (8) will +.>Converting back to the original value before the value field conversion.
The MO then calls the CKKS Add(S_{j,r}, S_{j+1,r}) function |L| - 1 times over P_j ∈ L, j = 1, 2, …, |L| - 1, obtaining the sum of the weights of the |L| participants:

$$[\![sum_r]\!] = \sum_{P_j \in L} [\![w_{j,r}]\!] \tag{14}$$

where |L| denotes the number of participants in the participant list L;
s307, MO willStored in DFS, will->Hash address +.>And packaging into a transaction: />H(TX MO ) "Keywords" }, transaction TX weight Uploading to the blockchain.
Further, the step S40 includes the steps of:
s401, using an Algornd consensus protocol of a blockchain to randomly select a part of committee from all consensus nodes by using a verifiable random function, and selecting one member from the committee to be an aggregator;
S402, the aggregator first queries the transaction TX_L from the blockchain to obtain the hash address H(L) of the honestly normalized participant list L and retrieves L from the DFS by the address; it queries the transactions {TX_{j,r} | j ∈ L} to obtain the hash addresses of the gradient ciphertexts and retrieves each ⟦ĝ_{j,r}⟧_{pk_s} from the DFS; and it queries TX_weight to obtain the hash addresses of the weight ciphertexts and retrieves each ⟦w_{j,r}⟧ from the DFS. Using the SM's CKKS auxiliary computation key evk_s, for each P_j (P_j ∈ L) the aggregator calls the CKKS Mult function to multiply ⟦w_{j,r}⟧ and ⟦ĝ_{j,r}⟧_{pk_s}, recording the result as agg_{j,r}; for each P_j (P_j ∈ L) the aggregator then calls the CKKS rescaling function RS(agg_{j,r}) to prevent the scale and error of the ciphertext from growing, obtaining the converted ciphertext ag_{j,r}. Finally the aggregator calls the CKKS Add(ag_{j,r}, ag_{j+1,r}) function |L| - 1 times, P_j ∈ L, j = 1, 2, …, |L| - 1, obtaining the aggregate ciphertext of the |L| participants. The gradient ciphertext aggregation performed by the aggregator is described by equation (15):

$$[\![agg_r]\!] = \sum_{P_j \in L} \mathrm{RS}\big(\mathrm{Mult}([\![w_{j,r}]\!], [\![\hat{g}_{j,r}]\!]_{pk_s}, evk_s)\big) \tag{15}$$
s403, after the aggregator calculates, aggregating the ciphertextStored in the DFS and generating a transaction Packaging all transactions of the round into a new block r ={TX Agg ,TX r,j I j e L. Next, the committee member validates the new block r And votes for it if block is agreed r Then a transaction is generated: if more than 2/3 of the committee members agree to this block r Then the block is admitted and the aggregator gets rewarded, all committee members broadcasting the block; otherwise, the deposit of the aggregator is not received and rewarded to other members of the committee, then the next committee member in the block proposal sequence becomes the aggregator, and the aggregation steps S402 and S403 are re-executed until the committee is about the block r Agree on;
s404, after achieving the block chain consensus, SM first queries transaction TX from the block chain Agg Obtaining the aggregate ciphertextFrom transaction TX weight Obtain the sum ciphertext of the weight->Then, the SM uses its own private key sk s Invoking the decryption function of CKS +.>The plaintext polynomials t (X) and sum (X) are respectively obtained, and then the decoding functions Dcd (t (X), delta) and Dcd (sum (X), delta) of CKS are respectively called to obtain an aggregation result g r Sum of sum weights sum r The above process can be described by equation (16):
finally, SM calculates an aggregation gradient:
The SM calls the CKKS encoding function Ecd(g̃_r, Δ) to obtain the polynomial m(X), and then uses the public key pk_x shared by all participants and the MO to call the CKKS encryption function Enc(pk_x, m(X)), obtaining the aggregated gradient ciphertext ⟦g̃_r⟧_{pk_x}. This process is described by equation (18):

$$[\![\tilde{g}_r]\!]_{pk_x} = \mathrm{Enc}(pk_x, \mathrm{Ecd}(\tilde{g}_r, \Delta)) \tag{18}$$

The SM stores ⟦g̃_r⟧_{pk_x} in the DFS and packages its hash address H(⟦g̃_r⟧_{pk_x}) into a transaction TX_Result, which is uploaded to the blockchain.
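In plaintext terms, S402-S404 reduce to a weighted average followed by division by the weight sum and re-encryption; a sketch (re_encrypt is a hypothetical placeholder):

```python
# Plaintext mirror of gradient aggregation S402-S404 (sketch).
import numpy as np

def aggregate(grads_hat, weights, re_encrypt):
    agg = sum(w * g for w, g in zip(weights, grads_hat))   # eq. (15), decrypted: g_r
    g_tilde = agg / np.sum(weights)                        # eq. (17): g_r / sum_r
    return re_encrypt(g_tilde)                             # eq. (18): Enc(pk_x, ...)
```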
Further, the step S50 includes the steps of:
s501, MO and P i Querying transaction TX from blockchain Result And downloading the aggregation gradient ciphertext from the DFS according to the hash addressThen use the MO's private key sk x Invoking the decryption function of CKS +.>Obtaining a plaintext polynomial p (X), calling a decoding function Dcd (p (X), delta) of CKKS to obtain a global aggregation gradient +.>The above process can be described by equation (19):
s502, MO and participants use aggregation gradientUpdating the local model:
s503, testing accuracy after the MO updates the model, and if the model accuracy is expected, generating a transaction and uploading the transaction to the blockchain:indicating that the FL task is over. Otherwise P i Proceeding to S201, the local model training of r+1 rounds is performed.
Compared with the prior art, the invention has the beneficial effects that:
the invention utilizes the homomorphic encryption algorithm CKS to protect the privacy of the participant model gradient, utilizes the cosine similarity method to identify the malicious gradient in the gradient ciphertext, utilizes the homomorphic ciphertext comparison algorithm to remove the malicious gradient, and utilizes the consensus mechanism of the blockchain to calculate and verify the aggregation result.
(1) According to the invention, the block chain is utilized to record the federal learning task flow, the homomorphic encryption algorithm CKS method is used for realizing inner product operation under the gradient ciphertext state, and the cosine similarity method is used for identifying malicious participants, so that the model training result of honest participants is protected, and the model training accuracy in the federal learning process is improved.
(2) The invention provides a privacy protection federal learning method based on blockchain anti-poisoning attack, which utilizes a CKS encryption method to encrypt gradients, and the traditional privacy protection federal learning method generally uses Pailler as a privacy protection means, but only can quantize integers, and the gradients need to be quantized first and then encrypted one by one, so that calculation and communication expenditure is increased, and CKS allows floating point numbers and vectors to be encrypted, so that calculation efficiency is improved.
(3) The invention provides a privacy protection federation learning method based on block chain anti-poisoning attack, which is characterized in that a block chain technology records federation learning process, the correctness of an aggregation result is ensured by using an Algorand consensus mechanism of a block chain, compared with a method for completing gradient aggregation by entities, the communication times among the entities are reduced, the problems of server malicious behaviors and single-point faults are solved, and the federation learning based on the block chain is reduced relative to the calculation communication cost of safe multiparty computing.
(4) The invention provides a privacy protection federal learning method based on blockchain anti-poisoning attack, which is verified by experiments, compared with the existing malicious participant detection method, the method reduces the calculation cost and the communication cost, and is more suitable for some lightweight Internet of things equipment.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Fig. 1 is a flowchart of the privacy-preserving federated learning method resistant to poisoning attacks based on blockchain according to embodiment 1 of the present invention.

Fig. 2 is a system model diagram of the privacy-preserving federated learning method resistant to poisoning attacks based on blockchain according to embodiment 1 of the present invention.

Fig. 3 is a sequence diagram of the initial round of the privacy-preserving federated learning method resistant to poisoning attacks based on blockchain according to embodiment 1 of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. Of course, the specific embodiments described herein are for purposes of illustration only and are not intended to limit the invention.
Example 1
Referring to figs. 1 to 3, the present invention provides a privacy-preserving federated learning method resistant to poisoning attacks based on blockchain; as shown in fig. 1, it mainly includes five entities — a system administrator SM, a model owner MO, a blockchain, participants, and a distributed file system DFS — and comprises the following steps:

S10, global system initialization: the system administrator SM is responsible for constructing the blockchain system; the participants and the model owner MO register in the blockchain system to obtain accounts, and each generates its own public/private key pair. The MO publishes a federated learning task that participants join voluntarily, and the SM uses the key generation algorithm of the fully homomorphic encryption scheme CKKS to generate a key set (pk_s, sk_s, evk_s, rk_s) for this federated learning task and sends (pk_s, evk_s, rk_s) to the MO over a secret channel. The MO uses the CKKS key generation algorithm to generate a key set (pk_x, sk_x, evk_x, rk_x) for this task and sends the security parameters SP = {(pk_x, sk_x, evk_x, rk_x), (pk_s, evk_s, rk_s), Δ} to all participants over a secret channel, where Δ is the scaling factor in CKKS encoding;

S20, local model training: each participant obtains the security parameters SP through the secret channel with the MO, trains the model to obtain a local gradient, normalizes and encrypts the local gradient, stores the local gradient ciphertext in the distributed file system DFS, and finally packages the DFS address of the ciphertext into a transaction uploaded to the blockchain;

S30, malicious participant identification: the MO and SM cooperate to determine whether the participants performed the normalization operation and generate the list L of correctly normalized participants. The MO first trains the global model to obtain a baseline gradient, normalizes and encrypts it into ciphertext, then computes the cosine similarity ciphertext between the baseline gradient ciphertext and each L-participant's local gradient ciphertext, and applies the homomorphic ciphertext comparison algorithm: when the cosine similarity value is less than or equal to 0, the participant's gradient weight is set to 0, otherwise it is set to the cosine similarity value, the whole process operating in the ciphertext state. Finally, the gradient weight ciphertexts are stored in the DFS, and a transaction is generated and uploaded to the blockchain;

S40, gradient ciphertext aggregation: the blockchain invokes the Algorand consensus mechanism to select an aggregator, which computes the round's aggregate ciphertext using the participants' gradient weights, stores the aggregate ciphertext in the distributed file system DFS, generates a transaction for the aggregation, and packages the transactions into a new block. The SM obtains the aggregate ciphertext by querying transactions on the blockchain, decrypts it using its own private key sk_s to obtain the plaintext aggregation result and the weight sum of the participants in L, computes the aggregated gradient using the weight sum, encrypts the aggregated gradient using the public key pk_x shared by the MO and the participants, stores the aggregated gradient ciphertext in the DFS, generates a transaction, and uploads it to the blockchain;

S50, model updating by the participants and the MO: the participants and the MO obtain the DFS address of the aggregated gradient ciphertext from the blockchain, download the aggregated gradient ciphertext from the DFS by the address, decrypt it using the private key sk_x shared by the MO and the participants to obtain the aggregated gradient, and update the local model. If the accuracy of the model tested by the MO reaches the preset value, the federated learning task ends; otherwise the participants perform the next round of training, repeating steps S20, S30, S40, and S50;
The distributed file system DFS mainly stores the data ciphertexts; only the storage addresses of the data ciphertexts are kept on the blockchain, and an entity in the system can download the corresponding data ciphertext by its address, thereby reducing the storage overhead of the blockchain.
As shown in fig. 2, the privacy-preserving federated learning method resistant to poisoning attacks based on blockchain includes five entities: a system administrator SM, a model owner MO, a blockchain, participants, and a distributed file system DFS. The system administrator SM is a trusted entity responsible for building the blockchain; it generates a key set for the federated learning task. The model owner MO is an entity willing to improve its products or services by using a higher-performance model and pays rewards for the model; the MO is responsible for publishing the federated learning task and collecting the root dataset, generates a key set for the federated learning task, and sends the security parameters to all participants over a secret channel. The participants are responsible for training on the local dataset to obtain a local gradient, normalizing and encrypting the local gradient, storing the local gradient ciphertext in the DFS to generate a hash address, packaging the hash address into a transaction, and uploading the transaction to the blockchain. The MO and SM cooperate to complete the judgment of malicious participants. The DFS is a distributed file system that stores the data files, generates a unique hash address for each file's contents, and packages the hash addresses into transactions. The main interaction sequence between the five entities in the initial round is shown in fig. 3.
The step S10 includes the steps of:
s101, a system administrator SM constructs a blockchain, the SM determines that an Algorand consensus protocol is adopted, participants and model owners MO register in the blockchain and respectively own an account number to generate a pair of public and private keys of the participants and the model owners MO (the public-private key pair may create a secret channel between sender and receiver), a wallet address wa, a unique identity id and deposit, the wallet address being used to generate a transaction, all participants, MO and consensus nodes need to lock a portion of the deposit on the blockchain as deposit and create a block that records their deposit ownership statement transactions;
s102, the MO adopts a mode of issuing asset statement transaction to issue a federal learning task, and the task comprises the following steps: initial model W 0 Model number mid, learning rate η. Model owner MO j Issuing an asset declaration transaction: wherein H (W) j,0 ) Is the initial model hash address stored at the DFS,is MO (metal oxide semiconductor) j Private key of->Is used to prove MO j Indeed, there is a digital signature of the model, "keys" represents the model description of the FL task. For ease of notation and description, { mid }' is removed j ,W j,0 ,η j ,MO j The subscript j of }, only one representative MO is described subsequently;
S103, SM trade TX according to MO MO Obtaining an initial model W 0 SM generates a set of CKKS public-private key sets (pk) for the FL task using a key generation algorithm that homomorphic encrypts CKKS s ,sk s ,evk s ,rk s ) Wherein pk is s Is a public key used for encrypting the plaintext polynomial; sk (sk) s The SM is used for secret storage for a private key and decrypting ciphertext of each round aggregation result; evk s The secret key is used for auxiliary calculation and is used for CKS homomorphic multiplication calculation to obtain a ciphertext product; rk s For rotating the key, for shifting the vector ciphertext to the left. The SM then transmits the security parameters (pk over the secret channel s ,evk s ,rk s ) Sending to MO;
s104, the participants voluntarily join in the FL task by adopting a mode of issuing data asset statement transaction, and one participant P is assumed i (i=1, 2, …) issue a data asset declaration transaction, FL task to join MO: where sil is P i Data set D of (2) i The number of the code is given, the code,is P i Is a private key of H (D i ) Is the data set D i Hash value of ++>Is used to prove P i Does have D i Digital signature of (1), H (TX MO ) Representing P i The FL task with MO added is transaction TX MO Is used to generate the hash value of (a). MO generates a set of CKKS public-private key sets (pk) using a key generation algorithm that homomorphic encrypts CKKS x ,sk x ,evk x ,rk x ) According to the transaction TX of the participants Di Secure parameter sp= { (pk) over secret channel x ,sk x ,evk x ,rk x ),(pk s ,evk s ,rk s ) Delta, which is the scaling factor in CKKS encoding, is sent to all participants. />
The step S20 includes:
S201, P_i obtains the learning rate η and the hash address of the initial model W_0 from the MO's transaction TX_MO, and downloads W_0 from the DFS according to that address. Then P_i uses its private key sk_{P_i} to decrypt the security parameter SP sent by the MO over the secret channel, obtaining the CKKS information required for encryption and decryption: (pk_x, sk_x, evk_x, rk_x), (pk_s, evk_s, rk_s), and the scaling factor δ;
S202, P_i trains the model using its local dataset D_i. Let f(x, W) be the neural network model, where x is an input and W are the model parameters, and use the cross entropy function as the loss function (equation (1)):

L_f(W) = −(1/n) Σ_{k=1}^{n} y_k · log f(x_k, W)    (1)
where ⟨x_k, y_k⟩ ∈ D_i, x_k is an input, y_k is its label, and n is the size of dataset D_i. For multi-classification problems, the cross entropy function is combined with the softmax function, and the output probability of each class is obtained through the softmax function.
In the r-th round, P_i trains the model using the previous round's model parameters W_{r-1} to obtain the gradient g_{i,r}. The goal of training a neural network model is to find the model parameters that minimize L_f(·); the gradient descent method can be used to obtain the optimal solution, computing the gradient at each iteration and then updating the model parameters by back-propagating errors. Thus P_i calculates the gradient of the r-th round loss function (equation (2)):

g_{i,r} = ∇L_f(W_{r-1}, D_i*)    (2)
where ∇L_f(·) is the gradient of the loss function L_f and D_i* is a subset of the dataset D_i;
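As a concrete (and assumed) realization of S202, the sketch below computes one round's local gradient with PyTorch; the model and mini-batch are placeholders:

```python
# Illustrative sketch of S202: one local gradient computation g_{i,r} on a
# mini-batch D_i* using cross-entropy loss (eq. (1)) and back-propagation.
import torch
import torch.nn.functional as F

def local_gradient(model: torch.nn.Module, batch_x: torch.Tensor, batch_y: torch.Tensor):
    model.zero_grad()
    logits = model(batch_x)                  # f(x, W_{r-1})
    loss = F.cross_entropy(logits, batch_y)  # softmax + cross-entropy
    loss.backward()                          # back-propagate the error
    return [p.grad.detach().clone() for p in model.parameters()]  # g_{i,r}
```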
S203, each gradient is regarded as a direction vector. In order to compute cosine similarity between gradients in the ciphertext state, the gradients must be normalized before encryption. P_i normalizes its local gradient using equation (3):

ĝ_{i,r} = g_{i,r} / ||g_{i,r}||    (3)
where ||·|| is the modulus of a vector and ĝ_{i,r} is the normalized gradient vector, i.e., the unit gradient vector;
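The normalization of equation (3) amounts to dividing every layer's gradient by the global L2 norm, as in this minimal NumPy sketch:

```python
# Illustrative sketch of S203: normalize the layer-wise gradient to a unit vector.
import numpy as np

def normalize_gradient(layers):
    norm = np.sqrt(sum(float((g ** 2).sum()) for g in layers))  # ||g_{i,r}||
    return [g / norm for g in layers]                           # unit gradient g_hat
```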
S204, to protect the privacy of the local gradient, P_i treats the gradient of each layer of the neural network as a vector and encrypts it layer by layer, obtaining a gradient ciphertext for each layer. For ease of description, only one layer's gradient vector ciphertext is described in detail here. P_i first calls the CKKS encoding function Ecd(ĝ_{i,r}, δ) to obtain a polynomial g(X), and then calls the CKKS encryption function Enc(pk_s, g(X)) using the SM's public key pk_s to obtain the local gradient ciphertext ⟦ĝ_{i,r}⟧_{pk_s}. This process is described by equation (4):

⟦ĝ_{i,r}⟧_{pk_s} = Enc(pk_s, Ecd(ĝ_{i,r}, δ))    (4)
where ⟦x⟧_pk denotes the ciphertext of vector x after encoding x and encrypting it using the public key pk;
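Under the same TenSEAL assumption as above, the layer-by-layer encryption of S204 could look as follows; encoding with scale δ and encryption under pk_s are fused in one call:

```python
# Illustrative sketch of S204: encrypt each layer's normalized gradient as one
# CKKS vector under the SM's public context (assumed TenSEAL API).
import tenseal as ts

def encrypt_gradient_layers(public_ctx, layers):
    # Each call encodes and encrypts one layer's gradient, as in eq. (4).
    return [ts.ckks_vector(public_ctx, layer.ravel().tolist()) for layer in layers]
```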
S205, P_i stores the local gradient ciphertext ⟦ĝ_{i,r}⟧_{pk_s} in the DFS and packages its hash address H(⟦ĝ_{i,r}⟧_{pk_s}) into a transaction TX_{i,r}, which is uploaded to the blockchain. The pseudo code for participant local model training and encryption is shown in Table 1.
TABLE 1
The step S30 includes:
S301, the MO obtains the hash address of the local gradient ciphertext ⟦ĝ_{i,r}⟧_{pk_s} from transaction TX_{i,r} and obtains the participant's gradient ciphertext from the DFS according to that address;
S302, the MO calls the CKKS inner product function to obtain the ciphertext of each P_i's gradient-vector modulus ⟦m_{i,r}⟧, stores the calculation result in the DFS, packages the generated hash address into a transaction, and uploads the transaction to the blockchain;
S303, the SM obtains the ciphertext of P_i's gradient-vector modulus from the blockchain and decrypts it using its own private key sk_s to obtain m_{i,r}; if m_{i,r} = 1, P_i is added to the participant list L. The MO stores the honestly normalized participant list L in the DFS and generates a transaction TX_L containing the hash address H(L) of the list, which is uploaded to the blockchain. The pseudo code of the MO and SM normalization decision is shown in Table 2;
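The normalization check of S302/S303 reduces to a homomorphic inner product of the gradient ciphertext with itself; the sketch below (assumed TenSEAL API, single-ciphertext simplification) shows the MO-side computation and the SM-side decision, with a small tolerance since CKKS is approximate:

```python
# Illustrative sketch of S302/S303: m_{i,r} = <g_hat, g_hat> computed in ciphertext
# by the MO; the SM decrypts and admits P_i to list L only if m_{i,r} is about 1.
def check_normalized(grad_ct, sm_ctx, tol=1e-3):
    sq_norm_ct = grad_ct.dot(grad_ct)               # ciphertext of ||g_hat||^2
    m = sq_norm_ct.decrypt(sm_ctx.secret_key())[0]  # SM-side decryption with sk_s
    return abs(m - 1.0) < tol                       # approximate equality check
```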
S304, the MO uses the root dataset D_0 to train the neural network model f(x, W_{r-1}) and obtains a baseline gradient g_{0,r}. The MO normalizes the baseline gradient to obtain ĝ_{0,r}, first calls the CKKS encoding function Ecd(ĝ_{0,r}, δ) to obtain a polynomial g(X), and then calls the CKKS encryption function Enc(pk_s, g(X)) using the SM's public key pk_s to obtain the baseline gradient ciphertext ⟦ĝ_{0,r}⟧_{pk_s}. This process is described by equation (5):

⟦ĝ_{0,r}⟧_{pk_s} = Enc(pk_s, Ecd(ĝ_{0,r}, δ))    (5)
TABLE 2
Using the SM's CKKS evaluation key evk_s and rotation key rk_s, the MO calls the CKKS inner product function to compute the cosine similarity ciphertext ⟦cos_{j,r}⟧ between the gradient vector ciphertext ⟦ĝ_{j,r}⟧_{pk_s} of each participant in list L and the MO's baseline gradient vector ciphertext ⟦ĝ_{0,r}⟧_{pk_s} (equation (6)):

⟦cos_{j,r}⟧ = ⟨⟦ĝ_{j,r}⟧_{pk_s}, ⟦ĝ_{0,r}⟧_{pk_s}⟩    (6)

Since both gradients are unit vectors, their inner product equals their cosine similarity.
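Because both operands are unit vectors, the cosine similarity ciphertext of equation (6) is simply a homomorphic dot product, e.g. (assumed TenSEAL API, single-ciphertext simplification):

```python
# Illustrative sketch of S304: [[cos_{j,r}]] as a homomorphic inner product of a
# participant gradient ciphertext and the MO's baseline gradient ciphertext.
def cosine_similarity_ct(grad_ct_j, baseline_ct):
    return grad_ct_j.dot(baseline_ct)  # equals cosine similarity for unit vectors
```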
S305, for each P_j (P_j ∈ L), the cosine similarity value cos_{j,r} between P_j's gradient vector and the MO's gradient vector is evaluated. When cos_{j,r} ≤ 0, P_j's gradient weight is set to 0; otherwise the weight is set to cos_{j,r}. This decision is carried out entirely in the ciphertext state, as follows.
For each P_j (P_j ∈ L), the MO calls the CKKS Add(·,·) function to add ⟦cos_{j,r}⟧ and the encryption of 1; the result, recorded as res_{j,r}, has value range [0, 2]. The MO then calls the CKKS Mult(·,·) function to multiply res_{j,r} by 1/2, recording the result as re_{j,r}, and calls the rescaling function RS(re_{j,r}) to obtain the converted ciphertext rt_{j,r}, whose value range is [0, 1]; when cos_{j,r} = 0, the converted ciphertext rt_{j,r} encrypts 1/2. Finally the MO calls the CKKS homomorphic ciphertext numerical comparison function with parameter d = 2α − 3, where α is the number of binary bits of the ciphertext rt_{j,r}; the comparison function returns the ciphertext having the larger plaintext value between rt_{j,r} and the encryption of 1/2. This process is described by equation (7):

⟦mx_{j,r}⟧ = Max(rt_{j,r}, ⟦1/2⟧)    (7)
The pseudo code of the CKKS numerical comparison function is shown in Table 3;
TABLE 3
S306, the MO calls the CKKS Mult(·,·) function to multiply ⟦mx_{j,r}⟧ by 2, recording the result as ret_{j,r}, and then calls the Add(·,·) function to add ret_{j,r} and the encryption of −1; the result is recorded as ⟦S_{j,r}⟧. This process is described by equation (8):

⟦S_{j,r}⟧ = Add(Mult(⟦mx_{j,r}⟧, 2), ⟦−1⟧)    (8)
where ⟦S_{j,r}⟧ denotes participant P_j's weight ciphertext. Using equation (8), the MO converts the comparison result back to the original value range before the range conversion, so that S_{j,r} = max(cos_{j,r}, 0).
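In plaintext terms, the whole S305-S306 pipeline computes the weight max(cos_{j,r}, 0); the range conversion exists only because the homomorphic comparison operates on [0, 1]. A plaintext reference of the ciphertext computation:

```python
# Plaintext reference (not ciphertext code) of what S305-S306 compute.
def weight_from_cosine(cos: float) -> float:
    rt = (cos + 1.0) / 2.0  # res = cos + 1 in [0,2]; re = res * 1/2 in [0,1]
    mx = max(rt, 0.5)       # stands for the homomorphic Max against 1/2
    return 2.0 * mx - 1.0   # convert back (eq. (8)); equals max(cos, 0)
```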
The MO calls the CKKS Add(S_{j,r}, S_{j+1,r}) function |L| − 1 times, P_j ∈ L, j = 1, 2, …, |L| − 1, obtaining the sum of the weights of the |L| participants (equation (9)):

⟦sum_r⟧ = Σ_{j∈L} ⟦S_{j,r}⟧    (9)

where |L| represents the number of participants in the participant list L;
S307, the MO stores the weight ciphertexts ⟦S_{j,r}⟧ and their sum ⟦sum_r⟧ in the DFS, packages the corresponding hash addresses into a transaction TX_weight, and uploads TX_weight to the blockchain. The pseudo code for the MO weight calculation is shown in Table 4.
TABLE 4
The step S40 includes:
S401, the blockchain's Algorand consensus protocol uses a verifiable random function to randomly select a committee from all consensus nodes, and one member of the committee is selected as the aggregator;
S402, the aggregator first queries transaction TX_L from the blockchain to obtain the hash address H(L) of the honestly normalized participant list L and obtains L from the DFS according to that address; queries transactions {TX_{j,r} | j ∈ L} to obtain the hash addresses of the gradient ciphertexts and obtains ⟦ĝ_{j,r}⟧_{pk_s} from the DFS; and queries transaction TX_weight to obtain the hash addresses of the weight ciphertexts and obtains ⟦S_{j,r}⟧ and ⟦sum_r⟧ from the DFS. The aggregator then uses the SM's CKKS evaluation key evk_s: for each P_j (P_j ∈ L), it calls the CKKS Mult(·,·) function to multiply ⟦S_{j,r}⟧ and ⟦ĝ_{j,r}⟧_{pk_s}, recording the result as agg_{j,r}. For each P_j (P_j ∈ L), the aggregator then calls the CKKS rescaling function RS(agg_{j,r}) to prevent the scale and error of the ciphertext from growing, obtaining the converted ciphertext ag_{j,r}. Finally the aggregator calls the CKKS Add(ag_{j,r}, ag_{j+1,r}) function |L| − 1 times, P_j ∈ L, j = 1, 2, …, |L| − 1, obtaining the aggregate ciphertext of the |L| participants. This gradient ciphertext aggregation is described by equation (10):

⟦t_r⟧ = Σ_{j∈L} ⟦S_{j,r}⟧ · ⟦ĝ_{j,r}⟧_{pk_s}    (10)
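A compact sketch of the aggregator's loop in S402 under the same TenSEAL assumption (the library rescales ciphertext products automatically, playing the role of RS(·)):

```python
# Illustrative sketch of S402: weighted gradient aggregation in ciphertext (eq. (10)).
def aggregate(honest_ids, weight_ct, grad_ct):
    agg = None
    for j in honest_ids:
        term = weight_ct[j] * grad_ct[j]           # ciphertext-ciphertext product
        agg = term if agg is None else agg + term  # running homomorphic sum
    return agg                                     # [[t_r]]
```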
S403, after the aggregator completes the calculation, it stores the aggregate ciphertext ⟦t_r⟧ in the DFS and generates a transaction TX_Agg, packaging all of this round's transactions into a new block_r = {TX_Agg, TX_{r,j} | j ∈ L}. Next, the committee members verify the new block_r and vote on it; a member that agrees with block_r generates a voting transaction. If more than 2/3 of the committee members agree with block_r, the block is admitted and the aggregator is rewarded, and all committee members broadcast the block; otherwise, the aggregator's deposit is confiscated and awarded to the other committee members, the next committee member in the block proposal sequence becomes the aggregator, and aggregation steps S402 and S403 are re-executed until the committee reaches agreement on block_r;
S404, after blockchain consensus is reached, the SM first queries transaction TX_Agg from the blockchain to obtain the aggregate ciphertext ⟦t_r⟧ and queries transaction TX_weight to obtain the weight-sum ciphertext ⟦sum_r⟧. The SM then uses its own private key sk_s to call the CKKS decryption function Dec(sk_s, ·), obtaining the plaintext polynomials t(X) and sum(X) respectively, and then calls the CKKS decoding functions Dcd(t(X), δ) and Dcd(sum(X), δ) to obtain the aggregation result g_r and the weight sum sum_r. This process is described by equation (11):

g_r = Dcd(Dec(sk_s, ⟦t_r⟧), δ),  sum_r = Dcd(Dec(sk_s, ⟦sum_r⟧), δ)    (11)
Finally, the SM calculates the aggregation gradient (equation (12)):

ḡ_r = g_r / sum_r    (12)
The SM calls the CKKS encoding function Ecd(ḡ_r, δ) to obtain a polynomial m(X), and then calls the CKKS encryption function Enc(pk_x, m(X)) using the public key pk_x common to all participants and the MO, obtaining the aggregation gradient ciphertext ⟦ḡ_r⟧_{pk_x}. This process is described by equation (13):

⟦ḡ_r⟧_{pk_x} = Enc(pk_x, Ecd(ḡ_r, δ))    (13)
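The decrypt-divide-re-encrypt sequence of S404 (equations (11)-(13)) could be sketched as follows, again assuming TenSEAL contexts for the SM key set and the shared key set:

```python
# Illustrative sketch of S404: decrypt [[t_r]] and [[sum_r]] with sk_s, compute
# the aggregation gradient (eq. (12)), and re-encrypt it under pk_x (eq. (13)).
import tenseal as ts

def rekey_aggregate(agg_ct, sum_ct, sm_ctx, shared_public_ctx):
    t = agg_ct.decrypt(sm_ctx.secret_key())          # aggregation result g_r
    s = sum_ct.decrypt(sm_ctx.secret_key())[0]       # weight sum sum_r
    g_bar = [v / s for v in t]                       # g_bar_r = g_r / sum_r
    return ts.ckks_vector(shared_public_ctx, g_bar)  # [[g_bar_r]]_{pk_x}
```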
The SM stores ⟦ḡ_r⟧_{pk_x} in the DFS and packages its hash address H(⟦ḡ_r⟧_{pk_x}) into a transaction TX_Result, which is uploaded to the blockchain. The pseudo code for the aggregator gradient ciphertext aggregation is shown in Table 5.
TABLE 5
The step S50 includes:
S501, the MO and each P_i query transaction TX_Result from the blockchain and download the aggregation gradient ciphertext ⟦ḡ_r⟧_{pk_x} from the DFS according to the hash address. They then use the private key sk_x shared by the MO and the participants to call the CKKS decryption function Dec(sk_x, ⟦ḡ_r⟧_{pk_x}), obtaining a plaintext polynomial p(X), and call the CKKS decoding function Dcd(p(X), δ) to obtain the global aggregation gradient ḡ_r. This process is described by equation (14):

ḡ_r = Dcd(Dec(sk_x, ⟦ḡ_r⟧_{pk_x}), δ)    (14)
S502, the MO and the participants use the aggregation gradient ḡ_r to update the local model (equation (15)):

W_r = W_{r-1} − η · ḡ_r    (15)
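Equation (15) is an ordinary gradient-descent step; as a minimal sketch:

```python
# Illustrative sketch of S502: W_r = W_{r-1} - eta * g_bar_r, layer by layer.
def update_model(params, g_bar, eta: float):
    return [w - eta * g for w, g in zip(params, g_bar)]
```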
S503, after updating the model, the MO tests its accuracy. If the model accuracy meets expectations, the MO generates a transaction and uploads it to the blockchain, indicating that the FL task is over; otherwise P_i returns to step S201 and performs round r + 1 of local model training.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed; any modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within its scope.

Claims (6)

1. A privacy protection federal learning method based on blockchain anti-poisoning attack is characterized by comprising the following steps:
S10, global system initialization: a system administrator SM is responsible for constructing the blockchain system; participants and the model owner MO register on the blockchain system to obtain an account and generate their own pair of public and private keys; the MO issues a federal learning task and participants voluntarily join it; the SM generates a set of public and private keys (pk_s, sk_s, evk_s, rk_s) for the federal learning task using the key generation algorithm of the homomorphic encryption scheme CKKS and sends (pk_s, evk_s, rk_s) to the MO over a secret channel; the MO generates a set of public and private keys (pk_x, sk_x, evk_x, rk_x) for this federal learning task using the CKKS key generation algorithm and sends the security parameter SP = {(pk_x, sk_x, evk_x, rk_x), (pk_s, evk_s, rk_s), δ} to all participants over the secret channel, where δ is the scaling factor in CKKS encoding;
S20, local model training: a participant obtains the security parameter SP over the secret channel with the MO, trains the model to obtain a local gradient, normalizes and encrypts the local gradient, stores the local gradient ciphertext in the distributed file system DFS, and finally packages the address of the ciphertext in the DFS into a transaction and uploads it to the blockchain;
S30, malicious participant identification: the MO and SM cooperate to determine whether participants executed the normalization operation, generating a correctly normalized participant list L; the MO first trains the global model to obtain a baseline gradient and normalizes and encrypts it to obtain a ciphertext, then computes the cosine similarity ciphertext between the baseline gradient ciphertext and the local gradient ciphertext of each participant in L, and applies a homomorphic ciphertext comparison algorithm: when the cosine similarity value is less than or equal to 0, the weight of the participant's gradient is set to 0, otherwise the weight is set to the cosine similarity value; this process operates entirely in the ciphertext state, and finally the gradient weight ciphertexts are stored in the DFS, a transaction is generated, and it is uploaded to the blockchain;
S40, gradient ciphertext aggregation: the Algorand consensus mechanism is invoked to select an aggregator; the aggregator calculates the round's aggregate ciphertext using the participants' gradient weights and stores the aggregate ciphertext in the distributed file system DFS; the aggregation process generates a transaction that is packaged into a new block; the SM queries the transaction from the blockchain to obtain the aggregate ciphertext, decrypts it using its own private key sk_s to obtain the plaintext of the aggregation result and the weight sum of the participants in L, calculates the aggregation gradient using the weight sum, encrypts the aggregation gradient using the common key pk_x shared by the MO and the participants, stores the aggregation gradient ciphertext in the DFS, generates a transaction, and uploads it to the blockchain;
S50, model update by the participants and the MO: the participants and the MO obtain the DFS address of the aggregation gradient ciphertext from the blockchain, download the aggregation gradient ciphertext from the DFS according to that address, decrypt it using the private key sk_x shared by the MO and the participants to obtain the aggregation gradient, and update the local model; if the accuracy of the MO's test of the model reaches a preset value, the federal learning task ends, otherwise the participants perform the next round of training, repeatedly executing steps S20, S30, S40 and S50;
the distributed file system DFS stores the data ciphertexts while their storage addresses are stored on the blockchain, and an entity in the system downloads the corresponding data ciphertext according to the address, so as to reduce the storage overhead of the blockchain.
2. The blockchain-based anti-poisoning attack privacy protection federal learning method according to claim 1, wherein the step S10 includes the steps of:
S101, a system administrator SM constructs a blockchain and determines that the Algorand consensus protocol is adopted; participants and the model owner MO register on the blockchain, each obtaining an account and generating their own pair of public and private keys (the public-private key pair creates a secret channel between sender and receiver), a wallet address wa, a unique identity id, and funds; the wallet address is used to generate transactions; all participants, the MO, and the consensus nodes lock a portion of their funds on the blockchain as a deposit and create a block that records their deposit ownership statement transactions;
S102, the MO issues a federal learning task by publishing an asset declaration transaction, the task comprising: the initial model W_0, a model number mid, and a learning rate η; model owner MO_j issues an asset declaration transaction TX_MO, where H(W_{j,0}) is the hash address of the initial model stored in the DFS, sk_{MO_j} is MO_j's private key, the digital signature is used to prove that MO_j indeed owns the model, and "keywords" represents the model description of the FL task; for ease of notation and description, the subscript j of {mid_j, W_{j,0}, η_j, MO_j} is dropped, and only one representative MO is described in the following;
S103, the SM obtains the initial model W_0 according to the MO's transaction TX_MO; the SM generates a set of CKKS public-private keys (pk_s, sk_s, evk_s, rk_s) for the FL task using the key generation algorithm of the homomorphic encryption scheme CKKS, where pk_s is a public key used to encrypt plaintext polynomials; sk_s is a private key kept secret by the SM and used to decrypt the ciphertext of each round's aggregation result; evk_s is an evaluation key used in CKKS homomorphic multiplication to obtain ciphertext products; rk_s is a rotation key used to shift a vector ciphertext to the left; the SM then sends the security parameters (pk_s, evk_s, rk_s) to the MO over the secret channel;
S104, participants voluntarily join the FL task by issuing data asset declaration transactions; suppose a participant P_i, i = 1, 2, …, issues a data asset declaration transaction TX_{D_i} to join the MO's FL task, where sil is the number of P_i's dataset D_i, sk_{P_i} is P_i's private key, H(D_i) is the hash value of dataset D_i, the digital signature is used to prove that P_i indeed owns D_i, and H(TX_MO) identifies the FL task P_i joins, i.e., the hash value of transaction TX_MO; the MO generates a set of CKKS public-private keys (pk_x, sk_x, evk_x, rk_x) using the CKKS key generation algorithm and, according to the participants' transactions TX_{D_i}, sends the security parameter SP = {(pk_x, sk_x, evk_x, rk_x), (pk_s, evk_s, rk_s), δ}, where δ is the scaling factor in CKKS encoding, to all participants over the secret channel.
3. The blockchain-based anti-poisoning attack privacy protection federal learning method according to claim 1 or 2, wherein the step S20 includes the steps of:
S201, P_i obtains the learning rate η and the hash address of the initial model W_0 from the MO's transaction TX_MO and downloads W_0 from the DFS according to that address; then P_i uses its private key sk_{P_i} to decrypt the security parameter SP sent by the MO over the secret channel, obtaining the homomorphic encryption CKKS information required for encryption and decryption: (pk_x, sk_x, evk_x, rk_x), (pk_s, evk_s, rk_s), and the scaling factor δ;
S202, P_i trains the model using its local dataset D_i; let f(x, W) be the neural network model, where x is an input and W are the model parameters, and use the cross entropy function as the loss function (equation (1)):

L_f(W) = −(1/n) Σ_{k=1}^{n} y_k · log f(x_k, W)    (1)
where ⟨x_k, y_k⟩ ∈ D_i, x_k is an input, y_k is its label, and n is the size of dataset D_i; for multi-classification problems, the cross entropy function is combined with the softmax function, and the output probability of each class is obtained through the softmax function;
In the r-th round, P_i trains the model using the previous round's model parameters W_{r-1} to obtain the gradient g_{i,r}; the goal of training the neural network model is to find the model parameters that minimize L_f(·), using the gradient descent method to obtain the optimal solution, computing the gradient at each iteration and then updating the model parameters by back-propagating errors; thus P_i calculates the gradient of the r-th round loss function (equation (2)):

g_{i,r} = ∇L_f(W_{r-1}, D_i*)    (2)
where ∇L_f(·) is the gradient of the loss function L_f and D_i* is a subset of the dataset D_i;
S203, regarding the gradient as a direction vector, the gradient is normalized before encryption; in order to compute cosine similarity between gradients in the ciphertext state, P_i normalizes its local gradient using equation (3):

ĝ_{i,r} = g_{i,r} / ||g_{i,r}||    (3)
where ||·|| is the modulus of a vector and ĝ_{i,r} is the normalized gradient vector, i.e., the unit gradient vector;
S204, to protect the privacy of the local gradient, P_i treats the gradient of each layer of the neural network as a vector and encrypts it layer by layer, obtaining a gradient ciphertext for each layer; only one layer's gradient vector ciphertext is described in detail; P_i first calls the encoding function Ecd(ĝ_{i,r}, δ) of the fully homomorphic encryption CKKS to obtain a polynomial g(X), and then calls the CKKS encryption function Enc(pk_s, g(X)) using the SM's public key pk_s to obtain the local gradient ciphertext ⟦ĝ_{i,r}⟧_{pk_s}; this process is described by equation (4):

⟦ĝ_{i,r}⟧_{pk_s} = Enc(pk_s, Ecd(ĝ_{i,r}, δ))    (4)
where ⟦x⟧_pk denotes the ciphertext of vector x after encoding x and encrypting it using the public key pk;
S205, P_i stores the local gradient ciphertext ⟦ĝ_{i,r}⟧_{pk_s} in the DFS and packages its hash address H(⟦ĝ_{i,r}⟧_{pk_s}) into a transaction TX_{i,r}, which is uploaded to the blockchain.
4. The blockchain-based anti-poisoning attack privacy protection federal learning method according to claim 1, wherein the step S30 includes the steps of:
S301, the MO obtains the hash address of the local gradient ciphertext ⟦ĝ_{i,r}⟧_{pk_s} from transaction TX_{i,r} and obtains the participant's gradient ciphertext from the DFS according to that address;
S302, the MO calls the inner product function of the homomorphic encryption CKKS to obtain the ciphertext of each P_i's gradient-vector modulus ⟦m_{i,r}⟧, stores the calculation result in the DFS, packages the corresponding hash address into a transaction, and uploads the transaction to the blockchain;
S303, the SM obtains the ciphertext of P_i's gradient-vector modulus from the blockchain and decrypts it using its own private key sk_s to obtain m_{i,r}; if m_{i,r} = 1, P_i is added to the participant list L; the MO stores the honestly normalized participant list L in the DFS and generates a transaction TX_L containing the hash address H(L) of the list, together with H(TX_MO) and "keywords", which is uploaded to the blockchain;
S304, the MO uses the root dataset D_0 to train the neural network model f(x, W_{r-1}) and obtains a baseline gradient g_{0,r}; the MO normalizes the baseline gradient to obtain ĝ_{0,r}, first calls the encoding function Ecd(ĝ_{0,r}, δ) of the fully homomorphic encryption CKKS to obtain a polynomial g(X), and then calls the CKKS encryption function Enc(pk_s, g(X)) using the SM's public key pk_s to obtain the baseline gradient ciphertext ⟦ĝ_{0,r}⟧_{pk_s}; this process is described by equation (5):

⟦ĝ_{0,r}⟧_{pk_s} = Enc(pk_s, Ecd(ĝ_{0,r}, δ))    (5)
Using the SM's CKKS evaluation key evk_s and rotation key rk_s, the MO calls the inner product function of the homomorphic encryption CKKS to compute the cosine similarity ciphertext ⟦cos_{j,r}⟧ between the gradient vector ciphertext ⟦ĝ_{j,r}⟧_{pk_s} of each participant in list L and the MO's baseline gradient vector ciphertext ⟦ĝ_{0,r}⟧_{pk_s} (equation (6)):

⟦cos_{j,r}⟧ = ⟨⟦ĝ_{j,r}⟧_{pk_s}, ⟦ĝ_{0,r}⟧_{pk_s}⟩    (6)
S305, for each P_j (P_j ∈ L), the cosine similarity value cos_{j,r} between P_j's gradient vector and the MO's gradient vector is evaluated; when cos_{j,r} ≤ 0, P_j's gradient weight is set to 0, otherwise the weight is set to cos_{j,r}, as follows;
For each P_j (P_j ∈ L), the MO calls the CKKS Add(·,·) function to add ⟦cos_{j,r}⟧ and the encryption of 1; the result, recorded as res_{j,r}, has value range [0, 2]; the MO then calls the CKKS Mult(·,·) function to multiply res_{j,r} by 1/2, recording the result as re_{j,r}, and calls the rescaling function RS(re_{j,r}) to obtain the converted ciphertext rt_{j,r}, whose value range is [0, 1]; when cos_{j,r} = 0, the converted ciphertext rt_{j,r} encrypts 1/2; finally the MO calls the CKKS homomorphic ciphertext numerical comparison function with parameter d = 2α − 3, where α is the number of binary bits of the ciphertext rt_{j,r}; the comparison function returns the ciphertext having the larger plaintext value between rt_{j,r} and the encryption of 1/2; this process is described by equation (7):

⟦mx_{j,r}⟧ = Max(rt_{j,r}, ⟦1/2⟧)    (7)
S306, the MO calls the CKKS Mult(·,·) function to multiply ⟦mx_{j,r}⟧ by 2, recording the result as ret_{j,r}, and then calls the Add(·,·) function to add ret_{j,r} and the encryption of −1; the result is recorded as ⟦S_{j,r}⟧; this process is described by equation (8):

⟦S_{j,r}⟧ = Add(Mult(⟦mx_{j,r}⟧, 2), ⟦−1⟧)    (8)
where ⟦S_{j,r}⟧ denotes participant P_j's weight ciphertext; using equation (8), the MO converts the comparison result back to the original value range before the range conversion;
the MO calls the CKKS Add(S_{j,r}, S_{j+1,r}) function |L| − 1 times, P_j ∈ L, j = 1, 2, …, |L| − 1, obtaining the sum of the weights of the |L| participants (equation (9)):

⟦sum_r⟧ = Σ_{j∈L} ⟦S_{j,r}⟧    (9)

where |L| represents the number of participants in the participant list L;
S307, the MO stores the weight ciphertexts ⟦S_{j,r}⟧ and their sum ⟦sum_r⟧ in the DFS, packages the corresponding hash addresses, together with H(TX_MO) and "keywords", into a transaction TX_weight, and uploads TX_weight to the blockchain.
5. The blockchain-based anti-poisoning attack privacy protection federal learning method according to claim 1, wherein the step S40 includes the steps of:
S401, the blockchain's Algorand consensus protocol uses a verifiable random function to randomly select a committee from all consensus nodes, and one member of the committee is selected as the aggregator;
S402, the aggregator first queries transaction TX_L from the blockchain to obtain the hash address H(L) of the honestly normalized participant list L and obtains L from the DFS according to that address; queries transactions {TX_{j,r} | j ∈ L} to obtain the hash addresses of the gradient ciphertexts and obtains ⟦ĝ_{j,r}⟧_{pk_s} from the DFS; and queries transaction TX_weight to obtain the hash addresses of the weight ciphertexts and obtains ⟦S_{j,r}⟧ and ⟦sum_r⟧ from the DFS; the aggregator then uses the SM's CKKS evaluation key evk_s: for each P_j (P_j ∈ L), it calls the CKKS Mult(·,·) function to multiply ⟦S_{j,r}⟧ and ⟦ĝ_{j,r}⟧_{pk_s}, recording the result as agg_{j,r}; for each P_j (P_j ∈ L), the aggregator then calls the CKKS rescaling function RS(agg_{j,r}) to prevent the scale and error of the ciphertext from growing, obtaining the converted ciphertext ag_{j,r}; finally the aggregator calls the CKKS Add(ag_{j,r}, ag_{j+1,r}) function |L| − 1 times, P_j ∈ L, j = 1, 2, …, |L| − 1, obtaining the aggregate ciphertext of the |L| participants; this gradient ciphertext aggregation is described by equation (10):

⟦t_r⟧ = Σ_{j∈L} ⟦S_{j,r}⟧ · ⟦ĝ_{j,r}⟧_{pk_s}    (10)
S403, after the aggregator completes the calculation, it stores the aggregate ciphertext ⟦t_r⟧ in the DFS and generates a transaction TX_Agg, packaging all of this round's transactions into a new block_r = {TX_Agg, TX_{r,j} | j ∈ L}; then the committee members verify the new block_r and vote on it; a member that agrees with block_r generates a voting transaction; if more than 2/3 of the committee members agree with block_r, the block is admitted and the aggregator is rewarded, and all committee members broadcast the block; otherwise, the aggregator's deposit is confiscated and awarded to the other committee members, the next committee member in the block proposal sequence becomes the aggregator, and aggregation steps S402 and S403 are re-executed until the committee reaches agreement on block_r;
S404, after blockchain consensus is reached, the SM first queries transaction TX_Agg from the blockchain to obtain the aggregate ciphertext ⟦t_r⟧ and queries transaction TX_weight to obtain the weight-sum ciphertext ⟦sum_r⟧; the SM then uses its own private key sk_s to call the decryption function Dec(sk_s, ·) of the homomorphic encryption CKKS, obtaining the plaintext polynomials t(X) and sum(X) respectively, and calls the CKKS decoding functions Dcd(t(X), δ) and Dcd(sum(X), δ) to obtain the aggregation result g_r and the weight sum sum_r respectively; this process is described by equation (11):

g_r = Dcd(Dec(sk_s, ⟦t_r⟧), δ),  sum_r = Dcd(Dec(sk_s, ⟦sum_r⟧), δ)    (11)
finally, the SM calculates the aggregation gradient (equation (12)):

ḡ_r = g_r / sum_r    (12)
the SM calls the encoding function Ecd(ḡ_r, δ) of the homomorphic encryption CKKS to obtain a polynomial m(X), and then calls the CKKS encryption function Enc(pk_x, m(X)) using the public key pk_x common to all participants and the MO, obtaining the aggregation gradient ciphertext ⟦ḡ_r⟧_{pk_x}; this process is described by equation (13):

⟦ḡ_r⟧_{pk_x} = Enc(pk_x, Ecd(ḡ_r, δ))    (13)
the SM stores ⟦ḡ_r⟧_{pk_x} in the DFS and packages its hash address H(⟦ḡ_r⟧_{pk_x}) into a transaction TX_Result, which is uploaded to the blockchain.
6. The blockchain-based anti-poisoning attack privacy preserving federal learning method of claim 1, wherein the step S50 includes the steps of:
S501, the MO and each P_i query transaction TX_Result from the blockchain and download the aggregation gradient ciphertext ⟦ḡ_r⟧_{pk_x} from the DFS according to the hash address; they then use the private key sk_x shared by the MO and the participants to call the CKKS decryption function Dec(sk_x, ⟦ḡ_r⟧_{pk_x}), obtaining a plaintext polynomial p(X), and call the CKKS decoding function Dcd(p(X), δ) to obtain the global aggregation gradient ḡ_r; this process is described by equation (14):

ḡ_r = Dcd(Dec(sk_x, ⟦ḡ_r⟧_{pk_x}), δ)    (14)
S502, the MO and the participants use the aggregation gradient ḡ_r to update the local model (equation (15)):

W_r = W_{r-1} − η · ḡ_r    (15)
S503, after updating the model, the MO tests its accuracy; if the model accuracy meets expectations, the MO generates a transaction and uploads it to the blockchain, indicating that the FL task is over; otherwise P_i proceeds to S201 and performs round r + 1 of local model training.