CN115967526B - Privacy protection method for gradient boosting decision tree outsourced inference - Google Patents
- Publication number
- CN115967526B (application CN202211324597.6A)
- Authority
- CN
- China
- Prior art keywords
- tree
- node
- user
- cloud server
- ahe
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/50—Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate
Abstract
The invention provides a privacy protection method for outsourced inference over gradient boosting decision trees (GBDT). The model owner sends the GBDT model, after private tree conversion, to a cloud server; the user encrypts the data to be predicted with a keyed hash and an addition homomorphic key and sends the encrypted data to the cloud server; the cloud server and the user execute a security comparison protocol over D rounds of communication, producing an encrypted prediction result that is sent to the user; the user decrypts it to obtain the final prediction result. Furthermore, in each round of communication in the inference stage, the cloud server performs a random tree permutation before sending the comparison result of the current node under ciphertext. The invention applies lightweight hashing and addition homomorphic encryption to outsourced GBDT inference, together with customized security comparison and random tree permutation protocols, which greatly accelerate computation, reduce the communication cost of outsourced inference, and prevent users from deducing private information about the gradient boosting decision trees.
Description
Technical Field
The invention relates to information security technology, and in particular to a privacy protection technique for gradient boosting decision trees.
Background Art
Classification tasks in current industry scenarios such as credit modeling, fraud detection, and medical diagnosis are often accomplished using gradient boosting decision trees (GBDT). GBDT builds a number of decision trees one by one, each trying to reduce the residual of the previous tree. In the prediction stage, the final result is the sum of the outputs of all trees. With the widespread adoption of cloud computing, outsourcing GBDT inference services to the cloud is gaining increasing popularity.
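As context for the scheme, a plaintext GBDT prediction is just a root-to-leaf walk in every tree followed by a sum of the reached leaf values. A minimal sketch (the `Node` layout, field names, and toy trees below are illustrative, not taken from the patent):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[str] = None    # None marks a leaf
    threshold: float = 0.0
    value: float = 0.0               # prediction stored at a leaf
    left: Optional["Node"] = None    # taken when x[feature] < threshold
    right: Optional["Node"] = None   # taken when x[feature] >= threshold

def predict_tree(node: Node, x: dict) -> float:
    # Walk from the root until a leaf is reached
    while node.feature is not None:
        node = node.right if x[node.feature] >= node.threshold else node.left
    return node.value

def predict_gbdt(trees: list, x: dict) -> float:
    # The final GBDT result is the sum of all tree outputs
    return sum(predict_tree(t, x) for t in trees)

# Two toy depth-1 trees
t1 = Node("f1", 0.5, left=Node(value=-2), right=Node(value=3))
t2 = Node("f2", 1.0, left=Node(value=1), right=Node(value=4))
print(predict_gbdt([t1, t2], {"f1": 0.7, "f2": 0.2}))  # 3 + 1 = 4
```

The additive structure is what makes the scheme below possible: the cloud server can sum the per-tree leaf values under an addition homomorphic cipher without ever seeing them.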
Outsourcing the GBDT inference service to the cloud raises key privacy problems in the communication process. First, the model owner's model is typically private, because training an effective model requires significant investment in data sets, computing resources, and labor. Model owners naturally do not want to expose private models to cloud servers in plaintext. Second, the user's query data may be sensitive, such as financial information or medical data; sending it in the clear can easily compromise the user's privacy. Third, the user may not want the server to learn the true prediction outcome, such as a financial judgment or medical diagnosis; at the same time, the user should not be able to infer information about the GBDT from the predictions. The server should only make predictions, without knowing the true results. Thus, security must be embedded in the design of inference outsourcing from the beginning, in order to guarantee the privacy of proprietary models, sensitive data, and private predictions.
To alleviate such problems, prior efforts have explored decision tree evaluation methods to design privacy-preserving decision tree frameworks, but these either add significant overhead or fail to achieve comprehensive privacy preservation of the model, predictions, and intermediate values.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method achieving comprehensive privacy protection with low computation and low communication overhead in the outsourced inference process of gradient boosting decision trees.
The technical scheme adopted by the invention to solve this problem is a privacy protection method for gradient boosting decision tree outsourced inference, comprising the following steps:
an initialization stage:
the user and the model owner share a hash key and an addition homomorphic key;
the model owner trains a local gradient boosting decision tree (GBDT) model, the local GBDT model consisting of n decision trees; private tree conversion is performed on the n decision trees to obtain n cipher private trees, and the GBDT model after private tree conversion is sent to the cloud server;
reasoning:
the user encrypts the data to be predicted: the feature names are hashed with the hash key and the data values are encrypted with the addition homomorphic key, and the user sends the encrypted data to be predicted to the cloud server;
and the cloud server and the user execute a security comparison protocol: the cloud server and the user communicate over D rounds, executing the security comparison protocol under ciphertext; in each round the cloud server sends the decision result of the current node under ciphertext to the user; after D rounds of iteration, the prediction results encrypted under the addition homomorphic key are obtained, summed, and sent to the user;
and the user decrypts by using the addition homomorphic key to obtain a final prediction result.
Furthermore, to prevent leakage of model information between the user and the cloud server while the security comparison protocol is executed, in each round of communication in the inference stage the cloud server performs a random tree permutation before sending the predicted value of the current node under ciphertext. For each node of the decision tree, the feature name of a non-leaf node is hashed with the hash key, and the threshold of a non-leaf node and the predicted value of a leaf node are encrypted with the addition homomorphic key, protecting node privacy; leaf nodes that do not reach the maximum depth are replaced with random nodes, protecting path privacy. The random-node replacement of a leaf node that does not reach the maximum depth is specifically: the feature name of the random node is hashed with the hash key, the threshold of the random node is encrypted with the addition homomorphic key, and the left and right child nodes of the random node are redirected to the replaced leaf node. The predicted values of the decision tree nodes after the random tree permutation are sent to the user; the user decrypts the currently received predicted values and returns the plaintext; upon receiving the plaintext, the cloud server performs the inverse random tree permutation, recovers the correct mapping relationship according to the offset table, and selects the next node for all trees until the leaf nodes are reached.
Different from existing privacy protection methods, the invention provides a privacy protection framework combining GBDT, hashing, AHE, and customized protocols. Lightweight hashing and addition homomorphic encryption are applied to outsourced inference over gradient boosting decision trees. Specifically, the framework first designs a conversion method for the gradient boosting decision tree to protect the node and structure privacy of the model. On top of the protected model, it designs customized comparison and random tree permutation protocols that greatly speed up computation and reduce the communication cost of outsourced inference, while preventing users from inferring privacy associated with the gradient boosting decision trees. Compared with current privacy-preserving work, the running time of the framework is reduced by a factor of 47, and the traffic by a factor of 37.
The beneficial effects of the invention are as follows: the outsourced inference process of the GBDT is given efficient, safe, and complete privacy protection, while computation and communication overhead are reduced.
Drawings
Fig. 1 is a schematic diagram of an embodiment.
Detailed Description
The meaning of the following parameter symbols is explained:
H_s denotes the keyed hash function, with s the hash key;
AHE denotes addition homomorphic encryption; pk and sk denote the public and private keys of AHE, respectively; AHE.Enc denotes the AHE encryption operation and AHE.Dec the AHE decryption operation; ⊞ denotes addition between ciphertexts and ⊟ subtraction between ciphertexts;
Data[·] denotes the test data, originally plaintext, whose feature names are hashed by the user and whose values are encrypted under the addition homomorphic scheme;
v denotes a node; v_{i,j} denotes the j-th node of the i-th tree, where i ranges from 1 to n;
x denotes a feature value, f a feature name, θ a threshold, and value a predicted value;
v_{i,j}.f, v_{i,j}.θ, and v_{i,j}.value denote the feature name, threshold, and predicted value of the j-th node of the i-th tree, respectively.
As shown in fig. 1, the method of the present invention comprises the steps of:
(1) The user possesses test data composed of feature values x_1, x_2, x_3, x_4 and the corresponding feature names f_1, f_2, f_3, f_4; the model owner and the user share a hash key s and an AHE key pair (public key pk and private key sk).
(2) The model owner trains a local GBDT composed of n decision trees G_1, G_2, …, G_n, encrypts it using the private tree conversion algorithm to obtain n cipher private trees (CPT, Cipher Private Tree), and sends them to the cloud server.
The invention modifies each tree of the GBDT by private tree conversion. For node privacy, we hash the feature names of non-leaf nodes, i.e., H_s(v_{i,j}.f), where v_{i,j}.f denotes the feature name of the j-th node of the i-th tree, and encrypt the threshold of each non-leaf node and the predicted value of each leaf node with AHE, i.e., AHE.Enc(pk, v_{i,j}.θ) and AHE.Enc(pk, v_{i,j}.value).
For path privacy, we replace each leaf node v that does not reach the maximum depth with a random node (random feature name, feature value, and threshold), which likewise undergoes hash and AHE encryption, and redirect the left and right child nodes of the random node to v. As shown in Fig. 1, G_1 and CPT_1 differ: in CPT_1 the leaf values and the thresholds of non-leaf nodes are encrypted with AHE, and the feature names of non-leaf nodes are anonymized with the hash; moreover, CPT_1 adds random nodes, so its tree structure differs from that of G_1. On the premise of guaranteeing node privacy, the added random nodes protect path privacy, and the cloud server cannot obtain real information about the tree structure.
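The conversion can be sketched as follows, assuming HMAC-SHA256 as an instantiation of the keyed hash H_s and a tagged placeholder standing in for AHE.Enc (a real deployment would use an additively homomorphic scheme such as Paillier). The dict-based tree layout, the names `S_KEY`, `ahe_enc`, `convert`, and the single level of padding are illustrative simplifications:

```python
import hmac, hashlib, random

S_KEY = b"shared-hash-key"          # hash key s shared with the user (example value)

def h(name: str) -> str:
    # H_s: keyed hash anonymizes feature names
    return hmac.new(S_KEY, name.encode(), hashlib.sha256).hexdigest()[:16]

def ahe_enc(v):
    # Stand-in for AHE.Enc(pk, v); a real AHE scheme (e.g. Paillier) goes here
    return ("AHE", v)

def convert(node: dict, depth: int, max_depth: int) -> dict:
    """Private tree conversion: hash feature names, encrypt thresholds and
    leaf values, and replace an early leaf with a random node whose left and
    right children both redirect to the original leaf (one level of padding
    shown; a full implementation pads until max_depth is reached)."""
    if "value" in node:                                   # leaf node
        leaf = {"value": ahe_enc(node["value"])}
        if depth < max_depth:                             # hide the true structure
            return {"f": h("rnd%d" % random.randrange(10**6)),
                    "theta": ahe_enc(random.randrange(100)),
                    "left": leaf, "right": leaf}          # both point at v
        return leaf
    return {"f": h(node["f"]), "theta": ahe_enc(node["theta"]),
            "left": convert(node["left"], depth + 1, max_depth),
            "right": convert(node["right"], depth + 1, max_depth)}

g1 = {"f": "age", "theta": 30,
      "left": {"value": 0.2},        # leaf above max depth: gets padded
      "right": {"f": "income", "theta": 50,
                "left": {"value": -0.1}, "right": {"value": 0.4}}}
cpt1 = convert(g1, 0, 2)
```

Because both children of the padding node point at the same encrypted leaf, evaluation is unaffected whichever branch is taken, while the visible topology no longer reveals the true depth of the original leaf.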
(3) The user encrypts the test data: the feature names f_1, f_2, f_3, f_4 are hashed to obtain H_s(f_1), H_s(f_2), H_s(f_3), H_s(f_4); the feature values x_1, x_2, x_3, x_4 are encrypted with AHE to obtain AHE.Enc(pk, x_1), AHE.Enc(pk, x_2), AHE.Enc(pk, x_3), AHE.Enc(pk, x_4); the encrypted test data are sent to the cloud server;
(4) The cloud server and the user perform node determination through D rounds of communication, where D denotes the maximum depth of the trees in the GBDT:
(4-1) the cloud server and the user execute the security comparison protocol, computing the comparison results of all trees under ciphertext, starting from the root node of each tree;
the security comparison protocol is the process by which the cloud server and the user determine the result at an internal node. When the protocol executes, the cloud server holds the AHE-encrypted threshold AHE.Enc(pk, v_{i,j}.θ) of each non-leaf node, obtained from the model owner, and the hashed node feature name H_s(v_{i,j}.f), obtained from the user. The cloud server draws a random number r_i for each tree and computes the encrypted comparison result e:

AHE.Enc(pk, e) = ((Data[v_{i,j}.f] ⊟ AHE.Enc(pk, v_{i,j}.θ)) × r_i) ⊞ AHE.Enc(pk, 2^x),

where x is chosen such that 2^x > (Data[v_{i,j}.f] − v_{i,j}.θ) × r_i. After the comparison result AHE.Enc(pk, e) is computed securely, the user receives it and decrypts AHE.Dec(AHE.Enc(pk, e), sk). Finally, the user obtains the result e = (Data[v_{i,j}.f] − v_{i,j}.θ) × r_i + 2^x. Multiplying the encrypted difference by r_i hides the threshold v_{i,j}.θ from the user. If the (x+1)-th bit of e is 1, then Data[v_{i,j}.f] ≥ v_{i,j}.θ; if the (x+1)-th bit of e is 0, then Data[v_{i,j}.f] < v_{i,j}.θ.
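The comparison trick can be reproduced with any additively homomorphic scheme. Below is a sketch using a toy Paillier instance with tiny fixed primes (insecure, for illustration only); the parameter choices X = 10 and r_i < 100 are assumptions that satisfy 2^x > (Data − θ) × r_i for the small values used here:

```python
import math, random

# --- toy Paillier (INSECURE: tiny primes, demo only) ---
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam % n, -1, n)       # with g = n+1, L(g^lam mod n^2) = lam mod n

def enc(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def c_add(c1, c2):             # ciphertext addition: Enc(a)*Enc(b) = Enc(a+b)
    return (c1 * c2) % n2

def c_sub(c1, c2):             # ciphertext subtraction: multiply by the inverse
    return (c1 * pow(c2, -1, n2)) % n2

def c_scal(c, k):              # scalar multiply: Enc(a)^k = Enc(k*a)
    return pow(c, k, n2)

# --- security comparison: e = (data - theta) * r_i + 2^X under ciphertext ---
X = 10                         # 2^X must exceed |data - theta| * r_i

def secure_compare(c_data, c_theta):
    r_i = random.randrange(1, 100)          # blinds the gap from the user
    return c_add(c_scal(c_sub(c_data, c_theta), r_i), enc(2 ** X))

# The user decrypts and inspects bit X (the "(x+1)-th bit" of the description)
e_ge = dec(secure_compare(enc(25), enc(17)))   # data >= theta
e_lt = dec(secure_compare(enc(17), enc(25)))   # data <  theta
print((e_ge >> X) & 1, (e_lt >> X) & 1)        # 1 0
```

The decrypted value reveals only the sign of the blinded difference: because the gap is multiplied by an unknown r_i before 2^X is added, the user cannot recover the threshold itself.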
Further, considering that model information could still leak while the user and the cloud server execute the security comparison protocol (a user could probe the threshold information of each tree by finely adjusting the test data), we design a random tree permutation protocol to prevent users from obtaining GBDT information.
Optionally, steps (4-2) and (4-3) are added; the comparison result obtained by the user is unaffected if they are not performed.
(4-2) the cloud server and the user perform the random tree permutation (RTP): the cloud server uses the random tree permutation to generate an offset table, through which it obfuscates the relationship between each tree and its comparison result; the user receives and decrypts all the results and returns the plaintext.
(4-3) the cloud server performs the inverse random tree permutation (Anti-RTP): it recovers the correct mapping relationship according to the offset table and selects the next node for all trees until the leaf nodes are reached, obtaining the true result of each tree.
(5) The cloud server performs ciphertext addition over all the encrypted leaf values AHE.Enc(pk, v_{i,J}.value), where J denotes the index of the leaf node reached in each tree, obtaining the encrypted prediction result prediction_enc, which is sent to the user. Preferably, during step (4), because the depths of the trees differ, some trees reach their leaf nodes earlier; the cloud server can therefore sum the results of trees that reached a leaf early, so the communication overhead of each round decreases steadily and the final inference time is reduced.
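The aggregation in step (5) relies only on the additive homomorphism: multiplying Paillier-style ciphertexts adds the underlying leaf values, so the server never sees a plaintext score. A minimal sketch with the same kind of toy Paillier instance (insecure tiny primes; the leaf values 3, 5, 9 are illustrative):

```python
import math, random

# Toy Paillier (INSECURE, demo only)
p, q = 293, 433
n, n2, g = p * q, (p * q) ** 2, p * q + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)
mu = pow(lam % n, -1, n)

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

leaf_values = [3, 5, 9]                 # reached leaf value of each tree (plaintext view)
cts = [enc(v) for v in leaf_values]     # AHE.Enc(pk, v_{i,J}.value), held by the server

prediction_enc = 1                      # multiplicative identity acts as Enc(0)
for c in cts:
    prediction_enc = (prediction_enc * c) % n2   # ciphertext "addition"

print(dec(prediction_enc))              # 17 = 3 + 5 + 9
```

Only the user, holding sk, can perform the final decryption of step (6); the server sees nothing but the running ciphertext product.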
(6) The user performs AHE decryption on the encrypted prediction result, AHE.Dec(prediction_enc, sk), to obtain the final prediction result prediction.
RTP and Anti-RTP:
In the scheme of the invention, prediction depends on D rounds of communication. In each round, after the security comparison protocol is executed under ciphertext, the cloud server (CS) needs to determine the true path of each tree from the plaintext. However, if the order in which the ciphertexts are sent corresponds to the order of the trees, the user may infer the thresholds, features, and other information of the trees by crafting specific test data. For example, the first ciphertext sent in the first round corresponds to the root-node decision of the first tree. Therefore, we propose a random tree permutation protocol to obfuscate the relationship between each tree and its comparison result.
For the random tree permutation algorithm RTP we choose the well-known Fisher-Yates shuffle, which iterates over a sequence from end to beginning (or vice versa) and, at each position i, swaps the value at i with the value at a random target position j. The procedure can be expressed as:

(s_1, s_2, …, s_n) → (v_1, v_2, …, v_n) with v_{p(i)} = s_i,

where (p(1), p(2), …, p(n)) is a random permutation of the sequence (1, 2, …, n).
First, the cloud server generates a random mapping table p and permutes the comparison results according to it: for the comparison results (s_1, s_2, …, s_n) (ciphertexts), the i-th ciphertext is placed at position p(i), yielding the randomly permuted results (v_1, v_2, …, v_n). The recovery algorithm Anti-RTP relies on the same mapping table p: the user decrypts the permuted comparison results and sends the plaintexts to the cloud server, which places the plaintext at position p(i) back into the i-th slot, obtaining the plaintext comparison results in the correct order.
RTP obfuscates, in each round of communication, the correspondence between the comparison results and the decision trees. The specific process is as follows: in each round, the cloud server shuffles the security comparison results with the RTP algorithm and sends them to the user; the user decrypts the shuffled ciphertexts and sends the plaintexts to the cloud server; by performing the inverse random tree permutation Anti-RTP, the cloud server recovers the correct order and obtains the true result of each tree. Thus, the CS knows whether each tree should take the left or the right child node in the next round. Even if the user deduces the information of a certain node, he or she does not know which tree it belongs to and cannot follow its child nodes in the next round. The protocol therefore protects both the node privacy and the path privacy of the GBDT. Furthermore, the random permutation algorithm needs only a few steps, with one integer draw and one swap per iteration. Thanks to its efficiency and simplicity, shuffling and recovering a vector requires only linear complexity O(n), where n is the number of decision trees that execute the security comparison and random tree permutation protocols in this round.
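The permutation pair can be sketched directly from this description: a Fisher-Yates shuffle produces the offset table p, RTP places item i at position p(i), and Anti-RTP reads position p(i) back into slot i. (The function names and the seeded RNG below are illustrative.)

```python
import random

def make_offset_table(n: int, rng: random.Random) -> list:
    # Fisher-Yates: iterate from the end, swap position i with a random j <= i
    p = list(range(n))
    for i in range(n - 1, 0, -1):
        j = rng.randrange(i + 1)
        p[i], p[j] = p[j], p[i]
    return p

def rtp(items: list, p: list) -> list:
    # Random tree permutation: the i-th ciphertext goes to position p(i)
    out = [None] * len(items)
    for i, target in enumerate(p):
        out[target] = items[i]
    return out

def anti_rtp(shuffled: list, p: list) -> list:
    # Inverse: the item at position p(i) returns to slot i
    return [shuffled[target] for target in p]

rng = random.Random(7)                    # seeded only to make the demo repeatable
results = ["s1", "s2", "s3", "s4", "s5"]  # per-tree comparison ciphertexts
p = make_offset_table(len(results), rng)
mixed = rtp(results, p)                   # sent to the user in shuffled order
assert anti_rtp(mixed, p) == results      # server recovers the true order, O(n)
```

Both directions are a single pass over the vector, matching the stated linear complexity.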
Claims (2)
1. A privacy protection method for gradient boosting decision tree outsourced inference, characterized by comprising the following steps:
an initialization stage:
the user and the model owner share a hash key and an addition homomorphic key;
the model owner trains a local gradient boosting decision tree (GBDT) model, the local GBDT model consisting of n decision trees; private tree conversion is performed on the n decision trees to obtain n cipher private trees, and the GBDT model after private tree conversion is sent to the cloud server;
reasoning:
the user encrypts the data to be predicted: the feature names are hashed with the hash key and the data values are encrypted with the addition homomorphic key, and the user sends the encrypted data to be predicted to the cloud server;
and the cloud server and the user execute a security comparison protocol: the cloud server and the user communicate over D rounds, executing the security comparison protocol under ciphertext; in each round the cloud server sends the predicted value of the current node under ciphertext to the user; after D rounds of iteration, the predicted values encrypted under the addition homomorphic key are obtained and summed to produce the prediction result, which is sent to the user; D is the maximum depth of the trees in the GBDT:
the user decrypts the homomorphic key by using the addition to obtain a final prediction result;
the D-round communication performed by the cloud server and the user executing the security comparison protocol specifically comprises the following steps:
the cloud server computes the comparison results of all trees under ciphertext; for each non-leaf node, the cloud server obtains from the model owner the AHE-encrypted threshold AHE.Enc(pk, v_{i,j}.θ) and from the user the hashed node feature name H_s(v_{i,j}.f); AHE.Enc denotes the encryption operation of addition homomorphic encryption; H_s denotes the keyed hash, with s the hash key; pk denotes the public key of AHE; v_{i,j}.f and v_{i,j}.θ denote the feature name and threshold of the j-th node of the i-th tree;
the cloud server draws a random number r_i for each tree and computes the encrypted comparison result e as AHE.Enc(pk, e) = ((Data[v_{i,j}.f] ⊟ AHE.Enc(pk, v_{i,j}.θ)) × r_i) ⊞ AHE.Enc(pk, 2^x), where x is chosen such that 2^x > (Data[v_{i,j}.f] − v_{i,j}.θ) × r_i; after the comparison result AHE.Enc(pk, e) is computed securely, the user receives it and decrypts AHE.Dec(AHE.Enc(pk, e), sk); AHE.Dec denotes the decryption operation of addition homomorphic encryption; Data[·] denotes the addition-homomorphic ciphertext of the data; sk denotes the private key of AHE; ⊞ denotes addition between ciphertexts and ⊟ subtraction between ciphertexts;
the user obtains the result e = (Data[v_{i,j}.f] − v_{i,j}.θ) × r_i + 2^x; if the (x+1)-th bit of e is 1, then Data[v_{i,j}.f] ≥ v_{i,j}.θ; if the (x+1)-th bit of e is 0, then Data[v_{i,j}.f] < v_{i,j}.θ;
wherein performing private tree conversion on the n decision trees to obtain n cipher private trees comprises: for node privacy, the feature names of non-leaf nodes are hashed, i.e., H_s(v_{i,j}.f), where v_{i,j}.f denotes the feature name of the j-th node of the i-th tree, and the threshold of each non-leaf node and the predicted value of each leaf node are encrypted with AHE, i.e., AHE.Enc(pk, v_{i,j}.θ) and AHE.Enc(pk, v_{i,j}.value);
for path privacy, each leaf node v that does not reach the maximum depth is replaced by a random node whose characteristics likewise undergo hash and AHE encryption, and the left and right child nodes of the random node are redirected to v; the characteristics of the random node comprise a random feature name, feature value, and threshold.
2. The method of claim 1, wherein during each round of communication in the inference phase, the cloud server performs random tree permutation before sending the predicted value of the current node under ciphertext:
for each node of the decision tree, the feature name of a non-leaf node is hashed with the hash key, and the threshold of a non-leaf node and the predicted value of a leaf node are encrypted with the addition homomorphic key, protecting node privacy; leaf nodes that do not reach the maximum depth are replaced with random nodes, protecting path privacy; the predicted values of the decision tree nodes after the random tree permutation are sent to the user; the random-node replacement of a leaf node that does not reach the maximum depth is specifically: the feature name of the random node is hashed with the hash key, the threshold of the random node is encrypted with the addition homomorphic key, and the left and right child nodes of the random node are redirected to the replaced leaf node;
the user receives and decrypts the currently received predicted value and returns a plaintext;
and the cloud server, after receiving the plaintext, performs the inverse random tree permutation, recovers the correct mapping relationship according to the offset table, and selects the next node for all trees until the leaf nodes are reached.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211324597.6A CN115967526B (en) | 2022-10-27 | 2022-10-27 | Privacy protection method for gradient lifting decision tree outsourcing reasoning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211324597.6A CN115967526B (en) | 2022-10-27 | 2022-10-27 | Privacy protection method for gradient lifting decision tree outsourcing reasoning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115967526A CN115967526A (en) | 2023-04-14 |
CN115967526B (en) | 2024-03-19
Family
ID=87353323
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211324597.6A Active CN115967526B (en) | 2022-10-27 | 2022-10-27 | Privacy protection method for gradient lifting decision tree outsourcing reasoning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115967526B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109684855A (en) * | 2018-12-17 | 2019-04-26 | 电子科技大学 | A kind of combined depth learning training method based on secret protection technology |
CN113127925A (en) * | 2021-03-11 | 2021-07-16 | 西安电子科技大学 | User and service provider decision tree privacy classification service method, system and application |
CN115021900A (en) * | 2022-05-11 | 2022-09-06 | 电子科技大学 | Method for realizing comprehensive privacy protection of distributed gradient lifting decision tree |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10764048B2 (en) * | 2017-12-20 | 2020-09-01 | Nxp B.V. | Privacy-preserving evaluation of decision trees |
- 2022-10-27: CN202211324597.6A filed (patent CN115967526B), status active
Non-Patent Citations (2)
Title |
---|
Towards Lightweight and Efficient Distributed Intrusion Detection Framework; Yuan Shuai et al.; IEEE; full text *
Research on a secure outsourcing scheme for Naive Bayes classification in cloud computing environments; Chen Si; Computer Applications and Software (07); full text *
Also Published As
Publication number | Publication date |
---|---|
CN115967526A (en) | 2023-04-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112822005B (en) | Secure transfer learning system based on homomorphic encryption | |
CN105122721B (en) | For managing the method and system for being directed to the trustship of encryption data and calculating safely | |
CN112395643B (en) | Data privacy protection method and system for neural network | |
CN110719159A (en) | Multi-party privacy set intersection method for resisting malicious enemies | |
WO2018110608A1 (en) | Collating system, method, device, and program | |
JP5657128B2 (en) | Secure calculation system, secure calculation method, and secure calculation program | |
Šeděnka et al. | Secure outsourced biometric authentication with performance evaluation on smartphones | |
Kumar et al. | Enhancing multi‐tenancy security in the cloud computing using hybrid ECC‐based data encryption approach | |
CN107919965A (en) | A kind of biological characteristic sensitive information outsourcing identity identifying method based on homomorphic cryptography | |
CN113221105B (en) | Robustness federated learning algorithm based on partial parameter aggregation | |
CN112818360B (en) | Deep neural network encryption reasoning method based on homomorphic encryption technology | |
CN111242290A (en) | Lightweight privacy protection generation countermeasure network system | |
CN110830514A (en) | Detection method for collusion-based false data injection attack of smart power grid | |
CN108718240A (en) | Authentication method, electronic equipment, storage medium based on full homomorphic cryptography and system | |
CN115409198A (en) | Distributed prediction method and system thereof | |
CN112766495A (en) | Deep learning model privacy protection method and device based on mixed environment | |
CN111581648B (en) | Method of federal learning to preserve privacy in irregular users | |
CN109688143A (en) | A kind of cluster data mining method towards secret protection in cloud environment | |
CN116484415A (en) | Privacy decision tree reasoning method based on isomorphic encryption | |
Kucherov et al. | Homomorphic encryption methods review | |
CN112906052B (en) | Aggregation method of multi-user gradient permutation in federated learning | |
CN115967526B (en) | Privacy protection method for gradient lifting decision tree outsourcing reasoning | |
CN111159727B (en) | Multi-party cooperation oriented Bayes classifier safety generation system and method | |
Pan et al. | PNAS: A privacy preserving framework for neural architecture search services | |
CN115580443A (en) | Graph data processing method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||