CN115442036A - Split shuffle-based federated learning method, apparatus, device and medium - Google Patents


Info

Publication number
CN115442036A
CN115442036A (application CN202211074718.6A)
Authority
CN
China
Prior art keywords
federal learning
gradient
privacy protection
model
authentication server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211074718.6A
Other languages
Chinese (zh)
Inventor
徐玲玲
徐培明
林宇
蒋屹新
匡晓云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
CSG Electric Power Research Institute
Original Assignee
South China University of Technology SCUT
CSG Electric Power Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT, CSG Electric Power Research Institute filed Critical South China University of Technology SCUT
Priority to CN202211074718.6A priority Critical patent/CN115442036A/en
Publication of CN115442036A publication Critical patent/CN115442036A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0816Key establishment, i.e. cryptographic processes or cryptographic protocols whereby a shared secret becomes available to two or more parties, for subsequent use
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0861Generation of secret information including derivation or calculation of cryptographic keys or passwords
    • H04L9/0863Generation of secret information including derivation or calculation of cryptographic keys or passwords involving passwords or one-time passwords
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/30Public key, i.e. encryption algorithm being computationally infeasible to invert or user's encryption keys not requiring secrecy
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/30Public key, i.e. encryption algorithm being computationally infeasible to invert or user's encryption keys not requiring secrecy
    • H04L9/3006Public key, i.e. encryption algorithm being computationally infeasible to invert or user's encryption keys not requiring secrecy underlying computational problems or public-key parameters
    • H04L9/302Public key, i.e. encryption algorithm being computationally infeasible to invert or user's encryption keys not requiring secrecy underlying computational problems or public-key parameters involving the integer factorization problem, e.g. RSA or quadratic sieve [QS] schemes

Abstract

The invention discloses a federated learning method based on local differential privacy and split shuffle. A federated learning client first registers with an authentication server; before uploading gradient information it processes the gradient with a privacy protection algorithm, then encrypts the privacy-preserving gradient to generate a message sent to the authentication server. The authentication server verifies the message and, after verification, splits and shuffles the gradient information before sending it to the federated learning server, severing the link between the gradient information at the federated learning server and each federated learning client; this strengthens privacy protection while preserving high model performance. The method uses local differential privacy based on the exponential mechanism to better achieve differential privacy protection of the gradient information; the privacy-preserving gradient is then uploaded to the authentication server for authentication, ensuring the correctness of the gradient information; finally, the gradient information is split and shuffled before being uploaded to the federated learning server, raising the strength of privacy protection.

Description

Split shuffle-based federated learning method, apparatus, device and medium
Technical Field
The invention relates to the technical field of data security, and in particular to a federated learning method based on local differential privacy and split shuffle, together with a corresponding federated learning apparatus, computer device and storage medium.
Background
In real life, data often exists in isolated islands because of industry competition, privacy and security concerns, and complex administrative procedures between organizations in most industries. Moreover, since real-world data is mostly distributed across different places (many large multinational companies, for example, build data centers around the world to store data), integrating data from different places and organizations is almost impossible. To address these problems, researchers proposed the concept of federated learning. Federated learning is a machine learning framework that has attracted wide attention in recent years; it can effectively help multiple organizations use data and train machine learning models efficiently and conveniently. A federated learning system has one server and multiple participants, and one complete round of federated learning proceeds as follows. First, the server initializes a model and sends the model parameters to the participants. Second, each participant builds the model locally from the parameters, trains it with local data, and sends the gradient information of the trained model to the server. Third, after receiving the gradient information from each participant, the server processes it and updates the model. When a round finishes, the process resumes from the second step for the next round, and repeats. In this manner, the model can be trained with multi-party data without any data exchange.
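A minimal sketch of one such round, under the assumption of a one-parameter model and a squared-error loss (the function names and toy data are illustrative, not from the patent):

```python
def federated_round(theta, client_data, local_grad, lr=0.1):
    """One round of the plain federated loop described above: the server
    sends theta to every participant, each trains locally and returns a
    gradient, and the server averages the gradients to update the model.
    `local_grad` stands in for a participant's local training step."""
    grads = [local_grad(theta, d) for d in client_data]
    avg = sum(grads) / len(grads)        # server-side aggregation
    return theta - lr * avg              # gradient-descent update

# Toy example: each client holds one scalar target, the model is a single
# parameter theta, the loss is (theta - target)^2, so the local gradient
# is 2 * (theta - target).
grad = lambda theta, target: 2.0 * (theta - target)
theta = 0.0
for _ in range(50):
    theta = federated_round(theta, [1.0, 2.0, 3.0], grad)
# theta converges toward the mean target, 2.0
```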
However, federated learning suffers from user privacy and security problems. During the interaction, the gradient information of a model can be intercepted by a malicious user and used to reverse-engineer the private data of a participating node; or a malicious attacker may tamper with or replace a user's gradient information to mount a poisoning attack and damage the model of the federated learning server. Therefore, to prevent leakage of gradient information, improve the security of federated learning and guarantee the privacy of user data, differential privacy and signature verification techniques can be added to the federated learning process. Differential privacy protection is a privacy protection technique based on data distortion: noise is added to blur the data, masking sensitive information so that the data cannot be recovered. The existing federated learning process based on differential privacy is as follows: the participant trains the model with local data and, before uploading, adds noise satisfying differential privacy to the gradient information, which provides differential privacy protection for the gradient and resists intrusions by external attackers such as background-knowledge attacks. The signature verification technique secures the user's gradient information with an RSA sign-and-verify method.
Existing federated learning methods based on differential privacy still have problems: the noise that must be added to the gradient information degrades the performance of the trained model, and while adding more noise improves the model's privacy protection, it makes the model's performance worse. Most existing differential-privacy federated learning methods cannot properly balance privacy protection capability against model performance, and training a high-performance model while guaranteeing strong privacy protection remains a challenging problem.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a federal learning method, a federal learning device, a computer device and a storage medium based on local differential privacy and split shuffle.
The first purpose of the invention can be achieved by adopting the following technical scheme:
a federated learning method based on local differential privacy and split shuffles, the federated learning method comprising the steps of:
s1, initialA chemical stage: the Federal learning server side converts the initial model parameter theta 0 Sending the data to a federal learning client, generating an RSA public key pk and an RSA private key sk by the federal learning client, sending the RSA public key pk to an authentication server for registration to obtain self id, and then receiving a model parameter theta sent by the federal learning server 0 Initializing the model locally;
s2, model training: the federal learning client uses a local data training model, and after model training is completed, gradient information g of the model is processed by using a privacy protection algorithm phi to generate a privacy protection gradient g'; then, encrypting the privacy protection gradient g ' by using an RSA private key sk to obtain a ciphertext c, and cascading the ciphertext c, the privacy protection gradient g ' and the id of the ciphertext c and the privacy protection gradient g ' with the self id to form a message M and sending the message M to the authentication server;
s3, gradient information authentication and processing: after receiving a message sent by the Federal learning client, the authentication server analyzes the message, acquires a corresponding RSA public key pk according to the id, decrypts a ciphertext c by using the public key pk, compares a decryption result with the privacy protection gradient g 'and confirms that the privacy protection gradient g' is not replaced or tampered; after privacy protection gradients G 'of all the federal learning clients are collected, randomly selecting privacy protection gradients G' of k clients from the privacy protection gradients G ', splitting and shuffling the privacy protection gradients G' of the k clients to obtain a gradient set G, and sending the gradient set G to a federal learning server;
s4, gradient information aggregation: the Federal learning server receives the gradient set G sent from the authentication server, aggregates gradient information in the gradient set G, updates the model on the authentication server by using the aggregated gradient information, and obtains a new model parameter theta after the model is updated 1 Model parameter θ 1 And sending the data to the federated learning client, starting a new iteration, and finishing the federated learning method after finishing a preset total iteration.
Furthermore, the model stands for most common machine learning algorithms, such as convolutional neural networks and fully-connected neural networks. Which algorithm is adopted depends on the specific task, and the method described in the present invention applies to most common machine learning algorithms, so these algorithms are collectively referred to as the model in this invention.
Further, the step S1 process is as follows:
s1a, using prime number generator by Federal learning client
Figure BDA0003829211480000031
Generating two large prime numbers (p, q) and calculating the product of the two N = p × q;
s1b, calculating the minimum common multiple L of the parameter p-1 and the parameter q-1 by the federal learning client; solving a parameter E according to the parameter L, wherein the parameter E needs to satisfy the condition that the greatest common divisor of the parameter L is 1, and E is more than 1 and less than L; then, a parameter D is obtained, where the parameter D needs to satisfy (E × D) modL =1, and 1< D < L, where mod () represents a modulo operation;
s1c, the Federal learning client obtains an RSA public key pk = < E, N > and an RSA private key sk = < D, N > according to the parameters, sends the RSA public key pk to an authentication server for registration, and solves the safety problem that gradient information sent by the Federal learning client is tampered and intercepted by a malicious attacker through registration in the authentication server;
s1d, the authentication server maintains a list P for storing RSA public keys pk and id of the Federal learning client, when the authentication server receives the RSA public key pk sent by the Federal learning client, an id is distributed to the Federal learning client, the id is sent to the Federal learning client, and the id and the RSA public key pk are stored in the list P;
s1e, initializing a model at the Federal learning server side, wherein model parameters are theta 0 Model parameter θ 0 Sending the information to a federal learning client; the Federal learning client receives the model parameter theta 0 The model is then built locally.
Further, the step S2 process is as follows:
s2a, the federal learning client trains a model by using local data, and the gradient of the trained model is expressed as g = (g) 1 ,g 2 ,...,g n ) Wherein g is i A real value representing the ith dimension of the gradient information g;
s2b, converting the real number of each dimension in the model gradient information g into [ -c.10 [) ρ ,c·10 p ]C represents the range of an integer domain, rho represents the precision of a real number, and when conversion is carried out, a real number only keeps rho position after a decimal point;
s2c, calculating r and an interval [ -c.10 [) ρ ,c·10 ρ ]The fraction of each integer in the equation:
Figure BDA0003829211480000051
wherein y represents a value in the interval [ -c.10 ] ρ ,c·10 ρ ]An integer within, and y ≠ r; d (r, y) represents the Euclidean distance between r and y; alpha represents a privacy parameter for controlling the privacy protection intensity;
s2d, calculating interval [ -c.10 [) ρ ,c·10 ρ ]The probability that each integer is selected as output is calculated as follows:
Figure BDA0003829211480000052
wherein the content of the first and second substances,
Figure BDA0003829211480000053
represents the interval [ -c.10 [ ] ρ ,c·10 ρ ]Fractional sum of all internal integers; r' represents the interval [ -c.10 ] ρ ,c·10 ρ ]The selected integer is used as an output integer, and r' ≠ r;
s2e, according to the probability of each integer being selected calculated in the step S2d, selecting the interval [ -c.10 [ ] ρ ,c·10 ρ ]In the method, an integer is selected to replace an original value in gradient information g, and when fraction calculation is carried out on each integer in a section, the fraction calculated by the integer which is spatially closer to the original value is larger, so that the integer can be selected to replace the original value with higher probability, and more integers can be reservedThe influence of excessive noise on the Federal learning server model is reduced; after all values in the gradient information g are replaced, a privacy protection gradient g' is obtained;
s2f, using RSA private key sk =byFederal learning client<D,N>Calculating a ciphertext c = (g ') of a privacy protection gradient g' for authentication server verification D mod N;
And S2g, the federated learning client concatenates the ciphertext c, the privacy-preserving gradient g′ and its own id to obtain the message M = <c, g′, id>, and sends M to the authentication server.
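Steps S2b–S2e can be sketched as below. The score formula `exp(-alpha*d/2)` is an assumed standard exponential-mechanism score, since the patent text does not reproduce the exact expression, and the tiny interval (c = 1, ρ = 2) is chosen only so it can be enumerated:

```python
import math
import random

def perturb(x, c=1, rho=2, alpha=1.0, rng=random):
    """Exponential-mechanism perturbation of one gradient value,
    following S2b-S2e: scale the real x to an integer r in
    [-c*10^rho, c*10^rho] (keeping rho decimal digits), score every
    integer y in the interval by exp(-alpha*|r - y|/2), and sample
    one integer in proportion to its score."""
    lo, hi = -c * 10**rho, c * 10**rho
    r = max(lo, min(hi, round(x * 10**rho)))      # keep rho decimals
    ys = list(range(lo, hi + 1))
    scores = [math.exp(-alpha * abs(r - y) / 2) for y in ys]
    pick = rng.choices(ys, weights=scores, k=1)[0]
    return pick / 10**rho                         # back to real scale

random.seed(0)
out = perturb(0.37, alpha=4.0)
# the output stays inside the clipped range [-1, 1] and, because nearer
# integers score higher, tends to stay close to the input
```

A larger `alpha` concentrates the probability mass near the original value, illustrating the privacy/utility trade-off the α parameter controls.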
Further, the step S3 process is as follows:
s3a, the authentication server receives a message sent by the federal learning client, analyzes the message into M = < c, g' and id >, and acquires an RSA public key pk corresponding to the federal learning client from the list P according to the id value;
s3b, the authentication server parses the RSA public key into pk = < E, N >, and decrypts the ciphertext c using the following formula, so as to obtain the plaintext m:
m=(c) E mod N
s3c, comparing m with g ', if the m and the g' are the same, indicating that the privacy protection gradient g 'sent by the Federal learning client is not replaced or tampered, and ensuring the correctness of the privacy protection gradient g';
s3d, after the privacy protection gradients of all the federal learning clients are verified, the authentication server randomly selects privacy protection gradients g 'of k federal learning clients from all the federal learning clients, and if the total number of the clients is B, the selected probability is k/B, and the privacy protection gradients g' of the federal learning clients which are not selected are abandoned;
s3e, the authentication server carries out splitting and shuffling processing on the privacy protection gradients of the k federal learning client sides, internal relation of the privacy protection gradients of each federal learning client side is destroyed through the splitting and shuffling processing, the federal learning server side is prevented from carrying out overall analysis on the privacy protection gradients of one federal learning client side in each iteration while being guaranteed to obtain correct privacy protection gradients, and relevant information of local data used for model training of the federal learning client side is analyzed;
and S3f, the authentication server combines the split and mixed privacy protection gradients into a set, denoted as G, and sends the set G to the Federal learning server.
Further, the split shuffle processing in step S3e is as follows:
firstly, splitting values in the privacy protection gradient g', only preserving dimension information of each value, and then reordering the values after disordering the relative position of each value, wherein in the process, the dimension of each value in the gradient information is not changed, and only the relative position between each value in the same dimension is reordered; expressing k different values in the same dimension as
Figure BDA0003829211480000061
Wherein j is more than or equal to 1 and less than or equal to k,
Figure BDA0003829211480000071
representing the j-th of the k dimensions i.
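The per-dimension shuffle can be sketched as follows; `split_shuffle` is an illustrative name:

```python
import random

def split_shuffle(grads, rng=random):
    """Split-and-shuffle from step S3e / Fig. 3: the k privacy-preserving
    gradients are split into their per-dimension values, and within every
    dimension the k values are independently re-ordered. Each value keeps
    its dimension, but the server can no longer link the values of one
    client across dimensions."""
    k, n = len(grads), len(grads[0])
    out = [list(g) for g in grads]
    for i in range(n):                        # shuffle each dimension i
        column = [out[j][i] for j in range(k)]
        rng.shuffle(column)
        for j in range(k):
            out[j][i] = column[j]
    return out

random.seed(1)
G = split_shuffle([[1, 10], [2, 20], [3, 30]])
# every dimension still holds the same multiset of values,
# so the per-dimension average in S4 is unchanged
```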
Further, the step S4 process is as follows:
s4a, the Federal learning server receives the privacy protection gradient set G sent by the authentication server, starts to aggregate, and obtains the average value of each dimension gradient according to the dimension information of each value in the set to obtain the average gradient
Figure BDA0003829211480000072
The polymerization formula is as follows:
Figure BDA0003829211480000073
wherein the content of the first and second substances,
Figure BDA0003829211480000074
represents the mean gradient
Figure BDA0003829211480000075
The value of the ith dimension is obtained after aggregation, the value obtained after aggregation is the sum and average of privacy protection gradients, so that the split shuffling operation has no negative influence on the performance of the model of the Federal learning server side, and on the contrary, the internal relation of gradient information is damaged, so that the overall privacy budget of the scheme is smaller, and the overall privacy protection strength of the scheme is improved;
s4b, use of average gradient
Figure BDA0003829211480000076
Updating model parameter θ 0 To obtain a new model parameter theta 1 The formula is as follows:
Figure BDA0003829211480000077
wherein, gamma represents the model learning rate set by the Federal learning server end and is used for controlling the optimization rate of model parameters;
s4c, carrying out model parameter theta by the Federal learning server side 1 And sending the data to each federal learning client, starting a new iteration, and ending the federal learning method when the total iteration turns reach a preset total iteration turn.
The second purpose of the invention is realized by the following technical scheme:
a federated learning apparatus based on local differential privacy and split shuffles, the federated learning apparatus comprising:
an initialization module, in which the federated learning server sends the initial model parameter θ0 to the federated learning clients; each federated learning client generates an RSA public key pk and an RSA private key sk, sends pk to the authentication server for registration to obtain its own id, and then uses the received model parameter θ0 to initialize the model locally;
a model training module, in which the federated learning client trains the model with local data and, after training, processes the gradient information g of the model with the privacy protection algorithm φ to generate the privacy-preserving gradient g′; it then encrypts g′ with the RSA private key sk to obtain the ciphertext c, and concatenates c, g′ and its own id into a message M sent to the authentication server;
the gradient information authentication module is used for analyzing the message after the authentication server receives the message sent by the Federal learning client, acquiring a corresponding RSA public key pk according to the id, decrypting a ciphertext c by using the public key pk, comparing a decryption result with the privacy protection gradient g 'and confirming that the privacy protection gradient g' is not replaced or tampered; after privacy protection gradients G ' of all federated learning clients are collected, randomly selecting privacy protection gradients G ' of k clients from the federated learning clients, splitting and shuffling the privacy protection gradients G ' of the k clients to obtain a gradient set G, and sending the gradient set G to a federated learning server;
a gradient aggregation module, in which the federated learning server receives the gradient set G sent from the authentication server, aggregates the gradient information in G and updates its model with the aggregated gradient; after the update it obtains the new model parameter θ1 and sends θ1 to the federated learning clients, starting a new iteration; the federated learning method ends after the preset total number of iterations.
The third purpose of the invention is realized by the following technical scheme:
a computer device comprising a processor and a memory for storing a processor executable program, the processor when executing the program stored in the memory implementing the above method for local differential privacy and split shuffle based federal learning.
The fourth purpose of the invention is realized by the following technical scheme:
a storage medium storing a program which, when executed by a processor, implements the above-described federated learning method based on local differential privacy and split shuffle.
Compared with the prior art, the invention has the following advantages and effects:
1. better model performance. The method uses the local differential privacy of an index mechanism, and better realizes the differential privacy protection of the gradient information. In the prior art, most of differential privacy schemes implement differential privacy protection by adding noise, but the noise size needs to be designed carefully, which easily causes that the added noise is too large to cause too large deviation from the original value, thereby affecting the performance of the model. The invention uses an index mechanism to realize differential privacy protection of gradient information, in the invention, a real numerical value is converted into an integer, then a probability is calculated for each number in a preset integer interval, and the probability is selected according to each integer in the interval, so that the final output result is closer to the original gradient information, the negative influence on the model performance caused by the difference between the output result and the original gradient information is reduced, and the performance of a machine learning model is closer to the model trained by using the original gradient information while the differential privacy protection is provided.
2. Higher security. The invention uploads the privacy-preserving gradient information to the authentication server for authentication, guaranteeing the correctness of the gradient information. In most prior art, the client's gradient information is not verified before being uploaded to the federated learning server, so the possibility that it has been tampered with or replaced by a malicious attacker cannot be ruled out, and when such an attack occurs the correctness of the gradient information cannot be ensured. The present invention uses a trusted authentication server to verify the gradient information sent by each federated learning client, ensuring it cannot be replaced or tampered with by a malicious attacker, thereby protecting the machine learning model from poisoning attacks and guaranteeing its security.
3. Stronger privacy protection capability. The invention splits and shuffles the gradient information before it is uploaded to the federated learning server, raising the strength of privacy protection. In most current federated learning, gradient information is sent directly to the federated learning server, which can therefore link each gradient to its client, weakening privacy protection. In the present invention the authentication server splits and shuffles the gradient information, destroying the association between gradients and clients, so the federated learning server cannot infer client information from the gradients; this reduces the privacy budget required by the scheme, and since, by the definition of differential privacy, a larger privacy budget means weaker protection, the split-shuffle processing strengthens the privacy protection of the scheme.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention and do not constitute a limitation of the invention. In the drawings:
FIG. 1 is a flow chart of a local differential privacy based federated learning method disclosed in an embodiment of the present invention;
FIG. 2 is a schematic diagram of an application environment of the local differential privacy-based federated learning method disclosed in the embodiment of the present invention;
FIG. 3 is a schematic diagram of the split shuffling step of the present invention;
fig. 4 is a block diagram of the structure of the federal learning device in embodiment 3 of the present invention;
fig. 5 is a block diagram of a computer device in embodiment 4 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
In this embodiment there are 100 federated learning clients; the gradient information uploaded by each client has T dimensions; the total number of iterations is 20; the privacy budget is set to ε = 1; and the MNIST handwritten digit dataset is used. The specific flow of this example is as follows:
t1, each federal learning client uses a prime number generator
Figure BDA0003829211480000111
Generating two large prime numbers (p, q), calculating the product N = p × q of the two, calculating the least common multiple L of a parameter p-1 and a parameter q-1, and solving a parameter E according to the parameter L, wherein the parameter E needs to meet the condition that the greatest common multiple of the parameter E and the parameter L is 1, and 1< E < L; then, obtaining a parameter D, wherein the parameter D needs to meet the condition that E multiplied by D mod L =1, and D is more than 1 and less than L; finally, each federal learning client obtains the RSA public key pk =accordingto the parameters<E,N>RSA private key sk =<D,N>And sending the RSA public key pk to an authentication server for registration;
and T2, the authentication server maintains a list P for storing RSA public keys pk and id of the Federal learning client, when the authentication server receives the RSA public key pk sent by the Federal learning client, an id is distributed to each Federal learning client, the id is sent to the client, and the id and the RSA public key pk are combined into a key value pair (id, pk) to be stored in the list.
T3, assume the initial model parameter at the federated learning server is θ0; θ0 is sent to each federated learning client. On receiving θ0, each client builds the model locally and trains it with local data, and the gradient of the trained model is written g = (g1, g2, ..., gT), where gi is the real value of the i-th dimension of g, 1 ≤ i ≤ T. The federated learning client then converts the real number in each dimension of g into an integer in the interval [−c·10^ρ, c·10^ρ], where c is the range of the integer domain and ρ the precision of a real number; during conversion each real number keeps ρ digits after the decimal point. Here c = 1 and ρ = 8, so the interval is [−10^8, 10^8].
T4, after a real number has been converted to an integer r, compute the score between r and each integer in the interval [−1·10^8, 1·10^8], and from these scores compute the probability that each integer in [−1·10^8, 1·10^8] is selected as the output. According to this probability, select an integer from [−1·10^8, 1·10^8] to replace the original value in the gradient g; after all values in the gradient g have been replaced, the privacy-protection gradient g' is obtained.
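Step T4 can be sketched as below. The score function is not spelled out on this page, so the sketch assumes an exponential-mechanism-style score exp(−α·d(r, y)); a tiny domain [−10, 10] stands in for [−1·10^8, 1·10^8] so the probabilities can be enumerated:

```python
import math
import random

alpha = 1.0   # privacy parameter (assumed value)

def perturb(r: int, lo: int = -10, hi: int = 10) -> int:
    """Replace r with an integer r' != r drawn from [lo, hi], where each
    candidate y is weighted by the assumed score exp(-alpha * d(r, y))."""
    candidates = [y for y in range(lo, hi + 1) if y != r]
    scores = [math.exp(-alpha * abs(r - y)) for y in candidates]
    # random.choices normalizes the weights into selection probabilities.
    return random.choices(candidates, weights=scores, k=1)[0]

random.seed(0)
out = perturb(3)
```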
T5, the federated learning client uses its own RSA private key sk = <D, N> to compute the ciphertext c = (g')^D mod N of the privacy-protection gradient g' for authentication-server verification; it concatenates the ciphertext c, the privacy-protection gradient g', and its own id into a message M = <c, g', id> and sends it to the authentication server.
And T6, the authentication server receives the message sent by the federated learning client, parses it as M = <c, g', id>, retrieves the corresponding RSA public key pk from the list according to the id, parses it as pk = <E, N>, computes the plaintext m, and compares m with g' to confirm the correctness of the privacy-protection gradient g'. After the privacy-protection gradients of all federated learning clients have been verified, the authentication server randomly selects the privacy-protection gradients of k clients, each client being selected with probability k/n, where n is the total number of clients; the privacy-protection gradients of the unselected clients are discarded.
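Steps T5 and T6 together form a sign-and-verify exchange that can be sketched as follows (hypothetical Python; the toy RSA parameters and the hash-to-integer step are illustrative assumptions, since the text raises g' itself to the power D):

```python
import hashlib

# Toy RSA parameters: p = 61, q = 53 give N = 3233; E = 7 and D = 223
# satisfy E * D mod lcm(p - 1, q - 1) = 1.
p, q = 61, 53
N = p * q
E, D = 7, 223          # pk = <E, N>, sk = <D, N>

def digest(gradient) -> int:
    """Hash the gradient to an integer below N (an illustrative assumption)."""
    h = hashlib.sha256(repr(gradient).encode()).digest()
    return int.from_bytes(h, "big") % N

g_priv = [25000000, -50000000, 12345678]   # privacy-protection gradient g' (made-up)

# Client (step T5): sign with the private key.
c = pow(digest(g_priv), D, N)

# Authentication server (step T6): verify with the registered public key.
m = pow(c, E, N)
verified = (m == digest(g_priv))
```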
And T7, the authentication server splits and shuffles the privacy-protection gradients of the k clients. As shown in fig. 3, the values in each privacy-protection gradient are first split apart, keeping only the dimension information of each value; the values are then shuffled and reordered. For ease of description, the k different values in the same dimension i are denoted g_i^j, where 1 ≤ j ≤ k and g_i^j represents the j-th of the k values with dimension i. The authentication server combines the split-and-shuffled privacy-protection gradients into a set, denoted G, and sends the set G to the federated learning server.
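The split-and-shuffle of step T7 can be sketched as follows (illustrative Python; the client gradients are made-up numbers), shuffling the k values within each dimension so the federated learning server cannot link a value back to the client that produced it:

```python
import random

random.seed(1)

# Hypothetical privacy-protection gradients of k = 3 selected clients,
# each with 2 dimensions.
gradients = [[10, 40], [20, 50], [30, 60]]

def split_and_shuffle(grads):
    """Split gradients into per-dimension value groups and shuffle each group,
    keeping only the dimension information of each value."""
    dims = len(grads[0])
    out = []
    for i in range(dims):
        column = [g[i] for g in grads]   # the k values of dimension i
        random.shuffle(column)           # scramble the relative positions
        out.append(column)
    return out                           # the set G, grouped by dimension

G = split_and_shuffle(gradients)
```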
T8, the federated learning server end receives the privacy-protection gradient set G sent by the authentication server and begins aggregation: according to the dimension information of each value in the set, it takes the mean of each dimension to obtain the average gradient ḡ. Using the average gradient ḡ, it updates the model parameter θ_0 to obtain the new model parameter θ_1, sends θ_1 to each federated learning client, and a new iteration begins.
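The aggregation and update of step T8 can be sketched as follows (illustrative Python with made-up numbers; the update rule θ_1 = θ_0 − γ·ḡ follows the learning-rate formula given in claim 7):

```python
# Shuffled gradient set G, grouped by dimension: each inner list holds
# the k = 3 values of one dimension.
G = [[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]]
theta0 = [1.0, 2.0]    # current model parameters (made-up)
gamma = 0.1            # model learning rate gamma

# Average each dimension of G to get the mean gradient g_bar ...
g_bar = [sum(col) / len(col) for col in G]

# ... then take one gradient step: theta1 = theta0 - gamma * g_bar.
theta1 = [t - gamma * gb for t, gb in zip(theta0, g_bar)]
```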
And T9, after all 20 iterations are completed, the final model at the federated learning server end is evaluated with a test data set; the final accuracy is 95.83%, only 1.43% lower than that of the method without a privacy-protection algorithm. Compared with other privacy-protection algorithms, the local differential privacy protection method disclosed by the invention has advantages in both privacy-protection strength and final model accuracy.
Example 2
In the present embodiment 2, the preset conditions are as follows: there are 500 federated learning clients; the gradient information uploaded by each client comprises H dimensions; the total number of iterations is 30; the privacy budget is set to ε = 10; the CIFAR-10 image classification dataset is used. The specific flow of this embodiment is as follows:
t1, each federated learning client uses a prime number generator to generate two large prime numbers (p, q), computes their product N = p × q and the least common multiple L of p − 1 and q − 1, and solves for a parameter E such that the greatest common divisor of E and L is 1 and 1 < E < L; it then obtains a parameter D satisfying E × D mod L = 1 and 1 < D < L; finally, each federated learning client obtains the RSA public key pk = <E, N> and RSA private key sk = <D, N> from these parameters, and sends the RSA public key pk to the authentication server for registration;
and T2, the authentication server maintains a list P storing the RSA public key pk and id of each federated learning client; when the authentication server receives an RSA public key pk sent by a federated learning client, it assigns an id to that client, sends the id to the client, and stores the key-value pair (id, pk) in the list.
T3, assume the initial model parameter at the federated learning server end is θ_0; the model parameter θ_0 is sent to each federated learning client. After receiving θ_0, the federated learning client builds the model locally and trains it with local data; the gradient of the trained model is denoted g = (g_1, g_2, …, g_T), where g_i is the real value of the i-th dimension of g, with 1 ≤ i ≤ T. The federated learning client then converts the real number in each dimension of the model gradient g into an integer in the interval [−c·10^ρ, c·10^ρ], where c denotes the range of the integer domain and ρ the precision of the real number; ρ digits after the decimal point are kept during conversion. Here c = 1 and ρ = 8, so the interval is [−1·10^8, 1·10^8].
T4, after a real number has been converted to an integer r, compute the score between r and each integer in the interval [−1·10^8, 1·10^8], and from these scores compute the probability that each integer in [−1·10^8, 1·10^8] is selected as the output. According to this probability, select an integer from [−1·10^8, 1·10^8] to replace the original value in the gradient g; after all values in the gradient g have been replaced, the privacy-protection gradient g' is obtained.
T5, the federated learning client uses its own RSA private key sk = <D, N> to compute the ciphertext c = (g')^D mod N of the privacy-protection gradient g' for authentication-server verification; it concatenates the ciphertext c, the privacy-protection gradient g', and its own id into a message M = <c, g', id> and sends it to the authentication server.
And T6, the authentication server receives the message sent by the federated learning client, parses it as M = <c, g', id>, retrieves the corresponding RSA public key pk from the list according to the id, parses it as pk = <E, N>, computes the plaintext m, and compares m with g' to confirm the correctness of the privacy-protection gradient g'. After the privacy-protection gradients of all federated learning clients have been verified, the authentication server randomly selects the privacy-protection gradients of k clients, each client being selected with probability k/n, where n is the total number of clients (500 in this embodiment); the privacy-protection gradients of the unselected clients are discarded.
And T7, the authentication server splits and shuffles the privacy-protection gradients of the k clients. As shown in fig. 3, the values in each privacy-protection gradient are first split apart, keeping only the dimension information of each value; the values are then shuffled and reordered. For ease of description, the k different values in the same dimension i are denoted g_i^j, where 1 ≤ j ≤ k and g_i^j represents the j-th of the k values with dimension i. The authentication server combines the split-and-shuffled privacy-protection gradients into a set, denoted G, and sends the set G to the federated learning server.
T8, the federated learning server end receives the privacy-protection gradient set G sent by the authentication server and begins aggregation: according to the dimension information of each value in the set, it takes the mean of each dimension to obtain the average gradient ḡ. Using the average gradient ḡ, it updates the model parameter θ_0 to obtain the new model parameter θ_1, sends θ_1 to each federated learning client, and a new iteration begins.
And T9, after all 30 iterations are completed, the final model at the federated learning server end is evaluated with a test data set; the final accuracy is 59.68%, only 1.78% lower than that of the method without a privacy-protection algorithm. Compared with other privacy-protection algorithms, the local differential privacy protection method disclosed by the invention has advantages in both privacy-protection strength and final model accuracy.
Example 3
As shown in fig. 4, the present embodiment provides a federated learning apparatus based on local differential privacy and split shuffle, which includes an initialization module 401, a model training module 402, a gradient information authentication module 403, and a gradient information aggregation module 404; the specific functions of each module are as follows:
an initialization module 401: the federated learning server end sends the initial model parameter θ_0 to the federated learning client; the federated learning client generates an RSA public key pk and an RSA private key sk, sends pk to the authentication server for registration to obtain its own id, then receives the model parameter θ_0 sent by the federated learning server end and initializes the model locally;
the model training module 402: the federated learning client trains a model using local data; after model training is finished, the gradient information g of the model is processed with a privacy-protection algorithm Φ to generate the privacy-protection gradient g'; then g' is encrypted with the RSA private key sk to obtain the ciphertext c, and the ciphertext c, the privacy-protection gradient g', and the client's own id are concatenated into a message M and sent to the authentication server;
the gradient information authentication module 403: after receiving the message sent by the federated learning client, the authentication server parses the message, obtains the corresponding RSA public key pk according to the id, decrypts the ciphertext c with the public key pk, compares the decryption result with the privacy-protection gradient g', and confirms that the privacy-protection gradient g' has not been replaced or tampered with; after the privacy-protection gradients g' of all federated learning clients have been collected, the privacy-protection gradients g' of k clients are randomly selected from them, split, and shuffled to obtain the gradient set G, which is sent to the federated learning server;
the gradient information aggregation module 404: the federated learning server end receives the gradient set G sent from the authentication server, aggregates the gradient information in the gradient set G, and updates the model with the aggregated gradient information; after the model update is completed, the new model parameter θ_1 is obtained and sent to the federated learning client, a new iteration begins, and the federated learning method ends after the preset total number of iterations is completed.
Example 4
The present embodiment provides a computer device, which may be a computer, as shown in fig. 5, and includes a processor 502, a memory, an input device 503, a display 504, and a network interface 505 connected by a system bus 501, where the processor is used to provide computing and control capabilities, the memory includes a nonvolatile storage medium 506 and an internal memory 507, the nonvolatile storage medium 506 stores an operating system, a computer program, and a database, the internal memory 507 provides an environment for an operating system and a computer program in the nonvolatile storage medium to run, and when the processor 502 executes the computer program stored in the memory, the federal learning method based on local differential privacy and split shuffle provided by the present invention is implemented, and the method includes the following steps:
s1, an initialization stage: the federated learning server end sends the initial model parameter θ_0 to the federated learning client; the federated learning client generates an RSA public key pk and an RSA private key sk, sends the RSA public key pk to the authentication server for registration to obtain its own id, then receives the model parameter θ_0 sent by the federated learning server end and initializes the model locally;
s2, model training: the federated learning client trains the model using local data; after model training is completed, the gradient information g of the model is processed with a privacy-protection algorithm Φ to generate the privacy-protection gradient g'; then the privacy-protection gradient g' is encrypted with the RSA private key sk to obtain the ciphertext c, and the ciphertext c, the privacy-protection gradient g', and the client's own id are concatenated into a message M and sent to the authentication server;
s3, gradient information authentication and processing: after receiving the message sent by the federated learning client, the authentication server parses the message, obtains the corresponding RSA public key pk according to the id, decrypts the ciphertext c with the public key pk, compares the decryption result with the privacy-protection gradient g', and confirms that the privacy-protection gradient g' has not been replaced or tampered with; after the privacy-protection gradients g' of all federated learning clients have been collected, the privacy-protection gradients g' of k clients are randomly selected from them, split, and shuffled to obtain the gradient set G, which is sent to the federated learning server;
s4, gradient information aggregation: the federated learning server end receives the gradient set G sent from the authentication server, aggregates the gradient information in the gradient set G, and updates the model with the aggregated gradient information; after the model update is completed, the new model parameter θ_1 is obtained and sent to the federated learning client, a new iteration begins, and the federated learning method ends after the preset total number of iterations is completed.
Example 5
The present embodiment provides a storage medium, which is a computer-readable storage medium, and stores a computer program, where the computer program, when executed by a processor, implements a local differential privacy and split shuffle-based federal learning method proposed in the present invention, including the following steps:
s1, an initialization stage: the federated learning server end sends the initial model parameter θ_0 to the federated learning client; the federated learning client generates an RSA public key pk and an RSA private key sk, sends the RSA public key pk to the authentication server for registration to obtain its own id, then receives the model parameter θ_0 sent by the federated learning server end and initializes the model locally;
s2, model training: the federated learning client trains the model using local data; after model training is completed, the gradient information g of the model is processed with a privacy-protection algorithm Φ to generate the privacy-protection gradient g'; then the privacy-protection gradient g' is encrypted with the RSA private key sk to obtain the ciphertext c, and the ciphertext c, the privacy-protection gradient g', and the client's own id are concatenated into a message M and sent to the authentication server;
s3, gradient information authentication and processing: after receiving the message sent by the federated learning client, the authentication server parses the message, obtains the corresponding RSA public key pk according to the id, decrypts the ciphertext c with the public key pk, compares the decryption result with the privacy-protection gradient g', and confirms that the privacy-protection gradient g' has not been replaced or tampered with; after the privacy-protection gradients g' of all federated learning clients have been collected, the privacy-protection gradients g' of k clients are randomly selected from them, split, and shuffled to obtain the gradient set G, which is sent to the federated learning server;
s4, gradient information aggregation: the federated learning server end receives the gradient set G sent from the authentication server, aggregates the gradient information in the gradient set G, and updates the model with the aggregated gradient information; after the model update is completed, the new model parameter θ_1 is obtained and sent to the federated learning client, a new iteration begins, and the federated learning method ends after the preset total number of iterations is completed.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such modifications are intended to be included in the scope of the present invention.

Claims (10)

1. A federated learning method based on local differential privacy and split shuffle is characterized in that the federated learning method comprises the following steps:
s1, an initialization stage: the federated learning server end sends the initial model parameter θ_0 to the federated learning client; the federated learning client generates an RSA public key pk and an RSA private key sk, sends the RSA public key pk to the authentication server for registration to obtain its own id, then receives the model parameter θ_0 sent by the federated learning server end and initializes the model locally;
s2, model training: the federated learning client trains the model using local data; after model training is completed, the gradient information g of the model is processed with a privacy-protection algorithm Φ to generate the privacy-protection gradient g'; then the privacy-protection gradient g' is encrypted with the RSA private key sk to obtain the ciphertext c, and the ciphertext c, the privacy-protection gradient g', and the client's own id are concatenated into a message M and sent to the authentication server;
s3, gradient information authentication and processing: after receiving the message sent by the federated learning client, the authentication server parses the message, obtains the corresponding RSA public key pk according to the id, decrypts the ciphertext c with the public key pk, compares the decryption result with the privacy-protection gradient g', and confirms that the privacy-protection gradient g' has not been replaced or tampered with; after the privacy-protection gradients g' of all federated learning clients have been collected, the privacy-protection gradients g' of k clients are randomly selected from them, split, and shuffled to obtain the gradient set G, which is sent to the federated learning server;
s4, gradient information aggregation: the federated learning server end receives the gradient set G sent from the authentication server, aggregates the gradient information in the gradient set G, and updates the model with the aggregated gradient information; after the model update is completed, the new model parameter θ_1 is obtained and sent to the federated learning client, a new iteration begins, and the federated learning method ends after the preset total number of iterations is completed.
2. The local differential privacy and split shuffle-based federated learning method of claim 1, wherein the model is built with a machine learning algorithm, namely a convolutional neural network algorithm or a fully-connected neural network algorithm.
3. The local differential privacy and split shuffle based federated learning method of claim 1, wherein said step S1 procedure is as follows:
s1a, the federated learning client uses a prime number generator to generate two large prime numbers (p, q) and computes their product N = p × q;
s1b, the federated learning client computes the least common multiple L of p − 1 and q − 1; it solves for a parameter E such that the greatest common divisor of E and L is 1 and 1 < E < L; it then obtains a parameter D satisfying (E × D) mod L = 1 and 1 < D < L, where mod denotes the modulo operation;
s1c, the Federal learning client obtains an RSA public key pk = < E, N > and an RSA private key sk = < D, N > according to the parameters, and sends the RSA public key pk to an authentication server for registration;
s1d, the authentication server maintains a list P for storing RSA public keys pk and id of the Federal learning client, when the authentication server receives the RSA public key pk sent by the Federal learning client, an id is distributed to the Federal learning client, the id is sent to the Federal learning client, and the id and the RSA public key pk are stored in the list P;
s1e, the federated learning server end initializes the model with model parameter θ_0 and sends the model parameter θ_0 to the federated learning client; after receiving the model parameter θ_0, the federated learning client builds the model locally.
4. The local differential privacy and split shuffle based federated learning method of claim 1, wherein said step S2 procedure is as follows:
s2a, the federated learning client trains the model using local data; the gradient of the trained model is denoted g = (g_1, g_2, …, g_n), where g_i is the real value of the i-th dimension of the gradient information g;
s2b, convert the real number in each dimension of the model gradient information g into an integer r in the integer domain [−c·10^ρ, c·10^ρ], where c denotes the range of the integer domain and ρ denotes the precision of the real number; ρ digits after the decimal point are kept during conversion;
s2c, compute the score between r and each integer in the interval [−c·10^ρ, c·10^ρ]:

u(r, y) = exp(−α · d(r, y))

where y is an integer in the interval [−c·10^ρ, c·10^ρ] and y ≠ r; d(r, y) denotes the Euclidean distance between r and y; and α denotes a privacy parameter controlling the strength of privacy protection;
s2d, compute the probability that each integer in the interval [−c·10^ρ, c·10^ρ] is selected as the output, with the calculation formula:

Pr[r'] = u(r, r') / U

where U denotes the sum of the scores of all integers in the interval [−c·10^ρ, c·10^ρ]; r' denotes the integer selected from the interval [−c·10^ρ, c·10^ρ] as the output, with r' ≠ r;
s2e, according to the selection probability of each integer computed in step S2d, select an integer from the interval [−c·10^ρ, c·10^ρ] to replace the original value in the gradient information g; after all values in the gradient information g have been replaced, the privacy-protection gradient g' is obtained;
s2f, the federated learning client uses the RSA private key sk = <D, N> to compute the ciphertext c = (g')^D mod N of the privacy-protection gradient g' for authentication-server verification;
and S2g, the federated learning client concatenates the ciphertext c, the privacy-protection gradient g', and its own id into a message M = <c, g', id> and sends the message M to the authentication server.
5. The local differential privacy and split shuffle-based federal learning method as claimed in claim 1, wherein said step S3 procedure is as follows:
s3a, the authentication server receives a message sent by the federal learning client, analyzes the message into M = < c, g', id >, and obtains an RSA public key pk of the corresponding federal learning client from the list P according to the id value;
s3b, the authentication server parses the RSA public key as pk = <E, N> and decrypts the ciphertext c with the following formula to obtain the plaintext m:

m = c^E mod N
s3c, compare m with g'; if they are identical, the privacy-protection gradient g' sent by the federated learning client has not been replaced or tampered with, ensuring the correctness of the privacy-protection gradient g';
s3d, after the privacy-protection gradients of all federated learning clients have been verified, the authentication server randomly selects the privacy-protection gradients g' of k federated learning clients from all of them; if the total number of clients is B, the probability of being selected is k/B, and the privacy-protection gradients g' of the unselected federated learning clients are discarded;
s3e, the authentication server carries out splitting and shuffling processing on the privacy protection gradients of the k federal learning clients;
and S3f, the authentication server combines the split-and-shuffled privacy-protection gradients into a set, denoted G, and sends the set G to the federated learning server.
6. The local differential privacy and split shuffle-based federated learning method according to claim 5, wherein the split shuffle process in said step S3e is as follows:
firstly, the values in the privacy-protection gradient g' are split apart, keeping only the dimension information of each value; the relative position of each value is then scrambled and the values reordered; the k different values in the same dimension i are denoted g_i^j, where 1 ≤ j ≤ k and g_i^j represents the j-th of the k values with dimension i.
7. The local differential privacy and split shuffle-based federal learning method as claimed in claim 1, wherein said step S4 procedure is as follows:
s4a, the federated learning server end receives the privacy-protection gradient set G sent by the authentication server and begins aggregation: according to the dimension information of each value in the set, it takes the mean of each dimension to obtain the average gradient ḡ, with the aggregation formula:

ḡ_i = (1/k) · Σ_{j=1}^{k} g_i^j

where ḡ_i denotes the value of the i-th dimension of the average gradient ḡ;
s4b, use the average gradient ḡ to update the model parameter θ_0 and obtain the new model parameter θ_1, with the formula:

θ_1 = θ_0 − γ · ḡ

where γ denotes the model learning rate set by the federated learning server end, used to control the optimization rate of the model parameters;
s4c, the federated learning server end sends the model parameter θ_1 to each federated learning client and a new iteration begins; when the preset total number of iterations is reached, the federated learning method ends.
8. A federal learning apparatus based on the local differential privacy and split shuffle based federal learning method as claimed in any one of claims 1 to 7, wherein said federal learning apparatus comprises:
an initialization module: the federated learning server end sends the initial model parameter θ_0 to the federated learning client; the federated learning client generates an RSA public key pk and an RSA private key sk, sends the RSA public key pk to the authentication server for registration to obtain its own id, then receives the model parameter θ_0 sent by the federated learning server end and initializes the model locally;
a model training module: the federated learning client trains the model using local data and, after model training is completed, processes the gradient information g of the model with a privacy-protection algorithm Φ to generate the privacy-protection gradient g'; then the privacy-protection gradient g' is encrypted with the RSA private key sk to obtain the ciphertext c, and the ciphertext c, the privacy-protection gradient g', and the client's own id are concatenated into a message M and sent to the authentication server;
a gradient information authentication module: after receiving the message sent by the federated learning client, the authentication server parses the message, obtains the corresponding RSA public key pk according to the id, decrypts the ciphertext c with the public key pk, compares the decryption result with the privacy-protection gradient g', and confirms that the privacy-protection gradient g' has not been replaced or tampered with; after the privacy-protection gradients g' of all federated learning clients have been collected, the privacy-protection gradients g' of k clients are randomly selected from them, split, and shuffled to obtain the gradient set G, which is sent to the federated learning server;
a gradient information aggregation module: the federated learning server end receives the gradient set G sent from the authentication server, aggregates the gradient information in the gradient set G, and updates the model with the aggregated gradient information; after the model update is completed, the new model parameter θ_1 is obtained and sent to the federated learning client, a new iteration begins, and the federated learning method ends after the preset total number of iterations is completed.
9. A computer device comprising a processor and a memory for storing a processor-executable program, wherein the processor, when executing the memory-stored program, implements the local differential privacy and split shuffle based federated learning method of any of claims 1-7.
10. A storage medium storing a program which, when executed by a processor, implements the local differential privacy and split shuffle based federated learning method of any of claims 1-7.
CN202211074718.6A 2022-09-02 2022-09-02 Split shuffle-based federated learning method, apparatus, device and medium Pending CN115442036A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211074718.6A CN115442036A (en) 2022-09-02 2022-09-02 Split shuffle-based federated learning method, apparatus, device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211074718.6A CN115442036A (en) 2022-09-02 2022-09-02 Split shuffle-based federated learning method, apparatus, device and medium

Publications (1)

Publication Number Publication Date
CN115442036A true CN115442036A (en) 2022-12-06

Family

ID=84247409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211074718.6A Pending CN115442036A (en) 2022-09-02 2022-09-02 Split shuffle-based federated learning method, apparatus, device and medium

Country Status (1)

Country Link
CN (1) CN115442036A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117034328A (en) * 2023-10-09 2023-11-10 国网信息通信产业集团有限公司 Improved abnormal electricity utilization detection system and method based on federal learning
CN117034328B (en) * 2023-10-09 2024-03-19 国网信息通信产业集团有限公司 Improved abnormal electricity utilization detection system and method based on federal learning

Similar Documents

Publication Publication Date Title
Lei et al. Outsourcing large matrix inversion computation to a public cloud
US10129029B2 (en) Proofs of plaintext knowledge and group signatures incorporating same
US9973342B2 (en) Authentication via group signatures
US10083310B1 (en) System and method for mobile proactive secure multi-party computation (MPMPC) using commitments
DE102018108313A1 (en) A method and processing apparatus for performing a grid-based cryptographic operation
CN107615285A (en) The Verification System and device encrypted including the unclonable function of physics and threshold value
CN112436938B (en) Digital signature generation method and device and server
Petzoldt et al. Small Public Keys and Fast Verification for Multivariate Quadratic Public Key Systems
CN112446052B (en) Aggregated signature method and system suitable for secret-related information system
CN112417489B (en) Digital signature generation method and device and server
CN111539041A (en) Safety selection method and system
WO2020187413A1 (en) Distributed network with blinded identities
CN115442036A (en) Split shuffle-based federated learning method, apparatus, device and medium
Zhang et al. Enhanced certificateless auditing protocols for cloud data management and transformative computation
Talwar Differential secrecy for distributed data and applications to robust differentially secure vector summation
Tian et al. DIVRS: Data integrity verification based on ring signature in cloud storage
Liu et al. A post quantum secure multi-party collaborative signature with deterability in the Industrial Internet of Things
DE102020113198A1 (en) Cryptographic operation
Malina et al. Trade-off between signature aggregation and batch verification
Zhang et al. Image encryption algorithm based on the Matryoshka transform and modular-inverse matrix
Garg Candidate multilinear maps
CN114465728B (en) Method, device, equipment and storage medium for attacking elliptic curve signature algorithm
Yuan et al. Efficient unrestricted identity-based aggregate signature scheme
Ren et al. BPFL: Blockchain-Based Privacy-Preserving Federated Learning against Poisoning Attack
CN114520728B (en) Distributed anonymous marking method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination