CN117134945A - Data processing method, system, device, computer equipment and storage medium - Google Patents

Data processing method, system, device, computer equipment and storage medium

Info

Publication number
CN117134945A
Authority
CN
China
Prior art keywords: parameter, ciphertext, training, client, servers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310911435.0A
Other languages
Chinese (zh)
Inventor
王小伟
张旭
孙华锦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Original Assignee
Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd filed Critical Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Priority to CN202310911435.0A
Publication of CN117134945A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 - Protecting data
    • G06F21/602 - Providing cryptographic facilities or services
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/04 - Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428 - Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L63/0442 - Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload wherein the sending and receiving network entities apply asymmetric encryption, i.e. different keys for encryption and decryption
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert or user's encryption keys not requiring secrecy
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40 - Network security protocols
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention relates to the technical field of data security, and discloses a data processing method, a system, a device, computer equipment and a storage medium, wherein the method comprises the following steps: receiving and summarizing a first training parameter ciphertext sent by a first client to obtain a first parameter ciphertext; the first parameter ciphertext is sent to other servers, and the second parameter ciphertext sent by the other servers is received; obtaining a current training parameter ciphertext based on the first parameter ciphertext and the second parameter ciphertext; calculating a first parameter based on a private key of a target server and a current training parameter ciphertext, and sending the first parameter to other servers; receiving a second parameter sent by other servers; decrypting the current training parameter ciphertext based on the first parameter and the second parameter to obtain the current training parameter; and updating parameters of the model to be trained in the first client and the second client according to the current training parameters so as to obtain a target model. The invention can improve the data security and the calculation efficiency.

Description

Data processing method, system, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of data security technologies, and in particular, to a data processing method, system, apparatus, computer device, and storage medium.
Background
With the rapid spread of intelligent terminal devices, large amounts of personal information are collected by various network platforms, which makes personal information leakage very likely. In recent years, individuals have become increasingly concerned about the privacy of their data, and regulators have continuously issued laws and regulations on personal privacy protection and network security. In the field of artificial intelligence, training a neural network model often requires a central server to aggregate the parameters of multiple clients before model training, and data leakage can easily occur while the clients send data to the central server. Federated learning, as a distributed machine learning paradigm, keeps data on the local client and thereby protects data privacy. However, in existing federated learning data protection schemes, a dedicated key management authority is usually required to distribute keys; once this key management authority is attacked or is untrustworthy, the keys are easily leaked, and the client data is then easily leaked as well.
Disclosure of Invention
In view of the above, the present invention provides a data processing method, system, apparatus, computer device and storage medium, so as to solve the problem of data leakage in the model training process.
In a first aspect, the present invention provides a data processing method applied to a target server, where the target server is connected with other servers, the target server is connected with a first client, the other servers are connected with a second client, and the first client and the second client are used for training the same model to be trained, and the method includes:
receiving a first training parameter ciphertext sent by the first client, wherein the first training parameter ciphertext is obtained by encrypting a first training parameter by utilizing public keys of the target server and the other servers, and the first training parameter is obtained by calculating based on the increment of a model to be trained, the loss function value and the local training data quantity;
obtaining a first parameter ciphertext based on the summary of all the first training parameter ciphertexts;
the first parameter ciphertext is sent to the other servers, and a second parameter ciphertext sent by the other servers is received, wherein the second parameter ciphertext is obtained by summarizing second training parameter ciphertext by the other servers;
Obtaining a current training parameter ciphertext based on the first parameter ciphertext and the second parameter ciphertext;
calculating a first parameter based on a private key of a target server and the current training parameter ciphertext, and sending the first parameter to the other servers;
receiving second parameters sent by the other servers, wherein the second parameters are calculated based on the private keys of the other servers and the current training parameter ciphertext;
decrypting the current training parameter ciphertext based on the first parameter and the second parameter to obtain a current training parameter;
and updating parameters of the model to be trained in the first client and the second client according to the current training parameters so as to obtain a target model.
According to the data processing method provided by this embodiment, the threshold encryption method adopted involves multiple servers and distributes the decryption task across them; a correct decryption result can be obtained only when all servers participate in the decryption process (that is, in calculating the first parameter and the second parameter), so the data are protected. During data processing, the communication relationship between clients and servers can be established according to the clients' regions, network signals, server bandwidth and other information; each server receives the data sent by its corresponding clients and performs a preliminary aggregation to obtain the first parameter ciphertext and the second parameter ciphertext, and the target server and the other servers then exchange and aggregate these to obtain the current training parameter ciphertext. Joint decryption based on the current training parameter ciphertext and each server's private key can effectively improve the network communication efficiency.
In an alternative embodiment, the public key is determined as follows:
acquiring a first calculation parameter to obtain a private key of the target server;
obtaining a second calculation parameter of the target server based on the private key of the target server;
receiving second calculation parameters of the other servers, wherein the second calculation parameters are obtained based on private keys of the other servers, and the private keys of the other servers are obtained based on the first calculation parameters;
the public key is determined based on the first calculation parameter and the second calculation parameter.
In an alternative embodiment, the public key is determined according to the following formula:

b = ∏_{i=1}^{n} b_i mod p^2

wherein b represents the public key, n represents the number of servers, i denotes the i-th server, b_i represents the second calculation parameter, b_i = g^{a_i} mod p^2, p represents the first calculation parameter, a_i represents the private key of the i-th server, g represents a primitive root of p^2, ρ = p + 1, and 0 < a_i < p(p-1).
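As an illustration of this joint key generation, here is a minimal Python sketch. It assumes the structure described above (each server draws a private exponent a_i and publishes b_i = g^{a_i} mod p^2, and the public key is the product of all b_i); the parameter values and helper names are placeholders rather than values prescribed by this disclosure, and in practice p must be a λ-bit prime and g a verified primitive root of p^2.

```python
import secrets

def keygen_share(p: int, g: int) -> tuple[int, int]:
    """One server's key share: private a_i with 0 < a_i < p(p-1), public b_i = g^a_i mod p^2."""
    a_i = secrets.randbelow(p * (p - 1) - 1) + 1
    b_i = pow(g, a_i, p * p)
    return a_i, b_i

def combine_public_key(published_shares: list[int], p: int) -> int:
    """Public key b = product of all published b_i, taken mod p^2."""
    b = 1
    for b_i in published_shares:
        b = (b * b_i) % (p * p)
    return b

# toy parameters for illustration only; a real deployment needs a large prime p
# and a verified primitive root g of p^2
p, g = 1_000_003, 2
keys = [keygen_share(p, g) for _ in range(3)]              # three servers
public_b = combine_public_key([b_i for _, b_i in keys], p)
```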
In an alternative embodiment, the current training parameters are determined as follows:

m = ((C_4 · (∏_{i=1}^{n} S_i)^{-1} mod p^2) - 1) / p

wherein m represents the current training parameter, p represents the first calculation parameter, C_3 represents the first component of the current training parameter ciphertext, C_4 represents the second component of the current training parameter ciphertext, a_i represents the private key of the i-th server, and S_i represents the first parameter or the second parameter of the i-th server, i.e. S_i = C_3^{a_i} mod p^2.
In an optional implementation manner, the receiving the first training parameter ciphertext sent by the first client includes:
the first client trains the model to be trained based on local training data to obtain increment and loss function value of the model to be trained, encrypts the local training data quantity, the product of the local training data quantity and the increment of the model to be trained and the product of the local training data quantity and the loss function value to obtain a first training parameter ciphertext, wherein the first training parameter comprises the product of the local training data quantity, the product of the local training data quantity and the increment of the model to be trained and the product of the local training data quantity and the loss function value;
and receiving a first training parameter ciphertext sent by the first client.
In an alternative embodiment, the first training parameter ciphertext is determined as follows:

C = (C_1, C_2) = (g^r mod p^2, ρ^m · b^r mod p^2)

wherein C represents the first training parameter ciphertext, C_1 represents the first component of the first training parameter ciphertext, C_2 represents the second component of the first training parameter ciphertext, r represents a random number, p represents the first calculation parameter, g represents a primitive root of p^2, ρ = p + 1, m represents the first training parameter, and b represents the public key.
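A corresponding encryption sketch in Python, under the same assumptions (ρ = p + 1, plaintext m already encoded as an integer in [0, p)); the nonce range shown is illustrative rather than taken from this disclosure:

```python
import secrets

def encrypt(m: int, b: int, p: int, g: int) -> tuple[int, int]:
    """C = (C1, C2) = (g^r mod p^2, rho^m * b^r mod p^2), with rho = p + 1."""
    n2 = p * p
    r = secrets.randbelow(p) + 1          # random number r (illustrative range)
    c1 = pow(g, r, n2)
    c2 = (pow(p + 1, m, n2) * pow(b, r, n2)) % n2
    return c1, c2
```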
In a second aspect, the present invention provides a data processing apparatus applied to a target server, where the target server is connected to another server, the target server is connected to a first client, and the other server is connected to a second client, and the first client and the second client are used for training a same model to be trained, where the apparatus includes:
the first ciphertext receiving module is used for receiving a first training parameter ciphertext sent by the first client, the first training parameter ciphertext is obtained by encrypting a first training parameter by utilizing public keys of the target server and the other servers, and the first training parameter is obtained by calculating based on the increment of a model to be trained, the loss function value and the local training data quantity;
the first parameter ciphertext determining module is used for obtaining a first parameter ciphertext based on summarizing all the first training parameter ciphertexts;
the first parameter ciphertext sending module is used for sending the first parameter ciphertext to the other servers and receiving second parameter ciphertext sent by the other servers, wherein the second parameter ciphertext is obtained by summarizing second training parameter ciphertext by the other servers;
The current training parameter ciphertext determining module is used for obtaining a current training parameter ciphertext based on the first parameter ciphertext and the second parameter ciphertext;
the first parameter calculation module is used for calculating a first parameter based on a private key of a target server and the current training parameter ciphertext and sending the first parameter to the other servers;
the second parameter receiving module is used for receiving the second parameters sent by the other servers, and the second parameters are calculated based on the private keys of the other servers and the current training parameter ciphertext;
the current training parameter decryption module is used for decrypting the current training parameter ciphertext based on the first parameter and the second parameter to obtain a current training parameter;
and the model determining module is used for updating parameters of the model to be trained in the first client and the second client according to the current training parameters so as to obtain a target model.
In a third aspect, the present invention provides a computer device comprising: the data processing system comprises a memory and a processor, wherein the memory and the processor are in communication connection, the memory stores computer instructions, and the processor executes the computer instructions, so that the data processing method of the first aspect or any corresponding implementation mode of the first aspect is executed.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the data processing method of the first aspect or any of its corresponding embodiments.
In a fifth aspect, the present invention provides a data processing system, the system comprising:
at least one server, the server including a target server and other servers, the target server being configured to perform the data processing method of the first aspect or any implementation manner corresponding to the first aspect;
the client comprises a first client and a second client, the first client is connected with the target server, and the second client is connected with the other servers.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow diagram of a data processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a data processing system according to an embodiment of the present invention;
FIG. 3 is a block diagram of a data processing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Federated learning, as a distributed machine learning paradigm, keeps data on the local client and thereby protects data privacy. In existing federated-learning-based machine learning, sharing local gradients or model data can lead to privacy disclosure. The prior art encrypts the private information before sending it to the server, which relies on the server being trusted, and this is practically impossible in many scenarios. In addition, when the clients are numerous and widely distributed, communication between certain clients and the server can suffer large network delays because of region, server bandwidth and similar factors; with only a single server, the communication efficiency is extremely low, which affects the overall learning efficiency. To protect the private information in the data, some researchers adopt differential privacy, adding noise to the model parameters so that an attacker cannot recover the private data; however, a malicious server can still obtain the clients' local gradients or model data and thereby the clients' private information. Current privacy protection methods typically require a dedicated key management authority to distribute keys; once the key management authority is attacked or untrusted, key leakage easily follows, and thus client data leakage. Based on the above, the embodiments of the present invention provide a data processing method to solve these problems.
According to an embodiment of the present invention, there is provided a data processing method embodiment, it being noted that the steps shown in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that herein.
In this embodiment, a data processing method is provided, which is applied to a target server, where the target server is connected to another server, the target server is connected to a first client, and the other server is connected to a second client, where the first client and the second client are used to train the same model to be trained, and fig. 1 is a flowchart of the data processing method according to an embodiment of the present invention, as shown in fig. 1, and the flowchart includes the following steps:
step S11, a first training parameter ciphertext sent by a first client is received.
The first training parameter ciphertext is obtained by encrypting the first training parameter by utilizing public keys of the target server and other servers, and the first training parameter is obtained by calculating based on the increment of the model to be trained, the loss function value and the local training data quantity.
A correspondence between clients and servers is established according to information such as region, network signal and server bandwidth, that is, it is determined which server each client communicates with during protocol execution. Each server may correspond to one or more clients; the first clients correspond to the target server and the second clients correspond to the other servers. There may be multiple first clients and multiple second clients; in this embodiment the second clients refer to all clients corresponding to the other servers, and the numbers of servers and clients are not limited.
The local training data of each client are different. The scheme involves multiple servers: the target server sends the model to be trained and the corresponding loss function to the other servers, and each server then sends the model to be trained and the corresponding loss function to its corresponding clients. Taking the target server as an example, the target server sends the model to be trained and the corresponding loss function to the corresponding first client; the first client trains the model to be trained based on its local training data and calculates the loss function value, where stochastic gradient descent can be adopted during training, and the specific training method is not limited. The first training parameters refer to parameters obtained while the first client trains the model, and include the increment of the model to be trained, the loss function value and the local training data quantity.
Let the parameters of the model to be trained be w = (w_1, …, w_d), that is, the model parameters can be regarded as a d-dimensional vector. Suppose there are m first clients, and let Set denote the index set of these m clients. For a client i ∈ Set, let the local model parameters obtained after training be w_i, so that the corresponding increment of the model to be trained is Δw_i = w_i - w. The local training data quantity is |Z_i| and the corresponding loss function value is F_i(w). Certain multiplications are performed among the increment of the model to be trained, the loss function value and the local training data quantity, the multiplication results are encrypted with the public key of the target server and the other servers to obtain the first training parameter ciphertext, and the first training parameter ciphertext is sent to the target server. The public key is generated jointly by all servers, and the encryption adopts a threshold encryption method.
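As a rough sketch of what a first client computes locally before encryption, assuming some training and loss routines are available (train_fn, loss_fn and all other names here are illustrative and not part of this disclosure, and the evaluation point of the loss is one possible reading of the description):

```python
def first_training_parameters(w_global, local_data, train_fn, loss_fn):
    """Return |Z_i|, |Z_i|*dw_i (per dimension) and |Z_i|*F_i(w), the quantities to be encrypted."""
    w_local = train_fn(w_global, local_data)                  # e.g. a few epochs of SGD
    delta_w = [wl - wg for wl, wg in zip(w_local, w_global)]  # increment of the model to be trained
    z_i = len(local_data)                                     # local training data quantity
    f_i = loss_fn(w_global, local_data)                       # local loss function value F_i(w)
    return z_i, [z_i * d for d in delta_w], z_i * f_i
```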
Step S12, obtaining a first parameter ciphertext based on the summary of all the first training parameter ciphertexts.
The target server receives the first training parameter ciphertexts sent by the first clients and aggregates the first training parameter ciphertexts of all first clients to obtain the first parameter ciphertext.
Taking the local training data quantity |Z_i| as an example, its encryption gives:

E(|Z_i|)

Take the j-th server as an example, and let Set_j denote the index set of the clients corresponding to the j-th server. According to the homomorphism of the threshold cryptosystem, the corresponding first parameter ciphertext is obtained by calculation as:

∏_{i ∈ Set_j} E(|Z_i|)

that is, a ciphertext of the total local training data quantity of all clients corresponding to the j-th server. Each type of data contained in the first training parameter ciphertexts is summarized in the same way, so the resulting first parameter ciphertext contains the corresponding ciphertext for every type of data.
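A minimal sketch of this server-side aggregation: because the scheme is additively homomorphic, summarizing ciphertexts amounts to multiplying corresponding components modulo p^2 (function names are illustrative):

```python
def aggregate_ciphertexts(ciphertexts: list[tuple[int, int]], p: int) -> tuple[int, int]:
    """Homomorphically sum the underlying plaintexts by multiplying components mod p^2."""
    n2 = p * p
    c1, c2 = 1, 1
    for x1, x2 in ciphertexts:
        c1 = (c1 * x1) % n2
        c2 = (c2 * x2) % n2
    return c1, c2

# e.g. the j-th server's first parameter ciphertext for the data quantities:
# first_param_ct = aggregate_ciphertexts([E_zi_of_each_connected_client], p)
```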
And step S13, the first parameter ciphertext is sent to other servers, and a second parameter ciphertext sent by the other servers is received, wherein the second parameter ciphertext is obtained by summarizing the second training parameter ciphertext by the other servers.
And the other servers send the same model to be trained and the corresponding loss function to the corresponding second client, the second client executes the same operation as the first client to obtain second training parameters, and the second training parameters are encrypted after being processed to obtain second training parameter ciphertext. And the second client sends the second training parameter ciphertext to the corresponding other servers, and the other servers summarize the second training parameter ciphertexts of all the second clients to obtain the second parameter ciphertext.
The target server sends the first parameter ciphertext to other servers and receives the second parameter ciphertext sent by the other servers. And the other servers receive the first parameter ciphertext sent by the target server.
Step S14, obtaining the current training parameter ciphertext based on the first parameter ciphertext and the second parameter ciphertext.
The target server obtains the second parameter ciphertext sent by the other servers and the first parameter ciphertext, where the second parameter ciphertext is obtained by summarizing the second training parameter ciphertexts of the second clients, and the first parameter ciphertext is obtained by summarizing the first training parameter ciphertexts of the first clients. The first parameter ciphertext and the second parameter ciphertext together form the current training parameter ciphertext, which can be understood as a summary of the ciphertext data of all clients corresponding to all servers.
Taking the local training data quantity |Z_i| as an example, the corresponding current training parameter ciphertext is:

∏_{j=1}^{n} ∏_{i ∈ Set_j} E(|Z_i|)

where n represents the total number of servers and j represents the j-th server.
And step S15, calculating to obtain a first parameter based on the private key of the target server and the current training parameter ciphertext, and sending the first parameter to other servers.
Each server has a corresponding private key, the private key of each server is not disclosed, calculation is carried out based on the private key and the current training parameter ciphertext to obtain a first parameter, and the first parameter can be understood as intermediate data in the current training parameter ciphertext decryption process.
And S16, receiving second parameters sent by the other servers, wherein the second parameters are calculated based on the private keys of the other servers and the current training parameter ciphertext.
Each server calculates a first parameter corresponding to each server based on the private key of the server and the current training parameter ciphertext, and in the embodiment, the first parameters calculated by other servers are defined as second parameters. The target server receives the second parameters sent by the other servers.
And step S17, decrypting the current training parameter ciphertext based on the first parameter and the second parameter to obtain the current training parameter.
The target server decrypts the current training parameter ciphertext based on the first parameter and the second parameters, and the other servers do the same; the decryption is computed from the first parameter and the second parameters.
Taking the local training data quantity |Z_i| as an example, the corresponding current training parameter ciphertext is:

∏_{j=1}^{n} ∏_{i ∈ Set_j} E(|Z_i|)

Decrypting it yields the total local training data quantity of the clients participating in this round of training:

Σ_{j=1}^{n} Σ_{i ∈ Set_j} |Z_i|

where n represents the total number of servers, j represents the j-th server, and i represents the i-th client.
Similarly, the current training parameters also include the increment and loss function values of the model to be trained of all clients.
And S18, updating parameters of the model to be trained in the first client and the second client according to the current training parameters to obtain a target model.
The parameters w of the model to be trained are updated based on the current training parameters:

w ← w + (Σ_{j=1}^{n} Σ_{i ∈ Set_j} |Z_i| · Δw_i) / (Σ_{j=1}^{n} Σ_{i ∈ Set_j} |Z_i|)

and the total loss function F(w) of this round is calculated:

F(w) = (Σ_{j=1}^{n} Σ_{i ∈ Set_j} |Z_i| · F_i(w)) / (Σ_{j=1}^{n} Σ_{i ∈ Set_j} |Z_i|)

The above process is repeated until the total loss function converges or the number of iterations meets the requirement; the corresponding parameters are then obtained, model training is completed, and the target model is obtained.
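Assuming the data-count-weighted update written above, the target server's update step could look like this sketch (names are illustrative):

```python
def update_global_model(w, sum_z, sum_z_dw, sum_z_loss):
    """Apply w <- w + (sum |Z_i|*dw_i)/(sum |Z_i|) and compute F(w) = (sum |Z_i|*F_i(w))/(sum |Z_i|)."""
    w_new = [w_k + s_k / sum_z for w_k, s_k in zip(w, sum_z_dw)]
    total_loss = sum_z_loss / sum_z
    return w_new, total_loss
```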
According to the data processing method provided by this embodiment, the threshold encryption method adopted involves multiple servers and distributes the decryption task across them; a correct decryption result can be obtained only when all servers participate in the decryption process (that is, in calculating the first parameter and the second parameter), so the data are protected. During data processing, the communication relationship between clients and servers can be established according to the clients' regions, network signals, server bandwidth and other information; each server receives the data sent by its corresponding clients and performs a preliminary aggregation to obtain the first parameter ciphertext and the second parameter ciphertext, and the target server and the other servers then exchange and aggregate these to obtain the current training parameter ciphertext. Joint decryption based on the current training parameter ciphertext and each server's private key can effectively improve the network communication efficiency.
In some alternative embodiments, step S11 in fig. 1 includes:
in step S111, the first client trains the model to be trained based on the local training data to obtain an increment and a loss function value of the model to be trained, encrypts the local training data amount, a product of the local training data amount and the increment of the model to be trained, and a product of the local training data amount and the loss function value to obtain a first training parameter ciphertext, where the first training parameter includes the local training data amount, a product of the local training data amount and the increment of the model to be trained, and a product of the local training data amount and the loss function value.
The first client trains the model to be trained and obtains the increment Δw_i of the model to be trained; the local training data quantity is |Z_i| and the corresponding loss function value is F_i(w). The first training parameters comprise: the local training data quantity |Z_i|; the product of the local training data quantity and the increment of the model to be trained, |Z_i| · Δw_i; and the product of the local training data quantity and the loss function value, |Z_i| · F_i(w).

Encrypting the first training parameters gives the first training parameter ciphertexts, which respectively comprise:

the local training data quantity ciphertext E(|Z_i|);

the ciphertext of the product of the local training data quantity and the increment of the model to be trained, E_0(|Z_i| · Δw_i);

the ciphertext of the product of the local training data quantity and the loss function value, E_0(|Z_i| · F_i(w)).
Step S112, receiving a first training parameter ciphertext sent by the first client.
In some alternative embodiments, the first training parameter ciphertext in step S111 is determined as follows:

C = (C_1, C_2) = (g^r mod p^2, ρ^m · b^r mod p^2)

wherein C represents the first training parameter ciphertext, C_1 represents the first component of the first training parameter ciphertext, C_2 represents the second component of the first training parameter ciphertext, r represents a random number, p represents the first calculation parameter, g represents a primitive root of p^2, ρ = p + 1, m represents the first training parameter, and b represents the public key.

Here a mod c denotes a modulo c, i.e. the remainder of a divided by c, and a ≡ b mod c means that a and b leave the same remainder when divided by c. If ab ≡ 1 mod c, then b is the multiplicative inverse of a modulo c, written b ≡ a^{-1} mod c; a is likewise the multiplicative inverse of b modulo c. An assignment symbol indicates that the numerical result of the expression on the right is assigned to the variable on the left.

The servers set a security parameter λ and select a λ-bit large prime p as the first calculation parameter. Each server P_i selects a_i (0 < a_i < p(p-1)) as its own private key, and the public key b is jointly generated from the private keys of the respective servers.
Since the data precision in practical application is limited, the data can be converted into an integer by uniformly multiplying the data by a certain multiple (assumed to be mul) before encryption. Then p is chosen to be large enough that the absolute value of the sum of the data after conversion to integers does not exceed p/2. And then adding p to the data smaller than 0, converting all the data into natural numbers, and directly encrypting by using the threshold cryptographic algorithm. After the decryption of the threshold cryptographic algorithm, if the data is larger than p/2, subtracting p from the data, dividing the data by mul, and if the data is smaller than p/2, directly dividing the data by mul to obtain a final decryption result.
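A sketch of this fixed-point encoding and decoding; the factor mul and the p/2 convention follow the paragraph above, while the function names are illustrative:

```python
def encode_real(x: float, mul: int, p: int) -> int:
    """Scale x to an integer, then map negatives into [0, p) by adding p (assumes |x*mul| < p/2)."""
    return round(x * mul) % p

def decode_real(v: int, mul: int, p: int) -> float:
    """Inverse mapping: decrypted values above p/2 represent negative numbers."""
    if v > p // 2:
        v -= p
    return v / mul
```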
According to the homomorphism of the threshold cryptographic algorithm, it is easy to verify that the (n, n) threshold cryptosystem generalized to the real number range still satisfies additive homomorphism. Let E_0(m) and D_0(c) denote, respectively, the encryption of a plaintext m and the decryption of a ciphertext c after the above threshold encryption scheme is generalized to the real number range. When -p/2 ≤ (m_1 + m_2) · mul ≤ p/2:

D_0(E_0(m_1) · E_0(m_2)) = m_1 + m_2
All servers jointly generate the key of a threshold cryptosystem satisfying additive homomorphism, agree to uniformly multiply real numbers by the factor mul when encrypting, and publish the factor mul together with the public key b.
In this embodiment, the corresponding ciphertexts are:

the local training data quantity ciphertext E(|Z_i|);

the ciphertext of the product of the local training data quantity and the increment of the model to be trained: |Z_i| · Δw_i is first multiplied by the factor mul, then p is added to values smaller than 0; since this product requires the algorithm extended to the real number range, the encryption function E_0(·) is used here, giving E_0(|Z_i| · Δw_i);

the ciphertext of the product of the local training data quantity and the loss function value: |Z_i| · F_i(w) is likewise first multiplied by the factor mul, then p is added to values smaller than 0, and the encryption function E_0(·) is used, giving E_0(|Z_i| · F_i(w)).
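Reusing the encrypt and encode_real helpers sketched earlier, a hedged illustration of how a first client could assemble its first training parameter ciphertext (field names and the per-dimension handling are assumptions for illustration):

```python
def encrypt_real(x: float, mul: int, b: int, p: int, g: int) -> tuple[int, int]:
    """E_0(x): fixed-point encode, then threshold-encrypt."""
    return encrypt(encode_real(x, mul, p), b, p, g)

def build_first_training_ciphertext(z_i, z_dw, z_loss, mul, b, p, g):
    return {
        "E_zi": encrypt(z_i, b, p, g),                               # E(|Z_i|)
        "E0_z_dw": [encrypt_real(v, mul, b, p, g) for v in z_dw],    # E_0(|Z_i| * dw_i) per dimension
        "E0_z_loss": encrypt_real(z_loss, mul, b, p, g),             # E_0(|Z_i| * F_i(w))
    }
```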
in some alternative embodiments, the public key in the embodiment depicted in FIG. 1 is determined as follows:
step S21, obtaining a first calculation parameter to obtain a private key of the target server.
The servers set a security parameter λ and select a λ-bit large prime p as the first calculation parameter; the target server P_i selects a_i (0 < a_i < p(p-1)) as its own private key. Similarly, the private keys of the other servers also lie in the range (0, p(p-1)).
Step S22, based on the private key of the target server, obtaining a second calculation parameter of the target server.
The second calculation parameter is b_i = g^{a_i} mod p^2.
In step S23, a second calculation parameter of the other server is received, the second calculation parameter is obtained based on the private key of the other server, and the private key of the other server is obtained based on the first calculation parameter.
The second calculation parameters of the other servers are obtained in the same way as in step S22.
Step S24, determining the public key based on the first calculation parameter and the second calculation parameter.
The public key b is jointly generated based on the private keys of all the servers.
In some alternative embodiments, the public key is determined according to the following formula:

b = ∏_{i=1}^{n} b_i mod p^2

wherein b represents the public key, n represents the number of servers, i denotes the i-th server, b_i represents the second calculation parameter, b_i = g^{a_i} mod p^2, p represents the first calculation parameter, a_i represents the private key of the i-th server, g represents a primitive root of p^2, ρ = p + 1, and 0 < a_i < p(p-1).
In some alternative embodiments, the current training parameters are determined as follows:

m = ((C_4 · (∏_{i=1}^{n} S_i)^{-1} mod p^2) - 1) / p

wherein m represents the current training parameter, p represents the first calculation parameter, C_3 represents the first component of the current training parameter ciphertext, C_4 represents the second component of the current training parameter ciphertext, a_i represents the private key of the i-th server, and S_i represents the first parameter or the second parameter of the i-th server, i.e. S_i = C_3^{a_i} mod p^2.
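A sketch of the joint decryption matching this formula: each server contributes its share S_i = C_3^{a_i} mod p^2, and any server that has collected all shares can recover m (function names are illustrative; Python 3.8+ is assumed for the modular inverse):

```python
def decryption_share(c3: int, a_i: int, p: int) -> int:
    """S_i = C3^{a_i} mod p^2, the first (or second) parameter computed by each server."""
    return pow(c3, a_i, p * p)

def combine_shares(c4: int, shares: list[int], p: int) -> int:
    """m = ((C4 * (prod S_i)^(-1) mod p^2) - 1) / p."""
    n2 = p * p
    s = 1
    for s_i in shares:
        s = (s * s_i) % n2
    rho_m = (c4 * pow(s, -1, n2)) % n2
    return (rho_m - 1) // p
```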
Taking n servers in total, the target server is the jth server, and the first client is the ith client as an example:
The first training parameters comprise: the local training data quantity |Z_i|; the product of the local training data quantity and the increment of the model to be trained, |Z_i| · Δw_i; and the product of the local training data quantity and the loss function value, |Z_i| · F_i(w).

The first training parameter ciphertext comprises:

the local training data quantity ciphertext E(|Z_i|);

the ciphertext of the product of the local training data quantity and the increment of the model to be trained, E_0(|Z_i| · Δw_i);

the ciphertext of the product of the local training data quantity and the loss function value, E_0(|Z_i| · F_i(w)).
The first parameter ciphertext comprises:

the aggregated ciphertext of the local training data quantities of all first clients: ∏_{i ∈ Set_j} E(|Z_i|);

the aggregated ciphertext of the products of the local training data quantity and the increment of the model to be trained of all first clients: ∏_{i ∈ Set_j} E_0(|Z_i| · Δw_i);

the aggregated ciphertext of the products of the local training data quantity and the loss function value of all first clients: ∏_{i ∈ Set_j} E_0(|Z_i| · F_i(w)).
The current training parameter ciphertext comprises:

the aggregated ciphertext of the local training data quantities of the first clients and the second clients: ∏_{j=1}^{n} ∏_{i ∈ Set_j} E(|Z_i|);

the aggregated ciphertext of the products of the local training data quantity and the increment of the model to be trained of the first clients and the second clients: ∏_{j=1}^{n} ∏_{i ∈ Set_j} E_0(|Z_i| · Δw_i);

the aggregated ciphertext of the products of the local training data quantity and the loss function value of the first clients and the second clients: ∏_{j=1}^{n} ∏_{i ∈ Set_j} E_0(|Z_i| · F_i(w)).
The current training parameters comprise:

the total local training data quantity of the first clients and the second clients: Σ_{j=1}^{n} Σ_{i ∈ Set_j} |Z_i|;

the sum of the products of the local training data quantity and the increment of the model to be trained of the first clients and the second clients: Σ_{j=1}^{n} Σ_{i ∈ Set_j} |Z_i| · Δw_i;

the sum of the products of the local training data quantity and the loss function value of the first clients and the second clients: Σ_{j=1}^{n} Σ_{i ∈ Set_j} |Z_i| · F_i(w).
The calculation of the current training parameters is described in detail below. The decryption formula is:

m = ((C_4 · (∏_{i=1}^{n} S_i)^{-1} mod p^2) - 1) / p

wherein m represents the current training parameter, p represents the first calculation parameter, C_3 represents the first component of the current training parameter ciphertext, C_4 represents the second component of the current training parameter ciphertext, a_i represents the private key of the i-th server, and S_i represents the first parameter or the second parameter of the i-th server, i.e. S_i = C_3^{a_i} mod p^2.
For the aggregated ciphertext of the local training data quantities of the first clients and the second clients, ∏_{j=1}^{n} ∏_{i ∈ Set_j} E(|Z_i|), at decryption C_3 and C_4 are taken as its first and second components respectively, i.e. the componentwise products of the first and second components of all the ciphertexts E(|Z_i|). Substituting C_3 and C_4 into the above formula and solving gives m, the total local training data quantity of the first clients and the second clients.
For the aggregated ciphertext of the products of the local training data quantity and the increment of the model to be trained, ∏_{j=1}^{n} ∏_{i ∈ Set_j} E_0(|Z_i| · Δw_i), C_3 and C_4 are likewise taken as its first and second components at decryption and substituted into the formula. If the resulting value is larger than p/2, p is subtracted from it; otherwise it is left unchanged. The value is then divided by the factor mul, and the resulting m is the sum of the products of the local training data quantity and the increment of the model to be trained over the first clients and the second clients.
For the aggregated ciphertext of the products of the local training data quantity and the loss function value, ∏_{j=1}^{n} ∏_{i ∈ Set_j} E_0(|Z_i| · F_i(w)), C_3 and C_4 are again taken as its first and second components at decryption and substituted into the formula. If the resulting value is larger than p/2, p is subtracted from it; otherwise it is left unchanged. The value is then divided by the factor mul, and the resulting m is the sum of the products of the local training data quantity and the loss function value over the first clients and the second clients.
The data processing method provided by the embodiments of the present invention gives a federated learning privacy protection method based on a threshold cryptosystem. An (n, n) threshold encryption method is adopted for encryption, in which all n participants must take part in the decryption process for a ciphertext to be decrypted correctly. The method is as follows:

1. Key generation: set a security parameter λ, select a λ-bit large prime p and a primitive root g of p^2, and set ρ = p + 1. Participant P_i selects a_i (0 < a_i < p(p-1)) as its own private key, computes and discloses b_i = g^{a_i} mod p^2, and the participants then jointly generate the public key:

b = ∏_{i=1}^{n} b_i mod p^2
2. encryption: the plaintext to be encrypted isSelecting a random number r, and calculating to obtain ciphertext:
C=(C 1 ,C 2 )=(g r modp 2 ,ρ m b r modp 2 )
C=(C 1 ,C 2 )=(g r modp 2 ,ρ m b r modp 2 )
C 1 representing the first component of ciphertext, C 2 Representing a second component of the ciphertext.
3. Joint decryption: p (P) i Calculation ofAnd publishes the results. Then, after the participant who needs to decrypt receives the results published by other participants, further calculate +.>And find:
S i representing the intermediate data to be decrypted,expressed in the modulus p 2 In the sense C 1 A of (2) i To the power.
Let E(m) denote encryption of a plaintext m with the above threshold cryptosystem and D(c) denote decryption of a ciphertext c with it. It is readily verified that:

D(E(m_1) · E(m_2)) = m_1 + m_2 mod p

where E(m_1) · E(m_2) denotes componentwise multiplication of the two ciphertext vectors. When 0 ≤ m_1 + m_2 < p, this gives

D(E(m_1) · E(m_2)) = m_1 + m_2
Regarding the extension of this algorithm to the real number range: since the data precision in practical applications is limited, the data can be converted into integers by uniformly multiplying them by a certain factor (assumed to be mul) before encryption. The prime p is chosen large enough that the absolute value of the sum of the data, after conversion to integers, does not exceed p/2. Then p is added to data smaller than 0, converting all data into natural numbers, which are encrypted directly with the above threshold cryptographic algorithm. After decryption with the threshold cryptographic algorithm, if a value is larger than p/2, p is subtracted from it and the result is divided by mul; if it is smaller than p/2, it is divided by mul directly, yielding the final decryption result.

According to the homomorphism of the threshold cryptographic algorithm, it is easy to verify that the (n, n) threshold cryptosystem generalized to the real number range still satisfies additive homomorphism. Let E_0(m) and D_0(c) denote encryption of a plaintext m and decryption of a ciphertext c, respectively, after the above threshold encryption scheme is generalized to the real number range. When -p/2 ≤ (m_1 + m_2) · mul ≤ p/2, there is D_0(E_0(m_1) · E_0(m_2)) = m_1 + m_2.
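To show how the pieces fit together, here is a small end-to-end check of the (n, n) scheme and its additive homomorphism over scaled reals. It is a sketch under the same assumptions as the earlier snippets (toy prime, g = 2 not verified as a primitive root of p^2, which the arithmetic of this check does not require); it is not a production implementation of the disclosed method.

```python
import secrets

def demo(n_servers: int = 3) -> None:
    p, g, mul = 1_000_003, 2, 1000
    n2 = p * p
    # 1. key generation: each server keeps a_i; the public key is g^(sum a_i) mod p^2
    priv = [secrets.randbelow(p * (p - 1) - 1) + 1 for _ in range(n_servers)]
    pub = 1
    for a in priv:
        pub = (pub * pow(g, a, n2)) % n2
    # 2. encrypt two scaled real numbers
    def enc(x: float) -> tuple[int, int]:
        m = round(x * mul) % p
        r = secrets.randbelow(p) + 1
        return pow(g, r, n2), (pow(p + 1, m, n2) * pow(pub, r, n2)) % n2
    ct_a, ct_b = enc(1.25), enc(-0.75)
    # 3. homomorphic addition, then joint decryption with every server's share
    c3 = (ct_a[0] * ct_b[0]) % n2
    c4 = (ct_a[1] * ct_b[1]) % n2
    s = 1
    for a in priv:
        s = (s * pow(c3, a, n2)) % n2        # product of the shares S_i = C3^{a_i}
    m = ((c4 * pow(s, -1, n2)) % n2 - 1) // p
    if m > p // 2:                           # map back to a signed value, then unscale
        m -= p
    print(m / mul)                           # expected output: 0.5

demo()
```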
In the federated learning process, assume there is one server and M clients. Let Z_i denote the data set of the i-th client and |Z_i| the number of data items in the set Z_i. The loss of the i-th client with respect to Z_i is defined as:

F_i(w) = (1 / |Z_i|) · Σ_{z ∈ Z_i} f(w; z)

where f(w; z) is a loss function specified by the user; a mean square error loss function, a cross entropy loss function or the like can be selected as required.
The overall loss function is defined as:

F(w) = Σ_{i=1}^{M} ψ_i · F_i(w)

where ψ_i is the weight of the i-th client and Σ_{i=1}^{M} ψ_i = 1; one commonly used arrangement is to assign the weights according to the number of data items on each client, i.e. ψ_i = |Z_i| / Σ_{k=1}^{M} |Z_k|.
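A short sketch of these two definitions with the data-count weighting mentioned above (the callable f and the container types are illustrative):

```python
def local_loss(w, Z_i, f):
    """F_i(w) = (1/|Z_i|) * sum of f(w, z) over z in Z_i."""
    return sum(f(w, z) for z in Z_i) / len(Z_i)

def overall_loss(w, datasets, f):
    """F(w) = sum_i psi_i * F_i(w), with psi_i proportional to |Z_i|."""
    total = sum(len(Z) for Z in datasets)
    return sum((len(Z) / total) * local_loss(w, Z, f) for Z in datasets)
```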
Multiple servers are used, and clients are assigned to designated servers according to information such as region and server bandwidth; when the clients are very numerous and widely scattered, this relieves network congestion and delay to some extent. Using a threshold cryptosystem that satisfies additive homomorphism can effectively resist collusion attacks by the servers against the client data. A federated learning architecture involving multiple servers is provided, and a threshold key satisfying additive homomorphism is generated among the servers; because an (n, n) threshold cryptosystem is used, the keys held by all servers must be obtained in order to obtain a user's private data, which effectively prevents the servers from attacking the users' private data.
Also provided in this embodiment is a data processing system, referring to fig. 2, the system includes:
at least one server, where the servers include a target server and other servers, and the target server is configured to perform the data processing method described above;
the client comprises a first client and a second client, wherein the first client is connected with the target server, and the second client is connected with other servers.
In this embodiment, a data processing device is further provided, and the device is used to implement the foregoing embodiments and preferred embodiments, and will not be described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The present embodiment provides a data processing apparatus, as shown in fig. 3, including:
the first ciphertext receiving module 31 is configured to receive a first training parameter ciphertext sent by the first client, where the first training parameter ciphertext is obtained by encrypting a first training parameter by using a public key of the target server and the other servers, and the first training parameter is calculated based on an increment of a model to be trained, a loss function value, and a local training data amount;
A first parameter ciphertext determination module 32, configured to obtain a first parameter ciphertext based on a summary of all the first training parameter ciphertexts;
the first parameter ciphertext sending module 33 is configured to send the first parameter ciphertext to the other server, and receive a second parameter ciphertext sent by the other server, where the second parameter ciphertext is obtained by summarizing the second training parameter ciphertext by the other server;
the current training parameter ciphertext determination module 34 is configured to obtain a current training parameter ciphertext based on the first parameter ciphertext and the second parameter ciphertext;
a first parameter calculation module 35, configured to calculate a first parameter based on a private key of a target server and the current training parameter ciphertext, and send the first parameter to the other servers;
a second parameter receiving module 36, configured to receive a second parameter sent by the other server, where the second parameter is calculated based on a private key of the other server and the current training parameter ciphertext;
a current training parameter decryption module 37, configured to decrypt the current training parameter ciphertext based on the first parameter and the second parameter, to obtain a current training parameter;
The model determining module 38 is configured to update parameters of the model to be trained in the first client and the second client according to the current training parameters, so as to obtain a target model.
In some alternative embodiments, the apparatus further comprises:
the private key generation module is used for acquiring a first calculation parameter to obtain a private key of the target server;
the second calculation parameter generation module is used for obtaining second calculation parameters of the target server based on the private key of the target server;
the second calculation parameter receiving module is used for receiving second calculation parameters of the other servers, the second calculation parameters are obtained based on private keys of the other servers, and the private keys of the other servers are obtained based on the first calculation parameters;
and the public key determining module is used for determining the public key based on the first calculation parameter and the second calculation parameter.
In some alternative embodiments, the public key is determined according to the following formula:

b = ∏_{i=1}^{n} b_i mod p^2

wherein b represents the public key, n represents the number of servers, i denotes the i-th server, b_i represents the second calculation parameter, b_i = g^{a_i} mod p^2, p represents the first calculation parameter, a_i represents the private key of the i-th server, g represents a primitive root of p^2, ρ = p + 1, and 0 < a_i < p(p-1).
In some alternative embodiments, the current training parameters are determined as follows:

m = ((C_4 · (∏_{i=1}^{n} S_i)^{-1} mod p^2) - 1) / p

wherein m represents the current training parameter, p represents the first calculation parameter, C_3 represents the first component of the current training parameter ciphertext, C_4 represents the second component of the current training parameter ciphertext, a_i represents the private key of the i-th server, and S_i represents the first parameter or the second parameter of the i-th server, i.e. S_i = C_3^{a_i} mod p^2.
In some alternative embodiments, the first ciphertext receiving module 31 may comprise:
the client processing unit is used for training the model to be trained based on the local training data by the first client to obtain an increment and a loss function value of the model to be trained, encrypting the local training data quantity, the product of the local training data quantity and the increment of the model to be trained and the product of the local training data quantity and the loss function value to obtain a first training parameter ciphertext, wherein the first training parameter comprises the local training data quantity, the product of the local training data quantity and the increment of the model to be trained and the product of the local training data quantity and the loss function value;
and the data receiving unit is used for receiving the first training parameter ciphertext sent by the first client.
In some alternative embodiments, the first training parameter ciphertext is determined as follows:

C = (C_1, C_2) = (g^r mod p^2, ρ^m · b^r mod p^2)

wherein C represents the first training parameter ciphertext, C_1 represents the first component of the first training parameter ciphertext, C_2 represents the second component of the first training parameter ciphertext, r represents a random number, p represents the first calculation parameter, g represents a primitive root of p^2, ρ = p + 1, m represents the first training parameter, and b represents the public key.
Further functional descriptions of the above respective modules and units are the same as those of the above corresponding embodiments, and are not repeated here.
The data processing apparatus in this embodiment is presented in the form of functional units, where the units refer to ASIC (Application Specific Integrated Circuit ) circuits, processors and memories executing one or more software or firmware programs, and/or other devices that can provide the above described functionality.
The embodiment of the invention also provides computer equipment, which is provided with the data processing device shown in the figure 3.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a computer device according to an alternative embodiment of the present invention, as shown in fig. 4, the computer device includes: one or more processors 10, memory 20, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the computer device, including instructions stored in or on memory to display graphical information of the GUI on an external input/output device, such as a display device coupled to the interface. In some alternative embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories and multiple memories. Also, multiple computer devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 10 is illustrated in fig. 4.
The processor 10 may be a central processor, a network processor, or a combination thereof. The processor 10 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
Wherein the memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform the methods shown in implementing the above embodiments.
The memory 20 may include a storage program area and a storage data area; the storage program area may store an operating system and at least one application program required for the functions, and the storage data area may store data created according to the use of the computer device. In addition, the memory 20 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, the memory 20 may optionally include memory located remotely from the processor 10, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Memory 20 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as flash memory, hard disk, or solid state disk; the memory 20 may also comprise a combination of the above types of memories.
The computer device also includes a communication interface 30 for the computer device to communicate with other devices or communication networks.
The embodiments of the present invention also provide a computer-readable storage medium. The method according to the above embodiments of the present invention may be implemented in hardware or firmware, or as computer code that is recorded on a storage medium, or as computer code that is originally stored on a remote storage medium or a non-transitory machine-readable storage medium and is downloaded through a network to be stored on a local storage medium, so that the method described herein may be carried out by software stored on a storage medium using a general-purpose computer, a special-purpose processor, or programmable or special-purpose hardware. The storage medium can be a magnetic disk, an optical disk, a read-only memory, a random access memory, a flash memory, a hard disk, a solid state disk, or the like; further, the storage medium may also comprise a combination of the above types of memory. It will be appreciated that a computer, processor, microprocessor controller, or programmable hardware includes a storage element that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the methods illustrated by the above embodiments.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.

Claims (10)

1. A data processing method, characterized in that the method is applied to a target server, the target server is connected with other servers, the target server is connected with a first client, the other servers are connected with a second client, and the first client and the second client are used for training the same model to be trained, the method comprising the following steps:
receiving a first training parameter ciphertext sent by the first client, wherein the first training parameter ciphertext is obtained by encrypting a first training parameter by utilizing public keys of the target server and the other servers, and the first training parameter is obtained by calculating based on the increment of a model to be trained, the loss function value and the local training data quantity;
obtaining a first parameter ciphertext based on the summary of all the first training parameter ciphertexts;
sending the first parameter ciphertext to the other servers, and receiving a second parameter ciphertext sent by the other servers, wherein the second parameter ciphertext is obtained by the other servers summarizing second training parameter ciphertexts;
obtaining a current training parameter ciphertext based on the first parameter ciphertext and the second parameter ciphertext;
calculating a first parameter based on a private key of a target server and the current training parameter ciphertext, and sending the first parameter to the other servers;
receiving second parameters sent by the other servers, wherein the second parameters are calculated based on the private keys of the other servers and the current training parameter ciphertext;
decrypting the current training parameter ciphertext based on the first parameter and the second parameter to obtain a current training parameter;
and updating parameters of the model to be trained in the first client and the second client according to the current training parameters so as to obtain a target model.
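To make the interaction recited in claim 1 easier to follow, the Python sketch below walks through the flow end to end for two servers, each with a single client. It is a hedged illustration only: the class and variable names, the toy parameter sizes, and the encoding of model increments and loss values as integers are assumptions for demonstration, not elements of the claimed method.

import secrets

p = 1_000_003; p2 = p * p; g = 2; rho = p + 1   # toy parameters, see the encryption sketch above

class Server:
    def __init__(self):
        self.a = secrets.randbelow(p * (p - 1) - 1) + 1    # private key a_i
        self.b_i = pow(g, self.a, p2)                       # second calculation parameter b_i

def encrypt(m, b):
    r = secrets.randbelow(p - 1) + 1
    return pow(g, r, p2), (pow(rho, m, p2) * pow(b, r, p2)) % p2

def combine(cts):
    """Component-wise product of ciphertexts: the plaintexts add up."""
    c1 = c2 = 1
    for x, y in cts:
        c1, c2 = (c1 * x) % p2, (c2 * y) % p2
    return c1, c2

servers = [Server(), Server()]
b = 1
for s in servers:
    b = (b * s.b_i) % p2                                    # joint public key (see claims 2-3)

# Each client reports (n_k, n_k * delta_k, n_k * loss_k) as first training parameter
# ciphertexts; delta and loss are assumed to be fixed-point encoded integers here.
report_a = [encrypt(120, b), encrypt(120 * 7, b), encrypt(120 * 3, b)]   # client of the target server
report_b = [encrypt(80, b),  encrypt(80 * 5, b),  encrypt(80 * 4, b)]    # client of the other server

# Each server summarizes its own clients' ciphertexts (first / second parameter ciphertext),
# the summaries are exchanged, and both sides form the current training parameter ciphertext.
first_param_ct = [combine([c]) for c in report_a]
second_param_ct = [combine([c]) for c in report_b]
current_ct = [combine([x, y]) for x, y in zip(first_param_ct, second_param_ct)]

# Joint decryption: every server contributes S_i = C1^{a_i} mod p^2 (see claim 4).
def decrypt(ct, servers):
    shares = [pow(ct[0], s.a, p2) for s in servers]
    denom = 1
    for s_i in shares:
        denom = (denom * s_i) % p2
    return (((ct[1] * pow(denom, -1, p2)) % p2) - 1) // p

n_total, sum_n_delta, sum_n_loss = (decrypt(ct, servers) for ct in current_ct)
print(n_total, sum_n_delta / n_total, sum_n_loss / n_total)   # 200 6.2 3.4 -> weighted model update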
2. The method of claim 1, wherein the public key is determined as follows:
acquiring a first calculation parameter to obtain a private key of the target server;
obtaining a second calculation parameter of the target server based on the private key of the target server;
receiving second calculation parameters of the other servers, wherein the second calculation parameters are obtained based on private keys of the other servers, and the private keys of the other servers are obtained based on the first calculation parameters;
determining the public key based on the first calculation parameter and the second calculation parameters.
3. The method of claim 2, wherein the public key is determined according to the following formula:
b = ∏(i=1..n) b_i mod p², with b_i = g^(a_i) mod p²
wherein b represents the public key, n represents the number of servers, i represents the i-th server, b_i represents the second calculation parameter of the i-th server, p represents the first calculation parameter, a_i represents the private key of the i-th server, g represents a primitive root modulo p², ρ = p + 1, and 0 < a_i < p(p−1).
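As an illustration of the key setup in claims 2 and 3, the short Python sketch below generates private keys a_i, second calculation parameters b_i = g^(a_i) mod p², and the combined public key. The parameter sizes and the choice g = 2 (assumed to be a primitive root modulo p²) are toy assumptions for demonstration only.

import secrets

p = 1_000_003                 # first calculation parameter (a prime; toy size, not secure)
p2 = p * p
g = 2                         # assumed primitive root modulo p^2

n = 3                         # number of servers
private_keys = [secrets.randbelow(p * (p - 1) - 1) + 1 for _ in range(n)]   # a_i, 0 < a_i < p(p-1)
second_params = [pow(g, a_i, p2) for a_i in private_keys]                    # b_i = g^{a_i} mod p^2

# Each server broadcasts its b_i; the shared public key is the product of all of them.
b = 1
for b_i in second_params:
    b = (b * b_i) % p2        # b = prod_{i=1}^{n} b_i mod p^2

assert b == pow(g, sum(private_keys), p2)   # equivalently g^{sum a_i} mod p^2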
4. The method of claim 2, wherein the current training parameter is determined according to the following formula:
S_i = C_3^(a_i) mod p², m = ((C_4·(∏(i=1..n) S_i)^(−1) mod p²) − 1) / p
wherein m represents the current training parameter, p represents the first calculation parameter, C_3 represents the first component of the current training parameter ciphertext, C_4 represents the second component of the current training parameter ciphertext, a_i represents the private key of the i-th server, and S_i represents the first parameter or the second parameter of the i-th server.
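The following Python sketch checks the recovery formula of claim 4 numerically under the same toy assumptions as above; the plaintext value and all parameter sizes are illustrative only.

import secrets

p = 1_000_003; p2 = p * p; g = 2; rho = p + 1

a = [secrets.randbelow(p * (p - 1) - 1) + 1 for _ in range(3)]   # private keys a_i of three servers
b = 1
for a_i in a:
    b = (b * pow(g, a_i, p2)) % p2                               # joint public key

m_true = 424_242                                                 # current training parameter (plaintext, < p)
r = secrets.randbelow(p - 1) + 1
C3, C4 = pow(g, r, p2), (pow(rho, m_true, p2) * pow(b, r, p2)) % p2

S = [pow(C3, a_i, p2) for a_i in a]        # each server's share S_i = C3^{a_i} mod p^2
prod_S = 1
for s_i in S:
    prod_S = (prod_S * s_i) % p2

m = ((C4 * pow(prod_S, -1, p2)) % p2 - 1) // p                   # claim-4 recovery formula
assert m == m_true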
5. The method of claim 1, wherein receiving the first training parameter ciphertext sent by the first client comprises:
the first client trains the model to be trained based on local training data to obtain an increment and a loss function value of the model to be trained, and encrypts the local training data quantity, the product of the local training data quantity and the increment of the model to be trained, and the product of the local training data quantity and the loss function value to obtain the first training parameter ciphertext, wherein the first training parameter comprises the local training data quantity, the product of the local training data quantity and the increment of the model to be trained, and the product of the local training data quantity and the loss function value;
and receiving the first training parameter ciphertext sent by the first client.
6. The method of claim 5, wherein the first training parameter ciphertext is determined as follows:
C = (C_1, C_2) = (g^r mod p², ρ^m·b^r mod p²)
wherein C represents the first training parameter ciphertext, C_1 represents the first component of the first training parameter ciphertext, C_2 represents the second component of the first training parameter ciphertext, r represents a random number, p represents the first calculation parameter, g represents a primitive root modulo p², ρ = p + 1, m represents the first training parameter, and b represents the public key.
7. A data processing system, the system comprising:
at least one server, wherein the at least one server comprises a target server for performing the data processing method of any one of claims 1 to 6, and other servers;
the client comprises a first client and a second client, the first client is connected with the target server, and the second client is connected with the other servers.
8. A data processing apparatus, characterized in that it is applied to a target server, the target server is connected with other servers, the target server is connected with a first client, the other servers are connected with a second client, and the first client and the second client are used for training the same model to be trained, the apparatus comprising:
the first ciphertext receiving module is used for receiving a first training parameter ciphertext sent by the first client, wherein the first training parameter ciphertext is obtained by encrypting a first training parameter with the public keys of the target server and the other servers, and the first training parameter is obtained by calculation based on the increment of the model to be trained, the loss function value, and the local training data quantity;
the first parameter ciphertext determining module is used for obtaining a first parameter ciphertext based on summarizing all the first training parameter ciphertexts;
the first parameter ciphertext sending module is used for sending the first parameter ciphertext to the other servers and receiving second parameter ciphertext sent by the other servers, wherein the second parameter ciphertext is obtained by summarizing second training parameter ciphertext by the other servers;
the current training parameter ciphertext determining module is used for obtaining a current training parameter ciphertext based on the first parameter ciphertext and the second parameter ciphertext;
the first parameter calculation module is used for calculating a first parameter based on a private key of a target server and the current training parameter ciphertext and sending the first parameter to the other servers;
the second parameter receiving module is used for receiving second parameters sent by the other servers, wherein the second parameters are calculated based on the private keys of the other servers and the current training parameter ciphertext;
the current training parameter decryption module is used for decrypting the current training parameter ciphertext based on the first parameter and the second parameter to obtain a current training parameter;
and the model determining module is used for updating parameters of the model to be trained in the first client and the second client according to the current training parameters so as to obtain a target model.
9. A computer device, comprising:
a memory and a processor in communication with each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the data processing method of any of claims 1 to 6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon computer instructions for causing a computer to execute the data processing method according to any one of claims 1 to 6.
CN202310911435.0A 2023-07-24 2023-07-24 Data processing method, system, device, computer equipment and storage medium Pending CN117134945A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310911435.0A CN117134945A (en) 2023-07-24 2023-07-24 Data processing method, system, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310911435.0A CN117134945A (en) 2023-07-24 2023-07-24 Data processing method, system, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117134945A true CN117134945A (en) 2023-11-28

Family

ID=88857362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310911435.0A Pending CN117134945A (en) 2023-07-24 2023-07-24 Data processing method, system, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117134945A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117811722A (en) * 2024-03-01 2024-04-02 山东云海国创云计算装备产业创新中心有限公司 Global parameter model construction method, secret key generation method, device and server
CN117811722B (en) * 2024-03-01 2024-05-24 山东云海国创云计算装备产业创新中心有限公司 Global parameter model construction method, secret key generation method, device and server

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination