CN112287377A - Model training method based on federated learning, computer equipment and storage medium

Info

Publication number: CN112287377A
Application number: CN202011337584.3A
Authority: CN (China)
Prior art keywords: model, training model, training, participant, local
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 张燕, 李祥祥
Current assignee: Nanjing Xinghuan Intelligent Technology Co ltd
Original assignee: Nanjing Xinghuan Intelligent Technology Co ltd
Application filed by Nanjing Xinghuan Intelligent Technology Co ltd
Priority to: CN202011337584.3A
Publication of: CN112287377A

Classifications

    • G06F 21/602: Providing cryptographic facilities or services
    • G06F 21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G06N 20/00: Machine learning

Abstract

The embodiment of the invention discloses a model training method based on federated learning, a computer device and a storage medium. The method comprises the following steps: generating a unique identification code for each participant in the federated learning system and issuing each unique identification code to the matched participant, wherein the unique identification codes are used for instructing the participants to generate matched masks; receiving the local encryption training model uploaded by each participant, wherein the local encryption training model is obtained by the participant superimposing its mask on its locally trained model and encrypting the result; performing fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model; and decrypting the complete encryption training model a single time to obtain the target training model. According to the scheme of the embodiment of the invention, multiple parties can jointly train the model, and the model training efficiency in the federated learning system is greatly improved.

Description

Model training method based on federated learning, computer equipment and storage medium
Technical Field
The embodiments of the present invention relate to the technical field of artificial intelligence, and in particular to a model training method based on federated learning, a computer device and a storage medium.
Background
With the development and large-scale application of big data and artificial intelligence technology, enterprises and related departments pay increasing attention to the protection of data security and data privacy. Federated learning is a new artificial intelligence technology whose design goal is to carry out efficient machine learning among multiple devices (participants) or computing nodes on the premise of ensuring data security, protecting data privacy and guaranteeing legal compliance.
At the present stage, in order to prevent the private data held by the coordinator (the server) from being leaked, the coordinator encrypts the model parameter set with a homomorphic encryption algorithm and then sends it to the participants. Based on the homomorphic encryption principle, each participant performs model training in the encrypted state using the encrypted model and local training samples, and then sends the resulting encrypted local training model to the coordinator for model aggregation calculation in the next round of iterative training.
Although this method can ensure the computational security of the model result, each participant training the model on ciphertext is inefficient, and there is also a risk of numeric overflow.
Disclosure of Invention
The embodiments of the invention provide a model training method based on federated learning, a computer device and a storage medium, which are used for realizing multi-party training of a model on the premise of ensuring data security.
In a first aspect, an embodiment of the present invention provides a model training method based on federated learning, applied to a coordinator of a federated learning system, the method comprising:
generating a unique identification code for each participant in the federated learning system and issuing each unique identification code to the matched participant, wherein the unique identification codes are used for instructing the participants to generate matched masks;
receiving the local encryption training model uploaded by each participant, wherein the local encryption training model is obtained by the participant superimposing its mask on its locally trained model and encrypting the result;
performing fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model;
and decrypting the complete encryption training model a single time to obtain a target training model.
In a second aspect, an embodiment of the present invention further provides a model training method based on federated learning, applied to a participant of a federated learning system, the method comprising:
receiving a unique identification code issued by a coordinator, and generating a matched mask according to the unique identification code;
performing iterative training according to an initial training model and local data to obtain a model training result, superimposing the mask on the model training result and encrypting it to obtain a local encryption training model;
and uploading the local encryption training model to the coordinator, and receiving the target training model issued by the coordinator.
In a third aspect, an embodiment of the present invention further provides a computer device, including a processor and a memory, where the memory stores instructions that, when executed, cause the processor to:
generate a unique identification code for each participant in the federated learning system and issue each unique identification code to the matched participant, wherein the unique identification codes are used for instructing the participants to generate matched masks;
receive the local encryption training model uploaded by each participant, wherein the local encryption training model is obtained by the participant superimposing its mask on its locally trained model and encrypting the result;
perform fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model;
and decrypt the complete encryption training model a single time to obtain a target training model.
In a fourth aspect, an embodiment of the present invention further provides a computer device, including a processor and a memory, where the memory stores instructions that, when executed, cause the processor to:
receive a unique identification code issued by a coordinator, and generate a matched mask according to the unique identification code;
perform iterative training according to an initial training model and local data to obtain a model training result, superimpose the mask on the model training result and encrypt it to obtain a local encryption training model;
and upload the local encryption training model to the coordinator, and receive the target training model issued by the coordinator.
In a fifth aspect, an embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the model training method based on federated learning according to any embodiment of the present invention.
According to the scheme of the embodiments of the invention, the coordinator of the federated learning system generates a unique identification code for each participant in the federated learning system and issues each unique identification code to the matched participant, wherein the unique identification codes are used for instructing the participants to generate matched masks; receives the local encryption training model uploaded by each participant, wherein the local encryption training model is obtained by the participant superimposing its mask on its locally trained model and encrypting the result; performs fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model; and decrypts the complete encryption training model a single time to obtain the target training model. This solves the problem of low efficiency when each participant trains the model on ciphertext, enables multiple parties to train the model (without working on ciphertext) on the premise of ensuring data security, and greatly improves the model training efficiency in the federated learning system.
Drawings
FIG. 1 is a flowchart of a federated learning-based model training method in a first embodiment of the present invention;
FIG. 2 is a flowchart of a federated learning-based model training method in a second embodiment of the present invention;
FIG. 3 is a flowchart of a federated learning-based model training method in a third embodiment of the present invention;
FIG. 4 is a flowchart of a federated learning-based model training method in a fourth embodiment of the present invention;
FIG. 5 is a timing diagram of a federated learning-based model training method in the fourth embodiment of the present invention;
FIG. 6 is a flowchart of mask generation in the fourth embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a federated learning-based model training apparatus in a fifth embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a federated learning-based model training apparatus in a sixth embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a computer device in a seventh embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described in further detail with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of and not restrictive on the broad invention. It should be further noted that, for convenience of description, only some structures, not all structures, relating to the embodiments of the present invention are shown in the drawings.
The term "federal learning" as used herein is an emerging artificial intelligence technology, and the design goal is to develop efficient machine learning among multiple parties or multiple computing nodes on the premise of ensuring data security, protecting data privacy and guaranteeing legal compliance.
The term "coordinator" as used herein is used to combine the model parameters or gradient parameters of each participant and send the results to each participant; in this embodiment, the coordinating party may be a server or an electronic device such as a computer.
The term "participant" is used herein to perform iterative computation on the model issued by the coordinator until the model converges, where the data of each participant is not shared; in this embodiment, the participating party may be a server, or may also be a computer, a tablet computer, a smart phone, or the like.
The term "Unique Identifier (uuid)" used herein may be an ASCII string in hexadecimal; illustratively, the characters may be hexadecimal ASCII strings generated after encrypting 0 to (k-1), respectively, where k is the number of participants in the federal learning system.
The term "local encryption training model" is used herein to encrypt the local training model after the overlapping of the mask for each of the participants.
For ease of understanding, the main inventive concepts of the embodiments of the present invention are briefly described below.
In the prior art, the coordinator in a federated learning system encrypts the model parameter set with a homomorphic encryption algorithm and then sends it to the participants; based on the homomorphic encryption principle, each participant performs model training in the encrypted state using the encrypted model and local training samples, and then sends the resulting encrypted local training model to the coordinator for model aggregation calculation in the next round of iterative training.
However, with this prior-art method, each participant trains the model on ciphertext, which is inefficient and also carries a risk of numeric overflow.
In view of these problems, the inventors considered whether each participant could perform model training without working on ciphertext, while still guaranteeing data privacy, so as to improve the model training efficiency of federated learning.
Based on this idea, the inventors creatively propose that the coordinator of the federated learning system generates a unique identification code for each participant in the federated learning system and issues each unique identification code to the matched participant, wherein the unique identification codes are used for instructing the participants to generate matched masks; receives the local encryption training model uploaded by each participant, wherein the local encryption training model is obtained by the participant superimposing its mask on its locally trained model and encrypting the result; performs fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model; and decrypts the complete encryption training model a single time to obtain the target training model. In this way, multiple parties can train the model (without working on ciphertext) on the premise of ensuring data security, greatly improving the model training efficiency in the federated learning system.
Example one
Fig. 1 is a flowchart of a federated learning-based model training method in the first embodiment of the present invention. This embodiment is applicable to the case where a model is trained by the coordinator of a federated learning system, and the method may be performed by a federated learning-based model training apparatus, which may be implemented in software and/or hardware and integrated in a computer device. Specifically, referring to fig. 1, the method includes the following steps:
Step 110: generating a unique identification code for each participant in the federated learning system and issuing each unique identification code to the matched participant, wherein the unique identification codes are used for instructing the participants to generate matched masks.
The federated learning system may include a coordinator and a plurality of participants (for example, 2, 4, or 10, which is not limited in this embodiment). In this embodiment, the coordinator may be a server or an electronic device such as a computer; a participant may be a server, a computer, a tablet computer, a smart phone, or the like, which is not limited in this embodiment.
In an optional implementation of this embodiment, the coordinator may generate a unique identification code for each participant in the federated learning system, where the unique identification code may be a hexadecimal ASCII string; illustratively, it may be the hexadecimal ASCII string generated by hashing one of the numbers 0 to (k-1) with the MD5 algorithm, where k is the number of participants in the federated learning system.
Further, each generated unique identification code may be issued to the matched participant; for example, unique identification code A may be issued to the matched participant A, and unique identification code B to the matched participant B, until all the unique identification codes have been issued. Each participant that receives a unique identification code may then generate a matching mask based on the received code.
It should be noted that the unique identification code in this embodiment may be used to instruct each participant to generate a matching mask, for example as a symbol identifying the mask generated by each participant; it also allows conflict detection during mask generation by the participants.
Step 120: receiving the local encryption training model uploaded by each participant, wherein the local encryption training model is obtained by the participant superimposing its mask on its locally trained model and encrypting the result.
In an optional implementation of this embodiment, the coordinator may receive the local encryption training model uploaded by each participant, where the local encryption training model is the model trained by the participant in the federated learning system, with the participant's mask superimposed and then encrypted with an encryption key.
For example, if participant A in the federated learning system trains model A, participant A may first superimpose the mask matched with it, encrypt the masked model A with the key stored at participant A to obtain local encryption training model A, and then upload local encryption training model A to the coordinator of the federated learning system.
Step 130: performing fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model.
The preset mask cancellation algorithm may be an algorithm under which all the masks sum to 0, or another cancellation algorithm, which is not limited in this embodiment.
In an optional implementation of this embodiment, after receiving the local encryption training models uploaded by the participants, the coordinator may perform fusion calculation on them according to the preset mask cancellation algorithm to obtain the fused complete encryption training model.
It should be noted that in this embodiment, after receiving the local encryption training models uploaded by the participants, the coordinator does not decrypt each local encryption training model; instead, it directly fuses them, and all the masks cancel out during fusion under the preset mask cancellation algorithm, yielding a fused encryption training model that contains no masks.
Step 140: decrypting the complete encryption training model a single time to obtain the target training model.
In an optional implementation of this embodiment, after performing the fusion calculation and obtaining the fused complete encryption training model, the coordinator may decrypt the complete encryption training model a single time, thereby obtaining the target training model.
Optionally, the coordinator may perform this single decryption with its locally stored private key.
According to the scheme of this embodiment, the coordinator of the federated learning system generates a unique identification code for each participant in the federated learning system and issues each unique identification code to the matched participant, wherein the unique identification codes are used for instructing the participants to generate matched masks; receives the local encryption training model uploaded by each participant, wherein the local encryption training model is obtained by the participant superimposing its mask on its locally trained model and encrypting the result; performs fusion calculation on the local encryption training models according to the preset mask cancellation algorithm to obtain a fused complete encryption training model; and decrypts the complete encryption training model a single time to obtain the target training model. This solves the problem of low efficiency when each participant trains the model on ciphertext, enables multiple parties to train the model (without working on ciphertext) on the premise of ensuring data security, and greatly improves the model training efficiency in the federated learning system.
Example two
Fig. 2 is a flowchart of a federated learning-based model training method in the second embodiment of the present invention, which further refines the above technical solutions; the technical solution in this embodiment may be combined with the alternatives in one or more of the above embodiments. As shown in fig. 2, the method may include the following steps:
Step 210: obtaining the transmission key matched with each participant.
In an optional implementation of this embodiment, before generating the unique identification codes, the coordinator may first obtain the transmission key matched with each participant. It should be noted that in this embodiment each participant may encrypt the local model to be uploaded with its matched transmission key, and may likewise decrypt the received target training model with it.
In an optional implementation of this embodiment, obtaining the transmission key matched with each participant may include: generating a public/private key pair, sending the public key to each participant, and storing the private key locally; and receiving the encrypted transmission key uploaded by each participant, decrypting each encrypted transmission key with the private key to obtain the transmission key of each participant, and storing the transmission keys.
In a specific implementation, the coordinator may generate a public/private key pair (for example, a homomorphic-encryption public/private key pair) and send the public key to each participant while storing the private key locally; each participant may encrypt its generated transmission key with the received public key and upload the encrypted transmission key to the coordinator; after receiving the encrypted transmission keys, the coordinator may decrypt each of them with the locally stored private key, thereby obtaining the transmission key of each participant, and store the transmission keys locally, as sketched below.
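A minimal Python sketch of this exchange follows. It assumes the python-paillier (`phe`) package, a 2048-bit modulus and a 16-byte transmission key; these choices and all variable names are illustrative assumptions, not part of the patent.

```python
import secrets
from phe import paillier  # python-paillier; an assumed library choice

# Coordinator: generate a homomorphic public/private key pair and keep
# the private key local; only the public key goes to the participants.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Participant: generate a random transmission key and upload it encrypted
# under the coordinator's public key (encoded as an integer for Paillier).
transmission_key = secrets.token_bytes(16)
encrypted_key = public_key.encrypt(int.from_bytes(transmission_key, "big"))

# Coordinator: one decryption with the local private key recovers and
# stores the participant's transmission key.
recovered = private_key.decrypt(encrypted_key).to_bytes(16, "big")
assert recovered == transmission_key
```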
Step 220: generating a unique identification code for each participant in the federated learning system and issuing each unique identification code to the matched participant, wherein the unique identification codes are used for instructing the participants to generate matched masks.
Step 230: receiving the local encryption training model uploaded by each participant, wherein the local encryption training model is obtained by the participant superimposing its mask on its locally trained model and encrypting the result.
Step 240: performing fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model.
Under the preset mask cancellation algorithm, the masks sum to 0.
In an optional implementation of this embodiment, after receiving the local encryption training models uploaded by the participants, the coordinator may directly add them together to obtain the complete encryption training model. It can be understood that, since the preset mask cancellation algorithm makes the masks of the local encryption training models sum to 0, the complete encryption training model obtained by direct addition contains no masks.
Step 250: decrypting the complete encryption training model a single time to obtain the target training model.
Step 260: encrypting the target training model with the transmission key of each participant, and issuing the encrypted target training model to each participant so that each participant can update its local model.
In an optional implementation of this embodiment, after decrypting the complete encryption training model once to obtain the target training model, the coordinator may further encrypt the target training model with the locally stored transmission key of each participant and issue the encrypted target training model to the corresponding participant, so that each participant can update its local training model.
In the scheme of this embodiment, the coordinator generates a public/private key pair, sends the public key to each participant, and stores the private key locally; receives the encrypted transmission key uploaded by each participant, decrypts each encrypted transmission key with the private key to obtain the transmission key of each participant, and stores the transmission keys; and, after decrypting the complete encryption training model once to obtain the target training model, encrypts the target training model with the transmission key of each participant and issues it to each participant to update the participant's local model. This improves model training efficiency while keeping the training model stored locally at each participant up to date.
Example three
Fig. 3 is a flowchart of a federated learning-based model training method in the third embodiment of the present invention. This embodiment is applicable to the case where a model is trained by a participant of a federated learning system; the method can be executed by a federated learning-based model training apparatus, which can be implemented in software and/or hardware and integrated in a computer device. Specifically, referring to fig. 3, the method includes the following steps:
Step 310: receiving the unique identification code issued by the coordinator, and generating a matched mask according to the unique identification code.
In an optional implementation of this embodiment, the participant may receive the unique identification code issued by the coordinator and generate a matched mask according to the received code.
In this embodiment, the DH key exchange algorithm and a secret sharing algorithm may be used to generate the mask matched with each participant; for example, the mask of participant i may be expressed as:

R_i = sum_{j=1, j!=i}^{k} a_ij * (g^(r_i * r_j) mod p)

where p and g are the public DH parameters generated by the coordinator, r_i is the private key generated by participant i, and k is the number of participants in the federated learning system; a_ij = 1 when uuid_j > uuid_i, and a_ij = -1 when uuid_j < uuid_i, for i = 1 to k, j = 1 to k, and i != j. In this embodiment, the masks of all participants cancel out when the coordinator performs the model fusion calculation, so they do not affect the result of the fusion.
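A minimal executable sketch of this construction is given below; the toy DH parameters, identifier format and variable names are illustrative assumptions, and a real deployment would use a standardized large prime group.

```python
import secrets

p, g = 2087, 5                      # toy public DH parameters (assumption)
k = 3                               # number of participants
uuid = [f"{i:02x}" for i in range(k)]                  # stand-in identifiers
r = [secrets.randbelow(p - 2) + 1 for _ in range(k)]   # private exponents
pub = [pow(g, ri, p) for ri in r]   # g^r mod p, exchanged via the coordinator

def mask(i: int) -> int:
    # R_i = sum over j != i of a_ij * (g^(r_i * r_j) mod p), where
    # a_ij = +1 if uuid_j > uuid_i else -1, so each pair cancels in the sum.
    total = 0
    for j in range(k):
        if j != i:
            s = pow(pub[j], r[i], p)  # pairwise shared secret g^(r_i*r_j) mod p
            total += s if uuid[j] > uuid[i] else -s
    return total

assert sum(mask(i) for i in range(k)) == 0  # the masks cancel on fusion
```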
Step 320: performing iterative training according to the initial training model and local data to obtain a model training result, superimposing the mask on the model training result and encrypting it to obtain the local encryption training model.
In an optional implementation of this embodiment, the participant may perform iterative training on the initial training model with its locally stored data to obtain a model training result; the mask matched with the participant may then be superimposed on the model training result, and the result encrypted, yielding the local encryption training model.
For example, the local encryption training model of participant i may be expressed as:

[[M_i]] = Enc(n_i * W_i + R_i)

where n_i is the number of samples stored locally by the participant, R_i is the mask matched with the participant, and W_i is the model training result.
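For a scalar parameter, the participant-side computation can be sketched as follows (python-paillier is again an assumed choice; real models are parameter vectors encrypted element-wise, and the values here are hypothetical):

```python
from phe import paillier  # assumed library choice

public_key, _ = paillier.generate_paillier_keypair()

W_i = 0.73      # hypothetical local model training result
n_i = 120      # local sample count
R_i = -4217    # pairwise mask, e.g. from the sketch above

# Upload the Paillier encryption of n_i * W_i + R_i; training itself runs
# on plaintext, only the uploaded result is masked and encrypted.
enc_update = public_key.encrypt(n_i * W_i + R_i)
```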
Step 330: uploading the local encryption training model to the coordinator, and receiving the target training model issued by the coordinator.
In an optional implementation of this embodiment, after generating the local encryption training model, the participant may upload it to the coordinator, then receive the target training model issued by the coordinator and update its local model to the newly received target training model.
In the scheme of this embodiment, the participant receives the unique identification code issued by the coordinator and generates a matched mask according to the unique identification code; performs iterative training according to the initial training model and local data to obtain a model training result, superimposes the mask on the model training result and encrypts it to obtain the local encryption training model; and uploads the local encryption training model to the coordinator and receives the target training model issued by the coordinator. No participant needs to train the model on ciphertext, the target training model can be determined quickly, and the model training efficiency in the federated learning system is greatly improved.
Example four
Fig. 4 is a flowchart of a federated learning-based model training method in the fourth embodiment of the present invention, which further refines the above technical solutions; the technical solution in this embodiment may be combined with the alternatives in one or more of the above embodiments. As shown in fig. 4, the method may include the following steps:
Step 410: receiving the public key issued by the coordinator, generating a transmission key, encrypting the transmission key with the public key, and uploading the encrypted transmission key to the coordinator.
In an optional implementation of this embodiment, each participant may receive the public key issued by the coordinator and then generate a transmission key; for example, the participant may generate a random value (for example, 1234, 12ab, or abc@123, which is not limited in this embodiment) and use it as the transmission key.
The transmission key may then be encrypted with the received public key, and the encrypted transmission key uploaded to the coordinator.
Step 420: receiving the unique identification code issued by the coordinator, and generating a matched mask according to the unique identification code.
Step 430: performing iterative training according to the initial training model and local data to obtain a model training result, superimposing the mask on the model training result and encrypting it to obtain the local encryption training model.
Step 440: uploading the local encryption training model to the coordinator, and receiving the target training model issued by the coordinator.
Step 450: decrypting the target training model with the transmission key, updating the local model, and continuing the iterative training operation according to the target training model and local data.
In an optional implementation of this embodiment, each participant may decrypt the target training model with its matched transmission key, update the local model with the decrypted model, and then continue the iterative training operation according to the target training model and local data until the training model converges.
In the scheme of this embodiment, a participant receives the public key issued by the coordinator and generates a transmission key; encrypts the transmission key with the public key and uploads the encrypted transmission key to the coordinator; after receiving the target training model fed back by the coordinator, decrypts it with the transmission key and updates the local model; and continues the iterative training operation according to the target training model and local data. No participant needs to train the model on ciphertext, the target training model can be determined quickly, and the model training efficiency in the federated learning system is greatly improved.
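The embodiment only requires a symmetric algorithm for this last leg; the sketch below stands in Fernet from the `cryptography` package as one possible choice, with the key format and the JSON model serialization as illustrative assumptions.

```python
import json
from cryptography.fernet import Fernet  # assumed symmetric-cipher choice

# Stand-in transmission key; Fernet requires its own 32-byte key format.
transmission_key = Fernet.generate_key()
cipher = Fernet(transmission_key)

# Coordinator: encrypt the fused target model under the participant's key.
target_model = {"weights": [0.53, -1.2]}  # hypothetical fused parameters
payload = cipher.encrypt(json.dumps(target_model).encode("utf-8"))

# Participant: decrypt with the same key, update the local model, and
# start the next training round from the plaintext weights.
local_model = json.loads(cipher.decrypt(payload))
```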
To help those skilled in the art better understand the federated learning-based model training method of this embodiment, a specific example is described below. Fig. 5 is a timing chart of a federated learning-based model training method in the fourth embodiment of the present invention; note that, for convenience of description, fig. 5 shows only one participant of the federated learning system, which does not limit the embodiments of the present invention. Specifically, referring to fig. 5, the method includes the following steps:
Step 510: generate a public/private key pair using the Paillier homomorphic encryption algorithm.
Step 511: send the Paillier public key to each participant, and store the Paillier private key locally.
Step 520: receive and store the Paillier public key sent by the coordinator, and generate the participant's transmission key.
The transmission key may be denoted C_i, where i is the participant number.
Step 521: encrypt the transmission key with the Paillier public key and send it to the coordinator.
Step 512: decrypt with the Paillier private key to obtain the transmission keys of all participants, and store them.
Step 513: generate a unique uuid for each participant, and send each uuid to the corresponding participant.
Here the MD5 algorithm may be used, for example to generate the hexadecimal ASCII strings obtained by hashing 0 to (k-1), where k is the number of participants.
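A sketch of this uuid derivation (the exact string encoding of the index is an assumption):

```python
import hashlib

def generate_uuids(k: int) -> list:
    # One 32-character hexadecimal ASCII string per participant index 0..k-1.
    return [hashlib.md5(str(i).encode("utf-8")).hexdigest() for i in range(k)]

uuids = generate_uuids(3)
# e.g. uuids[0] == "cfcd208495d565ef66e7dff9f98764da" (MD5 of "0")
```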
Step 522: generate the matched mask using the DH key exchange algorithm and the secret sharing algorithm.
Step 523: complete one round of iterative training with the initial model and local sample data to obtain the local training model result, denoted W_i.
Step 524: add the mask to the locally trained model and combine the training sample count to generate the final model result.
For example, assume the model parameters of a participant are W_i, its mask is R_i, and the number of samples in this iteration is n_i; then the model result that the participant finally sends to the coordinator is:

M_i = n_i * W_i + R_i
and step 525, encrypting the final model result by using the Paillier public key and then sending the encrypted model result to the coordinator.
And 514, after receiving the encrypted models and the sample numbers sent by the participants, performing model fusion calculation to obtain a fused model ciphertext.
The embodiment of the invention uses a FedAvg federal average polymerization algorithm to realize model fusion calculation, and the calculation formula is as follows:
Figure BDA0002797635160000161
Figure BDA0002797635160000162
wherein the content of the first and second substances,
Figure BDA0002797635160000163
as a final result of model fusion, WsIs the sum of all the participant models. According to the principle of masking, the sum of the masks of all participants is 0, i.e.
Figure BDA0002797635160000164
And 515, decrypting the fused model by using a locally stored private key to finally obtain the decrypted fused model.
Wherein the decrypted fusion model is
Figure BDA0002797635160000165
In the embodiment of the invention, the coordinator performs model fusion calculation and then decrypts, so that the ciphertext can be decrypted only once, and the decryption times are reduced.
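The coordinator side of steps 514 and 515 can be sketched as below with python-paillier (an assumed choice); the per-participant numbers are hypothetical, and the masks are hard-coded to sum to zero, as the mask construction guarantees:

```python
from phe import paillier  # assumed library choice

public_key, private_key = paillier.generate_paillier_keypair()

n = [100, 60, 40]        # hypothetical sample counts n_i
W = [0.5, 0.8, 0.2]      # hypothetical local model results W_i
R = [300, -180, -120]    # masks R_i with sum(R) == 0

# Each participant uploads [[n_i * W_i + R_i]].
uploads = [public_key.encrypt(ni * wi + ri) for ni, wi, ri in zip(n, W, R)]

# Step 514: homomorphic addition fuses the ciphertexts; the masks cancel
# without the coordinator ever seeing an individual model.
fused = uploads[0]
for c in uploads[1:]:
    fused = fused + c

# Step 515: a single decryption, then the FedAvg division by sum(n_i).
W_fused = private_key.decrypt(fused) / sum(n)  # == sum(n_i*W_i) / sum(n_i)
```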
Step 516: encrypt the decrypted model with the transmission key of each participant, and send the encrypted model to the corresponding participant.
Step 526: after receiving the encrypted fusion model, decrypt it with the locally generated transmission key and update the local model.
The next iteration then continues until the training model converges.
For a better understanding of the embodiments of the present invention, fig. 6 shows the mask generation flow in the fourth embodiment. Referring to fig. 6, mask generation mainly includes the following steps:
Step 610: generate the uuid of each participant using the MD5 algorithm.
Step 611: issue each uuid to the corresponding participant.
Step 612: generate the public DH parameters p and g using the DH algorithm.
Step 613: issue p and g to the participants.
Step 620: generate a private key r.
Step 621: upload g^r mod p to the coordinator.
Step 614: distribute the pairs {uuid_j, g^(r_j) mod p} of all participants to each participant.
Step 622: generate the mask term R_i = sum_{j=1, j!=i}^{k} a_ij * (g^(r_i * r_j) mod p).
According to the embodiments of the invention, a mask is added on top of each participant's local model data. The masked local model result effectively prevents the model information from being captured on the network during transmission, and at the same time the coordinator cannot recover the local model of any individual participant, so the participants' private data is protected. Moreover, the masks cancel out when the coordinator performs the model fusion calculation, so the final fusion result is unaffected.
The embodiments of the invention use the Paillier homomorphic encryption algorithm, so that the coordinator can fuse the participants' encrypted models on the basis of the homomorphic property, ensuring the processing security of the model data. In addition, the coordinator performs the fusion calculation on the participants' encrypted local models before decrypting the fused model, which guarantees that the coordinator decrypts the ciphertext only once, reducing the number of decryptions and improving the efficiency of the federated learning system.
The embodiments of the invention transmit the fusion model with a more efficient symmetric encryption algorithm, which further improves the performance of the federated learning system while keeping the fusion model secure on the network. In addition, the participants perform iterative training on the decrypted fusion model, which is more efficient than training on an encrypted fusion model and avoids the risk of numeric overflow.
Example five
Fig. 7 is a schematic structural diagram of a federated learning-based model training apparatus in the fifth embodiment of the present invention. The apparatus (federated learning-based model training apparatus 700) can execute the federated learning-based model training method of the above embodiments. Referring to fig. 7, the apparatus includes: a unique identification code generation module 710, a local encryption training model receiving module 720, a fusion calculation module 730 and a target training model generation module 740.
The unique identification code generation module 710 is configured to generate a unique identification code for each participant in the federated learning system and issue each unique identification code to the matched participant, where the unique identification codes are used to instruct the participants to generate matched masks;
the local encryption training model receiving module 720 is configured to receive the local encryption training model uploaded by each participant, where the local encryption training model is obtained by the participant superimposing its mask on its locally trained model and encrypting the result;
the fusion calculation module 730 is configured to perform fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model;
and the target training model generation module 740 is configured to decrypt the complete encryption training model a single time to obtain the target training model.
In the scheme of this embodiment, the unique identification code generation module generates a unique identification code for each participant in the federated learning system and issues each code to the matched participant, the unique identification codes being used to instruct the participants to generate matched masks; the local encryption training model receiving module receives the local encryption training model uploaded by each participant; the fusion calculation module performs fusion calculation on the local encryption training models according to the preset mask cancellation algorithm to obtain a fused complete encryption training model; and the target training model generation module decrypts the complete encryption training model a single time to obtain the target training model. This solves the problem of low efficiency when each participant trains the model on ciphertext, enables multiple parties to train the model (without working on ciphertext) on the premise of ensuring data security, and greatly improves the model training efficiency in the federated learning system.
Optionally, the federated learning-based model training apparatus 700 further includes a transmission key acquisition module, configured to obtain the transmission key matched with each participant. In this embodiment, the transmission key acquisition module is specifically configured to generate a public/private key pair, send the public key to each participant, and store the private key locally; and to receive the encrypted transmission key uploaded by each participant, decrypt each encrypted transmission key with the private key to obtain the transmission key of each participant, and store the transmission keys.
Optionally, the federated learning-based model training apparatus 700 further includes a local model update module, configured to encrypt the target training model with the transmission key of each participant and issue the encrypted target training model to each participant, so that each participant can update its local model.
Optionally, the fusion calculation module 730 is specifically configured to directly add the local encryption training models together to obtain the complete encryption training model;
wherein, under the preset mask cancellation algorithm, the masks sum to 0.
Optionally, the target training model generation module 740 is specifically configured to decrypt the fused encryption training model with the locally stored private key to obtain the target training model.
The federated learning-based model training apparatus provided by the embodiment of the present invention can execute the federated learning-based model training method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects.
Example six
Fig. 8 is a schematic structural diagram of a federated learning-based model training apparatus in the sixth embodiment of the present invention. The apparatus (federated learning-based model training apparatus 800) can execute the federated learning-based model training method of the above embodiments. Referring to fig. 8, the apparatus includes: a mask generation module 810, a local encryption training model generation module 820, and a local encryption training model upload module 830.
The mask generation module 810 is configured to receive the unique identification code issued by the coordinator and generate a matched mask according to the unique identification code;
the local encryption training model generation module 820 is configured to perform iterative training according to the initial training model and local data to obtain a model training result, superimpose the mask on the model training result and encrypt it to obtain the local encryption training model;
and the local encryption training model upload module 830 is configured to upload the local encryption training model to the coordinator and receive the target training model issued by the coordinator.
In the scheme of this embodiment, the mask generation module receives the unique identification code issued by the coordinator and generates a matched mask according to the unique identification code; the local encryption training model generation module performs iterative training according to the initial training model and local data to obtain a model training result, superimposes the mask on the model training result and encrypts it to obtain the local encryption training model; and the local encryption training model upload module uploads the local encryption training model to the coordinator and receives the target training model issued by the coordinator. No participant needs to train the model on ciphertext, the target training model can be determined quickly, and the model training efficiency in the federated learning system is greatly improved.
Optionally, the federated learning-based model training apparatus 800 further includes a transmission key generation module, configured to receive the public key issued by the coordinator, generate a transmission key, encrypt the transmission key with the public key, and upload the encrypted transmission key to the coordinator.
Optionally, the federated learning-based model training apparatus 800 further includes a local model update module, configured to decrypt the target training model with the transmission key, update the local model, and continue the iterative training operation according to the target training model and local data.
The federated learning-based model training apparatus provided by the embodiment of the present invention can execute the federated learning-based model training method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects.
Example seven
Fig. 9 is a schematic structural diagram of a computer device in the seventh embodiment of the present invention. As shown in fig. 9, the computer device includes a processor 90, a memory 91, an input device 92 and an output device 93; the computer device may have one or more processors 90 (fig. 9 takes one processor 90 as an example); the processor 90, memory 91, input device 92 and output device 93 in the computer device may be connected by a bus or in other ways (fig. 9 takes a bus connection as an example).
The memory 91, as a computer-readable storage medium, may be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the federated learning-based model training method in the embodiments of the present invention (for example, the unique identification code generation module 710, local encryption training model receiving module 720, fusion calculation module 730 and target training model generation module 740 in the apparatus shown in fig. 7, or the mask generation module 810, local encryption training model generation module 820 and local encryption training model upload module 830 in the apparatus shown in fig. 8). The processor 90 executes the various functional applications and data processing of the computer device by running the software programs, instructions and modules stored in the memory 91, i.e., implements the federated learning-based model training method described above.
The memory 91 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application program required by at least one function, and the data storage area may store data created through use of the terminal, and the like. In addition, the memory 91 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some examples, the memory 91 may further include memory remote from the processor 90, connected to the computer device over a network; examples of such networks include but are not limited to the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 92 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the computer device. The output device 93 may include a display device such as a display screen.
Example eight
The eighth embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a federated learning-based model training method applied to the coordinator of a federated learning system, the method comprising:
generating a unique identification code for each participant in the federated learning system and issuing each unique identification code to the matched participant, wherein the unique identification codes are used for instructing the participants to generate matched masks;
receiving the local encryption training model uploaded by each participant, wherein the local encryption training model is obtained by the participant superimposing its mask on its locally trained model and encrypting the result;
performing fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model;
and decrypting the complete encryption training model a single time to obtain a target training model.
Of course, the computer-executable instructions contained in the storage medium provided by the embodiments of the present invention are not limited to the method operations described above, and may also perform related operations in the federated learning-based model training method provided by any embodiment of the present invention.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention may be implemented by software plus the necessary general-purpose hardware, and certainly also by hardware, though in many cases the former is the preferable implementation. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk or an optical disk of a computer, and which includes several instructions for causing a computer device (which may be a personal computer, a server or a network device) to execute the methods of the embodiments of the present invention.
It should be noted that, in the above embodiments of the federated learning-based model training apparatus, the included units and modules are divided only according to functional logic and are not limited to that division, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for ease of distinction and are not used to limit the protection scope of the present invention.
It should be noted that the foregoing is only a description of the preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from the spirit of the present invention; the scope of the present invention is determined by the scope of the appended claims.

Claims (17)

1. A model training method based on federated learning, applied to a coordinator of a federated learning system, characterized by comprising:
generating a unique identification code for each participant in the federated learning system and issuing each unique identification code to the matched participant, wherein the unique identification codes are used for instructing the participants to generate matched masks;
receiving the local encryption training model uploaded by each participant, wherein the local encryption training model is obtained by the participant superimposing its mask on its locally trained model and encrypting the result;
performing fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model;
and decrypting the complete encryption training model a single time to obtain a target training model.
2. The method of claim 1, further comprising, prior to generating the unique identification codes corresponding to the participants in the federated learning system:
acquiring a transmission key matching each participant;
wherein acquiring the transmission key matching each participant comprises:
generating a public-private key pair, sending the public key to each participant, and storing the private key locally;
and receiving the encrypted transmission key uploaded by each participant, decrypting each encrypted transmission key with the private key to obtain each participant's transmission key, and storing the transmission keys.
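As a purely illustrative rendering of the key acquisition in claim 2, the sketch below wraps a Fernet symmetric transmission key with RSA-OAEP using the Python cryptography package; the claim does not prescribe particular algorithms, so every primitive chosen here is an assumption.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.fernet import Fernet

# Coordinator: generate the public-private key pair; the private key stays local.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # sent to each participant

# Participant: generate a symmetric transmission key, wrap it with the public key.
transmission_key = Fernet.generate_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = public_key.encrypt(transmission_key, oaep)   # uploaded to the coordinator

# Coordinator: decrypt each upload with the private key and store the result.
assert private_key.decrypt(wrapped, oaep) == transmission_key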
3. The method of claim 2, further comprising, after the single decryption of the complete encrypted training model to obtain the target training model:
encrypting the target training model with each participant's transmission key, and issuing each encrypted target training model to the corresponding participant so that each participant updates its local model.
4. The method according to claim 1, wherein performing the fusion calculation on the local encrypted training models according to the preset mask cancellation algorithm to obtain the fused complete encrypted training model comprises:
directly adding the local encrypted training models together to obtain the complete encrypted training model;
wherein the preset mask cancellation algorithm ensures that the masks sum to 0.
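A well-known construction with exactly this property is the pairwise masking of Bonawitz et al., which the application lists among its non-patent citations: each unordered pair of participants shares a seed, one side adds the derived pseudorandom vector and the other subtracts it. The Python sketch below is our illustration of that idea, not the claimed algorithm itself; seed_of_pair stands in for whatever the participants derive from their unique identification codes.

import random

def pairwise_mask(party, n_parties, dim, seed_of_pair):
    # For each pair (i, j) with i < j, both sides expand the shared seed into
    # the same pseudorandom vector; i adds it and j subtracts it, so the
    # masks of all parties sum to the zero vector.
    mask = [0] * dim
    for other in range(n_parties):
        if other == party:
            continue
        i, j = min(party, other), max(party, other)
        rng = random.Random(seed_of_pair(i, j))
        vec = [rng.randrange(-1000, 1000) for _ in range(dim)]
        sign = 1 if party == i else -1
        mask = [m + sign * v for m, v in zip(mask, vec)]
    return mask

n_parties, dim = 3, 4
seed_of_pair = lambda i, j: (i, j, "demo-seed")   # hypothetical seed schedule
masks = [pairwise_mask(p, n_parties, dim, seed_of_pair) for p in range(n_parties)]
assert [sum(col) for col in zip(*masks)] == [0] * dim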
5. The method of claim 2, wherein the single decryption of the complete encrypted training model to obtain the target training model comprises:
decrypting the fused encrypted training model with a local private key to obtain the target training model.
6. A model training method based on federated learning, applied to a participant of a federated learning system, characterized by comprising the following steps:
receiving a unique identification code issued by a coordinator, and generating a matching mask according to the unique identification code;
performing iterative training according to an initial training model and local data to obtain a model training result, superimposing the mask on the model training result, and encrypting it to obtain a local encrypted training model;
and uploading the local encrypted training model to the coordinator, and receiving a target training model issued by the coordinator.
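A compressed participant-side sketch of claim 6 follows; the one-step least-squares trainer, the JSON serialization, and the Fernet cipher are all illustrative assumptions, since the claim fixes only the order of operations (train, superimpose the mask, encrypt, upload).

import json
from cryptography.fernet import Fernet

def local_round(weights, data, mask, transmission_key, lr=0.1):
    # Toy iterative training: gradient steps for least squares on local data.
    w = list(weights)
    for x, y in data:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    # Superimpose the mask on the model training result, then encrypt it.
    masked = [wi + mi for wi, mi in zip(w, mask)]
    return Fernet(transmission_key).encrypt(json.dumps(masked).encode())

key = Fernet.generate_key()
upload = local_round([0.0, 0.0], [([1.0, 2.0], 1.0)], [0.5, -0.5], key)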
7. The method of claim 6, further comprising, before receiving the unique identification code issued by the coordinator:
receiving a public key issued by the coordinator, and generating a transmission key;
and encrypting the transmission key with the public key, and uploading the encrypted transmission key to the coordinator.
8. The method of claim 7, further comprising, after receiving the target training model issued by the coordinator:
decrypting the target training model with the transmission key, and updating the local model;
and continuing to perform the iterative training operation according to the target training model and the local data.
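Continuing the same illustrative sketch, the round-closing step of claims 7-8 could look as follows (again an assumption-laden example that reuses local_round from the sketch under claim 6, not a mandated implementation).

import json
from cryptography.fernet import Fernet

def apply_global_update(cipher_model, transmission_key):
    # Decrypt the target training model issued by the coordinator and
    # install it as the local model for the next round of iterative training.
    return json.loads(Fernet(transmission_key).decrypt(cipher_model))

# Schematic round loop:
#   local_model = apply_global_update(received_cipher, key)
#   upload = local_round(local_model, data, mask, key)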
9. A computer device, comprising a processor and a memory, the memory storing instructions that, when executed, cause the processor to:
generate a unique identification code corresponding to each participant in a federated learning system, and issue each unique identification code to the matching participant, wherein the unique identification codes are used to instruct the participants to generate matching masks;
receive a local encrypted training model uploaded by each participant, wherein each local encrypted training model is obtained by the participant encrypting its local training model after superimposing its mask;
perform a fusion calculation on the local encrypted training models according to a preset mask cancellation algorithm to obtain a fused complete encrypted training model;
and perform a single decryption on the complete encrypted training model to obtain a target training model.
10. The device of claim 9, wherein, prior to generating the unique identification codes corresponding to the participants in the federated learning system, the processor is further configured to:
acquire a transmission key matching each participant;
wherein the processor is configured to acquire the transmission key matching each participant by:
generating a public-private key pair, sending the public key to each participant, and storing the private key locally;
and receiving the encrypted transmission key uploaded by each participant, decrypting each encrypted transmission key with the private key to obtain each participant's transmission key, and storing the transmission keys.
11. The device of claim 10, wherein, after the single decryption of the complete encrypted training model to obtain the target training model, the processor is further configured to:
encrypt the target training model with each participant's transmission key, and issue each encrypted target training model to the corresponding participant so that each participant updates its local model.
12. The device of claim 9, wherein the processor is configured to perform the fusion calculation on the local encrypted training models according to the preset mask cancellation algorithm to obtain the fused complete encrypted training model by:
directly adding the local encrypted training models together to obtain the complete encrypted training model;
wherein the preset mask cancellation algorithm ensures that the masks sum to 0.
13. The device of claim 10, wherein the processor is configured to perform the single decryption of the complete encrypted training model to obtain the target training model by:
decrypting the fused encrypted training model with a local private key to obtain the target training model.
14. A computer device, comprising a processor and a memory, the memory storing instructions that, when executed, cause the processor to:
receive a unique identification code issued by a coordinator, and generate a matching mask according to the unique identification code;
perform iterative training according to an initial training model and local data to obtain a model training result, superimpose the mask on the model training result, and encrypt it to obtain a local encrypted training model;
and upload the local encrypted training model to the coordinator, and receive a target training model issued by the coordinator.
15. The device of claim 14, wherein, prior to receiving the unique identification code issued by the coordinator, the processor is further configured to:
receive a public key issued by the coordinator and generate a transmission key;
and encrypt the transmission key with the public key, and upload the encrypted transmission key to the coordinator.
16. The device of claim 15, wherein, after receiving the target training model issued by the coordinator, the processor is further configured to:
decrypt the target training model with the transmission key, and update the local model;
and continue to perform the iterative training operation according to the target training model and the local data.
17. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the model training method based on federated learning of any one of claims 1-5 or 6-8.
CN202011337584.3A 2020-11-25 2020-11-25 Model training method based on federal learning, computer equipment and storage medium Pending CN112287377A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011337584.3A CN112287377A (en) 2020-11-25 2020-11-25 Model training method based on federal learning, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112287377A true CN112287377A (en) 2021-01-29

Family

ID=74425753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011337584.3A Pending CN112287377A (en) 2020-11-25 2020-11-25 Model training method based on federal learning, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112287377A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190251291A1 (en) * 2017-05-22 2019-08-15 International Business Machines Corporation Anonymity assessment system
CN110399742A (en) * 2019-07-29 2019-11-01 深圳前海微众银行股份有限公司 A kind of training, prediction technique and the device of federation's transfer learning model
CN110674528A (en) * 2019-09-20 2020-01-10 深圳前海微众银行股份有限公司 Federal learning privacy data processing method, device, system and storage medium
CN110601814A (en) * 2019-09-24 2019-12-20 深圳前海微众银行股份有限公司 Federal learning data encryption method, device, equipment and readable storage medium
CN110797124A (en) * 2019-10-30 2020-02-14 腾讯科技(深圳)有限公司 Model multi-terminal collaborative training method, medical risk prediction method and device
CN110955907A (en) * 2019-12-13 2020-04-03 支付宝(杭州)信息技术有限公司 Model training method based on federal learning
CN111027086A (en) * 2019-12-16 2020-04-17 支付宝(杭州)信息技术有限公司 Private data protection method and system
CN111125788A (en) * 2019-12-26 2020-05-08 南京星环智能科技有限公司 Encryption calculation method, computer equipment and storage medium
CN111259443A (en) * 2020-01-16 2020-06-09 百融云创科技股份有限公司 PSI (program specific information) technology-based method for protecting privacy of federal learning prediction stage
CN111241570A (en) * 2020-04-24 2020-06-05 支付宝(杭州)信息技术有限公司 Method and device for protecting business prediction model of data privacy joint training by two parties

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KEITH BONAWITZ, ET AL.: "Practical Secure Aggregation for Privacy-Preserving Machine Learning", CCS '17: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security *
WANG JIANZONG, ET AL.: "A Survey of Federated Learning Algorithms", Big Data *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022178719A1 (en) * 2021-02-24 2022-09-01 华为技术有限公司 Horizontal federated learning-based training method, apparatus and system
CN113297590A (en) * 2021-04-28 2021-08-24 东方电气风电有限公司 Artificial intelligence algorithm source code transplanting method and system
CN113344221A (en) * 2021-05-10 2021-09-03 上海大学 Federal learning method and system based on neural network architecture search
CN113609781A (en) * 2021-08-16 2021-11-05 广域铭岛数字科技有限公司 Automobile production mold optimization method, system, equipment and medium based on federal learning
CN113609781B (en) * 2021-08-16 2023-12-26 广域铭岛数字科技有限公司 Method, system, equipment and medium for optimizing automobile production die based on federal learning
WO2023033717A3 (en) * 2021-09-02 2023-04-27 脸萌有限公司 Data protection method and apparatus, medium, and electronic device
CN113887741A (en) * 2021-11-05 2022-01-04 深圳市电子商务安全证书管理有限公司 Data generation method, device, equipment and storage medium based on federal learning
CN114006769A (en) * 2021-11-25 2022-02-01 中国银行股份有限公司 Model training method and device based on horizontal federal learning
CN114006769B (en) * 2021-11-25 2024-02-06 中国银行股份有限公司 Model training method and device based on transverse federal learning
CN115021905A (en) * 2022-05-24 2022-09-06 北京交通大学 Method for aggregating parameters of local model for federated learning
CN117077816A (en) * 2023-10-13 2023-11-17 杭州金智塔科技有限公司 Training method and system of federal model
CN117077816B (en) * 2023-10-13 2024-03-29 杭州金智塔科技有限公司 Training method and system of federal model

Similar Documents

Publication Publication Date Title
CN112287377A (en) Model training method based on federal learning, computer equipment and storage medium
CN110138802B (en) User characteristic information acquisition method, device, block chain node, network and storage medium
CN110289968B (en) Private key recovery method, collaborative address creation method, collaborative address signature device and storage medium
CN105794145A (en) Server-aided private set intersection (PSI) with data transfer
CN110235409A (en) Use the protected RSA signature of homomorphic cryptography or the method for decryption
CN111010266B (en) Message encryption and decryption, reading and writing method and device, computer equipment and storage medium
CN109474616B (en) Multi-platform data sharing method and device and computer readable storage medium
US8923519B2 (en) Method of efficient secure function evaluation using resettable tamper-resistant hardware tokens
Saddam et al. A lightweight image encryption and blowfish decryption for the secure internet of things
WO2021129470A1 (en) Polynomial-based system and method for fully homomorphic encryption of binary data
CN112380404A (en) Data filtering method, device and system
CN116861477A (en) Data processing method, system, terminal and storage medium based on privacy protection
CN111046408A (en) Judgment result processing method, query method, device, electronic equipment and system
WO2020177109A1 (en) Lot-drawing processing method, trusted chip, node, storage medium and electronic device
CN114465708B (en) Privacy data processing method, device, system, electronic equipment and storage medium
CN114866312A (en) Common data determination method and device for protecting data privacy
CN115344882A (en) Multi-party computing method, device and storage medium based on trusted computing environment
CN114430549A (en) White box encryption and decryption method and device suitable for wireless communication
CN103873270B (en) Intelligent meter infrastructure network system and its message broadcasting method
EP3700123A1 (en) Cryptographic method and system for securing electronic transmission of data
JP2014220668A (en) Transmission side device and reception side device
CN115460020B (en) Data sharing method, device, equipment and storage medium
CN115426195B (en) Data transmission method, device, computer equipment and storage medium
CN114978620B (en) Encryption method and decryption method for identity identification number
CN114726543B (en) Key chain generation and message sending and receiving methods and devices based on message chain

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210129