CN111860829A - Method and device for training a federated learning model - Google Patents

Method and device for training a federated learning model

Info

Publication number
CN111860829A
Authority
CN
China
Prior art keywords
learning model
training
secret
node
secret share
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010564686.2A
Other languages
Chinese (zh)
Inventor
夏家骏
鲁颖
张珣
沈敏均
陈楚元
张佳辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhishu Beijing Technology Co Ltd
Original Assignee
Guangzhishu Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhishu Beijing Technology Co Ltd filed Critical Guangzhishu Beijing Technology Co Ltd
Priority to CN202010564686.2A
Publication of CN111860829A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/20 Ensemble learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a method and a device for training a federated learning model. The execution subject is a target training node participating in the federated learning model training, where the target training node is any one of the training nodes participating in the training. The method comprises the following steps: acquiring a gradient secret share of a first learning model on the target training node; recovering gradient information of the first learning model according to the gradient secret share; and updating the model parameters of the first learning model according to the gradient information, and retraining the updated first learning model. Under the semi-honest assumption, the federated learning model is trained based on a secret sharing algorithm, so that data security is guaranteed, the effectiveness of model training is improved, and computation time is shortened.

Description

Method and device for training a federated learning model
Technical Field
The application relates to the technical field of data processing, in particular to a method and a device for training a federated learning model.
Background
At present, data-driven business innovation plays a crucial role in enterprise digital transformation. In order to break down data silos and improve the quality of data use, data cooperation between institutions is becoming more frequent. Federated learning is a feasible solution that satisfies both privacy protection and data security: through homomorphic encryption, secret sharing, and other techniques, each party's private data never leaves its local environment, while joint computation and modeling are still achieved. On the other hand, computation speed must also be guaranteed during model training. Therefore, how to guarantee data security and shorten computation time while still training the federated learning model effectively has become an important research direction.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first object of the present application is to provide a method for training a federated learning model, which is used to solve the technical problem that existing training methods for the federated learning model cannot simultaneously achieve effective model training, guarantee data security, and shorten computation time.
A second object of the invention is to propose another method for training the federated learning model.
A third object of the invention is to propose a training device for the federated learning model.
A fourth object of the present invention is to propose another training device for the federated learning model.
A fifth object of the present invention is to provide yet another method for training the federated learning model.
A sixth object of the invention is to provide a training system for the federated learning model.
A seventh object of the invention is to propose an electronic device.
An eighth object of the present invention is to propose a computer-readable storage medium.
In order to achieve the above object, an embodiment of the first aspect of the present application provides a method for training a federated learning model. The execution subject is a target training node participating in the federated learning model training, where the target training node is any one of the training nodes participating in the training. The method includes the following steps: acquiring a gradient secret share of a first learning model on the target training node; recovering gradient information of the first learning model according to the gradient secret share; and updating the model parameters of the first learning model according to the gradient information, and retraining the updated first learning model.
In addition, the method for training the federated learning model according to the above embodiment of the present application may further have the following additional technical features:
according to an embodiment of the present application, the obtaining a gradient secret share of a first learning model on the target training node includes: receiving a first gradient secret share of the first learning model sent by the remaining training nodes and assisting nodes participating in the federated learning model training; obtaining a second gradient secret share of the first learning model on the target training node.
According to an embodiment of the application, said updating model parameters of said first learning model according to said gradient information comprises: receiving a loss function of the first learning model fed back by the assisting node, wherein the loss function is recovered by the assisting node according to a loss function secret share of the first learning model; identifying whether the first learning model converges according to the loss function; and if the first learning model is identified to be not converged, updating the model parameters of the first learning model according to the gradient information.
According to an embodiment of the application, before updating the model parameters of the first learning model according to the gradient information, the method further includes receiving an update indication fed back by the assisting node, where the update indication is generated when the assisting node identifies that the first learning model is not converged according to a loss function of the first learning model.
According to an embodiment of the present application, the obtaining a second gradient secret share of the first learning model on the target training node includes: performing secret sharing processing on sample data corresponding to the first learning model to obtain a data secret share on the target training node; obtaining an intermediate result of the first learning model, and performing secret sharing processing on the intermediate result to obtain an intermediate result secret share on the target training node; acquiring a label secret share on the target training node, where the label secret share is generated after the labeled training node performs secret sharing processing on the label data of its own samples; and acquiring the second gradient secret share according to the data secret share, the intermediate result secret share, and the tag secret share.
According to an embodiment of the present application, after performing secret sharing processing on the sample data corresponding to the first learning model, the method further includes: sending the other data secret shares generated by the secret sharing processing of the sample data to the remaining training nodes and the assisting node, respectively.
According to an embodiment of the present application, the target training node is a labeled training node, wherein the secret sharing processing on the sample data corresponding to the first learning model includes: carrying out secret sharing processing on the sample data and the label data of the samples.
According to an embodiment of the present application, after performing secret sharing processing on the intermediate result, the method further includes: sending the other intermediate result secret shares generated by the secret sharing processing of the intermediate result to the remaining training nodes and the assisting node, respectively.
According to an embodiment of the present application, the method further comprises: obtaining the loss function secret share of the first learning model according to the intermediate result secret share and the tag secret share, and sending the loss function secret share of the first learning model to the assisting node.
The embodiment of the first aspect of the application provides a method for training a federated learning model, which can train the federated learning model based on a secret sharing algorithm under the semi-honest assumption, so that attackers (including external attackers and the remaining training nodes) must obtain a certain number of gradient secret shares at the same time to recover the gradient information. This reduces the risk of attackers colluding to steal private data and improves security during model training. Meanwhile, when some gradient secret shares are lost or destroyed, the gradient information can still be recovered from the other gradient secret shares, which improves reliability during model training. Furthermore, because the federated learning model is trained based on a secret sharing algorithm, the gradient secret share operations can be completed quickly, greatly shortening computation time. Finally, only the assisting node can recover the loss function during the whole training process; it identifies whether the first learning model has converged according to the loss function and sends indication information to the target training node according to the identification result. This avoids involving every training node in the decision, ensures the reliability of the indication information each training node receives from the assisting node, and further improves the effectiveness of model training.
In order to achieve the above object, an embodiment of the second aspect of the present application provides another method for training a federated learning model. The execution subject is an assisting node participating in the federated learning model training, and the method includes the following steps: obtaining a first gradient secret share of a first learning model on a target training node, where the target training node is any one training node participating in the federated learning model training; and sending the first gradient secret share of the first learning model to the target training node.
In addition, the method for training the federated learning model according to the above embodiment of the present application may further have the following additional technical features:
according to an embodiment of the present application, the obtaining a first gradient secret share of a first learning model on a target training node includes: receiving a data secret share sent by the target training node, wherein the data secret share is generated after secret sharing processing is performed on sample data corresponding to the first learning model; receiving an intermediate result secret share sent by the target training node, wherein the intermediate result secret share is generated after an intermediate result generated during the training of the first learning model is subjected to secret sharing processing; receiving a label secret share sent by a labeled training node, wherein the label secret share is generated after the labeled training node carries out secret sharing processing on label data of a sample of the labeled training node; and acquiring the first gradient secret share according to the data secret share, the intermediate result secret share and the tag secret share.
According to an embodiment of the present application, further comprising: receiving a loss function secret share of the first learning model sent by each training node; and recovering the loss function of the first learning model according to the received secret share of the loss function.
According to an embodiment of the present application, after the recovering the loss function of the first learning model, the method further includes: judging whether the first learning model converges according to the loss function; if the first learning model is not converged, sending an updating instruction for updating model parameters to the target training node; and if the first learning model is converged, sending a finishing instruction of finishing the training of the first learning model to the target training node.
According to an embodiment of the present application, after the recovering the loss function of the first learning model, the method further includes: and sending the loss function to the target training node.
The embodiment of the second aspect of the application provides a method for training a federated learning model, which can introduce an assisting node into the federated learning model training based on a secret sharing algorithm under the semi-honest assumption. The assisting node obtains a first gradient secret share of the first learning model on a target training node and sends the obtained first gradient secret share to the target training node, which avoids the risk of leaking the target training node's local data and gradient secret and ensures the security of the target training node's data.
In order to achieve the above object, an embodiment of the third aspect of the present application provides a training device for a federated learning model, where the training device is disposed on a target training node, and the target training node is any one of the training nodes participating in the federated learning model training. The training device for the federated learning model comprises: a first acquisition module, configured to acquire a gradient secret share of a first learning model on the target training node; a gradient recovery module, configured to recover gradient information of the first learning model according to the gradient secret share; and an updating module, configured to update the model parameters of the first learning model according to the gradient information and retrain the updated first learning model.
In addition, the training device for the federated learning model according to the above embodiment of the present application may have the following additional technical features:
according to an embodiment of the application, the first obtaining module includes: a first receiving unit, configured to receive a first gradient secret share of the first learning model sent by remaining training nodes and assisting nodes participating in the federated learning model training; a first obtaining unit, configured to obtain a second gradient secret share of the first learning model on the target training node.
According to an embodiment of the application, the update module includes: a second receiving unit, configured to receive a loss function of the first learning model fed back by the assisting node, where the loss function is recovered by the assisting node according to a loss function secret share of the first learning model; an identifying unit configured to identify whether the first learning model converges according to the loss function; and the updating unit is used for updating the model parameters of the first learning model according to the gradient information when the first learning model is identified to be not converged.
According to an embodiment of the application, the receiving module is further configured to receive an update indication fed back by the assisting node before updating the model parameters of the first learning model according to the gradient information, where the update indication is generated when the assisting node identifies that the first learning model is not converged according to a loss function of the first learning model.
According to an embodiment of the application, the first obtaining module includes: the first secret sharing unit is used for carrying out secret sharing processing on sample data corresponding to the first learning model so as to obtain a data secret share on the target training node; the second secret sharing unit is used for acquiring an intermediate result of the first learning model and performing secret sharing processing on the intermediate result to acquire an intermediate result secret share on the target training node; the second obtaining unit is used for obtaining the secret share of the label on the target training node, wherein the secret share of the label is generated after the label training node carries out secret sharing processing on the label data of the sample; a third obtaining unit, configured to obtain the second gradient secret share according to the data secret share, the intermediate result secret share, and the tag secret share.
According to an embodiment of the present application, the training device for the federated learning model further includes: a first sending module, configured to, after secret sharing processing is performed on the sample data corresponding to the first learning model, send the other data secret shares generated by the secret sharing processing of the sample data to the remaining training nodes and the assisting node, respectively.
According to an embodiment of the present application, the target training node is a labeled training node, and the secret sharing unit is further configured to perform secret sharing processing on the sample data and the label data of the samples.
According to an embodiment of the present application, the training device for the federated learning model further includes: a second sending module, configured to, after secret sharing processing is performed on the intermediate result, send the other intermediate result secret shares generated by the secret sharing processing of the intermediate result to the remaining training nodes and the assisting node, respectively.
According to an embodiment of the present application, the training device for the federated learning model further includes: a second obtaining module, configured to obtain the loss function secret share of the first learning model according to the intermediate result secret share and the tag secret share, and send the loss function secret share of the first learning model to the assisting node.
The embodiment of the third aspect of the application provides a training device for a federated learning model, which can train the federated learning model based on a secret sharing algorithm under the semi-honest assumption, so that attackers (including external attackers and the remaining training nodes) must obtain a certain number of gradient secret shares at the same time to recover the gradient information. This reduces the risk of attackers colluding to steal private data and improves security during model training. Meanwhile, when some gradient secret shares are lost or destroyed, the gradient information can still be recovered from the other gradient secret shares, which improves reliability during model training. Furthermore, because the federated learning model is trained based on a secret sharing algorithm, the gradient secret share operations can be completed quickly, greatly shortening computation time. Finally, only the assisting node can recover the loss function during the whole training process; it identifies whether the first learning model has converged according to the loss function and sends indication information to the target training node according to the identification result. This avoids involving every training node in the decision, ensures the reliability of the indication information each training node receives from the assisting node, and further improves the effectiveness of model training.
In order to achieve the above object, an embodiment of the fourth aspect of the present application provides a training device for a federated learning model, where the training device is disposed on an assisting node participating in the federated learning model training. The training device for the federated learning model comprises: a first obtaining module, configured to obtain a first gradient secret share of a first learning model on a target training node, where the target training node is any one training node participating in the federated learning model training; and a sending module, configured to send the first gradient secret share of the first learning model to the target training node.
In addition, the training device for the federated learning model according to the above embodiment of the present application may have the following additional technical features:
according to an embodiment of the application, the first obtaining module includes: a first receiving unit, configured to receive a data secret share sent by the target training node, where the data secret share is generated after secret sharing processing is performed on sample data corresponding to the first learning model; a second receiving unit, configured to receive an intermediate result secret share sent by the target training node, where the intermediate result secret share is generated after secret sharing processing is performed on an intermediate result generated during training of the first learning model; a third receiving unit, configured to receive a secret share of a tag sent by a tagged training node, where the secret share of the tag is generated after the tagged training node performs secret sharing processing on tag data of a sample of the tag; an obtaining unit, configured to obtain the first gradient secret share according to the data secret share, the intermediate result secret share, and the tag secret share.
According to an embodiment of the present application, the training device for the federated learning model further includes: a receiving module, configured to receive the loss function secret share of the first learning model sent by each training node; and a loss recovery module, configured to recover the loss function of the first learning model according to the received loss function secret shares.
According to an embodiment of the present application, the training device for the federated learning model further includes: a judging module, configured to judge whether the first learning model has converged according to the loss function after the loss function is recovered; and an indication sending module, configured to send an update indication for updating the model parameters to the target training node when the first learning model has not converged, and to send a completion indication for finishing the training of the first learning model to the target training node when the first learning model has converged.
According to an embodiment of the present application, the sending module is further configured to: after recovering the loss function, transmitting the loss function to the target training node.
The embodiment of the fourth aspect of the application provides a training device for a federated learning model, which can introduce an assisting node into the federated learning model training based on a secret sharing algorithm under the semi-honest assumption. The assisting node obtains a first gradient secret share of the first learning model on a target training node and sends the obtained first gradient secret share to the target training node, which avoids the risk of leaking the target training node's local data and gradient secret and ensures the security of the target training node's data.
In order to achieve the above object, an embodiment of the fifth aspect of the present application provides a method for training a federated learning model, where the method includes the following steps: the assisting node sends a gradient secret share of a first learning model on a target training node to the target training node; the remaining training nodes send their gradient secret shares of the first learning model to the target training node; the target training node acquires a local gradient secret share of the first learning model; the target training node recovers the gradient information of the first learning model according to the local gradient secret share, the gradient secret share sent by the assisting node, and the gradient secret shares sent by the remaining training nodes; and the target training node updates the model parameters of the first learning model according to the gradient information and retrains the updated first learning model.
In order to achieve the above object, an embodiment of the sixth aspect of the present application provides a system for training a federated learning model, the system including: a training device for the federated learning model as defined in the third aspect of the present application and a training device for the federated learning model as defined in the fourth aspect of the present application.
In order to achieve the above object, an embodiment of the seventh aspect of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements a method for training a federated learning model as described in any of the embodiments of the first aspect of the present application or a method for training a federated learning model as described in any of the embodiments of the second aspect of the present application.
In order to achieve the above object, an embodiment of the eighth aspect of the present application provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements a method for training a federated learning model as defined in any one of the embodiments of the first aspect of the present application, or implements a method for training a federated learning model as defined in any one of the embodiments of the second aspect of the present application.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating a method for training a federated learning model as disclosed in one embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating a method for training a federated learning model as disclosed in another embodiment of the present application;
FIG. 3 is a schematic flow chart diagram illustrating a method for training a federated learning model as disclosed in another embodiment of the present application;
FIG. 4 is a schematic flow chart of a secret sharing algorithm disclosed in one embodiment of the present application;
FIG. 5 is a schematic flow chart of a secret sharing algorithm disclosed in another embodiment of the present application;
FIG. 6 is a schematic flow chart diagram illustrating a method for training a federated learning model as disclosed in another embodiment of the present application;
FIG. 7 is a schematic flow chart diagram illustrating a method for training a federated learning model as disclosed in another embodiment of the present application;
FIG. 8 is a schematic flow chart diagram illustrating a method for training a federated learning model as disclosed in another embodiment of the present application;
FIG. 9 is a schematic flow chart diagram illustrating a method for training a federated learning model as disclosed in another embodiment of the present application;
FIG. 10 is a schematic flow chart diagram illustrating a method for training a federated learning model as disclosed in another embodiment of the present application;
FIG. 11 is a schematic flow chart diagram illustrating a method for training a federated learning model as disclosed in another embodiment of the present application;
FIG. 12 is an architecture diagram of federated learning model training as disclosed in one embodiment of the present application;
FIG. 13 is a schematic illustration of a training apparatus for a federated learning model as disclosed in one embodiment of the present application;
FIG. 14 is a schematic illustration of a training apparatus for the federated learning model as disclosed in another embodiment of the present application;
FIG. 15 is a schematic illustration of a training apparatus for the federated learning model as disclosed in another embodiment of the present application;
FIG. 16 is a schematic illustration of a training apparatus for the federated learning model as disclosed in another embodiment of the present application;
FIG. 17 is an architecture diagram of a cloud platform as disclosed in one embodiment of the present application;
FIG. 18 is a schematic structural diagram of a training system for the federated learning model according to an embodiment of the present application;
FIG. 19 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For a better understanding of the above technical solutions, exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The following describes a method and an apparatus for training a federated learning model according to embodiments of the present application with reference to the drawings.
Fig. 1 is a flow chart illustrating a method for training a federated learning model according to an embodiment of the present application.
The method for training the federated learning model provided by this application trains the federated learning model based on a secret sharing algorithm, such as the Shamir scheme, under the semi-honest assumption.
The semi-honest assumption means that, during model training, every participating node performs its computations exactly according to the protocol, but may record all intermediate results in an attempt to derive additional information.
A secret sharing algorithm distributes a shared secret among a group of participants in a controlled way, so that the secret is managed jointly by all of them.
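As a minimal illustration of the idea (a sketch only: it uses n-of-n additive sharing over a prime field rather than the threshold Shamir scheme mentioned above, and all names are chosen for the example):

```python
import secrets

PRIME = 2**61 - 1  # an arbitrary prime modulus for the illustration

def split_secret(value, n):
    """Split `value` into n additive shares that sum to `value` modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def recover_secret(shares):
    """Recover the secret; with n-of-n additive sharing, every share is required."""
    return sum(shares) % PRIME

shares = split_secret(42, 6)          # e.g. one share per participating node
assert recover_secret(shares) == 42   # all shares together reveal the secret
# any strict subset of the shares is statistically independent of the secret
```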
The federated learning model can be a vertical (longitudinal) federated learning model, that is, a federated learning model for the scenario in which different participating nodes jointly perform machine learning while holding data with different feature dimensions.
It should be noted that the method for training the federated learning model provided by this application is suitable for federated learning in general and for building a federated learning system with many kinds of machine learning models, such as logistic regression models, tree models, and neural network models. The effect of the method is especially pronounced when a logistic regression model is built on the federated learning system (that is, a vertical federated learning model using logistic regression).
It should be noted that, in the process of training the federated learning model, the participating nodes can be preliminarily divided into two types: assisting nodes (non-data owners) and training nodes (data owners). Each training node has a corresponding local learning model and trains it by feeding its own samples into that model. Each local learning model is a part of the federated learning model, and the local learning models of all the training nodes together form the overall federated learning model.
The private user data owned by the different training nodes cover different aspects of the same users. For example, one training node is company A, whose user data are the salary, years of service, and promotion record of 100 employees of company A; another training node is bank B, whose user data are the consumption records, fixed assets, and personal credit lines of those same employees. The user data owned by company A and bank B thus come from the same group of users: the 100 employees in company A's data are the same 100 employees as in bank B's data.
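Purely for illustration (the values and field names below are made up, not taken from the patent), the vertically partitioned data held by the two training nodes can be pictured as:

```python
# same user population, different feature columns at each party
company_a = {"employee_001": {"salary": 18000, "years_of_service": 5, "promotions": 2}}
bank_b    = {"employee_001": {"consumption": 9500, "fixed_assets": 400000, "credit_line": 50000}}

# the join key (the user identity) is shared; the feature spaces are disjoint
assert company_a.keys() == bank_b.keys()
```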
As shown in fig. 1, the method for training the federated learning model provided in the embodiment of the present application is explained with a target training node participating in the training as the execution subject, where the target training node is any one of the training nodes participating in the federated learning model training. The method specifically includes the following steps:
s101, obtaining a gradient secret share of a first learning model on a target training node.
In an embodiment of the application, a longitudinal federated learning model may be trained based on a secret sharing algorithm on the premise of a semi-honest assumption.
In an embodiment of the application, the gradient information of the first learning model is treated as a shared secret, and a plurality of nodes participating in the federated learning model training each obtain a gradient secret share of the first learning model, where the plurality of nodes may include the target training node, the remaining training nodes other than the target training node, and the assisting node.
In a secret sharing algorithm, a secret can be recovered only when a certain number of secret shares are obtained at the same time.
Optionally, the target training node may receive gradient secret shares, and after obtaining the set number of gradient secret shares, restore the gradient information of the first learning model based on a recovery algorithm. The set number of gradient secret shares may come from some of the remaining training nodes, or from some of the remaining training nodes and the assisting node, or from some of the remaining training nodes, the assisting node, and the target training node; it is only necessary to ensure that the number of received gradient secret shares reaches the set number. The recovery algorithm is not limited in this application and may be selected according to the actual situation; for example, a super-convergence cell block gradient recovery algorithm or a gradient recovery algorithm based on solving a unilateral optimal problem may be selected.
Optionally, the target training node may also receive the gradient secret shares of the first learning model sent by each of the remaining training nodes. Further, the target training node may also receive the gradient secret share of the first learning model sent by the assisting node; the gradient secret shares from the remaining training nodes and the assisting node may be referred to as first gradient secret shares. Further, the target training node may obtain the gradient secret share of the first learning model locally, where the local gradient secret share may be referred to as a second gradient secret share to distinguish it from the first gradient secret shares.
And S102, restoring the gradient information of the first learning model according to the second gradient secret share and the first gradient secret share.
Optionally, after the second gradient secret share and the first gradient secret shares have been obtained, since they are all fragments of the secret corresponding to the gradient information, the gradient information of the first learning model can be restored from them according to a set restoration algorithm.
S103, updating the model parameters of the first learning model according to the gradient information, and retraining the updated first learning model.
The gradient information of the first learning model is a vector: its direction is the direction in which the function increases fastest at a point of its domain, and its magnitude is the maximum value of the directional derivative. In federated model training, minimizing the loss function is the optimization target; by acquiring the gradient information of the first learning model, a model parameter update strategy (including the adjustment direction and the adjustment step size) can be determined so that the loss function decreases as quickly as possible.
After the gradient information of the first learning model is acquired, the adjustment direction and the adjustment step size of the model parameters in the first learning model can be determined according to the gradient information, and the model parameters in the first learning model are updated accordingly. Further, after the first learning model is updated, its training may continue until a converged first learning model is generated. It should be noted that each training node can recover the gradient information of its own local learning model in the same manner as the target training node.
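A sketch of this update step, assuming additive gradient shares with a fixed-point encoding and plain gradient descent (the modulus, scaling factor, and learning rate are illustrative assumptions, not values from the patent):

```python
import numpy as np

PRIME = 2**61 - 1   # illustrative field modulus
SCALE = 10**6       # assumed fixed-point scaling factor

def recover_gradient(gradient_shares):
    """Recover the gradient vector from additive secret shares of its fixed-point encoding."""
    recovered = []
    for components in zip(*gradient_shares):   # one tuple of shares per gradient component
        value = sum(components) % PRIME
        if value > PRIME // 2:                 # map the field element back to a signed number
            value -= PRIME
        recovered.append(value / SCALE)
    return np.array(recovered)

def update_parameters(theta, gradient, learning_rate=0.01):
    """One gradient-descent step: the gradient fixes the adjustment direction and step size."""
    return theta - learning_rate * gradient
```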
Therefore, under the semi-honest assumption, the method and device of this application can train the federated learning model based on a secret sharing algorithm, so that attackers (including external attackers and the remaining training nodes) can recover the gradient information only by obtaining a certain number of gradient secret shares at the same time. This reduces the risk of attackers colluding to steal private data and improves security during model training. Meanwhile, when some gradient secret shares are lost or destroyed, the gradient information can still be recovered from the other gradient secret shares, which improves reliability during model training. Furthermore, because the federated learning model is trained based on a secret sharing algorithm, the gradient secret share operations can be completed quickly, greatly shortening computation time. Finally, only the assisting node can recover the loss function during the whole training process; it identifies whether the first learning model has converged according to the loss function and sends indication information to the target training node according to the identification result. This avoids involving every training node in the decision, ensures the reliability of the indication information each training node receives from the assisting node, and further improves the effectiveness of model training.
As a possible implementation manner, as shown in fig. 2, the process of updating the model parameters of the first learning model according to the gradient information in step S103 specifically includes the following steps:
s201, receiving a loss function of the first learning model fed back by the assisting node, wherein the loss function is recovered by the assisting node according to a loss function secret share of the first learning model.
Alternatively, the assisting node may receive the loss function secret shares of the first learning model sent by the plurality of training nodes, and then recover the loss function of the first learning model according to the loss function secret shares of the first learning model. It should be noted that, when the assisting node attempts to recover the loss function of the first learning model, the secret can be recovered only when the secret shares of the loss function received by the assisting node are more than the preset number, and the secret cannot be recovered when the secret shares of the loss function received by the assisting node are less than the preset number. The preset number can be set according to actual conditions.
Further, the assisting node sends the loss function of the first learning model to the target training node, and accordingly, the target training node can receive the loss function of the first learning model fed back by the assisting node.
S202, whether the first learning model converges or not is identified according to the loss function.
Optionally, the loss function may be compared with a preset threshold. If the loss function reaches the preset threshold, the first learning model has reached the required precision, that is, the current first learning model has converged; training may be stopped and the current first learning model saved. If the loss function does not reach the preset threshold, the first learning model has not reached the required precision, that is, the current first learning model has not converged, and step S203 may be further performed.
The preset threshold value can be set according to actual conditions. For example, the preset threshold may be set to 92%, 95%, 99%, or the like.
And S203, if the first learning model is identified not to be converged, updating the model parameters of the first learning model according to the gradient information.
If it is recognized that the first learning model has not converged, the model parameters of the first learning model may be updated according to the gradient information, and training may continue until the first learning model reaches the required precision.
As a possible implementation manner, the assisting node may send corresponding indication information to the target training node according to the recognition result of whether the first learning model converges.
Optionally, if it is identified that the first learning model does not converge, an update instruction may be sent to the target training node; if the first learning model is identified to be convergent, a training-stop indication may be sent to the target training node. Correspondingly, if the first learning model is identified to be not converged, the target training node can receive an update indication fed back by the assisting node; if the first learning model is identified to be converged, the target training node may receive a training stopping instruction sent by the assisting node. Wherein the update indication is generated by the assisting node when the first learning model is identified as not converging according to a loss function of the first learning model.
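A small sketch of the indication logic described above, assuming the convergence test is a simple threshold comparison and that the indication is a plain message (the message format, metric semantics, and threshold are assumptions):

```python
def indication_for(metric, threshold=0.95):
    """Return the indication the assisting node feeds back to the target training node."""
    if metric >= threshold:                  # required precision reached: model has converged
        return {"type": "stop_training"}
    return {"type": "update_parameters"}     # not converged: update parameters and keep training

print(indication_for(0.97))  # {'type': 'stop_training'}
print(indication_for(0.80))  # {'type': 'update_parameters'}
```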
Therefore, under the semi-honest assumption, the method and device of this application can train the federated learning model based on a secret sharing algorithm, identify whether the first learning model has converged according to the received loss function, and update the first learning model according to the gradient information when it has not converged. The received loss function is obtained by the training nodes based on the secret sharing algorithm, that is, the gradient information can be recovered only when a certain number of gradient secret shares are obtained, which reduces the risk of attackers colluding to steal private data and ensures the security of information during model training. Because not every training node needs to participate in the convergence decision, the reliability of the indication information each training node receives from the assisting node is ensured, and the effectiveness of model training is further improved.
It should be noted that, after receiving the first gradient secret shares sent by the remaining training nodes and the assisting node, the target training node may further obtain a second gradient secret share of the first learning model on the target training node.
As a possible implementation manner, as shown in fig. 3, on the basis of the foregoing embodiment, taking the target training node to obtain the second gradient secret share as an example, the following explains an obtaining process of the gradient secret share, and specifically includes the following steps:
s301, secret sharing processing is carried out on the sample data corresponding to the first learning model to obtain the secret share of the data on the target training node.
It should be noted that the training nodes participating in model training include labeled training nodes and unlabeled training nodes. Therefore, before attempting to obtain the second gradient secret share of the first learning model on the target training node, it may first be identified whether the target training node is a labeled training node.
Optionally, if the target training node is identified as an unlabeled training node, secret sharing processing may be performed only on the sample data corresponding to the first learning model to generate the data secret share and the other data secret shares corresponding to the sample data; if the target training node is identified as a labeled training node, secret sharing processing may be performed on both the sample data corresponding to the first learning model and the label data of the samples, so as to generate the data secret share and the other data secret shares corresponding to the sample data and the label data.
Further, the target training node may store the data secret shares locally and send other data secret shares to the remaining training nodes and assisting nodes, respectively.
It should be noted that, when the secret sharing process is attempted, the sample data and/or the tag data of the sample may be divided as the secret, and the secret is shared among n participants (including the target training node, the remaining training nodes, and the assisting node), so that the secret can be recovered only when more than a preset number of participants cooperate, and the secret cannot be recovered when less than the preset number of participants cooperate. The preset number can be set according to actual conditions. For example, the preset number may be set to 2/3 × n.
For example, as shown in fig. 4, if the target training node is a labeled training node, the sample data X and the label data y of the samples may be used as the secret A. The secret A is divided into secret shares A1 to A6; the secret share A6 is stored locally, and the secret shares A1 to A5 are sent to the remaining training nodes S1 to S4 and the assisting node S5. If the preset number is 4, the secret A can be recovered only when the number of cooperating participants reaches 4, and cannot be recovered when it does not. Wherein A1 to A6 are all the same.
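For the 4-of-6 threshold behaviour in this example, a minimal Shamir-style sketch is shown below (the prime modulus, share indices, and function names are illustrative and not taken from the patent):

```python
import secrets

PRIME = 2**61 - 1  # a Mersenne prime used as the field modulus for the illustration

def shamir_split(secret, n=6, t=4):
    """Split `secret` into n shares so that any t of them can recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    def poly(x):                              # evaluate the random degree-(t-1) polynomial
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def shamir_recover(shares):
    """Lagrange interpolation at x = 0; at least t shares are needed to get the true secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = shamir_split(123456789, n=6, t=4)
assert shamir_recover(shares[:4]) == 123456789   # any 4 shares are enough
assert shamir_recover(shares[2:]) == 123456789
```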
S302, obtaining an intermediate result of the first learning model, and performing secret sharing processing on the intermediate result to obtain a secret share of the intermediate result on the target training node.
Optionally, the target training node may compute an intermediate result u = θ^T X of the first learning model from the local first learning model parameter θ and the local sample data X, and then perform secret sharing processing on the intermediate result u to obtain an intermediate result secret share and other intermediate result secret shares.
Further, the target training node may store the intermediate result secret shares locally and send other intermediate result secret shares to the remaining training nodes and assisting nodes, respectively.
For example, as shown in fig. 5, the intermediate result u may be used as the secret B. The secret B is divided into intermediate result secret shares B1 to B6; the secret share B6 is stored locally, and the secret shares B1 to B5 are sent to the remaining training nodes S1 to S4 and the assisting node S5, so that the intermediate result secret share B6 is held by the target training node. Wherein B1 to B6 are the same.
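A sketch of how the target training node might compute and share the intermediate result (the fixed-point encoding, dimensions, and values are assumptions for illustration):

```python
import secrets
import numpy as np

PRIME = 2**61 - 1
SCALE = 10**6  # fixed-point scaling assumed for embedding real values in the field

def share_vector(values, n=6):
    """Encode a real-valued vector as field elements and split it into n additive shares."""
    encoded = [int(round(v * SCALE)) % PRIME for v in values]
    shares = [[secrets.randbelow(PRIME) for _ in encoded] for _ in range(n - 1)]
    last = [(e - sum(col)) % PRIME for e, col in zip(encoded, zip(*shares))]
    return shares + [last]

theta = np.array([0.3, -0.1, 0.7])   # local first learning model parameters (illustrative)
X = np.random.rand(5, 3)             # local sample data: 5 samples, 3 features
u = X @ theta                        # intermediate result u = theta^T x for each sample
u_shares = share_vector(u, n=6)      # B1..B6: keep one share locally, send out the rest
```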
S303, obtaining a secret share of the label on the target training node, wherein the secret share of the label is generated after the label training node carries out secret sharing processing on the label data of the sample.
Since the secret sharing system has a homomorphic characteristic, when trying to acquire the secret share of the tag on the target training node, the sample data corresponding to the first learning model and the tag data of the sample may be used as one secret to perform secret sharing processing, or the sample data corresponding to the first learning model and the tag data of the sample may be used as two secrets to perform secret sharing processing, so that the secret shares of the tag on the target training node acquired by the two methods are consistent.
For example, for sample data X and label data y of the sample, X and y may be used as one secret C to perform secret sharing processing, so that the secret share of the data obtained on the target training node is C1. Further, the corresponding tag secret share y1 may be extracted from C1; or y alone may be used as one secret D to perform secret sharing processing, so that the secret share D1 of the tag on the target training node can be directly obtained. At this time, y1 is identical to D1.
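The homomorphic behaviour this consistency relies on can be illustrated with additive shares (a sketch; the patent does not fix a particular sharing scheme here):

```python
import secrets

PRIME = 2**61 - 1

def split(value, n=3):
    parts = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    return parts + [(value - sum(parts)) % PRIME]

def recover(shares):
    return sum(shares) % PRIME

a_shares, b_shares = split(15), split(27)
# adding the shares locally, share by share, yields valid shares of a + b
sum_shares = [(sa + sb) % PRIME for sa, sb in zip(a_shares, b_shares)]
assert recover(sum_shares) == 42
```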
S304, obtaining a second gradient secret share according to the data secret share, the intermediate result secret share and the label secret share.
Optionally, the target training node may calculate the second gradient secret share from the data secret share, the intermediate result secret share, and the tag secret share using the following formula:
[gradient secret share formula, shown as an image in the original publication]
where θ is the first learning model parameter, u_i is the intermediate result secret share, x_i^p is the data secret share, and y_i is the tag secret share.
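The formula itself appears only as an image in the source document. For a logistic-regression first learning model with u = θ^T x, one plausible reading of the gradient secret share over a local mini-batch of m samples is the standard form below; this reconstruction is an assumption, not the patent's exact expression:

```latex
g_i \;=\; \frac{1}{m}\sum_{j=1}^{m}\bigl(\sigma(u_{i,j}) - y_{i,j}\bigr)\, x^{p}_{i,j},
\qquad \text{with } \sigma(u) \approx \tfrac{1}{2} + \tfrac{u}{4}
```

The linear approximation of the sigmoid is what would let such an expression be evaluated directly on additive shares; an exact sigmoid cannot be evaluated on the shares without additional protocol steps.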
Further, the target training node may calculate the loss function secret share of the first learning model according to the intermediate result secret share and the tag secret share by using the following formula and send the loss function secret share to the assisting node:
[loss function secret share formula, shown as an image in the original publication]
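This formula is likewise an image in the source. Under the same logistic-regression assumption, a plausible share-friendly form uses the second-order Taylor expansion of the cross-entropy loss log(1 + e^u) - y u around u = 0 (again an assumption, not the patent's exact expression):

```latex
\ell_i \;=\; \frac{1}{m}\sum_{j=1}^{m}\Bigl[\log 2 + \bigl(\tfrac{1}{2} - y_{i,j}\bigr)\, u_{i,j} + \tfrac{u_{i,j}^{2}}{8}\Bigr]
```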
Therefore, the target training node can perform secret sharing processing on the sample data and the intermediate result corresponding to the first learning model, obtain the data secret share, the intermediate result secret share, and the tag secret share on the target training node, and further obtain the second gradient secret share and the loss function secret share of the first learning model on the target training node. In this way, data interaction is carried out based on the secret sharing algorithm under the semi-honest assumption, and data security during information exchange and computation is guaranteed.
Fig. 6 is a flowchart illustrating a method for training a federated learning model according to another embodiment of the present disclosure.
As shown in fig. 6, the method for training the federated learning model provided in the embodiment of the present application is explained with an assisting node participating in the federated learning model training as the execution subject, and specifically includes the following steps:
S401, obtaining a first gradient secret share of a first learning model on a target training node, where the target training node is any one training node participating in the federated learning model training.
As can be seen from the foregoing embodiments, the gradient information is related to the samples, the intermediate results, and the labels of the samples. Based on the secret-shared information related to the first learning model, the assisting node may obtain the first gradient secret share of the first learning model according to the calculation formula of the gradient information.
S402, sending the first gradient secret share of the first learning model to the target training node.
Optionally, after the first gradient secret shares are obtained, the first gradient secret shares may be sent to corresponding target training nodes, so that after the target training nodes obtain a preset number of gradient secret shares of the first learning model, the gradient information of the first learning model may be recovered, and the first learning model is updated to continue training the first learning model to generate a converged first learning model.
Therefore, under the semi-honest assumption, an assisting node is introduced into the federated learning model training based on the secret sharing algorithm; the assisting node obtains the first gradient secret share of the first learning model on the target training node and sends it to the target training node, which avoids the risk of leaking the target training node's local data and gradient secret and ensures the security of the target training node's data.
As a possible implementation manner, as shown in fig. 7, on the basis of the foregoing embodiment, in the foregoing step S401, a process of obtaining a first gradient secret share of a first learning model on a target training node specifically includes the following steps:
s501, receiving a data secret share sent by a target training node, wherein the data secret share is generated after secret sharing processing is performed on sample data corresponding to a first learning model.
Optionally, the target training node may perform secret sharing processing on sample data corresponding to the first learning model to generate a data secret share corresponding to the sample data, and send the data secret share to the assisting node. Accordingly, the assisting node may receive the data secret share sent by the target training node.
It should be noted that the training nodes participating in model training include labeled training nodes and unlabeled training nodes. If the target training node is identified as an unlabeled training node, the data secret share received by the assisting node only includes the data secret share generated after secret sharing processing of the sample data corresponding to the first learning model; if the target training node is identified as a labeled training node, the data secret share received by the assisting node also includes the data secret share generated after secret sharing processing of the label data of the samples together with the sample data corresponding to the first learning model.
S502, receiving an intermediate result secret share sent by the target training node, wherein the intermediate result secret share is generated after the intermediate result generated in the training of the first learning model is subjected to secret sharing processing.
Optionally, the target training node may obtain an intermediate result u = θ^T x of the first learning model according to the local first learning model parameter θ and the local sample data x, then perform secret sharing processing on the intermediate result u to obtain intermediate result secret shares, and send the intermediate result secret shares to the assisting node. Accordingly, the assisting node may receive the intermediate result secret shares sent by the target training node.
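As an illustrative sketch only (not part of the original disclosure), the computation of the intermediate result and its sharing can be pictured as follows; a simple additive split over an integer ring stands in for the Shamir scheme of the embodiments, and all names and numeric choices are assumptions.

import random
import numpy as np

MOD = 2**32          # modulus for the additive shares (illustrative)
SCALE = 2**16        # fixed-point scale, since shares live in an integer ring

def encode(v: float) -> int:
    return int(round(v * SCALE)) % MOD

def additive_share(value: int, n_parties: int):
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

theta = np.array([0.2, -0.5, 1.0])   # local parameters of the first learning model
x = np.array([1.0, 3.0, 0.5])        # one local sample
u = float(theta @ x)                 # intermediate result u = theta^T x

u_shares = additive_share(encode(u), n_parties=3)   # e.g. 2 other training nodes + assisting node
assert sum(u_shares) % MOD == encode(u)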
S503, receiving a label secret share sent by the label training node, wherein the label secret share is generated after the label training node carries out secret sharing processing on label data of a sample.
Optionally, the labeled training node may perform secret sharing processing on the sample data corresponding to the first learning model and the label data of the samples as one secret, and send the generated data secret share to the assisting node. Accordingly, the assisting node may receive the data secret share and extract the tag secret share from it.
Optionally, the labeled training node may instead perform secret sharing processing on the sample data corresponding to the first learning model and the label data of the samples as two secrets, and directly send the generated tag secret share to the assisting node. Accordingly, the assisting node may receive the tag secret share directly.
It should be noted that, because the secret sharing architecture has a homomorphic characteristic, the tag secret shares obtained by the assisting node through the foregoing two methods are consistent.
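The homomorphic characteristic can be illustrated with the same kind of additive sharing (an illustrative simplification, not the original scheme, with all helper names assumed): adding the shares of two secrets party by party yields valid shares of their sum, which is why both packaging choices leave the assisting node with consistent tag shares.

import random

MOD = 2**32

def share(value: int, n: int):
    s = [random.randrange(MOD) for _ in range(n - 1)]
    s.append((value - sum(s)) % MOD)
    return s

def reconstruct(shares):
    return sum(shares) % MOD

x_val, y_val = 4200, 1          # fixed-point sample value and binary label (illustrative)

# Route 1: share x and y as two separate secrets.
x_shares, y_shares = share(x_val, 3), share(y_val, 3)

# Route 2: share the combined quantity directly.
combo_shares = share((x_val + y_val) % MOD, 3)

# Homomorphism: per-party addition of the two separate shares reconstructs to the
# same combined secret as sharing the combination in one go.
added = [(a + b) % MOD for a, b in zip(x_shares, y_shares)]
assert reconstruct(added) == reconstruct(combo_shares) == (x_val + y_val) % MOD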
S504, obtaining a first gradient secret share according to the data secret share, the intermediate result secret share and the label secret share.
Alternatively, the assisting node may calculate the first gradient secret share from the data secret share, the intermediate result secret share, and the tag secret share using the gradient calculation formula of the first learning model (given as image BDA0002547402310000151 in the original filing), where θ is the first learning model parameter, u_i is the intermediate result secret share, x_i^p is the data secret share, and y_i is the tag secret share.
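Since the gradient formula itself is only given as an image in the filing, the following plaintext snippet merely illustrates one common gradient form for a linear model, g = (u − y)·x with u = θ^T x; it is an assumed stand-in, not the patent's exact formula, and in the embodiments each factor would be a secret share processed under the secret sharing protocol rather than a plaintext value.

import numpy as np

theta = np.array([0.2, -0.5, 1.0])
x = np.array([1.0, 3.0, 0.5])     # one sample (a data secret share in the real protocol)
y = 1.0                           # its label (a tag secret share in the real protocol)

u = theta @ x                     # intermediate result
gradient = (u - y) * x            # per-sample gradient contribution (assumed form)
print(gradient)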
Therefore, the assisting node can receive the data secret share and the intermediate result secret share sent by the target training node, as well as the tag secret share sent by the labeled training node, and further obtain the first gradient secret share of the first learning model on the target training node, so that data interaction can be carried out based on the secret sharing algorithm under the semi-honest assumption, and data security in the process of data interaction and calculation is ensured.
Further, after the assisting node obtains the first gradient secret share, the loss function secret shares sent by all the training nodes can also be obtained, and the loss function of the first learning model is recovered through decryption.
As a possible implementation manner, as shown in fig. 8, on the basis of the foregoing embodiment, the process of recovering the loss function of the first learning model specifically includes the following steps:
S601, receiving the loss function secret share of the first learning model sent by each training node.
Optionally, when attempting to train the federated learning model, each training node may obtain the secret share of the loss function of the first learning model according to the secret share of the intermediate result and the secret share of the tag, respectively, and send the secret share of the loss function to the assisting node. Accordingly, the assisting node may receive the loss function secret shares of the first learning model sent by each training node.
S602, recovering the loss function of the first learning model according to the received secret share of the loss function.
Optionally, the assisting node may decrypt from the received loss function secret share to recover the loss function of the first learning model.
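As an illustrative sketch (not part of the original disclosure, mirroring the Shamir sketch above), this recovery step can be pictured as Lagrange interpolation at zero over the received loss-function shares; the fixed-point decoding and all names are assumptions.

import random

PRIME = 2**61 - 1
SCALE = 2**16

def shamir_share(secret: int, n: int, t: int):
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct_at_zero(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

def decode(v: int) -> float:
    # undo the fixed-point encoding; large field elements are treated as negatives
    return (v - PRIME if v > PRIME // 2 else v) / SCALE

# Each training node would hold and send one share of the (encoded) loss value.
loss_shares = shamir_share(int(round(0.37 * SCALE)), n=4, t=3)
print(decode(reconstruct_at_zero(loss_shares[:3])))   # ~0.37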
Further, after the assisting node recovers the loss function of the first learning model, the loss function may be sent to the target training node.
It should be noted that, based on the secret sharing algorithm, only the assisting node can recover the loss function of the first learning model, while the other training nodes cannot recover it. Meanwhile, the loss function sent by the assisting node to the target training node is the function corresponding to the first learning model of that target training node.
Therefore, the assisting node can receive the loss function secret shares of the first learning model sent by each training node and recover the loss function of the first learning model through decryption. Since the assisting node can recover the loss function only after obtaining a certain number of loss function secret shares, the risk of attackers colluding to steal private data is reduced; and since only the assisting node can recover the loss function during the whole model training process, the participation of each training node is avoided and the reliability of the loss function obtained by the assisting node is ensured.
Further, after the assisting node recovers the loss function of the first learning model, whether the first learning model converges may be identified according to the loss function, and an indication of matching may be sent to the target training node according to the identification result.
As a possible implementation manner, as shown in fig. 9, on the basis of the foregoing embodiment, the process of identifying whether the first learning model converges according to the loss function specifically includes the following steps:
S701, judging whether the first learning model converges according to the loss function.
Optionally, the assisting node may compare the obtained loss function with a preset threshold. If the loss function is identified as reaching the preset threshold, it indicates that the first learning model has reached the required precision, that is, the current first learning model has converged, and step S703 may be executed; if the loss function is identified as not reaching the preset threshold, it indicates that the first learning model has not reached the required precision, that is, the current first learning model has not converged, and step S702 may be executed.
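A minimal sketch of this decision logic, assuming "reaching the preset threshold" means the loss has dropped to or below an illustrative threshold value (the patent does not fix the threshold or these names):

LOSS_THRESHOLD = 1e-3

def convergence_indication(loss: float) -> str:
    """Return the indication the assisting node would send to the target training node."""
    if loss <= LOSS_THRESHOLD:     # required precision reached -> step S703
        return "training_complete"
    return "update_parameters"     # otherwise keep training -> step S702

print(convergence_indication(0.37))   # -> "update_parameters"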
S702, sending an updating instruction for updating the model parameters to the target training node.
It should be noted that, if it is recognized that the first learning model does not converge, an update instruction for updating the model parameters may be sent to the target training node, so that the target training node updates the model parameters of the first learning model according to the gradient information, and continues training to improve the accuracy required by the first learning model.
And S703, sending a finishing indication of finishing the training of the first learning model to the target training node.
If it is recognized that the first learning model converges, an instruction to complete the training of the first learning model may be sent to the target training node, so that the target training node stops the training of the first learning model and stores the current first learning model.
Therefore, whether the first learning model is converged can be judged through the assisting node according to the loss function, and a matching instruction is sent to the target training node according to the recognition result so as to instruct the first learning model to be trained until the model is converged, and the effectiveness in the model training process is ensured.
In order to implement the foregoing embodiments, an embodiment of the present application further provides another method for training a federal learning model. As shown in fig. 10, the method for training the federal learning model includes the following steps:
S801, the assisting node sends the gradient secret share of the first learning model on the target training node to the target training node.
S802, the residual training nodes send the gradient secret share of the first learning model to the target training node.
And S803, the target training node acquires the local gradient secret share of the first learning model.
S804, the target training node recovers the gradient information of the first learning model according to the local gradient secret shares, the gradient secret shares sent by the assisting nodes and the gradient secret shares sent by the rest nodes.
And S805, the target training node updates the model parameters of the first learning model according to the gradient information, and trains the updated first learning model again.
It should be noted that, for the descriptions of steps S801 to S805, reference may be made to the relevant descriptions in the above embodiments, and details are not repeated here.
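For illustration only (not part of the original disclosure), steps S803 to S805 can be sketched as follows; additive reconstruction and fixed-point decoding are used as simplifications of the Shamir-based recovery, and the learning rate, moduli, and helper names are assumptions of the sketch.

import numpy as np

MOD = 2**32
SCALE = 2**16
LEARNING_RATE = 0.1

def decode(vec):
    v = np.asarray(vec, dtype=np.int64) % MOD
    v = np.where(v > MOD // 2, v - MOD, v)   # map back to signed fixed-point values
    return v / SCALE

def recover_gradient(local_share, received_shares):
    total = np.asarray(local_share, dtype=np.int64)
    for s in received_shares:
        total = (total + np.asarray(s, dtype=np.int64)) % MOD
    return decode(total)                      # S804: recover the gradient information

def update_parameters(theta, local_share, received_shares):
    gradient = recover_gradient(local_share, received_shares)
    return theta - LEARNING_RATE * gradient   # S805: update and retrain with the new theta

# Toy demo with made-up shares of the gradient [0.5, -0.25]:
g = (np.array([0.5, -0.25]) * SCALE).astype(np.int64) % MOD
share_a = np.random.randint(0, MOD, size=2, dtype=np.int64)
share_b = (g - share_a) % MOD
print(update_parameters(np.zeros(2), share_a, [share_b]))   # -> [-0.05, 0.025]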
Therefore, on the premise of the semi-honest assumption, the method and the device can train the federal learning model based on the secret sharing algorithm, reduce the risk of data leakage during information transmission, and ensure the safety of the private data of the training nodes. Furthermore, because the federated learning model is trained based on the secret sharing algorithm, the gradient secret share operations can be completed quickly, greatly shortening the operation time. Furthermore, only the assisting node can recover the loss function in the whole model training process, so the participation of each training node is avoided, the reliability of the indication information fed back by the assisting node and received by each training node is ensured, and the effectiveness of model training is further improved.
In order to implement the foregoing embodiments, an embodiment of the present application further provides another method for training a federal learning model. As shown in fig. 11, the method for training the federal learning model provided in the embodiment of the present application is explained by taking the entire process of model training performed by the assisting node and the training nodes as an example, and specifically includes the following steps:
S901, each training node respectively performs secret sharing processing on the sample data corresponding to the first learning model to obtain a data secret share and other data secret shares on each training node, and respectively sends the other data secret shares to the remaining training nodes and the assisting node.
The data secret shares obtained by the training nodes with the labels through secret sharing processing comprise label secret shares.
S902, the training nodes respectively obtain the intermediate results of the first learning model, perform secret sharing processing on the intermediate results to obtain intermediate result secret shares and other intermediate result secret shares on the training nodes, and respectively send the other intermediate result secret shares to the remaining training nodes and the assisting node.
S903, each training node obtains a first gradient secret share according to the data secret share, the intermediate result secret share and the label secret share.
And S904, the training nodes and the assisting node respectively obtain a second gradient secret share according to other data secret shares, other intermediate result secret shares and other label secret shares.
And S905, restoring the gradient information of the first learning model corresponding to each training node by each training node according to the first gradient secret share and the second gradient secret share.
S906, each training node obtains the secret share of the loss function of the first learning model according to the secret share of the intermediate result and the secret share of the label respectively, and sends the secret share of the loss function of the first learning model to the assisting node.
And S907, the assisting nodes receive the secret share of the loss function of the first learning model sent by each training node.
S908, the assisting node recovers the loss function of the first learning model according to the received secret share of the loss function, and sends the loss function to the target training node.
And S909, the assisting node judges whether the first learning model converges or not according to the loss function.
S910, the assisting node sends an updating instruction for updating the model parameters to the training nodes of which the first learning model is not converged.
For example, as shown in fig. 11, the remaining training nodes are training nodes for which the first learning model does not converge.
And S911, updating the model parameters of the first learning model by the non-convergent training nodes of the first learning model according to the gradient information of the corresponding first learning model, and re-training the updated first learning model.
And S912, the assisting node sends a completion instruction of the completion of the training of the first learning model to the training node converged by the first learning model.
For example, as shown in fig. 11, the target training node is a training node at which the first learning model converges.
It should be noted that, for the descriptions of steps S901 to S912, reference may be made to the relevant descriptions in the above embodiments, and details are not repeated here.
As shown in fig. 12, on the premise of the semi-honest assumption, each training node (including the labeled training nodes and the unlabeled training nodes) and the assisting node participate in the calculation together. A Shamir secret sharing algorithm, a loss function calculation formula, a secret share calculation formula, and a symbolic mathematical system based on dataflow programming (TensorFlow) are adopted to train a longitudinal federal learning model; whether the model converges is judged according to the loss function calculated by the assisting node, and an indication is sent to the corresponding training node according to the recognition result until the models corresponding to all the training nodes complete training, thereby ensuring data security, improving the effectiveness of model training, and shortening the time consumed by the operations.
Therefore, on the premise of the semi-honest assumption, the method and the device can train the federal learning model based on the secret sharing algorithm, so that attackers (including external attackers and the remaining training nodes) can recover the gradient information only by simultaneously obtaining a certain number of gradient secret shares, which reduces the risk of attackers colluding to steal private data and improves the safety of the model training process. Meanwhile, when part of the gradient secret shares are lost or destroyed, the gradient information can still be recovered from the other gradient secret shares, improving the reliability of the model training process. Furthermore, because the federated learning model is trained based on the secret sharing algorithm, the gradient secret share operations can be completed quickly, greatly shortening the operation time. Furthermore, only the assisting node can recover the loss function in the whole model training process, whether the first learning model converges is identified according to the loss function, and indication information is sent to the target training node according to the identification result, so that the participation of each training node is avoided, the reliability of the indication information fed back by the assisting node and received by each training node is ensured, and the effectiveness of model training is further improved.
Based on the same application concept, the embodiment of the application also provides a device corresponding to the method for training the federated learning model.
Fig. 13 is a schematic structural diagram of a training apparatus of a federal learning model according to an embodiment of the present application. As shown in fig. 13, the training apparatus is provided with a target training node, where the target training node is any one of the training nodes participating in the training of the federal learning model, and the training apparatus 1000 of the federal learning model includes: a first acquisition module 110, a gradient restoration module 120, and an update module 130.
The first obtaining module 110 is configured to obtain a gradient secret share of a first learning model on the target training node; a gradient recovery module 120, configured to recover gradient information of the first learning model according to the gradient secret share; an updating module 130, configured to update the model parameters of the first learning model according to the gradient information, and train the updated first learning model again.
According to an embodiment of the present application, as shown in fig. 14, the first obtaining module 110 in fig. 13 includes: a first receiving unit 111 and a first obtaining unit 112. The first receiving unit 111 is configured to receive a first gradient secret share of the first learning model sent by the remaining training nodes and the assisting nodes participating in the federal learning model training; a first obtaining unit 112, configured to obtain a second gradient secret share of the first learning model on the target training node.
According to an embodiment of the present application, as shown in fig. 14, the update module 130 in fig. 13 includes: a second receiving unit 131, a recognition unit 132, and an updating unit 133. A second receiving unit 131, configured to receive a loss function of the first learning model fed back by the assisting node, where the loss function is recovered by the assisting node according to a loss function secret share of the first learning model; an identifying unit 132 for identifying whether the first learning model converges according to the loss function; an updating unit 133, configured to update a model parameter of the first learning model according to the gradient information when it is identified that the first learning model does not converge.
According to an embodiment of the present application, the first receiving unit 111 is further configured to: receiving an update indication fed back by the assisting node before updating the model parameters of the first learning model according to the gradient information, wherein the update indication is generated when the assisting node identifies that the first learning model is not converged according to the loss function of the first learning model.
According to an embodiment of the present application, as shown in fig. 14, the first obtaining module 110 in fig. 13 includes: a first secret sharing unit 113, a second secret sharing unit 114, a first obtaining unit 115, and a second obtaining unit 116. The first secret sharing unit 113 is configured to perform secret sharing processing on sample data corresponding to the first learning model to obtain a data secret share on the target training node; the second secret sharing unit 114 is configured to obtain an intermediate result of the first learning model and perform secret sharing processing on the intermediate result to obtain an intermediate result secret share on the target training node; the first obtaining unit 115 is configured to obtain a tag secret share on the target training node, where the tag secret share is generated after the labeled training node performs secret sharing processing on the label data of its own samples; the second obtaining unit 116 is configured to obtain the second gradient secret share according to the data secret share, the intermediate result secret share, and the tag secret share.
According to an embodiment of the present application, as shown in fig. 14, the training apparatus 1000 of the federal learning model provided in the present application further includes: a first sending module 140, configured to, after performing secret sharing processing on sample data corresponding to the first learning model, send other secret data shares generated after the secret sharing processing on the sample data to the remaining training nodes and the assisting node, respectively.
According to an embodiment of the present application, the target training node is a labeled training node, wherein, as shown in fig. 14, the first secret sharing unit 113 and the second secret sharing unit 114 are further configured to perform secret sharing processing on the sample data and the label data of the sample.
According to an embodiment of the present application, as shown in fig. 14, the training apparatus 1000 of the federal learning model provided in the present application further includes: a second sending module 150, configured to send, after performing secret sharing processing on the intermediate result, other intermediate result secret shares generated after the intermediate result secret sharing processing to the remaining training nodes and the assisting node, respectively.
According to an embodiment of the present application, as shown in fig. 14, the training apparatus 1000 of the federal learning model provided in the present application further includes: a second obtaining module 160, configured to obtain the secret share of the loss function of the first learning model according to the secret share of the intermediate result and the secret share of the tag, and send the secret share of the loss function to the assisting node.
Therefore, on the premise of the semi-honest assumption, the apparatus can train the federal learning model based on the secret sharing algorithm, so that attackers (including external attackers and the remaining training nodes) can recover the gradient information only by simultaneously obtaining a certain number of gradient secret shares, which reduces the risk of attackers colluding to steal private data and improves the safety of the model training process. Meanwhile, when part of the gradient secret shares are lost or destroyed, the gradient information can still be recovered from the other gradient secret shares, improving the reliability of the model training process. Furthermore, because the federated learning model is trained based on the secret sharing algorithm, the gradient secret share operations can be completed quickly, greatly shortening the operation time. Furthermore, only the assisting node can recover the loss function in the whole model training process, whether the first learning model converges is identified according to the loss function, and indication information is sent to the target training node according to the identification result, so that the participation of each training node is avoided, the reliability of the indication information fed back by the assisting node and received by each training node is ensured, and the effectiveness of model training is further improved.
Based on the same application concept, the embodiment of the application also provides a device corresponding to another Federal learning model training method.
Fig. 15 is a schematic structural diagram of a training apparatus of a federal learning model according to an embodiment of the present application. As shown in fig. 15, the training apparatus 1000 of the federal learning model is provided on an assisting node participating in the training of the federal learning model, and includes: a first obtaining module 210 and a sending module 220.
The first obtaining module 210 is configured to obtain a first gradient secret share of a first learning model on a target training node, where the target training node is any one of training nodes participating in the federal learning model training;
a sending module 220, configured to send the first gradient secret share of the first learning model to the target training node.
According to an embodiment of the present application, as shown in fig. 16, the first obtaining module 210 in fig. 15 includes: a first receiving unit 211, a second receiving unit 212, a third receiving unit 213, and an obtaining unit 214. The first receiving unit 211 is configured to receive a data secret share sent by the target training node, where the data secret share is generated after secret sharing processing is performed on sample data corresponding to the first learning model; a second receiving unit 212, configured to receive an intermediate result secret share sent by the target training node, where the intermediate result secret share is generated after a secret sharing process is performed on an intermediate result generated during training of the first learning model; a third receiving unit 213, configured to receive a secret share of a tag sent by a tagged training node, where the secret share of the tag is generated after the tagged training node performs secret sharing processing on tag data of its own sample; an obtaining unit 214, configured to obtain the first gradient secret share according to the data secret share, the intermediate result secret share, and the tag secret share.
According to an embodiment of the present application, as shown in fig. 16, the training apparatus 1000 of the federal learning model provided in the present application further includes: a receiving module 230 and a loss recovery module 240. The receiving module 230 is configured to receive a loss function secret share of the first learning model sent by each training node; a loss recovery module 240, configured to recover the loss function of the first learning model according to the received secret share of the loss function.
According to an embodiment of the present application, as shown in fig. 16, the training apparatus 1000 of the federal learning model provided in the present application further includes: a judging module 250 and an indication sending module 260. The determining module 250 is configured to determine whether the first learning model converges according to the loss function after the loss function is recovered; an indication sending module 260, configured to send, to the target training node, an update indication for updating the model parameter when the first learning model is not converged, and send, to the target training node, a completion indication for completing the training of the first learning model when the first learning model is converged.
According to an embodiment of the present application, as shown in fig. 15, the sending module 220 is further configured to send the loss function to the target training node after the loss function is recovered.
Therefore, on the premise of a semi-honest hypothesis, the assisting node is introduced to participate in the federated learning model training based on the secret sharing algorithm, the first gradient secret share of the first learning model on the target training node is obtained through the assisting node, and the obtained first gradient secret share is sent to the target training node, so that the risk that local data and the gradient secret of the target training node are leaked is avoided, and the data security of the target training node is ensured.
It should be noted that, as shown in fig. 17, the training apparatus of the federal learning model provided in the present application, together with at least one data management system and an auxiliary system, can form the service application layer of a cloud platform; an application program is then established in combination with the data layer and the basic support layer, so that the functions of the application program are implemented while eliminating the risk of intermediate result leakage, preventing the final calculation result from being acquired by nodes that do not need it, and ensuring data security.
MySQL is a relational database management system, and the Remote Dictionary Service (Redis) is a database; the inter-cloud federated learning calculation engine comprises an encryption algorithm, a Federated Learning Application Programming Interface (FederatedLearning API), a Federated Core Application Programming Interface (FederatedCore API), and a Compiler.
Based on the same application concept, the embodiment of the application also provides a system corresponding to the method for training the federated learning model.
Fig. 18 is a schematic structural diagram of a training system of the federal learning model provided in an embodiment of the present application. As shown in fig. 18, the training system 3000 of the federal learning model includes a training apparatus 1000 of the federal learning model.
Based on the same application concept, the embodiment of the application also provides the electronic equipment.
Fig. 19 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 19, the electronic device 2000 includes a memory 201, a processor 202, and a computer program stored in the memory 201 and executable on the processor 202, and when the processor executes the computer program, the processor implements the above-mentioned method for training the federal learning model.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (32)

1. A method for training a federated learning model is characterized in that an execution subject is a target training node participating in the federated learning model training, wherein the target training node is any one of the training nodes participating in the federated learning model training, and the method comprises the following steps:
acquiring a gradient secret share of a first learning model on the target training node;
restoring gradient information of the first learning model according to the gradient secret share;
and updating the model parameters of the first learning model according to the gradient information, and retraining the updated first learning model.
2. The method for training a federated learning model as recited in claim 1, wherein the obtaining a gradient secret share of the first learning model on the target training node comprises:
receiving a first gradient secret share of the first learning model sent by the remaining training nodes and assisting nodes participating in the federated learning model training;
obtaining a second gradient secret share of the first learning model on the target training node.
3. The method for training a federal learning model as claimed in claim 1, wherein said updating the model parameters of the first learning model according to the gradient information comprises:
receiving a loss function of the first learning model fed back by the assisting node, wherein the loss function is recovered by the assisting node according to a loss function secret share of the first learning model;
identifying whether the first learning model converges according to the loss function;
and if the first learning model is identified to be not converged, updating the model parameters of the first learning model according to the gradient information.
4. The method for training a federal learning model as claimed in claim 1, further comprising, before the updating of the model parameters of the first learning model according to the gradient information:
receiving an update indication fed back by the assisting node, wherein the update indication is generated when the assisting node identifies that the first learning model is not converged according to a loss function of the first learning model.
5. A training method for a federated learning model as described in any of claims 2-4, wherein the obtaining a second gradient secret share of the first learning model on the target training node comprises:
secret sharing processing is carried out on sample data corresponding to the first learning model to obtain secret data shares on the target training nodes;
obtaining an intermediate result of the first learning model, and performing secret sharing processing on the intermediate result to obtain a secret share of the intermediate result on the target training node;
acquiring a label secret share on the target training node, wherein the label secret share is generated after the label training node carries out secret sharing processing on label data of a sample of the label secret share;
and acquiring the second gradient secret share according to the data secret share, the intermediate result secret share and the tag secret share.
6. The method for training a federated learning model as claimed in claim 5, wherein after the secret sharing process is performed on the sample data corresponding to the first learning model, the method further comprises:
and respectively sending other data secret shares generated after the secret sharing processing of the sample data to the residual training nodes and the assisting nodes.
7. The method for training a federated learning model according to claim 5, wherein the target training node is a labeled training node, and wherein the performing the secret sharing process on the sample data corresponding to the first learning model includes:
and carrying out secret sharing processing on the sample data and the label data of the sample.
8. The method for training a federal learning model as claimed in claim 5, further comprising, after the secret sharing process is performed on the intermediate result:
and respectively sending other intermediate result secret shares generated after the intermediate result secret sharing processing to the residual training nodes and the assisting nodes.
9. The method for training a federal learning model as claimed in claim 5, further comprising:
And obtaining the secret share of the loss function of the first learning model according to the secret share of the intermediate result and the secret share of the tag, and sending the secret share of the loss function of the first learning model to the assisting node.
10. A method for training a federated learning model is characterized in that an execution subject is an assistance node participating in the federated learning model training, and the method comprises the following steps:
obtaining a first gradient secret share of a first learning model on a target training node, wherein the target training node is any one training node participating in the federal learning model training;
sending a first gradient secret share of a first learning model to the target training node.
11. The method of claim 10, wherein obtaining the first gradient secret share of the first learning model on the target training node comprises:
receiving a data secret share sent by the target training node, wherein the data secret share is generated after secret sharing processing is performed on sample data corresponding to the first learning model;
receiving an intermediate result secret share sent by the target training node, wherein the intermediate result secret share is generated after an intermediate result generated during the training of the first learning model is subjected to secret sharing processing;
Receiving a label secret share sent by a labeled training node, wherein the label secret share is generated after the labeled training node carries out secret sharing processing on label data of a sample of the labeled training node;
and acquiring the first gradient secret share according to the data secret share, the intermediate result secret share and the tag secret share.
12. The method of training a federal learning model as in claim 10, further comprising:
receiving a loss function secret share of the first learning model sent by each training node;
and recovering the loss function of the first learning model according to the received secret share of the loss function.
13. The method for training a federal learning model as claimed in claim 12, wherein said recovering the loss function of the first learning model further comprises:
judging whether the first learning model converges according to the loss function;
if the first learning model is not converged, sending an updating instruction for updating model parameters to the target training node;
and if the first learning model is converged, sending a finishing instruction of finishing the training of the first learning model to the target training node.
14. The method for training a federal learning model as claimed in claim 12, wherein said recovering the loss function of the first learning model further comprises:
and sending the loss function to the target training node.
15. The device for training the federated learning model is characterized in that the training device is provided with a target training node, wherein the target training node is any one training node participating in the federated learning model training;
the device for training the federal learning model comprises:
the first acquisition module is used for acquiring a gradient secret share of a first learning model on the target training node;
a gradient recovery module, configured to recover gradient information of the first learning model according to the gradient secret share;
and the updating module is used for updating the model parameters of the first learning model according to the gradient information and retraining the updated first learning model.
16. The apparatus for training a federal learning model as claimed in claim 15, wherein said first obtaining module comprises:
a first receiving unit, configured to receive a first gradient secret share of the first learning model sent by remaining training nodes and assisting nodes participating in the federated learning model training;
A first obtaining unit, configured to obtain a second gradient secret share of the first learning model on the target training node.
17. A training apparatus for a federal learning model as claimed in claim 15, wherein the update module comprises:
a second receiving unit, configured to receive a loss function of the first learning model fed back by the assisting node, where the loss function is recovered by the assisting node according to a loss function secret share of the first learning model;
an identifying unit configured to identify whether the first learning model converges according to the loss function;
and the updating unit is used for updating the model parameters of the first learning model according to the gradient information when the first learning model is identified to be not converged.
18. The apparatus for training a federal learning model as claimed in claim 15, wherein the first receiving unit is further configured to receive an update indication fed back by the assisting node before updating the model parameters of the first learning model according to the gradient information, wherein the update indication is generated when the assisting node recognizes that the first learning model is not converged according to a loss function of the first learning model.
19. A training apparatus for a federal learning model as claimed in any of claims 16-18, wherein the first obtaining module comprises:
the first secret sharing unit is used for carrying out secret sharing processing on sample data corresponding to the first learning model so as to obtain a data secret share on the target training node;
the second secret sharing unit is used for acquiring an intermediate result of the first learning model and performing secret sharing processing on the intermediate result to acquire an intermediate result secret share on the target training node;
the second obtaining unit is used for obtaining the secret share of the label on the target training node, wherein the secret share of the label is generated after the label training node carries out secret sharing processing on the label data of the sample;
a third obtaining unit, configured to obtain the second gradient secret share according to the data secret share, the intermediate result secret share, and the tag secret share.
20. The apparatus for training a federal learning model as in claim 19, further comprising:
and a first sending module, configured to, after secret sharing processing is performed on sample data corresponding to the first learning model, send secret shares of other data generated after the secret sharing processing of the sample data to the remaining training nodes and the assisting node, respectively.
21. The apparatus for training a federal learning model as claimed in claim 19, wherein the target training node is a labeled training node, and wherein the secret sharing unit is further configured to perform secret sharing processing on the sample data and the label data of the sample.
22. The apparatus for training a federal learning model as in claim 19, further comprising:
and a second sending module, configured to send, after performing secret sharing processing on the intermediate result, other intermediate result secret shares generated after the intermediate result secret sharing processing to the remaining training nodes and the assisting node, respectively.
23. The apparatus for training a federal learning model as in claim 19, further comprising:
and the second obtaining module is used for obtaining the secret share of the loss function of the first learning model according to the secret share of the intermediate result and the secret share of the tag and sending the secret share of the loss function of the first learning model to the assisting node.
24. The device for training the federated learning model is characterized in that the device for training the federated learning model is arranged on an assistance node participating in the federated learning model training;
The device for training the federal learning model comprises:
the system comprises a first obtaining module, a second obtaining module and a third obtaining module, wherein the first obtaining module is used for obtaining a first gradient secret share of a first learning model on a target training node, and the target training node is any one training node participating in the federal learning model training;
a sending module, configured to send a first gradient secret share of a first learning model to the target training node.
25. The apparatus for training a federal learning model as claimed in claim 24, wherein said first acquisition module comprises:
a first receiving unit, configured to receive a data secret share sent by the target training node, where the data secret share is generated after secret sharing processing is performed on sample data corresponding to the first learning model;
a second receiving unit, configured to receive an intermediate result secret share sent by the target training node, where the intermediate result secret share is generated after secret sharing processing is performed on an intermediate result generated during training of the first learning model;
a third receiving unit, configured to receive a secret share of a tag sent by a tagged training node, where the secret share of the tag is generated after the tagged training node performs secret sharing processing on tag data of a sample of the tag;
An obtaining unit, configured to obtain the first gradient secret share according to the data secret share, the intermediate result secret share, and the tag secret share.
26. A training apparatus for a federal learning model as claimed in claim 24, further comprising:
the receiving module is used for receiving the loss function secret share of the first learning model sent by each training node;
and the loss recovery module is used for recovering the loss function of the first learning model according to the received secret share of the loss function.
27. A training apparatus for a federal learning model as claimed in claim 25, further comprising:
the judging module is used for judging whether the first learning model converges or not according to the loss function after the loss function is restored;
and the indication sending module is used for sending an update indication for updating the model parameters to the target training node when the first learning model is not converged, and sending a completion indication for finishing the training of the first learning model to the target training node when the first learning model is converged.
28. The apparatus for training a federal learning model as claimed in claim 25, wherein the sending module is further configured to send the loss function to the target training node after the loss function is recovered.
29. A method for training a federated learning model is characterized by comprising the following steps:
the assisting node sends a gradient secret share of a first learning model on a target training node to the target training node;
the rest training nodes send the gradient secret share of the first learning model to the target training node;
the target training node acquires a local gradient secret share of the first learning model;
the target training node recovers the gradient information of the first learning model according to the local gradient secret shares, the gradient secret shares sent by the assisting node and the gradient secret shares sent by the remaining nodes;
and the target training node updates the model parameters of the first learning model according to the gradient information and trains the updated first learning model again.
30. A training system of a federal learning model, characterized by comprising:
the apparatus for training a federal learning model as claimed in claim 15, or the apparatus for training a federal learning model as claimed in claim 24.
31. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements a method for training a federal learning model as claimed in any one of claims 1-9 or a method for training a federal learning model as claimed in any one of claims 10-14.
32. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements a method for training a federal learning model as claimed in any one of claims 1 to 9, or implements a method for training a federal learning model as claimed in any one of claims 10 to 14.
CN202010564686.2A 2020-06-19 2020-06-19 Method and device for training federal learning model Pending CN111860829A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010564686.2A CN111860829A (en) 2020-06-19 2020-06-19 Method and device for training federal learning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010564686.2A CN111860829A (en) 2020-06-19 2020-06-19 Method and device for training federal learning model

Publications (1)

Publication Number Publication Date
CN111860829A true CN111860829A (en) 2020-10-30

Family

ID=72987641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010564686.2A Pending CN111860829A (en) 2020-06-19 2020-06-19 Method and device for training federal learning model

Country Status (1)

Country Link
CN (1) CN111860829A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112464287A (en) * 2020-12-12 2021-03-09 同济大学 Multi-party XGboost safety prediction model training method based on secret sharing and federal learning
CN112766514A (en) * 2021-01-22 2021-05-07 支付宝(杭州)信息技术有限公司 Method, system and device for joint training of machine learning model
CN112818369A (en) * 2021-02-10 2021-05-18 中国银联股份有限公司 Combined modeling method and device
CN112818369B (en) * 2021-02-10 2024-03-29 中国银联股份有限公司 Combined modeling method and device
CN113033828A (en) * 2021-04-29 2021-06-25 江苏超流信息技术有限公司 Model training method, using method, system, credible node and equipment
CN113033828B (en) * 2021-04-29 2022-03-22 江苏超流信息技术有限公司 Model training method, using method, system, credible node and equipment
CN113516256A (en) * 2021-09-14 2021-10-19 深圳市洞见智慧科技有限公司 Third-party-free federal learning method and system based on secret sharing and homomorphic encryption
WO2023116787A1 (en) * 2021-12-22 2023-06-29 华为技术有限公司 Intelligent model training method and apparatus
WO2023125747A1 (en) * 2021-12-30 2023-07-06 维沃移动通信有限公司 Model training method and apparatus, and communication device
CN114648130A (en) * 2022-02-07 2022-06-21 北京航空航天大学 Longitudinal federal learning method and device, electronic equipment and storage medium
CN114648130B (en) * 2022-02-07 2024-04-16 北京航空航天大学 Longitudinal federal learning method, device, electronic equipment and storage medium
CN114611720A (en) * 2022-03-14 2022-06-10 北京字节跳动网络技术有限公司 Federal learning model training method, electronic device and storage medium
CN114611720B (en) * 2022-03-14 2023-08-08 抖音视界有限公司 Federal learning model training method, electronic device, and storage medium
CN114330673A (en) * 2022-03-15 2022-04-12 支付宝(杭州)信息技术有限公司 Method and device for performing multi-party joint training on business prediction model
CN114499866A (en) * 2022-04-08 2022-05-13 深圳致星科技有限公司 Key hierarchical management method and device for federal learning and privacy calculation
CN114764601A (en) * 2022-05-05 2022-07-19 北京瑞莱智慧科技有限公司 Gradient data fusion method and device and storage medium
CN114764601B (en) * 2022-05-05 2024-01-30 北京瑞莱智慧科技有限公司 Gradient data fusion method, device and storage medium

Similar Documents

Publication Publication Date Title
CN111860829A (en) Method and device for training federal learning model
CN109002861B (en) Federal modeling method, device and storage medium
CN110929886B (en) Model training and predicting method and system
CN111275207B (en) Semi-supervision-based transverse federal learning optimization method, equipment and storage medium
CN109284313B (en) Federal modeling method, device and readable storage medium based on semi-supervised learning
CN110955915B (en) Method and device for processing private data
CN111310938A (en) Semi-supervision-based horizontal federal learning optimization method, equipment and storage medium
CN111046425B (en) Method and device for risk identification by combining multiple parties
CN111291897A (en) Semi-supervision-based horizontal federal learning optimization method, equipment and storage medium
CN114003949B (en) Model training method and device based on private data set
CN113505882B (en) Data processing method based on federal neural network model, related equipment and medium
CN112199709A (en) Multi-party based privacy data joint training model method and device
CN112862001A (en) Decentralized data modeling method under privacy protection
CN111861099A (en) Model evaluation method and device of federal learning model
CN114611720A (en) Federal learning model training method, electronic device and storage medium
CN111461223A (en) Training method of abnormal transaction identification model and abnormal transaction identification method
CN110727783B (en) Method and device for asking question of user based on dialog system
CN112948883A (en) Multi-party combined modeling method, device and system for protecting private data
CN116306905A (en) Semi-supervised non-independent co-distributed federal learning distillation method and device
CN116756576A (en) Data processing method, model training method, electronic device and storage medium
CN115660814A (en) Risk prediction method and device, computer readable storage medium and electronic equipment
CN114723012A (en) Computing method and device based on distributed training system
CN115292677A (en) Data processing method and device
CN113240430A (en) Mobile payment verification method and device
CN113887740A (en) Method, device and system for jointly updating model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination