WO2021232754A1 - Federated learning modeling method and device, and computer-readable storage medium - Google Patents

Federated learning modeling method and device, and computer-readable storage medium Download PDF

Info

Publication number
WO2021232754A1
Authority
WO
WIPO (PCT)
Prior art keywords
parameters
verification
model
zero
encryption
Prior art date
Application number
PCT/CN2020/135032
Other languages
French (fr)
Chinese (zh)
Inventor
李月
蔡杭
范力欣
张天豫
吴锦和
Original Assignee
深圳前海微众银行股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳前海微众银行股份有限公司 (Shenzhen Qianhai WeBank Co., Ltd.)
Publication of WO2021232754A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/602 Providing cryptographic facilities or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/02 Knowledge representation; Symbolic representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00 Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21 Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2107 File encryption

Definitions

  • This application relates to the field of artificial intelligence in financial technology (Fintech), and in particular to a federated learning modeling method, device, and computer-readable storage medium.
  • In federated learning modeling, each participant usually feeds back its own encryption model parameters to the coordinator of federated learning modeling, which then aggregates the encryption model parameters of all parties and feeds the aggregated parameters back to each participant for federated learning modeling.
  • If a malicious party provides false encryption model parameters during the training process, it directly affects the overall quality of the federated model obtained by federated learning modeling, and may even cause the entire federated learning modeling process to fail, making federated learning modeling inefficient and inaccurate.
  • the main purpose of this application is to provide a federated learning modeling method, device, and computer-readable storage medium, aiming to solve the technical problem of low efficiency and poor accuracy of federated learning modeling in the prior art.
  • the present application provides a federated learning modeling method, the federated learning modeling method is applied to a first device, and the federated learning modeling method includes:
  • the step of coordinating each of the second devices to perform federated learning modeling based on the zero-knowledge verification result and each of the encryption model parameters includes:
  • the step of performing zero-knowledge verification on each of the encryption model parameters based on each of the verification parameters to determine a false encryption model parameter in each of the encryption model parameters, and obtaining a zero-knowledge verification result, includes:
  • verifying whether each encryption model parameter is a false encryption model parameter, and obtaining a zero-knowledge verification result.
  • the verification parameters include verification model parameters and verification random parameters
  • the step of separately calculating the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each of the verification parameters includes:
  • Encryption processing is performed on each verification model parameter based on the preset coordinator public key and each of the verification random parameters to obtain each of the second zero-knowledge verification results.
  • the preset verification challenge parameters include a first verification challenge parameter and a second verification challenge parameter
  • the step of performing legality verification on each of the encryption model parameters based on preset verification challenge parameters to obtain each of the first zero-knowledge verification results includes:
  • each of the first zero-knowledge verification results is generated.
  • the step of separately verifying whether each encryption model parameter is a false encryption model parameter based on each of the first zero-knowledge proof results and each of the second zero-knowledge proof results includes:
  • if the first zero-knowledge proof result corresponding to the encryption model parameter is inconsistent with the second zero-knowledge proof result, determining that the encryption model parameter is the false encryption model parameter; and if the first zero-knowledge proof result corresponding to the encryption model parameter is consistent with the second zero-knowledge proof result, determining that the encryption model parameter is not the false encryption model parameter.
  • the first device and the second device both perform encryption based on a homomorphic encryption algorithm, wherein,
  • the homomorphic encryption algorithm satisfies: C = Enc(PK, m, r) and C1 · C2 = Enc(PK, m1 + m2, r1 · r2), where C, C1 and C2 are the ciphertexts obtained after encryption, PK is the encryption key, m, m1 and m2 are the parameters to be encrypted, and r, r1 and r2 are the random parameters required for encryption.
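The property above is characteristic of an additively homomorphic scheme such as Paillier, in which multiplying two ciphertexts yields an encryption of the sum of their plaintexts under the product of their randomness. As a minimal sketch (the patent does not name a concrete scheme; the toy primes and the `enc`/`dec` helper names below are assumptions for illustration only, and real deployments use keys of 2048 bits or more):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

p, q = 17, 19                 # toy primes, for demonstration only
n = p * q
n2 = n * n
g = n + 1                     # standard Paillier generator choice
lam = lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def enc(m, r):
    # C = g^m * r^n mod n^2 : ciphertext from plaintext m and random r
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = enc(5, 2), enc(100, 3)
# Additive homomorphism: C1*C2 encrypts m1+m2 under randomness r1*r2
assert dec((c1 * c2) % n2) == 5 + 100
assert (c1 * c2) % n2 == enc(5 + 100, 2 * 3)
```

This is the property the coordinator exploits later: operations on ciphertexts correspond to operations on the underlying model parameters without decrypting them.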
  • the aggregation processing is performed on each of the trusted model parameters to obtain the aggregation parameters, and the aggregation parameters are fed back to each of the second devices, so that each of the second devices can update its respective local training model until the local training model reaches a preset training end condition, the steps of which include:
  • if the local training model does not meet the training end condition, iteratively retraining the local training model until the local training model reaches a preset iteration count threshold, and then reacquiring the model training parameters of the local training model;
  • the training end condition includes reaching a preset maximum number of iterations, or the loss function corresponding to the local training model converging.
  • the present application also provides a federated learning modeling method, the federated learning modeling method is applied to a second device, and the federated learning modeling method includes:
  • model training parameters include current model parameters and auxiliary model parameters
  • the step of obtaining model training parameters includes:
  • the present application also provides a federated learning modeling device.
  • the federated learning modeling device is a physical device.
  • the federated learning modeling device includes a memory, a processor, and a program of the federated learning modeling method stored on the memory and runnable on the processor.
  • when the program of the federated learning modeling method is executed by the processor, the steps of the federated learning modeling method can be realized.
  • the present application also provides a computer-readable storage medium, the computer-readable storage medium stores a program for implementing the federated learning modeling method, and the program of the federated learning modeling method is executed by a processor to realize the steps of the federated learning modeling method described above.
  • This application receives the encryption model parameters sent by each second device and the verification parameters corresponding to the encryption model parameters, then performs zero-knowledge verification on each of the encryption model parameters based on each of the verification parameters, so as to determine the false encryption model parameters among the encryption model parameters and obtain a zero-knowledge verification result, and then coordinates each of the second devices to perform federated learning modeling based on the zero-knowledge verification result.
  • That is, after receiving the encryption model parameters sent by each second device and the verification parameters corresponding to the encryption model parameters, the present application performs zero-knowledge verification on each encryption model parameter based on each of the verification parameters, so as to determine the false encryption model parameters among the encryption model parameters and obtain a zero-knowledge verification result.
  • The false encryption model parameters can then be eliminated from the encryption model parameters, and each of the second devices is coordinated to perform federated learning modeling.
  • In this way, this application provides a method for determining the false encryption model parameters among the local model parameters based on zero-knowledge proof; when a malicious participant provides false encryption model parameters during the training process, they can be accurately identified and removed, avoiding federated learning modeling based on encryption model parameters mixed with false ones. This improves the overall quality of the federated model obtained through federated learning modeling, improves the efficiency and accuracy of federated learning modeling, and thus solves the technical problems of low efficiency and poor accuracy of federated learning modeling.
  • Fig. 1 is a schematic flow chart of the first embodiment of the federated learning modeling method according to the application
  • FIG. 2 is a schematic flowchart of a second embodiment of the federated learning modeling method according to the application.
  • FIG. 3 is a schematic flowchart of a third embodiment of the federated learning modeling method according to the application.
  • FIG. 4 is a schematic diagram of the device structure of the hardware operating environment involved in the solution of the embodiment of the application.
  • the embodiment of the present application provides a federated learning modeling method.
  • the federated learning modeling method is applied to a first device.
  • the federated learning modeling method includes:
  • Step S10 receiving encrypted model parameters sent by each second device and verification parameters corresponding to the encrypted model parameters
  • The first device negotiates and interacts with each of the second devices to determine standard verification challenge parameters, where the number of standard verification challenge parameters can be determined during the negotiation interaction.
  • the first device is a coordinator performing federated learning modeling, and is used to coordinate each of the second devices to perform federated learning modeling
  • the second device is a participant performing federated learning modeling
  • the encryption model parameters are model training parameters encrypted based on a homomorphic encryption algorithm.
  • the verification parameter is a parameter for zero-knowledge proof, where the verification parameter includes a second verification random parameter and a verification model parameter, wherein the verification model parameter is the first
  • malicious participants among the participating parties can change the encryption parameters used in the homomorphic encryption of the model training parameters in order to generate false encryption model parameters.
  • Existing federated learning modeling usually directly aggregates the encryption model parameters sent by each of the second devices to obtain the aggregation parameters; if there are false encryption model parameters among the encryption model parameters, the efficiency and accuracy of federated model training are affected. For example, assume that the encryption model parameter sent by second device A is 5a. When second device B is a malicious participant, the false encryption model parameter it sends is 100b, and the aggregation process is a weighted average, the first aggregation parameter obtained after the aggregation process is (5a+100b)/2.
  • When second device B is honest and the encryption model parameter it sends is 5b, the second aggregation parameter obtained after the aggregation process is (5a+5b)/2.
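The arithmetic in this example is easy to check numerically. Taking a = b = 1 for concreteness (the `aggregate` helper below is illustrative, not from the patent), one poisoned value shifts the weighted average far from the honest result:

```python
def aggregate(params):
    # equal-weight weighted average, as in the patent's example
    return sum(params) / len(params)

honest = aggregate([5, 5])       # (5a+5b)/2 with a = b = 1
poisoned = aggregate([5, 100])   # device B maliciously sends 100b with b = 1
# honest -> 5.0, poisoned -> 52.5: a single false parameter dominates the result
```

This is why a single unverified malicious participant can degrade every participant's model in plain federated averaging.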
  • The first device and the second device in this embodiment both perform encryption based on a homomorphic encryption algorithm, where, in an implementable solution, the homomorphic encryption algorithm should satisfy the additively homomorphic property described above.
  • Step S20 performing zero-knowledge verification on each of the encryption model parameters based on each of the verification parameters, so as to determine a false encryption model parameter in each of the encryption model parameters, and obtain a zero-knowledge verification result;
  • zero-knowledge verification is performed on each of the encryption model parameters to determine a false encryption model parameter in each of the encryption model parameters to obtain a zero-knowledge verification result
  • The first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each of the encryption model parameters are respectively calculated. Then, based on the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each of the encryption model parameters, whether each encryption model parameter is a false encryption model parameter is verified respectively, so as to determine the false encryption model parameters among the encryption model parameters and obtain a zero-knowledge verification result. The zero-knowledge verification result records whether each encryption model parameter is a false encryption model parameter; a false encryption model parameter is a model training parameter that a malicious participant has maliciously encrypted, for example by changing the encryption parameters used during homomorphic encryption.
  • the step of performing zero-knowledge verification on each of the encryption model parameters based on each of the verification parameters to determine a false encryption model parameter in each of the encryption model parameters, and obtaining a zero-knowledge verification result includes:
  • Step S21 respectively calculating the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each verification parameter
  • the verification parameters include verification model parameters and second verification random parameters.
  • the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each verification parameter are respectively calculated. Specifically, the following steps are performed for each verification parameter:
  • a homomorphic addition is performed on the encryption model parameters to obtain the first zero-knowledge proof result, and a homomorphic encryption operation is performed on the verification model parameters based on the preset coordinator public key and the second verification random parameters to obtain the second zero-knowledge proof result.
  • the preset verification challenge parameters are x1 and x2
  • Step S22 based on each of the first zero-knowledge certification results and each of the second zero-knowledge certification results, respectively verify whether each encryption model parameter is a false encryption model parameter, and obtain a zero-knowledge verification result.
  • Whether each encryption model parameter is a false encryption model parameter is verified respectively, and the zero-knowledge verification result is obtained. Specifically, for the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each verification parameter, the following steps are performed:
  • The first zero-knowledge proof result is compared with the second zero-knowledge proof result to determine whether they are consistent. If the first zero-knowledge proof result is consistent with the second zero-knowledge proof result, it is determined that the second device did not perform any malicious modification when homomorphically encrypting the model training parameters, that is, the encryption model parameters are not false encryption model parameters. If the first zero-knowledge proof result is inconsistent with the second zero-knowledge proof result, it is determined that the second device performed a malicious modification when homomorphically encrypting the model training parameters, that is, the encryption model parameters are false encryption model parameters.
  • the step of separately verifying whether each encryption model parameter is a false encryption model parameter based on each of the first zero-knowledge proof results and each of the second zero-knowledge proof results includes:
  • Step S221 comparing the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each encryption model parameter respectively;
  • The first zero-knowledge proof result corresponding to each encryption model parameter is compared with the second zero-knowledge proof result; specifically, the difference between the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each encryption model parameter is calculated.
  • Step S222 If the first zero-knowledge proof result corresponding to the encryption model parameter is inconsistent with the second zero-knowledge proof result, determine that the encryption model parameter is the false encryption model parameter;
  • Specifically, if the difference is not 0, it is determined that the second device corresponding to the encryption model parameter maliciously encrypted the model training parameters; the encryption model parameter is then determined to be a false encryption model parameter, and a false identifier is given to the encryption model parameter to mark it as such.
  • Step S223 If the first zero-knowledge proof result corresponding to the encryption model parameter is consistent with the second zero-knowledge proof result, it is determined that the encryption model parameter is not the false encryption model parameter.
  • Specifically, if the difference is 0, it is determined that the second device corresponding to the encryption model parameter did not maliciously encrypt the model training parameters; the encryption model parameter is then determined not to be a false encryption model parameter, and a credible identifier is given to the encryption model parameter to mark it as a trusted model parameter.
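Steps S221 to S223 amount to a per-parameter comparison of the two proof results. A schematic sketch (the `label_parameters` helper and the participant-id keys are assumptions for illustration):

```python
def label_parameters(proofs):
    """proofs maps a participant id to its (first, second) zero-knowledge
    proof results; a zero difference marks the parameter as credible."""
    labels = {}
    for pid, (first, second) in proofs.items():
        diff = first - second            # Step S221: compare the two results
        # Step S222/S223: non-zero difference -> false, zero -> credible
        labels[pid] = "credible" if diff == 0 else "false"
    return labels

labels = label_parameters({"A": (42, 42), "B": (42, 57)})
# A's results match -> credible; B's results differ -> false
```

The returned labels play the role of the "calibration identifiers" attached to each encryption model parameter in the next step.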
  • Step S30 based on the zero-knowledge verification result and each of the encryption model parameters, coordinate each of the second devices to perform federated learning modeling.
  • the zero-knowledge verification result includes a calibration identifier corresponding to each of the encryption model parameters, where the calibration identifier identifies whether the encryption model parameter is a false encryption model parameter.
  • Based on the zero-knowledge verification result and each of the encryption model parameters, each of the second devices is coordinated to perform federated learning modeling. Specifically, based on each of the calibration identifiers, the false encryption model parameters are removed from the encryption model parameters to obtain the trusted model parameters; the trusted model parameters are then aggregated to obtain the aggregation parameters, and each of the second devices is coordinated to perform federated learning modeling based on the aggregation parameters.
  • the step of coordinating each of the second devices to perform federated learning modeling based on the zero-knowledge verification result and each of the encryption model parameters includes:
  • Step S31 based on the zero-knowledge verification result, remove the false encryption model parameters from each encryption model parameter to obtain each trusted model parameter;
  • Additionally, the coordinator can punish the malicious participant corresponding to the false encryption model parameters according to a preset incentive mechanism, or cancel the malicious participant's qualification for subsequent participation in federated learning modeling.
  • Step S32 performing aggregation processing on each of the trusted model parameters to obtain aggregation parameters, and feeding back the aggregation parameters to each of the second devices, so that each of the second devices can update their respective local training models, Until the local training model reaches the preset training end condition.
  • Each of the trusted model parameters is aggregated to obtain the aggregation parameters, and the aggregation parameters are fed back to each of the second devices so that each of the second devices can update its local training model until the local training model reaches a preset training end condition. Specifically, the trusted model parameters are aggregated based on preset aggregation processing rules to obtain the aggregation parameters, where the preset aggregation processing rules include weighted averaging, summation, and the like.
  • The aggregation parameters are sent to each of the second devices, so that each of the second devices decrypts the aggregation parameters with its participant private key to obtain the decrypted aggregation parameters, updates the local training model it holds based on the decrypted aggregation parameters to obtain an updated local training model, and judges whether the updated local training model reaches the preset training end condition. If the updated local training model meets the preset training end condition, the federated learning modeling task is determined to be completed. If the updated local training model does not meet the preset training end condition, the local training model is iteratively retrained until it reaches the preset iteration count threshold, the model training parameters of the local training model are reacquired, and the model training parameters are re-encrypted and sent to the coordinator, until the local training model reaches the preset training end condition, where the training end condition includes reaching a preset maximum number of iterations, or the loss function corresponding to the local training model converging.
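Steps S31 and S32 can be sketched as a single coordinator-side round (the `coordinator_round` helper, the label names, and the plain numeric parameters are assumptions; in the actual scheme the parameters remain homomorphically encrypted throughout):

```python
def coordinator_round(encrypted_params, zk_labels):
    # Step S31: drop parameters flagged as false by zero-knowledge verification
    trusted = [p for pid, p in encrypted_params.items()
               if zk_labels[pid] == "credible"]
    # Step S32: aggregate the trusted parameters (equal-weight average here)
    return sum(trusted) / len(trusted)

agg = coordinator_round({"A": 5, "B": 100, "C": 7},
                        {"A": "credible", "B": "false", "C": "credible"})
# only A and C survive the filter -> (5 + 7) / 2 = 6.0
```

The malicious value 100 from B never enters the aggregation, which is exactly the effect the embodiment is after.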
  • In an implementable solution, the local training model includes a risk control model, where the risk control model is a machine learning model used to assess the risk of a user's loan. When there is a malicious participant among the participants, aggregating the false encryption model parameters sent by the malicious participant together with the encryption model parameters sent by the normal participants produces erroneous aggregation parameters that differ greatly from the accurate aggregation parameters; each second device then updates the risk control model based on the erroneous aggregation parameters, which reduces the accuracy of the risk control model's user loan risk assessment.
  • In this embodiment, by contrast, the false encryption model parameters can be screened out and eliminated from the encryption model parameters sent by each participant to obtain the trusted encryption model parameters, so the risk control model is always updated based on aggregation parameters obtained by aggregating the trusted encryption model parameters; this makes the risk control model's assessment of user loan risk more accurate, that is, it improves the accuracy of the risk control model's loan risk assessment.
  • In this embodiment, the encryption model parameters sent by each second device and the verification parameters corresponding to the encryption model parameters are received; zero-knowledge verification is then performed on each of the encryption model parameters based on each of the verification parameters, so that the false encryption model parameters among the encryption model parameters are determined and a zero-knowledge verification result is obtained; each of the second devices is then coordinated to perform federated learning modeling based on the zero-knowledge verification result.
  • That is, after the encryption model parameters and the corresponding verification parameters are received, zero-knowledge verification is performed on each of the encryption model parameters to determine the false encryption model parameters and obtain a zero-knowledge verification result, after which the false encryption model parameters can be eliminated from the encryption model parameters and each of the second devices coordinated to perform federated learning modeling.
  • In this way, this embodiment provides a method for determining the false encryption model parameters among the local model parameters based on zero-knowledge proof; when a malicious participant provides false encryption model parameters during the training process, they can be accurately identified and removed, avoiding federated learning modeling based on encryption model parameters mixed with false ones. This improves the overall quality of the federated model obtained through federated learning modeling, improves the efficiency and accuracy of federated learning modeling, and thus solves the technical problems of low efficiency and poor accuracy of federated learning modeling.
  • the verification parameters include verification model parameters and verification random parameters
  • the step of separately calculating the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each of the verification parameters includes:
  • Step S211 Perform legality verification on each of the encryption model parameters based on preset verification challenge parameters to obtain each of the first zero-knowledge verification results;
  • the preset verification challenge parameters include a first verification challenge parameter and a second verification challenge parameter
  • the encryption model parameters include a first encryption model parameter and a second encryption model parameter
  • the first encryption model parameter is the encryption parameter after the second device homomorphically encrypts the current model parameter
  • the second encryption model parameter is the encryption parameter after the second device homomorphically encrypts the previous model parameter
  • the current model parameter is the model parameter extracted when the local training model reaches the preset training iteration count threshold during the current round of federated training
  • the previous model parameter is the model parameter obtained based on the previous round of federated training
  • the legality verification is performed on each encryption model parameter, and each of the first zero-knowledge verification results is obtained. Specifically, the following steps are performed for each encryption model parameter:
  • the preset verification challenge parameters include a first verification challenge parameter and a second verification challenge parameter
  • the step of performing legality verification on each of the encryption model parameters based on preset verification challenge parameters to obtain each of the first zero-knowledge verification results includes:
  • Step A10 performing exponentiation operations on the first verification challenge parameter and each encryption model parameter, respectively, to obtain a first exponentiation operation result corresponding to each encryption model parameter;
  • the first verification challenge parameter and each encryption model parameter are respectively subjected to exponentiation operation to obtain the first exponentiation operation result corresponding to each encryption model parameter.
  • The following steps are performed: based on the first verification challenge parameter, the first encryption model parameter is exponentiated to obtain the first power operation result. For example, assuming that the first verification challenge parameter is x and the first encryption model parameter is h, the result of the first power operation is h^x.
  • Step A20 performing exponentiation operations on the second verification challenge parameter and each encryption model parameter, respectively, to obtain a second exponentiation operation result corresponding to each encryption model parameter;
  • the second verification challenge parameter and each encryption model parameter are respectively subjected to exponentiation operation to obtain the second exponentiation operation result corresponding to each encryption model parameter.
  • the following steps are performed: based on the second verification challenge parameter, an exponentiation operation is performed on the second encryption model parameter to obtain a second exponentiation operation result.
  • Step A30 generating each of the first zero-knowledge verification results based on each of the first power operation results and each of the second power operation results.
  • Each of the first zero-knowledge verification results is generated; specifically, the product of the first power operation result and the second power operation result is calculated, and the product is used as the first zero-knowledge verification result.
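Steps A10 to A30 combine the two ciphertexts under the two challenge exponents. A minimal sketch (the function name and the modulus argument `n2` are assumptions; under a Paillier-style scheme the operations are taken modulo n^2):

```python
def first_zk_result(c1, c2, x1, x2, n2):
    # Step A10: c1^x1; Step A20: c2^x2; Step A30: their product, mod n^2
    return (pow(c1, x1, n2) * pow(c2, x2, n2)) % n2

result = first_zk_result(7, 11, 2, 3, 1000)
# 7^2 * 11^3 mod 1000 = 49 * 1331 mod 1000 = 219
```

Because the coordinator computes this value from the ciphertexts alone, the participant never has to reveal the underlying model parameters.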
  • Step S212 Perform encryption processing on each verification model parameter based on the preset coordinator public key and each of the verification random parameters to obtain each of the second zero-knowledge verification results.
  • each of the participants is a legal participant, and the public key of the participant is consistent with the public key of the preset coordinator, and the verification random parameter includes a third verification random Parameters, and the third verification random parameter is calculated based on the first verification random parameter, the second verification random parameter, the first verification challenge parameter, and the second verification challenge parameter.
  • the first verification random parameter is r1
  • the second verification random parameter is r2
  • the first verification challenge parameter is x1
  • the second verification challenge parameter is x2
  • each verification model parameter is encrypted to obtain each of the second zero-knowledge verification results. Specifically, the following steps are performed for each verification model parameter:
  • the preset coordinator's public key is P, then
  • if the participant does not maliciously encrypt the encryption model parameters (for example, by tampering with the encryption algorithm or with the encrypted parameters), the first zero-knowledge proof result is the same as the second zero-knowledge proof result; that is, the encryption model parameter provided by the participant to the coordinator is a trusted model parameter. If the participant maliciously encrypts the encryption model parameter, the first zero-knowledge proof result differs from the second zero-knowledge proof result; that is, the encryption model parameter provided by the participant to the coordinator is a false encryption model parameter.
  • in this embodiment, legality verification is performed on each of the encryption model parameters based on the preset verification challenge parameters to obtain each of the first zero-knowledge verification results, and each of the verification model parameters is then encrypted based on the preset coordinator public key and each of the verification random parameters to obtain each of the second zero-knowledge verification results. That is, this embodiment provides a method for calculating the first zero-knowledge proof result and the second zero-knowledge proof result; once both have been calculated, they need only be compared to determine whether the encryption model parameters are false encryption model parameters. This lays a foundation for identifying the false encryption model parameters among the encryption model parameters, and in turn for solving the technical problems of low efficiency and poor accuracy in federated learning modeling.
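The comparison just described reduces to an equality check per participant. A minimal sketch, with hypothetical participant IDs and result dictionaries:

```python
def flag_false_parameters(first_results: dict, second_results: dict) -> dict:
    """Compare the first and second zero-knowledge proof results per
    participant; a mismatch flags that participant's encrypted model
    parameter as a false encryption model parameter."""
    return {pid: first_results[pid] != second_results[pid]
            for pid in first_results}

# Example: participant 'b' submitted a parameter whose two proof results disagree.
flags = flag_false_parameters({'a': 17, 'b': 23}, {'a': 17, 'b': 99})
```

Here `flags['b']` is `True`, so participant b's ciphertext would be excluded from aggregation.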
  • the federated learning modeling method is applied to a second device, and the federated learning modeling method includes:
  • Step B10 Obtain model training parameters and first verification random parameters, and perform encryption processing on the model training parameters based on the first verification random parameters and the preset public key to obtain encrypted model parameters;
  • the federated learning modeling includes at least one round of federation; in each round, the second device iteratively trains the local training model until a preset iteration-count threshold is reached, then sends the model parameters of the local training model to the first device, receives the aggregation parameters fed back by the first device based on the model parameters, updates the local training model based on the aggregation parameters, and uses the updated local training model as the initial model of the next round of federation, until the local training model reaches a preset training end condition, where the preset training end condition includes reaching the maximum number of iterations, convergence of the loss function, and the like.
  • model training parameters include current model parameters and auxiliary model parameters
  • the step of obtaining model training parameters includes:
  • Step B11 Perform iterative training on the local training model corresponding to the model training parameters until the local training model reaches a preset iteration number threshold, and obtain the current model parameters of the local training model;
  • the current model parameter is the current iteration model parameter when the local training model reaches the preset iteration number threshold in the current round of federation.
  • Step B12 Obtain the previous model parameters of the local training model, and generate the auxiliary model parameters based on the previous model parameters.
  • the previous model parameters of the local training model are acquired, and the auxiliary model parameters are generated based on the previous model parameters, specifically, the previous-round federation corresponding to the current round of federation is obtained
  • Step B20 generating a verification model parameter and a second verification random parameter based on the first verification random parameter, the model training parameter, and the preset verification challenge parameter;
  • the preset verification challenge parameters can be calculated by the coordinator from the previously encrypted model parameters sent by each participant in the previous round of federation and a preset hash function. For example, if there are 10 participants, the corresponding 10 previously encrypted model parameters are freely combined, and the n results of the free combination are input into the preset hash function to obtain the verification challenge parameters x1, x2, ..., xn. The generation method and number of the specific preset verification challenge parameters x1, x2, ..., xn are not limited.
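One possible sketch of this derivation, assuming SHA-256 as the preset hash function and pairwise combinations as the "free combination" (both are assumptions, since the patent leaves the generation method and number open):

```python
import hashlib
from itertools import combinations

def challenge_parameters(ciphertexts, k=2):
    """Derive verification challenge parameters x1..xn by hashing each
    k-element combination of the previous round's encrypted model
    parameters and reducing the digest to a 32-bit integer."""
    xs = []
    for combo in combinations(ciphertexts, k):
        digest = hashlib.sha256(repr(combo).encode()).hexdigest()
        xs.append(int(digest, 16) % (2 ** 32))
    return xs
```

Because the challenges are hash outputs over ciphertexts already held by all parties, every participant and the coordinator can recompute the same x1, ..., xn without extra communication.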
  • a verification model parameter and a second verification random parameter are generated; specifically, an exponentiation operation is performed on the first verification random parameter and the preset verification challenge parameters to obtain the second verification random parameter.
  • Step B30 sending the encryption model parameters, the verification model parameters, and the second verification random parameters to a first device for the first device to perform zero-knowledge verification and obtain a zero-knowledge verification result;
  • the encryption model parameters, the verification model parameters, and the second verification random parameters are sent to a first device for the first device to perform zero-knowledge verification and obtain a zero-knowledge verification result
  • the encryption model parameters, the verification model parameters, and the second verification random parameters are sent to a first device associated with the second device, so that the first device can calculate the first zero-knowledge proof result and the second zero-knowledge proof result based on the encryption model parameters, the verification model parameters, and the second verification random parameters, determine, based on the first zero-knowledge proof result and the second zero-knowledge proof result, whether the encryption model parameter is a false encryption model parameter, obtain a determination result, and record the determination result in the zero-knowledge verification result, where the zero-knowledge verification result includes the determination result corresponding to each of the second devices.
  • Step B40 Receive the aggregation parameter fed back by the first device based on the zero-knowledge verification result and the encryption model parameter, and update the local training model corresponding to the model training parameters based on the aggregation parameter, until the local training model reaches the preset training end condition.
  • the aggregation parameter fed back by the first device based on the zero-knowledge verification result and the encryption model parameters is received, and the local training model corresponding to the model training parameters is updated based on the aggregation parameter until the local training model reaches the preset training end condition. Specifically, after the first device obtains the zero-knowledge proof results, it removes the false encryption model parameters from the encrypted model parameters sent by each second device based on the zero-knowledge proof results to obtain the trusted model parameters, performs aggregation processing (including summation, weighted averaging, and the like) on the trusted model parameters to obtain the aggregation parameters, and feeds the aggregation parameters back to each of the second devices. After receiving the aggregation parameters, the second device decrypts them based on the preset private key corresponding to the preset public key to obtain the decrypted aggregation parameters, updates the local training model based on the decrypted aggregation parameters, and uses the updated local training model as the initial model of the next round of federation until the local training model reaches the preset training end condition, where the training end condition includes reaching the maximum number of iterations or convergence of the loss function.
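A minimal coordinator-side sketch of the elimination and aggregation steps, assuming an additively homomorphic scheme in which multiplying ciphertexts corresponds to summing plaintexts (the modulus, the flag dictionary, and all values are illustrative):

```python
N = 2 ** 61 - 1  # hypothetical public modulus of the encryption scheme

def aggregate_trusted(encrypted_params: dict, false_flags: dict) -> int:
    """Drop ciphertexts flagged as false by zero-knowledge verification,
    then aggregate the trusted ones. Under an additively homomorphic
    scheme the ciphertext product stands for the plaintext sum."""
    trusted = [c for pid, c in encrypted_params.items() if not false_flags[pid]]
    agg = 1
    for c in trusted:
        agg = (agg * c) % N  # homomorphic "sum" of trusted parameters
    return agg
```

The coordinator never decrypts: it only multiplies the surviving ciphertexts and returns the result, which each second device decrypts locally with its private key.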
  • the model training parameters and the first verification random parameters are obtained, and the model training parameters are encrypted based on the first verification random parameters and the preset public key to obtain the encrypted model parameters; the verification model parameters and the second verification random parameters are then generated based on the first verification random parameters, the model training parameters, and the preset verification challenge parameters; the encrypted model parameters, the verification model parameters, and the second verification random parameters are then sent to the first device for the first device to perform zero-knowledge verification and obtain a zero-knowledge verification result; and the aggregation parameter fed back by the first device based on the zero-knowledge verification result and the encrypted model parameters is received, and the local training model corresponding to the model training parameters is updated based on the aggregation parameter until the local training model reaches a preset training end condition.
  • this embodiment provides a federated learning modeling method based on zero-knowledge proof. That is, when the model training parameters are encrypted into the encrypted model parameters, the verification model parameters and the second verification random parameters are generated at the same time, and the encrypted model parameters, the verification model parameters, and the second verification random parameters are then sent to the first device for the first device to perform zero-knowledge verification and obtain a zero-knowledge verification result. The first device then determines and eliminates the false encryption model parameters among those sent by the second devices, so that the aggregation parameters received by each second device are obtained by the first device through aggregation of trusted encryption model parameters, and the local training model is updated based on these aggregation parameters to complete the federated learning modeling. This avoids updating the local training model based on aggregation parameters obtained by aggregating encryption model parameters mixed with false encryption model parameters, which would make it difficult for the local training model to reach the preset training end condition and would lower the accuracy of the local training model, and thereby improves the efficiency and accuracy of federated learning modeling.
  • FIG. 4 is a schematic diagram of the device structure of the hardware operating environment involved in the solution of the embodiment of the present application.
  • the federated learning modeling device may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002.
  • the communication bus 1002 is used to implement connection and communication between the processor 1001 and the memory 1005.
  • the memory 1005 may be a high-speed RAM memory, or a non-volatile memory (non-volatile memory), such as a magnetic disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • the federated learning modeling device may also include a user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and so on.
  • the user interface may include a display screen (Display) and an input sub-module such as a keyboard (Keyboard); optionally, the user interface may also include a standard wired interface and a wireless interface.
  • the network interface can optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • the structure of the federated learning modeling device shown in FIG. 4 does not constitute a limitation on the federated learning modeling device; it may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • the memory 1005 as a computer storage medium may include an operating system, a network communication module, and a federated learning modeling method program.
  • the operating system is a program that manages and controls the hardware and software resources of the federated learning modeling equipment, and supports the running of the federated learning modeling method program and other software and/or programs.
  • the network communication module is used to realize the communication between the various components in the memory 1005 and the communication with other hardware and software in the federated learning modeling method system.
  • the processor 1001 is used to execute the federated learning modeling method program stored in the memory 1005 to implement the steps of the federated learning modeling method described in any one of the above.
  • the specific implementation of the federated learning modeling device of the present application is basically the same as each embodiment of the above-mentioned federated learning modeling method, and will not be repeated here.
  • An embodiment of the present application also provides a federated learning modeling device, the federated learning modeling device is applied to a first device, and the federated learning modeling device includes:
  • a receiving module configured to receive encrypted model parameters sent by each second device and verification parameters corresponding to the encrypted model parameters
  • the zero-knowledge verification module is configured to perform zero-knowledge verification on each of the encryption model parameters based on each of the verification parameters, so as to determine a false encryption model parameter in each of the encryption model parameters, and obtain a zero-knowledge verification result;
  • the coordination module is configured to coordinate each of the second devices to perform federated learning modeling based on the zero-knowledge verification result and each of the encryption model parameters.
  • the coordination module includes:
  • the elimination sub-module is used to eliminate the false encryption model parameters from the encryption model parameters based on the zero-knowledge verification result to obtain the trusted model parameters;
  • the aggregation sub-module is configured to perform aggregation processing on each of the trusted model parameters to obtain aggregation parameters, and feed back the aggregation parameters to each of the second devices, so that each of the second devices can update its local training model until the local training model reaches a preset training end condition.
  • the zero-knowledge verification module includes:
  • the calculation sub-module is used to calculate the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each verification parameter;
  • the zero-knowledge verification sub-module is configured to verify whether each encryption model parameter is a false encryption model parameter based on each of the first zero-knowledge certification results and each of the second zero-knowledge certification results, and obtain a zero-knowledge verification result.
  • the calculation sub-module includes:
  • a legitimacy verification unit configured to verify the legitimacy of each encryption model parameter based on preset verification challenge parameters, and obtain each of the first zero-knowledge verification results
  • the encryption unit is configured to perform encryption processing on each verification model parameter based on the preset coordinator public key and each of the verification random parameters to obtain each of the second zero-knowledge verification results.
  • the legality verification unit includes:
  • the first power operation subunit is configured to perform power operations on the first verification challenge parameter and each of the encryption model parameters to obtain the first power operation result corresponding to each of the encryption model parameters;
  • the second power operation subunit is configured to perform power operations on the second verification challenge parameter and each of the encryption model parameters to obtain the second power operation result corresponding to each of the encryption model parameters;
  • a generating subunit is configured to generate each of the first zero-knowledge verification results based on each of the first power operation results and each of the second power operation results.
  • the zero-knowledge verification sub-module includes:
  • a comparison unit configured to compare the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each encryption model parameter respectively;
  • a first determining unit configured to determine that the encryption model parameter is the false encryption model parameter if the first zero-knowledge proof result corresponding to the encryption model parameter is inconsistent with the second zero-knowledge proof result;
  • the second determination unit is configured to determine that the encryption model parameter is not the false encryption model parameter if the first zero-knowledge proof result corresponding to the encryption model parameter is consistent with the second zero-knowledge proof result.
  • the specific implementation of the federated learning modeling device of the present application is basically the same as each embodiment of the above-mentioned federated learning modeling method, and will not be repeated here.
  • this embodiment also provides a federated learning modeling device, the federated learning modeling device is applied to a second device, and the federated learning modeling device includes:
  • An encryption module configured to obtain model training parameters and first verification random parameters, and based on the first verification random parameters and a preset public key, perform encryption processing on the model training parameters to obtain encrypted model parameters;
  • a generating module configured to generate a verification model parameter and a second verification random parameter based on the first verification random parameter, the model training parameter, and the preset verification challenge parameter;
  • a sending module configured to send the encryption model parameters, the verification model parameters, and the second verification random parameters to a first device for the first device to perform zero-knowledge verification and obtain a zero-knowledge verification result;
  • the model update module is configured to receive the aggregate parameter fed back by the first device based on the zero-knowledge verification result and the encryption model parameter, and update the local training model corresponding to the model training parameter based on the aggregate parameter , Until the local training model reaches the preset training end condition.
  • the encryption module includes:
  • An obtaining sub-module configured to perform iterative training on the local training model corresponding to the model training parameters until the local training model reaches a preset iteration number threshold, and obtain the current model parameters of the local training model;
  • a generating sub-module is used to obtain the previous model parameters of the local training model, and generate the auxiliary model parameters based on the previous model parameters.
  • the specific implementation of the federated learning modeling device of the present application is basically the same as each embodiment of the above-mentioned federated learning modeling method, and will not be repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computational Linguistics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A federated learning modeling method and device, and a computer-readable storage medium. The federated learning modeling method comprises: receiving encryption model parameters and verification parameters corresponding to the encryption model parameters that are sent by second devices (S10); on the basis of the verification parameters, respectively performing zero knowledge verification on the encryption model parameters so as to determine false encryption model parameters within the encryption model parameters and obtain a zero knowledge verification result (S20); and on the basis of the zero knowledge verification result and the encryption model parameters, coordinating the second devices to perform federated learning modeling (S30).

Description

Federated learning modeling method, device, and computer-readable storage medium
Cross-reference to related applications

This application claims priority to Chinese patent application No. 202010445868.8, filed on May 22, 2020 and entitled "Federated Learning Modeling Method, Device, and Readable Storage Medium", which is hereby incorporated by reference.
Technical field

This application relates to the field of artificial intelligence in financial technology (Fintech), and in particular to a federated learning modeling method, device, and computer-readable storage medium.
Background

With the continuous development of financial technology, especially Internet-based fintech, more and more technologies (such as distributed computing, blockchain, and artificial intelligence) are applied in the financial field, but the financial industry also places higher requirements on these technologies, for example, higher requirements on the distribution of the industry's corresponding to-do items.

With the continuous development of computer software and artificial intelligence, federated learning modeling has become increasingly mature. At present, each participant in federated learning modeling usually feeds back its own encrypted model parameters to the coordinator of the federated learning modeling; the coordinator aggregates the encrypted model parameters of all parties and feeds the aggregated parameters back to each participant for federated learning modeling. However, if a malicious participant provides false encrypted model parameters during training, the overall quality of the federated model obtained by federated learning modeling is directly affected, and the entire federated learning modeling process may even fail, making the efficiency and accuracy of federated learning modeling low.
The above content is only used to assist understanding of the technical solution of this application, and does not constitute an admission that it is prior art.
Summary of the invention

The main purpose of this application is to provide a federated learning modeling method, device, and computer-readable storage medium, aiming to solve the technical problems of low efficiency and poor accuracy of federated learning modeling in the prior art.
To achieve the above objective, this application provides a federated learning modeling method, applied to a first device, the method including:

receiving encrypted model parameters and verification parameters corresponding to the encrypted model parameters sent by each second device;

performing zero-knowledge verification on each of the encrypted model parameters based on each of the verification parameters, so as to determine false encrypted model parameters among the encrypted model parameters and obtain a zero-knowledge verification result; and

coordinating each of the second devices to perform federated learning modeling based on the zero-knowledge verification result and each of the encrypted model parameters.
In an embodiment, the step of coordinating each of the second devices to perform federated learning modeling based on the zero-knowledge verification result and each of the encrypted model parameters includes:

removing, based on the zero-knowledge verification result, the false encrypted model parameters from the encrypted model parameters to obtain trusted model parameters; and

performing aggregation processing on each of the trusted model parameters to obtain aggregation parameters, and feeding back the aggregation parameters to each of the second devices, so that each of the second devices updates its local training model until the local training model reaches a preset training end condition.
In an embodiment, the step of performing zero-knowledge verification on each of the encrypted model parameters based on each of the verification parameters, so as to determine false encrypted model parameters among the encrypted model parameters and obtain a zero-knowledge verification result, includes:

calculating the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each of the verification parameters; and

verifying, based on each of the first zero-knowledge proof results and each of the second zero-knowledge proof results, whether each encrypted model parameter is a false encrypted model parameter, to obtain the zero-knowledge verification result.
In an embodiment, the verification parameters include verification model parameters and verification random parameters, and the step of calculating the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each of the verification parameters includes:

performing legality verification on each of the encrypted model parameters based on preset verification challenge parameters to obtain each of the first zero-knowledge verification results; and

encrypting each of the verification model parameters based on a preset coordinator public key and each of the verification random parameters to obtain each of the second zero-knowledge verification results.
In an embodiment, the preset verification challenge parameters include a first verification challenge parameter and a second verification challenge parameter, and the step of performing legality verification on each of the encrypted model parameters based on the preset verification challenge parameters to obtain each of the first zero-knowledge verification results includes:

performing a power operation on the first verification challenge parameter and each of the encrypted model parameters to obtain a first power operation result corresponding to each of the encrypted model parameters;

performing a power operation on the second verification challenge parameter and each of the encrypted model parameters to obtain a second power operation result corresponding to each of the encrypted model parameters; and

generating each of the first zero-knowledge verification results based on each of the first power operation results and each of the second power operation results.
In an embodiment, the step of verifying, based on each of the first zero-knowledge proof results and each of the second zero-knowledge proof results, whether each encrypted model parameter is a false encrypted model parameter includes:

comparing the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each of the encrypted model parameters;

if the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to the encrypted model parameter are inconsistent, determining that the encrypted model parameter is a false encrypted model parameter; and if the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to the encrypted model parameter are consistent, determining that the encrypted model parameter is not a false encrypted model parameter.
在一实施例中,所述第一设备和第二设备均基于同态加密算法进行加密,其中,In an embodiment, the first device and the second device both perform encryption based on a homomorphic encryption algorithm, wherein,
同态加密算法满足:The homomorphic encryption algorithm satisfies:
C=Enc(PK,m,r)，对于 C₁=Enc(PK,m₁,r₁) 和 C₂=Enc(PK,m₂,r₂)，满足：C = Enc(PK, m, r), and for C₁ = Enc(PK, m₁, r₁) and C₂ = Enc(PK, m₂, r₂), the following holds:

C₁ ⊗ C₂ = Enc(PK, m₁+m₂, r₁·r₂)

其中，⊗为密文上的同态运算，C、C₁和C₂均为待加密参数加密之后的密文，PK为加密的密钥，m、m₁和m₂为待加密参数，r、r₁和r₂为加密所需的随机数。Among them, ⊗ is the homomorphic operation on ciphertexts, C, C₁ and C₂ are the ciphertexts obtained by encrypting the parameters to be encrypted, PK is the encryption key, m, m₁ and m₂ are the parameters to be encrypted, and r, r₁ and r₂ are the random numbers required for encryption.
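This additive homomorphic property is satisfied, for example, by the Paillier cryptosystem. The following sketch (not part of the original disclosure; it uses insecure demonstration-sized primes and assumes a Paillier-style Enc in which the homomorphic combination ⊗ is multiplication modulo n²) checks the stated relation:

```python
# Toy Paillier cryptosystem with g = n + 1 (insecure, demonstration-only primes).
p, q = 293, 433
n = p * q
n2 = n * n
phi = (p - 1) * (q - 1)

def enc(m, r):
    """C = Enc(PK, m, r) = (1 + n)^m * r^n mod n^2 (PK is the modulus n; gcd(r, n) = 1)."""
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    """Recover m from C using the secret phi = (p - 1) * (q - 1)."""
    u = pow(c, phi, n2)
    return (((u - 1) // n) * pow(phi, -1, n)) % n

m1, r1 = 42, 17
m2, r2 = 58, 101
c1, c2 = enc(m1, r1), enc(m2, r2)

# The property stated above: C1 (combined with) C2 = Enc(PK, m1 + m2, r1 * r2),
# where the homomorphic combination is multiplication modulo n^2.
assert (c1 * c2) % n2 == enc(m1 + m2, r1 * r2)
assert dec((c1 * c2) % n2) == m1 + m2
```

Decryption of the combined ciphertext recovers the sum of the two plaintexts, which is exactly what lets a coordinator aggregate encrypted model parameters without seeing them.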
在一实施例中，所述对各所述可信模型参数进行聚合处理，获得聚合参数，并将所述聚合参数分别反馈至各所述第二设备，以供各所述第二设备更新各自的本地训练模型，直至所述本地训练模型达到预设训练结束条件的步骤包括：In an embodiment, the step of performing aggregation processing on each of the trusted model parameters to obtain an aggregation parameter, and feeding the aggregation parameter back to each of the second devices so that each of the second devices updates its own local training model until the local training model reaches the preset training end condition, includes:
对所述聚合参数进行解密,获得解密聚合参数;Decrypt the aggregation parameters to obtain decrypted aggregation parameters;
并基于解密聚合参数,对己方持有的本地训练模型进行更新,获得更新后的本地训练模型;And based on the decryption aggregation parameters, update the local training model held by oneself to obtain the updated local training model;
判断更新后的本地训练模型是否达到预设训练结束条件;Determine whether the updated local training model meets the preset training end conditions;
若更新后的本地训练模型达到预设训练结束条件,则判定完成所述联邦学习建模的任务;If the updated local training model reaches the preset training end condition, it is determined that the task of the federated learning modeling is completed;
若更新后的本地训练模型未达到预设训练结束条件，则重新对所述本地训练模型进行迭代训练，直至所述本地模型达到预设迭代次数阈值，则重新获取所述本地训练模型的模型训练参数；If the updated local training model does not reach the preset training end condition, the local training model is iteratively trained again until the local model reaches the preset iteration-count threshold, and the model training parameters of the local training model are then reacquired;
将所述模型训练参数重新加密发送至所述协调方，以重新进行联邦学习，直至所述本地训练模型达到预设训练结束条件；以及The model training parameters are re-encrypted and sent to the coordinator for renewed federated learning until the local training model reaches the preset training end condition; and
其中，所述训练结束条件包括达到预设最大迭代次数、所述本地训练模型对应的损失函数收敛等。Wherein, the training end condition includes reaching a preset maximum number of iterations, convergence of the loss function corresponding to the local training model, and the like.

为实现上述目的，本申请还提供一种联邦学习建模方法，所述联邦学习建模方法应用于第二设备，所述联邦学习建模方法包括：To achieve the above objective, the present application further provides a federated learning modeling method, the federated learning modeling method is applied to a second device, and the federated learning modeling method includes:
获取模型训练参数和第一验证随机参数,并基于所述第一验证随机参数和预设公钥,对所述模型训练参数进行加密处理,获得加密模型参数;Acquiring model training parameters and first verification random parameters, and performing encryption processing on the model training parameters based on the first verification random parameters and the preset public key to obtain encrypted model parameters;
基于所述第一验证随机参数、所述模型训练参数和预设验证挑战参数,生成验证模型参数和第二验证随机参数;Generating a verification model parameter and a second verification random parameter based on the first verification random parameter, the model training parameter, and the preset verification challenge parameter;
将所述加密模型参数、所述验证模型参数和所述第二验证随机参数发送至第一设备,以供所述第一设备进行零知识验证,获得零知识验证结果;以及Sending the encryption model parameters, the verification model parameters, and the second verification random parameters to a first device for the first device to perform zero-knowledge verification and obtain a zero-knowledge verification result; and
接收所述第一设备基于所述零知识验证结果和所述加密模型参数反馈的聚合参数，并基于所述聚合参数，对所述模型训练参数对应的本地训练模型进行更新，直至所述本地训练模型达到预设训练结束条件。Receiving the aggregation parameter fed back by the first device based on the zero-knowledge verification result and the encryption model parameters, and updating the local training model corresponding to the model training parameters based on the aggregation parameter, until the local training model reaches the preset training end condition.
在一实施例中,所述模型训练参数包括当前模型参数和辅助模型参数,In an embodiment, the model training parameters include current model parameters and auxiliary model parameters,
所述获取模型训练参数的步骤包括:The step of obtaining model training parameters includes:
对所述模型训练参数对应的本地训练模型进行迭代训练,直至所述本地训练模型达到预设迭代次数阀值,获取所述本地训练模型的所述当前模型参数;以及Performing iterative training on the local training model corresponding to the model training parameters until the local training model reaches a preset iteration number threshold, and obtaining the current model parameters of the local training model; and
获取所述本地训练模型的在前模型参数,并基于所述在前模型参数,生成所述辅助模型参数。Obtain the previous model parameters of the local training model, and generate the auxiliary model parameters based on the previous model parameters.
本申请还提供一种联邦学习建模设备，所述联邦学习建模设备为实体设备，所述联邦学习建模设备包括：存储器、处理器以及存储在所述存储器上并可在所述处理器上运行的所述联邦学习建模方法的程序，所述联邦学习建模方法的程序被处理器执行时可实现如上述的联邦学习建模方法的步骤。The present application further provides a federated learning modeling device. The federated learning modeling device is a physical device, and includes a memory, a processor, and a program of the federated learning modeling method that is stored on the memory and runnable on the processor; when the program of the federated learning modeling method is executed by the processor, the steps of the federated learning modeling method described above can be realized.
本申请还提供一种计算机可读存储介质，所述计算机可读存储介质上存储有实现联邦学习建模方法的程序，所述联邦学习建模方法的程序被处理器执行时实现如上述的联邦学习建模方法的步骤。The present application further provides a computer-readable storage medium storing a program for implementing the federated learning modeling method; when the program of the federated learning modeling method is executed by a processor, the steps of the federated learning modeling method described above are realized.
本申请通过接收各第二设备发送的加密模型参数和所述加密模型参数对应的验证参数，进而基于各所述验证参数，分别对各所述加密模型参数进行零知识验证，以在各所述加密模型参数中确定虚假加密模型参数，获得零知识验证结果，进而基于所述零知识验证结果，协调各所述第二设备进行联邦学习建模。也即，本申请在接收各第二设备发送的加密模型参数和所述加密模型参数对应的验证参数之后，基于各所述验证参数，分别对各所述加密模型参数进行零知识验证，以在各所述加密模型参数中确定虚假加密模型参数，获得零知识验证结果，在一实施例中，基于所述零知识验证结果，可在各所述加密模型参数中剔除所述虚假加密模型参数，以协调各所述第二设备进行联邦学习建模。也即，本申请提供了一种基于零知识证明在各所述本地模型中确定虚假加密模型参数的方法，进而当有恶意参与方在训练过程中提供虚假加密模型参数时，可准确识别并剔除虚假加密模型参数，进而避免了基于混合有虚假加密模型参数的各加密模型参数进行联邦学习建模的情况发生，进而提高了通过联邦学习建模获得的联邦模型的整体模型质量，进而提高了联邦学习建模的效率和精确度，进而解决了联邦学习建模效率低且精确度差的技术问题。This application receives the encryption model parameters sent by each second device and the verification parameters corresponding to the encryption model parameters, and then, based on each of the verification parameters, performs zero-knowledge verification on each of the encryption model parameters to determine false encryption model parameters among them and obtain a zero-knowledge verification result; based on the zero-knowledge verification result, each of the second devices is then coordinated to perform federated learning modeling. That is, after receiving the encryption model parameters and the corresponding verification parameters sent by each second device, this application performs zero-knowledge verification on each encryption model parameter based on the verification parameters to determine the false encryption model parameters and obtain a zero-knowledge verification result; in an embodiment, based on the zero-knowledge verification result, the false encryption model parameters can be removed from the encryption model parameters in order to coordinate each second device to perform federated learning modeling. That is, this application provides a method for determining false encryption model parameters in each of the local models based on zero-knowledge proof, so that when a malicious participant provides false encryption model parameters during training, they can be accurately identified and removed. This avoids performing federated learning modeling on encryption model parameters mixed with false ones, thereby improving the overall model quality of the federated model obtained through federated learning modeling as well as the efficiency and accuracy of federated learning modeling, and thus solving the technical problem of low efficiency and poor accuracy of federated learning modeling.
附图说明Description of the drawings
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。The drawings herein are incorporated into the specification and constitute a part of the specification, show embodiments that conform to the application, and are used together with the specification to explain the principle of the application.
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。In order to more clearly describe the technical solutions in the embodiments of the present application or the prior art, the following will briefly introduce the accompanying drawings that need to be used in the description of the embodiments or the prior art. Obviously, those of ordinary skill in the art are In other words, other drawings can be obtained based on these drawings without creative labor.
图1为本申请联邦学习建模方法第一实施例的流程示意图;Fig. 1 is a schematic flow chart of the first embodiment of the federated learning modeling method according to the application;
图2为本申请联邦学习建模方法第二实施例的流程示意图;FIG. 2 is a schematic flowchart of a second embodiment of a federal learning modeling method according to the application;
图3为本申请联邦学习建模方法第三实施例的流程示意图;FIG. 3 is a schematic flowchart of a third embodiment of a federal learning modeling method according to the application;
图4为本申请实施例方案涉及的硬件运行环境的设备结构示意图。FIG. 4 is a schematic diagram of the device structure of the hardware operating environment involved in the solution of the embodiment of the application.
本申请目的实现、功能特点及优点将结合实施例,参照附图做进一步说明。The realization of the objectives, functional characteristics, and advantages of this application will be further described in conjunction with the embodiments and with reference to the accompanying drawings.
具体实施方式Detailed ways
应当理解,此处所描述的具体实施例仅用以解释本申请,并不用于限定本申请。It should be understood that the specific embodiments described here are only used to explain the application, and are not used to limit the application.
本申请实施例提供一种联邦学习建模方法，在本申请联邦学习建模方法的第一实施例中，所述联邦学习建模方法应用于第一设备，参照图1，所述联邦学习建模方法包括：The embodiment of the present application provides a federated learning modeling method. In the first embodiment of the federated learning modeling method of the present application, the federated learning modeling method is applied to a first device. Referring to FIG. 1, the federated learning modeling method includes:
步骤S10,接收各第二设备发送的加密模型参数和所述加密模型参数对应的验证参数;Step S10, receiving encrypted model parameters sent by each second device and verification parameters corresponding to the encrypted model parameters;
在本实施例中，需要说明的是，在进行联邦学习建模之前，所述第一设备与各所述第二设备进行协商交互，确定标准验证挑战参数，其中，所述标准验证挑战参数的数量可在协商交互时确定。所述第一设备为进行联邦学习建模的协调方，用于协调各所述第二设备进行联邦学习建模，所述第二设备为进行联邦学习建模的参与方，所述加密模型参数为基于同态加密算法进行加密的模型训练参数，例如，假设第二设备持有的参与方公钥为P，用于进行同态加密的第一验证随机参数为r₁，基于同态加密算法对模型训练参数m进行加密后的加密模型参数为 h_m = Enc(P, m, r₁)，其中，Enc为同态加密运算。在一实施例中，所述模型训练参数为所述第二设备的本地训练模型的模型参数，例如，假设所述本地训练模型为线性模型，表达式为 Y = β₀ + β₁x₁ + β₂x₂ + … + βₙxₙ，则所述模型参数为向量 (β₀, β₁, β₂, …, βₙ)。In this embodiment, it should be noted that before federated learning modeling is performed, the first device negotiates with each of the second devices to determine standard verification challenge parameters, where the number of standard verification challenge parameters can be determined during the negotiation. The first device is the coordinator of the federated learning modeling and coordinates each of the second devices to perform federated learning modeling; each second device is a participant in the federated learning modeling. The encryption model parameters are model training parameters encrypted based on a homomorphic encryption algorithm. For example, if the participant public key held by the second device is P and the first verification random parameter used for homomorphic encryption is r₁, then the encryption model parameter obtained by encrypting the model training parameter m is h_m = Enc(P, m, r₁), where Enc denotes the homomorphic encryption operation. In an embodiment, the model training parameters are the model parameters of the local training model of the second device. For example, if the local training model is a linear model with expression Y = β₀ + β₁x₁ + β₂x₂ + … + βₙxₙ, the model parameters are the vector (β₀, β₁, β₂, …, βₙ).
另外地，需要说明的是，所述验证参数为用于进行零知识证明的参数，其中，所述验证参数包括第二验证随机参数和验证模型参数，其中，所述验证模型参数为所述第二设备基于模型训练参数和验证挑战参数生成的，例如，假设所述验证挑战参数包括参数x₁和x₂，所述模型训练参数为m，则所述验证模型参数为 n = m·x₁ + m·x₂。在一实施例中，所述第二验证随机参数为所述第二设备基于所述第一验证随机参数和所述验证挑战参数生成的，例如，假设所述第一验证随机参数为r₁，所述验证挑战参数包括参数x₁和参数x₂，则所述第二验证随机参数为 r₂ = r₁^(x₁+x₂)。In addition, it should be noted that the verification parameters are parameters used for zero-knowledge proof, and include a second verification random parameter and a verification model parameter. The verification model parameter is generated by the second device based on the model training parameters and the verification challenge parameters. For example, if the verification challenge parameters include parameters x₁ and x₂ and the model training parameter is m, then the verification model parameter is n = m·x₁ + m·x₂. In an embodiment, the second verification random parameter is generated by the second device based on the first verification random parameter and the verification challenge parameters. For example, if the first verification random parameter is r₁ and the verification challenge parameters include parameters x₁ and x₂, then the second verification random parameter is r₂ = r₁^(x₁+x₂).
另外地，需要说明的是，各所述参与方中的恶意参与方可通过更改对模型训练参数进行同态加密时的加密参数，以达到生成虚假加密模型参数的目的，而由于协调方在进行联邦建模时，通常直接对各所述第二设备发送的加密模型参数进行聚合处理，获得聚合参数，若各所述加密模型参数中存在虚假加密模型参数，将影响联邦模型训练的效率和精确度。例如，假设第二设备A发送的加密模型参数为5a，当第二设备B为恶意参与方时，其发送的虚假加密模型参数为100b，所述聚合处理为加权平均，则聚合处理后获得的第一聚合参数为(5a+100b)/2；当第二设备B不为恶意参与方时，其发送的加密模型参数为5b，则聚合处理后获得的第二聚合参数为(5a+5b)/2。由此可见，在存在恶意参与方时获得的第一聚合参数和在无恶意参与方时获得的第二聚合参数的差距极大，进而若各所述第二设备中存在恶意参与方，将极大程度上影响联邦模型训练的效率和精确度。In addition, it should be noted that a malicious participant among the participants can change the encryption parameters used when homomorphically encrypting the model training parameters in order to generate false encryption model parameters. Since the coordinator, when performing federated modeling, usually directly aggregates the encryption model parameters sent by each second device to obtain the aggregation parameter, any false encryption model parameters among them will affect the efficiency and accuracy of federated model training. For example, suppose the encryption model parameter sent by second device A is 5a; if second device B is a malicious participant, the false encryption model parameter it sends may be 100b, and if the aggregation processing is a weighted average, the first aggregation parameter obtained is (5a+100b)/2. If second device B is not a malicious participant and sends 5b, the second aggregation parameter obtained is (5a+5b)/2. It can be seen that the gap between the aggregation parameter obtained in the presence of a malicious participant and the one obtained without any malicious participant is extremely large, so malicious participants among the second devices greatly affect the efficiency and accuracy of federated model training.
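The aggregation gap described above can be reproduced with plain numbers. This is a toy illustration only (not part of the disclosure), taking the symbolic units a and b of the example as equal to 1:

```python
# Toy illustration of the aggregation gap caused by one malicious participant,
# using the concrete values from the text with a = b = 1.
honest_A = 5     # 5a sent by second device A
honest_B = 5     # 5b sent by an honest second device B
fake_B = 100     # 100b sent by second device B when it is malicious

def aggregate(updates):
    """Equal-weight weighted average, as in the example above."""
    return sum(updates) / len(updates)

with_malicious = aggregate([honest_A, fake_B])       # (5a + 100b) / 2
without_malicious = aggregate([honest_A, honest_B])  # (5a + 5b) / 2

assert with_malicious == 52.5
assert without_malicious == 5.0
print("gap introduced by the malicious update:", with_malicious - without_malicious)
```

A single poisoned update shifts the aggregate by an order of magnitude, which is why the verification step below matters.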
另外地,需要说明的是,本实施例中所述第一设备和第二设备均基于同态加密算法进行加密,其中,在一种可实施的方案中,所述同态加密算法应满足以下性质:In addition, it should be noted that the first device and the second device in this embodiment both perform encryption based on a homomorphic encryption algorithm, where, in an implementable solution, the homomorphic encryption algorithm should satisfy the following nature:
C=Enc(PK,m,r)，且对于 C₁=Enc(PK,m₁,r₁) 和 C₂=Enc(PK,m₂,r₂)，满足：C = Enc(PK, m, r), and for C₁ = Enc(PK, m₁, r₁) and C₂ = Enc(PK, m₂, r₂), the following holds:

C₁ ⊗ C₂ = Enc(PK, m₁+m₂, r₁·r₂)

其中，⊗为密文上的同态运算，C、C₁和C₂均为待加密参数加密之后的密文，PK为加密的密钥，m、m₁和m₂为待加密参数，r、r₁和r₂为加密所需的随机数。Among them, ⊗ is the homomorphic operation on ciphertexts, C, C₁ and C₂ are the ciphertexts obtained by encrypting the parameters to be encrypted, PK is the encryption key, m, m₁ and m₂ are the parameters to be encrypted, and r, r₁ and r₂ are the random numbers required for encryption.
步骤S20,基于各所述验证参数,分别对各所述加密模型参数进行零知识验证,以在各所述加密模型参数中确定虚假加密模型参数,获得零知识验证结果;Step S20, performing zero-knowledge verification on each of the encryption model parameters based on each of the verification parameters, so as to determine a false encryption model parameter in each of the encryption model parameters, and obtain a zero-knowledge verification result;
在本实施例中，基于各所述验证参数，分别对各所述加密模型参数进行零知识验证，以在各所述加密模型参数中确定虚假加密模型参数，获得零知识验证结果。具体地，基于各所述验证参数，分别计算各所述加密模型参数对应的第一零知识证明结果和第二零知识证明结果，进而基于各所述加密模型参数对应的第一零知识证明结果和第二零知识证明结果，分别验证各所述加密模型参数是否为虚假加密模型参数，以在各所述加密模型参数中确定虚假加密模型参数，获得零知识验证结果，其中，所述零知识验证结果为各所述加密模型参数是否为虚假加密模型参数的验证结果，所述虚假加密模型参数为恶意参与方进行了恶意加密的模型训练参数，例如，通过更改进行同态加密时的加密参数，以实现恶意加密。In this embodiment, based on each of the verification parameters, zero-knowledge verification is performed on each of the encryption model parameters to determine false encryption model parameters among them and obtain a zero-knowledge verification result. Specifically, based on each of the verification parameters, the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each encryption model parameter are calculated; then, based on these two proof results, it is verified whether each encryption model parameter is a false encryption model parameter, so as to determine the false encryption model parameters and obtain the zero-knowledge verification result. The zero-knowledge verification result indicates, for each encryption model parameter, whether it is a false encryption model parameter; a false encryption model parameter is a model training parameter that a malicious participant has maliciously encrypted, for example by changing the encryption parameters used when performing homomorphic encryption.
其中,所述基于各所述验证参数,分别对各所述加密模型参数进行零知识验证,以在各所述加密模型参数中确定虚假加密模型参数,获得零知识验证结果的步骤包括:Wherein, the step of performing zero-knowledge verification on each of the encryption model parameters based on each of the verification parameters to determine a false encryption model parameter in each of the encryption model parameters, and obtaining a zero-knowledge verification result includes:
步骤S21,分别计算各所述验证参数对应的第一零知识证明结果和第二零知识证明结果;Step S21, respectively calculating the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each verification parameter;
在本实施例中,需要说明的是,所述验证参数包括验证模型参数和第二验证随机参数。In this embodiment, it should be noted that the verification parameters include verification model parameters and second verification random parameters.
分别计算各所述验证参数对应的第一零知识证明结果和第二零知识证明结果,具体地,对于每一所 述验证参数均执行以下步骤:The first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each verification parameter are respectively calculated. Specifically, the following steps are performed for each verification parameter:
基于预设验证挑战参数，对所述加密模型参数进行幂操作与同态加和运算，获得第一零知识证明结果，并基于预设协调方公钥、第二验证随机参数，对所述验证模型参数进行同态加密运算，获得第二零知识证明结果。例如，假设预设验证挑战参数为x₁和x₂，加密模型参数为 h_m = Enc(P₁, m, r₁)，其中，P₁为参与方公钥，r₁为第一验证随机参数，m为模型训练参数，且验证模型参数 n = m·x₁ + m·x₂，第二验证随机参数 r₂ = r₁^(x₁+x₂)，进而第一零知识证明结果为

h_m^(x₁) ⊗ h_m^(x₂) = Enc(P₁, m·x₁ + m·x₂, r₁^(x₁+x₂))

所述第二零知识证明结果为

Enc(P₂, n, r₂)

其中，P₂为协调方公钥，且若所述第二设备未进行恶意修改，则所述参与方公钥和协调方公钥应当一致，此时第一零知识证明结果与第二零知识证明结果相等。Based on the preset verification challenge parameters, an exponentiation operation and a homomorphic addition operation are performed on the encryption model parameter to obtain the first zero-knowledge proof result, and based on the preset coordinator public key and the second verification random parameter, a homomorphic encryption operation is performed on the verification model parameter to obtain the second zero-knowledge proof result. For example, suppose the preset verification challenge parameters are x₁ and x₂, and the encryption model parameter is h_m = Enc(P₁, m, r₁), where P₁ is the participant public key, r₁ is the first verification random parameter, and m is the model training parameter; the verification model parameter is n = m·x₁ + m·x₂ and the second verification random parameter is r₂ = r₁^(x₁+x₂). The first zero-knowledge proof result is then h_m^(x₁) ⊗ h_m^(x₂) = Enc(P₁, m·x₁ + m·x₂, r₁^(x₁+x₂)), and the second zero-knowledge proof result is Enc(P₂, n, r₂), where P₂ is the coordinator public key. If the second device has not performed malicious modification, the participant public key and the coordinator public key should be consistent, in which case the first and second zero-knowledge proof results are equal.
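The zero-knowledge check described above can be exercised end to end with a small sketch. This is not part of the original disclosure: it assumes a Paillier-style Enc satisfying C₁·C₂ = Enc(m₁+m₂, r₁·r₂) and Cˣ = Enc(m·x, rˣ) modulo n², uses a single shared toy modulus standing in for both the participant public key P₁ and the coordinator public key P₂, and all concrete values are illustrative:

```python
# Toy Paillier parameters (insecure, demonstration only). The modulus n_mod
# stands in for both public keys P1 and P2, which must match for honest parties.
p, q = 293, 433
n_mod = p * q
n2 = n_mod * n_mod

def enc(m, r):
    """Enc(PK, m, r) = (1 + n)^m * r^n mod n^2."""
    return (pow(n_mod + 1, m, n2) * pow(r, n_mod, n2)) % n2

# --- participant (second device) side ---
m = 7                        # model training parameter
r1 = 17                      # first verification random parameter
x1, x2 = 3, 5                # preset verification challenge parameters
h_m = enc(m, r1)             # encryption model parameter h_m = Enc(P1, m, r1)
n_v = m * x1 + m * x2        # verification model parameter n = m*x1 + m*x2
r2 = pow(r1, x1 + x2, n2)    # second verification random parameter r2 = r1^(x1+x2)

# --- coordinator (first device) side ---
# first zero-knowledge proof result: exponentiation + homomorphic addition of h_m
first = (pow(h_m, x1, n2) * pow(h_m, x2, n2)) % n2
# second zero-knowledge proof result: encrypt n with the coordinator key and r2
second = enc(n_v, r2)

assert first == second       # honest participant: the two results coincide

# A participant that submits a ciphertext of a different value while keeping the
# same proof parameters is caught, because the two results no longer match.
fake_h = enc(9999, r1)
assert (pow(fake_h, x1, n2) * pow(fake_h, x2, n2)) % n2 != second
```

The equality holds because both sides encrypt m·(x₁+x₂) under the randomness r₁^(x₁+x₂); any tampering with the ciphertext or the keys breaks it.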
步骤S22,基于各所述第一零知识证明结果和各所述第二零知识证明结果,分别验证各所述加密模型参数是否为虚假加密模型参数,获得零知识验证结果。Step S22, based on each of the first zero-knowledge certification results and each of the second zero-knowledge certification results, respectively verify whether each encryption model parameter is a false encryption model parameter, and obtain a zero-knowledge verification result.
在本实施例中，基于各所述第一零知识证明结果和各所述第二零知识证明结果，分别验证各所述加密模型参数是否为虚假加密模型参数，获得零知识验证结果。具体地，对于每一所述验证参数对应的第一零知识证明结果和第二零知识证明结果均执行以下步骤：In this embodiment, based on each of the first zero-knowledge proof results and each of the second zero-knowledge proof results, it is verified whether each encryption model parameter is a false encryption model parameter, and the zero-knowledge verification result is obtained. Specifically, the following steps are performed for the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each verification parameter:
将所述第一零知识证明结果与所述第二零知识证明结果进行比对,判断所述第一零知识证明结果与所述第二零知识证明结果是否一致,若所述第一零知识证明结果与所述第二零知识证明结果一致,则判定所述第二设备在对模型训练参数进行同态加密时,并未进行恶意修改,也即,所述加密模型参数不是虚假加密模型参数,若所述第一零知识证明结果与所述第二零知识证明结果不一致,则判定所述第二设备在对模型训练参数进行同态加密时,进行了恶意修改,也即,所述加密模型参数是虚假加密模型参数。The first zero-knowledge proof result is compared with the second zero-knowledge proof result to determine whether the first zero-knowledge proof result is consistent with the second zero-knowledge proof result, if the first zero-knowledge proof result If the proof result is consistent with the second zero-knowledge proof result, it is determined that when the second device homomorphically encrypts the model training parameters, it has not been maliciously modified, that is, the encrypted model parameters are not false encrypted model parameters If the first zero-knowledge proof result is inconsistent with the second zero-knowledge proof result, it is determined that the second device performed malicious modification when homomorphic encryption of the model training parameters, that is, the encryption The model parameters are fake encrypted model parameters.
其中,所述基于各所述第一零知识证明结果和各所述第二零知识证明结果,分别验证各所述加密模型参数是否为虚假加密模型参数的步骤包括:Wherein, the step of separately verifying whether each encryption model parameter is a false encryption model parameter based on each of the first zero-knowledge certification results and each of the second zero-knowledge certification results includes:
步骤S221,将各所述加密模型参数对应的所述第一零知识证明结果与所述第二零知识证明结果分别进行对比;Step S221, comparing the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each encryption model parameter respectively;
在本实施例中，将各所述加密模型参数对应的所述第一零知识证明结果与所述第二零知识证明结果分别进行对比，具体地，分别计算每一所述加密模型参数对应的所述第一零知识证明结果与所述第二零知识证明结果之间的差值。In this embodiment, the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each encryption model parameter are compared; specifically, for each encryption model parameter, the difference between its first zero-knowledge proof result and its second zero-knowledge proof result is calculated.
步骤S222,若所述加密模型参数对应的所述第一零知识证明结果和所述第二零知识证明结果不一致,则判定所述加密模型参数为所述虚假加密模型参数;Step S222: If the first zero-knowledge proof result corresponding to the encryption model parameter is inconsistent with the second zero-knowledge proof result, determine that the encryption model parameter is the fake encryption model parameter;
在本实施例中，若所述加密模型参数对应的所述第一零知识证明结果和所述第二零知识证明结果不一致，则判定所述加密模型参数为所述虚假加密模型参数。具体地，若所述差值不为0，则判定所述加密模型参数对应的第二设备对模型训练参数进行了恶意加密，进而判定所述加密模型参数是虚假加密模型参数，进而为所述加密模型参数赋予虚假标识，以标识所述加密模型参数为虚假加密模型参数。In this embodiment, if the first zero-knowledge proof result corresponding to the encryption model parameter is inconsistent with the second zero-knowledge proof result, the encryption model parameter is determined to be the false encryption model parameter. Specifically, if the difference is not 0, it is determined that the second device corresponding to the encryption model parameter has maliciously encrypted the model training parameters, so the encryption model parameter is determined to be a false encryption model parameter and is given a false identifier marking it as such.
步骤S223,若所述加密模型参数对应的所述第一零知识证明结果和所述第二零知识证明结果一致,则判定所述加密模型参数不为所述虚假加密模型参数。Step S223: If the first zero-knowledge proof result corresponding to the encryption model parameter is consistent with the second zero-knowledge proof result, it is determined that the encryption model parameter is not the false encryption model parameter.
在本实施例中，若所述加密模型参数对应的所述第一零知识证明结果和所述第二零知识证明结果一致，则判定所述加密模型参数不为所述虚假加密模型参数。具体地，若所述差值为0，则判定所述加密模型参数对应的第二设备未对模型训练参数进行恶意加密，进而判定所述加密模型参数不是虚假加密模型参数，进而为所述加密模型参数赋予可信标识，以标识所述加密模型参数为可信模型参数。In this embodiment, if the first zero-knowledge proof result corresponding to the encryption model parameter is consistent with the second zero-knowledge proof result, the encryption model parameter is determined not to be the false encryption model parameter. Specifically, if the difference is 0, it is determined that the second device corresponding to the encryption model parameter has not maliciously encrypted the model training parameters, so the encryption model parameter is determined not to be a false encryption model parameter and is given a trusted identifier marking it as a trusted model parameter.
步骤S30,基于所述零知识验证结果和各所述加密模型参数,协调各所述第二设备进行联邦学习建模。Step S30, based on the zero-knowledge verification result and each of the encryption model parameters, coordinate each of the second devices to perform federated learning modeling.
在本实施例中,需要说明的是,所述零知识验证结果包括各所述加密模型参数对应的标定标识,其中,所述标定标识为标识所述加密模型参数是否为虚假加密模型参数的标识。In this embodiment, it should be noted that the zero-knowledge verification result includes a calibration identifier corresponding to each of the encryption model parameters, where the calibration identifier is an identifier that identifies whether the encryption model parameter is a false encryption model parameter. .
基于所述零知识验证结果和各所述加密模型参数，协调各所述第二设备进行联邦学习建模。具体地，基于各所述标定标识，在各所述加密模型参数中剔除虚假加密模型参数，获得各可信模型参数，进而对各所述可信模型参数进行聚合处理，获得聚合参数，进而基于所述聚合参数，协调各所述第二设备进行联邦学习建模。Based on the zero-knowledge verification result and each of the encryption model parameters, each of the second devices is coordinated to perform federated learning modeling. Specifically, based on each calibration identifier, the false encryption model parameters are removed from the encryption model parameters to obtain the trusted model parameters; the trusted model parameters are then aggregated to obtain the aggregation parameter, and based on the aggregation parameter, each of the second devices is coordinated to perform federated learning modeling.
其中,所述基于所述零知识验证结果和各所述加密模型参数,协调各所述第二设备进行联邦学习建模的步骤包括:Wherein, the step of coordinating each of the second devices to perform federated learning modeling based on the zero-knowledge verification result and each of the encryption model parameters includes:
步骤S31,基于所述零知识验证结果,在各所述加密模型参数中剔除所述虚假加密模型参数,获得各可信模型参数;Step S31, based on the zero-knowledge verification result, remove the false encryption model parameters from each encryption model parameter to obtain each trusted model parameter;
在本实施例中，需要说明的是，协调方在发现虚假加密模型参数后，可以按照预设激励机制处罚所述虚假加密模型参数对应的恶意参与方，或者取消恶意参与方后续参与联邦学习建模的资格。In this embodiment, it should be noted that, after discovering a false encryption model parameter, the coordinator may punish the malicious participant corresponding to the false encryption model parameter according to a preset incentive mechanism, or disqualify the malicious participant from subsequent participation in federated learning modeling.
步骤S32,对各所述可信模型参数进行聚合处理,获得聚合参数,并将所述聚合参数分别反馈至各所述第二设备,以供各所述第二设备更新各自的本地训练模型,直至所述本地训练模型达到预设训练结束条件。Step S32, performing aggregation processing on each of the trusted model parameters to obtain aggregation parameters, and feeding back the aggregation parameters to each of the second devices, so that each of the second devices can update their respective local training models, Until the local training model reaches the preset training end condition.
在本实施例中，对各所述可信模型参数进行聚合处理，获得聚合参数，并将所述聚合参数分别反馈至各所述第二设备，以供各所述第二设备更新各自的本地训练模型，直至所述本地训练模型达到预设训练结束条件。具体地，基于预设聚合处理规则，对各所述可信模型参数进行聚合处理，获得聚合参数，其中，所述预设聚合处理规则包括加权求平均、求和等。在一实施例中，将各所述聚合参数分别发送至各所述第二设备，以供各所述第二设备基于所述参与方公钥对应的参与方私钥，对所述聚合参数进行解密，获得解密聚合参数，并基于解密聚合参数，对己方持有的本地训练模型进行更新，获得更新后的本地训练模型，并判断更新后的本地训练模型是否达到预设训练结束条件。若更新后的本地训练模型达到预设训练结束条件，则判定完成所述联邦学习建模的任务；若更新后的本地训练模型未达到预设训练结束条件，则重新对所述本地训练模型进行迭代训练，直至所述本地模型达到预设迭代次数阈值，则重新获取所述本地训练模型的模型训练参数，并将所述模型训练参数重新加密发送至所述协调方，以重新进行联邦学习，直至所述本地训练模型达到预设训练结束条件，其中，所述训练结束条件包括达到预设最大迭代次数、所述本地训练模型对应的损失函数收敛等。In this embodiment, each trusted model parameter is aggregated to obtain the aggregation parameter, and the aggregation parameter is fed back to each second device so that each second device updates its own local training model until the local training model reaches the preset training end condition. Specifically, based on a preset aggregation processing rule, each trusted model parameter is aggregated to obtain the aggregation parameter, where the preset aggregation processing rule includes weighted averaging, summation, and the like. In an embodiment, the aggregation parameter is sent to each second device, so that each second device decrypts the aggregation parameter with the participant private key corresponding to the participant public key to obtain the decrypted aggregation parameter, updates the local training model it holds based on the decrypted aggregation parameter, and determines whether the updated local training model reaches the preset training end condition. If the updated local training model reaches the preset training end condition, the task of federated learning modeling is determined to be complete; if not, the local training model is iteratively trained again until it reaches the preset iteration-count threshold, the model training parameters of the local training model are then reacquired, re-encrypted and sent to the coordinator for renewed federated learning, until the local training model reaches the preset training end condition, where the training end condition includes reaching a preset maximum number of iterations, convergence of the loss function corresponding to the local training model, and the like.
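The aggregation and update loop described above can be sketched in plain numbers. This is illustrative only: in the actual scheme the parameters remain homomorphically encrypted until each second device decrypts the aggregation parameter with its participant private key, and the values and toy update rule below are hypothetical:

```python
# Plaintext sketch of steps S31/S32: drop parameters tagged as fake by the
# zero-knowledge check, aggregate the trusted ones, then run a local update
# loop that stops on the preset end conditions.
def aggregate_trusted(tagged_params):
    """Weighted average (one of the preset aggregation rules named in the text)
    over parameters not tagged as fake."""
    trusted = [(value, weight) for value, tag, weight in tagged_params if tag != "fake"]
    total_weight = sum(weight for _, weight in trusted)
    return sum(value * weight for value, weight in trusted) / total_weight

# Three participants; the zero-knowledge check tagged the third update as fake.
tagged = [(0.50, "trusted", 1.0), (0.54, "trusted", 1.0), (100.0, "fake", 1.0)]
agg = aggregate_trusted(tagged)
assert abs(agg - 0.52) < 1e-12

# Local update loop of a second device: stop on the preset end conditions
# (maximum number of iterations, or convergence of the update).
param, max_rounds, eps = 0.0, 50, 1e-6
for _ in range(max_rounds):
    new_param = 0.5 * (param + agg)    # hypothetical stand-in for the model update
    converged = abs(new_param - param) < eps
    param = new_param
    if converged:                      # convergence-style training end condition
        break
assert abs(param - agg) < 1e-3
```

With the fake update removed, the aggregate stays near the honest values (0.52) instead of being dragged toward 100.0, and the local loop converges to it.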
In one embodiment, the local training model includes a risk control model, where the risk control model is a machine learning model used to assess a user's loan risk. When a malicious participant exists among the participants, aggregating the false encrypted model parameters sent by the malicious participant together with the encrypted model parameters sent by the normal participants yields erroneous aggregation parameters that deviate greatly from the accurate aggregation parameters; each second device then updates the risk control model based on these erroneous aggregation parameters, which reduces the accuracy of the risk control model's loan risk assessment. With the federated learning modeling method of the present application, however, the false encrypted model parameters can be screened out and removed from the encrypted model parameters sent by the participants to obtain trusted encrypted model parameters, so that throughout the federated learning modeling process the risk control model is always updated with aggregation parameters obtained by aggregating trusted encrypted model parameters. This makes the risk control model's assessment of user loan risk more accurate, that is, it improves the loan risk assessment accuracy of the risk control model.
In this embodiment, the encrypted model parameters sent by each second device and the verification parameters corresponding to the encrypted model parameters are received; based on the verification parameters, zero-knowledge verification is performed on each encrypted model parameter to identify false encrypted model parameters among them and obtain a zero-knowledge verification result; and based on the zero-knowledge verification result, the second devices are coordinated to perform federated learning modeling. In one embodiment, based on the zero-knowledge verification result, the false encrypted model parameters can be removed from the encrypted model parameters so as to coordinate the second devices in federated learning modeling. That is, this embodiment provides a method for identifying false encrypted model parameters based on zero-knowledge proof, so that when a malicious participant provides false encrypted model parameters during training, they can be accurately identified and removed. This avoids performing federated learning modeling on encrypted model parameters mixed with false ones, improves the overall quality of the federated model obtained through federated learning modeling, and thereby improves the efficiency and accuracy of federated learning modeling, solving the technical problem of low efficiency and poor accuracy in federated learning modeling.
In one embodiment, referring to FIG. 2, based on the first embodiment of the present application, in another embodiment of the present application the verification parameters include verification model parameters and verification random parameters.
The step of separately calculating the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each of the verification parameters includes:
Step S211: performing legality verification on each of the encrypted model parameters based on preset verification challenge parameters to obtain each first zero-knowledge verification result.
In this embodiment, it should be noted that the preset verification challenge parameters include a first verification challenge parameter and a second verification challenge parameter, and the encrypted model parameters include a first encrypted model parameter and a second encrypted model parameter, where the first encrypted model parameter is obtained by the second device homomorphically encrypting the previous model parameter, and the second encrypted model parameter is obtained by the second device homomorphically encrypting the current model parameter. The current model parameter is the model parameter extracted when the local training model reaches the preset training iteration threshold in the current round of federation, and the previous model parameter is based on the model parameters of the federation rounds preceding the current round; for example, the historical model parameters of the previous three rounds of federation are taken and weighted-averaged to obtain the previous model parameter. For example, assume the current model parameter is m and the previous model parameter is m₀; then the first encrypted model parameter is h₀ = Enc(P, m₀, r₁), where P is the participant public key and r₁ is the first verification random parameter, and the second encrypted model parameter is hₘ = Enc(P, m, r₂), where r₂ is the second verification random parameter.
Legality verification is performed on each encrypted model parameter based on the preset verification challenge parameters to obtain each first zero-knowledge verification result. Specifically, the following steps are performed for each encrypted model parameter:
Based on the first verification challenge parameter and the second verification challenge parameter, a power operation is performed on the first encrypted model parameter and the second encrypted model parameter respectively, and the two results are multiplied to obtain the first zero-knowledge verification result. For example, assume the first encrypted model parameter is h₀ = Enc(P, m₀, r₁), the second encrypted model parameter is hₘ = Enc(P, m, r₂), the first verification challenge parameter is x₁, and the second verification challenge parameter is x₂; then the first zero-knowledge proof result is

c₁ = h₀^x₁ · hₘ^x₂

and, by the properties of the homomorphic encryption algorithm,

c₁ = h₀^x₁ · hₘ^x₂ = Enc(P, m₀·x₁ + m·x₂, r₁^x₁ · r₂^x₂)

where P is the participant public key, x₁ is the first verification challenge parameter, x₂ is the second verification challenge parameter, r₁ is the first verification random parameter, r₂ is the second verification random parameter, m is the current model parameter, and m₀ is the previous model parameter.
The preset verification challenge parameters include a first verification challenge parameter and a second verification challenge parameter.
The step of performing legality verification on each of the encrypted model parameters based on the preset verification challenge parameters to obtain each first zero-knowledge verification result includes:
Step A10: performing a power operation with the first verification challenge parameter on each of the encrypted model parameters to obtain a first power operation result corresponding to each encrypted model parameter.
In this embodiment, a power operation is performed with the first verification challenge parameter on each of the encrypted model parameters to obtain the first power operation result corresponding to each encrypted model parameter. Specifically, for each encrypted model parameter the following step is performed: based on the first verification challenge parameter, a power operation is performed on the first encrypted model parameter to obtain the first power operation result. For example, assuming the first verification challenge parameter is x and the first encrypted model parameter is h, the first power operation result is h^x.
Step A20: performing a power operation with the second verification challenge parameter on each of the encrypted model parameters to obtain a second power operation result corresponding to each encrypted model parameter.
In this embodiment, a power operation is performed with the second verification challenge parameter on each of the encrypted model parameters to obtain the second power operation result corresponding to each encrypted model parameter. Specifically, for each encrypted model parameter the following step is performed: based on the second verification challenge parameter, a power operation is performed on the second encrypted model parameter to obtain the second power operation result.
Step A30: generating each first zero-knowledge verification result based on each first power operation result and each second power operation result.
In this embodiment, each first zero-knowledge verification result is generated based on each first power operation result and each second power operation result. Specifically, the product of the first power operation result and the second power operation result is computed, and the product is taken as the first zero-knowledge verification result.
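Steps A10 to A30 can be sketched with a toy Paillier setup (tiny, insecure example primes; all concrete values are illustrative assumptions):

```python
# Toy Paillier: Enc(P, m, r) = g^m * r^n mod n^2, with deliberately tiny primes.
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1

def enc(m, r):
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

m0, m, r1, r2 = 7, 12, 2, 3        # example model parameters and random values
x1, x2 = 4, 5                      # first and second verification challenge parameters
h0, hm = enc(m0, r1), enc(m, r2)   # the two encrypted model parameters

first_pow = pow(h0, x1, n2)        # step A10: first power operation result
second_pow = pow(hm, x2, n2)       # step A20: second power operation result
first_zk = (first_pow * second_pow) % n2   # step A30: product of the two results

# Homomorphic identity: h0^x1 * hm^x2 = Enc(P, m0*x1 + m*x2, r1^x1 * r2^x2)
assert first_zk == enc(m0 * x1 + m * x2, pow(r1, x1) * pow(r2, x2))
```

The assertion checks that the product of the two power operation results is exactly an encryption of m₀·x₁ + m·x₂ under the combined random value, which is what the second zero-knowledge result is later compared against.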
Step S212: encrypting each of the verification model parameters based on the preset coordinator public key and each of the verification random parameters to obtain each second zero-knowledge verification result.
In this embodiment, it should be noted that each participant is a legal participant, so the participant public key is consistent with the preset coordinator public key. The verification random parameters include a third verification random parameter, which is calculated from the first verification random parameter, the second verification random parameter, the first verification challenge parameter, and the second verification challenge parameter. For example, assuming the first verification random parameter is r₁, the second verification random parameter is r₂, the first verification challenge parameter is x₁, and the second verification challenge parameter is x₂, the third verification random parameter is

r₃ = r₁^x₁ · r₂^x₂
In addition, the verification model parameter is calculated from the first verification challenge parameter, the second verification challenge parameter, the current model parameter, and the previous model parameter. For example, assuming the first verification challenge parameter is x₁, the second verification challenge parameter is x₂, the current model parameter is m, and the previous model parameter is m₀, the verification model parameter is n = m₀·x₁ + m·x₂.
Each verification model parameter is encrypted based on the preset coordinator public key and the verification random parameters to obtain each second zero-knowledge verification result. Specifically, the following step is performed for each verification model parameter:
Based on the preset coordinator public key and the third verification random parameter, the verification model parameter is homomorphically encrypted to obtain the second zero-knowledge verification result. For example, assume the third verification random parameter is

r₃ = r₁^x₁ · r₂^x₂

where r₁ is the first verification random parameter, r₂ is the second verification random parameter, x₁ is the first verification challenge parameter, x₂ is the second verification challenge parameter, the verification model parameter is n = m₀·x₁ + m·x₂, and the preset coordinator public key is P; then the second zero-knowledge proof result is

c₂ = Enc(P, n, r₃) = Enc(P, m₀·x₁ + m·x₂, r₁^x₁ · r₂^x₂)
In one embodiment, if the participant has not maliciously encrypted the encrypted model parameters (for example, by maliciously tampering with the encryption algorithm or with the parameters being encrypted), the first zero-knowledge proof result equals the second zero-knowledge proof result; that is, the encrypted model parameter provided by the participant to the coordinator is a trusted model parameter. If the participant has maliciously encrypted the encrypted model parameter, the first zero-knowledge proof result differs from the second zero-knowledge proof result; that is, the encrypted model parameter provided by the participant to the coordinator is a false encrypted model parameter.
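The accept/reject logic just described can be sketched end to end with a toy Paillier setup (tiny, insecure example primes; all concrete values are illustrative assumptions): an honestly formed proof makes the two results equal, while a tampered ciphertext makes them differ.

```python
# Toy Paillier: Enc(P, m, r) = g^m * r^n mod n^2, deliberately tiny primes.
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1

def enc(m, r):
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

m0, m, r1, r2, x1, x2 = 7, 12, 2, 3, 4, 5
h0, hm = enc(m0, r1), enc(m, r2)

# Participant side: verification model parameter and third verification random parameter.
n_ver = m0 * x1 + m * x2
r3 = (pow(r1, x1, n2) * pow(r2, x2, n2)) % n2

# Coordinator side: first result from the ciphertexts, second from (n_ver, r3).
first_zk = (pow(h0, x1, n2) * pow(hm, x2, n2)) % n2
second_zk = enc(n_ver, r3)
honest_ok = (first_zk == second_zk)      # True: trusted model parameter

# A malicious participant alters a ciphertext; the results no longer match.
tampered = (hm * g) % n2                 # shifts the encrypted value by 1
bad_zk = (pow(h0, x1, n2) * pow(tampered, x2, n2)) % n2
malicious_ok = (bad_zk == second_zk)     # False: flagged as a false parameter
```

The coordinator never learns m or m₀ from this check; it only compares the two ciphertext-level results.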
In this embodiment, legality verification is performed on each encrypted model parameter based on the preset verification challenge parameters to obtain each first zero-knowledge verification result, and then each verification model parameter is encrypted based on the preset coordinator public key and the verification random parameters to obtain each second zero-knowledge verification result. That is, this embodiment provides a method for calculating the first zero-knowledge proof result and the second zero-knowledge proof result; after both are computed, simply comparing them determines whether an encrypted model parameter is a false encrypted model parameter. This lays the foundation for identifying false encrypted model parameters among the encrypted model parameters, and thereby for solving the technical problem of low efficiency and poor accuracy in federated learning modeling.
In one embodiment, referring to FIG. 3, based on the first and second embodiments of the present application, in another embodiment of the present application the federated learning modeling method is applied to a second device, and the federated learning modeling method includes:
Step B10: acquiring model training parameters and a first verification random parameter, and encrypting the model training parameters based on the first verification random parameter and a preset public key to obtain encrypted model parameters.
In this embodiment, the federated learning modeling includes at least one round of federation. In each round, the second device iteratively trains the local training model until the preset iteration threshold is reached, then sends the model parameters of the local training model to the first device, receives the aggregation parameters fed back by the first device based on those model parameters, updates the local training model based on the aggregation parameters, and uses the updated local training model as the initial model of the next round of federation, until the local training model reaches a preset training end condition, where the preset training end condition includes reaching the maximum number of iterations, convergence of the loss function, and the like.
Model training parameters and a first verification random parameter are acquired, and the model training parameters are encrypted based on the first verification random parameter and the preset public key to obtain encrypted model parameters. Specifically, when the local training model reaches the preset iteration threshold, the model parameters of the local training model are extracted as the model training parameters and the first verification random parameter is acquired; then, based on the first verification random parameter and the preset public key, the model training parameters are homomorphically encrypted to obtain the encrypted model parameters. For example, assuming the model training parameter is m, the first verification random parameter is r₁, and the preset public key is P, the encrypted model parameter is hₘ = Enc(P, m, r₁), where Enc denotes homomorphic encryption.
The model training parameters include current model parameters and auxiliary model parameters.
The step of acquiring the model training parameters includes:
Step B11: iteratively training the local training model corresponding to the model training parameters until the local training model reaches the preset iteration threshold, and acquiring the current model parameters of the local training model.
In this embodiment, it should be noted that the current model parameters are the model parameters of the current round of iteration when the local training model reaches the preset iteration threshold in the current round of federation.
Step B12: acquiring the previous model parameters of the local training model, and generating the auxiliary model parameters based on the previous model parameters.
In this embodiment, the previous model parameters of the local training model are acquired, and the auxiliary model parameters are generated based on them. Specifically, the previous iteration model parameters of the federation rounds preceding the current round are acquired and weighted-averaged to obtain the auxiliary model parameters. For example, assuming the previous iteration model parameters are a, b, and c with weights of 20%, 30%, and 50% respectively, the auxiliary model parameter is m₀ = a·20% + b·30% + c·50%.
Step B20: generating verification model parameters and a second verification random parameter based on the first verification random parameter, the model training parameters, and the preset verification challenge parameters.
In this embodiment, it should be noted that, in one feasible implementation, the preset verification challenge parameters may be computed by the coordinator from the encrypted model parameters sent by the participants in previous rounds of federation and a preset hash function. For example, assuming there are 10 participants, the corresponding 10 previously encrypted model parameters are freely combined, and the n results of the free combination are input into the preset hash function to obtain the verification challenge parameters x₁, x₂, …, xₙ; the specific generation method and number of the preset verification challenge parameters x₁, x₂, …, xₙ are not limited.
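The patent leaves the hash-based construction open; one hypothetical sketch (the function name, tag scheme, and challenge range are assumptions, not specified by the patent) derives each challenge from the previous round's ciphertexts with SHA-256:

```python
import hashlib

# Made-up example ciphertext values from the previous federation round.
prev_encrypted = [104001, 87231, 55012]

def challenge(encrypted_params, tag):
    """Derive one verification challenge parameter from ciphertexts and a tag."""
    data = b"|".join(str(c).encode() for c in encrypted_params) + b"|" + tag
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") % 1000 + 1   # small nonzero challenge

x1 = challenge(prev_encrypted, b"x1")
x2 = challenge(prev_encrypted, b"x2")
```

Because both coordinator and participants can recompute the same hash over the same ciphertexts, the challenges are reproducible by all parties without an extra exchange.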
Verification model parameters and a second verification random parameter are generated based on the first verification random parameter, the model training parameters, and the preset verification challenge parameters. Specifically, a power operation is performed on the first verification random parameter with the preset verification challenge parameters to obtain the second verification random parameter, and the verification model parameters are generated based on the model training parameters and the preset verification challenge parameters. For example, assuming the first verification random parameter is r₁, the model training parameter is m, and the preset verification challenge parameters are x₁ and x₂, the verification model parameter is n = m·x₁ + m·x₂, and the second verification random parameter is

r₂ = r₁^x₁ · r₁^x₂
Step B30: sending the encrypted model parameters, the verification model parameters, and the second verification random parameter to the first device for the first device to perform zero-knowledge verification and obtain a zero-knowledge verification result.
In this embodiment, the encrypted model parameters, the verification model parameters, and the second verification random parameter are sent to the first device associated with the second device, so that the first device calculates the first zero-knowledge proof result and the second zero-knowledge proof result based on the encrypted model parameters, the verification model parameters, and the second verification random parameter, determines based on the two proof results whether the encrypted model parameters are false encrypted model parameters, obtains a determination result, and records the determination result in the zero-knowledge verification result, where the zero-knowledge verification result includes the determination result corresponding to each second device.
Step B40: receiving the aggregation parameters fed back by the first device based on the zero-knowledge verification result and the encrypted model parameters, and updating the local training model corresponding to the model training parameters based on the aggregation parameters until the local training model reaches the preset training end condition.
In this embodiment, the aggregation parameters fed back by the first device based on the zero-knowledge verification result and the encrypted model parameters are received, and the local training model corresponding to the model training parameters is updated based on the aggregation parameters until it reaches the preset training end condition. Specifically, after obtaining the zero-knowledge proof result, the first device removes the false encrypted model parameters from the encrypted model parameters sent by the second devices based on the zero-knowledge proof result to obtain the trusted model parameters, and aggregates the trusted model parameters, where the aggregation processing includes summation, weighted averaging, and the like, to obtain the aggregation parameters, which are fed back to each second device. After receiving the aggregation parameters, the second device decrypts them based on the preset private key corresponding to the preset public key to obtain decrypted aggregation parameters, updates the local training model based on the decrypted aggregation parameters, and uses the updated local training model as the initial model of the next round of federation, until the local training model reaches the preset training end condition, where the training end condition includes reaching the maximum number of iterations, convergence of the loss function, and the like.
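The participant-side decrypt-and-update step can be sketched with a toy Paillier setup (tiny, insecure example primes; lam and mu are the standard Paillier private values for these primes, and the "model" is a single number purely for illustration):

```python
# Toy Paillier with private values for p = 17, q = 19 (illustrative only).
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1
lam = 144                  # lcm(p - 1, q - 1), part of the private key
mu = 83                    # inverse of L(g^lam mod n^2) modulo n

def enc(m, r):
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    # L(u) = (u - 1) // n
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# The coordinator aggregated three trusted encrypted parameters by summation
# (product of ciphertexts) and fed the result back to the second device.
agg_cipher = (enc(10, 2) * enc(20, 3) * enc(30, 5)) % n2

agg_sum = dec(agg_cipher)      # decrypted aggregation parameter: 60
local_param = agg_sum / 3      # e.g. average across the three participants
```

The second device would then write `local_param` back into its local training model and start the next round of federation, stopping once the end condition (maximum rounds or loss convergence) is met.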
In this embodiment, model training parameters and a first verification random parameter are acquired; the model training parameters are encrypted based on the first verification random parameter and the preset public key to obtain encrypted model parameters; verification model parameters and a second verification random parameter are generated based on the first verification random parameter, the model training parameters, and the preset verification challenge parameters; the encrypted model parameters, the verification model parameters, and the second verification random parameter are sent to the first device for zero-knowledge verification to obtain a zero-knowledge verification result; and the aggregation parameters fed back by the first device based on the zero-knowledge verification result and the encrypted model parameters are received, with the local training model corresponding to the model training parameters updated based on the aggregation parameters until it reaches the preset training end condition. That is, this embodiment provides a federated learning modeling method based on zero-knowledge proof: when the model training parameters are encrypted into encrypted model parameters, the verification model parameters and the second verification random parameter are generated at the same time and sent together with the encrypted model parameters to the first device for zero-knowledge verification, so that the first device can identify and remove false encrypted model parameters among those sent by the second devices. The aggregation parameters received by the second device are therefore obtained by the first device by aggregating trusted encrypted model parameters, and the local training model is updated based on these aggregation parameters to complete the federated learning modeling. This avoids updating the local training model with aggregation parameters computed from encrypted model parameters mixed with false ones, a situation that makes it difficult for the local training model to reach the preset training end condition and degrades its accuracy, and thereby improves the efficiency and accuracy of federated learning modeling, solving the technical problem of low efficiency and poor accuracy in federated learning modeling.
参照图4,图4是本申请实施例方案涉及的硬件运行环境的设备结构示意图。Referring to FIG. 4, FIG. 4 is a schematic diagram of the device structure of the hardware operating environment involved in the solution of the embodiment of the present application.
如图4所示，该联邦学习建模设备可以包括：处理器1001，例如CPU，存储器1005，通信总线1002。其中，通信总线1002用于实现处理器1001和存储器1005之间的连接通信。存储器1005可以是高速RAM存储器，也可以是稳定的存储器（non-volatile memory），例如磁盘存储器。存储器1005可选的还可以是独立于前述处理器1001的存储设备。As shown in FIG. 4, the federated learning modeling device may include: a processor 1001 such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to implement connection and communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory, such as a magnetic disk memory. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
在一实施例中,该联邦学习建模设备还可以包括矩形用户接口、网络接口、摄像头、RF(Radio Frequency,射频)电路,传感器、音频电路、WiFi模块等等。矩形用户接口可以包括显示屏(Display)、 输入子模块比如键盘(Keyboard),可选矩形用户接口还可以包括标准的有线接口、无线接口。网络接口可选的可以包括标准的有线接口、无线接口(如WI-FI接口)。In an embodiment, the federated learning modeling device may also include a rectangular user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and so on. The rectangular user interface may include a display screen (Display) and an input sub-module such as a keyboard (Keyboard), and the optional rectangular user interface may also include a standard wired interface and a wireless interface. The network interface can optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
本领域技术人员可以理解，图4中示出的联邦学习建模设备结构并不构成对联邦学习建模设备的限定，可以包括比图示更多或更少的部件，或者组合某些部件，或者不同的部件布置。Those skilled in the art can understand that the structure of the federated learning modeling device shown in FIG. 4 does not constitute a limitation on the federated learning modeling device, which may include more or fewer components than shown in the figure, or combine certain components, or have a different component arrangement.
如图4所示,作为一种计算机存储介质的存储器1005中可以包括操作系统、网络通信模块以及联邦学习建模方法程序。操作系统是管理和控制联邦学习建模设备硬件和软件资源的程序,支持联邦学习建模方法程序以及其它软件和/或程序的运行。网络通信模块用于实现存储器1005内部各组件之间的通信,以及与联邦学习建模方法系统中其它硬件和软件之间通信。As shown in FIG. 4, the memory 1005 as a computer storage medium may include an operating system, a network communication module, and a federated learning modeling method program. The operating system is a program that manages and controls the hardware and software resources of the federated learning modeling equipment, and supports the running of the federated learning modeling method program and other software and/or programs. The network communication module is used to realize the communication between the various components in the memory 1005 and the communication with other hardware and software in the federated learning modeling method system.
在图4所示的联邦学习建模设备中,处理器1001用于执行存储器1005中存储的联邦学习建模方法程序,实现上述任一项所述的联邦学习建模方法的步骤。In the federated learning modeling device shown in FIG. 4, the processor 1001 is used to execute the federated learning modeling method program stored in the memory 1005 to implement the steps of the federated learning modeling method described in any one of the above.
本申请联邦学习建模设备具体实施方式与上述联邦学习建模方法各实施例基本相同,在此不再赘述。The specific implementation of the federated learning modeling device of the present application is basically the same as each embodiment of the above-mentioned federated learning modeling method, and will not be repeated here.
本申请实施例还提供一种联邦学习建模装置,所述联邦学习建模装置应用于第一设备,所述联邦学习建模装置包括:An embodiment of the present application also provides a federated learning modeling device, the federated learning modeling device is applied to a first device, and the federated learning modeling device includes:
接收模块,用于接收各第二设备发送的加密模型参数和所述加密模型参数对应的验证参数;A receiving module, configured to receive encrypted model parameters sent by each second device and verification parameters corresponding to the encrypted model parameters;
零知识验证模块,用于基于各所述验证参数,分别对各所述加密模型参数进行零知识验证,以在各所述加密模型参数中确定虚假加密模型参数,获得零知识验证结果;The zero-knowledge verification module is configured to perform zero-knowledge verification on each of the encryption model parameters based on each of the verification parameters, so as to determine a false encryption model parameter in each of the encryption model parameters, and obtain a zero-knowledge verification result;
协调模块,用于基于所述零知识验证结果和各所述加密模型参数,协调各所述第二设备进行联邦学习建模。The coordination module is configured to coordinate each of the second devices to perform federated learning modeling based on the zero-knowledge verification result and each of the encryption model parameters.
在一实施例中,所述协调模块包括:In an embodiment, the coordination module includes:
剔除子模块,用于基于所述零知识验证结果,在各所述加密模型参数中剔除所述虚假加密模型参数,获得各可信模型参数;The elimination sub-module is used to eliminate the false encryption model parameters from the encryption model parameters based on the zero-knowledge verification result to obtain the trusted model parameters;
聚合子模块，用于对各所述可信模型参数进行聚合处理，获得聚合参数，并将所述聚合参数分别反馈至各所述第二设备，以供各所述第二设备更新各自的本地训练模型，直至所述本地训练模型达到预设训练结束条件。The aggregation sub-module is configured to perform aggregation processing on each of the trusted model parameters to obtain an aggregate parameter, and to feed back the aggregate parameter to each of the second devices, so that each of the second devices updates its local training model until the local training model reaches a preset training end condition.
在一实施例中,所述零知识验证模块包括:In an embodiment, the zero-knowledge verification module includes:
计算子模块,用于分别计算各所述验证参数对应的第一零知识证明结果和第二零知识证明结果;The calculation sub-module is used to calculate the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each verification parameter;
零知识验证子模块,用于基于各所述第一零知识证明结果和各所述第二零知识证明结果,分别验证各所述加密模型参数是否为虚假加密模型参数,获得零知识验证结果。The zero-knowledge verification sub-module is configured to verify whether each encryption model parameter is a false encryption model parameter based on each of the first zero-knowledge certification results and each of the second zero-knowledge certification results, and obtain a zero-knowledge verification result.
在一实施例中,所述计算子模块包括:In an embodiment, the calculation sub-module includes:
合法性验证单元,用于基于预设验证挑战参数,对各所述加密模型参数进行合法性验证,获得各所述第一零知识验证结果;A legitimacy verification unit, configured to verify the legitimacy of each encryption model parameter based on preset verification challenge parameters, and obtain each of the first zero-knowledge verification results;
加密单元,用于基于预设协调方公钥和各所述验证随机参数,对各所述验证模型参数进行加密处理,获得各所述第二零知识验证结果。The encryption unit is configured to perform encryption processing on each verification model parameter based on the preset coordinator public key and each of the verification random parameters to obtain each of the second zero-knowledge verification results.
在一实施例中,所述合法性验证单元包括:In an embodiment, the legality verification unit includes:
第一幂操作子单元,用于分别将所述第一验证挑战参数与各所述加密模型参数进行幂操作,获得各所述加密模型参数对应的第一幂操作结果;The first power operation subunit is configured to perform power operations on the first verification challenge parameter and each of the encryption model parameters to obtain the first power operation result corresponding to each of the encryption model parameters;
第二幂操作子单元,用于分别将所述第二验证挑战参数与各所述加密模型参数进行幂操作,获得各所述加密模型参数对应的第二幂操作结果;The second power operation subunit is configured to perform power operations on the second verification challenge parameter and each of the encryption model parameters to obtain the second power operation result corresponding to each of the encryption model parameters;
生成子单元,用于基于各所述第一幂操作结果和各所述第二幂操作结果,生成各所述第一零知识验证结果。A generating subunit is configured to generate each of the first zero-knowledge verification results based on each of the first power operation results and each of the second power operation results.
在一实施例中,所述零知识验证子模块包括:In an embodiment, the zero-knowledge verification sub-module includes:
对比单元,用于将各所述加密模型参数对应的所述第一零知识证明结果与所述第二零知识证明结果分别进行对比;A comparison unit, configured to compare the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each encryption model parameter respectively;
第一判定单元,用于若所述加密模型参数对应的所述第一零知识证明结果和所述第二零知识证明结果不一致,则判定所述加密模型参数为所述虚假加密模型参数;A first determining unit, configured to determine that the encryption model parameter is the false encryption model parameter if the first zero-knowledge proof result corresponding to the encryption model parameter is inconsistent with the second zero-knowledge proof result;
第二判定单元,用于若所述加密模型参数对应的所述第一零知识证明结果和所述第二零知识证明结果一致,则判定所述加密模型参数不为所述虚假加密模型参数。The second determination unit is configured to determine that the encryption model parameter is not the false encryption model parameter if the first zero-knowledge proof result corresponding to the encryption model parameter is consistent with the second zero-knowledge proof result.
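The decision logic of the comparison and determination units above can be sketched in a few lines of Python (a hypothetical minimal form; the computation of the two proof results is abstracted away):

```python
# Minimal sketch of the zero-knowledge verification sub-module: a ciphertext
# is trusted only if its first and second zero-knowledge proof results agree.
# The computation of the proof results themselves is abstracted away here.
def zero_knowledge_verify(submissions):
    """submissions: list of (ciphertext, proof_result_1, proof_result_2)."""
    return {c: p1 == p2 for c, p1, p2 in submissions}

def trusted_parameters(submissions):
    verdicts = zero_knowledge_verify(submissions)
    # eliminate false encrypted model parameters, keep trusted ones
    return [c for c, _, _ in submissions if verdicts[c]]

subs = [("cipher_A", 10, 10), ("cipher_B", 10, 99), ("cipher_C", 7, 7)]
assert trusted_parameters(subs) == ["cipher_A", "cipher_C"]
assert zero_knowledge_verify(subs)["cipher_B"] is False
```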
本申请联邦学习建模装置的具体实施方式与上述联邦学习建模方法各实施例基本相同,在此不再赘述。The specific implementation of the federated learning modeling device of the present application is basically the same as each embodiment of the above-mentioned federated learning modeling method, and will not be repeated here.
为实现所述目的,本实施例还提供一种联邦学习建模装置,所述联邦学习建模装置应用于第二设备,所述联邦学习建模装置包括:To achieve the objective, this embodiment also provides a federated learning modeling device, the federated learning modeling device is applied to a second device, and the federated learning modeling device includes:
加密模块,用于获取模型训练参数和第一验证随机参数,并基于所述第一验证随机参数和预设公钥,对所述模型训练参数进行加密处理,获得加密模型参数;An encryption module, configured to obtain model training parameters and first verification random parameters, and based on the first verification random parameters and a preset public key, perform encryption processing on the model training parameters to obtain encrypted model parameters;
生成模块,用于基于所述第一验证随机参数、所述模型训练参数和预设验证挑战参数,生成验证模型参数和第二验证随机参数;A generating module, configured to generate a verification model parameter and a second verification random parameter based on the first verification random parameter, the model training parameter, and the preset verification challenge parameter;
发送模块,用于将所述加密模型参数、所述验证模型参数和所述第二验证随机参数发送至第一设备,以供所述第一设备进行零知识验证,获得零知识验证结果;A sending module, configured to send the encryption model parameters, the verification model parameters, and the second verification random parameters to a first device for the first device to perform zero-knowledge verification and obtain a zero-knowledge verification result;
模型更新模块，用于接收所述第一设备基于所述零知识验证结果和所述加密模型参数反馈的聚合参数，并基于所述聚合参数，对所述模型训练参数对应的本地训练模型进行更新，直至所述本地训练模型达到预设训练结束条件。The model update module is configured to receive the aggregate parameter fed back by the first device based on the zero-knowledge verification result and the encrypted model parameters, and to update the local training model corresponding to the model training parameters based on the aggregate parameter, until the local training model reaches a preset training end condition.
在一实施例中,所述加密模块包括:In an embodiment, the encryption module includes:
获取子模块,用于对所述模型训练参数对应的本地训练模型进行迭代训练,直至所述本地训练模型达到预设迭代次数阀值,获取所述本地训练模型的所述当前模型参数;An obtaining sub-module, configured to perform iterative training on the local training model corresponding to the model training parameters until the local training model reaches a preset iteration number threshold, and obtain the current model parameters of the local training model;
生成子模块,用于获取所述本地训练模型的在前模型参数,并基于所述在前模型参数,生成所述辅助模型参数。A generating sub-module is used to obtain the previous model parameters of the local training model, and generate the auxiliary model parameters based on the previous model parameters.
本申请联邦学习建模装置的具体实施方式与上述联邦学习建模方法各实施例基本相同,在此不再赘述。The specific implementation of the federated learning modeling device of the present application is basically the same as each embodiment of the above-mentioned federated learning modeling method, and will not be repeated here.
以上仅为本申请的优选实施例，并非因此限制本申请的专利范围，凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换，或直接或间接运用在其他相关的技术领域，均同理包括在本申请的专利保护范围内。The above are only preferred embodiments of the present application and do not limit its patent scope. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (20)

  1. 一种联邦学习建模方法,其中,所述联邦学习建模方法应用于第一设备,所述联邦学习建模方法包括:A federated learning modeling method, wherein the federated learning modeling method is applied to a first device, and the federated learning modeling method includes:
    接收各第二设备发送的加密模型参数和所述加密模型参数对应的验证参数;Receiving encrypted model parameters and verification parameters corresponding to the encrypted model parameters sent by each second device;
    基于各所述验证参数,分别对各所述加密模型参数进行零知识验证,以在各所述加密模型参数中确定虚假加密模型参数,获得零知识验证结果;以及Based on each of the verification parameters, perform zero-knowledge verification on each of the encryption model parameters to determine a false encryption model parameter in each of the encryption model parameters to obtain a zero-knowledge verification result; and
    基于所述零知识验证结果和各所述加密模型参数,协调各所述第二设备进行联邦学习建模。Based on the zero-knowledge verification result and each of the encryption model parameters, coordinate each of the second devices to perform federated learning modeling.
  2. 如权利要求1所述联邦学习建模方法，其中，所述基于所述零知识验证结果和各所述加密模型参数，协调各所述第二设备进行联邦学习建模的步骤包括：The federated learning modeling method according to claim 1, wherein the step of coordinating each of the second devices to perform federated learning modeling based on the zero-knowledge verification result and each of the encryption model parameters comprises:
    基于所述零知识验证结果,在各所述加密模型参数中剔除所述虚假加密模型参数,获得各可信模型参数;以及Based on the zero-knowledge verification result, remove the false encryption model parameters from each of the encryption model parameters to obtain each trusted model parameter; and
    对各所述可信模型参数进行聚合处理，获得聚合参数，并将所述聚合参数分别反馈至各所述第二设备，以供各所述第二设备更新各自的本地训练模型，直至所述本地训练模型达到预设训练结束条件。Performing aggregation processing on each of the trusted model parameters to obtain an aggregate parameter, and feeding back the aggregate parameter to each of the second devices, so that each of the second devices updates its respective local training model until the local training model reaches a preset training end condition.
  3. 如权利要求1所述联邦学习建模方法，其中，所述基于各所述验证参数，分别对各所述加密模型参数进行零知识验证，以在各所述加密模型参数中确定虚假加密模型参数，获得零知识验证结果的步骤包括：The federated learning modeling method according to claim 1, wherein the step of performing zero-knowledge verification on each of the encryption model parameters based on each of the verification parameters, so as to determine false encryption model parameters among the encryption model parameters and obtain a zero-knowledge verification result, comprises:
    分别计算各所述验证参数对应的第一零知识证明结果和第二零知识证明结果;以及Respectively calculating the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each of the verification parameters; and
    基于各所述第一零知识证明结果和各所述第二零知识证明结果,分别验证各所述加密模型参数是否为虚假加密模型参数,获得零知识验证结果。Based on each of the first zero-knowledge certification results and each of the second zero-knowledge certification results, it is verified whether each encryption model parameter is a false encryption model parameter, and a zero-knowledge verification result is obtained.
  4. 如权利要求3所述联邦学习建模方法,其中,所述验证参数包括验证模型参数和验证随机参数,The federated learning modeling method according to claim 3, wherein the verification parameters include verification model parameters and verification random parameters,
    所述分别计算各所述验证参数对应的第一零知识证明结果和第二零知识证明结果的步骤包括:The step of separately calculating the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each of the verification parameters includes:
    基于预设验证挑战参数,对各所述加密模型参数进行合法性验证,获得各所述第一零知识验证结果;以及Based on preset verification challenge parameters, perform legality verification on each of the encryption model parameters to obtain each of the first zero-knowledge verification results; and
    基于预设协调方公钥和各所述验证随机参数,对各所述验证模型参数进行加密处理,获得各所述第二零知识验证结果。Encryption processing is performed on each verification model parameter based on the preset coordinator public key and each of the verification random parameters to obtain each of the second zero-knowledge verification results.
  5. 如权利要求4所述联邦学习建模方法,其中,所述预设验证挑战参数包括第一验证挑战参数和第二验证挑战参数;5. The federated learning modeling method according to claim 4, wherein the preset verification challenge parameters include a first verification challenge parameter and a second verification challenge parameter;
    所述基于预设验证挑战参数,对各所述加密模型参数进行合法性验证,获得各所述第一零知识验证结果的步骤包括:The step of performing legality verification on each of the encryption model parameters based on preset verification challenge parameters to obtain each of the first zero-knowledge verification results includes:
    分别将所述第一验证挑战参数与各所述加密模型参数进行幂操作,获得各所述加密模型参数对应的 第一幂操作结果;Performing exponentiation operations on the first verification challenge parameter and each encryption model parameter to obtain a first exponentiation operation result corresponding to each encryption model parameter;
    分别将所述第二验证挑战参数与各所述加密模型参数进行幂操作,获得各所述加密模型参数对应的第二幂操作结果;以及Performing an exponentiation operation on the second verification challenge parameter and each of the encryption model parameters to obtain a second exponentiation operation result corresponding to each of the encryption model parameters; and
    基于各所述第一幂操作结果和各所述第二幂操作结果,生成各所述第一零知识验证结果。Based on each of the first power operation results and each of the second power operation results, each of the first zero-knowledge verification results is generated.
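One way the power operations of this claim could look in code is sketched below; the modular-product combination step is an assumed choice for illustration, not a construction specified by the claim:

```python
# Hypothetical sketch of the two power operations: the first device raises
# each ciphertext to the first and second verification challenge parameters
# and combines the results into a first zero-knowledge verification result.
# The combination step shown (modular product) is an assumed choice.
def first_zk_result(cipher, e1, e2, modulus):
    r1 = pow(cipher, e1, modulus)   # first power operation result
    r2 = pow(cipher, e2, modulus)   # second power operation result
    return (r1 * r2) % modulus      # combined verification result

n2 = (293 * 433) ** 2
c = 123456789 % n2
# c^e1 * c^e2 == c^(e1 + e2) (mod n^2), so the combination is well defined:
assert first_zk_result(c, 3, 4, n2) == pow(c, 7, n2)
```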
  6. 如权利要求3所述联邦学习建模方法，其中，所述基于各所述第一零知识证明结果和各所述第二零知识证明结果，分别验证各所述加密模型参数是否为虚假加密模型参数的步骤包括：The federated learning modeling method according to claim 3, wherein the step of verifying, based on each of the first zero-knowledge proof results and each of the second zero-knowledge proof results, whether each of the encryption model parameters is a false encryption model parameter comprises:
    将各所述加密模型参数对应的所述第一零知识证明结果与所述第二零知识证明结果分别进行对比;Respectively comparing the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each of the encryption model parameters;
    若所述加密模型参数对应的所述第一零知识证明结果和所述第二零知识证明结果不一致,则判定所述加密模型参数为所述虚假加密模型参数;以及If the first zero-knowledge proof result corresponding to the encryption model parameter is inconsistent with the second zero-knowledge proof result, determining that the encryption model parameter is the fake encryption model parameter; and
    若所述加密模型参数对应的所述第一零知识证明结果和所述第二零知识证明结果一致,则判定所述加密模型参数不为所述虚假加密模型参数。If the first zero-knowledge proof result corresponding to the encryption model parameter is consistent with the second zero-knowledge proof result, it is determined that the encryption model parameter is not the false encryption model parameter.
  7. 如权利要求1所述联邦学习建模方法,其中,所述第一设备和第二设备均基于同态加密算法进行加密,其中,The federated learning modeling method according to claim 1, wherein the first device and the second device both perform encryption based on a homomorphic encryption algorithm, wherein,
    同态加密算法满足:The homomorphic encryption algorithm satisfies:
    C=Enc(PK,m,r)，对于C1=Enc(PK,m1,r1)和C2=Enc(PK,m2,r2)，满足：C1×C2=Enc(PK,m1+m2,r1×r2)。C = Enc(PK, m, r); for C1 = Enc(PK, m1, r1) and C2 = Enc(PK, m2, r2), the following holds: C1 × C2 = Enc(PK, m1 + m2, r1 × r2).
    其中，C、C1和C2均为加密之后的密文，PK为加密的密钥，m、m1和m2为待加密参数，r、r1和r2为加密所需的随机数。Among them, C, C1, and C2 are the ciphertexts obtained after encryption, PK is the encryption key, m, m1, and m2 are the parameters to be encrypted, and r, r1, and r2 are the random numbers required for encryption.
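The homomorphic relation of this claim can be checked numerically with a toy Paillier-style instance (tiny primes, illustration only; the specific scheme chosen here is an assumption consistent with the stated property):

```python
# Toy Paillier-style instance; small primes are for illustration only.
p, q = 293, 433
n, n2, g = p * q, (p * q) ** 2, p * q + 1

def enc(m, r):
    """C = Enc(PK, m, r) = g^m * r^n mod n^2, with PK = (n, g)."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

m1, r1 = 5, 17
m2, r2 = 12, 23
c1, c2 = enc(m1, r1), enc(m2, r2)

# Multiplying ciphertexts adds plaintexts and multiplies random values:
# C1 * C2 = Enc(PK, m1 + m2, r1 * r2)
assert (c1 * c2) % n2 == enc(m1 + m2, r1 * r2)
```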
  8. 如权利要求3所述联邦学习建模方法，其中，所述对各所述可信模型参数进行聚合处理，获得聚合参数，并将所述聚合参数分别反馈至各所述第二设备，以供各所述第二设备更新各自的本地训练模型，直至所述本地训练模型达到预设训练结束条件的步骤包括：The federated learning modeling method according to claim 3, wherein the step of performing aggregation processing on each of the trusted model parameters to obtain an aggregate parameter and feeding back the aggregate parameter to each of the second devices, so that each of the second devices updates its respective local training model until the local training model reaches a preset training end condition, comprises:
    对所述聚合参数进行解密,获得解密聚合参数;Decrypt the aggregation parameters to obtain decrypted aggregation parameters;
    并基于解密聚合参数,对己方持有的本地训练模型进行更新,获得更新后的本地训练模型;And based on the decryption aggregation parameters, update the local training model held by oneself to obtain the updated local training model;
    判断更新后的本地训练模型是否达到预设训练结束条件;Determine whether the updated local training model meets the preset training end conditions;
    若更新后的本地训练模型达到预设训练结束条件,则判定完成所述联邦学习建模的任务;If the updated local training model reaches the preset training end condition, it is determined that the task of the federated learning modeling is completed;
    若更新后的本地训练模型未达到预设训练结束条件，则重新对所述本地训练模型进行迭代训练，直至所述本地训练模型达到预设迭代次数阀值，则重新获取所述本地训练模型的模型训练参数；以及If the updated local training model does not reach the preset training end condition, performing iterative training on the local training model again until the local training model reaches the preset iteration number threshold, and re-acquiring the model training parameters of the local training model; and
    将所述模型训练参数重新加密发送至所述协调方，以重新进行联邦学习建模，直至所述本地训练模型达到预设训练结束条件；Re-encrypting the model training parameters and sending them to the coordinator to perform federated learning modeling again, until the local training model reaches a preset training end condition;
    其中,所述训练结束条件包括达到预设最大迭代次数,所述本地训练模型对应的损失函数收敛。Wherein, the training end condition includes reaching a preset maximum number of iterations, and the loss function corresponding to the local training model converges.
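The update loop of this claim can be sketched as follows (hypothetical Python; the per-round logic and decryption/evaluation are passed in as stand-ins):

```python
# Hypothetical sketch of the participant's update loop: stop when the loss
# converges, otherwise retrain and resend until a maximum round count.
def federated_loop(train_round, evaluate, max_rounds=100, tol=1e-6):
    prev_loss = float("inf")
    for _ in range(max_rounds):
        agg = train_round()            # encrypt, send, receive aggregate
        loss = evaluate(agg)           # decrypt aggregate, update, evaluate
        if abs(prev_loss - loss) < tol:
            return "converged"         # loss function has converged
        prev_loss = loss
    return "max_iterations"            # preset maximum iterations reached

losses = iter([1.0, 0.5, 0.25, 0.25])
assert federated_loop(lambda: None, lambda _: next(losses)) == "converged"
```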
  9. 一种联邦学习建模方法,其中,所述联邦学习建模方法应用于第二设备,所述联邦学习建模方 法包括:A federated learning modeling method, wherein the federated learning modeling method is applied to a second device, and the federated learning modeling method includes:
    获取模型训练参数和第一验证随机参数,并基于所述第一验证随机参数和预设公钥,对所述模型训练参数进行加密处理,获得加密模型参数;Acquiring model training parameters and first verification random parameters, and performing encryption processing on the model training parameters based on the first verification random parameters and the preset public key to obtain encrypted model parameters;
    基于所述第一验证随机参数、所述模型训练参数和预设验证挑战参数,生成验证模型参数和第二验证随机参数;Generating a verification model parameter and a second verification random parameter based on the first verification random parameter, the model training parameter, and the preset verification challenge parameter;
    将所述加密模型参数、所述验证模型参数和所述第二验证随机参数发送至第一设备,以供所述第一设备进行零知识验证,获得零知识验证结果;以及Sending the encryption model parameters, the verification model parameters, and the second verification random parameters to a first device for the first device to perform zero-knowledge verification and obtain a zero-knowledge verification result; and
    接收所述第一设备基于所述零知识验证结果和所述加密模型参数反馈的聚合参数，并基于所述聚合参数，对所述模型训练参数对应的本地训练模型进行更新，直至所述本地训练模型达到预设训练结束条件。Receiving the aggregate parameter fed back by the first device based on the zero-knowledge verification result and the encrypted model parameters, and updating the local training model corresponding to the model training parameters based on the aggregate parameter, until the local training model reaches a preset training end condition.
  10. 如权利要求9所述联邦学习建模方法，其中，所述模型训练参数包括当前模型参数和辅助模型参数，The federated learning modeling method according to claim 9, wherein the model training parameters include current model parameters and auxiliary model parameters,
    所述获取模型训练参数的步骤包括:The step of obtaining model training parameters includes:
    对所述模型训练参数对应的本地训练模型进行迭代训练,直至所述本地训练模型达到预设迭代次数阀值,获取所述本地训练模型的所述当前模型参数;以及Performing iterative training on the local training model corresponding to the model training parameters until the local training model reaches a preset iteration number threshold, and obtaining the current model parameters of the local training model; and
    获取所述本地训练模型的在前模型参数,并基于所述在前模型参数,生成所述辅助模型参数。Obtain the previous model parameters of the local training model, and generate the auxiliary model parameters based on the previous model parameters.
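A minimal sketch of this claim's parameter acquisition, assuming the auxiliary parameters are derived as the delta from the previous round's parameters (an illustrative choice only; the claim does not fix the derivation):

```python
# Hypothetical sketch of obtaining model training parameters: iterate the
# local model up to a preset iteration threshold, take the current model
# parameters, and derive auxiliary parameters from the previous round's
# parameters (here their delta, an illustrative choice only).
def get_model_training_params(params, step, prev_params, iters=5):
    for _ in range(iters):             # train until the iteration threshold
        params = step(params)
    current = params                    # current model parameters
    auxiliary = [c - p for c, p in zip(current, prev_params)]
    return current, auxiliary           # auxiliary model parameters

cur, aux = get_model_training_params([1.0], lambda w: [w[0] * 0.5], [1.0])
assert cur == [0.03125]                 # 1.0 halved five times
assert aux == [-0.96875]
```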
  11. 一种联邦学习建模设备,其中,所述联邦学习建模设备包括:存储器、处理器以及存储在存储器上的用于实现所述联邦学习建模方法的程序,A federated learning modeling device, wherein the federated learning modeling device includes a memory, a processor, and a program stored on the memory for implementing the federated learning modeling method,
    所述存储器用于存储实现联邦学习建模方法的程序;The memory is used to store a program for implementing the federated learning modeling method;
    所述处理器用于执行实现所述联邦学习建模方法的程序,以实现:The processor is used to execute a program for implementing the federated learning modeling method to achieve:
    接收各第二设备发送的加密模型参数和所述加密模型参数对应的验证参数;Receiving encrypted model parameters and verification parameters corresponding to the encrypted model parameters sent by each second device;
    基于各所述验证参数,分别对各所述加密模型参数进行零知识验证,以在各所述加密模型参数中确定虚假加密模型参数,获得零知识验证结果;以及Based on each of the verification parameters, perform zero-knowledge verification on each of the encryption model parameters to determine a false encryption model parameter in each of the encryption model parameters to obtain a zero-knowledge verification result; and
    基于所述零知识验证结果和各所述加密模型参数,协调各所述第二设备进行联邦学习建模。Based on the zero-knowledge verification result and each of the encryption model parameters, coordinate each of the second devices to perform federated learning modeling.
  12. 如权利要求11所述的联邦学习建模设备，其中，在所述处理器用于执行实现所述联邦学习建模方法的程序中，所述基于所述零知识验证结果和各所述加密模型参数，协调各所述第二设备进行联邦学习建模的步骤包括：The federated learning modeling device according to claim 11, wherein, in the program executed by the processor to implement the federated learning modeling method, the step of coordinating each of the second devices to perform federated learning modeling based on the zero-knowledge verification result and each of the encryption model parameters comprises:
    基于所述零知识验证结果,在各所述加密模型参数中剔除所述虚假加密模型参数,获得各可信模型参数;以及Based on the zero-knowledge verification result, remove the false encryption model parameters from each of the encryption model parameters to obtain each trusted model parameter; and
    对各所述可信模型参数进行聚合处理，获得聚合参数，并将所述聚合参数分别反馈至各所述第二设备，以供各所述第二设备更新各自的本地训练模型，直至所述本地训练模型达到预设训练结束条件。Performing aggregation processing on each of the trusted model parameters to obtain an aggregate parameter, and feeding back the aggregate parameter to each of the second devices, so that each of the second devices updates its respective local training model until the local training model reaches a preset training end condition.
  13. 如权利要求11所述的联邦学习建模设备，其中，在所述处理器用于执行实现所述联邦学习建模方法的程序中，基于各所述验证参数，分别对各所述加密模型参数进行零知识验证，以在各所述加密模型参数中确定虚假加密模型参数，获得零知识验证结果的步骤包括：The federated learning modeling device according to claim 11, wherein, in the program executed by the processor to implement the federated learning modeling method, the step of performing zero-knowledge verification on each of the encryption model parameters based on each of the verification parameters, so as to determine false encryption model parameters among the encryption model parameters and obtain a zero-knowledge verification result, comprises:
    分别计算各所述验证参数对应的第一零知识证明结果和第二零知识证明结果;以及Respectively calculating the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each of the verification parameters; and
    基于各所述第一零知识证明结果和各所述第二零知识证明结果,分别验证各所述加密模型参数是否为虚假加密模型参数,获得零知识验证结果。Based on each of the first zero-knowledge certification results and each of the second zero-knowledge certification results, it is verified whether each encryption model parameter is a false encryption model parameter, and a zero-knowledge verification result is obtained.
  14. 如权利要求13所述的联邦学习建模设备，其中，在所述处理器用于执行实现所述联邦学习建模方法的程序中，所述验证参数包括验证模型参数和验证随机参数，所述分别计算各所述验证参数对应的第一零知识证明结果和第二零知识证明结果的步骤包括：The federated learning modeling device according to claim 13, wherein, in the program executed by the processor to implement the federated learning modeling method, the verification parameters include verification model parameters and verification random parameters, and the step of separately calculating the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each of the verification parameters comprises:
    基于预设验证挑战参数,对各所述加密模型参数进行合法性验证,获得各所述第一零知识验证结果;以及Based on preset verification challenge parameters, perform legality verification on each of the encryption model parameters to obtain each of the first zero-knowledge verification results; and
    基于预设协调方公钥和各所述验证随机参数,对各所述验证模型参数进行加密处理,获得各所述第二零知识验证结果。Encryption processing is performed on each verification model parameter based on the preset coordinator public key and each of the verification random parameters to obtain each of the second zero-knowledge verification results.
  15. 如权利要求14所述的联邦学习建模设备，其中，在所述处理器用于执行实现所述联邦学习建模方法的程序中，所述预设验证挑战参数包括第一验证挑战参数和第二验证挑战参数；所述基于预设验证挑战参数，对各所述加密模型参数进行合法性验证，获得各所述第一零知识验证结果的步骤包括：The federated learning modeling device according to claim 14, wherein, in the program executed by the processor to implement the federated learning modeling method, the preset verification challenge parameters include a first verification challenge parameter and a second verification challenge parameter, and the step of performing legality verification on each of the encryption model parameters based on the preset verification challenge parameters to obtain each of the first zero-knowledge verification results comprises:
    分别将所述第一验证挑战参数与各所述加密模型参数进行幂操作,获得各所述加密模型参数对应的第一幂操作结果;Performing an exponentiation operation on the first verification challenge parameter and each of the encryption model parameters to obtain a first exponentiation operation result corresponding to each of the encryption model parameters;
    分别将所述第二验证挑战参数与各所述加密模型参数进行幂操作,获得各所述加密模型参数对应的第二幂操作结果;以及Performing an exponentiation operation on the second verification challenge parameter and each of the encryption model parameters to obtain a second exponentiation operation result corresponding to each of the encryption model parameters; and
    基于各所述第一幂操作结果和各所述第二幂操作结果,生成各所述第一零知识验证结果。Based on each of the first power operation results and each of the second power operation results, each of the first zero-knowledge verification results is generated.
  16. 一种计算机可读存储介质,其中,所述计算机可读存储介质上存储有实现联邦学习建模方法的程序,所述实现联邦学习建模方法的程序被处理器执行以实现:A computer-readable storage medium, wherein a program for implementing the federated learning modeling method is stored on the computer-readable storage medium, and the program for implementing the federated learning modeling method is executed by a processor to realize:
    接收各第二设备发送的加密模型参数和所述加密模型参数对应的验证参数;Receiving encrypted model parameters and verification parameters corresponding to the encrypted model parameters sent by each second device;
    基于各所述验证参数,分别对各所述加密模型参数进行零知识验证,以在各所述加密模型参数中确定虚假加密模型参数,获得零知识验证结果;以及Based on each of the verification parameters, perform zero-knowledge verification on each of the encryption model parameters to determine a false encryption model parameter in each of the encryption model parameters to obtain a zero-knowledge verification result; and
    基于所述零知识验证结果和各所述加密模型参数,协调各所述第二设备进行联邦学习建模。Based on the zero-knowledge verification result and each of the encryption model parameters, coordinate each of the second devices to perform federated learning modeling.
  17. 如权利要求16所述的联邦学习建模设备，其中，在所述处理器用于执行实现所述联邦学习建模方法的程序中，所述基于所述零知识验证结果和各所述加密模型参数，协调各所述第二设备进行联邦学习建模的步骤包括：The federated learning modeling device according to claim 16, wherein, in the program executed by the processor to implement the federated learning modeling method, the step of coordinating each of the second devices to perform federated learning modeling based on the zero-knowledge verification result and each of the encryption model parameters comprises:
    基于所述零知识验证结果,在各所述加密模型参数中剔除所述虚假加密模型参数,获得各可信模型参数;以及Based on the zero-knowledge verification result, remove the false encryption model parameters from each of the encryption model parameters to obtain each trusted model parameter; and
    对各所述可信模型参数进行聚合处理,获得聚合参数,并将所述聚合参数分别反馈至各所述第二设备,以供各所述第二设备更新各自的本地训练模型,直至所述本地训练模型达到预设训练结束条件。Perform aggregation processing on each of the trusted model parameters to obtain aggregation parameters, and feed back the aggregation parameters to each of the second devices, so that each of the second devices can update their respective local training models until the The local training model reaches the preset training end condition.
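The filter-and-aggregate step of claim 17 can be sketched as follows. This is a minimal illustration, assuming the aggregation is a plain element-wise average over the parameters that passed verification; the function and variable names are hypothetical, and the patent does not fix a particular aggregation function.

```python
def aggregate_trusted(params_per_device, zk_results):
    """params_per_device: one parameter vector per second device.
    zk_results: one boolean per device, True when that device's parameters
    passed zero-knowledge verification.
    Returns the aggregation parameter vector fed back to every second device."""
    # Remove the false model parameters flagged by the verification result.
    trusted = [p for p, ok in zip(params_per_device, zk_results) if ok]
    if not trusted:
        raise ValueError("no trusted model parameters to aggregate")
    n = len(trusted)
    # Element-wise average of the remaining (trusted) parameter vectors.
    return [sum(col) / n for col in zip(*trusted)]
```

In a real deployment the averaging would be performed over (or under) the encryption scheme rather than on plaintext lists as shown here.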
  18. The federated learning modeling device according to claim 16, wherein, in the program executed by the processor to implement the federated learning modeling method, the step of performing zero-knowledge verification on each of the encrypted model parameters based on each of the verification parameters, so as to identify false encrypted model parameters among the encrypted model parameters and obtain a zero-knowledge verification result, comprises:
    computing a first zero-knowledge proof result and a second zero-knowledge proof result corresponding to each of the verification parameters; and
    verifying, based on each first zero-knowledge proof result and each second zero-knowledge proof result, whether each encrypted model parameter is a false encrypted model parameter, to obtain the zero-knowledge verification result.
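The decision step of claim 18 can be sketched as follows: an encrypted model parameter is judged false when its two zero-knowledge proof results disagree. The equality test is an assumption made for illustration; the patent states only that both proof results feed the decision.

```python
def flag_submissions(proof_pairs):
    """proof_pairs: (first_zk_proof, second_zk_proof) per encrypted model
    parameter. Returns the zero-knowledge verification result as a list of
    booleans, True meaning the parameter is trusted, False meaning it is a
    false encrypted model parameter."""
    return [first == second for first, second in proof_pairs]
```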
  19. The federated learning modeling device according to claim 18, wherein, in the program executed by the processor to implement the federated learning modeling method, the verification parameters include verification model parameters and verification random parameters, and the step of computing the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to each of the verification parameters comprises:
    performing legality verification on each of the encrypted model parameters based on preset verification challenge parameters, to obtain each first zero-knowledge verification result; and
    encrypting each of the verification model parameters based on a preset coordinator public key and each of the verification random parameters, to obtain each second zero-knowledge verification result.
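Claim 19 encrypts each verification model parameter with a preset coordinator public key and a per-parameter verification random parameter. The patent does not name the cryptosystem; the "public key plus per-message randomness" pattern matches Paillier-style encryption, sketched here with tiny, insecure demo numbers purely as an assumption.

```python
def encrypt_verification_param(m, r, n, g):
    """Paillier-form encryption Enc(m, r) = g^m * r^n mod n^2, where (n, g) is
    the coordinator public key, m a verification model parameter, and r the
    verification random parameter. The ciphertext plays the role of a second
    zero-knowledge verification result."""
    n_sq = n * n
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq
```

Multiplying two such ciphertexts modulo n² yields an encryption of the sum of the plaintexts, which is why this family of schemes fits parameter-aggregation settings.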
  20. The federated learning modeling device according to claim 19, wherein, in the program executed by the processor to implement the federated learning modeling method, the preset verification challenge parameters include a first verification challenge parameter and a second verification challenge parameter, and the step of performing legality verification on each of the encrypted model parameters based on the preset verification challenge parameters, to obtain each first zero-knowledge verification result, comprises:
    performing an exponentiation operation on the first verification challenge parameter and each of the encrypted model parameters, to obtain a first exponentiation result corresponding to each of the encrypted model parameters;
    performing an exponentiation operation on the second verification challenge parameter and each of the encrypted model parameters, to obtain a second exponentiation result corresponding to each of the encrypted model parameters; and
    generating each first zero-knowledge verification result based on each of the first exponentiation results and each of the second exponentiation results.
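The legality check of claim 20 reduces to two modular exponentiations per encrypted parameter, combined into the first zero-knowledge verification result. The product combination below is an assumption for illustration (it makes the result equal enc_param^(e1+e2) mod N); the patent states only that both power-operation results are used to generate the result.

```python
def first_zk_result(enc_param, e1, e2, modulus):
    """e1, e2: the first and second preset verification challenge parameters."""
    pow1 = pow(enc_param, e1, modulus)  # first exponentiation result
    pow2 = pow(enc_param, e2, modulus)  # second exponentiation result
    # Assumed combination rule: multiply the two power results modulo N.
    return (pow1 * pow2) % modulus

def first_zk_results(enc_params, e1, e2, modulus):
    """Apply the check to every encrypted model parameter in turn."""
    return [first_zk_result(c, e1, e2, modulus) for c in enc_params]
```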
PCT/CN2020/135032 2020-05-22 2020-12-09 Federated learning modeling method and device, and computer-readable storage medium WO2021232754A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010445868.8A CN111598254B (en) 2020-05-22 2020-05-22 Federal learning modeling method, device and readable storage medium
CN202010445868.8 2020-05-22

Publications (1)

Publication Number Publication Date
WO2021232754A1 true WO2021232754A1 (en) 2021-11-25

Family

ID=72189770

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/135032 WO2021232754A1 (en) 2020-05-22 2020-12-09 Federated learning modeling method and device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN111598254B (en)
WO (1) WO2021232754A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114239857A (en) * 2021-12-29 2022-03-25 湖南工商大学 Data right determining method, device, equipment and medium based on federal learning
CN114466358A (en) * 2022-01-30 2022-05-10 全球能源互联网研究院有限公司 User identity continuous authentication method and device
CN114760023A (en) * 2022-04-19 2022-07-15 光大科技有限公司 Model training method and device based on federal learning and storage medium
CN114800545A (en) * 2022-01-18 2022-07-29 泉州华中科技大学智能制造研究院 Robot control method based on federal learning
CN114897177A (en) * 2022-04-06 2022-08-12 中国电信股份有限公司 Data modeling method and device, electronic equipment and storage medium
CN115174046A (en) * 2022-06-10 2022-10-11 湖北工业大学 Federal learning bidirectional verifiable privacy protection method and system on vector space
CN115292738A (en) * 2022-10-08 2022-11-04 豪符密码检测技术(成都)有限责任公司 Method for detecting security and correctness of federated learning model and data
CN117972802A (en) * 2024-03-29 2024-05-03 苏州元脑智能科技有限公司 Field programmable gate array chip, aggregation method, device, equipment and medium

Families Citing this family (17)

Publication number Priority date Publication date Assignee Title
CN111598254B (en) * 2020-05-22 2021-10-08 深圳前海微众银行股份有限公司 Federal learning modeling method, device and readable storage medium
CN112132277A (en) * 2020-09-21 2020-12-25 平安科技(深圳)有限公司 Federal learning model training method and device, terminal equipment and storage medium
CN112381000A (en) * 2020-11-16 2021-02-19 深圳前海微众银行股份有限公司 Face recognition method, device, equipment and storage medium based on federal learning
CN112434818B (en) * 2020-11-19 2023-09-26 脸萌有限公司 Model construction method, device, medium and electronic equipment
CN112446025A (en) * 2020-11-23 2021-03-05 平安科技(深圳)有限公司 Federal learning defense method and device, electronic equipment and storage medium
CN112434619B (en) * 2020-11-26 2024-03-26 新奥新智科技有限公司 Case information extraction method, apparatus, device and computer readable medium
CN112434620B (en) * 2020-11-26 2024-03-01 新奥新智科技有限公司 Scene text recognition method, device, equipment and computer readable medium
CN112632636B (en) * 2020-12-23 2024-06-04 深圳前海微众银行股份有限公司 Ciphertext data comparison result proving and verifying method and device
CN112860800A (en) * 2021-02-22 2021-05-28 深圳市星网储区块链有限公司 Trusted network application method and device based on block chain and federal learning
CN113111124B (en) * 2021-03-24 2021-11-26 广州大学 Block chain-based federal learning data auditing system and method
CN112949760B (en) * 2021-03-30 2024-05-10 平安科技(深圳)有限公司 Model precision control method, device and storage medium based on federal learning
CN113420886B (en) * 2021-06-21 2024-05-10 平安科技(深圳)有限公司 Training method, device, equipment and storage medium for longitudinal federal learning model
CN113435121B (en) * 2021-06-30 2023-08-22 平安科技(深圳)有限公司 Model training verification method, device, equipment and medium based on federal learning
CN113487043A (en) * 2021-07-22 2021-10-08 深圳前海微众银行股份有限公司 Federal learning modeling optimization method, apparatus, medium, and computer program product
CN113849805A (en) * 2021-09-23 2021-12-28 国网山东省电力公司济宁供电公司 Mobile user credibility authentication method and device, electronic equipment and storage medium
CN115277197B (en) * 2022-07-27 2024-01-16 深圳前海微众银行股份有限公司 Model ownership verification method, electronic device, medium and program product
CN117575291B (en) * 2024-01-15 2024-05-10 湖南科技大学 Federal learning data collaborative management method based on edge parameter entropy

Citations (8)

Publication number Priority date Publication date Assignee Title
CN110263936A (en) * 2019-06-14 2019-09-20 深圳前海微众银行股份有限公司 Laterally federation's learning method, device, equipment and computer storage medium
CN110378487A (en) * 2019-07-18 2019-10-25 深圳前海微众银行股份有限公司 Laterally model parameter verification method, device, equipment and medium in federal study
CN110503207A (en) * 2019-08-28 2019-11-26 深圳前海微众银行股份有限公司 Federation's study credit management method, device, equipment and readable storage medium storing program for executing
CN110572253A (en) * 2019-09-16 2019-12-13 济南大学 Method and system for enhancing privacy of federated learning training data
CN110797124A (en) * 2019-10-30 2020-02-14 腾讯科技(深圳)有限公司 Model multi-terminal collaborative training method, medical risk prediction method and device
CN110912713A (en) * 2019-12-20 2020-03-24 支付宝(杭州)信息技术有限公司 Method and device for processing model data by combining multiple parties
CN110955907A (en) * 2019-12-13 2020-04-03 支付宝(杭州)信息技术有限公司 Model training method based on federal learning
CN111598254A (en) * 2020-05-22 2020-08-28 深圳前海微众银行股份有限公司 Federal learning modeling method, device and readable storage medium

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
US10490066B2 (en) * 2016-12-29 2019-11-26 X Development Llc Dynamic traffic control
CN109165515A (en) * 2018-08-10 2019-01-08 深圳前海微众银行股份有限公司 Model parameter acquisition methods, system and readable storage medium storing program for executing based on federation's study
US11606358B2 (en) * 2018-09-18 2023-03-14 Cyral Inc. Tokenization and encryption of sensitive data
CN109635462A (en) * 2018-12-17 2019-04-16 深圳前海微众银行股份有限公司 Model parameter training method, device, equipment and medium based on federation's study
CN110490335A (en) * 2019-08-07 2019-11-22 深圳前海微众银行股份有限公司 A kind of method and device calculating participant's contribution rate
KR20190103090A (en) * 2019-08-15 2019-09-04 엘지전자 주식회사 Method and apparatus for learning a model to generate poi data using federated learning
CN110443375B (en) * 2019-08-16 2021-06-11 深圳前海微众银行股份有限公司 Method and device for federated learning
CN110908893A (en) * 2019-10-08 2020-03-24 深圳逻辑汇科技有限公司 Sandbox mechanism for federal learning
CN110874484A (en) * 2019-10-16 2020-03-10 众安信息技术服务有限公司 Data processing method and system based on neural network and federal learning
CN110991655B (en) * 2019-12-17 2021-04-02 支付宝(杭州)信息技术有限公司 Method and device for processing model data by combining multiple parties
CN111178524B (en) * 2019-12-24 2024-06-14 中国平安人寿保险股份有限公司 Data processing method, device, equipment and medium based on federal learning

Cited By (13)

Publication number Priority date Publication date Assignee Title
CN114239857B (en) * 2021-12-29 2022-11-22 湖南工商大学 Data right determining method, device, equipment and medium based on federal learning
CN114239857A (en) * 2021-12-29 2022-03-25 湖南工商大学 Data right determining method, device, equipment and medium based on federal learning
CN114800545B (en) * 2022-01-18 2023-10-27 泉州华中科技大学智能制造研究院 Robot control method based on federal learning
CN114800545A (en) * 2022-01-18 2022-07-29 泉州华中科技大学智能制造研究院 Robot control method based on federal learning
CN114466358A (en) * 2022-01-30 2022-05-10 全球能源互联网研究院有限公司 User identity continuous authentication method and device
CN114466358B (en) * 2022-01-30 2023-10-31 全球能源互联网研究院有限公司 User identity continuous authentication method and device based on zero trust
CN114897177A (en) * 2022-04-06 2022-08-12 中国电信股份有限公司 Data modeling method and device, electronic equipment and storage medium
CN114760023A (en) * 2022-04-19 2022-07-15 光大科技有限公司 Model training method and device based on federal learning and storage medium
CN115174046A (en) * 2022-06-10 2022-10-11 湖北工业大学 Federal learning bidirectional verifiable privacy protection method and system on vector space
CN115174046B (en) * 2022-06-10 2024-04-30 湖北工业大学 Federal learning bidirectional verifiable privacy protection method and system in vector space
CN115292738A (en) * 2022-10-08 2022-11-04 豪符密码检测技术(成都)有限责任公司 Method for detecting security and correctness of federated learning model and data
CN115292738B (en) * 2022-10-08 2023-01-17 豪符密码检测技术(成都)有限责任公司 Method for detecting security and correctness of federated learning model and data
CN117972802A (en) * 2024-03-29 2024-05-03 苏州元脑智能科技有限公司 Field programmable gate array chip, aggregation method, device, equipment and medium

Also Published As

Publication number Publication date
CN111598254B (en) 2021-10-08
CN111598254A (en) 2020-08-28

Similar Documents

Publication Publication Date Title
WO2021232754A1 (en) Federated learning modeling method and device, and computer-readable storage medium
WO2020177392A1 (en) Federated learning-based model parameter training method, apparatus and device, and medium
JP7033120B2 (en) Methods and systems for quantum key distribution based on trusted computing
US20210143987A1 (en) Privacy-preserving federated learning
WO2020181822A1 (en) Method and apparatus for checking consistency of encrypted data, and computer device and storage medium
WO2021159798A1 (en) Method for optimizing longitudinal federated learning system, device and readable storage medium
CN112749392B (en) Method and system for detecting abnormal nodes in federated learning
CN112104619A (en) Data access control system and method based on outsourcing ciphertext attribute encryption
WO2010137508A1 (en) Signature device, signature verification device, anonymous authentication system, signing method, signature authentication method, and programs therefor
US20170288866A1 (en) Systems and methods of creating a distributed ring of trust
WO2019153491A1 (en) Student information storage method and apparatus, readable storage medium, and terminal device
US20200250655A1 (en) Efficient, environmental and consumer friendly consensus method for cryptographic transactions
WO2020253108A1 (en) Information hiding method, apparatus, device, and storage medium
CN112261137B (en) Model training method and system based on joint learning
US20220374544A1 (en) Secure aggregation of information using federated learning
US20240039896A1 (en) Bandwidth controlled multi-party joint data processing methods and apparatuses
US20220210140A1 (en) Systems and methods for federated learning on blockchain
WO2021135793A1 (en) Multi-party secret sharing method and apparatus, and readable storage medium
CN113222180A (en) Federal learning modeling optimization method, apparatus, medium, and computer program product
CN113609781A (en) Automobile production mold optimization method, system, equipment and medium based on federal learning
CN111767411A (en) Knowledge graph representation learning optimization method and device and readable storage medium
CN112865980A (en) Block chain encryption voting method, computer device and storage medium
CN115277010A (en) Identity authentication method, system, computer device and storage medium
CN109257165B (en) Encryption and decryption method and encryption and decryption system for fine-grained mobile access
CN117540426A (en) Method and device for sharing energy power data based on homomorphic encryption and federal learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20936787

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20936787

Country of ref document: EP

Kind code of ref document: A1