CN112949865B - Joint learning contribution degree evaluation method based on SIGMA protocol - Google Patents

Joint learning contribution degree evaluation method based on SIGMA protocol

Info

Publication number
CN112949865B
CN112949865B (application number CN202110292470.XA; published as CN112949865A, granted as CN112949865B)
Authority
CN
China
Prior art keywords
gradient
model
participant
ciphertext
contribution degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110292470.XA
Other languages
Chinese (zh)
Other versions
CN112949865A (en
Inventor
万俊平
殷丽华
孙哲
那崇宁
李丹
李超
罗熙
韦南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou University
Zhejiang Lab
Original Assignee
Guangzhou University
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou University, Zhejiang Lab filed Critical Guangzhou University
Priority to CN202110292470.XA priority Critical patent/CN112949865B/en
Publication of CN112949865A publication Critical patent/CN112949865A/en
Application granted granted Critical
Publication of CN112949865B publication Critical patent/CN112949865B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioethics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Databases & Information Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a federated learning contribution degree evaluation method based on the SIGMA protocol. The model training party sends the model, a tamper-proof trusted execution program, a non-interactive SIGMA protocol, and related parameters to each participant. The participant trains the model with its local data set to obtain a gradient and runs the trusted execution program, which extracts the gradient, updates the model, runs a test module to measure the new model's accuracy, and computes the gradient's contribution degree. The participant encodes and encrypts the gradient according to the encryption algorithm and sends the ciphertext to the training party. The participant then generates a random value, encrypts it with the same algorithm, feeds all ciphertexts generated so far into a hash function sandbox, obtains a hash value, and computes a commitment. Finally, the participant uploads the commitment, the ciphertexts, and the contribution degree to the training party; the training party recomputes the hash value and verifies the commitment, and if verification passes, the gradient ciphertext and its contribution degree are bound and recorded in a database. The method achieves gradient certification without revealing privacy.

Description

Joint learning contribution degree evaluation method based on SIGMA protocol
Technical Field
The invention relates to the fields of federated learning and cryptography, and in particular to a federated learning contribution degree evaluation method based on the SIGMA protocol.
Background
Google first proposed federated learning in 2016. In this setting, a model training party delegates training tasks to multiple participants holding local data sets; each participant trains the model to be trained, generates gradients, and uploads them to the model training party, which aggregates the gradients to update the model. The advantage of federated learning under this design is that every participant's local data set is fully utilized for global model training while the potentially private data sets are never directly collected. The problem is that gradients are correlated with the training data, so the model training party can infer information about a participant's training data set from the uploaded gradients. How the model training party can obtain the final aggregated gradient without gradient information being revealed, while still guaranteeing that the uploaded gradients are correct, is therefore an active research topic. CN111552986B proposes a blockchain-based federated modeling method that hides the uploaded gradients with homomorphic encryption and audits the training data, but it fails to establish a correspondence between gradients and contribution degrees, so low-quality participants may unintentionally damage the aggregated model. CN111797142A provides a real-time auditing scheme based on smart contracts on a blockchain, but it requires an interactive process and incurs unnecessary workload.
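The federated training round described above can be sketched as follows. The toy linear model y = w·x, the learning rate, and the data sets are illustrative assumptions of this sketch, not part of the patent:

```python
# Minimal federated round for a toy linear model y = w * x (illustrative only).
def local_gradient(w, data):
    # MSE gradient computed on one participant's local data set
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

def federated_round(w, participants, lr=0.05):
    # each participant uploads a gradient; the trainer averages and updates
    grads = [local_gradient(w, data) for data in participants]
    return w - lr * sum(grads) / len(grads)

w = 0.0
# both participants privately hold data drawn from y = 2x
participants = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
for _ in range(50):
    w = federated_round(w, participants)
# w converges toward 2.0 without the trainer ever seeing the raw data
```

Averaging the uploaded gradients drives the shared weight toward the value fitting all local data sets, which is exactly why the uploaded gradients leak information about the data and motivate the scheme below.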
In summary, the problems of the prior art are: 1) federated learning lacks a contribution degree evaluation method that does not reveal privacy; 2) the auditing process is mostly interactive, which imposes a heavy auditing workload.
Disclosure of Invention
To address the defects of the prior art, the invention provides a SIGMA-protocol-based federated learning contribution degree evaluation method. The model training party constructs a trusted execution program in advance and distributes it to each participant; the contribution degree of the gradient is evaluated locally before the gradient is encrypted. To prevent the uploaded gradient from being inconsistent with the reported contribution degree, the participant must additionally generate a commitment to the gradient following a non-interactive SIGMA protocol, so that the gradient and its contribution degree can be bound and recorded.
The purpose of the invention is realized by the following technical scheme:
A federated learning contribution degree evaluation method based on the SIGMA protocol comprises the following steps:
Step one: the model training party sends the current batch's model, the model parameters, a tamper-proof trusted execution program, the non-interactive SIGMA protocol, and related parameters to all participants in the model training; the trusted execution program comprises a test module and a hash function sandbox H(x);
Step two: the participant trains the model with its local data set to obtain a gradient and runs the trusted execution program; the trusted execution program extracts the participant's gradient, updates the model with it, runs the test module to test the accuracy of the new model, and then calculates the contribution degree of the gradient;
Step three: the participant encodes the gradient extracted by the trusted execution program according to the encryption algorithm of the non-interactive SIGMA protocol to obtain x_i, encrypts it with each of the two generators of the encryption algorithm to obtain x_iG and x_iH, and sends them to the model training party;
Step four: the participant generates a random value and encodes it to obtain v_i, encrypts it with each of the two generators of the encryption algorithm to obtain v_iG and v_iH, then inputs all ciphertexts generated so far into the hash function sandbox H(x) in the trusted execution program, which outputs a hash value c, and generates the commitment Com_i = v_i − x_i·c;
Step five: participant uploads acceptance Com i 、v i G、v i H and the degree of contribution of the gradient to a model trainer, and the model trainer uses the currently received x i G、x i H、v i G、v i H and backup Hash function sandbox H (x) calculate HashThe value c. Verification formula v i G=rG+c(x i G) And v i H=rH+c(x i H) If the judgment result is positive, the model training party passes the verification, the ciphertext gradient uploaded by the participant and the gradient plaintext locally participating in updating and testing are indicated to be corresponding, and the gradient ciphertext and the contribution degree of the gradient ciphertext are bound and recorded in a database; if the verification fails, the ciphertext does not correspond to the plaintext, the gradient is abandoned and recorded, and an error is fed back to the participant.
To ensure that the SIGMA protocol does not reveal the participants' gradients, the protocol uses an elliptic curve or discrete logarithm encryption algorithm, and the protocol includes two generators G and H of that algorithm.
To test the change in model accuracy before and after training, the test module contains a test data set and stores the current accuracy of the model to be trained. After the updated model passes the accuracy test, the change in accuracy before and after the update is calculated and recorded as the contribution degree of the corresponding gradient.
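The contribution computation can be read as: score the model on the test set before and after applying the gradient, and report the accuracy delta. A toy sketch follows; the linear model, tolerance-based "accuracy", learning rate, and data are illustrative assumptions of this example:

```python
# Toy "test module": accuracy = fraction of test points the linear model
# y = w * x predicts within a tolerance (all parameters illustrative).
def evaluate(w, test_set, tol=0.5):
    return sum(abs(w * x - y) < tol for x, y in test_set) / len(test_set)

def gradient_contribution(w, grad, lr, test_set):
    acc_before = evaluate(w, test_set)       # stored current accuracy
    w_new = w - lr * grad                    # update the model with the gradient
    acc_after = evaluate(w_new, test_set)    # re-test the updated model
    return acc_after - acc_before, w_new     # accuracy change = contribution

test_set = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
grad = sum(2 * x * (1.0 * x - y) for x, y in test_set)  # MSE gradient at w = 1
contrib, w_new = gradient_contribution(1.0, grad, 0.03, test_set)
```

A gradient that moves the model toward the test set yields a positive contribution; a harmful gradient would yield a zero or negative one, which is what lets the trainer rank participants.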
To ensure normal operation of the program, the trusted execution program has a signature function: when a module finishes running, its result is signed, and a third party can confirm whether the program ran normally by querying the signature.
To ensure that commitments cannot be forged, the input of the hash function sandbox H(x) is all ciphertexts involved in the verification operation and its output is a random value; different inputs yield different output values.
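The sandbox can be sketched as a deterministic hash over every ciphertext in the proof, so that neither party can pick the challenge freely; SHA-256 and the 16-byte encoding are assumptions of this sketch, not specified by the patent:

```python
import hashlib

def sandbox_hash(*ciphertexts):
    # deterministic: same ciphertexts -> same value; any change -> new value
    data = b"".join(c.to_bytes(16, "big") for c in ciphertexts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")
```

Because the output is fixed by the inputs, a participant cannot first choose a favorable challenge and then back-compute matching ciphertexts, which is what makes the commitment binding.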
The invention has the following beneficial effects:
(1) Gradient proof without revealing privacy: the non-interactive SIGMA protocol generates gradient proof information, allowing the model training party to verify the authenticity of a gradient without the gradient plaintext being revealed.
(2) Time-saving gradient verification: when any third party needs to verify the correctness of a party's gradient or query its contribution degree, it only needs to check the equation involving the commitment Com_i, without any interaction.
Drawings
Fig. 1 is a schematic diagram of the federal learning contribution evaluation method based on the SIGMA protocol of the present invention.
Fig. 2 is a flowchart of the federal learning contribution evaluation method based on the SIGMA protocol of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and preferred embodiments, and the objects and effects of the present invention will be more apparent, it being understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
Consider horizontal federated learning, for example a next-word prediction model for an input method. To give users a better experience, the input method provider wants to train a sentence prediction model so that, after a user types a word, the recommended next words are as accurate as possible. To protect user privacy, however, the service provider cannot directly collect each user's input; instead it relies on the user terminals to jointly train the prediction model with their respective local data, which requires continuously exchanging model gradients during training. In this gradient exchange, homomorphic encryption is commonly used to keep the gradients' private information from leaking, but because many participants are not fully trustworthy and the encrypted gradient ciphertext is unreadable, the contribution degree of a gradient ciphertext cannot be evaluated and the authenticity of the ciphertext is in question. The present scheme is designed to guarantee both accurate evaluation of gradient contribution degrees and the authenticity and trustworthiness of the ciphertexts.
As shown in fig. 1, the scheme involves a model training party and participants. The model training party builds a test module and a hash function sandbox H(x), packages them into a trusted execution program, and sends the trusted execution program together with the model to be trained to the participants. Each participant receives the model and the trusted execution program, runs the program to complete the contribution degree evaluation and the correctness commitment, and submits the gradient to the model training party. After receiving a participant's gradient, the model training party verifies its correctness and records the gradient together with its contribution degree.
As shown in fig. 2, the specific process is as follows:
The first step: the model training party sends the current batch's model, the model parameters, a tamper-proof trusted execution program, the non-interactive SIGMA protocol, and related parameters to all participants in the model training; the trusted execution program comprises a test module and a hash function sandbox H(x);
The second step: the participant trains the model with its local data set to obtain a gradient and runs the trusted execution program; the trusted execution program extracts the participant's gradient, updates the model with it, runs the test module to test the accuracy of the new model, and then calculates the contribution degree of the gradient. In one implementation, the test module contains a test data set and stores the current accuracy of the model to be trained; after the updated model passes the accuracy test, the change in accuracy before and after the update is calculated and recorded as the contribution degree of the corresponding gradient.
The third step: the participant encodes the gradient extracted by the trusted execution program according to the encryption algorithm of the non-interactive SIGMA protocol to obtain x_i, encrypts it with each of the two generators of the encryption algorithm to obtain x_iG and x_iH, and sends them to the model training party. In one implementation, the encryption algorithm is an elliptic curve or discrete logarithm encryption algorithm, and the protocol includes the two generators G and H of that algorithm.
The fourth step: the participant generates a random value and encodes it to obtain v_i, encrypts it with each of the two generators of the encryption algorithm to obtain v_iG and v_iH, then inputs all ciphertexts generated so far into the hash function sandbox H(x) in the trusted execution program, which outputs a hash value c, and generates the commitment Com_i = v_i − x_i·c;
The fifth step: participant uploads acceptance Com i 、v i G、v i H and the degree of contribution of the gradient to a model trainer, and the model trainer uses the currently received x i G、x i H、v i G、v i H and the backup hash function sandbox H (x) compute the hash value c. Verification formula v i G=rG+c(x i G) And v i H=rH+c(x i H) And if so, the model training party passes the verification, and the ciphertext gradient uploaded by the participant and the gradient plaintext locally participating in updating and testing correspond to each other. Binding and recording the gradient ciphertext and the contribution degree thereof in a database; if the verification is not passed, the ciphertext does not correspond to the plaintext, the gradient is abandoned to be recorded, and errors are fed back to the participants.
Similarly, in finance, multiple banks can cooperate to train a risk control model on their local data sets in order to avoid credit risk and set interest rates fairly. In healthcare, multiple hospitals can cooperate deeply on patient data, improving a drug efficacy evaluation model from large numbers of patient medical records and efficacy records. The contribution degree evaluation method can be used in all of these settings.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and although the invention has been described in detail with reference to the foregoing examples, it will be apparent to those skilled in the art that various changes in the form and details of the embodiments may be made and equivalents may be substituted for elements thereof. All modifications, equivalents and the like which come within the spirit and principle of the invention are intended to be included within the scope of the invention.

Claims (4)

1. A federated learning contribution degree evaluation method based on the SIGMA protocol, characterized by comprising the following steps:
Step one: the model training party sends the current batch's model, the model parameters, a tamper-proof trusted execution program, the non-interactive SIGMA protocol, and related parameters to all participants in the model training; the trusted execution program comprises a test module and a hash function sandbox H(x);
Step two: the participant trains the model with its local data set to obtain a gradient and runs the trusted execution program; the trusted execution program extracts the participant's gradient, updates the model with it, runs the test module to test the accuracy of the new model, and then calculates the contribution degree of the gradient;
the test module contains a test data set and stores the current accuracy of the model to be trained; after the updated model passes the accuracy test, the change in accuracy before and after the update is calculated and recorded as the contribution degree of the corresponding gradient;
Step three: the participant encodes the gradient extracted by the trusted execution program according to the encryption algorithm of the non-interactive SIGMA protocol to obtain x_i, encrypts it with each of the two generators of the encryption algorithm to obtain x_iG and x_iH, and sends them to the model training party;
Step four: the participant generates a random value and encodes it to obtain v_i, encrypts it with each of the two generators of the encryption algorithm to obtain v_iG and v_iH, then inputs all ciphertexts generated so far into the hash function sandbox H(x) in the trusted execution program, which outputs a hash value c, and generates the commitment Com_i = v_i − x_i·c;
Step five: participant upload promise Com i 、v i G、v i H and the degree of contribution of the gradient to a model trainer, and the model trainer uses the currently received x i G、x i H、v i G、v i H and the backup hash function sandbox H (x) calculate a hash value c; verification formula v i G=rG+c(x i G) And v i H=rH+c(x i H) If the judgment result is positive, the model training party passes the verification, the ciphertext gradient uploaded by the participant and the gradient plaintext locally participating in updating and testing are indicated to be corresponding, and the gradient ciphertext and the contribution degree of the gradient ciphertext are bound and recorded in a database; if the verification is not passed, the ciphertext does not correspond to the plaintext, the gradient is abandoned to be recorded, and errors are fed back to the participants.
2. The SIGMA-protocol-based federated learning contribution degree evaluation method of claim 1, wherein: the SIGMA protocol uses an elliptic curve or discrete logarithm encryption algorithm, and the protocol also includes the two generators G and H of that algorithm.
3. The SIGMA-protocol-based federated learning contribution degree evaluation method of claim 1 or 2, wherein: the trusted execution program has a signature function; when a module finishes running, its result is signed, and a third party can confirm whether the program ran normally by querying the signature.
4. The SIGMA-protocol-based federated learning contribution degree evaluation method of claim 2, wherein: the input of the hash function sandbox H(x) is all ciphertexts involved in the verification operation and its output is a random value; different inputs yield different output values.
CN202110292470.XA 2021-03-18 2021-03-18 Joint learning contribution degree evaluation method based on SIGMA protocol Active CN112949865B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110292470.XA CN112949865B (en) 2021-03-18 2021-03-18 Joint learning contribution degree evaluation method based on SIGMA protocol

Publications (2)

Publication Number Publication Date
CN112949865A (en) 2021-06-11
CN112949865B (en) 2022-10-28

Family

ID=76227002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110292470.XA Active CN112949865B (en) 2021-03-18 2021-03-18 Joint learning contribution degree evaluation method based on SIGMA protocol

Country Status (1)

Country Link
CN (1) CN112949865B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469371B (en) * 2021-07-01 2023-05-02 建信金融科技有限责任公司 Federal learning method and apparatus
CN113421251A (en) * 2021-07-05 2021-09-21 海南大学 Data processing method and system based on lung CT image
CN114912136B (en) * 2022-07-14 2022-10-28 之江实验室 Competition mechanism based cooperative analysis method and system for medical data on block chain
CN115423208A (en) * 2022-09-27 2022-12-02 深圳先进技术研究院 Electronic insurance value prediction method and device based on privacy calculation
CN115292738B (en) * 2022-10-08 2023-01-17 豪符密码检测技术(成都)有限责任公司 Method for detecting security and correctness of federated learning model and data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443063A (en) * 2019-06-26 2019-11-12 电子科技大学 The method of the federal deep learning of self adaptive protection privacy
CN111241580A (en) * 2020-01-09 2020-06-05 广州大学 Trusted execution environment-based federated learning method
CN111950739A (en) * 2020-08-13 2020-11-17 深圳前海微众银行股份有限公司 Data processing method, device, equipment and medium based on block chain
CN112329940A (en) * 2020-11-02 2021-02-05 北京邮电大学 Personalized model training method and system combining federal learning and user portrait

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200218940A1 (en) * 2019-01-08 2020-07-09 International Business Machines Corporation Creating and managing machine learning models in a shared network environment
US11443240B2 (en) * 2019-09-06 2022-09-13 Oracle International Corporation Privacy preserving collaborative learning with domain adaptation

Also Published As

Publication number Publication date
CN112949865A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
CN112949865B (en) Joint learning contribution degree evaluation method based on SIGMA protocol
CN110189192B (en) Information recommendation model generation method and device
CN113159327B (en) Model training method and device based on federal learning system and electronic equipment
CN108616539B (en) A kind of method and system of block chain transaction record access
CN113204787B (en) Block chain-based federated learning privacy protection method, system, device and medium
WO2022206510A1 (en) Model training method and apparatus for federated learning, and device and storage medium
KR102145701B1 (en) Prevent false display of input data by participants in secure multi-party calculations
CN111814985A (en) Model training method under federated learning network and related equipment thereof
CN112288100A (en) Method, system and device for updating model parameters based on federal learning
CN112347500B (en) Machine learning method, device, system, equipment and storage medium of distributed system
CN109547477A (en) A kind of data processing method and its device, medium, terminal
CN112270597A (en) Business processing and credit evaluation model training method, device, equipment and medium
CN110879827B (en) Information processing method and equipment based on block chain network
CN113992360A (en) Block chain cross-chain-based federated learning method and equipment
CN105187218B (en) A kind of digitized record signature, the verification method of multi-core infrastructure
CN113435121B (en) Model training verification method, device, equipment and medium based on federal learning
CN114841363A (en) Privacy protection and verifiable federal learning method based on zero-knowledge proof
CN115455476A (en) Longitudinal federal learning privacy protection method and system based on multi-key homomorphic encryption
CN114443754A (en) Block chain-based federated learning processing method, device, system and medium
CN111914281B (en) Bayesian model training method and device based on blockchain and homomorphic encryption
CN114329621A (en) Block chain cross-chain interactive data integrity verification method
CN117290887A (en) Account blockchain-based accountability privacy protection intelligent contract implementation method
CN111769945B (en) Auction processing method based on block chain and block chain link point
CN113807157A (en) Method, device and system for training neural network model based on federal learning
CN116260662A (en) Tracing storage method, tracing storage system and tracing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant