CN116187433B - Federated quasi-Newton training method and device based on secret sharing, and storage medium - Google Patents

Federated quasi-Newton training method and device based on secret sharing, and storage medium

Info

Publication number
CN116187433B
Authority
CN
China
Prior art keywords
initiator
fragments
segmentation
gradient
model parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310474442.9A
Other languages
Chinese (zh)
Other versions
CN116187433A (en)
Inventor
徐宸 (Xu Chen)
任江哲 (Ren Jiangzhe)
毛仁歆 (Mao Renxin)
李陆沁 (Li Luqin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lanxiang Zhilian Hangzhou Technology Co ltd
Original Assignee
Lanxiang Zhilian Hangzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lanxiang Zhilian Hangzhou Technology Co ltd filed Critical Lanxiang Zhilian Hangzhou Technology Co ltd
Priority to CN202310474442.9A priority Critical patent/CN116187433B/en
Publication of CN116187433A publication Critical patent/CN116187433A/en
Application granted granted Critical
Publication of CN116187433B publication Critical patent/CN116187433B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/602 Providing cryptographic facilities or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08 Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0816 Key establishment, i.e. cryptographic processes or cryptographic protocols whereby a shared secret becomes available to two or more parties, for subsequent use
    • H04L9/085 Secret sharing or secret splitting, e.g. threshold schemes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Bioethics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the present application disclose a federated quasi-Newton training method, device, and storage medium based on secret sharing. The method comprises: initializing a Hessian matrix to obtain Hessian matrix shares, and sending the corresponding Hessian matrix shares and hyperparameter shares to the participants; then, based on the Hessian matrix shares and hyperparameter shares, and using the acquired initiator data shares, participant data shares, initiator model parameter shares, participant model parameter shares, and initiator label shares, iterating a stochastic quasi-Newton method for a preset number of rounds to obtain updated global model parameter shares, thereby completing training of the model. The method addresses two problems in the prior art: federated quasi-Newton schemes based on homomorphic encryption require a trusted third party to compute the Hessian matrix, while schemes based on secret sharing only approximate the Hessian matrix and are limited to the logistic regression algorithm, so that their scope of application is narrow and model accuracy suffers.

Description

Federated quasi-Newton training method and device based on secret sharing, and storage medium
Technical Field
The present application relates to the technical field of computer information processing, and in particular to a federated quasi-Newton training method, device, and storage medium based on secret sharing.
Background
The quasi-Newton method is a numerical convex-optimization method that moves a function's argument step by step toward an extreme point of the function. It is widely used in machine learning algorithms that maximize or minimize an objective function, such as logistic regression (a classification model) and linear regression (a regression model). Because it uses both first-order gradient information and second-order Hessian information, it converges faster and to higher precision than pure gradient methods. The stochastic quasi-Newton method additionally allows training on different mini-batches of data, further improving performance. In the federated stochastic quasi-Newton method, the entire flow of computing the gradient and the Hessian matrix is protected by secret-sharing encryption, ensuring that data privacy is not leaked.
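As a toy illustration of the convergence advantage that second-order (Hessian) information provides (background only, not part of the claimed scheme; all values are illustrative), compare one exact Newton-type step with one gradient step on a badly conditioned quadratic:

```python
import numpy as np

# Badly conditioned quadratic f(w) = 0.5 * w.T @ A @ w, with minimum at w = 0.
A = np.diag([1.0, 100.0])
grad = lambda w: A @ w
w0 = np.array([1.0, 1.0])

# One Newton-type step with the exact inverse Hessian lands on the optimum.
w_newton = w0 - np.linalg.inv(A) @ grad(w0)

# One gradient step with the largest stable step size 1/L (L = 100):
# the coordinate with small curvature barely moves at all.
w_gd = w0 - (1.0 / 100.0) * grad(w0)
```

The worse the conditioning, the larger this gap, which is why quasi-Newton methods build an inverse-Hessian approximation instead of relying on the raw gradient direction.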
Data exchanged during training of the vertical federated stochastic quasi-Newton method should meet the following extended security requirements:
the feature data of the initiator and of the participants are not leaked;
the model parameters of each party are not leaked, unless the two parties negotiate to aggregate the model parameters at one party;
the first-order gradient and second-order Hessian matrix of the initiator's samples are not leaked, nor are the model parameter differences and gradient differences;
the label data of the initiator's samples are not leaked.
However, current federated quasi-Newton methods in industry either are based on homomorphic encryption and require a trusted third party to compute the Hessian matrix, or are based on secret sharing but only approximate the Hessian matrix and are limited to the logistic regression algorithm, so that their scope of application is narrow and model accuracy suffers.
Disclosure of Invention
The embodiments of the present application aim to provide a federated quasi-Newton training method, device, and storage medium based on secret sharing, to solve the problems in the prior art that federated quasi-Newton schemes based on homomorphic encryption require a trusted third party to compute the Hessian matrix, while schemes based on secret sharing only approximate the Hessian matrix and are limited to the logistic regression algorithm, so that their scope of application is narrow and model accuracy suffers.
To achieve the above object, an embodiment of the present application provides a federated quasi-Newton training method based on secret sharing, applied to an initiator, comprising: initializing a Hessian matrix to obtain Hessian matrix shares, and sending the corresponding Hessian matrix shares and hyperparameter shares to the participants;
based on the Hessian matrix shares and the hyperparameter shares, and using the acquired initiator data shares, participant data shares, initiator model parameter shares, participant model parameter shares, and initiator label shares, obtaining updated global model parameter shares by iterating a stochastic quasi-Newton method for a preset number of rounds, so as to complete training of the model.
Optionally, the obtaining of updated global model parameter shares by iterating the stochastic quasi-Newton method for a preset number of rounds to complete training of the model comprises:
S1, obtaining the step size of the current round according to the current iteration round number, and sending step-size shares to the participants;
S2, acquiring, in the shared state, the initiator data shares, participant data shares, initiator model parameter shares, participant model parameter shares, and initiator label shares, and obtaining the initiator gradient shares and participant gradient shares corresponding to the gradient computed from the loss function of the training task;
S3, acquiring the global gradient shares obtained by concatenating the initiator gradient shares and the participant gradient shares, and the global model parameter shares obtained by concatenating the initiator model parameter shares and the participant model parameter shares;
S4, obtaining the Hessian matrix shares, the global gradient shares, and the step-size shares, and multiplying them to obtain update vector shares with which to update the global model parameter shares, thereby updating the parameters of the trained model with the updated global model parameter shares.
Optionally, the obtaining of updated global model parameter shares by iterating the stochastic quasi-Newton method for a preset number of rounds to complete training of the model further comprises:
S5, repeating steps S1 to S3 on the model whose parameters have been updated with the global model parameter shares, to obtain new global gradient shares;
S6, obtaining gradient difference vector shares based on the global gradient shares obtained in steps S3 and S5;
S7, updating the Hessian matrix shares based on the update vector shares and the gradient difference vector shares obtained in steps S4 and S6;
S8, incrementing the iteration round number by one and entering the next iteration, so as to complete training of the model.
Optionally, the obtaining of the gradient difference vector shares based on the global gradient shares obtained in steps S3 and S5 comprises:
calculating the gradient difference vector share as ⟨y⟩ = ⟨g'⟩ - ⟨g⟩ + λ⟨s⟩, where ⟨g⟩ and ⟨g'⟩ denote the global gradient shares of steps S3 and S5 (each computed from the data shares and the global model parameter shares), λ denotes the hyperparameter share, and ⟨s⟩ denotes the update vector share.
To achieve the above object, an embodiment of the present application further provides another federated quasi-Newton training method based on secret sharing, applied to a participant, comprising:
acquiring the Hessian matrix shares and hyperparameter shares sent by an initiator, wherein the Hessian matrix shares are obtained by the initiator initializing a Hessian matrix;
based on the Hessian matrix shares and the hyperparameter shares, and using the acquired initiator data shares, participant data shares, initiator model parameter shares, participant model parameter shares, and initiator label shares, obtaining updated global model parameter shares by iterating a stochastic quasi-Newton method for a preset number of rounds, so as to complete training of the model.
Optionally, the obtaining of updated global model parameter shares by iterating the stochastic quasi-Newton method for a preset number of rounds to complete training of the model comprises:
S1, acquiring the step-size shares corresponding to the step size of the current iteration round;
S2, acquiring, in the shared state, the initiator data shares, participant data shares, initiator model parameter shares, participant model parameter shares, and initiator label shares, and obtaining the initiator gradient shares and participant gradient shares corresponding to the gradient computed from the loss function of the training task;
S3, acquiring the global gradient shares obtained by concatenating the initiator gradient shares and the participant gradient shares, and the global model parameter shares obtained by concatenating the initiator model parameter shares and the participant model parameter shares;
S4, obtaining the Hessian matrix shares, the global gradient shares, and the step-size shares, and multiplying them to obtain update vector shares with which to update the global model parameter shares, thereby updating the parameters of the trained model with the updated global model parameter shares.
Optionally, the obtaining of updated global model parameter shares by iterating the stochastic quasi-Newton method for a preset number of rounds to complete training of the model further comprises:
S5, repeating steps S1 to S3 on the model whose parameters have been updated with the global model parameter shares, to obtain new global gradient shares;
S6, obtaining gradient difference vector shares based on the global gradient shares obtained in steps S3 and S5;
S7, updating the Hessian matrix shares based on the update vector shares and the gradient difference vector shares obtained in steps S4 and S6;
S8, incrementing the iteration round number by one and entering the next iteration, so as to complete training of the model.
In order to achieve the above object, the present application further provides a federated quasi-Newton training device based on secret sharing, comprising: a memory; and
a processor coupled to the memory, the processor configured to:
initialize a Hessian matrix to obtain Hessian matrix shares, and send the corresponding Hessian matrix shares and hyperparameter shares to the participants;
based on the Hessian matrix shares and the hyperparameter shares, and using the acquired initiator data shares, participant data shares, initiator model parameter shares, participant model parameter shares, and initiator label shares, obtain updated global model parameter shares by iterating a stochastic quasi-Newton method for a preset number of rounds, so as to complete training of the model.
To achieve the above object, the present application further provides another federated quasi-Newton training device based on secret sharing, comprising: a memory; and
a processor coupled to the memory, the processor configured to:
acquire the Hessian matrix shares and hyperparameter shares sent by an initiator, wherein the Hessian matrix shares are obtained by the initiator initializing a Hessian matrix;
based on the Hessian matrix shares and the hyperparameter shares, and using the acquired initiator data shares, participant data shares, initiator model parameter shares, participant model parameter shares, and initiator label shares, obtain updated global model parameter shares by iterating a stochastic quasi-Newton method for a preset number of rounds, so as to complete training of the model.
To achieve the above object, the present application also provides a computer storage medium having stored thereon a computer program which, when executed by a machine, implements the steps of the method as described above.
The embodiments of the present application have the following advantages:
an embodiment of the present application provides a federated quasi-Newton training method based on secret sharing, applied to an initiator, comprising: initializing a Hessian matrix to obtain Hessian matrix shares, and sending the corresponding Hessian matrix shares and hyperparameter shares to the participants; based on the Hessian matrix shares and the hyperparameter shares, and using the acquired initiator data shares, participant data shares, initiator model parameter shares, participant model parameter shares, and initiator label shares, obtaining updated global model parameter shares by iterating a stochastic quasi-Newton method for a preset number of rounds, so as to complete training of the model.
With this method, training can be completed without a third party; the scheme computes the exact value of the Hessian matrix, is applicable to any machine learning task that maximizes or minimizes an objective function, and improves model accuracy.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It will be apparent to those skilled in the art that the drawings described below are merely exemplary, and that other embodiments may be derived from them without inventive effort.
FIG. 1 is a flowchart of a federated quasi-Newton training method based on secret sharing according to an embodiment of the present application;
FIG. 2a is a timing diagram of the initiator in a federated quasi-Newton training method based on secret sharing according to an embodiment of the present application;
FIG. 2b is a timing diagram of a participant in a federated quasi-Newton training method based on secret sharing according to an embodiment of the present application;
FIG. 3 is a flowchart of another federated quasi-Newton training method based on secret sharing according to an embodiment of the present application;
FIG. 4 is a block diagram of a federated quasi-Newton training device based on secret sharing according to an embodiment of the present application.
Detailed Description
Further advantages and effects of the present application will become apparent to those skilled in the art from the disclosure below, which describes certain specific embodiments by way of illustration, not all embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
In addition, the technical features of the different embodiments of the present application described below may be combined with each other as long as they do not conflict with each other.
An embodiment of the present application provides a federated quasi-Newton training method based on secret sharing. Referring to FIGS. 1, 2a, and 2b, FIG. 1 is a flowchart of the method provided in an embodiment of the present application. It should be understood that the method may include additional blocks not shown and/or that some blocks shown may be omitted; the scope of the present application is not limited in this respect.
An application scenario of the method provided in this embodiment is as follows: bank A and internet company B cooperate to build a classification model that identifies customers likely to repay loans late, without either party revealing its data. The two parties first perform private set intersection to screen out the customers they share; then, based on the present scheme, they perform vertical federated modeling with log loss as the objective function and train a binary classification model, thereby screening out customers prone to overdue loan repayment.
There may be two or more participants in the present application; when there are two or more, each participant obtains its own distinct shares and takes part in the training task of this embodiment. The following description illustrates the scheme with one initiator and one participant.
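To make the notion of "shares" concrete, the following is a minimal additive secret-sharing sketch over a prime field; the modulus and encoding are illustrative assumptions, not the patent's actual parameters:

```python
import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus (illustrative choice)

def share(secret, n_parties):
    """Split an integer secret into n additive shares that sum to it mod P.
    Any n-1 of the shares are uniformly random and reveal nothing alone."""
    parts = [random.randrange(P) for _ in range(n_parties - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def reconstruct(parts):
    """Recover the secret by summing all shares mod P."""
    return sum(parts) % P

# Linearity: parties can add share-wise without ever revealing a or b.
a_sh, b_sh = share(10, 2), share(32, 2)
sum_sh = [(x + y) % P for x, y in zip(a_sh, b_sh)]
```

This linearity is what lets each party carry out its part of the gradient and parameter updates locally on shares; only multiplications require an interactive sub-protocol.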
At step 101, a Hessian matrix is initialized to obtain Hessian matrix shares, and the corresponding Hessian matrix shares and hyperparameter shares are sent to the participants.
Specifically, the initiator initializes the Hessian matrix as a diagonal matrix. Referring to the formula in FIG. 2a, H_0 = I, where I is the identity matrix whose dimension equals the feature dimension, and the initiator sends the Hessian matrix share ⟨H_0⟩ to the participant; each corresponding participant receives its Hessian matrix share. The initiator also sends the participant its hyperparameter shares, including the share of the constant c, and the corresponding participants receive their shares (the other quantities annotated in FIGS. 2a and 2b are likewise hyperparameters).
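A sketch of this initialization step, assuming an additive sharing of a fixed-point-encoded identity matrix between the initiator and one participant (the SCALE factor, the modulus P, and the two-party split are illustrative assumptions, not the patent's stated encoding):

```python
import numpy as np

SCALE = 2**16          # fixed-point scaling factor (illustrative)
P = 2**61 - 1          # prime field modulus (illustrative)

def share_matrix(M, rng):
    """Additively share a fixed-point-encoded matrix between two parties."""
    enc = np.round(M * SCALE).astype(np.int64) % P
    r = rng.integers(0, P, size=M.shape, dtype=np.int64)  # uniformly random share
    return r, (enc - r) % P

d = 4                               # feature dimension (illustrative)
rng = np.random.default_rng(0)
H0 = np.eye(d)                      # initial Hessian approximation: identity matrix
share_kept, share_sent = share_matrix(H0, rng)   # initiator keeps one, sends the other
```

Neither matrix alone reveals H0; their element-wise sum mod P recovers the encoded identity.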
At step 102, based on the Hessian matrix shares and the hyperparameter shares, and using the acquired initiator data shares, participant data shares, initiator model parameter shares, participant model parameter shares, and initiator label shares, updated global model parameter shares are obtained by iterating a stochastic quasi-Newton method for a preset number of rounds, so as to complete training of the model.
In some embodiments, the obtaining of updated global model parameter shares by iterating the stochastic quasi-Newton method for a preset number of rounds to complete training of the model includes:
S1, obtaining the step size of the current round according to the current iteration round number, and sending step-size shares to the participants;
S2, acquiring, in the shared state, the initiator data shares, participant data shares, initiator model parameter shares, participant model parameter shares, and initiator label shares, and obtaining the initiator gradient shares and participant gradient shares corresponding to the gradient computed from the loss function of the training task;
S3, acquiring the global gradient shares obtained by concatenating the initiator gradient shares and the participant gradient shares, and the global model parameter shares obtained by concatenating the initiator model parameter shares and the participant model parameter shares;
S4, obtaining the Hessian matrix shares, the global gradient shares, and the step-size shares, and multiplying them to obtain update vector shares with which to update the global model parameter shares, thereby updating the parameters of the trained model with the updated global model parameter shares.
Specifically, referring to steps 3) to 12) of FIGS. 2a and 2b, the iterative steps of the stochastic quasi-Newton method are performed until a stopping condition is met, the condition being that the preset number of iteration rounds is reached or the trained model meets the requirements. The steps include:
with t the current iteration round number, the initiator computes the step size of the round from t (a step size α_t that decreases as t grows; the exact schedule is given by the formula in FIG. 2a) and sends the participant its step-size share ⟨α_t⟩; the participant receives the step-size share;
the initiator and/or the participant: obtain, in the shared state, the initiator model parameter shares, participant model parameter shares, initiator data shares, participant data shares, and initiator label shares (the shared quantities shown in FIGS. 2a and 2b). The initiator and participant data are the data each party uses for model training; the initiator labels are the predicted attribute of each training sample and are held by the initiator;
the initiator and/or the participant: compute the gradient according to the loss function of the training task; both sides obtain, in the shared state, the participant gradient shares ⟨g_B⟩ and the initiator gradient shares ⟨g_A⟩;
the initiator and/or the participant: concatenate the initiator gradient shares and the participant gradient shares to obtain the global gradient shares, i.e. ⟨g⟩ = concat(⟨g_A⟩, ⟨g_B⟩); likewise, concatenate the initiator model parameter shares and the participant model parameter shares to obtain the global model parameter shares, i.e. ⟨w⟩ = concat(⟨w_A⟩, ⟨w_B⟩);
the initiator and/or the participant: multiply the Hessian matrix shares by the global gradient shares to obtain the update direction shares, i.e. ⟨d⟩ = ⟨H⟩⟨g⟩;
the initiator and/or the participant: multiply the step-size shares by the update direction shares to obtain the update vector shares, i.e. ⟨s⟩ = ⟨α_t⟩⟨d⟩, and update the global model parameter shares, i.e. ⟨w⟩ ← ⟨w⟩ - ⟨s⟩;
steps 4) to 6) of FIGS. 2a and 2b are then repeated on the model whose parameters have been updated with the global model parameter shares, to obtain the new global gradient shares ⟨g'⟩.
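The per-round steps above can be mirrored in plaintext as follows; in the actual protocol every quantity below exists only as secret shares, and the step-size schedule, alpha0, and lam are illustrative assumptions rather than the patent's exact formulas:

```python
import numpy as np

def logistic_grad(w, X, labels):
    # Mean log-loss gradient for logistic regression.
    return X.T @ (1.0 / (1.0 + np.exp(-X @ w)) - labels) / len(labels)

def round_update(w, H, X, labels, t, alpha0=0.5, lam=1e-3):
    """One plaintext round mirroring the shared protocol: step size, gradient,
    update direction, update vector, parameter update, gradient difference.
    alpha0, lam, and the decay schedule are illustrative assumptions."""
    alpha = alpha0 / (t + 1)          # assumed decaying step-size schedule
    g = logistic_grad(w, X, labels)
    d = H @ g                         # update direction d = H g
    s = alpha * d                     # update vector
    w_new = w - s                     # parameter update
    g_new = logistic_grad(w_new, X, labels)
    y_diff = g_new - g + lam * s      # regularized gradient difference
    return w_new, s, y_diff
```

In the protocol, the matrix-vector product H @ g and the gradient itself are computed by secure multiplication on shares; everything else is share-wise linear algebra.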
In some embodiments, the obtaining of updated global model parameter shares by iterating the stochastic quasi-Newton method for a preset number of rounds to complete training of the model further includes:
S5, repeating steps S1 to S3 on the model whose parameters have been updated with the global model parameter shares, to obtain new global gradient shares;
S6, obtaining gradient difference vector shares based on the global gradient shares obtained in steps S3 and S5;
S7, updating the Hessian matrix shares based on the update vector shares and the gradient difference vector shares obtained in steps S4 and S6;
S8, incrementing the iteration round number by one and entering the next iteration, so as to complete training of the model.
In particular, with reference to FIGS. 2a and 2b, the initiator and/or the participant: compute the gradient difference vector shares based on the global gradient shares obtained in 9) and 6) of FIGS. 2a and 2b, namely ⟨y⟩ = ⟨g'⟩ - ⟨g⟩ + λ⟨s⟩, where ⟨g⟩ and ⟨g'⟩ denote the old and new global gradient shares (each computed from the data shares and the global model parameter shares), λ denotes the hyperparameter share, and ⟨s⟩ denotes the update vector shares;
the initiator and/or the participant: update the Hessian matrix shares based on the update vector shares and the gradient difference vector shares obtained in 8) and 10) of FIGS. 2a and 2b;
specifically, the update takes the form of the standard BFGS inverse-Hessian refresh, evaluated on shares: ⟨H⟩ ← (I - ρ⟨s⟩⟨y⟩ᵀ)⟨H⟩(I - ρ⟨y⟩⟨s⟩ᵀ) + ρ⟨s⟩⟨s⟩ᵀ, where ρ = 1/(⟨y⟩ᵀ⟨s⟩), ⟨s⟩ is the update vector share, and ⟨y⟩ is the gradient difference vector share;
the initiator and/or the participant: set t = t + 1 and enter the next iteration.
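The Hessian refresh in the step above matches the shape of the standard BFGS inverse-Hessian update. A plaintext sketch follows (the protocol would evaluate the same algebra on shares; the curvature safeguard and the sign convention note are illustrative additions, not taken from the patent):

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """Standard BFGS refresh of the inverse-Hessian approximation H.
    s is the parameter change w_new - w_old (note the sign convention),
    y is the gradient difference; both would be secret shares in the protocol."""
    sy = float(y @ s)
    if sy <= 1e-12:                 # curvature safeguard (illustrative addition)
        return H
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    # (I - rho s y^T) H (I - rho y s^T) + rho s s^T
    return V @ H @ V.T + rho * np.outer(s, s)
```

By construction the refreshed matrix satisfies the secant condition H_new @ y == s, which is what makes H act like an inverse Hessian along the most recent update direction.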
With this method, training can be completed without a third party or coordinating node; the scheme computes the exact value of the Hessian matrix, is applicable to any machine learning task that maximizes or minimizes an objective function, and improves model accuracy. Because the scheme is stochastic, it can train on mini-batches of data, shortening training time. The entire training process is encrypted, with no risk of privacy leakage.
An embodiment of the present application provides another federated quasi-Newton training method based on secret sharing. Referring to FIG. 3, FIG. 3 is a flowchart of this method provided in an embodiment of the present application. It should be understood that the method may include additional blocks not shown and/or that some blocks shown may be omitted; the scope of the present application is not limited in this respect.
At step 201, the Hessian matrix shares and hyperparameter shares sent by the initiator are acquired; the Hessian matrix shares are obtained by the initiator initializing a Hessian matrix.
At step 202, based on the Hessian matrix shares and the hyperparameter shares, and using the acquired initiator data shares, participant data shares, initiator model parameter shares, participant model parameter shares, and initiator label shares, updated global model parameter shares are obtained by iterating a stochastic quasi-Newton method for a preset number of rounds, so as to complete training of the model.
In some embodiments, the obtaining of updated global model parameter shares by iterating the stochastic quasi-Newton method for a preset number of rounds to complete training of the model includes:
S1, acquiring the step-size shares corresponding to the step size of the current iteration round;
S2, acquiring, in the shared state, the initiator data shares, participant data shares, initiator model parameter shares, participant model parameter shares, and initiator label shares, and obtaining the initiator gradient shares and participant gradient shares corresponding to the gradient computed from the loss function of the training task;
S3, acquiring the global gradient shares obtained by concatenating the initiator gradient shares and the participant gradient shares, and the global model parameter shares obtained by concatenating the initiator model parameter shares and the participant model parameter shares;
S4, obtaining the Hessian matrix shares, the global gradient shares, and the step-size shares, and multiplying them to obtain update vector shares with which to update the global model parameter shares, thereby updating the parameters of the trained model with the updated global model parameter shares.
In some embodiments, obtaining the updated global model parameter shards with the stochastic quasi-Newton method over the preset number of iteration rounds to complete training of the model further includes:
S5, repeating steps S1 to S3 with the model whose parameters have been updated by the global model parameter shards, to obtain new global gradient shards;
S6, obtaining gradient difference vector shards from the global gradient shards obtained in steps S3 and S5;
S7, updating the Hessian matrix shards based on the update vector shards and the gradient difference vector shards obtained in steps S4 and S6;
S8, incrementing the iteration round number by one and entering the next iteration round, so as to complete training of the model.
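Step S7 refreshes the Hessian approximation from the update vector (S4) and the gradient difference vector (S6). The patent does not spell out which quasi-Newton update it uses; the textbook BFGS update of an inverse-Hessian approximation is one natural candidate, shown here in plaintext for reference (in the protocol every quantity would be a shard):

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    # Standard BFGS update of an inverse-Hessian approximation H,
    # given the parameter change s and the gradient difference y.
    # By construction it enforces the secant condition H_new @ y == s.
    rho = 1.0 / float(y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

H = np.eye(2)
s = np.array([1.0, 2.0])   # update vector from step S4
y = np.array([0.5, 1.5])   # gradient difference from step S6
H_new = bfgs_inverse_update(H, s, y)
assert np.allclose(H_new @ y, s)   # secant condition holds
```

The update keeps H symmetric, and under secret sharing the outer products and matrix products would again be evaluated with an interactive multiplication protocol.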
For specific implementation details, refer to the foregoing method embodiments; they are not repeated here.
Fig. 4 is a block diagram of a secret-sharing-based federated quasi-Newton training device according to an embodiment of the present application. The device comprises:
a memory 301; and a processor 302 connected to the memory 301, the processor 302 being configured to: initialize a Hessian matrix to obtain Hessian matrix shards, and send the corresponding Hessian matrix shards and hyperparameter shards to the participant;
based on the Hessian matrix shards and the hyperparameter shards, and using the acquired initiator data shards, participant data shards, initiator model parameter shards, participant model parameter shards, and initiator label shards, obtain updated global model parameter shards with a stochastic quasi-Newton method over a preset number of iteration rounds, so as to complete training of the model.
In some embodiments, the processor 302 is further configured such that obtaining the updated global model parameter shards with the stochastic quasi-Newton method over the preset number of iteration rounds to complete training of the model includes:
S1, obtaining the step size of the current round according to the current iteration round number, and sending the step-size shard to the participant;
S2, acquiring, in the secret-shared state, the initiator data shards, the participant data shards, the initiator model parameter shards, the participant model parameter shards, and the initiator label shards, and obtaining the initiator gradient shards and participant gradient shards corresponding to the gradient computed from the loss function of the training task;
S3, acquiring the global gradient shards formed by concatenating the initiator gradient shards and the participant gradient shards, and the global model parameter shards formed by concatenating the initiator model parameter shards and the participant model parameter shards;
S4, multiplying the acquired Hessian matrix shards, global gradient shards, and step-size shards to obtain update vector shards, which are used to update the global model parameter shards, so that the parameters of the trained model are updated with the updated global model parameter shards.
In some embodiments, the processor 302 is further configured such that obtaining the updated global model parameter shards with the stochastic quasi-Newton method over the preset number of iteration rounds to complete training of the model further includes:
S5, repeating steps S1 to S3 with the model whose parameters have been updated by the global model parameter shards, to obtain new global gradient shards;
S6, obtaining gradient difference vector shards from the global gradient shards obtained in steps S3 and S5;
S7, updating the Hessian matrix shards based on the update vector shards and the gradient difference vector shards obtained in steps S4 and S6;
S8, incrementing the iteration round number by one and entering the next iteration round, so as to complete training of the model.
In some embodiments, the processor 302 is further configured such that obtaining the gradient difference vector shards from the global gradient shards of steps S3 and S5 includes:
calculating the gradient difference vector shards with a formula (rendered as an image in the original document) whose terms are the global gradient shards, the global model parameter shards, the data shards, the hyperparameter shards, and the update vector shards.
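One useful consequence of additive sharing is that a plain gradient difference needs no communication at all: subtraction is linear, so each party can subtract its old gradient shard from its new one locally. A minimal sketch of that local step (scheme and names assumed, not taken from the source):

```python
import random

MOD = 2**61 - 1  # illustrative modulus

def share(v):
    # Two-party additive sharing of a single value.
    r = random.randrange(MOD)
    return [r, (v - r) % MOD]

def reconstruct(shards):
    return sum(shards) % MOD

def local_difference(new_shard, old_shard):
    # Each party computes its shard of y = g_new - g_old on its own;
    # linearity of additive sharing makes interaction unnecessary.
    return (new_shard - old_shard) % MOD

g_old, g_new = 700, 1000
old_shards, new_shards = share(g_old), share(g_new)
diff_shards = [local_difference(n, o) for n, o in zip(new_shards, old_shards)]
assert reconstruct(diff_shards) == (g_new - g_old) % MOD
```

If the patent's formula adds regularization terms involving the data, hyperparameter, and update vector shards, those extra terms would need the same shared-multiplication machinery as step S4; only the linear part shown here is communication-free.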
In some embodiments, the processor 302 is configured to: acquire the Hessian matrix shards and hyperparameter shards sent by the initiator, the Hessian matrix shards being obtained by the initiator initializing a Hessian matrix;
based on the Hessian matrix shards and the hyperparameter shards, and using the acquired initiator data shards, participant data shards, initiator model parameter shards, participant model parameter shards, and initiator label shards, obtain updated global model parameter shards with a stochastic quasi-Newton method over a preset number of iteration rounds, so as to complete training of the model.
In some embodiments, the processor 302 is further configured such that obtaining the updated global model parameter shards with the stochastic quasi-Newton method over the preset number of iteration rounds to complete training of the model includes:
S1, acquiring the step-size shard corresponding to the step size for the current iteration round;
S2, acquiring, in the secret-shared state, the initiator data shards, the participant data shards, the initiator model parameter shards, the participant model parameter shards, and the initiator label shards, and obtaining the initiator gradient shards and participant gradient shards corresponding to the gradient computed from the loss function of the training task;
S3, acquiring the global gradient shards formed by concatenating the initiator gradient shards and the participant gradient shards, and the global model parameter shards formed by concatenating the initiator model parameter shards and the participant model parameter shards;
S4, multiplying the acquired Hessian matrix shards, global gradient shards, and step-size shards to obtain update vector shards, which are used to update the global model parameter shards, so that the parameters of the trained model are updated with the updated global model parameter shards.
In some embodiments, the processor 302 is further configured such that obtaining the updated global model parameter shards with the stochastic quasi-Newton method over the preset number of iteration rounds to complete training of the model further includes:
S5, repeating steps S1 to S3 with the model whose parameters have been updated by the global model parameter shards, to obtain new global gradient shards;
S6, obtaining gradient difference vector shards from the global gradient shards obtained in steps S3 and S5;
S7, updating the Hessian matrix shards based on the update vector shards and the gradient difference vector shards obtained in steps S4 and S6;
S8, incrementing the iteration round number by one and entering the next iteration round, so as to complete training of the model.
For specific implementation details, refer to the foregoing method embodiments; they are not repeated here.
The present application may be a method, apparatus, system, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for performing various aspects of the present application.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, mechanical encoding devices such as punch cards or in-groove protrusion structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber-optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device, or to an external computer or external storage device, over a network such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network interface card or network interface in each computing/processing device receives the computer readable program instructions from the network and forwards them for storage in a computer readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the internet using an internet service provider). In some embodiments, aspects of the present application are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of the computer readable program instructions, the electronic circuitry being able to execute the computer readable program instructions.
Various aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Note that all features disclosed in this specification (including any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent, or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is only one example of a generic set of equivalent or similar features. Where the terms "further", "preferably", "still further", or "more preferably" are used, the description that follows builds on the preceding embodiment, and the content after such a term combines with the preceding embodiment to form a complete further embodiment. Several such "further", "preferably", or "still further" passages following the same embodiment may be combined arbitrarily to form additional embodiments.
While the application has been described in detail in the foregoing general description and specific examples, it will be apparent to those skilled in the art that modifications and improvements can be made thereto. Accordingly, such modifications or improvements may be made without departing from the spirit of the application and are intended to be within the scope of the application as claimed.

Claims (5)

1. A secret-sharing-based federated quasi-Newton training method, the method being applied to an initiator and comprising:
building a classification model to identify customers prone to overdue repayment, on the premise that neither the initiator nor the participant reveals its own data, wherein the initiator and the participant first perform a privacy-preserving intersection to screen out the customers they have in common, then carry out vertical federated modeling with log loss as the objective function and train the classification model, so as to screen out the customers prone to overdue repayment; specifically comprising:
initializing a Hessian matrix to obtain Hessian matrix shards, and sending the corresponding Hessian matrix shards and hyperparameter shards to the participant;
based on the Hessian matrix shards and the hyperparameter shards, and using the acquired initiator data shards, participant data shards, initiator model parameter shards, participant model parameter shards, and initiator label shards, obtaining updated global model parameter shards with a stochastic quasi-Newton method over a preset number of iteration rounds, so as to complete training of a model; wherein
the initiator data or the participant data are the data each party uses for model training, and the initiator label is the predicted attribute of each training sample and is located at the initiator;
obtaining the updated global model parameter shards with the stochastic quasi-Newton method over the preset number of iteration rounds to complete training of the model comprises:
S1, obtaining the step size of the current round according to the current iteration round number, and sending the step-size shard to the participant;
S2, acquiring, in the secret-shared state, the initiator data shards, the participant data shards, the initiator model parameter shards, the participant model parameter shards, and the initiator label shards, and obtaining the initiator gradient shards and participant gradient shards corresponding to the gradient computed from the loss function of the training task;
S3, acquiring the global gradient shards formed by concatenating the initiator gradient shards and the participant gradient shards, and the global model parameter shards formed by concatenating the initiator model parameter shards and the participant model parameter shards;
S4, multiplying the Hessian matrix shards, the global gradient shards, and the step-size shards to obtain update vector shards, which are used to update the global model parameter shards, so that the parameters of the trained model are updated with the updated global model parameter shards;
obtaining the updated global model parameter shards with the stochastic quasi-Newton method over the preset number of iteration rounds to complete training of the model further comprises:
S5, repeating steps S1 to S3 with the model whose parameters have been updated by the global model parameter shards, to obtain new global gradient shards;
S6, obtaining gradient difference vector shards from the global gradient shards obtained in steps S3 and S5; wherein
obtaining the gradient difference vector shards from the global gradient shards of steps S3 and S5 comprises:
calculating the gradient difference vector shards with a formula (rendered as an image in the original document) whose terms are the global gradient shards, the global model parameter shards, the data shards, the hyperparameter shards, and the update vector shards;
S7, updating the Hessian matrix shards based on the update vector shards and the gradient difference vector shards obtained in steps S4 and S6;
S8, incrementing the iteration round number by one and entering the next iteration round, so as to complete training of the model.
2. A secret-sharing-based federated quasi-Newton training method, the method being applied to a participant and comprising:
building a classification model to identify customers prone to overdue repayment, on the premise that neither the initiator nor the participant reveals its own data, wherein the initiator and the participant first perform a privacy-preserving intersection to screen out the customers they have in common, then carry out vertical federated modeling with log loss as the objective function and train the classification model, so as to screen out the customers prone to overdue repayment; specifically comprising:
acquiring the Hessian matrix shards and hyperparameter shards sent by the initiator, the Hessian matrix shards being obtained by the initiator initializing a Hessian matrix;
based on the Hessian matrix shards and the hyperparameter shards, and using the acquired initiator data shards, participant data shards, initiator model parameter shards, participant model parameter shards, and initiator label shards, obtaining updated global model parameter shards with a stochastic quasi-Newton method over a preset number of iteration rounds, so as to complete training of a model; wherein
the initiator data or the participant data are the data each party uses for model training, and the initiator label is the predicted attribute of each training sample and is located at the initiator;
obtaining the updated global model parameter shards with the stochastic quasi-Newton method over the preset number of iteration rounds to complete training of the model comprises:
S1, acquiring the step-size shard corresponding to the step size for the current iteration round;
S2, acquiring, in the secret-shared state, the initiator data shards, the participant data shards, the initiator model parameter shards, the participant model parameter shards, and the initiator label shards, and obtaining the initiator gradient shards and participant gradient shards corresponding to the gradient computed from the loss function of the training task;
S3, acquiring the global gradient shards formed by concatenating the initiator gradient shards and the participant gradient shards, and the global model parameter shards formed by concatenating the initiator model parameter shards and the participant model parameter shards;
S4, multiplying the Hessian matrix shards, the global gradient shards, and the step-size shards to obtain update vector shards, which are used to update the global model parameter shards, so that the parameters of the trained model are updated with the updated global model parameter shards;
obtaining the updated global model parameter shards with the stochastic quasi-Newton method over the preset number of iteration rounds to complete training of the model further comprises:
S5, repeating steps S1 to S3 with the model whose parameters have been updated by the global model parameter shards, to obtain new global gradient shards;
S6, obtaining gradient difference vector shards from the global gradient shards obtained in steps S3 and S5; wherein
obtaining the gradient difference vector shards from the global gradient shards of steps S3 and S5 comprises:
calculating the gradient difference vector shards with a formula (rendered as an image in the original document) whose terms are the global gradient shards, the global model parameter shards, the data shards, the hyperparameter shards, and the update vector shards;
S7, updating the Hessian matrix shards based on the update vector shards and the gradient difference vector shards obtained in steps S4 and S6;
S8, incrementing the iteration round number by one and entering the next iteration round, so as to complete training of the model.
3. A secret-sharing-based federated quasi-Newton training device, comprising:
a memory; and
a processor coupled to the memory, the processor configured to:
build a classification model to identify customers prone to overdue repayment, on the premise that neither the initiator nor the participant reveals its own data, wherein the initiator and the participant first perform a privacy-preserving intersection to screen out the customers they have in common, then carry out vertical federated modeling with log loss as the objective function and train the classification model, so as to screen out the customers prone to overdue repayment; specifically:
initialize a Hessian matrix to obtain Hessian matrix shards, and send the corresponding Hessian matrix shards and hyperparameter shards to the participant;
based on the Hessian matrix shards and the hyperparameter shards, and using the acquired initiator data shards, participant data shards, initiator model parameter shards, participant model parameter shards, and initiator label shards, obtain updated global model parameter shards with a stochastic quasi-Newton method over a preset number of iteration rounds, so as to complete training of a model; wherein
the initiator data or the participant data are the data each party uses for model training, and the initiator label is the predicted attribute of each training sample and is located at the initiator;
obtaining the updated global model parameter shards with the stochastic quasi-Newton method over the preset number of iteration rounds to complete training of the model comprises:
S1, obtaining the step size of the current round according to the current iteration round number, and sending the step-size shard to the participant;
S2, acquiring, in the secret-shared state, the initiator data shards, the participant data shards, the initiator model parameter shards, the participant model parameter shards, and the initiator label shards, and obtaining the initiator gradient shards and participant gradient shards corresponding to the gradient computed from the loss function of the training task;
S3, acquiring the global gradient shards formed by concatenating the initiator gradient shards and the participant gradient shards, and the global model parameter shards formed by concatenating the initiator model parameter shards and the participant model parameter shards;
S4, multiplying the Hessian matrix shards, the global gradient shards, and the step-size shards to obtain update vector shards, which are used to update the global model parameter shards, so that the parameters of the trained model are updated with the updated global model parameter shards;
obtaining the updated global model parameter shards with the stochastic quasi-Newton method over the preset number of iteration rounds to complete training of the model further comprises:
S5, repeating steps S1 to S3 with the model whose parameters have been updated by the global model parameter shards, to obtain new global gradient shards;
S6, obtaining gradient difference vector shards from the global gradient shards obtained in steps S3 and S5; wherein
obtaining the gradient difference vector shards from the global gradient shards of steps S3 and S5 comprises:
calculating the gradient difference vector shards with a formula (rendered as an image in the original document) whose terms are the global gradient shards, the global model parameter shards, the data shards, the hyperparameter shards, and the update vector shards;
S7, updating the Hessian matrix shards based on the update vector shards and the gradient difference vector shards obtained in steps S4 and S6;
S8, incrementing the iteration round number by one and entering the next iteration round, so as to complete training of the model.
4. A federal quasi-newton training device based on secret sharing, comprising:
a memory; and
a processor coupled to the memory, the processor configured to:
the method comprises the steps that under the premise that the initiator and the participant do not reveal the data, a classification model is built to identify clients which are prone to overdue repayment, the initiator and the participant carry out hidden interaction, clients shared by the initiator and the participant are screened out, then longitudinal federal modeling is carried out by using log loss as an objective function, and the classification model is trained, so that the clients which are prone to overdue repayment are screened out, and specifically the method comprises the following steps:
acquiring a hessian matrix fragment and a super-parameter fragment sent by the initiator, wherein the hessian matrix fragment is obtained by initializing a hessian matrix by the initiator;
Based on the hessian matrix segmentation and the super-parametric segmentation, and by utilizing the acquired initiator data segmentation, the participant data segmentation, the initiator model parameter segmentation, the participant model parameter segmentation and the initiator label segmentation, obtaining updated global model parameter segmentation by using a random quasi-Newton method through iteration of a preset iteration round number so as to complete training of a model; wherein the method comprises the steps of
Initiator data or participant data is data used by each party to perform model training, and an initiator tag is a predicted attribute of each training sample and is positioned at the initiator;
the method for obtaining updated global model parameter fragments by using random quasi-Newton method through iteration of a preset iteration round number to complete training of the model comprises the following steps:
s1, acquiring a step length fragment corresponding to the step length of the current iteration round number;
s2, acquiring the initiator data fragmentation, the participant data fragmentation, the initiator model parameter fragmentation, the participant model parameter fragmentation and the initiator label fragmentation under the sharing state, and obtaining an initiator gradient fragmentation and a participant gradient fragmentation corresponding to the gradient calculated according to the loss function of the training task;
S3, acquiring the global gradient fragments obtained after the initiator gradient fragments and the participant gradient fragments are spliced, and acquiring the global model parameter fragments obtained after the initiator model parameter fragments and the participant model parameter fragments are spliced;
s4, obtaining an update vector fragment after multiplying the hessian matrix fragment, the global gradient fragment and the step length fragment so as to update the global model parameter fragment, thereby updating the parameters of the trained model by using the updated global model parameter fragment; the method for obtaining updated global model parameter fragments by using the random quasi-Newton method through iteration of a preset iteration round number to complete training of the model further comprises the following steps:
S5, repeating steps S1 to S3 on the model whose parameters have been updated with the global model parameter shards, to obtain new global gradient shards;
S6, obtaining the gradient difference vector shards based on the global gradient shards obtained in steps S3 and S5; wherein
obtaining the gradient difference vector shards based on the global gradient shards of steps S3 and S5 comprises:
using the formula $\langle y_t \rangle = \langle g(w_{t+1}; X) \rangle - \langle g(w_t; X) \rangle + \langle \lambda \rangle \langle s_t \rangle$ to calculate the gradient difference vector shards, wherein $\langle g \rangle$ represents the global gradient shards, $\langle w \rangle$ represents the global model parameter shards, $\langle X \rangle$ represents the data shards, $\langle \lambda \rangle$ represents the hyperparameter shards, and $\langle s_t \rangle$ represents the update vector shards;
S7, updating the Hessian matrix shards based on the update vector shards and the gradient difference vector shards obtained in steps S4 and S6;
and S8, incrementing the iteration round number by one and entering the next round of iteration, so as to complete the training of the model.
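Taken together, steps S1 to S8 describe a stochastic quasi-Newton (BFGS-style) iteration. Stripping away the secret-sharing layer, the same loop for an L2-regularized logistic regression can be sketched as below; the loss, the step-size schedule, and all names are illustrative assumptions, not the claimed protocol itself. In the protocol, every array below would be held as shards, and the matrix-vector products in S4 and S7 would run over shards via secure multiplication.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_quasi_newton(X, y, rounds=40, eta=0.5, lam=0.01):
    """Plaintext sketch of the S1-S8 loop for L2-regularized logistic regression."""
    n, d = X.shape
    w = np.zeros(d)  # global model parameters (initiator + participant parts, per S3)
    H = np.eye(d)    # inverse-Hessian approximation (the "Hessian matrix shards")
    for t in range(rounds):
        eta_t = eta / (1.0 + 0.1 * t)                     # S1: step size for this round
        g = X.T @ (sigmoid(X @ w) - y) / n + lam * w      # S2+S3: global gradient
        s = -eta_t * (H @ g)                              # S4: update vector = -step * H * grad
        w = w + s                                         #     update global parameters
        g_new = X.T @ (sigmoid(X @ w) - y) / n + lam * w  # S5: gradient at new parameters
        dg = g_new - g                                    # S6: gradient difference vector
        sy = float(s @ dg)
        if sy > 1e-12:                                    # S7: BFGS-style inverse-Hessian update
            rho = 1.0 / sy
            I = np.eye(d)
            H = (I - rho * np.outer(s, dg)) @ H @ (I - rho * np.outer(dg, s)) \
                + rho * np.outer(s, s)
        # S8: the loop header advances the round counter
    return w
```

The curvature check `sy > 1e-12` keeps the inverse-Hessian update well defined; in the shared setting, neither party ever holds `w`, `H`, or the gradients in the clear, only its own shards of them.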
5. A computer storage medium having a computer program stored thereon which, when executed by a machine, performs the steps of the method according to any one of claims 1 to 2.
CN202310474442.9A 2023-04-28 2023-04-28 Federal quasi-newton training method and device based on secret sharing and storage medium Active CN116187433B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310474442.9A CN116187433B (en) 2023-04-28 2023-04-28 Federal quasi-newton training method and device based on secret sharing and storage medium

Publications (2)

Publication Number Publication Date
CN116187433A CN116187433A (en) 2023-05-30
CN116187433B true CN116187433B (en) 2023-09-29

Family

ID=86433079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310474442.9A Active CN116187433B (en) 2023-04-28 2023-04-28 Federal quasi-newton training method and device based on secret sharing and storage medium

Country Status (1)

Country Link
CN (1) CN116187433B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117527447B (en) * 2024-01-05 2024-03-22 厦门身份宝网络科技有限公司 Secret sharing method and system for multiparty secure computation

Citations (8)

Publication number Priority date Publication date Assignee Title
US10600006B1 (en) * 2019-01-11 2020-03-24 Alibaba Group Holding Limited Logistic regression modeling scheme using secrete sharing
CN111783130A (en) * 2020-09-04 2020-10-16 支付宝(杭州)信息技术有限公司 Data processing method and device for privacy protection and server
WO2021204271A1 (en) * 2020-04-10 2021-10-14 支付宝(杭州)信息技术有限公司 Data privacy protected joint training of service prediction model by two parties
CN113516256A (en) * 2021-09-14 2021-10-19 深圳市洞见智慧科技有限公司 Third-party-free federal learning method and system based on secret sharing and homomorphic encryption
CN114611720A (en) * 2022-03-14 2022-06-10 北京字节跳动网络技术有限公司 Federal learning model training method, electronic device and storage medium
CN114707660A (en) * 2022-04-07 2022-07-05 医渡云(北京)技术有限公司 Federal model training method and device, storage medium and electronic equipment
CN114925786A (en) * 2022-07-08 2022-08-19 蓝象智联(杭州)科技有限公司 Longitudinal federal linear support vector classification method based on secret sharing
CN115730182A (en) * 2022-11-17 2023-03-03 天翼电子商务有限公司 Approximate calculation method for inverse matrix under anonymized fragment data

Non-Patent Citations (4)

Title
Privacy-Preserving Federated Learning for Internet of Medical Things Under Edge Computing; Ruijin Wang et al.; IEEE Journal of Biomedical and Health Informatics; Vol. 27, No. 2; pp. 854-865 *
Research on vertical federated LR protocols based on homomorphic encryption and secret sharing; Fu Fangcheng et al.; Information and Communications Technology and Policy; No. 5; pp. 34-44 *
Efficient and secure federated learning based on secret sharing and gradient selection; Dong Ye; Hou Wei; Chen Xiaojun; Zeng Shuai; Journal of Computer Research and Development; No. 10; pp. 2241-2250 *
A survey of privacy protection mechanisms in federated learning; Wang Haojun et al.; Modern Computer; Vol. 28, No. 14; pp. 1-12 *


Similar Documents

Publication Publication Date Title
CN111931950B (en) Method and system for updating model parameters based on federal learning
EP3602379B1 (en) A distributed multi-party security model training framework for privacy protection
EP3602410B1 (en) A logistic regression modeling scheme using secret sharing
US11176469B2 (en) Model training methods, apparatuses, and systems
US11341411B2 (en) Method, apparatus, and system for training neural network model
CN116187433B (en) Federal quasi-newton training method and device based on secret sharing and storage medium
CN112805769B (en) Secret S-type function calculation system, secret S-type function calculation device, secret S-type function calculation method, and recording medium
CN109359476B (en) Hidden input two-party mode matching method and device
CN112805768B (en) Secret S-type function calculation system and method therefor, secret logistic regression calculation system and method therefor, secret S-type function calculation device, secret logistic regression calculation device, and program
US20230186049A1 (en) Training method and apparatus for a neural network model, device and storage medium
CN111415013A (en) Privacy machine learning model generation and training method and device and electronic equipment
Bastrakov et al. Fast method for verifying Chernikov rules in Fourier-Motzkin elimination
CN114925786A (en) Longitudinal federal linear support vector classification method based on secret sharing
WO2021184346A1 (en) Private machine learning model generation and training methods, apparatus, and electronic device
CN114881247A (en) Longitudinal federal feature derivation method, device and medium based on privacy computation
Singh et al. Zero knowledge proofs towards verifiable decentralized ai pipelines
CN113965313A (en) Model training method, device, equipment and storage medium based on homomorphic encryption
CN111523674A (en) Model training method, device and system
US20200374107A1 (en) Server device, secret equality determination system, secret equality determination method and secret equality determination program recording medium
CN115134078B (en) Secret sharing-based statistical method, device and storage medium
CN111949655A (en) Form display method and device, electronic equipment and medium
US11290456B2 (en) Secret equality determination system, secret equality determination method and secret equality determination program recording medium
CN115906177A (en) Aggregate security intersection method and device, electronic equipment and storage medium
WO2023038940A1 (en) Systems and methods for tree-based model inference using multi-party computation
CN114880693A (en) Method and device for generating activation function, electronic equipment and readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant