CN113592097B - Training method and device of federal model and electronic equipment - Google Patents

Training method and device of federal model and electronic equipment

Info

Publication number
CN113592097B
CN113592097B
Authority
CN
China
Prior art keywords
derivative
sample
encrypted
federal model
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110839499.5A
Other languages
Chinese (zh)
Other versions
CN113592097A (en)
Inventor
冯泽瑾
杨恺
陈忠
范昊
陈晓霖
王虎
黄志翔
彭南博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Holding Co Ltd
Original Assignee
Jingdong Technology Holding Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Holding Co Ltd filed Critical Jingdong Technology Holding Co Ltd
Priority to CN202110839499.5A priority Critical patent/CN113592097B/en
Publication of CN113592097A publication Critical patent/CN113592097A/en
Application granted granted Critical
Publication of CN113592097B publication Critical patent/CN113592097B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Storage Device Security (AREA)

Abstract

The application provides a training method and device for a federal model, and an electronic device. The method comprises the following steps: training the federal model in cooperation with a data party based on sample label information and sample feature information to obtain an intermediate result, wherein the intermediate result comprises a first derivative and a second derivative of a loss function corresponding to each sample; performing joint encryption on the first derivative and the second derivative of each sample to obtain an encrypted derivative corresponding to the sample; sending the encrypted derivative to the data party, where it is used to generate an encrypted derivative aggregate value; and receiving the encrypted derivative aggregate value sent by the data party, and updating the parameters of the federal model accordingly. By jointly encrypting the first and second derivatives of each sample, the method reduces the number of encryption operations on intermediate results, thereby lowering the computational complexity of the modeling process, and reduces the number of transmissions of intermediate results, effectively easing the communication burden of the modeling process and improving the training efficiency of the federal model.

Description

Training method and device of federal model and electronic equipment
Technical Field
The application relates to the technical field of deep learning, and in particular to a training method and device for a federal model, an electronic device, and a storage medium.
Background
At present, federated learning enables joint modeling and joint training without data sharing; it offers strong data security, addresses the data-silo problem, and is widely applied. In the federal model training process of the related art, the intermediate results of model training are encrypted and transmitted; the number of encryption and transmission operations is large, the computational complexity and communication burden of the modeling process are therefore high, and the training efficiency of the federal model is low.
Disclosure of Invention
The present application aims to solve, at least to some extent, one of the technical problems of the related art: the high computational complexity and communication burden of the federal model training process, and the resulting low training efficiency of the federal model.
Therefore, in the embodiment of the first aspect of the present application, a training method of a federal model is provided, where at least a first derivative of a sample is subjected to joint encryption, so that the number of encryption times of an intermediate result can be reduced, thereby reducing the computational complexity of a modeling process, reducing the number of transmission times of the intermediate result, effectively reducing the communication burden of the modeling process, and improving the training efficiency of the federal model.
Embodiments of the second aspect of the present application provide another federal model training method.
Embodiments of a third aspect of the present application provide a federal model training apparatus.
An embodiment of a fourth aspect of the present application provides another federal model training apparatus.
An embodiment of a fifth aspect of the present application proposes an electronic device.
Embodiments of a sixth aspect of the present application provide a computer-readable storage medium.
An embodiment of the first aspect of the present application provides a method for training a federal model, including: training the federal model in cooperation with a data party based on sample label information and sample feature information to obtain an intermediate result of the federal model, wherein the intermediate result of the federal model comprises a first derivative and a second derivative of a loss function corresponding to each sample; performing joint encryption on the first derivative and the second derivative of each sample to obtain an encrypted derivative corresponding to the sample; sending the encrypted derivative corresponding to the sample to the data party, wherein the encrypted derivative corresponding to the sample is used for generating an encrypted derivative aggregate value; and receiving the encrypted derivative aggregate value sent by the data party, and further updating the parameters of the federal model according to the encrypted derivative aggregate value.
According to the training method of the federal model, the first derivative and the second derivative of each sample can be jointly encrypted to obtain the encrypted derivative corresponding to the sample, and the encrypted derivative is sent to the data party; the encrypted derivative aggregate value sent by the data party can then be received, and the parameters of the federal model further updated according to it. By jointly encrypting the first and second derivatives of each sample, the number of encryption operations on intermediate results is reduced, lowering the computational complexity of the modeling process; the number of transmissions of intermediate results is also reduced, effectively easing the communication burden of the modeling process and improving the training efficiency of the federal model.
In addition, the training method of the federal model according to the above embodiment of the present application may further have the following additional technical features:
in one embodiment of the present application, the performing joint encryption on the first derivative and the second derivative of the sample to obtain an encrypted derivative corresponding to the sample includes: encoding the first derivative and the second derivative of the sample, respectively; splicing the encoded first derivative and the encoded second derivative to obtain a spliced derivative corresponding to the sample; encrypting the spliced derivative corresponding to the sample to obtain the encrypted derivative corresponding to the sample.
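The encode, splice, and encrypt pipeline above can be sketched as follows. The fixed-point precision, the per-derivative field width, and the offset encoding are illustrative assumptions (the patent's concrete coding strategy is not reproduced here); the point of the layout is that adding two packed values adds the first-derivative and second-derivative fields independently, so a single additively homomorphic ciphertext per sample can carry both derivatives.

```python
# Sketch of the encode -> splice -> encrypt pipeline. Bit widths and the
# offset encoding are illustrative assumptions, not the patent's exact
# preset coding strategy.

FRAC_BITS = 16                   # fixed-point fractional precision
FIELD_BITS = 48                  # bits reserved per derivative field
OFFSET = 1 << 31                 # shifts signed fixed-point values into a
                                 # non-negative range; the 48-bit field
                                 # leaves ~2^16 summands of headroom

def encode(x: float) -> int:
    """Fixed-point encode a (clipped) derivative as a non-negative integer."""
    return int(round(x * (1 << FRAC_BITS))) + OFFSET

def splice(g: float, h: float) -> int:
    """Concatenate the encoded first and second derivatives into one
    integer, so a single (homomorphic) encryption covers both."""
    return (encode(g) << FIELD_BITS) | encode(h)

def unsplice(packed: int, n_summed: int = 1) -> tuple[float, float]:
    """Recover (sum of g, sum of h) from a possibly aggregated packed
    value; n_summed is how many packed values were added together, so
    the per-field offsets can be subtracted back out."""
    h_enc = packed & ((1 << FIELD_BITS) - 1)
    g_enc = packed >> FIELD_BITS
    scale = float(1 << FRAC_BITS)
    return ((g_enc - n_summed * OFFSET) / scale,
            (h_enc - n_summed * OFFSET) / scale)

# Adding packed values adds both fields at once -- the property that lets
# one additively homomorphic ciphertext aggregate g and h together.
g_sum, h_sum = unsplice(splice(0.25, 1.0) + splice(-0.5, 1.0), n_summed=2)
```

Because both derivatives travel in one packed value, each sample needs one encryption and one transmission instead of two.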
In one embodiment of the present application, said encoding the first derivative and the second derivative of the sample respectively comprises: acquiring a preset coding strategy, wherein the preset coding strategy comprises a preset sign bit, a preset number of coding bits, and a preset coding mode; obtaining target sign bits of the first derivative and the second derivative according to the preset sign bit; and encoding the first derivative and the second derivative respectively in the preset coding mode, based on the target sign bits and the preset number of coding bits.
In one embodiment of the present application, before the performing joint encryption on the first derivative and the second derivative of the sample, the method further includes: gradient clipping is performed on the derivative.
In one embodiment of the present application, the gradient clipping of the derivative comprises: if the derivative is greater than the upper limit of a preset gradient range, clipping the derivative to the upper limit; or, if the derivative is less than the lower limit of the preset gradient range, clipping the derivative to the lower limit.
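The clipping rule above can be sketched as follows; the gradient range [-1, 1] is an illustrative assumption (the description leaves the preset range open):

```python
# Minimal sketch of the gradient clipping described above. The preset
# gradient range [-1.0, 1.0] is an illustrative assumption.
GRAD_LOWER, GRAD_UPPER = -1.0, 1.0

def clip_derivative(d: float) -> float:
    """Clamp a first or second derivative into the preset gradient range
    so that it fits in the fixed number of coding bits used later."""
    if d > GRAD_UPPER:
        return GRAD_UPPER
    if d < GRAD_LOWER:
        return GRAD_LOWER
    return d
```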
In an embodiment of the present application, the splicing the encoded first derivative and the encoded second derivative to obtain a spliced derivative corresponding to the sample includes: splicing the encoded first derivative and the encoded second derivative to obtain candidate spliced derivatives corresponding to the samples; and adding a preset identification bit at a preset position of the candidate splicing derivative corresponding to the sample, and generating the splicing derivative corresponding to the sample.
In one embodiment of the present application, the preset position includes at least one of: the head of the candidate spliced derivative, the junction between the encoded first derivative and the encoded second derivative, and the tail of the candidate spliced derivative.
In one embodiment of the present application, the further updating the parameters of the federal model according to the encrypted derivative aggregate value includes: decrypting the encrypted derivative aggregate value, and decoding the decrypted encrypted derivative aggregate value to obtain a corresponding derivative aggregate value; parameters of the federal model are further updated based on the derivative aggregate values.
In one embodiment of the present application, the federal model is a federated secure boosting tree model.
An embodiment of the second aspect of the present application proposes another federal model training method, including: performing federal model training in cooperation with a business party based on sample feature information; receiving the encrypted derivative corresponding to each sample sent by the business party; generating an encrypted derivative aggregate value corresponding to a plurality of samples according to the encrypted derivatives corresponding to the plurality of samples; and sending the encrypted derivative aggregate value to the business party, wherein the encrypted derivative aggregate value is used for updating the parameters of the federal model.
According to this training method of the federal model, the encrypted derivative corresponding to each sample sent by the business party is received, the encrypted derivative aggregate value corresponding to a plurality of samples is generated from those encrypted derivatives, and the aggregate value is sent to the business party so that it can update the parameters of the federal model. Since only the encrypted derivative aggregate value needs to be sent back to the business party, the number of transmissions of intermediate results is reduced, the communication burden of the modeling process is effectively eased, and the training efficiency of the federal model is improved.
An embodiment of a third aspect of the present application provides a training device for a federal model, including: the acquisition module is used for training the federal model according to the sample label information and the sample characteristic information in cooperation with the data party so as to acquire an intermediate result of the federal model, wherein the intermediate result of the federal model comprises a first derivative and a second derivative of a loss function corresponding to each sample; the encryption module is used for carrying out joint encryption on the first derivative and the second derivative of the sample to obtain an encrypted derivative corresponding to the sample; the sending module is used for sending the encrypted derivative corresponding to the sample to a data party, wherein the encrypted derivative corresponding to the sample is used for generating an encrypted derivative aggregate value; and the updating module is used for receiving the encrypted derivative aggregate value sent by the data party and further updating the parameters of the federal model according to the encrypted derivative aggregate value.
The training device for the federal model can jointly encrypt the first derivative and the second derivative of the sample to obtain the encrypted derivative corresponding to the sample, send the encrypted derivative corresponding to the sample to the data party, and further receive the encrypted derivative aggregate value sent by the data party and further update the parameters of the federal model according to the encrypted derivative aggregate value. Therefore, the first derivative and the second derivative of the sample are subjected to joint encryption, so that the encryption times of the intermediate result can be reduced, the calculation complexity of the modeling process can be reduced, the transmission times of the intermediate result can be reduced, the communication burden of the modeling process can be effectively reduced, and the training efficiency of the federal model can be improved.
In addition, the training device of the federal model according to the above embodiment of the present application may further have the following additional technical features:
in one embodiment of the present application, the encryption module includes: an encoding unit configured to encode the first derivative and the second derivative of the sample, respectively; the splicing unit is used for splicing the encoded first derivative and the encoded second derivative to obtain a spliced derivative corresponding to the sample; and the encryption unit is used for encrypting the spliced derivative corresponding to the sample to obtain the encrypted derivative corresponding to the sample.
In one embodiment of the present application, the coding unit is specifically configured to: acquire a preset coding strategy, wherein the preset coding strategy comprises a preset sign bit, a preset number of coding bits, and a preset coding mode; obtain target sign bits of the first derivative and the second derivative according to the preset sign bit; and encode the first derivative and the second derivative respectively in the preset coding mode, based on the target sign bits and the preset number of coding bits.
In one embodiment of the present application, the training device of the federal model further includes: and the clipping module is used for carrying out gradient clipping on the derivative.
In one embodiment of the present application, the clipping module is specifically configured to: if the derivative is greater than the upper limit of a preset gradient range, clip the derivative to the upper limit; or, if the derivative is less than the lower limit of the preset gradient range, clip the derivative to the lower limit.
In one embodiment of the present application, the splicing unit is specifically configured to: splicing the encoded first derivative and the encoded second derivative to obtain candidate spliced derivatives corresponding to the samples; and adding a preset identification bit at a preset position of the candidate splicing derivative corresponding to the sample, and generating the splicing derivative corresponding to the sample.
In one embodiment of the present application, the preset position includes at least one of: the head of the candidate spliced derivative, the junction between the encoded first derivative and the encoded second derivative, and the tail of the candidate spliced derivative.
In one embodiment of the present application, the update module is specifically configured to: decrypting the encrypted derivative aggregate value, and decoding the decrypted encrypted derivative aggregate value to obtain a corresponding derivative aggregate value; parameters of the federal model are further updated based on the derivative aggregate values.
In one embodiment of the present application, the federal model is a federated secure boosting tree model.
Another federal model training apparatus is provided according to an embodiment of the fourth aspect of the present application, including: a training module for performing federal model training in cooperation with a business party based on sample feature information; a receiving module for receiving the encrypted derivative corresponding to each sample sent by the business party; a generation module for generating an encrypted derivative aggregate value corresponding to a plurality of samples according to the encrypted derivatives corresponding to the plurality of samples; and a sending module for sending the encrypted derivative aggregate value to the business party, wherein the encrypted derivative aggregate value is used for updating the parameters of the federal model.
According to this training device for the federal model, the encrypted derivative corresponding to each sample sent by the business party is received, the encrypted derivative aggregate value corresponding to a plurality of samples is generated from those encrypted derivatives, and the aggregate value is sent to the business party so that it can update the parameters of the federal model. Since only the encrypted derivative aggregate value needs to be sent back to the business party, the number of transmissions of intermediate results is reduced, the communication burden of the modeling process is effectively eased, and the training efficiency of the federal model is improved.
An embodiment of a fifth aspect of the present application proposes an electronic device, including: the system comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the training method of the federal model according to the embodiment of the first aspect and realize the training method of the federal model according to the embodiment of the second aspect.
According to the electronic equipment, the processor executes the computer program stored in the memory, the first derivative and the second derivative of the sample can be encrypted in a combined mode to obtain the encrypted derivative corresponding to the sample, the encrypted derivative corresponding to the sample is sent to the data side, the encrypted derivative aggregate value sent by the data side can be received, and parameters of the federal model are further updated according to the encrypted derivative aggregate value. Therefore, the first derivative and the second derivative of the sample are subjected to joint encryption, so that the encryption times of the intermediate result can be reduced, the calculation complexity of the modeling process can be reduced, the transmission times of the intermediate result can be reduced, the communication burden of the modeling process can be effectively reduced, and the training efficiency of the federal model can be improved.
An embodiment of a sixth aspect of the present application proposes a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements a method for training a federal model according to an embodiment of the first aspect, and implements a method for training a federal model according to an embodiment of the second aspect.
The computer readable storage medium of the embodiment of the application, through storing a computer program and executing the computer program by a processor, can jointly encrypt the first derivative and the second derivative of the sample to obtain an encrypted derivative corresponding to the sample, and send the encrypted derivative corresponding to the sample to a data party, and can also receive an encrypted derivative aggregate value sent by the data party, and further update parameters of the federal model according to the encrypted derivative aggregate value. Therefore, the first derivative and the second derivative of the sample are subjected to joint encryption, so that the encryption times of the intermediate result can be reduced, the calculation complexity of the modeling process can be reduced, the transmission times of the intermediate result can be reduced, the communication burden of the modeling process can be effectively reduced, and the training efficiency of the federal model can be improved.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a method of training a federal model according to one embodiment of the present application;
FIG. 2 is a flow chart of a method for training a federal model to jointly encrypt at least a first derivative of a sample to obtain an encrypted derivative corresponding to the sample according to an embodiment of the present application;
FIG. 3 is a schematic diagram of stitching the encoded first and second derivatives in a training method of a federal model according to an embodiment of the present application;
FIG. 4 is a flow chart of a method of training a federal model according to another embodiment of the present application;
FIG. 5 is a schematic structural diagram of a training device for a federal model according to one embodiment of the present application;
FIG. 6 is a schematic structural diagram of a training device for a federal model according to another embodiment of the present application; and
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application.
The following describes a federal model training method, a federal model training device, an electronic device, and a storage medium according to embodiments of the present application with reference to the accompanying drawings.
FIG. 1 is a flow chart of a method of training a federal model according to one embodiment of the present application.
As shown in fig. 1, a federal model training method according to an embodiment of the present application includes:
s101, training a federation model by a collaborative data party based on sample label information and sample characteristic information to obtain an intermediate result of the federation model, wherein the intermediate result of the federation model comprises a first derivative and a second derivative of a loss function corresponding to each sample.
It should be noted that the execution subject of the federal model training method in the embodiments of the present application may be the business party, and the federal model training device of the embodiments may be configured in any electronic device, so that the electronic device can execute the federal model training method of the embodiments. The electronic device may be a personal computer (PC), a cloud device, a mobile device, or the like; the mobile device may be, for example, a mobile phone, a tablet computer, a personal digital assistant, a wearable device, a vehicle-mounted device, or another hardware device having an operating system, a touch screen, and/or a display screen.
In the embodiments of the present application, the business party and the data party cooperatively train the federal model. The business party holds the sample label information; the data party does not. Both parties hold sample feature information, covering different dimensions of the same samples: for example, the business party may hold feature information of g dimensions for n samples, while the data party holds feature information of h dimensions for the same n samples.
In the embodiments of the present application, the business party may train the federal model based on the sample label information, in cooperation with the data party, to obtain an intermediate result of the federal model. The intermediate result of the federal model includes the first derivative and the second derivative of the loss function corresponding to each sample.
In one embodiment, the federal model may be trained based on the business party's own sample label information and sample feature information, together with the data party's sample feature information, to obtain the intermediate results of the federal model.
In one embodiment, the predicted value of each sample may be calculated based on the current federal model, and the first derivative and the second derivative of the loss function corresponding to each sample may be obtained based on the predicted value of each sample and the tag information.
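For concreteness, if the loss is the binary log loss — a common choice in boosted-tree federated learning, though the description above does not fix a particular loss — the per-sample first and second derivatives have simple closed forms:

```python
import math

# Per-sample first/second derivatives of the binary log loss with
# respect to the raw prediction (margin). The choice of log loss is an
# illustrative assumption; the method applies to any twice-differentiable loss.

def logloss_derivatives(pred_raw: float, label: int) -> tuple[float, float]:
    """Return (g, h) for one sample: the first and second derivative of
    the log loss at the current raw prediction."""
    p = 1.0 / (1.0 + math.exp(-pred_raw))   # sigmoid of current prediction
    g = p - label                           # first derivative
    h = p * (1.0 - p)                       # second derivative
    return g, h
```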
It should be noted that the number of data parties is not particularly limited in the embodiments of the present application; there may be one or more data parties. The type of the federal model is likewise not limited; it may be, for example, a federated secure boosting tree model.
S102, carrying out joint encryption on the first derivative and the second derivative of the sample to obtain an encrypted derivative corresponding to the sample.
In the embodiments of the present application, the first derivative and the second derivative of a sample can be jointly encrypted to obtain the encrypted derivative corresponding to that sample. That is, the method jointly encrypts the first and second derivatives of each sample, so each sample corresponds to exactly one encrypted derivative.
In the embodiments of the present application, the encryption method is not particularly limited; it may be, for example, additively homomorphic encryption.
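As a concrete illustration of additively homomorphic encryption, below is a toy textbook Paillier scheme: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts. The tiny fixed primes make it completely insecure and are for demonstration only; this is an illustrative stand-in, not a scheme the patent mandates.

```python
import math
import random

# Toy textbook Paillier cryptosystem, illustrating the additive
# homomorphism. Insecure demo parameters -- do not use in practice.

P, Q = 293, 433                 # demo primes; real deployments use >=1024-bit primes
N, N2 = P * Q, (P * Q) ** 2
G = P * Q + 1                   # standard generator choice g = n + 1
LAM = math.lcm(P - 1, Q - 1)

def _L(x: int) -> int:
    return (x - 1) // N

MU = pow(_L(pow(G, LAM, N2)), -1, N)   # precomputed decryption constant

def encrypt(m: int) -> int:
    while True:                        # pick randomness r coprime to N
        r = random.randrange(1, N)
        if math.gcd(r, N) == 1:
            break
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    return (_L(pow(c, LAM, N2)) * MU) % N

def add_ciphertexts(c1: int, c2: int) -> int:
    """Homomorphic addition: Dec(c1 * c2 mod n^2) = m1 + m2 mod n."""
    return (c1 * c2) % N2
```

With the packed encoding of both derivatives described above, the data party can fold a group's ciphertexts together with `add_ciphertexts` and return a single encrypted aggregate.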
And S103, sending the encrypted derivative corresponding to the sample to a data party, wherein the encrypted derivative corresponding to the sample is used for generating an encrypted derivative aggregate value.
In the embodiments of the present application, the business party may send the encrypted derivative corresponding to each sample to the data party. That is, in each round of federal model training, only one encrypted derivative per sample needs to be sent to the data party, so fewer transmissions are required.
In the embodiment of the application, the data party can generate the encrypted derivative aggregate value corresponding to the plurality of samples according to the encrypted derivatives corresponding to the plurality of samples in the data party.
In one embodiment, the data party holds feature information for multiple dimensions of the samples. For any dimension, the samples may be grouped according to their feature values in that dimension to obtain multiple groups of sample feature information, and the encrypted derivative aggregate value corresponding to each group may be obtained from the received encrypted derivatives of the samples in that group.
For example, assume the data party holds age-dimension feature information for 100 samples. The samples may be grouped according to the age-dimension feature information to obtain multiple groups; assuming 10 groups are obtained, the encrypted derivative aggregate value corresponding to each group may be computed from the encrypted derivatives of the samples in that group.
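The data party's grouping-and-aggregation step can be sketched as follows. The age buckets and the use of plain integer addition in place of ciphertext addition are illustrative assumptions; with an additively homomorphic scheme, the same loop runs over ciphertexts, so the data party never sees plaintext derivatives.

```python
from collections import defaultdict

# Sketch of the data party's per-group aggregation. Bucket boundaries and
# plain addition standing in for homomorphic ciphertext addition are
# illustrative assumptions.

def aggregate_by_group(ages, enc_derivatives, bin_width=10):
    """Group samples by an age bucket and sum the encrypted derivatives
    per group. In the real protocol the '+' is a homomorphic addition
    on ciphertexts, so plaintext derivatives are never exposed."""
    groups = defaultdict(int)
    counts = defaultdict(int)
    for age, enc_d in zip(ages, enc_derivatives):
        bucket = age // bin_width
        groups[bucket] += enc_d      # homomorphic add in the real protocol
        counts[bucket] += 1
    return dict(groups), dict(counts)

agg, cnt = aggregate_by_group([23, 27, 41, 45, 62], [10, 11, 20, 21, 30])
```

Only the per-group aggregates (one value per group) are then returned to the business party, rather than one value per sample.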
S104, receiving the encrypted derivative aggregate value sent by the data party, and further updating the parameters of the federal model according to the encrypted derivative aggregate value.
In the embodiment of the application, the encrypted derivative aggregate value sent by the data party can be received, and the parameters of the federal model are further updated according to the encrypted derivative aggregate value.
In one embodiment, the received encrypted derivative aggregate value may be decrypted to obtain a first derivative aggregate value and a second derivative aggregate value, and parameters of the federal model may be further updated based on the first derivative aggregate value and the second derivative aggregate value.
In one embodiment, when the federal model is a federated secure boosting tree model, further updating the parameters of the federal model according to the encrypted derivative aggregate value may proceed as follows. The parameters may include any leaf node of the tree. For each dimension held by the data party, the metric score of each group of sample feature information in that dimension is calculated according to the corresponding encrypted derivative aggregate value. If the maximum metric score for the dimension is smaller than a preset threshold, the leaf node is not split; otherwise, if the maximum metric score is greater than or equal to the preset threshold, the leaf node is split according to the grouped sample feature information corresponding to that maximum metric score, yielding the left and right child leaf nodes.
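For reference, in XGBoost-style boosted trees the split metric score commonly takes the form below, where $G_L, H_L$ (resp. $G_R, H_R$) are the aggregated first and second derivatives of the candidate left (right) child and $\lambda, \gamma$ are regularization constants. This concrete formula is an assumption for illustration; the description above does not specify the metric.

```latex
% Illustrative split metric score (XGBoost-style gain); an assumption,
% not quoted from the patent.
\mathrm{Gain} = \frac{1}{2}\left[
    \frac{G_L^2}{H_L + \lambda}
  + \frac{G_R^2}{H_R + \lambda}
  - \frac{(G_L + G_R)^2}{H_L + H_R + \lambda}
\right] - \gamma
```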
In one embodiment, the training of the federal model may be ended in response to meeting an end condition of the model training, and the federal model obtained in the last round of training is used as the trained federal model. The end condition of the model training may be set according to practical situations, for example, but not limited to, the model precision reaching a preset threshold, or the number of training rounds reaching a preset threshold, which is not limited here.
In summary, according to the training method of the federal model in the embodiment of the application, the first derivative and the second derivative of the sample can be jointly encrypted to obtain the encrypted derivative corresponding to the sample, the encrypted derivative corresponding to the sample is sent to the data party, the encrypted derivative aggregate value sent by the data party can be received, and the parameters of the federal model are further updated according to the encrypted derivative aggregate value. Therefore, by jointly encrypting the first derivative and the second derivative of the sample, the number of encryptions of the intermediate result can be reduced, which reduces the calculation complexity of the modeling process; the number of transmissions of the intermediate result can also be reduced, which effectively reduces the communication burden of the modeling process and improves the training efficiency of the federal model.
On the basis of any of the above embodiments, as shown in fig. 2, the performing joint encryption on the first derivative and the second derivative of the sample in step S102 to obtain the encrypted derivative corresponding to the sample may include:
S201, encoding the first derivative and the second derivative of the sample respectively.
In embodiments of the present application, the first and second derivatives of the samples may be encoded separately. It will be appreciated that the encoding strategy may be pre-established, with the first and second derivatives of the samples being encoded separately according to the pre-set encoding strategy. The preset encoding strategy can be set according to actual conditions, and is not limited too much.
In one embodiment, encoding the first derivative and the second derivative of the sample respectively may include: acquiring a preset encoding strategy, where the preset encoding strategy includes a preset sign bit, a preset number of encoding bits and a preset encoding mode; obtaining target sign bits of the first derivative and the second derivative from the preset sign bit; and encoding the first derivative and the second derivative respectively according to the preset encoding mode based on the target sign bits and the preset number of encoding bits. For example, the preset sign bit may be set to 00 or 11, where 00 represents a positive number and 11 represents a negative number; the preset number of encoding bits may be set to 8; and the preset encoding mode may be complement (two's-complement) coding.
For example, the first derivative may be encoded to obtain the encoded first derivative 0001010101, where 00 is the preset sign bit, and the second derivative may likewise be encoded to obtain the encoded second derivative 0001010101, where 00 is the preset sign bit.
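The encoding step can be sketched as follows: a 2-bit sign marker (00 for non-negative, 11 for negative), an 8-bit complement code, and a fixed-point quantisation. The scale factor is an assumption, since the patent does not state how the real-valued derivatives are quantised before coding.

```python
def encode_derivative(value, scale=16, n_bits=8):
    """Encode a derivative as <2-bit sign marker><n_bits complement code>.

    The fixed-point scale is an illustrative assumption; the preset
    sign bits (00 positive / 11 negative), 8 encoding bits and
    complement coding follow the strategy described in the text.
    """
    fixed = round(value * scale)             # fixed-point quantisation
    sign = "00" if fixed >= 0 else "11"      # preset sign bits
    code = fixed & ((1 << n_bits) - 1)       # n-bit two's complement
    return sign + format(code, "0{}b".format(n_bits))
```

With this assumed scale, a derivative of 85/16 encodes to the example string 0001010101 from the text.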
S202, splicing the encoded first derivative and second derivative to obtain a spliced derivative corresponding to the sample.
In the embodiment of the application, the encoded first derivative and second derivative can be spliced to obtain the spliced derivative corresponding to the sample.
In one embodiment, splicing the encoded first derivative and the encoded second derivative to obtain the spliced derivative corresponding to the sample may include: splicing the encoded first derivative and the encoded second derivative to obtain a candidate spliced derivative corresponding to the sample; and adding a preset identification bit at a preset position of the candidate spliced derivative to generate the spliced derivative corresponding to the sample.
The preset identification bit and the preset position can be set according to practical situations. For example, the preset identification bit may be set to 00, and the preset position may include at least one of a front end position of the candidate spliced derivative, the splice position between the encoded first derivative and second derivative, and a tail end position of the candidate spliced derivative.
For example, as shown in fig. 3, if the encoded first derivative and the encoded second derivative are both 0001010101, splicing them yields the candidate spliced derivative 00010101010001010101, and preset identification bits 00 may be added at the front end position, the splice position and the tail end position of the candidate spliced derivative, so that the generated spliced derivative is 00000101010100000101010100.
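The splicing with identification bits can be reproduced directly; inserting the preset identification bits 00 at the front, splice and tail positions yields exactly the spliced derivative shown in the example. A plausible role of these bits, though not stated explicitly in the text, is to act as buffer bits that absorb carries when many ciphertexts are later summed homomorphically.

```python
def splice_derivatives(enc_g, enc_h, flag="00"):
    """Join two encoded derivatives, adding the preset identification
    bits at the front end, splice and tail end positions."""
    return flag + enc_g + flag + enc_h + flag
```

Splicing the two encoded derivatives from the example reproduces the 26-bit string given in the text.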
S203, encrypting the spliced derivative corresponding to the sample to obtain the encrypted derivative corresponding to the sample.
The relevant content of step S203 may refer to the above embodiment, and will not be described here again.
Therefore, the method can respectively encode the first derivative and the second derivative of the sample, splice the encoded first derivative and second derivative to obtain spliced derivatives corresponding to the sample, and encrypt the spliced derivatives corresponding to the sample to obtain encrypted derivatives corresponding to the sample.
On the basis of any of the above embodiments, before the first derivative and the second derivative of the sample are jointly encrypted in step S102, the method may further include performing gradient clipping on the derivatives, so as to prevent a derivative from being too small or too large, which helps to improve the reliability of encryption.
In one embodiment, gradient clipping the derivative may include: if the derivative is greater than the upper limit value of the preset gradient range, which indicates that the derivative is too large, converting the derivative into the upper limit value; or, if the derivative is smaller than the lower limit value of the preset gradient range, which indicates that the derivative is too small, converting the derivative into the lower limit value. The preset gradient range can be set according to practical situations and is not limited here. In this way, the method can ensure that the derivative falls within the preset gradient range.
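A minimal sketch of the clipping rule follows; the gradient range of [-1, 1] is an illustrative assumption, since the patent only requires the range to be set according to the practical situation.

```python
def clip_gradient(derivative, lower=-1.0, upper=1.0):
    """Clamp a derivative into the preset gradient range [lower, upper].

    The range itself is an illustrative assumption; the text leaves it
    to be set according to practical situations.
    """
    if derivative > upper:
        return upper   # derivative too large: convert to the upper limit
    if derivative < lower:
        return lower   # derivative too small: convert to the lower limit
    return derivative
```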
On the basis of any of the foregoing embodiments, further updating the parameters of the federal model according to the encrypted derivative aggregate value in step S104 may include decrypting the encrypted derivative aggregate value and decoding the decrypted encrypted derivative aggregate value to obtain a corresponding derivative aggregate value, and further updating the parameters of the federal model based on the derivative aggregate value.
It may be appreciated that, since the first derivative and the second derivative of the samples are encoded and the encoded derivatives are then encrypted to obtain the encrypted derivative, the service party, after receiving the encrypted derivative aggregate value, may first decrypt the encrypted derivative aggregate value and then decode the decrypted result to obtain the corresponding derivative aggregate value as a decimal value, and further update the parameters of the federal model based on the derivative aggregate value. The derivative aggregate value may include a first derivative aggregate value and a second derivative aggregate value.
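Decoding inverts the encoding step: the sign marker selects whether to undo the complement, and the fixed-point scale is divided back out. This sketch decodes a single encoded field and assumes the same illustrative scale as the encoding sketch above; decoding an aggregate over many samples would use correspondingly wider fields.

```python
def decode_field(bits, scale=16, n_bits=8):
    """Decode <2-bit sign marker><n_bits complement code> to a decimal value.

    Mirrors the illustrative encoding: sign bits 00/11, complement
    coding, and an assumed fixed-point scale of 16.
    """
    sign, code = bits[:2], bits[2:2 + n_bits]
    value = int(code, 2)
    if sign == "11":
        value -= 1 << n_bits   # undo two's complement for negatives
    return value / scale
```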
Therefore, the method can sequentially decrypt and decode the encrypted derivative aggregate value to obtain a corresponding derivative aggregate value, and further update the parameters of the federal model based on the derivative aggregate value.
FIG. 4 is a flow chart of a method of training a federal model according to another embodiment of the present application.
As shown in fig. 4, a federal model training method according to an embodiment of the present application includes:
s401, training the federal model by the cooperative service party based on the sample characteristic information.
It should be noted that the execution body of the federal model training method in the embodiment of the present application may be a data party, and the federal model training device in the embodiment of the present application may be configured in any electronic device, so that the electronic device may execute the federal model training method in the embodiment of the present application. The electronic device may be a personal computer (Personal Computer, abbreviated as PC), a cloud device, a mobile device, etc., and the mobile device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, a vehicle-mounted device, or another hardware device having an operating system, a touch screen and/or a display screen.
In the embodiment of the application, the business party and the data party can cooperatively perform training of the federal model. The business side has sample tag information, and the data side does not have sample tag information. Both the service party and the data party have sample characteristic information, and the sample characteristic information of the service party and the data party are characteristic information of different dimensions for the same sample, for example, the service party has characteristic information of g dimensions for n samples, and the data party has characteristic information of h dimensions for n samples.
In the embodiment of the application, the data party can perform federal model training in cooperation with the service party based on the sample characteristic information, so that the service party obtains an intermediate result of the federal model. Wherein the intermediate result of the federal model includes a first derivative and a second derivative of the loss function for each sample.
In one embodiment, training of the federation model may be performed based on sample tag information and sample feature information of the business party itself, and sample feature information of the data party, to obtain intermediate results of the federation model.
It should be noted that, in the embodiment of the present application, the number of data parties is not limited too much, and the number of data parties may be one or more. The type of federal model is not so limited and may be, for example, a federal security promotion tree model.
S402, receiving an encrypted derivative corresponding to a sample sent by a service party.
In the embodiment of the application, the service party can perform joint encryption on the first derivative and the second derivative of the sample to obtain an encrypted derivative corresponding to the sample, and can send the encrypted derivative corresponding to the sample to the data party. Accordingly, the data party can receive the encrypted derivative corresponding to the sample sent by the service party.
In the embodiment of the present application, the number of encrypted derivatives corresponding to each sample is one.
S403, generating encrypted derivative aggregate values corresponding to the plurality of samples according to the encrypted derivatives corresponding to the plurality of samples.
In the embodiment of the application, the data party can generate the encrypted derivative aggregate value corresponding to the plurality of samples according to the encrypted derivatives corresponding to the plurality of samples.
In one embodiment, the data party has characteristic information for multiple dimensions of the sample, the samples can be grouped according to the sample characteristic information of any dimension to obtain multiple groups of sample characteristic information of any dimension, and an encrypted derivative aggregate value corresponding to each group of sample characteristic information of any dimension is obtained according to the received encrypted derivatives corresponding to the multiple samples.
For example, assuming that the data party has feature information of an age dimension for 100 samples, the samples may be grouped according to the feature information of the age dimension to obtain multiple groups of sample feature information of the age dimension, and assuming that 10 groups of sample feature information of the age dimension are obtained, an encrypted derivative aggregate value corresponding to each group of sample feature information may be obtained according to an encrypted derivative of the sample corresponding to each group of sample feature information.
S404, sending the encrypted derivative aggregate value to a service party, wherein the encrypted derivative aggregate value is used for updating parameters of the federal model.
In the embodiment of the application, the data party can send the encrypted derivative aggregate value to the service party. Compared with the related art, in which the data party sends an encrypted first derivative aggregate value and an encrypted second derivative aggregate value to the service party separately, the data party here only needs to send a single encrypted derivative aggregate value, so that the number of transmissions is smaller.
According to the training method of the federation model, the encrypted derivative corresponding to the sample sent by the service party is received, the encrypted derivative aggregate value corresponding to the plurality of samples is generated according to the encrypted derivative corresponding to the plurality of samples, and the encrypted derivative aggregate value is sent to the service party so that the service party can update the parameters of the federation model according to the encrypted derivative aggregate value. Therefore, the encrypted derivative aggregate value can be generated according to the encrypted derivative, and only the encrypted derivative aggregate value is required to be sent to a business party, so that the transmission times of intermediate results can be reduced, the communication burden in the modeling process is effectively lightened, and the training efficiency of the federal model is improved.
Corresponding to the above-mentioned federal model training method provided by the embodiments of fig. 1 to 3, the present disclosure further provides a federal model training apparatus, and since the federal model training apparatus provided by the embodiments of the present disclosure corresponds to the federal model training method provided by the embodiments of fig. 1 to 3, the implementation of the federal model training method is also applicable to the federal model training apparatus provided by the embodiments of the present disclosure, which are not described in detail in the embodiments of the present disclosure.
FIG. 5 is a schematic structural view of a federal model training device according to one embodiment of the present application.
As shown in fig. 5, a federal model training apparatus 100 according to an embodiment of the present application may include: the device comprises an acquisition module 110, an encryption module 120, a transmission module 130 and an updating module 140.
The obtaining module 110 is configured to perform training of a federal model in coordination with a data party based on sample tag information and sample feature information, so as to obtain an intermediate result of the federal model, where the intermediate result of the federal model includes a first derivative and a second derivative of a loss function corresponding to each sample;
an encryption module 120, configured to jointly encrypt the first derivative and the second derivative of the sample to obtain an encrypted derivative corresponding to the sample;
a sending module 130, configured to send an encrypted derivative corresponding to the sample to a data party, where the encrypted derivative corresponding to the sample is used to generate an encrypted derivative aggregate value;
and the updating module 140 is used for receiving the encrypted derivative aggregate value sent by the data party and further updating the parameters of the federal model according to the encrypted derivative aggregate value.
In one embodiment of the present application, the encryption module 120 includes: an encoding unit configured to encode the first derivative and the second derivative of the sample, respectively; the splicing unit is used for splicing the encoded first derivative and the encoded second derivative to obtain a spliced derivative corresponding to the sample; and the encryption unit is used for encrypting the spliced derivative corresponding to the sample to obtain the encrypted derivative corresponding to the sample.
In one embodiment of the present application, the coding unit is specifically configured to: acquiring a preset coding strategy, wherein the preset coding strategy comprises preset symbol bits, preset coding bits and a preset coding mode; obtaining target sign positions of the first derivative and the second derivative from the preset sign positions; and respectively encoding the first derivative and the second derivative according to the preset encoding mode based on the target sign bit and the preset encoding bit number.
In one embodiment of the present application, the training device 100 of the federal model further includes: and the clipping module is used for carrying out gradient clipping on the derivative.
In one embodiment of the present application, the clipping module is specifically configured to: if the derivative is greater than the upper limit value of the preset gradient range, converting the derivative into the upper limit value; alternatively, if the derivative is less than the lower limit of the preset gradient range, the derivative is converted to the lower limit.
In one embodiment of the present application, the splicing unit is specifically configured to: splicing the encoded first derivative and the encoded second derivative to obtain candidate spliced derivatives corresponding to the samples; and adding a preset identification bit at a preset position of the candidate splicing derivative corresponding to the sample, and generating the splicing derivative corresponding to the sample.
In one embodiment of the present application, the preset position includes at least one of a front end position of the candidate spliced derivative, a spliced position of the encoded first and second derivatives, and a tail end position of the candidate spliced derivative.
In one embodiment of the present application, the updating module 140 is specifically configured to: decrypting the encrypted derivative aggregate value, and decoding the decrypted encrypted derivative aggregate value to obtain a corresponding derivative aggregate value; parameters of the federal model are further updated based on the derivative aggregate values.
In one embodiment of the present application, the federal model is a federal safety promotion tree model.
The training device for the federal model can jointly encrypt the first derivative and the second derivative of the sample to obtain the encrypted derivative corresponding to the sample, send the encrypted derivative corresponding to the sample to the data party, and further receive the encrypted derivative aggregate value sent by the data party and further update the parameters of the federal model according to the encrypted derivative aggregate value. Therefore, the first derivative and the second derivative of the sample are subjected to joint encryption, so that the encryption times of the intermediate result can be reduced, the calculation complexity of the modeling process can be reduced, the transmission times of the intermediate result can be reduced, the communication burden of the modeling process can be effectively reduced, and the training efficiency of the federal model can be improved.
Corresponding to the above-mentioned training method of the federal model provided by the embodiment of fig. 4, the present disclosure further provides a training device of the federal model, and since the training device of the federal model provided by the embodiment of the present disclosure corresponds to the above-mentioned training method of the federal model provided by the embodiment of fig. 4, an implementation manner of the training method of the federal model is also applicable to the training device of the federal model provided by the embodiment of the present disclosure, which is not described in detail in the embodiment of the present disclosure.
Fig. 6 is a schematic structural view of a federal model training device according to another embodiment of the present application.
As shown in fig. 6, a federal model training apparatus 200 according to an embodiment of the present application may include: training module 210, receiving module 220, generating module 230, and transmitting module 240.
The training module 210 is configured to perform training of the federal model in coordination with the service party based on the sample feature information;
a receiving module 220, configured to receive an encrypted derivative corresponding to the sample sent by the service party;
a generating module 230, configured to generate an encrypted derivative aggregate value corresponding to a plurality of samples according to encrypted derivatives corresponding to the plurality of samples;
a sending module 240, configured to send the encrypted derivative aggregate value to the service party, where the encrypted derivative aggregate value is used to update a parameter of the federal model.
According to the training device of the federation model, the encryption derivative corresponding to the sample sent by the service party is received, the encryption derivative aggregate value corresponding to the plurality of samples is generated according to the encryption derivative corresponding to the plurality of samples, and the encryption derivative aggregate value is sent to the service party so that the service party can update the parameters of the federation model according to the encryption derivative aggregate value. Therefore, the encrypted derivative aggregate value can be generated according to the encrypted derivative, and only the encrypted derivative aggregate value is required to be sent to a business party, so that the transmission times of intermediate results can be reduced, the communication burden in the modeling process is effectively lightened, and the training efficiency of the federal model is improved.
In order to implement the above embodiments, as shown in fig. 7, the present application further proposes an electronic device 300, including: a memory 310, a processor 320, and a computer program stored on the memory 310 and executable on the processor 320, where the processor 320, when executing the program, implements the federal model training method proposed in the foregoing embodiments of the present application.
According to the electronic equipment, the processor executes the computer program stored in the memory, the first derivative and the second derivative of the sample can be encrypted in a combined mode to obtain the encrypted derivative corresponding to the sample, the encrypted derivative corresponding to the sample is sent to the data side, the encrypted derivative aggregate value sent by the data side can be received, and parameters of the federal model are further updated according to the encrypted derivative aggregate value. Therefore, the first derivative and the second derivative of the sample are subjected to joint encryption, so that the encryption times of the intermediate result can be reduced, the calculation complexity of the modeling process can be reduced, the transmission times of the intermediate result can be reduced, the communication burden of the modeling process can be effectively reduced, and the training efficiency of the federal model can be improved.
To achieve the foregoing embodiments, the present application further proposes a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of training a federal model as proposed in the foregoing embodiments of the present application.
The computer readable storage medium of the embodiment of the application, through storing a computer program and executing the computer program by a processor, can jointly encrypt the first derivative and the second derivative of the sample to obtain an encrypted derivative corresponding to the sample, and send the encrypted derivative corresponding to the sample to a data party, and can also receive an encrypted derivative aggregate value sent by the data party, and further update parameters of the federal model according to the encrypted derivative aggregate value. Therefore, the first derivative and the second derivative of the sample are subjected to joint encryption, so that the encryption times of the intermediate result can be reduced, the calculation complexity of the modeling process can be reduced, the transmission times of the intermediate result can be reduced, the communication burden of the modeling process can be effectively reduced, and the training efficiency of the federal model can be improved.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" is at least two, such as two, three, etc., unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (22)

1. A method of federal model training for a business party, the method comprising:
based on sample label information and sample characteristic information, training a federal model by a cooperative data party to obtain an intermediate result of the federal model, wherein the intermediate result of the federal model comprises a first derivative and a second derivative of a loss function corresponding to each sample;
performing joint encryption on the first derivative and the second derivative of the sample to obtain an encrypted derivative corresponding to the sample;
sending the encrypted derivative corresponding to the sample to a data party, wherein the encrypted derivative corresponding to the sample is used for generating an encrypted derivative aggregate value;
and receiving the encrypted derivative aggregate value sent by the data party, and further updating the parameters of the federal model according to the encrypted derivative aggregate value.
2. The method of claim 1, wherein the jointly encrypting the first derivative and the second derivative of the sample to obtain the encrypted derivative corresponding to the sample comprises:
encoding the first derivative and the second derivative of the sample, respectively;
splicing the encoded first derivative and the encoded second derivative to obtain a spliced derivative corresponding to the sample;
encrypting the spliced derivative corresponding to the sample to obtain the encrypted derivative corresponding to the sample.
3. The method of claim 2, wherein the encoding the first derivative and the second derivative of the sample, respectively, comprises:
acquiring a preset coding strategy, wherein the preset coding strategy comprises a preset sign bit, a preset number of coding bits, and a preset coding mode;
determining target sign bits of the first derivative and the second derivative from the preset sign bit;
and encoding the first derivative and the second derivative, respectively, in the preset coding mode based on the target sign bits and the preset number of coding bits.
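Claim 3's preset sign bit, preset number of coding bits, and preset coding mode could be realized, for example, by fixed-point encoding; the sketch below is one such reading, with the scale and bit widths chosen arbitrarily rather than taken from the patent.

```python
CODING_BITS = 31   # assumed preset number of coding bits (magnitude field)
SCALE = 1 << 20    # assumed fixed-point scale, i.e. the preset coding mode

def encode_derivative(value: float) -> int:
    """Encode a real-valued derivative into a fixed-width unsigned integer."""
    sign = 1 if value < 0 else 0  # the target sign bit of this derivative
    magnitude = min(int(abs(value) * SCALE), (1 << CODING_BITS) - 1)
    return (sign << CODING_BITS) | magnitude

def decode_derivative(code: int) -> float:
    """Invert encode_derivative (up to the fixed-point rounding error)."""
    sign = (code >> CODING_BITS) & 1
    magnitude = code & ((1 << CODING_BITS) - 1)
    return (-magnitude if sign else magnitude) / SCALE
```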
4. The method of claim 1, wherein, before jointly encrypting the first derivative and the second derivative of the sample, the method further comprises:
performing gradient clipping on the derivatives.
5. The method of claim 4, wherein the gradient clipping of the derivative comprises:
if the derivative is greater than the upper limit value of the preset gradient range, converting the derivative into the upper limit value; or,
and if the derivative is smaller than the lower limit value of the preset gradient range, converting the derivative into the lower limit value.
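The clipping rule of claims 4-5 is a plain clamp to a preset gradient range; a minimal sketch (the range itself is a training hyperparameter the patent leaves open):

```python
def clip_derivative(value: float, lower: float, upper: float) -> float:
    """Clamp a derivative into the preset gradient range [lower, upper]."""
    if value > upper:
        return upper  # converted into the upper limit value
    if value < lower:
        return lower  # converted into the lower limit value
    return value
```

For example, with a preset range of [-1.0, 1.0], a derivative of 5.0 is converted into 1.0, which also bounds how many coding bits a single encoded derivative can need.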
6. The method of claim 2, wherein the splicing the encoded first derivative and the encoded second derivative to obtain the spliced derivative corresponding to the sample comprises:
splicing the encoded first derivative and the encoded second derivative to obtain a candidate spliced derivative corresponding to the sample;
and adding a preset identification bit at a preset position of the candidate spliced derivative corresponding to the sample to generate the spliced derivative corresponding to the sample.
7. The method of claim 6, wherein the preset position comprises at least one of a head position of the candidate spliced derivative, a splice position between the encoded first derivative and the encoded second derivative, and a tail position of the candidate spliced derivative.
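One possible reading of claims 6-7, sketched with bit strings for clarity: an identification bit is placed at the head, the splice point, and the tail of the concatenated codes (claim 7 lists these as alternative positions; using all three at once is an illustrative choice, as is the field width).

```python
FIELD_BITS = 32  # assumed width of each encoded derivative

def splice_with_markers(g_code: int, h_code: int) -> str:
    """Concatenate two encoded derivatives with identification bits."""
    g_bits = format(g_code, f"0{FIELD_BITS}b")
    h_bits = format(h_code, f"0{FIELD_BITS}b")
    # "1" markers at the head, splice point, and tail of the candidate splice
    return "1" + g_bits + "1" + h_bits + "1"

def unsplice_with_markers(spliced: str) -> tuple:
    """Strip the markers and recover the two encoded derivatives."""
    g_bits = spliced[1:1 + FIELD_BITS]
    h_bits = spliced[2 + FIELD_BITS:2 + 2 * FIELD_BITS]
    return int(g_bits, 2), int(h_bits, 2)
```

Such markers let the receiver validate field boundaries after decryption before decoding the two derivatives.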
8. The method of claim 2, wherein the further updating the parameters of the federal model according to the encrypted derivative aggregate value comprises:
decrypting the encrypted derivative aggregate value, and decoding the decrypted aggregate value to obtain a corresponding derivative aggregate value;
and further updating the parameters of the federal model based on the derivative aggregate value.
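Why the single decryption in claim 8 yields both derivative sums: under an additively homomorphic scheme, the ciphertext aggregate decrypts to the sum of the spliced plaintexts, and as long as each field has enough headroom the g-sums and h-sums accumulate without overflowing into each other. A sketch with plain addition standing in for the homomorphic sum (the field width is an assumption):

```python
FIELD_BITS = 64
MASK = (1 << FIELD_BITS) - 1

def splice(g: int, h: int) -> int:
    return (g << FIELD_BITS) | h

def decode_aggregate(total: int) -> tuple:
    """Split a decrypted aggregate back into (sum of g, sum of h)."""
    return total >> FIELD_BITS, total & MASK

per_sample = [(3, 7), (5, 11), (2, 4)]  # mock encoded (g, h) per sample
total = sum(splice(g, h) for g, h in per_sample)  # mock homomorphic sum
g_sum, h_sum = decode_aggregate(total)
```

The field therefore has to be sized for the largest possible sum over a set of samples, not just for a single encoded derivative.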
9. The method of any one of claims 1-8, wherein the federal model is a federal secure boosting tree model.
10. A federal model training method, applied to a data party, the method comprising:
cooperating with a business party to train a federal model based on sample feature information, wherein the business party is configured to perform the method of any one of claims 1-9;
receiving an encrypted derivative corresponding to the sample sent by the service party;
generating an encrypted derivative aggregate value corresponding to a plurality of samples according to the encrypted derivatives corresponding to the plurality of samples;
and sending the encrypted derivative aggregate value to the service party, wherein the encrypted derivative aggregate value is used for updating parameters of the federal model.
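On the data side, the aggregation in claim 10 can be done per candidate split (e.g. per feature bin, as in SecureBoost-style tree training — an assumption here, not a statement from the claims); the ciphertexts are only ever added, never decrypted. Plain integer addition stands in for the homomorphic operation below.

```python
from collections import defaultdict

def aggregate_encrypted(bins, enc_derivatives, hom_add=lambda a, b: a + b):
    """Sum the encrypted derivatives of the samples falling into each bin.

    With a real cryptosystem, hom_add would be ciphertext addition and the
    running total would start from an encryption of zero, not the integer 0.
    """
    sums = defaultdict(int)
    for b, cipher in zip(bins, enc_derivatives):
        sums[b] = hom_add(sums[b], cipher)
    return dict(sums)

# three samples fall in bin 0 and one in bin 1; ciphertexts are mock integers
agg = aggregate_encrypted([0, 0, 1, 0], [10, 20, 30, 40])
```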
11. A federal model training apparatus, comprising:
the acquisition module is used for cooperating with a data party to train the federal model based on sample label information and sample feature information, so as to acquire an intermediate result of the federal model, wherein the intermediate result of the federal model comprises a first derivative and a second derivative of a loss function corresponding to each sample;
the encryption module is used for carrying out joint encryption on the first derivative and the second derivative of the sample to obtain an encrypted derivative corresponding to the sample;
the sending module is used for sending the encrypted derivative corresponding to the sample to a data party, wherein the encrypted derivative corresponding to the sample is used for generating an encrypted derivative aggregate value;
and the updating module is used for receiving the encrypted derivative aggregate value sent by the data party and further updating the parameters of the federal model according to the encrypted derivative aggregate value.
12. The apparatus of claim 11, wherein the encryption module comprises:
an encoding unit configured to encode the first derivative and the second derivative of the sample, respectively;
the splicing unit is used for splicing the encoded first derivative and the encoded second derivative to obtain a spliced derivative corresponding to the sample;
and the encryption unit is used for encrypting the spliced derivative corresponding to the sample to obtain the encrypted derivative corresponding to the sample.
13. The apparatus of claim 12, wherein the coding unit is specifically configured to:
acquiring a preset coding strategy, wherein the preset coding strategy comprises preset symbol bits, preset coding bits and a preset coding mode;
obtaining target sign positions of the first derivative and the second derivative from the preset sign positions;
and respectively encoding the first derivative and the second derivative according to the preset encoding mode based on the target sign bit and the preset encoding bit number.
14. The apparatus of claim 11, further comprising: a clipping module, used for performing gradient clipping on the derivatives.
15. The apparatus of claim 14, wherein the clipping module is specifically configured to:
if the derivative is greater than the upper limit value of the preset gradient range, converting the derivative into the upper limit value; or,
and if the derivative is smaller than the lower limit value of the preset gradient range, converting the derivative into the lower limit value.
16. The apparatus of claim 12, wherein the splicing unit is specifically configured to:
splicing the encoded first derivative and the encoded second derivative to obtain a candidate spliced derivative corresponding to the sample;
and adding a preset identification bit at a preset position of the candidate spliced derivative corresponding to the sample to generate the spliced derivative corresponding to the sample.
17. The apparatus of claim 16, wherein the preset position comprises at least one of a leading position of the candidate spliced derivative, a spliced position of the encoded first and second derivatives, and a trailing position of the candidate spliced derivative.
18. The apparatus of claim 12, wherein the updating module is specifically configured to:
decrypting the encrypted derivative aggregate value, and decoding the decrypted encrypted derivative aggregate value to obtain a corresponding derivative aggregate value;
parameters of the federal model are further updated based on the derivative aggregate values.
19. The apparatus of any one of claims 11-18, wherein the federal model is a federal secure boosting tree model.
20. A federal model training apparatus, comprising:
a training module, used for cooperating with a business party to train the federal model based on sample feature information, wherein the business party is configured to perform the method of any one of claims 1-9;
the receiving module is used for receiving the encrypted derivative corresponding to the sample sent by the service party;
the generation module is used for generating an encrypted derivative aggregate value corresponding to a plurality of samples according to the encrypted derivatives corresponding to the plurality of samples;
and the sending module is used for sending the encrypted derivative aggregate value to the service party, wherein the encrypted derivative aggregate value is used for updating the parameters of the federal model.
21. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing a federal model training method according to any one of claims 1-10 when the program is executed.
22. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements a federal model training method according to any of claims 1-10.
CN202110839499.5A 2021-07-23 2021-07-23 Training method and device of federal model and electronic equipment Active CN113592097B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110839499.5A CN113592097B (en) 2021-07-23 2021-07-23 Training method and device of federal model and electronic equipment


Publications (2)

Publication Number Publication Date
CN113592097A CN113592097A (en) 2021-11-02
CN113592097B (en) 2024-02-06

Family

ID=78249367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110839499.5A Active CN113592097B (en) 2021-07-23 2021-07-23 Training method and device of federal model and electronic equipment

Country Status (1)

Country Link
CN (1) CN113592097B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114239863B (en) * 2022-02-24 2022-05-20 腾讯科技(深圳)有限公司 Training method of machine learning model, prediction method and device thereof, and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472745A (en) * 2019-08-06 2019-11-19 Shenzhen Qianhai WeBank Co., Ltd. Information transmission method and device in federated learning
CN110688419A (en) * 2019-10-09 2020-01-14 JD City (Nanjing) Technology Co., Ltd. Federated modeling system and federated modeling method
CN111523681A (en) * 2020-07-03 2020-08-11 Alipay (Hangzhou) Information Technology Co., Ltd. Global feature importance representation method and device, electronic equipment and storage medium
CN112733454A (en) * 2021-01-13 2021-04-30 Xinzhi Digital Technology Co., Ltd. Equipment predictive maintenance method and device based on joint learning
CN113051239A (en) * 2021-03-26 2021-06-29 Beijing Wodong Tianjun Information Technology Co., Ltd. Data sharing method, use method of model applying data sharing method and related equipment
EP3627759B1 (en) * 2017-08-01 2021-07-14 Advanced New Technologies Co., Ltd. Method and apparatus for encrypting data, method and apparatus for training machine learning model, and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12045716B2 (en) * 2019-09-19 2024-07-23 Lucinity ehf Federated learning system and method for detecting financial crime behavior across participating entities


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Principles and Applications of Federated Learning in Privacy Protection of Biomedical Big Data; 窦佐超; 陈峰; 邓杰仁; 陈如梵; 郑灏; 孙琪; 谢康; 沈百荣; 王爽; Journal of Medical Informatics (05); full text *

Also Published As

Publication number Publication date
CN113592097A (en) 2021-11-02

Similar Documents

Publication Publication Date Title
CN110289947A (en) Data transmit consistency desired result method, apparatus, computer equipment and storage medium
CN114648130B (en) Longitudinal federal learning method, device, electronic equipment and storage medium
CN111371544B (en) Prediction method and device based on homomorphic encryption, electronic equipment and storage medium
CN113592097B (en) Training method and device of federal model and electronic equipment
CN105162588A (en) Media file encryption/decryption methods and device
CN112668046A (en) Feature interleaving method, apparatus, computer-readable storage medium, and program product
CN111934873A (en) Bidding file encryption and decryption method and device
CN112149174A (en) Model training method, device, equipment and medium
CN113132394A (en) Request processing system, method and device, storage medium and electronic equipment
CN117978361A (en) Cloud-based privacy computing method and device, electronic equipment and readable medium
CN113704818A (en) Key management method and device for encrypted data storage system and terminal equipment
US9876644B2 (en) Authenticating data packet based on hash image of the data packet in erasure coding-based data transmission
CN113537512A (en) Model training method, device, system, equipment and medium based on federal learning
CN112182112A (en) Block chain based distributed data dynamic storage method and electronic equipment
CN110427768B (en) Private key management method and system
CN114189331B (en) Key storage and reading method, device, equipment and storage medium
CN111970237A (en) Encryption and decryption method, system and medium based on water depth measurement data
CN115906128A (en) Character string processing method, device, equipment and medium
CN115277197B (en) Model ownership verification method, electronic device, medium and program product
CN112149141A (en) Model training method, device, equipment and medium
CN116156072A (en) Steganographic image generation method, steganographic information extraction method and related devices
CN116306905A (en) Semi-supervised non-independent co-distributed federal learning distillation method and device
CN113162628B (en) Data encoding method, data decoding method, terminal and storage medium
CN114448597A (en) Data processing method and device, computer equipment and storage medium
CN110517045B (en) Block chain data processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant