CN109871702A - Federated model training method, system, device and computer-readable storage medium - Google Patents
- Publication number: CN109871702A (application number CN201910121269.8A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a federated model training method, system, device and computer-readable storage medium. The method comprises the steps of: after a coordinating terminal receives the model parameters sent respectively by multiple client terminals, obtaining a weight coefficient corresponding to each set of model parameters according to a preset acquisition rule; aggregating the multiple sets of model parameters with their corresponding weight coefficients to obtain a second global model; detecting whether the second global model converges; and, if the second global model is detected to be in a converged state, determining the second global model as the final result of the federated model training and issuing the parameter-encrypted second global model to the multiple client terminals. The invention enables the coordinating terminal to update the global model according to each client terminal's model parameters and their weight coefficients, improving the prediction performance of the federated model.
Description
Technical field
The present invention relates to the field of machine learning, and more particularly to a federated model training method, system, device and computer-readable storage medium.
Background art
A federated model is a machine learning model built with encryption techniques. In a federated learning system, the participating clients do not have to contribute their own data during model training; instead, each client trains a local model from its own data set and a global model whose parameters are issued in encrypted form by a coordinating terminal, and returns its local model parameters so that the coordinating terminal can aggregate them into an updated global model. The updated global model is issued to the clients again, and this cycle repeats until convergence. Federated learning protects client data privacy by exchanging parameters under an encryption mechanism: neither the client data nor the client's local model itself is transmitted, and the local data cannot be reverse-inferred, so the federated model guarantees data privacy while largely preserving data integrity.
At present, when the coordinating terminal aggregates the local model parameters returned by multiple clients to update the global model, it merely takes a simple average of those parameters and issues the averaged parameters to the clients as the new global model parameters for continued iterative training. In practice, however, because each client's training data differs, the predictive performance of the local models they train is uneven, and the existing simple-average aggregation therefore tends to yield an unsatisfactory global model.
Summary of the invention
The main purpose of the present invention is to provide a federated model training method, system, device and computer-readable storage medium, intended to solve the technical problem that the federated model performs poorly because the existing coordinating terminal updates the global model by simply averaging the model parameters of the multiple federated clients.
To achieve the above object, the present invention provides a federated model training method comprising the steps of:
after the coordinating terminal receives the model parameters sent respectively by multiple client terminals, obtaining the weight coefficient corresponding to each set of model parameters according to a preset acquisition rule, wherein the model parameters are obtained by a client terminal performing federated model training on the parameter-encrypted first global model issued by the coordinating terminal, and the weight coefficient is determined based on the prediction accuracy of the prediction model corresponding to the model parameters;
aggregating the multiple sets of model parameters with their corresponding weight coefficients to obtain a second global model;
detecting whether the second global model converges;
if the second global model is detected to be in a converged state, determining the second global model as the final result of the federated model training, and issuing the parameter-encrypted second global model to the multiple client terminals.
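The steps above state that convergence of the second global model is detected but leave the criterion unspecified. As a purely illustrative sketch (the L2-distance test and the threshold are assumptions, not part of the disclosure), the check might look like:

```python
def has_converged(prev_params, new_params, eps=1e-4):
    """Assumed convergence test: the global model's parameters have
    stopped changing materially between two aggregation rounds."""
    delta = sum((p - q) ** 2 for p, q in zip(prev_params, new_params)) ** 0.5
    return delta < eps
```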
Optionally, the step of obtaining the weight coefficient corresponding to each set of model parameters according to the preset acquisition rule after the coordinating terminal receives the model parameters sent respectively by the multiple client terminals comprises:
after the coordinating terminal receives the model parameters sent respectively by the multiple client terminals, testing against a preset test sample set to obtain the prediction error rate of the prediction model corresponding to each set of model parameters;
based on the prediction error rate of each prediction model and a preset calculation formula, separately calculating the weight coefficient corresponding to each set of model parameters.
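The "preset calculation formula" is not given in the text above; one plausible reading (an assumption for illustration only, not the claimed formula) is to weight each set of model parameters by its prediction model's tested accuracy, normalized so the coefficients sum to one:

```python
def weight_coefficients(error_rates):
    """Assumed formula: weight_i = (1 - error_i) / sum_j (1 - error_j),
    so model parameters from more accurate local models contribute
    more to the aggregated global model."""
    accuracies = [1.0 - e for e in error_rates]
    total = sum(accuracies)
    return [a / total for a in accuracies]
```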
Optionally, the step of testing against the preset test sample set to obtain the prediction error rate of the prediction model corresponding to each set of model parameters comprises:
after the coordinating terminal receives the model parameters sent respectively by the multiple client terminals, inputting the multiple test samples of the preset test sample set into the prediction model corresponding to the model parameters for prediction, to obtain the prediction model's predicted value for each test sample;
according to the multiple predicted values, obtaining the number of test samples in the test sample set whose prediction result is wrong;
determining the ratio of the number of wrongly predicted test samples to the total number of test samples in the test sample set as the prediction error rate of the prediction model.
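The error-rate computation described above (wrongly predicted test samples divided by all test samples) can be sketched as follows; `predict` stands in for the prediction model corresponding to one client's model parameters:

```python
def prediction_error_rate(predict, test_samples, test_labels):
    """Prediction error rate of one client's prediction model on the
    coordinating terminal's test sample set: the ratio of wrongly
    predicted test samples to the total number of test samples."""
    wrong = sum(1 for x, y in zip(test_samples, test_labels)
                if predict(x) != y)
    return wrong / len(test_samples)
```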
Optionally, the step of aggregating the multiple sets of model parameters with their corresponding weight coefficients to obtain the second global model comprises:
multiplying each set of model parameters by its corresponding weight coefficient to obtain multiple products;
adding the multiple products together, and determining the summed result as the model parameters of the second global model, thereby obtaining the second global model.
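The two aggregation steps above (multiply each set of model parameters by its weight coefficient, then add the products) amount to a weighted average. A minimal sketch, representing each client's model parameters as a flat list of numbers (a simplification for illustration):

```python
def aggregate(client_params, weights):
    """Second global model's parameters: the element-wise weighted sum
    of the clients' model parameter vectors."""
    n = len(client_params[0])
    return [sum(w * p[i] for w, p in zip(weights, client_params))
            for i in range(n)]
```

With equal weights this reduces to the simple average of the prior art; unequal, accuracy-derived weights shift the aggregate toward the better local models.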
Optionally, before the step of obtaining the weight coefficient corresponding to each set of model parameters according to the preset acquisition rule after the coordinating terminal receives the model parameters sent respectively by the multiple client terminals, the method further comprises:
sending the parameter-encrypted first global model to the multiple client terminals respectively;
receiving the model parameters sent respectively by the multiple client terminals;
wherein, after receiving the first global model issued by the coordinating terminal, the client terminal predicts a first training sample set according to the first global model to obtain predicted values, samples the first training sample set according to the predicted values to obtain a second training sample set, and trains the first global model based on the second training sample set to obtain the trained model parameters.
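The client-side step above (predict on the first training sample set, then sample it according to the predicted values) does not specify the sampling rule. A hedged sketch, assuming that samples the first global model predicts wrongly are always kept while correctly predicted ones are kept with probability 0.5 (both the rule and the probability are assumptions for illustration):

```python
import random

def resample_training_set(predict, samples, labels, rng=random.Random(0)):
    """Derive the second training sample set from the first, guided by
    the first global model's predictions: keep every misclassified
    sample, keep correctly classified samples with probability 0.5."""
    kept = []
    for x, y in zip(samples, labels):
        if predict(x) != y or rng.random() < 0.5:
            kept.append((x, y))
    return kept
```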
Optionally, after the step of detecting whether the second global model converges, the method further comprises:
if the second global model is detected to be in a non-converged state, issuing the parameter-encrypted second global model to the multiple client terminals respectively, so that the multiple client terminals continue iterative training according to the second global model issued by the coordinating terminal and return model parameters to the coordinating terminal.
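Taken together, the issuing, training, weighting, aggregation and convergence steps form an iterative loop. The interfaces below are illustrative assumptions, not from the patent: each element of `client_train_fns` maps the current global parameters to that client's local parameters, and `test_fn` maps a parameter vector to its prediction error rate on the coordinating terminal's test sample set:

```python
def run_federated_training(client_train_fns, global_params, test_fn,
                           max_rounds=100, eps=1e-6):
    """Coordinator-side sketch of the claimed method (illustrative)."""
    for _ in range(max_rounds):
        # issue the global model; collect each client's local parameters
        params = [train(global_params) for train in client_train_fns]
        # weight coefficient from each local model's tested accuracy
        acc = [1.0 - test_fn(p) for p in params]
        weights = [a / sum(acc) for a in acc]
        # aggregate into the new (second) global model
        new_params = [sum(w * p[i] for w, p in zip(weights, params))
                      for i in range(len(global_params))]
        # converged: the new global model is the final federated model
        if sum((a - b) ** 2 for a, b in zip(global_params, new_params)) < eps:
            return new_params
        global_params = new_params  # not converged: iterate again
    return global_params
```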
Optionally, before the step of obtaining the weight coefficient corresponding to each set of model parameters according to the preset acquisition rule after the coordinating terminal receives the model parameters sent respectively by the multiple client terminals, the method further comprises:
receiving the model parameters sent respectively by the multiple client terminals;
and the step of obtaining the weight coefficient corresponding to each set of model parameters according to the preset acquisition rule comprises:
receiving the weight coefficients, corresponding to the model parameters, sent respectively by the multiple client terminals, wherein the multiple client terminals respectively test against a preset test sample set to obtain the prediction error rate of the prediction model corresponding to their model parameters, and calculate the weight coefficient corresponding to the model parameters according to the prediction error rate and a preset calculation formula.
In addition, the present invention also proposes a federated model training system comprising a coordinating terminal and multiple client terminals each in communication connection with the coordinating terminal, the coordinating terminal comprising:
an acquisition module, configured to obtain, after receiving the model parameters sent respectively by the multiple client terminals, the weight coefficient corresponding to each set of model parameters according to a preset acquisition rule, wherein the model parameters are obtained by a client terminal performing federated model training on the parameter-encrypted first global model issued by the coordinating terminal, and the weight coefficient is determined based on the prediction accuracy of the prediction model corresponding to the model parameters;
an aggregation update module, configured to aggregate the multiple sets of model parameters with their corresponding weight coefficients to obtain a second global model;
a detection module, configured to detect whether the second global model converges;
a determination module, configured to, when the detection module detects that the second global model is in a converged state, determine the second global model as the final result of the federated model training and issue the parameter-encrypted second global model to the multiple client terminals.
Optionally, the acquisition module comprises:
a test unit, configured to test against a preset test sample set, after the model parameters sent respectively by the multiple client terminals are received, to obtain the prediction error rate of the prediction model corresponding to each set of model parameters;
a calculation unit, configured to separately calculate the weight coefficient corresponding to each set of model parameters based on the prediction error rate of each prediction model and a preset calculation formula.
Optionally, the test unit comprises:
a test subunit, configured to input, after the model parameters sent respectively by the multiple client terminals are received, the multiple test samples of the preset test sample set into the prediction model corresponding to the model parameters for prediction, to obtain the prediction model's predicted value for each test sample;
an obtaining subunit, configured to obtain, according to the multiple predicted values, the number of test samples in the test sample set whose prediction result is wrong;
a determination subunit, configured to determine the ratio of the number of wrongly predicted test samples to the total number of test samples in the test sample set as the prediction error rate of the prediction model.
Optionally, the aggregation update module comprises:
a multiplication unit, configured to multiply each set of model parameters by its corresponding weight coefficient to obtain multiple products;
an update unit, configured to add the multiple products together and determine the summed result as the model parameters of the second global model, obtaining the second global model.
Optionally, the coordinating terminal further comprises:
a first issuing module, configured to send the parameter-encrypted first global model to the multiple client terminals respectively;
a receiving module, configured to receive the model parameters sent respectively by the multiple client terminals;
wherein, after receiving the first global model issued by the first issuing module, the client terminal predicts a first training sample set according to the first global model to obtain predicted values, samples the first training sample set according to the predicted values to obtain a second training sample set, and trains the first global model based on the second training sample set to obtain the trained model parameters.
Optionally, the coordinating terminal further comprises:
a second issuing module, configured to issue, when the detection module detects that the second global model is in a non-converged state, the parameter-encrypted second global model to the multiple client terminals respectively, so that the multiple client terminals continue iterative training according to the second global model issued by the second issuing module and return model parameters to the coordinating terminal.
Optionally, the acquisition module is further configured to receive the weight coefficients, corresponding to the model parameters, sent respectively by the multiple client terminals, wherein the multiple client terminals respectively test against a preset test sample set to obtain the prediction error rate of the prediction model corresponding to their model parameters, and calculate the weight coefficient corresponding to the model parameters according to the prediction error rate and a preset calculation formula.
In addition, to achieve the above object, the present invention also proposes a federated model training device comprising a memory, a processor, and a federated model training program stored in the memory and runnable on the processor, the federated model training program implementing the steps of the federated model training method described above when executed by the processor.
In addition, to achieve the above object, the present invention also proposes a computer-readable storage medium storing a federated model training program which, when executed by a processor, implements the steps of the federated model training method described above.
In the present invention, after the coordinating terminal receives the model parameters sent respectively by multiple client terminals, the weight coefficient corresponding to each set of model parameters is obtained according to a preset acquisition rule, wherein the model parameters are obtained by a client terminal performing federated model training on the parameter-encrypted first global model issued by the coordinating terminal, and the weight coefficient is determined based on the prediction accuracy of the prediction model corresponding to the model parameters; the multiple sets of model parameters are aggregated with their corresponding weight coefficients to obtain a second global model; whether the second global model converges is detected; and if the second global model is detected to be in a converged state, the second global model is determined as the final result of the federated model training and the parameter-encrypted second global model is issued to the multiple client terminals. In this way, when the coordinating terminal aggregates the global model from the model parameters returned by the multiple federated clients, it does not simply average the multiple sets of model parameters but updates the global model by combining each set of model parameters with its weight coefficient, determined according to the prediction accuracy of each client terminal's trained model. This improves the prediction performance of the federated model and avoids the technical problem that the federated model performs poorly when the existing coordinating terminal updates the global model by simply averaging the model parameters of the multiple federated clients.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the hardware running environment involved in the embodiments of the present invention;
Fig. 2 is a flow diagram of the first embodiment of the federated model training method of the present invention;
Fig. 3 is a flow diagram of the second embodiment of the federated model training method of the present invention;
Fig. 4 is a flow diagram of the third embodiment of the federated model training method of the present invention;
Fig. 5 is a flow diagram of the fourth embodiment of the federated model training method of the present invention.
The realization of the object, the functional characteristics and the advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
As shown in Fig. 1, Fig. 1 is a structural schematic diagram of the hardware running environment involved in the embodiments of the present invention.
It should be noted that Fig. 1 may be the structural schematic diagram of the hardware running environment of the federated model training device. The federated model training device of the embodiment of the present invention may be a terminal device such as a PC or a portable computer.
As shown in Fig. 1, the federated model training device may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, wherein the communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and optionally may also include standard wired and wireless interfaces. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory, such as a magnetic disk storage, and may optionally also be a storage device independent of the aforementioned processor 1001.
It will be understood by those skilled in the art that the structure of the federated model training device shown in Fig. 1 does not constitute a limitation of the device, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
As shown in Fig. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and a federated model training program, wherein the operating system is a program that manages and controls the hardware and software resources of the federated model training device and supports the running of the federated model training program and of other software or programs.
In the federated model training device shown in Fig. 1, the user interface 1003 is mainly used to connect client terminals and perform data communication with each terminal; the network interface 1004 is mainly used to connect a background server and perform data communication with it; and the processor 1001 may be used to call the federated model training program stored in the memory 1005 and perform the following operations:
after the coordinating terminal receives the model parameters sent respectively by multiple client terminals, obtaining the weight coefficient corresponding to each set of model parameters according to a preset acquisition rule, wherein the model parameters are obtained by a client terminal performing federated model training on the parameter-encrypted first global model issued by the coordinating terminal, and the weight coefficient is determined based on the prediction accuracy of the prediction model corresponding to the model parameters;
aggregating the multiple sets of model parameters with their corresponding weight coefficients to obtain a second global model;
detecting whether the second global model converges;
if the second global model is detected to be in a converged state, determining the second global model as the final result of the federated model training, and issuing the parameter-encrypted second global model to the multiple client terminals.
Further, the step of obtaining the weight coefficient corresponding to each set of model parameters according to the preset acquisition rule after the coordinating terminal receives the model parameters sent respectively by the multiple client terminals comprises:
after the coordinating terminal receives the model parameters sent respectively by the multiple client terminals, testing against a preset test sample set to obtain the prediction error rate of the prediction model corresponding to each set of model parameters;
based on the prediction error rate of each prediction model and a preset calculation formula, separately calculating the weight coefficient corresponding to each set of model parameters.
Further, the step of testing against the preset test sample set to obtain the prediction error rate of the prediction model corresponding to each set of model parameters comprises:
after the coordinating terminal receives the model parameters sent respectively by the multiple client terminals, inputting the multiple test samples of the preset test sample set into the prediction model corresponding to the model parameters for prediction, to obtain the prediction model's predicted value for each test sample;
according to the multiple predicted values, obtaining the number of test samples in the test sample set whose prediction result is wrong;
determining the ratio of the number of wrongly predicted test samples to the total number of test samples in the test sample set as the prediction error rate of the prediction model.
Further, the step of aggregating the multiple sets of model parameters with their corresponding weight coefficients to obtain the second global model comprises:
multiplying each set of model parameters by its corresponding weight coefficient to obtain multiple products;
adding the multiple products together, and determining the summed result as the model parameters of the second global model, thereby obtaining the second global model.
Further, before the step of obtaining the weight coefficient corresponding to each set of model parameters according to the preset acquisition rule after the coordinating terminal receives the model parameters sent respectively by the multiple client terminals, the processor 1001 may also be used to call the federated model training program stored in the memory 1005 and perform the following steps:
sending the parameter-encrypted first global model to the multiple client terminals respectively;
receiving the model parameters sent respectively by the multiple client terminals;
wherein, after receiving the first global model issued by the coordinating terminal, the client terminal predicts a first training sample set according to the first global model to obtain predicted values, samples the first training sample set according to the predicted values to obtain a second training sample set, and trains the first global model based on the second training sample set to obtain the trained model parameters.
Further, after the step of detecting whether the second global model converges, the processor 1001 may also be used to call the federated model training program stored in the memory 1005 and perform the following steps:
if the second global model is detected to be in a non-converged state, issuing the parameter-encrypted second global model to the multiple client terminals respectively, so that the multiple client terminals continue iterative training according to the second global model issued by the coordinating terminal and return model parameters to the coordinating terminal.
Further, before the step of obtaining the weight coefficient corresponding to each set of model parameters according to the preset acquisition rule after the coordinating terminal receives the model parameters sent respectively by the multiple client terminals, the processor 1001 may also be used to call the federated model training program stored in the memory 1005 and perform the following steps:
receiving the model parameters sent respectively by the multiple client terminals;
and the step of obtaining the weight coefficient corresponding to each set of model parameters according to the preset acquisition rule comprises:
receiving the weight coefficients, corresponding to the model parameters, sent respectively by the multiple client terminals, wherein the multiple client terminals respectively test against a preset test sample set to obtain the prediction error rate of the prediction model corresponding to their model parameters, and calculate the weight coefficient corresponding to the model parameters according to the prediction error rate and a preset calculation formula.
Based on the above structure, various embodiments of the federated model training method are proposed.
Referring to Fig. 2, Fig. 2 is a flow diagram of the first embodiment of the federated model training method of the present invention.
The embodiments of the present invention provide embodiments of the federated model training method. It should be noted that, although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from that herein.
The federated model training method includes:
Step S100: after the coordinating terminal receives the model parameters sent respectively by multiple client terminals, obtaining the weight coefficient corresponding to each set of model parameters according to a preset acquisition rule;
wherein the model parameters are obtained by a client terminal performing federated model training on the parameter-encrypted first global model issued by the coordinating terminal, and the weight coefficient is determined based on the prediction accuracy of the prediction model corresponding to the model parameters.
A federated model is a machine learning model built with encryption techniques. To guarantee the confidentiality of the multi-party client data during training, the federated learning system carries out encrypted training through a third-party coordinating terminal. The multiple federated clients in the system do not have to contribute their own data during model training; instead, each client trains a local model from its own local data set and the global model whose parameters are issued in encrypted form by the coordinating terminal, and returns its local model parameters so that the coordinating terminal can aggregate them into an updated global model. The updated global model is issued to the clients again, and this cycle repeats until convergence. Federated learning protects client data privacy by exchanging parameters under an encryption mechanism: neither the client data nor the client's local model itself is transmitted, and the local data cannot be reverse-inferred, so data privacy can be guaranteed while data integrity is largely preserved.
However, when the existing coordinating terminal aggregates the local model parameters returned by multiple clients to update the global model, it merely takes a simple average of those parameters and issues the averaged parameters to the clients as the new global model parameters for continued iterative training. In practice, because each client's training data differs, the predictive performance of the local models they train is uneven, and simple-average aggregation in the prior art therefore leads to an unsatisfactory global model: if the prediction accuracies of the clients' local models in the federated learning system differ greatly, simply averaging the multiple sets of parameters suppresses the contribution of the clients whose local models have high prediction accuracy, and the resulting global model, i.e., the finally obtained federated model, performs poorly.
In the present embodiment, the coordinating terminal encrypts the parameters of the first global model using a preset encryption algorithm and issues the parameter-encrypted first global model to the multiple client terminals in the federated learning system. The preset encryption algorithm is not particularly limited in this embodiment and may be, for example, an asymmetric encryption algorithm. The first global model is the global model obtained after the federated model to be trained in this embodiment has completed several iterative operations.
Further, after receiving the parameter-encrypted first global model issued by the cooperation server, each client trains the first global model on its own local training sample data to obtain its local model. It should be noted that in this embodiment the training sample classes of the multiple clients are independent and identically distributed. Each client then returns the model parameters of its local model to the cooperation server.
After the cooperation server receives the model parameters sent by the multiple clients, it obtains a weight coefficient for each set of model parameters according to a preset acquisition rule. The weight coefficient is determined from the prediction accuracy of the prediction model corresponding to those model parameters. As one implementation, the cooperation server stores a test sample set whose test samples have the same feature dimensions as the training samples of the multiple clients. After receiving the model parameters, the cooperation server uses the test sample set to test the prediction accuracy of the prediction model corresponding to each set of model parameters. Specifically, the test samples of the set are fed into the prediction model returned by each client to obtain that model's predictions on the test set; the number of mispredicted test samples is counted and divided by the total number of test samples in the set, which gives the prediction error rate of the current prediction model. Applying the same method to every client yields the prediction error rate of the prediction model corresponding to each client's model parameters.
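The error-rate test just described reduces to a single ratio. A minimal sketch, where the function and argument names are illustrative rather than taken from the patent:

```python
def prediction_error_rate(model, test_samples, test_labels):
    """Fraction of test samples that the client's returned model mispredicts."""
    wrong = sum(1 for x, y in zip(test_samples, test_labels)
                if model(x) != y)
    return wrong / len(test_samples)
```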
Further, when the weight coefficient with which a set of model parameters participates in the global model aggregation is determined from its prediction error rate, the weight coefficient is negatively correlated with the error rate: the smaller the prediction error rate of the prediction model corresponding to a set of model parameters, the larger that set's weight coefficient. When the cooperation server of this embodiment aggregates the model parameters sent by the multiple clients, it increases the parameter weight of models with high prediction accuracy and decreases the parameter weight of models with low prediction accuracy, which guarantees that the global model updated in this way improves every client's model. As one implementation, the weight coefficient can be computed from a preset calculation formula, where ε_i is the prediction error rate of the prediction model of the i-th client, α_i is the weight coefficient corresponding to the model parameters of the prediction model of the i-th client, and i is an integer greater than zero. The cooperation server thereby obtains the weight coefficient corresponding to each set of model parameters.
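The patent's concrete formula for α_i is not reproduced in this text, so the sketch below substitutes one common choice that satisfies the stated property (weights negatively correlated with ε_i and summing to one): normalized accuracies. The formula is an assumption, not the patent's.

```python
def weight_coefficients(error_rates):
    """Weight coefficients negatively correlated with the error rates.
    Assumed form (not the patent's): alpha_i = (1 - eps_i) / sum_j (1 - eps_j)."""
    accuracies = [1.0 - e for e in error_rates]
    total = sum(accuracies)
    return [a / total for a in accuracies]
```

A client with a lower tested error rate thus contributes a strictly larger share to the aggregated global model.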
It should be noted that in other embodiments the multiple clients in the federated learning system may store the same test sample set, whose test samples have the same feature dimensions as the clients' training samples. The weight coefficient corresponding to each client's model parameters can then be obtained by the client itself: the client tests the prediction error rate of its prediction model against the locally stored test sample set, derives the weight coefficient for its model parameters, and sends that weight coefficient to the cooperation server together with the model parameters, for the cooperation server to use in aggregation. This embodiment places no particular limit here. Further, the method of computing the weight coefficient is not limited to the one described in this embodiment; other embodiments may set a corresponding computation rule as required.
Step S200: aggregating, according to the multiple sets of model parameters and the weight coefficient corresponding to each set, to obtain a second global model;
In this embodiment, the prediction error rate of the prediction model corresponding to each set of model parameters is negatively correlated with that set's weight coefficient: the smaller the error rate, the larger the weight coefficient, and the larger the error rate, the smaller the weight coefficient. Each set of model parameters is multiplied by its corresponding weight coefficient, and the weighted sets of model parameters are then summed to obtain the parameters of a new global model, i.e. the second global model.
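The two aggregation steps (multiply each parameter set by its weight, then sum) can be sketched as follows; representing each client's parameters as a plain list of floats is an illustrative simplification:

```python
def aggregate(param_sets, weights):
    """Weighted sum of clients' parameter vectors: theta_new = sum_i alpha_i * theta_i."""
    dim = len(param_sets[0])
    return [sum(w * params[j] for w, params in zip(weights, param_sets))
            for j in range(dim)]
```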
When the cooperation server of this embodiment aggregates the model parameters sent by the multiple clients, it increases the parameter weight of models with high prediction accuracy and decreases the parameter weight of models with low prediction accuracy. The new global model updated in this way guarantees that every client's model improves, avoiding the unsatisfactory federated model that results when an existing cooperation server updates the global model by simply averaging the model parameters of the multiple federated clients.
Step S300: detecting whether the second global model converges;
In this embodiment, as one implementation, the cooperation server obtains the loss value of the second global model from its loss function and judges convergence from that loss value. Specifically, the cooperation server stores the first loss value obtained under the first global model, computes a second loss value from the loss function of the second global model, calculates the difference between the first and second loss values, and judges whether the difference is less than or equal to a preset threshold. If the difference is less than or equal to the preset threshold, the second global model is determined to be in a converged state and the federated model training is complete. In practice, the preset threshold can be set according to the user's needs; this embodiment does not particularly limit it.
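The loss-difference convergence test is a one-line comparison; a minimal sketch with an illustrative default threshold:

```python
def has_converged(prev_loss, curr_loss, threshold=1e-4):
    """Converged when the loss change between consecutive global models
    falls within the preset threshold."""
    return abs(prev_loss - curr_loss) <= threshold
```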
Step S400: if the second global model is detected to be in a converged state, determining the second global model as the final result of the federated model training, and issuing the parameter-encrypted second global model to the multiple clients.

If the second global model is detected to be in a converged state, the federated model training is complete and the second global model is determined as its final result. The cooperation server issues the parameter-encrypted second global model to the multiple clients; that is, the multiple clients gain improved local models without having to hand over their own data, and prediction accuracy is improved while data privacy is ensured.
In this embodiment, after the cooperation server receives the model parameters sent by the multiple clients, it obtains the weight coefficient corresponding to each set of model parameters according to a preset acquisition rule, where the model parameters are obtained by the clients performing federated model training on the parameter-encrypted first global model issued by the cooperation server, and the weight coefficient is determined from the prediction accuracy of the prediction model corresponding to the model parameters; it aggregates the multiple sets of model parameters with their corresponding weight coefficients to obtain a second global model; it detects whether the second global model converges; and if the second global model is detected to be in a converged state, it determines the second global model as the final result of the federated model training and issues the parameter-encrypted second global model to the multiple clients. Thus, when the cooperation server aggregates the global model from the model parameters returned by the federated parties, it does not simply average the multiple sets of parameters, but updates the global model using the weight coefficient of each set, determined from the prediction accuracy of each client's trained model. This improves the prediction quality of the federated model and avoids the unsatisfactory federated model that results when an existing cooperation server updates the global model by simply averaging the model parameters of the multiple federated clients.
Further, a second embodiment of the federated model training method of the present invention is proposed.

Referring to Fig. 3, Fig. 3 is a flow diagram of the second embodiment of the federated model training method of the present invention. Based on the embodiment shown in Fig. 2, in this embodiment step S100, in which the cooperation server obtains the weight coefficient corresponding to each set of model parameters according to a preset acquisition rule after receiving the model parameters sent by the multiple clients, includes:
Step S101: after the cooperation server receives the model parameters sent by the multiple clients, testing against a preset test sample set to obtain the prediction error rate of the prediction model corresponding to each set of model parameters;

Step S102: based on the prediction error rate of each prediction model and a preset calculation formula, separately computing the weight coefficient corresponding to each set of model parameters.
Specifically, in this embodiment the cooperation server stores a test sample set containing multiple test samples, which have the same feature dimensions as the local training samples of the multiple clients. The cooperation server feeds the test samples into the prediction model returned by each client to obtain that model's predictions on the test set, counts the mispredicted test samples, and divides that count by the total number of test samples in the set to obtain the prediction error rate of the current prediction model. Applying the same method to every client yields the prediction error rate of the prediction model corresponding to each client's model parameters.
In this embodiment, the preset calculation formula takes ε_i as the prediction error rate of the prediction model of the i-th client and α_i as the weight coefficient corresponding to the model parameters of the prediction model of the i-th client, with i an integer greater than zero. After computing the prediction error rate of each client's prediction model, the cooperation server substitutes each error rate into the calculation formula; the results are the weight coefficients corresponding to the respective sets of model parameters.
Further, based on the embodiment shown in Fig. 2, in this embodiment step S200, aggregating according to the multiple sets of model parameters and the weight coefficient corresponding to each set to obtain the second global model, includes:

Step S201: multiplying each set of model parameters by its corresponding weight coefficient, obtaining the multiple weighted results;

Step S202: summing the multiple weighted results, determining the sum as the model parameters of the second global model, and thereby obtaining the second global model.
In this embodiment, each set of model parameters is multiplied by its corresponding weight coefficient, and the weighted sets of model parameters are then summed to obtain the parameters of a new global model, i.e. the second global model.

When the cooperation server of this embodiment aggregates the model parameters sent by the multiple clients, it increases the parameter weight of models with high prediction accuracy and decreases the parameter weight of models with low prediction accuracy. The new global model updated in this way guarantees that every client's model improves, avoiding the unsatisfactory federated model that results when an existing cooperation server updates the global model by simply averaging the model parameters of the multiple federated clients.
Further, a third embodiment of the federated model training method of the present invention is proposed.

Referring to Fig. 4, Fig. 4 is a flow diagram of the third embodiment of the federated model training method of the present invention. Based on the embodiment shown in Fig. 2, in this embodiment, before step S100, in which the cooperation server obtains the weight coefficient corresponding to each set of model parameters according to a preset acquisition rule after receiving the model parameters sent by the multiple clients, the method further includes:

Step S110: sending the parameter-encrypted first global model to the multiple clients;

Step S120: receiving the model parameters sent by the multiple clients;

wherein after receiving the first global model issued by the cooperation server, a client predicts on a first training sample set with the first global model to obtain predicted values, samples the first training sample set according to the predicted values to obtain a second training sample set, and trains the first global model on the second training sample set to obtain the model parameters after training.
In this embodiment, as one implementation, suppose the federated model to be trained has completed the k-th round of iteration and obtained the first global model model_k, where k is an integer greater than zero. The federated model training method of this embodiment then specifically includes the following steps:

Step a: the cooperation server sends the parameter-encrypted first global model model_k to each federated client;

Step b: the i-th client receives model_k and uses model_k to predict on its local training sample set X_i. According to the prediction results, the training samples in X_i are divided into two sets: the mispredicted sample data set (x_1, x_2, ..., x_n) and the correctly predicted sample data set (y_1, y_2, ..., y_m), where n is the number of mispredicted samples in X_i and m the number of correctly predicted samples; n may equal m, which this embodiment does not particularly limit. Hence:

X_i = (x_1, x_2, ..., x_n) ∪ (y_1, y_2, ..., y_m) and (x_1, x_2, ..., x_n) ∩ (y_1, y_2, ..., y_m) = ∅.

Before training model_k, the i-th client first samples X_i: specifically, it keeps the entire mispredicted sample data set (x_1, x_2, ..., x_n) and extracts a subset (y_1, y_2, ..., y_t), t < m, from the correctly predicted sample data set (y_1, y_2, ..., y_m), forming the sampled training data set Y_i, i.e. Y_i = (x_1, x_2, ..., x_n) ∪ (y_1, y_2, ..., y_t), t < m. It then trains model_k on the training data set Y_i, obtaining a new local prediction model after training.
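Step b's sampling rule can be sketched as follows. The patent only requires keeping every mispredicted sample plus a strict subset of the correctly predicted ones; the `keep_ratio` parameter and random subset selection here are illustrative assumptions:

```python
import random

def resample_training_set(model, samples, labels, keep_ratio=0.5, seed=0):
    """Keep every mispredicted sample and a random subset of the correctly
    predicted ones, so the next round of local training weights the hard
    examples more heavily."""
    wrong = [(x, y) for x, y in zip(samples, labels) if model(x) != y]
    right = [(x, y) for x, y in zip(samples, labels) if model(x) == y]
    rng = random.Random(seed)
    kept = rng.sample(right, int(len(right) * keep_ratio)) if right else []
    return wrong + kept
```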
Step c: the i-th client sends the local prediction model obtained after training to the cooperation server. The cooperation server feeds the test samples of its stored test sample set into the prediction model returned by the i-th client, obtains that model's predictions on the test set, counts the mispredicted test samples, and divides the count by the total number of test samples in the set, obtaining the prediction error rate of the prediction model of the i-th client;

Step d: the cooperation server substitutes the computed prediction error rate of the prediction model of the i-th client into the calculation formula, obtaining the weight coefficient corresponding to the model parameters of the i-th client;

Step e: the cooperation server aggregates the model parameters sent by each client with the computed weight coefficient corresponding to each set of model parameters, updating the first global model model_k into the second global model model_{k+1}, where Q is the total number of federated clients in this embodiment. The cooperation server then detects whether model_{k+1} converges; if it converges, model_{k+1} is taken as the final training result of the federated model of this embodiment, and the encrypted model parameters of model_{k+1} are issued to each client.

If the cooperation server detects that model_{k+1} has not converged, steps a to e are repeated until the federated model converges.
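Steps a to e can be sketched as one training loop. This is an illustrative driver, not the patent's implementation: the callbacks stand in for the clients' local training, the cooperation server's accuracy test, the (unreproduced) weight formula inside `aggregate_fn`, and the global loss function:

```python
def run_federated_training(global_params, client_train_fns, test_fn,
                           aggregate_fn, loss_fn, threshold=1e-4,
                           max_rounds=50):
    """Issue the global model, collect each client's locally trained
    parameters, test them, aggregate with accuracy-based weights, and
    stop when the loss change falls within the threshold."""
    prev_loss = loss_fn(global_params)
    for _ in range(max_rounds):
        # steps a-b: clients train locally on the issued global model
        updates = [train(global_params) for train in client_train_fns]
        # step c: cooperation server tests each returned model
        error_rates = [test_fn(u) for u in updates]
        # steps d-e: weight by tested accuracy and aggregate
        global_params = aggregate_fn(updates, error_rates)
        curr_loss = loss_fn(global_params)
        if abs(prev_loss - curr_loss) <= threshold:
            break  # converged: this global model is the final federated model
        prev_loss = curr_loss
    return global_params
```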
When a client of this embodiment trains on the parameter-encrypted first global model issued by the cooperation server, the client first predicts on its local first training sample set with the first global model to obtain predicted values, then samples the first training sample set according to the predicted values to obtain a second training sample set, trains the first global model on the second training sample set, obtains the model parameters after training, and returns the model parameters to the cooperation server. The mispredicted sample data are thus given greater weight in the next iteration, which optimizes the performance of the locally trained model, improves the quality of the model parameters each client sends to the cooperation server, and thereby improves the prediction accuracy of the global model, i.e. the federated model of this embodiment.
Further, a fourth embodiment of the federated model training method of the present invention is proposed.

Referring to Fig. 5, Fig. 5 is a flow diagram of the fourth embodiment of the federated model training method of the present invention. Based on the embodiment shown in Fig. 2, in this embodiment, after step S300 of detecting whether the second global model converges, the method further includes:

Step S500: if the second global model is detected to be in an unconverged state, issuing the parameter-encrypted second global model to the multiple clients, so that the multiple clients continue iterative training according to the second global model issued by the cooperation server and return model parameters to the cooperation server.
In this embodiment, if the second global model is detected to be in an unconverged state, the parameter-encrypted second global model is issued to the multiple clients. After the cooperation server receives the model parameters sent by the multiple clients, it obtains the weight coefficient corresponding to each set of model parameters according to the preset acquisition rule, where the model parameters are obtained by the clients performing federated model training on the parameter-encrypted second global model issued by the cooperation server, and the weight coefficient is determined from the prediction accuracy of the prediction model corresponding to the model parameters. The cooperation server aggregates the multiple sets of model parameters with their corresponding weight coefficients to obtain a third global model and detects whether the third global model converges. If the third global model is detected to be in a converged state, the third global model is determined as the final result of the federated model training and the parameter-encrypted third global model is issued to the multiple clients; if the third global model is detected to be in an unconverged state, the parameter-encrypted third global model is issued to the multiple clients, and the steps of any of the above embodiments of the present invention are repeated to continue training until the model converges.
Further, a fifth embodiment of the federated model training method of the present invention is proposed.

Based on the embodiment shown in Fig. 2, in this embodiment, before step S100, in which the cooperation server obtains the weight coefficient corresponding to each set of model parameters according to the preset acquisition rule after receiving the model parameters sent by the multiple clients, the method further includes:

receiving the model parameters sent by the multiple clients;

and step S100, obtaining the weight coefficient corresponding to each set of model parameters according to the preset acquisition rule after the cooperation server receives the model parameters sent by the multiple clients, includes:

receiving the weight coefficients, each corresponding to a set of model parameters, sent by the multiple clients; wherein the multiple clients each test against a preset test sample set to obtain the prediction error rate of the prediction model corresponding to their model parameters, and compute the weight coefficient corresponding to their model parameters from the prediction error rate and a preset calculation formula.
In this embodiment, as one implementation, the weight coefficient corresponding to each client's model parameters is computed at each client separately. The multiple clients in the federated learning system store the same test sample set, whose test samples have the same feature dimensions as the clients' training samples. Each client tests the prediction error rate of its prediction model against the locally stored test sample set, derives the weight coefficient corresponding to its model parameters, and sends that weight coefficient to the cooperation server together with the model parameters, for the cooperation server to aggregate into the global model.
In addition, an embodiment of the present invention further proposes a federated model training system. The system comprises a cooperation server and multiple clients each in communication connection with the cooperation server, the cooperation server comprising:

an acquisition module, configured to obtain, after the model parameters sent by the multiple clients are received, the weight coefficient corresponding to each set of model parameters according to a preset acquisition rule; wherein the model parameters are obtained by the clients performing federated model training on the parameter-encrypted first global model issued by the cooperation server, and the weight coefficient is determined from the prediction accuracy of the prediction model corresponding to the model parameters;

an aggregation update module, configured to aggregate the multiple sets of model parameters with the weight coefficient corresponding to each set to obtain a second global model;

a detection module, configured to detect whether the second global model converges;

a determining module, configured to, when the detection module detects that the second global model is in a converged state, determine the second global model as the final result of the federated model training and issue the parameter-encrypted second global model to the multiple clients.
Preferably, the acquisition module comprises:

a test unit, configured to test, after the model parameters sent by the multiple clients are received, against a preset test sample set to obtain the prediction error rate of the prediction model corresponding to each set of model parameters;

a computation unit, configured to separately compute, based on the prediction error rate of each prediction model and a preset calculation formula, the weight coefficient corresponding to each set of model parameters.
Preferably, the test unit comprises:

a test subunit, configured to feed, after the model parameters sent by the multiple clients are received, the test samples of a preset test sample set into the prediction model corresponding to each set of model parameters for prediction, obtaining the prediction model's predicted value for each test sample;

an obtaining subunit, configured to obtain, according to the multiple predicted values, the number of mispredicted test samples in the test sample set;

a determining subunit, configured to determine the ratio of the number of mispredicted test samples to the total number of test samples in the test sample set as the prediction error rate of the prediction model.
Preferably, in the preset calculation formula, ε_i is the prediction error rate of the prediction model of the i-th client, α_i is the weight coefficient corresponding to the model parameters of the prediction model of the i-th client, and i is an integer greater than zero.
Preferably, the aggregation update module comprises:

a multiplication unit, configured to multiply each set of model parameters by its corresponding weight coefficient, obtaining the multiple weighted results;

an updating unit, configured to sum the multiple weighted results and determine the sum as the model parameters of the second global model, obtaining the second global model.
Preferably, the cooperation server further comprises:

a first issuing module, configured to send the parameter-encrypted first global model to the multiple clients;

a receiving module, configured to receive the model parameters sent by the multiple clients;

wherein after a client receives the first global model issued by the first issuing module, the client predicts on a first training sample set with the first global model to obtain predicted values, samples the first training sample set according to the predicted values to obtain a second training sample set, and trains the first global model on the second training sample set, obtaining the model parameters after training.
Preferably, the cooperation server further comprises:

a second issuing module, configured to issue, when the detection module detects that the second global model is in an unconverged state, the parameter-encrypted second global model to the multiple clients, so that the multiple clients continue iterative training according to the second global model issued by the second issuing module and return model parameters to the cooperation server.
Preferably, the acquisition module is further configured to receive the weight coefficients, each corresponding to a set of model parameters, sent by the multiple clients; wherein the multiple clients each test against a preset test sample set to obtain the prediction error rate of the prediction model corresponding to their model parameters, and compute the weight coefficient corresponding to their model parameters from the prediction error rate and a preset calculation formula.
The specific embodiments of the federated model training system of the present invention are substantially the same as the above embodiments of the federated model training method and are not repeated here.
In addition, an embodiment of the present invention further proposes a computer-readable storage medium on which a federated model training program is stored; when the federated model training program is executed by a processor, the steps of the federated model training method as described above are implemented.
The specific embodiments of the computer-readable storage medium of the present invention are substantially the same as the above embodiments of the federated model training method and are not repeated here.
It should be noted that, herein, the terms "include", "comprise", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further restrictions, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device comprising that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc) and including several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (16)
1. A federated model training method, characterized in that the federated model training method comprises the following steps:
after a coordinating terminal receives model parameters respectively sent by a plurality of client terminals, obtaining a weight coefficient corresponding to each model parameter according to a preset acquisition rule; wherein each model parameter is obtained by a client terminal performing federated model training on a parameter-encrypted first global model issued by the coordinating terminal, and each weight coefficient is determined based on the prediction accuracy of the prediction model corresponding to the model parameter;
aggregating the plurality of model parameters with their corresponding weight coefficients to obtain a second global model;
detecting whether the second global model has converged; and
if the second global model is detected to be in a converged state, determining the second global model as the final result of the federated model training, and issuing the parameter-encrypted second global model to the plurality of client terminals.
2. The federated model training method according to claim 1, characterized in that the step of, after the coordinating terminal receives the model parameters respectively sent by the plurality of client terminals, obtaining the weight coefficient corresponding to each model parameter according to the preset acquisition rule comprises:
after the coordinating terminal receives the model parameters respectively sent by the plurality of client terminals, testing and obtaining the prediction error rate of the prediction model corresponding to each model parameter according to a preset test sample set; and
calculating the weight coefficient corresponding to each model parameter based on the prediction error rate of each prediction model and a preset calculation formula.
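The error-rate-to-weight step of claim 2 can be sketched as follows. Claim 2 only requires "a preset calculation formula" and does not disclose it here, so the normalized-accuracy formula below (weight proportional to 1 minus the error rate) is purely an illustrative assumption, as is the function name.

```python
def weight_coefficients(error_rates):
    """Turn each prediction model's error rate into a weight coefficient.

    Illustrative formula (an assumption, not the patent's "preset
    calculation formula"): weight proportional to accuracy = 1 - error
    rate, normalized so the weights sum to 1.
    """
    accuracies = [1.0 - e for e in error_rates]
    total = sum(accuracies)
    if total == 0:
        # Degenerate case: every model misclassified everything;
        # fall back to uniform weights.
        return [1.0 / len(error_rates)] * len(error_rates)
    return [a / total for a in accuracies]
```

Under this choice, a model with a lower prediction error rate contributes more to the aggregated second global model, which matches the claim's requirement that the weight be determined by prediction accuracy.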
3. The federated model training method according to claim 2, characterized in that the step of, after the coordinating terminal receives the model parameters respectively sent by the plurality of client terminals, testing and obtaining the prediction error rate of the prediction model corresponding to each model parameter according to the preset test sample set comprises:
after the coordinating terminal receives the model parameters respectively sent by the plurality of client terminals, inputting a plurality of test samples in the preset test sample set into the prediction model corresponding to each model parameter for prediction, to obtain the predicted value of the prediction model for each test sample;
obtaining, according to the plurality of predicted values, the number of test samples in the test sample set for which the prediction result is wrong; and
determining the ratio of the number of test samples with wrong prediction results to the total number of test samples in the test sample set as the prediction error rate of the prediction model.
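The error-rate test of claim 3 reduces to a simple count. A minimal sketch, assuming the prediction model can be represented as a callable from a sample to a predicted label:

```python
def prediction_error_rate(model, test_samples, labels):
    """Error rate per claim 3: run every sample of the test sample set
    through the prediction model, count the samples whose prediction is
    wrong, and divide by the total number of test samples."""
    predictions = [model(x) for x in test_samples]
    wrong = sum(1 for p, y in zip(predictions, labels) if p != y)
    return wrong / len(test_samples)
```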
4. The federated model training method according to claim 1, characterized in that the step of aggregating the plurality of model parameters with their corresponding weight coefficients to obtain the second global model comprises:
multiplying each model parameter by its corresponding weight coefficient to obtain a plurality of products; and
adding the plurality of products, and determining the sum as the model parameter of the second global model, to obtain the second global model.
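The multiply-then-sum aggregation of claim 4 can be sketched as a weighted coordinate-wise sum. For illustration the model parameters are plain lists of floats; in the patent they are the (encrypted) parameters returned by the client terminals.

```python
def aggregate(model_params, weights):
    """Aggregation per claim 4: multiply each client's parameter vector by
    its weight coefficient, sum the products coordinate-wise, and take the
    sum as the parameter vector of the second global model."""
    dim = len(model_params[0])
    global_params = [0.0] * dim
    for params, w in zip(model_params, weights):
        for i, p in enumerate(params):
            global_params[i] += w * p
    return global_params
```

With uniform weights this reduces to plain parameter averaging; the accuracy-based weights let better-performing clients dominate the second global model.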
5. The federated model training method according to any one of claims 1-4, characterized in that, before the step of, after the coordinating terminal receives the model parameters respectively sent by the plurality of client terminals, obtaining the weight coefficient corresponding to each model parameter according to the preset acquisition rule, the method further comprises:
sending the parameter-encrypted first global model respectively to the plurality of client terminals; and
receiving the model parameters respectively sent by the plurality of client terminals;
wherein, after receiving the first global model issued by the coordinating terminal, each client terminal predicts a first training sample set according to the first global model to obtain predicted values, samples the first training sample set according to the predicted values to obtain a second training sample set, and trains the first global model based on the second training sample set to obtain the trained model parameters.
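The client-side step of claim 5 can be sketched as below. Claim 5 does not fix how the predicted values drive the sampling; retaining the samples that the issued global model currently gets wrong is one plausible reading, used here only as an illustration. `global_model_predict`, `train_fn`, and the (sample, label) pair format are all assumptions.

```python
def client_update(global_model_predict, train_fn, first_training_set):
    """Client step per claim 5 (sketch): predict the first training sample
    set with the issued global model, sample a second training set from the
    predicted values, train on it, and return the model parameters."""
    # Predict every sample of the first training set with the global model.
    predictions = [(x, y, global_model_predict(x)) for x, y in first_training_set]
    # Illustrative sampling rule (an assumption): keep the mispredicted samples.
    second_training_set = [(x, y) for x, y, p in predictions if p != y]
    # Train on the second training set; train_fn returns the model parameters.
    return train_fn(second_training_set)
```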
6. The federated model training method according to claim 1, characterized in that, after the step of detecting whether the second global model has converged, the method further comprises:
if the second global model is detected to be in a non-converged state, issuing the parameter-encrypted second global model respectively to the plurality of client terminals, so that the plurality of client terminals continue iterative training according to the second global model issued by the coordinating terminal and return model parameters to the coordinating terminal.
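Claims 1 and 6 together describe an iterate-until-convergence loop at the coordinating terminal, which can be sketched as follows. Parameter encryption is omitted, and every name here (`clients`, `aggregate_round`, `has_converged`, `max_rounds`) is an assumption for illustration.

```python
def federated_training(clients, aggregate_round, has_converged, initial_model,
                       max_rounds=100):
    """Training loop per claims 1 and 6 (sketch): issue the current global
    model to every client terminal, aggregate the returned model parameters
    into a new global model, finish when it converges, otherwise issue the
    new global model and iterate."""
    global_model = initial_model
    for _ in range(max_rounds):
        # Each client trains on the issued global model and returns parameters.
        client_params = [client(global_model) for client in clients]
        new_model = aggregate_round(client_params)
        if has_converged(global_model, new_model):
            return new_model       # final result of the federated training
        global_model = new_model   # not converged: issue again and iterate
    return global_model
```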
7. The federated model training method according to claim 1, characterized in that, before the step of, after the coordinating terminal receives the model parameters respectively sent by the plurality of client terminals, obtaining the weight coefficient corresponding to each model parameter according to the preset acquisition rule, the method further comprises:
receiving the model parameters respectively sent by the plurality of client terminals;
and the step of, after the coordinating terminal receives the model parameters respectively sent by the plurality of client terminals, obtaining the weight coefficient corresponding to each model parameter according to the preset acquisition rule comprises:
receiving the weight coefficients, corresponding to the model parameters, respectively sent by the plurality of client terminals; wherein the plurality of client terminals respectively test and obtain, according to a preset test sample set, the prediction error rate of the prediction model corresponding to their model parameter, and calculate the weight coefficient corresponding to the model parameter according to the prediction error rate and a preset calculation formula.
8. A federated model training system, characterized in that the system comprises a coordinating terminal and a plurality of client terminals respectively in communication connection with the coordinating terminal, the coordinating terminal comprising:
an obtaining module, configured to, after receiving model parameters respectively sent by the plurality of client terminals, obtain a weight coefficient corresponding to each model parameter according to a preset acquisition rule; wherein each model parameter is obtained by a client terminal performing federated model training on a parameter-encrypted first global model issued by the coordinating terminal, and each weight coefficient is determined based on the prediction accuracy of the prediction model corresponding to the model parameter;
an aggregation update module, configured to aggregate the plurality of model parameters with their corresponding weight coefficients to obtain a second global model;
a detection module, configured to detect whether the second global model has converged; and
a determining module, configured to, if the detection module detects that the second global model is in a converged state, determine the second global model as the final result of the federated model training and issue the parameter-encrypted second global model to the plurality of client terminals.
9. The federated model training system according to claim 8, characterized in that the obtaining module comprises:
a test unit, configured to, after the model parameters respectively sent by the plurality of client terminals are received, test and obtain the prediction error rate of the prediction model corresponding to each model parameter according to a preset test sample set; and
a calculation unit, configured to calculate the weight coefficient corresponding to each model parameter based on the prediction error rate of each prediction model and a preset calculation formula.
10. The federated model training system according to claim 9, characterized in that the test unit comprises:
a test subunit, configured to, after the model parameters respectively sent by the plurality of client terminals are received, input a plurality of test samples in the preset test sample set into the prediction model corresponding to each model parameter for prediction, to obtain the predicted value of the prediction model for each test sample;
an obtaining subunit, configured to obtain, according to the plurality of predicted values, the number of test samples in the test sample set for which the prediction result is wrong; and
a determining subunit, configured to determine the ratio of the number of test samples with wrong prediction results to the total number of test samples in the test sample set as the prediction error rate of the prediction model.
11. The federated model training system according to claim 8, characterized in that the aggregation update module comprises:
a multiplication unit, configured to multiply each model parameter by its corresponding weight coefficient to obtain a plurality of products; and
an update unit, configured to add the plurality of products and determine the sum as the model parameter of the second global model, to obtain the second global model.
12. The federated model training system according to any one of claims 8-11, characterized in that the coordinating terminal further comprises:
a first issuing module, configured to send the parameter-encrypted first global model respectively to the plurality of client terminals; and
a receiving module, configured to receive the model parameters respectively sent by the plurality of client terminals;
wherein, after receiving the first global model issued by the first issuing module, each client terminal predicts a first training sample set according to the first global model to obtain predicted values, samples the first training sample set according to the predicted values to obtain a second training sample set, and trains the first global model based on the second training sample set to obtain the trained model parameters.
13. The federated model training system according to claim 8, characterized in that the coordinating terminal further comprises:
a second issuing module, configured to, if the detection module detects that the second global model is in a non-converged state, issue the parameter-encrypted second global model respectively to the plurality of client terminals, so that the plurality of client terminals continue iterative training according to the second global model issued by the second issuing module and return model parameters to the coordinating terminal.
14. The federated model training system according to claim 8, characterized in that the obtaining module is further configured to receive the weight coefficients, corresponding to the model parameters, respectively sent by the plurality of client terminals; wherein the plurality of client terminals respectively test and obtain, according to a preset test sample set, the prediction error rate of the prediction model corresponding to their model parameter, and calculate the weight coefficient corresponding to the model parameter according to the prediction error rate and a preset calculation formula.
15. A federated model training device, characterized in that the federated model training device comprises a memory, a processor, and a federated model training program stored on the memory and executable on the processor, wherein the federated model training program, when executed by the processor, implements the steps of the federated model training method according to any one of claims 1 to 7.
16. A computer-readable storage medium, characterized in that a federated model training program is stored on the computer-readable storage medium, and the federated model training program, when executed by a processor, implements the steps of the federated model training method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910121269.8A CN109871702A (en) | 2019-02-18 | 2019-02-18 | Federal model training method, system, equipment and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910121269.8A CN109871702A (en) | 2019-02-18 | 2019-02-18 | Federal model training method, system, equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109871702A true CN109871702A (en) | 2019-06-11 |
Family
ID=66918815
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910121269.8A Pending CN109871702A (en) | 2019-02-18 | 2019-02-18 | Federal model training method, system, equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109871702A (en) |
Cited By (62)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110334544A (en) * | 2019-06-26 | 2019-10-15 | 深圳前海微众银行股份有限公司 | Federal model degeneration processing method, device, federal training system and storage medium |
CN110378487A (en) * | 2019-07-18 | 2019-10-25 | 深圳前海微众银行股份有限公司 | Laterally model parameter verification method, device, equipment and medium in federal study |
CN110380917A (en) * | 2019-08-26 | 2019-10-25 | 深圳前海微众银行股份有限公司 | Control method, device, terminal device and the storage medium of federal learning system |
CN110378749A (en) * | 2019-07-25 | 2019-10-25 | 深圳前海微众银行股份有限公司 | Appraisal procedure, device, terminal device and the storage medium of user data similitude |
CN110443416A (en) * | 2019-07-30 | 2019-11-12 | 卓尔智联(武汉)研究院有限公司 | Federal model building device, method and readable storage medium storing program for executing based on shared data |
CN110442457A (en) * | 2019-08-12 | 2019-11-12 | 北京大学深圳研究生院 | Model training method, device and server based on federation's study |
CN110503207A (en) * | 2019-08-28 | 2019-11-26 | 深圳前海微众银行股份有限公司 | Federation's study credit management method, device, equipment and readable storage medium storing program for executing |
CN110601814A (en) * | 2019-09-24 | 2019-12-20 | 深圳前海微众银行股份有限公司 | Federal learning data encryption method, device, equipment and readable storage medium |
CN110632554A (en) * | 2019-09-20 | 2019-12-31 | 深圳前海微众银行股份有限公司 | Indoor positioning method, device, terminal equipment and medium based on federal learning |
CN110674528A (en) * | 2019-09-20 | 2020-01-10 | 深圳前海微众银行股份有限公司 | Federal learning privacy data processing method, device, system and storage medium |
CN110766169A (en) * | 2019-10-31 | 2020-02-07 | 深圳前海微众银行股份有限公司 | Transfer training optimization method and device for reinforcement learning, terminal and storage medium |
CN110782042A (en) * | 2019-10-29 | 2020-02-11 | 深圳前海微众银行股份有限公司 | Method, device, equipment and medium for combining horizontal federation and vertical federation |
CN110838069A (en) * | 2019-10-15 | 2020-02-25 | 支付宝(杭州)信息技术有限公司 | Data processing method, device and system |
CN110929260A (en) * | 2019-11-29 | 2020-03-27 | 杭州安恒信息技术股份有限公司 | Malicious software detection method, device, server and readable storage medium |
CN110992936A (en) * | 2019-12-06 | 2020-04-10 | 支付宝(杭州)信息技术有限公司 | Method and apparatus for model training using private data |
CN111027715A (en) * | 2019-12-11 | 2020-04-17 | 支付宝(杭州)信息技术有限公司 | Monte Carlo-based federated learning model training method and device |
CN111241582A (en) * | 2020-01-10 | 2020-06-05 | 鹏城实验室 | Data privacy protection method and device and computer readable storage medium |
CN111275207A (en) * | 2020-02-10 | 2020-06-12 | 深圳前海微众银行股份有限公司 | Semi-supervision-based horizontal federal learning optimization method, equipment and storage medium |
CN111340150A (en) * | 2020-05-22 | 2020-06-26 | 支付宝(杭州)信息技术有限公司 | Method and device for training first classification model |
CN111369042A (en) * | 2020-02-27 | 2020-07-03 | 山东大学 | Wireless service flow prediction method based on weighted federal learning |
CN111382875A (en) * | 2020-03-06 | 2020-07-07 | 深圳前海微众银行股份有限公司 | Federal model parameter determination method, device, equipment and storage medium |
CN111460511A (en) * | 2020-04-17 | 2020-07-28 | 支付宝(杭州)信息技术有限公司 | Federal learning and virtual object distribution method and device based on privacy protection |
CN111461329A (en) * | 2020-04-08 | 2020-07-28 | 中国银行股份有限公司 | Model training method, device, equipment and readable storage medium |
CN111477290A (en) * | 2020-03-05 | 2020-07-31 | 上海交通大学 | Federal learning and image classification method, system and terminal for protecting user privacy |
CN111882133A (en) * | 2020-08-03 | 2020-11-03 | 重庆大学 | Prediction-based federated learning communication optimization method and system |
CN111932367A (en) * | 2020-08-13 | 2020-11-13 | 中国银行股份有限公司 | Pre-credit evaluation method and device |
CN112001452A (en) * | 2020-08-27 | 2020-11-27 | 深圳前海微众银行股份有限公司 | Feature selection method, device, equipment and readable storage medium |
CN112149171A (en) * | 2020-10-27 | 2020-12-29 | 腾讯科技(深圳)有限公司 | Method, device, equipment and storage medium for training federal neural network model |
CN112183757A (en) * | 2019-07-04 | 2021-01-05 | 创新先进技术有限公司 | Model training method, device and system |
CN112214342A (en) * | 2020-09-14 | 2021-01-12 | 德清阿尔法创新研究院 | Efficient error data detection method in federated learning scene |
CN112217706A (en) * | 2020-12-02 | 2021-01-12 | 腾讯科技(深圳)有限公司 | Data processing method, device, equipment and storage medium |
CN112261137A (en) * | 2020-10-22 | 2021-01-22 | 江苏禹空间科技有限公司 | Model training method and system based on joint learning |
CN112347754A (en) * | 2019-08-09 | 2021-02-09 | 国际商业机器公司 | Building a Joint learning framework |
WO2021022707A1 (en) * | 2019-08-06 | 2021-02-11 | 深圳前海微众银行股份有限公司 | Hybrid federated learning method and architecture |
CN112465043A (en) * | 2020-12-02 | 2021-03-09 | 平安科技(深圳)有限公司 | Model training method, device and equipment |
CN112656401A (en) * | 2019-10-15 | 2021-04-16 | 梅州市青塘实业有限公司 | Intelligent monitoring method, device and equipment |
CN112749812A (en) * | 2019-10-29 | 2021-05-04 | 华为技术有限公司 | Joint learning system, training result aggregation method and equipment |
CN112819177A (en) * | 2021-01-26 | 2021-05-18 | 支付宝(杭州)信息技术有限公司 | Personalized privacy protection learning method, device and equipment |
CN112885337A (en) * | 2021-01-29 | 2021-06-01 | 深圳前海微众银行股份有限公司 | Data processing method, device, equipment and storage medium |
CN112989929A (en) * | 2021-02-04 | 2021-06-18 | 支付宝(杭州)信息技术有限公司 | Target user identification method and device and electronic equipment |
CN113077060A (en) * | 2021-03-30 | 2021-07-06 | 中国科学院计算技术研究所 | Federal learning system and method aiming at edge cloud cooperation |
CN113095513A (en) * | 2021-04-25 | 2021-07-09 | 中山大学 | Double-layer fair federal learning method, device and storage medium |
CN113128701A (en) * | 2021-04-07 | 2021-07-16 | 中国科学院计算技术研究所 | Sample sparsity-oriented federal learning method and system |
CN113268758A (en) * | 2021-06-17 | 2021-08-17 | 上海万向区块链股份公司 | Data sharing system, method, medium and device based on federal learning |
CN113282933A (en) * | 2020-07-17 | 2021-08-20 | 中兴通讯股份有限公司 | Federal learning method, device and system, electronic equipment and storage medium |
CN113361721A (en) * | 2021-06-29 | 2021-09-07 | 北京百度网讯科技有限公司 | Model training method, model training device, electronic device, storage medium, and program product |
WO2021179196A1 (en) * | 2020-03-11 | 2021-09-16 | Oppo广东移动通信有限公司 | Federated learning-based model training method, electronic device, and storage medium |
CN113470806A (en) * | 2020-03-31 | 2021-10-01 | 中移(成都)信息通信科技有限公司 | Method, device and equipment for determining disease detection model and computer storage medium |
CN113642737A (en) * | 2021-08-12 | 2021-11-12 | 广域铭岛数字科技有限公司 | Federal learning method and system based on automobile user data |
WO2021227069A1 (en) * | 2020-05-15 | 2021-11-18 | Oppo广东移动通信有限公司 | Model updating method and apparatus, and communication device |
CN113688855A (en) * | 2020-05-19 | 2021-11-23 | 华为技术有限公司 | Data processing method, federal learning training method, related device and equipment |
CN113705823A (en) * | 2020-05-22 | 2021-11-26 | 华为技术有限公司 | Model training method based on federal learning and electronic equipment |
WO2021244081A1 (en) * | 2020-06-02 | 2021-12-09 | Huawei Technologies Co., Ltd. | Methods and systems for horizontal federated learning using non-iid data |
CN113850396A (en) * | 2021-09-28 | 2021-12-28 | 北京邮电大学 | Privacy enhanced federal decision method, device, system and storage medium |
WO2022001941A1 (en) * | 2020-06-28 | 2022-01-06 | 中兴通讯股份有限公司 | Network element management method, network management system, independent computing node, computer device, and storage medium |
CN114912581A (en) * | 2022-05-07 | 2022-08-16 | 奇安信科技集团股份有限公司 | Training method and device for detection model, electronic equipment and storage medium |
CN115526339A (en) * | 2022-11-03 | 2022-12-27 | 中国电信股份有限公司 | Federal learning method and device, electronic equipment and computer readable storage medium |
WO2023024378A1 (en) * | 2021-08-25 | 2023-03-02 | 深圳前海微众银行股份有限公司 | Multi-agent model training method, apparatus, electronic device, storage medium and program product |
CN117251276A (en) * | 2023-11-20 | 2023-12-19 | 清华大学 | Flexible scheduling method and device for collaborative learning platform |
WO2023241042A1 (en) * | 2022-06-13 | 2023-12-21 | 中兴通讯股份有限公司 | Fault prediction method and apparatus, and electronic device and storage medium |
WO2024026583A1 (en) * | 2022-07-30 | 2024-02-08 | 华为技术有限公司 | Communication method and communication apparatus |
CN112214342B (en) * | 2020-09-14 | 2024-05-24 | 德清阿尔法创新研究院 | Efficient error data detection method in federal learning scene |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160147943A1 (en) * | 2014-11-21 | 2016-05-26 | Argo Data Resource Corporation | Semantic Address Parsing Using a Graphical Discriminative Probabilistic Model |
CN107133805A (en) * | 2017-05-09 | 2017-09-05 | 北京小度信息科技有限公司 | Method of adjustment, device and the equipment of user's cheating category forecasting Model Parameter |
CN107229518A (en) * | 2016-03-26 | 2017-10-03 | 阿里巴巴集团控股有限公司 | A kind of distributed type assemblies training method and device |
CN109102157A (en) * | 2018-07-11 | 2018-12-28 | 交通银行股份有限公司 | A kind of bank's work order worksheet processing method and system based on deep learning |
CN109165725A (en) * | 2018-08-10 | 2019-01-08 | 深圳前海微众银行股份有限公司 | Neural network federation modeling method, equipment and storage medium based on transfer learning |
CN109284313A (en) * | 2018-08-10 | 2019-01-29 | 深圳前海微众银行股份有限公司 | Federal modeling method, equipment and readable storage medium storing program for executing based on semi-supervised learning |
2019-02-18 | CN | CN201910121269.8A patent/CN109871702A/en | active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160147943A1 (en) * | 2014-11-21 | 2016-05-26 | Argo Data Resource Corporation | Semantic Address Parsing Using a Graphical Discriminative Probabilistic Model |
CN107229518A (en) * | 2016-03-26 | 2017-10-03 | 阿里巴巴集团控股有限公司 | A kind of distributed type assemblies training method and device |
CN107133805A (en) * | 2017-05-09 | 2017-09-05 | 北京小度信息科技有限公司 | Method of adjustment, device and the equipment of user's cheating category forecasting Model Parameter |
CN109102157A (en) * | 2018-07-11 | 2018-12-28 | 交通银行股份有限公司 | A kind of bank's work order worksheet processing method and system based on deep learning |
CN109165725A (en) * | 2018-08-10 | 2019-01-08 | 深圳前海微众银行股份有限公司 | Neural network federation modeling method, equipment and storage medium based on transfer learning |
CN109284313A (en) * | 2018-08-10 | 2019-01-29 | 深圳前海微众银行股份有限公司 | Federal modeling method, equipment and readable storage medium storing program for executing based on semi-supervised learning |
Non-Patent Citations (1)
Title |
---|
HUANG Hao et al.: "Tone integration method based on discriminative weight training for Chinese speech recognition", Acta Acustica (《声学学报》), vol. 33, no. 1, pages 1-8 *
Cited By (90)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110334544A (en) * | 2019-06-26 | 2019-10-15 | 深圳前海微众银行股份有限公司 | Federal model degeneration processing method, device, federal training system and storage medium |
CN110334544B (en) * | 2019-06-26 | 2023-07-25 | 深圳前海微众银行股份有限公司 | Federal model degradation processing method and device, federal training system and storage medium |
CN112183757B (en) * | 2019-07-04 | 2023-10-27 | 创新先进技术有限公司 | Model training method, device and system |
CN112183757A (en) * | 2019-07-04 | 2021-01-05 | 创新先进技术有限公司 | Model training method, device and system |
CN110378487A (en) * | 2019-07-18 | 2019-10-25 | 深圳前海微众银行股份有限公司 | Laterally model parameter verification method, device, equipment and medium in federal study |
CN110378487B (en) * | 2019-07-18 | 2021-02-26 | 深圳前海微众银行股份有限公司 | Method, device, equipment and medium for verifying model parameters in horizontal federal learning |
CN110378749A (en) * | 2019-07-25 | 2019-10-25 | 深圳前海微众银行股份有限公司 | Appraisal procedure, device, terminal device and the storage medium of user data similitude |
CN110378749B (en) * | 2019-07-25 | 2023-09-26 | 深圳前海微众银行股份有限公司 | Client similarity evaluation method and device, terminal equipment and storage medium |
CN110443416A (en) * | 2019-07-30 | 2019-11-12 | 卓尔智联(武汉)研究院有限公司 | Federal model building device, method and readable storage medium storing program for executing based on shared data |
WO2021022707A1 (en) * | 2019-08-06 | 2021-02-11 | 深圳前海微众银行股份有限公司 | Hybrid federated learning method and architecture |
CN112347754A (en) * | 2019-08-09 | 2021-02-09 | 国际商业机器公司 | Building a Joint learning framework |
CN110442457A (en) * | 2019-08-12 | 2019-11-12 | 北京大学深圳研究生院 | Model training method, device and server based on federation's study |
CN110380917A (en) * | 2019-08-26 | 2019-10-25 | 深圳前海微众银行股份有限公司 | Control method, device, terminal device and the storage medium of federal learning system |
CN110503207A (en) * | 2019-08-28 | 2019-11-26 | 深圳前海微众银行股份有限公司 | Federation's study credit management method, device, equipment and readable storage medium storing program for executing |
CN110674528A (en) * | 2019-09-20 | 2020-01-10 | 深圳前海微众银行股份有限公司 | Federal learning privacy data processing method, device, system and storage medium |
CN110632554A (en) * | 2019-09-20 | 2019-12-31 | 深圳前海微众银行股份有限公司 | Indoor positioning method, device, terminal equipment and medium based on federal learning |
CN110674528B (en) * | 2019-09-20 | 2024-04-09 | 深圳前海微众银行股份有限公司 | Federal learning privacy data processing method, device, system and storage medium |
CN110601814A (en) * | 2019-09-24 | 2019-12-20 | 深圳前海微众银行股份有限公司 | Federal learning data encryption method, device, equipment and readable storage medium |
CN112656401B (en) * | 2019-10-15 | 2023-08-22 | 梅州市青塘实业有限公司 | Intelligent monitoring method, device and equipment |
CN110838069A (en) * | 2019-10-15 | 2020-02-25 | 支付宝(杭州)信息技术有限公司 | Data processing method, device and system |
CN112656401A (en) * | 2019-10-15 | 2021-04-16 | 梅州市青塘实业有限公司 | Intelligent monitoring method, device and equipment |
CN110782042A (en) * | 2019-10-29 | 2020-02-11 | 深圳前海微众银行股份有限公司 | Method, device, equipment and medium for combining horizontal federation and vertical federation |
WO2021083276A1 (en) * | 2019-10-29 | 2021-05-06 | 深圳前海微众银行股份有限公司 | Method, device, and apparatus for combining horizontal federation and vertical federation, and medium |
CN112749812A (en) * | 2019-10-29 | 2021-05-04 | 华为技术有限公司 | Joint learning system, training result aggregation method and equipment |
CN110766169A (en) * | 2019-10-31 | 2020-02-07 | 深圳前海微众银行股份有限公司 | Transfer training optimization method and device for reinforcement learning, terminal and storage medium |
CN110929260A (en) * | 2019-11-29 | 2020-03-27 | 杭州安恒信息技术股份有限公司 | Malicious software detection method, device, server and readable storage medium |
CN110992936A (en) * | 2019-12-06 | 2020-04-10 | 支付宝(杭州)信息技术有限公司 | Method and apparatus for model training using private data |
CN111027715A (en) * | 2019-12-11 | 2020-04-17 | 支付宝(杭州)信息技术有限公司 | Monte Carlo-based federated learning model training method and device |
CN111027715B (en) * | 2019-12-11 | 2021-04-02 | 支付宝(杭州)信息技术有限公司 | Monte Carlo-based federated learning model training method and device |
CN111241582A (en) * | 2020-01-10 | 2020-06-05 | 鹏城实验室 | Data privacy protection method and device and computer readable storage medium |
CN111241582B (en) * | 2020-01-10 | 2022-06-10 | 鹏城实验室 | Data privacy protection method and device and computer readable storage medium |
CN111275207B (en) * | 2020-02-10 | 2024-04-30 | 深圳前海微众银行股份有限公司 | Semi-supervision-based transverse federal learning optimization method, equipment and storage medium |
CN111275207A (en) * | 2020-02-10 | 2020-06-12 | 深圳前海微众银行股份有限公司 | Semi-supervision-based horizontal federal learning optimization method, equipment and storage medium |
CN111369042A (en) * | 2020-02-27 | 2020-07-03 | 山东大学 | Wireless service flow prediction method based on weighted federal learning |
CN111477290B (en) * | 2020-03-05 | 2023-10-31 | 上海交通大学 | Federal learning and image classification method, system and terminal for protecting user privacy |
CN111477290A (en) * | 2020-03-05 | 2020-07-31 | 上海交通大学 | Federal learning and image classification method, system and terminal for protecting user privacy |
CN111382875A (en) * | 2020-03-06 | 2020-07-07 | 深圳前海微众银行股份有限公司 | Federal model parameter determination method, device, equipment and storage medium |
WO2021179196A1 (en) * | 2020-03-11 | 2021-09-16 | Oppo广东移动通信有限公司 | Federated learning-based model training method, electronic device, and storage medium |
CN113470806A (en) * | 2020-03-31 | 2021-10-01 | 中移(成都)信息通信科技有限公司 | Method, device and equipment for determining disease detection model and computer storage medium |
CN113470806B (en) * | 2020-03-31 | 2024-05-24 | 中移(成都)信息通信科技有限公司 | Method, device, equipment and computer storage medium for determining disease detection model |
CN111461329B (en) * | 2020-04-08 | 2024-01-23 | 中国银行股份有限公司 | Model training method, device, equipment and readable storage medium |
CN111461329A (en) * | 2020-04-08 | 2020-07-28 | 中国银行股份有限公司 | Model training method, device, equipment and readable storage medium |
CN111460511B (en) * | 2020-04-17 | 2023-05-02 | 支付宝(杭州)信息技术有限公司 | Federal learning and virtual object distribution method and device based on privacy protection |
CN111460511A (en) * | 2020-04-17 | 2020-07-28 | 支付宝(杭州)信息技术有限公司 | Federal learning and virtual object distribution method and device based on privacy protection |
WO2021227069A1 (en) * | 2020-05-15 | 2021-11-18 | Oppo广东移动通信有限公司 | Model updating method and apparatus, and communication device |
CN113688855B (en) * | 2020-05-19 | 2023-07-28 | 华为技术有限公司 | Data processing method, federal learning training method, related device and equipment |
CN113688855A (en) * | 2020-05-19 | 2021-11-23 | 华为技术有限公司 | Data processing method, federal learning training method, related device and equipment |
CN113705823A (en) * | 2020-05-22 | 2021-11-26 | 华为技术有限公司 | Model training method based on federal learning and electronic equipment |
CN111340150A (en) * | 2020-05-22 | 2020-06-26 | 支付宝(杭州)信息技术有限公司 | Method and device for training first classification model |
WO2021244081A1 (en) * | 2020-06-02 | 2021-12-09 | Huawei Technologies Co., Ltd. | Methods and systems for horizontal federated learning using non-iid data |
US11715044B2 (en) | 2020-06-02 | 2023-08-01 | Huawei Cloud Computing Technologies Co., Ltd. | Methods and systems for horizontal federated learning using non-IID data |
WO2022001941A1 (en) * | 2020-06-28 | 2022-01-06 | 中兴通讯股份有限公司 | Network element management method, network management system, independent computing node, computer device, and storage medium |
CN113282933A (en) * | 2020-07-17 | 2021-08-20 | 中兴通讯股份有限公司 | Federal learning method, device and system, electronic equipment and storage medium |
CN113282933B (en) * | 2020-07-17 | 2022-03-01 | 中兴通讯股份有限公司 | Federal learning method, device and system, electronic equipment and storage medium |
CN111882133A (en) * | 2020-08-03 | 2020-11-03 | 重庆大学 | Prediction-based federated learning communication optimization method and system |
CN111882133B (en) * | 2020-08-03 | 2022-02-01 | 重庆大学 | Prediction-based federated learning communication optimization method and system |
CN111932367A (en) * | 2020-08-13 | 2020-11-13 | 中国银行股份有限公司 | Pre-credit evaluation method and device |
CN112001452A (en) * | 2020-08-27 | 2020-11-27 | 深圳前海微众银行股份有限公司 | Feature selection method, device, equipment and readable storage medium |
CN112214342B (en) * | 2020-09-14 | 2024-05-24 | 德清阿尔法创新研究院 | Efficient error data detection method in federal learning scene |
CN112214342A (en) * | 2020-09-14 | 2021-01-12 | 德清阿尔法创新研究院 | Efficient error data detection method in federated learning scene |
CN112261137B (en) * | 2020-10-22 | 2022-06-14 | 无锡禹空间智能科技有限公司 | Model training method and system based on joint learning |
CN112261137A (en) * | 2020-10-22 | 2021-01-22 | 江苏禹空间科技有限公司 | Model training method and system based on joint learning |
WO2022089256A1 (en) * | 2020-10-27 | 2022-05-05 | 腾讯科技(深圳)有限公司 | Method, apparatus and device for training federated neural network model, and computer program product and computer-readable storage medium |
CN112149171A (en) * | 2020-10-27 | 2020-12-29 | 腾讯科技(深圳)有限公司 | Method, device, equipment and storage medium for training federal neural network model |
WO2022116725A1 (en) * | 2020-12-02 | 2022-06-09 | 腾讯科技(深圳)有限公司 | Data processing method, apparatus, device, and storage medium |
CN112465043B (en) * | 2020-12-02 | 2024-05-14 | 平安科技(深圳)有限公司 | Model training method, device and equipment |
CN112217706A (en) * | 2020-12-02 | 2021-01-12 | 腾讯科技(深圳)有限公司 | Data processing method, device, equipment and storage medium |
CN112465043A (en) * | 2020-12-02 | 2021-03-09 | 平安科技(深圳)有限公司 | Model training method, device and equipment |
CN112819177A (en) * | 2021-01-26 | 2021-05-18 | 支付宝(杭州)信息技术有限公司 | Personalized privacy protection learning method, device and equipment |
CN112885337A (en) * | 2021-01-29 | 2021-06-01 | 深圳前海微众银行股份有限公司 | Data processing method, device, equipment and storage medium |
CN112989929A (en) * | 2021-02-04 | 2021-06-18 | 支付宝(杭州)信息技术有限公司 | Target user identification method and device and electronic equipment |
CN113077060A (en) * | 2021-03-30 | 2021-07-06 | 中国科学院计算技术研究所 | Federal learning system and method aiming at edge cloud cooperation |
CN113128701A (en) * | 2021-04-07 | 2021-07-16 | 中国科学院计算技术研究所 | Sample sparsity-oriented federal learning method and system |
CN113095513A (en) * | 2021-04-25 | 2021-07-09 | 中山大学 | Double-layer fair federal learning method, device and storage medium |
CN113268758B (en) * | 2021-06-17 | 2022-11-04 | 上海万向区块链股份公司 | Data sharing system, method, medium and device based on federal learning |
CN113268758A (en) * | 2021-06-17 | 2021-08-17 | 上海万向区块链股份公司 | Data sharing system, method, medium and device based on federal learning |
CN113361721B (en) * | 2021-06-29 | 2023-07-18 | 北京百度网讯科技有限公司 | Model training method, device, electronic equipment, storage medium and program product |
CN113361721A (en) * | 2021-06-29 | 2021-09-07 | 北京百度网讯科技有限公司 | Model training method, model training device, electronic device, storage medium, and program product |
CN113642737B (en) * | 2021-08-12 | 2024-03-05 | 广域铭岛数字科技有限公司 | Federal learning method and system based on automobile user data |
CN113642737A (en) * | 2021-08-12 | 2021-11-12 | 广域铭岛数字科技有限公司 | Federal learning method and system based on automobile user data |
WO2023024378A1 (en) * | 2021-08-25 | 2023-03-02 | 深圳前海微众银行股份有限公司 | Multi-agent model training method, apparatus, electronic device, storage medium and program product |
CN113850396B (en) * | 2021-09-28 | 2022-04-19 | 北京邮电大学 | Privacy enhanced federal decision method, device, system and storage medium |
CN113850396A (en) * | 2021-09-28 | 2021-12-28 | 北京邮电大学 | Privacy enhanced federal decision method, device, system and storage medium |
CN114912581A (en) * | 2022-05-07 | 2022-08-16 | 奇安信科技集团股份有限公司 | Training method and device for detection model, electronic equipment and storage medium |
WO2023241042A1 (en) * | 2022-06-13 | 2023-12-21 | 中兴通讯股份有限公司 | Fault prediction method and apparatus, and electronic device and storage medium |
WO2024026583A1 (en) * | 2022-07-30 | 2024-02-08 | 华为技术有限公司 | Communication method and communication apparatus |
CN115526339B (en) * | 2022-11-03 | 2024-05-17 | 中国电信股份有限公司 | Federal learning method, federal learning device, electronic apparatus, and computer-readable storage medium |
CN115526339A (en) * | 2022-11-03 | 2022-12-27 | 中国电信股份有限公司 | Federal learning method and device, electronic equipment and computer readable storage medium |
CN117251276B (en) * | 2023-11-20 | 2024-02-09 | 清华大学 | Flexible scheduling method and device for collaborative learning platform |
CN117251276A (en) * | 2023-11-20 | 2023-12-19 | 清华大学 | Flexible scheduling method and device for collaborative learning platform |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109871702A (en) | Federal model training method, system, equipment and computer readable storage medium | |
CN103853786B (en) | Database parameter optimization method and system | |
CN109902186A (en) | Method and apparatus for generating neural network | |
CN106803799B (en) | Performance test method and device | |
CN110378488A (en) | Federal training method, device, training terminal and storage medium for client variation | |
CN106844781A (en) | Data processing method and device | |
CN107230381A (en) | Parking space recommendation method, server and client | |
CN109388674A (en) | Data processing method, device, equipment and readable storage medium | |
CN107992595A (en) | Learning content recommendation method, apparatus and smart device | |
CN106556877B (en) | Geomagnetic Tonghua method and device | |
CN110058936A (en) | Method, device and computer program product for determining the resource amount of dedicated processing resources | |
CN109710507A (en) | Automatic testing method and apparatus | |
CN111984544B (en) | Device performance test method and device, electronic device and storage medium | |
CN110019382A (en) | User closeness index determination method, apparatus, storage medium and electronic device | |
CN107360026A (en) | Distributed message middleware performance prediction and modeling method | |
CN105790866B (en) | Base station ranking method and device | |
Xiao et al. | Incentive mechanism design for federated learning: A two-stage Stackelberg game approach | |
CN106059829A (en) | Hidden Markov-based network utilization sensing method | |
CN109521444A (en) | Adaptive least-squares estimation algorithm for fitting the GPS horizontal velocity field of crustal movement | |
CN112001786A (en) | Client credit card limit configuration method and device based on knowledge graph | |
CN105956159A (en) | Algorithm for evaluating the comprehensive efficiency of objective image quality assessment methods | |
CN109903100A (en) | Customer churn prediction method, device and readable storage medium | |
CN108287928A (en) | Spatial attribute prediction method based on locally weighted linear regression | |
CN110413722A (en) | Address selection method, apparatus and non-transitory storage medium | |
CN101986608B (en) | Method for evaluating heterogeneous overlay network load balance degree |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||