CN110263908A - Federated learning model training method, device, system and storage medium - Google Patents
Federated learning model training method, device, system and storage medium
- Publication number
- CN110263908A (application number CN201910538946.6A)
- Authority
- CN
- China
- Prior art keywords
- model
- equipment
- time
- model parameter
- parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a federated learning model training method, device, system and storage medium. The method comprises: sending to each participating device a model update request message carrying a joint model parameter and a waiting duration; receiving the model parameter updates that the participating devices obtain by locally training the model to be trained on their local data and the joint model parameter, and send after determining from the waiting duration that they have time to take part in this model update; processing the received model parameter updates to obtain a latest joint model parameter; counting the participation state of each participating device in this model update and deriving the waiting duration of the next model update from the statistics; and carrying the latest joint model parameter and the waiting duration of the next model update in the model update request message of the next model update, until the model to be trained is in a converged state, whereupon the latest joint model parameter is taken as the final parameter of the model to be trained. The present invention thus takes both the training time and the quality of the federated learning model well into account.
Description
Technical field
The present invention relates to the technical field of data processing, and more particularly to a federated learning model training method, device, system and storage medium.
Background art
With the development of artificial intelligence, the concept of "federated learning" was proposed to solve the problem of data silos: the federated parties can carry out model training and obtain model parameters without providing their own data, thereby avoiding leakage of data privacy.
In the current model parameter update process of horizontal federated learning, each participating device trains the model using only the data it holds locally and sends its model parameter update to a coordinating device; the coordinating device fuses the model parameter updates received from the different participating devices and distributes the fused model parameter update back to every participating device, completing one parameter update. However, because the participating devices differ in communication bandwidth and latency, and in the amount of data they hold and their computing power, they deliver their model parameter updates to the coordinating device at inconsistent times. If the coordinating device waits until the model parameter updates of all participating devices have been received, it has to wait for a long time, which greatly increases the training time of the federated learning model.
To keep the coordinating device from waiting indefinitely, it is currently stipulated that the coordinator waits for the model parameter updates of at least N participants. This, however, causes the coordinating device to keep receiving the model parameter updates of the same fixed subset of participating devices, so the federated learning model cannot be built from the contributions of all or most participating devices; that is, the training time and the model quality of the federated learning model cannot both be well taken into account.
Summary of the invention
The main purpose of the present invention is to provide a federated learning model training method, device, system and storage medium, aiming to solve the current problem that, in the horizontal federated learning training process, the training time and the model quality of the federated learning model cannot both be well taken into account.
To achieve the above object, the present invention provides a federated learning model training method. The method is applied to a coordinating device that is communicatively connected to a plurality of participating devices, and comprises the following steps:
sending a model update request message to each participating device, the model update request message carrying the joint model parameter and the waiting duration of this federated learning model update;
receiving the model parameter update sent by each participating device, wherein the model parameter update is obtained by the participating device locally training the model to be trained on its local data and the joint model parameter, and is sent when the participating device determines from the waiting duration that it has time to take part in this model update;
fusing the received model parameter updates to obtain a latest joint model parameter;
counting, from the received model parameter updates, the participation state of each participating device in this model update, and adjusting the waiting duration according to the statistics to obtain the waiting duration of the next model update;
carrying the latest joint model parameter and the waiting duration of the next model update in the model update request message of the next model update, until the model to be trained is detected to be in a converged state, whereupon the latest joint model parameter is taken as the final parameter of the model to be trained.
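For illustration only, the round loop of the coordinating device described above can be sketched in Python as follows. The helper callables passed in are hypothetical placeholders standing for the steps named above, not part of the disclosed claims:

```python
from typing import Callable, List

def run_coordinator(params: List[float],
                    waiting_time: float,
                    broadcast: Callable[[List[float], float], None],
                    collect: Callable[[float], List[List[float]]],
                    fuse: Callable[[List[List[float]]], List[float]],
                    adjust: Callable[[float, int], float],
                    converged: Callable[[List[float]], bool]) -> List[float]:
    """Round loop of the coordinating device: broadcast the model update
    request, collect the updates that arrive within the waiting duration,
    fuse them, adapt the next waiting duration, repeat until convergence."""
    while True:
        broadcast(params, waiting_time)          # step 1: request + joint params
        updates = collect(waiting_time)          # step 2: updates received in time
        params = fuse(updates)                   # step 3: latest joint parameter
        waiting_time = adjust(waiting_time, len(updates))  # step 4: adapt wait
        if converged(params):                    # step 5: stop at convergence
            return params                        # final parameter
```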
Optionally, the step of counting, from the received model parameter updates, the participation state of each participating device in this model update comprises:
extracting the time marker from each model parameter update;
when first time markers are extracted, determining, from the number of first time markers, a first quantity of participating devices whose participation state in this model update is the having-time state, wherein a participating device carries the first time marker when it sends a model parameter update obtained by local training.
Optionally, after the step of extracting the time marker from each model parameter update, the method further comprises:
when second time markers are extracted, determining, from the number of second time markers, a second quantity of participating devices whose participation state in this model update is the no-time state, wherein a participating device determines from the waiting duration that it has no time to take part in this model update but has time to send a model parameter update; when the number of most recent consecutive model updates in which it sent no model parameter update exceeds a preset count, it locally fuses the joint model parameter with the previous model parameter update obtained by local training during the last model update, obtains the model parameter update thereby, and carries the second time marker when sending this model parameter update.
Optionally, the step of adjusting the waiting duration according to the statistics to obtain the waiting duration of the next model update comprises:
when the statistics are the first quantity, judging whether the first quantity is less than a first preset quantity;
if the first quantity is less than the first preset quantity, increasing the waiting duration to obtain the waiting duration of the next model update;
if the first quantity is not less than the first preset quantity, judging whether the first quantity is greater than a second preset quantity, wherein the first preset quantity is less than the second preset quantity;
if the first quantity is greater than the second preset quantity, reducing the waiting duration to obtain the waiting duration of the next model update.
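A minimal sketch of this dead-band adjustment rule, assuming a fixed increment; the concrete threshold values and the step size are illustrative, not prescribed by the disclosure:

```python
def adjust_waiting_time(waiting_time: float,
                        first_quantity: int,
                        first_preset: int = 5,
                        second_preset: int = 20,
                        step: float = 0.002) -> float:
    """Return the waiting duration (seconds) for the next model update.

    Too few devices had time -> wait longer next round; more than
    enough had time -> wait less; otherwise keep the duration as-is.
    """
    if first_quantity < first_preset:
        return waiting_time + step              # let slower devices catch up
    if first_quantity > second_preset:
        return max(0.0, waiting_time - step)    # shorten the round
    return waiting_time
```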
To achieve the above object, the present invention further provides a federated learning model training method. The method is applied to a participating device that is communicatively connected to a coordinating device, and comprises the following steps:
receiving the model update request message sent by the coordinating device, and obtaining from it the joint model parameter and the waiting duration of this federated learning model update;
locally training the model to be trained on the local data of the participating device and the joint model parameter to obtain a first model parameter update;
determining from the waiting duration whether there is time to take part in this model update;
if it is determined that there is time to take part in this model update, sending the first model parameter update to the coordinating device, or sending the first model parameter update carrying the first time marker to the coordinating device.
Optionally, the model update request message further carries a message sending time and the update serial number of this model update, and the step of determining from the waiting duration whether there is time to take part in this model update comprises:
obtaining the message sending time and the update serial number from the model update request message;
determining the network delay from the message sending time, the update serial number and the receiving time at which the model update request message was received;
judging, from the waiting duration, the estimated local training duration and the network delay, whether there is time to perform local training and send a model parameter update;
if it is determined that there is time to perform local training and send a model parameter update, determining that there is time to take part in this model update;
if it is determined that there is no time to perform local training and send a model parameter update, determining that there is no time to take part in this model update.
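A minimal sketch of this time-budget test on the participating device, assuming the network delay and the local training duration have already been estimated (both in seconds):

```python
def has_time_to_participate(waiting_time: float,
                            network_delay: float,
                            est_training_time: float) -> bool:
    """True if local training plus the return trip fits in the waiting duration.

    When the network delay is negligible it may be passed as 0.0, which
    reduces the test to waiting_time > est_training_time.
    """
    return waiting_time > network_delay + est_training_time
```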
Optionally, after the step of determining that there is no time to perform local training and send a model parameter update, and therefore no time to take part in this model update, the method further comprises:
judging, from the waiting duration and the network delay, whether there is time to send a model parameter update;
if it is determined that there is time to send a model parameter update, obtaining the number of most recent consecutive model updates in which no model parameter update was sent;
when this number is detected to be greater than the preset count, locally fusing the joint model parameter with the previous model parameter update obtained by local training during the last model update, to obtain a second model parameter update;
sending the second model parameter update carrying the second time marker to the coordinating device, or encrypting the second model parameter update carrying the second time marker with a predetermined encryption algorithm and then sending it to the coordinating device.
Optionally, the step of sending the first model parameter update to the coordinating device comprises:
encrypting the first model parameter update with a predetermined encryption algorithm, and sending the encrypted first model parameter update to the coordinating device.
Optionally, the step of, if it is determined that there is time to take part in this model update, sending the first model parameter update to the coordinating device comprises:
if it is determined that there is time to take part in this model update, judging whether a preset sending condition is currently met;
when the preset sending condition is met, sending the first model parameter update to the coordinating device.
To achieve the above object, the present invention further provides a device. The device is a coordinating device and comprises a memory, a processor, and a federated learning model training program stored on the memory and executable on the processor, the federated learning model training program, when executed by the processor, implementing the steps of the federated learning model training method described above.
To achieve the above object, the present invention further provides a device. The device is a participating device and comprises a memory, a processor, and a federated learning model training program stored on the memory and executable on the processor, the federated learning model training program, when executed by the processor, implementing the steps of the federated learning model training method described above.
To achieve the above object, the present invention further provides a federated learning model training system comprising at least one coordinating device as described above and at least one participating device as described above.
In addition, to achieve the above object, the present invention further proposes a computer-readable storage medium on which a federated learning model training program is stored, the federated learning model training program, when executed by a processor, implementing the steps of the federated learning model training method described above.
In the present invention, the coordinating device sends a model update request message to each participating device, the message carrying the joint model parameter and the waiting duration of this federated learning model update; receives the model parameter updates sent by the participating devices, each obtained by a participating device locally training the model to be trained on its local data and the joint model parameter, and sent when the participating device determines from the waiting duration that it has time to take part in this model update; fuses the received model parameter updates to obtain a latest joint model parameter; counts, from the received model parameter updates, the participation state of each participating device in this model update, and adjusts the waiting duration according to the statistics to obtain the waiting duration of the next model update; and carries the latest joint model parameter and the waiting duration of the next model update in the model update request message of the next model update, until the model to be trained is detected to be in a converged state, whereupon the latest joint model parameter is taken as the final parameter of the model to be trained. By dynamically adjusting the waiting duration of each model update, the present invention enables the coordinating device to effectively and dynamically adapt the federated learning training time to the local training durations of the participating devices, improving the quality of the federated learning model as far as possible while reducing the training time as far as possible, so that the training time and the quality of the federated learning model are both well taken into account.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the hardware running environment involved in embodiments of the present invention;
Fig. 2 is a flow diagram of a first embodiment of the federated learning model training method of the present invention;
Fig. 3 is a schematic diagram of a message sending time scenario involved in embodiments of the federated learning model training method of the present invention.
The realization, functions and advantages of the object of the present invention will be further described with reference to the accompanying drawings in combination with the embodiments.
Detailed description of the embodiments
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
As shown in Fig. 1, Fig. 1 is a schematic diagram of the device structure of the hardware running environment involved in embodiments of the present invention.
It should be noted that the device of the embodiment of the present invention is a coordinating device, which may be a smart phone, a personal computer, a server or a similar device, and is not specifically limited here.
As shown in Fig. 1, the device may comprise a processor 1001 (for example a CPU), a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 realizes the connection and communication between these components. The user interface 1003 may comprise a display (Display) and an input unit such as a keyboard (Keyboard), and optionally may also comprise a standard wired interface and a wireless interface. The network interface 1004 may optionally comprise a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory, or a stable non-volatile memory such as a magnetic disk memory; optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
Those skilled in the art will understand that the device structure shown in Fig. 1 does not constitute a limitation of the device, which may comprise more or fewer components than illustrated, combine certain components, or have a different component layout.
As shown in Fig. 1, the memory 1005, as a computer storage medium, may comprise an operating system, a network communication module, a user interface module and a federated learning model training program. The operating system is a program that manages and controls the hardware and software resources of the device and supports the running of the federated learning model training program and other software or programs.
In the device shown in Fig. 1, the user interface 1003 is mainly used for data communication with a client; the network interface 1004 is mainly used for establishing communication connections with the participating devices; and the processor 1001 may be used to call the federated learning model training program stored in the memory 1005 and perform the following operations:
sending a model update request message to each participating device, the model update request message carrying the joint model parameter and the waiting duration of this federated learning model update;
receiving the model parameter update sent by each participating device, wherein the model parameter update is obtained by the participating device locally training the model to be trained on its local data and the joint model parameter, and is sent when the participating device determines from the waiting duration that it has time to take part in this model update;
fusing the received model parameter updates to obtain a latest joint model parameter;
counting, from the received model parameter updates, the participation state of each participating device in this model update, and adjusting the waiting duration according to the statistics to obtain the waiting duration of the next model update;
carrying the latest joint model parameter and the waiting duration of the next model update in the model update request message of the next model update, until the model to be trained is detected to be in a converged state, whereupon the latest joint model parameter is taken as the final parameter of the model to be trained.
Further, the step of counting, from the received model parameter updates, the participation state of each participating device in this model update comprises:
extracting the time marker from each model parameter update;
when first time markers are extracted, determining, from the number of first time markers, a first quantity of participating devices whose participation state in this model update is the having-time state, wherein a participating device carries the first time marker when it sends a model parameter update obtained by local training.
Further, after the step of extracting the time marker from each model parameter update, the processor 1001 may also be used to call the federated learning model training program stored in the memory 1005 and perform the following steps:
when second time markers are extracted, determining, from the number of second time markers, a second quantity of participating devices whose participation state in this model update is the no-time state, wherein a participating device determines from the waiting duration that it has no time to take part in this model update but has time to send a model parameter update; when the number of most recent consecutive model updates in which it sent no model parameter update exceeds a preset count, it locally fuses the joint model parameter with the previous model parameter update obtained by local training during the last model update, obtains the model parameter update thereby, and carries the second time marker when sending this model parameter update.
Further, the step of adjusting the waiting duration according to the statistics to obtain the waiting duration of the next model update comprises:
when the statistics are the first quantity, judging whether the first quantity is less than a first preset quantity;
if the first quantity is less than the first preset quantity, increasing the waiting duration to obtain the waiting duration of the next model update;
if the first quantity is not less than the first preset quantity, judging whether the first quantity is greater than a second preset quantity, wherein the first preset quantity is less than the second preset quantity;
if the first quantity is greater than the second preset quantity, reducing the waiting duration to obtain the waiting duration of the next model update.
In addition, an embodiment of the present invention further proposes a participating device comprising a memory, a processor, and a federated learning model training program stored on the memory and executable on the processor, the federated learning model training program, when executed by the processor, implementing the steps of the federated learning model training method described below.
In addition, an embodiment of the present invention further proposes a federated learning model training system comprising at least one coordinating device as described above and at least one participating device as described above.
In addition, an embodiment of the present invention further proposes a computer-readable storage medium on which a federated learning model training program is stored, the federated learning model training program, when executed by a processor, implementing the steps of the federated learning model training method described below.
The embodiments of the coordinating device, the participating device, the federated learning model training system and the computer-readable storage medium of the present invention may refer to the embodiments of the federated learning model training method of the present invention, and are not repeated here.
Based on the above structure, embodiments of the federated learning model training method are proposed.
Referring to Fig. 2, Fig. 2 is a flow diagram of the first embodiment of the federated learning model training method of the present invention.
Embodiments of the present invention provide embodiments of the federated learning model training method. It should be noted that although a logical order is shown in the flow diagram, in some cases the steps may be performed in an order different from that shown or described here.
The federated learning model training method of the first embodiment of the present invention is applied to a coordinating device that is communicatively connected to a plurality of participating devices. In the embodiments of the present invention, the coordinating device and the participating devices may be devices such as smart phones, personal computers and servers; the participating devices are able to support the training of the federated learning model, and no specific limitation is made here. In this embodiment, the federated learning model training method comprises:
Step S10: sending a model update request message to each participating device, the model update request message carrying the joint model parameter and the waiting duration of this federated learning model update;
With the development of artificial intelligence, the concept of "federated learning" was proposed to solve the problem of data silos: each federated party can carry out model training and obtain model parameters without providing its own data, thereby avoiding leakage of data privacy.
Horizontal federated learning refers to the case where two data sets (which may be the local data of the participating devices in the embodiments of the present invention) overlap strongly in their user features but only slightly in their users. The data sets are split horizontally (i.e. along the user dimension), and the part of the data whose features are identical for both parties but whose users are not exactly the same is taken out for training; this approach is called horizontal federated learning. For example, two banks in different regions have user groups drawn from their respective local areas, with very small mutual intersection; their businesses, however, are very similar, so the user features they record are identical. Horizontal federated learning can be used to help the two banks construct a joint model.
In the current model parameter update process of horizontal federated learning, each participating device trains the federated learning model using only the data it holds locally (its local data) and sends its model parameter update to the coordinating device; the coordinating device fuses the model parameter updates received from the different participating devices and distributes the fused model parameter update back to every participating device, completing one parameter update. Parameter updates are repeated until the federated learning model is detected to have converged; training then ends with the final model parameter, completing the training process of the federated learning model. During one parameter update, however, because the participating devices differ in communication bandwidth and latency, and in the amount of data they hold and their computing power, they deliver their model parameter updates to the coordinating device at inconsistent times. If the coordinating device waits until the model parameter updates of all participating devices have been received, it has to wait for a long time, which greatly increases the training time of the federated learning model.
To keep the coordinating device from waiting indefinitely, it is currently stipulated that the coordinator waits for the model parameter updates of at least N participants. This, however, causes the coordinating device to keep receiving the model parameter updates of the same fixed subset of participating devices, so the federated learning model cannot be built from the contributions of most participating devices; that is, the training time and the model quality of the federated learning model cannot both be well taken into account.
To solve this problem, the embodiments of the federated learning model training method of the present invention are proposed.
In this embodiment, the coordinating device and each participating device may establish a communication connection in advance through handshaking and identity authentication. In one model parameter update process, the coordinating device first sends a model update request message to each participating device, the message carrying the joint model parameter and the waiting duration of this federated learning model update (hereinafter referred to as this model update). The coordinating device may send the model update request message to each participating device separately in a point-to-point manner, or send it to all participating devices at once by multicast or broadcast. A model update refers to the process of one update of the federated learning model's parameters. The joint model parameter may be a parameter of the federated learning model, for example the weight parameters of the connections between the nodes of a neural network; it may also be gradient information of the federated learning model, for example the gradient information in a neural network gradient descent algorithm, where the gradient information may be gradient values or compressed gradient values. The waiting duration of this model update refers to how long the coordinating device waits for participating devices to send model parameter updates; it may be the duration, determined by the coordinating device, from the sending of the model update request message to the point at which it stops receiving model parameter updates sent by participating devices during this model update.
It should be noted that, at the first model update in the federated learning model training process, the joint model parameter and the waiting duration carried by the coordinating device in the model update request message may be preset, i.e. an initial joint model parameter and an initial waiting duration; at the second and subsequent model updates, the joint model parameter and the waiting duration carried in the model update request message are obtained from the result of the previous model update.
Each participating device receives the model update request message sent by the coordinating device and obtains from it the joint model parameter and the waiting duration of this model update.
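For illustration, the model update request message can be pictured as the following structure. The field names are hypothetical; the sending time and update serial number are the optional fields that a later embodiment uses to estimate network delay:

```python
from dataclasses import dataclass
from typing import List
import time

@dataclass
class ModelUpdateRequest:
    """One round's request from the coordinating device (illustrative only)."""
    joint_model_params: List[float]  # weights or (compressed) gradients
    waiting_time: float              # seconds the coordinator will wait
    update_serial_no: int            # which model update this is
    send_time: float                 # e.g. time.time() at the coordinator

request = ModelUpdateRequest([0.1, -0.3], waiting_time=5.0,
                             update_serial_no=1, send_time=time.time())
```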
Step S20: receiving the model parameter update sent by each participating device, wherein the model parameter update is obtained by the participating device locally training the model to be trained on its local data and the joint model parameter, and is sent when the participating device determines from the waiting duration that it has time to take part in this model update;
After sending the model update request message, the coordinating device enters a waiting state whose duration is the waiting duration of this model update, and in this waiting state receives the model parameter updates sent by the participating devices.
Each participating device extracts the joint model parameter and the waiting duration of this model update from the model update request message, and locally trains the model to be trained on its local data and the joint model parameter to obtain a model parameter update. The model to be trained refers to the federated learning model to be trained, and the model parameter update is an update of the parameters of the joint model, for example the updated weight parameters of a neural network.
Each participating device determines from the waiting duration whether it has time to take part in this model update. Specifically, the time difference between the moment the coordinating device sends the model update request message and the moment the participating device receives it, together with the time difference between the moment the participating device sends its model parameter update and the moment the coordinating device receives it, is referred to as the network delay.
When the network delay is negligible, the participating device may compare its estimated local training duration with the waiting duration: if the waiting duration is greater than the local training duration, it determines that it has time to take part in this model update; if not, it determines that it has no time. When the network delay must be taken into account, the participating device compares the estimated network delay plus the local training duration with the waiting duration: if the waiting duration is greater than the network delay plus the local training duration, it determines that it has time to take part in this model update; if not, it determines that it has no time.
When a participating device determines that it has time to take part in this model update, it sends the model parameter update obtained by local training to the coordinating device. It should be noted that, to guarantee data security, each participating device may encrypt its model parameter update and send the encrypted model parameter update to the coordinating device. In a scenario where the coordinating device and the participating devices trust each other and only leakage of data to third parties must be prevented, the encryption may be a conventional scheme such as secret sharing; in that case the coordinating device first decrypts the encrypted model parameter updates it receives and performs the subsequent computation on the decrypted updates. In a scenario where the coordinating device and the participating devices do not trust each other, the encryption may be a homomorphic encryption algorithm (Homomorphic Encryption); in that case the coordinating device can perform the subsequent computation directly on the encrypted model parameter updates and return the result to the participating devices, which first decrypt it and then continue the subsequent computation. Note that if a participating device encrypts its model parameter update, the encryption time must be added to the estimated local training duration.
Since the local training times and network delays of the participating devices differ, not all participating devices have time to take part in this model update; in one model update, the participating devices that send model parameter updates to the coordinating device may therefore be only a subset of all the participating devices taking part in the federated learning training.
Step S30: fusing the received model parameter updates to obtain a latest joint model parameter;
After receiving the model parameter updates sent by the participating devices, the coordinating device fuses them to obtain the latest joint model parameter. The fusion may be a weighted average of the model parameter updates. The weight of each model parameter update may be preset, or may be computed per update as the ratio of the data volume held by the corresponding participating device to the total data volume held by all participating devices that sent model parameter updates.
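A minimal sketch of this data-volume-weighted fusion (a FedAvg-style weighted average); representing an update as a flat list of floats is an assumption made for illustration:

```python
from typing import List

def fuse(updates: List[List[float]], data_volumes: List[int]) -> List[float]:
    """Weighted average of model parameter updates.

    Each update's weight is its sender's data volume divided by the total
    data volume of all senders, as described above.
    """
    total = sum(data_volumes)
    dim = len(updates[0])
    fused = [0.0] * dim
    for update, volume in zip(updates, data_volumes):
        weight = volume / total
        for i in range(dim):
            fused[i] += weight * update[i]
    return fused

# e.g. fuse([[1.0, 2.0], [3.0, 4.0]], data_volumes=[100, 300]) -> [2.5, 3.5]
```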
Step S40: counting, from the received model parameter updates, the participation state of each participating device in this model update, and adjusting the waiting duration according to the statistics to obtain the waiting duration of the next model update;
The coordinating device counts, from the received model parameter updates, the participation state of each participating device in this model update. A participating device's participation state in this model update may be the having-time participation state or the no-time participation state: the having-time state indicates that the participating device had time to take part in this model update, and the no-time state indicates that it did not. The coordinating device may count the number of participating devices whose participation state is the having-time state, or the number whose participation state is the no-time state, or both.
In this embodiment, the participating devices fall into two kinds: those that send the model parameter update obtained by local training, and those that send no model parameter update; sending indicates having-time participation, and not sending indicates no-time participation. The coordinating device may therefore take the number of model parameter updates received as the number of participating devices that had time to take part in this model update, or subtract the number of model parameter updates received from the number of all participating devices to obtain the number of participating devices that had no time to take part, where all participating devices refers to all participating devices that established a communication connection with the coordinating device in advance.
The coordinating device adjusts the waiting duration of this model update according to the statistics to obtain the waiting duration of the next model update. A concrete adjustment strategy may be: when many participating devices had time to take part in this model update, reduce the waiting duration, because most participating devices could complete local training and send a model parameter update within this waiting duration; reducing the waiting duration of the next model update shortens the time of the next model update and thus reduces the training time of the federated learning model. When few participating devices had time to take part in this model update, increase the waiting duration, because most participating devices could not complete local training and send a model parameter update within this waiting duration; increasing the waiting duration of the next model update allows the participating devices that had no time to take part in this model update to take part in the next one, avoiding the situation where a fixed subset of participating devices always takes part in model updates, and thereby improving the model quality of the federated learning model.
Specifically, the coordinating device may set a preset quantity: when the number of participating devices detected to have had time to take part in this model update is greater than the preset quantity, that number is deemed large; when it is less than or equal to the preset quantity, it is deemed small. The waiting duration may be increased by adding a preset increment to it to obtain the waiting duration of the next model update. The preset increment may be a fixed increment added each time an increase is needed, or an increasing one, for example 2 milliseconds the first time and 4 milliseconds the second. The waiting duration may be reduced in a similar manner.
In addition, the waiting duration may also be increased as follows: when a participating device determines that it has no time to take part in this model update, it sends the coordinating device a message stating that it has no time to take part, carrying its estimated local training duration; when the coordinating device determines that the waiting duration should be increased, it determines the waiting duration of the next model update from the local training durations sent by the participating devices, specifically by computing the average of the received local training durations and using the average as the waiting duration of the next model update.
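A minimal sketch of this alternative increase rule, assuming the reported local training durations are collected in seconds:

```python
from typing import List

def waiting_time_from_reports(reported_training_times: List[float],
                              current_waiting_time: float) -> float:
    """Set the next waiting duration to the mean of the local training
    durations reported by devices that had no time this round; fall back
    to the current waiting duration when nothing was reported."""
    if not reported_training_times:
        return current_waiting_time
    return sum(reported_training_times) / len(reported_training_times)
```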
Step S50: carrying the latest joint model parameter and the waiting duration of the next model update in the model update request message of the next model update, until the model to be trained is detected to be in a converged state, whereupon the latest joint model parameter is taken as the final parameter of the model to be trained.
After obtaining the latest joint model parameter and the waiting duration of the next model update, the coordinating device carries them in the model update request message of the next model update and sends that message to each participating device, starting the next model update. The cycle continues until the coordinating device detects that the model to be trained is in a converged state; training then ends, no further model update is performed, and the latest joint model parameter is taken as the final parameter of the model to be trained, completing the training of the federated learning model.
The coordinating device may detect whether the model to be trained is in a converged state by computing the difference between the latest joint model parameter and the previous joint model parameter: if the difference is less than a preset value, the model to be trained is determined to be in a converged state; if not, it is determined to be in a non-converged state. Alternatively, it may judge whether the number of model updates has reached a preset count, the model to be trained being deemed converged when it has; or judge whether the training duration exceeds a preset duration, the model to be trained being deemed converged when it does. The preset value, the preset count and the preset duration may all be configured as needed.
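A minimal sketch combining the three stopping tests described above; the threshold values are illustrative placeholders, not prescribed by the disclosure:

```python
from typing import List

def has_converged(latest: List[float], previous: List[float],
                  n_updates: int, elapsed: float,
                  eps: float = 1e-4, max_updates: int = 1000,
                  max_seconds: float = 3600.0) -> bool:
    """Stop when the parameter change is small, or when the update count or
    the training duration exceeds its preset limit."""
    diff = max(abs(a - b) for a, b in zip(latest, previous))
    return diff < eps or n_updates >= max_updates or elapsed > max_seconds
```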
In this embodiment, the coordinating device sends a model update request message to each participating device, the message carrying the joint model parameter and the waiting duration of this federated learning model update; receives the model parameter updates sent by the participating devices, each obtained by a participating device locally training the model to be trained on its local data and the joint model parameter, and sent when the participating device determines from the waiting duration that it has time to take part in this model update; fuses the received model parameter updates to obtain a latest joint model parameter; counts, from the received model parameter updates, the participation state of each participating device in this model update, and adjusts the waiting duration according to the statistics to obtain the waiting duration of the next model update; and carries the latest joint model parameter and the waiting duration of the next model update in the model update request message of the next model update, until the model to be trained is detected to be in a converged state, whereupon the latest joint model parameter is taken as the final parameter of the model to be trained. This realizes a coordinating device that effectively and dynamically adapts the federated learning training time to the local training durations of the participating devices, improving the quality of the federated learning model as far as possible while reducing the training time as far as possible, so that the training time and the quality of the federated learning model are both well taken into account.
Further, based on the above first embodiment, a second embodiment of the federated learning model training method of the present invention is proposed. In the second embodiment of the federated learning model training method of the present invention, the step of counting, from the received model parameter updates, the participation state of each participating device in this model update comprises:
Step A10: extracting the time marker from each model parameter update;
After receiving the model parameter updates sent by the participating devices, the coordinating device extracts the time marker from each model parameter update. When sending a model parameter update to the coordinating device, each participating device may carry a time marker in the model parameter update to indicate whether the participating device had time to take part in this model update, for example a one-bit marker where 1 indicates having-time participation in this model update and 0 indicates no-time participation.
Step A20: when first time markers are extracted, determining, from the number of first time markers, a first quantity of participating devices whose participation state in this model update is the having-time state, wherein a participating device carries the first time marker when it sends a model parameter update obtained by local training.
When the coordinating device extracts first time markers, it determines, from the number of first time markers, the first quantity of participating devices whose participation state in this model update is the having-time state. A having-time participation state in this model update means that the device had time to perform local training and send a model parameter update in this model update, i.e. had time to take part in it. A participating device carries the first time marker when it sends the model parameter update obtained by local training, the first time marker indicating that it had time to perform local training and send a model parameter update; for example, the first time marker may be the marker bit set to 1.
When the coordinating device extracts a first time marker from a model parameter update, the corresponding participating device had time to take part in this model update; the coordinating device counts the first time markers and determines from their number the first quantity of participating devices that had time to take part in this model update. Specifically, the number of first time markers may itself be taken as the first quantity. Considering the scenario where a participating device has sent the model parameter update obtained by local training but, for network reasons, the coordinating device did not receive it, the coordinating device may also count the number of participating devices from which no model parameter update was received, convert that number by multiplying it by a preset ratio (which may be set according to the specific circumstances) to estimate the number of participating devices that actually sent a model parameter update which the coordinating device did not receive, and add the number of first time markers to this estimate to obtain the first quantity of participating devices that had time to take part in this model update.
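A minimal sketch of this loss-compensated count; the preset ratio value is illustrative:

```python
def estimate_first_quantity(n_first_markers: int,
                            n_silent_devices: int,
                            loss_ratio: float = 0.05) -> int:
    """Estimate how many devices had time to participate: the updates that
    arrived carrying a first time marker, plus an estimate of updates that
    were sent but lost in the network (silent devices times a preset ratio)."""
    return n_first_markers + round(n_silent_devices * loss_ratio)
```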
In this embodiment, a participating device carries, in the model parameter update obtained by local training, a first time marker indicating that it had time to take part in this model update. This makes it convenient for the coordinating device to determine, from the first time markers, the first quantity of participating devices whose participation state in this model update is the having-time state, so that the coordinating device can accurately judge how many participating devices had time to perform local training and send a model parameter update, and determine the waiting duration of the next model update from the first quantity.
Further, after step A10, the method further comprises:
Step A30: when second time markers are extracted, determining, from the number of second time markers, a second quantity of participating devices whose participation state in this model update is the no-time state, wherein a participating device determines from the waiting duration that it has no time to take part in this model update but has time to send a model parameter update; when the number of most recent consecutive model updates in which it sent no model parameter update exceeds a preset count, it locally fuses the joint model parameter with the previous model parameter update obtained by local training during the last model update, obtains the model parameter update thereby, and carries the second time marker when sending this model parameter update.
The coordinating device extracts the time marker from each model parameter update; when second time markers are extracted, it may determine, from the number of second time markers, the second quantity of participating devices whose participation state in this model update is the no-time state. A no-time participation state in this model update means that the device did not have time to complete local training and send a model parameter update in this model update, i.e. had no time to take part in it.
When a participating device extracts the waiting duration from the model update request message, it can determine from the waiting duration whether it has time to perform local training and send a model parameter update, i.e. whether it has time to take part in this model update. As described in the above embodiment, when the participating device determines that it has time to take part in this model update, it sends the model parameter update obtained by local training and carries the first time marker. If the participating device determines that it has no time to take part in this model update, it judges whether it has time to send a model parameter update; specifically, it may judge whether the estimated network delay is less than the waiting duration of this model update: if so, it determines that it has time to send a model parameter update; if not, it determines that it does not. When it determines that it has no time to send a model parameter update, the participating device sends no model parameter update but still performs local training and saves the resulting model parameter update for use at the next model update.
When it determines that it has time to send a model parameter update, the participating device obtains the number of consecutive recent model updates in which it sent no model parameter update and judges whether that number is greater than a preset number. If it is, the device locally fuses the previous model parameter update obtained by local training during the last model update with the received joint model parameter, obtains this model parameter update, and sends it to the coordinating device carrying a second time label. The second time label indicates that the device had no time to participate in the current model update; for example, it can be a flag bit set to 0.
The participating device records the number of consecutive recent model updates in which it sent no model parameter update; here, sending no model parameter update means sending neither an update obtained by local training nor one obtained by local fusion. The preset number is configured as needed; its purpose is to prevent a participating device from failing to send updates several times in a row. For example, setting it to 1 prevents a device that sent nothing in the last model update from sending nothing in this one as well. Local fusion can be a weighted average of the previous model parameter update X1 and the joint model parameter X2. The weights can be configured case by case; for example, the weight of the previous update can be the ratio a of the data volume owned by this participating device to the data volume owned by all devices participating in the federated learning training, and the weight of the joint model parameter can be (1-a), so the locally fused model parameter update is X1*a + X2*(1-a). Note that local fusion produces the update in an extremely short time, which can be ignored.
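For illustration only, a minimal Python sketch of this local fusion, assuming the update and the joint model parameter are plain numeric vectors and the device already knows its data-share ratio a (the patent fixes neither a representation nor an API):

```python
import numpy as np

def local_fusion(prev_update, joint_params, local_samples, total_samples):
    """Weighted average X1*a + X2*(1-a) of the previous local update X1
    and the joint model parameter X2, where a is the device's data share."""
    a = local_samples / total_samples
    return prev_update * a + joint_params * (1.0 - a)

# Example: a device holding 2,000 of the 10,000 training samples (a = 0.2).
fused = local_fusion(np.array([0.4, -1.2]), np.array([0.5, -1.0]), 2000, 10000)
```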
When it determines to send a locally fused model parameter update, the participating device still performs local training, obtains a model parameter update, and saves it for use in the next model update.
It should be noted that in this embodiment, during one model update the participating devices fall into three kinds: the first sends the model parameter update obtained by local training; the second sends the model parameter update obtained by local fusion; the third sends no model parameter update. The first kind had time to participate in the current model update; the second and third kinds did not. As described in the above embodiment, the coordinating device can determine the number of first-kind devices from the quantity of first time labels, i.e. the first quantity of participating devices whose participation state for the current model update is the had-time state.
In this embodiment, the coordinating device can subtract the number of received model parameter updates from the total number of participating devices to obtain the number of third-kind devices, and count the second time labels to obtain the number of second-kind devices. Adding the two gives the number of devices that had no time to participate in the current model update, i.e. the second quantity of participating devices whose participation state for the current model update is the no-time state. Note that if the first quantity was determined by derivation, the second quantity should be determined by derivation in the same way.
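A short sketch of this bookkeeping, under the assumption that the coordinating device knows the total number of connected devices and has already tallied the labels (names are illustrative):

```python
def count_participation(total_devices, first_labels, second_labels, updates_received):
    """Split devices into the three kinds described above for one model update."""
    first_quantity = first_labels                      # trained locally and sent (had time)
    fused_senders = second_labels                      # sent a locally fused update (no time)
    silent_devices = total_devices - updates_received  # sent nothing at all (no time)
    second_quantity = fused_senders + silent_devices
    return first_quantity, second_quantity
```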
In this embodiment, a participating device that determines it has no time to participate in the current model update but has time to send a model parameter update, and that has already gone several model updates without sending one, performs local fusion to obtain a model parameter update and sends it with a second time label to the coordinating device. This prevents the coordinating device from treating a device that has not sent updates several times as offline and discarding its data, and spares the device from re-authenticating after being treated as offline, which wastes time on both the participating device and the coordinating device; setting the preset number to 1 further prevents a device from sending no model parameter update twice in a row. Moreover, the second time label lets the coordinating device distinguish updates obtained by local fusion from updates obtained by local training, so it can accurately determine the first quantity of devices that had time to participate in the current model update and the second quantity of devices that did not, and then determine the waiting duration of the next model update from the first quantity or the second quantity.
Further, based on the above second embodiment, a third embodiment of the federated learning model training method of the present invention is proposed. In the third embodiment, the step of adjusting the waiting duration according to the statistical result to obtain the waiting duration of the next model update includes:
Step B10: when the statistical result is the first quantity, judge whether the first quantity is less than a first preset quantity;
When the coordinating device has counted the first quantity of participating devices whose participation state for the current model update is the had-time state, it judges whether the first quantity is less than a first preset quantity. The first preset quantity is configured as needed, so that a first quantity below it indicates that few devices had time to participate in the current model update, i.e. most participating devices could not complete local training and send a model parameter update within the waiting duration of the current model update.
Step B20: if the first quantity is less than the first preset quantity, increase the waiting duration to obtain the waiting duration of the next model update;
If the coordinating device determines that the first quantity is less than the first preset quantity, it increases the waiting duration of the current model update to obtain the waiting duration of the next model update. The increase is applied on top of the current waiting duration; the increment can be a preset fixed increment, e.g. 2 milliseconds, or a preset growing increment, e.g. 2 milliseconds the first time an increase is needed and 4 milliseconds the second time. A first quantity below the first preset quantity indicates that most participating devices could not complete local training and send a model parameter update within the waiting duration of the current model update. To exploit the contribution of most devices' local data to the federated learning training and improve the model quality of the federated learning model, the coordinating device can therefore increase the waiting duration of the next model update, so that more participating devices have time to train locally and send model parameter updates in the next model update.
Step B30: if the first quantity is not less than the first preset quantity, judge whether the first quantity is greater than a second preset quantity, wherein the first preset quantity is less than the second preset quantity;
If the coordinating device determines that the first quantity is not less than the first preset quantity, it judges whether the first quantity is greater than a second preset quantity. The second preset quantity is also configured as needed, but should be set greater than the first preset quantity, so that a first quantity above it indicates that most participating devices had time, within the waiting duration of the current model update, to complete local training and send a model parameter update.
Step B40: if the first quantity is greater than the second preset quantity, reduce the waiting duration to obtain the waiting duration of the next model update.
If the coordinating device determines that the first quantity is greater than the second preset quantity, it reduces the waiting duration of the current model update to obtain the waiting duration of the next model update. As with the increase above, the reduction is applied on top of the current waiting duration; it can be a preset fixed amount or a preset growing amount. A first quantity above the second preset quantity indicates that most participating devices had time to train locally and send model parameter updates within the current waiting duration, so the coordinating device is already exploiting most devices' local data for the federated learning training. On this basis the coordinating device can heuristically reduce the waiting duration, probing whether most devices still have time to train locally and send updates after the reduction. This shortens the training time of the federated learning model as much as possible while preserving its model quality, improving training efficiency and balancing model quality against training time. Since the reduction is heuristic, the preferred approach is to reduce by a fixed and relatively small amount.
If the coordinating device determines that the first quantity is neither less than the first preset quantity nor greater than the second preset quantity, it can keep the waiting duration unchanged. Note that in one embodiment the first preset quantity can also equal the second preset quantity.
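Steps B10-B40 amount to a simple threshold rule. A sketch assuming fixed increments and reductions in milliseconds (the growing-increment variant would additionally track how often each branch has fired):

```python
def adjust_waiting_duration(wait_ms, first_quantity, first_preset, second_preset,
                            increment_ms=2.0, reduction_ms=1.0):
    """Steps B10-B40: lengthen the wait when too few devices kept up,
    heuristically shorten it by a small fixed amount when most did."""
    if first_quantity < first_preset:      # most devices missed the deadline
        return wait_ms + increment_ms
    if first_quantity > second_preset:     # most devices finished in time
        return max(wait_ms - reduction_ms, 0.0)
    return wait_ms                         # otherwise keep the duration unchanged
```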
Further, when the coordinating device has counted the second quantity of participating devices whose participation state for the current model update is the no-time state, it can apply an adjustment strategy similar to steps B10-B40, comparing the second quantity against preset quantities: when most participating devices had no time to participate in the current model update, the waiting duration is increased; when few devices had no time to participate, the waiting duration is reduced, achieving the same balance between the model quality and the training time of the federated learning model.
Further, in one embodiment, since the joint model parameter is randomly initialized at the beginning of the federated learning training process and local training on the participating devices may therefore take a long time, the coordinating device can choose a larger waiting duration; in the later stage of training, the model to be trained is close to convergence, the joint model parameter changes little, and local training may be short for all devices, so the coordinating device can choose a smaller waiting duration.
Further, in one embodiment, if the coordinating device detects that the model to be trained is close to convergence but has received no model parameter update from some participating device over several consecutive model updates, it can judge whether that device's local data volume is greater than a preset data volume. If it is, the waiting duration can be increased, so that this data-rich device can contribute to the federated learning training; if it is not greater than the preset data volume, the waiting duration need not be increased, i.e. that participating device is given up, to preserve training efficiency.
Further, based on the above first, second and third embodiments, a fourth embodiment of the federated learning model training method of the present invention is proposed. In this embodiment, the federated learning model training method is applied to a participating device communicatively connected to a coordinating device, and comprises the following steps:
Step C10: receive the model update request message sent by the coordinating device, and obtain from the model update request message the joint model parameter and waiting duration of the current federated learning model update;
In this embodiment, the coordinating device and each participating device can establish communication connections in advance through handshaking and identity authentication. In one model parameter update procedure, the coordinating device first sends a model update request message to each participating device, carrying the joint model parameter and waiting duration of the current federated learning model update (hereinafter: the current model update). The coordinating device can send the request message to each participating device individually, point to point, or to all participating devices at once by multicast or broadcast. A model update refers to one round of updating the parameters of the federated learning model. The joint model parameter can be a parameter of the federated learning model, for example the weights on the connections between nodes of a neural network, or gradient information of the federated learning model, for example the gradients in a neural-network gradient-descent algorithm, which can be raw or compressed gradient values. The waiting duration of the current model update is how long the coordinating device waits for participating devices to send model parameter updates; it can be the interval, determined by the coordinating device for the current model update, from the moment the request message is sent to the moment it stops accepting model parameter updates from participating devices.
It should be noted that for the first model update in the federated learning training process, the joint model parameter and waiting duration carried in the model update request message can be preset, i.e. an initial joint model parameter and an initial waiting duration; in the second and subsequent model updates, they are obtained from the result of the previous model update.
The participating device receives the model update request message sent by the coordinating device and obtains from it the joint model parameter and waiting duration of the current model update.
Step C20: perform local training on the model to be trained according to the local data of the participating device and the joint model parameter, obtaining a first model parameter update;
After sending the model update request message, the coordinating device enters a waiting state lasting the waiting duration of the current model update, during which it receives the model parameter updates sent by the participating devices.
Having extracted the joint model parameter and waiting duration of the current model update from the model update request message, the participating device performs local training on the model to be trained according to its local data and the joint model parameter, obtaining a first model parameter update. The model to be trained refers to the federated learning model to be trained; the first model parameter update is an update to the joint model parameter, for example the updated weight parameters of a neural network.
Step C30: determine whether there is time to participate in the current model update according to the waiting duration;
The participating device determines from the waiting duration whether it has time to participate in the current model update. Specifically, the time difference between the moment the coordinating device sends the model update request message and the moment the participating device receives it, plus the time difference between the moment the participating device sends its model parameter update and the moment the coordinating device receives it, is called the network delay. When the network delay is negligible, the participating device can compare its estimated local training duration with the waiting duration: if the waiting duration is greater, it has time to participate in the current model update; otherwise it does not. When the network delay must be considered, the participating device compares the estimated network delay plus the local training duration with the waiting duration: if the waiting duration is greater, it has time to participate in the current model update; otherwise it does not.
It should be noted that a network delay can be preset in the participating device; it can be estimated by having the device record the time differences of data exchanged with the coordinating device and averaging several such differences. For the first model update, the participating device can compute the number of steps its local training algorithm needs and estimate the local training duration from that step count; in subsequent model updates, it can estimate the duration of this round's local training from the time local training actually took in the previous model update. Specifically, since the model to be trained gradually converges and each device's local training duration gradually decreases, the estimate can be the time actually spent on local training last time, minus one time unit, as the estimated local training duration for the current model update.
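A sketch of this participation decision, assuming durations are in milliseconds and treating the "one time unit" decay as a configurable constant (both assumptions, since the patent fixes neither):

```python
def estimate_training_ms(prev_actual_ms, decay_ms=50.0):
    """This round's estimate: last round's actual training time minus one
    time unit, since local training shortens as the model converges."""
    return max(prev_actual_ms - decay_ms, 0.0)

def has_time_to_participate(wait_ms, training_ms, delay_ms=0.0):
    """Participate only if local training (plus the network delay, when it
    is not negligible) fits within the coordinator's waiting duration."""
    return delay_ms + training_ms < wait_ms
```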
Step C40: if it is determined that there is time to participate in the current model update, send the first model parameter update to the coordinating device, or send the first model parameter update carrying a first time label to the coordinating device.
If it determines that it has time to participate in the current model update, the participating device sends the first model parameter update obtained by local training to the coordinating device, or sends the first model parameter update carrying a first time label to the coordinating device. The first time label indicates that the participating device had time to participate in the current model update and can be a one-bit flag, for example a flag bit set to 1.
In this embodiment, when a participating device determines that it has no time to participate in the current model update, it can refrain from sending any model parameter update to the coordinating device, i.e. not participate in the current model update.
Further, the step of sending the first model parameter update to the coordinating device can include:
Step a: encrypt the first model parameter update according to a preset encryption algorithm, and send the encrypted first model parameter update to the coordinating device.
To guarantee data security, the participating device can encrypt the first model parameter update according to a preset encryption algorithm and send the encrypted first model parameter update to the coordinating device. In a scenario where the coordinating device and the participating devices trust each other and only need to prevent leaking data to third parties, the preset encryption algorithm can be a conventional one such as secret sharing; in that case the coordinating device, on receiving the encrypted first model parameter update, first decrypts it and uses the decrypted update in subsequent computation. In a scenario where the coordinating device and the participating devices do not trust each other, the preset encryption algorithm can be a homomorphic encryption algorithm (Homomorphic Encryption); in that case the coordinating device can compute directly on the encrypted first model parameter update and return the result to the participating device, which first decrypts it and then continues the subsequent computation. Note that if the participating device encrypts the first model parameter update, the estimated local training duration must also include the encryption time.
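For the mutually distrustful scenario, a minimal sketch using the third-party `phe` Paillier library is shown below. This is an assumption: the patent names no concrete scheme, and any additively homomorphic scheme would serve. It lets the coordinating device aggregate ciphertexts it cannot read:

```python
from phe import paillier  # pip install phe

public_key, private_key = paillier.generate_paillier_keypair()

# Each participating device encrypts the coordinates of its update.
enc_a = [public_key.encrypt(x) for x in (0.4, -1.2)]
enc_b = [public_key.encrypt(x) for x in (0.6, -0.8)]

# The coordinating device averages the ciphertexts without decrypting:
# Paillier supports adding ciphertexts and scaling them by plaintexts.
enc_avg = [(a + b) * 0.5 for a, b in zip(enc_a, enc_b)]

# A key-holding participating device decrypts the fused result.
avg = [private_key.decrypt(c) for c in enc_avg]  # approx. [0.5, -1.0]
```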
Since each participating device's local training time and network delay differ, not all participating devices have time to participate in the current model update. In one model update, the participating devices that send a first model parameter update to the coordinating device may therefore be only a subset of all devices participating in the federated learning training.
After receiving the model parameter updates (here, first model parameter updates) sent by the participating devices, the coordinating device performs fusion processing on them to obtain the newest joint model parameter. The fusion can be a weighted average of the model parameter updates. The weight of each model parameter update can be configured in advance, or can be computed per update as the ratio of the data volume owned by the sending participating device to the total data volume owned by all participating devices that sent model parameter updates.
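A sketch of this fusion, assuming each update arrives as a numeric vector together with the sending device's data volume (the names are illustrative):

```python
import numpy as np

def fuse_updates(updates, data_volumes):
    """Weighted average of the received updates; a device's weight is its
    share of the data held by all devices that sent an update this round."""
    total = float(sum(data_volumes))
    return sum((v / total) * u for v, u in zip(data_volumes, updates))

new_joint = fuse_updates([np.array([0.4, -1.2]), np.array([0.6, -0.8])], [2000, 8000])
```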
The coordinating device also counts, from the received model parameter updates (here, first model parameter updates), each participating device's participation state for the current model update. A participating device's participation state for the current model update can be the had-time state or the no-time state: the had-time state means the device had time to participate in the current model update, and the no-time state means it did not. Counting participation states can mean counting the participating devices in the had-time state, counting the participating devices in the no-time state, or both.
In this embodiment, participating devices fall into two kinds: those that send the first model parameter update obtained by local training, and those that send no model parameter update; sending indicates having had time to participate, and not sending indicates not having had time. The coordinating device can therefore take the number of received first model parameter updates as the number of devices that had time to participate in the current model update, or subtract the number of received model parameter updates from the total number of participating devices to obtain the number of devices that had no time to participate, where all participating devices refers to all devices that established communication connections with the coordinating device in advance. When participating devices send first model parameter updates carrying first time labels, the coordinating device can take the quantity of first time labels as the number of devices that had time to participate in the current model update.
The coordinating device adjusts the waiting duration of the current model update according to the statistical result to obtain the waiting duration of the next model update. A specific adjustment strategy can be: when many participating devices had time to participate in the current model update, reduce the waiting duration, since most devices could complete local training and send model parameter updates within this waiting duration, so the waiting duration of the next model update can be reduced to shorten each round and thereby the training time of the federated learning model; when few participating devices had time to participate, increase the waiting duration, since most devices could not complete local training and send model parameter updates within this waiting duration, so the waiting duration of the next model update can be increased to let devices that had no time this round participate next round, avoiding a fixed set of devices always driving the model updates and thereby improving the model quality of the federated learning model.
Specifically, the coordinating device can set a preset quantity: when the number of devices that had time to participate in the current model update exceeds it, that number is considered large; when it is at most the preset quantity, that number is considered small. Increasing the waiting duration can mean adding a preset increment to it to obtain the waiting duration of the next model update; the preset increment can be fixed, added whenever an increase is needed, or growing, for example 2 milliseconds the first time and 4 milliseconds the second. Reducing the waiting duration can work analogously.
After obtaining the newest joint model parameter and the waiting duration of the next model update, the coordinating device carries both in the model update request message of the next model update and sends that request message to each participating device to start the next model update. This loop continues until the coordinating device detects that the model to be trained is in a converged state, whereupon it stops launching model updates, ends training, and takes the newest joint model parameter as the final parameter of the model to be trained, completing the training of the federated learning model.
The coordinating device can detect whether the model to be trained is in a converged state by computing the difference between the newest joint model parameter and the previous joint model parameter: if the difference is less than a preset value, the model to be trained is in a converged state; if it is not less than the preset value, the model is not yet converged. Alternatively, it can judge whether the number of model updates has reached a preset number, or whether the training duration has exceeded a preset duration, declaring convergence when either condition is reached. The preset value, preset number and preset duration can all be configured as needed.
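A sketch combining the three stopping tests, assuming the parameter difference is measured as a Euclidean norm (an assumption; the patent does not fix the metric):

```python
import numpy as np

def has_converged(new_params, old_params, rounds, elapsed_s,
                  eps=1e-4, max_rounds=1000, max_seconds=3600.0):
    """Stop when the joint parameters barely move, or when the round
    or training-time budget is exhausted."""
    return (np.linalg.norm(new_params - old_params) < eps
            or rounds >= max_rounds
            or elapsed_s > max_seconds)
```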
In this embodiment, the participating device receives the model update request message sent by the coordinating device and obtains the joint model parameter and waiting duration of the current model update; performs local training on the model to be trained according to its local data and the joint model parameter, obtaining a first model parameter update; determines from the waiting duration whether it has time to participate in the current model update; and, if it does, sends the first model parameter update, with or without a first time label, to the coordinating device. The coordinating device then obtains the newest joint model parameter from the first model parameter updates, counts each participating device's participation state for the current model update, determines the waiting duration of the next model update from the statistical result, and carries the newest joint model parameter and the next waiting duration in the next model update request message, until it detects that the model to be trained is in a converged state and takes the newest joint model parameter as the model's final parameter. The coordinating device thus dynamically adjusts the federated learning training time according to the participating devices' local training durations, reducing the training time as far as possible while raising the model quality as much as possible, and so achieves a good balance between the training time and the quality of the federated learning model.
Further, based on the above fourth embodiment, a fifth embodiment of the federated learning model training method of the present invention is proposed. In this embodiment, the model update request message also carries a message sending time and the update serial number of the current model update, and step C30 includes:
Step C301: obtain the message sending time and the update serial number from the model update request message;
The coordinating device can also carry in the model update request message a message sending time and the update serial number of the current model update. The message sending time is the moment the coordinating device sent the model update request message; the update serial number identifies which model update the current one is. Carrying both in the request helps the participating device determine the network delay of the current model update. The participating device obtains the message sending time and the update serial number from the model update request message.
Step C302: determine the network delay according to the message sending time, the update serial number, and the receiving time of the model update request;
The participating device records the receiving time of the model update request message and determines the network delay of the current model update from the message sending time, the update serial number and the receiving time. Specifically, the participating device records the receiving time t2 of the model update request message with update serial number n and obtains the message sending time t1; t2-t1 is then the time difference between sending and receiving the model update request. Since the delay in general network communication is symmetric, the time from the participating device sending its model parameter update to the coordinating device receiving it can also be estimated as t2-t1, so the network delay of the n-th model update is 2*(t2-t1).
Step C303: judge, according to the waiting duration, the estimated local training duration and the network delay, whether there is time to perform local training and send a model parameter update;
After determining the network delay of the current model update, the participating device judges, from the waiting duration, the estimated local training duration and the network delay, whether it has time to perform local training and send a model parameter update, i.e. whether it has time to train locally and deliver the resulting first model parameter update to the coordinating device. Specifically, it can compare the local training duration plus the network delay with the waiting duration: if the sum is less, it has time to train locally and send a model parameter update; otherwise it does not. As shown in Figure 3, if the waiting duration of the n-th model update is w and the device estimates that local training will finish at time t3, i.e. the estimated local training duration is t3-t2, then its model parameter update can be computed to reach the coordinating device at t4 = t3 + (t2-t1). The participating device can then compare t4-t1+μ with w, where μ is the delay uncertainty caused by network asymmetry, generally a constant, for example 2 milliseconds; μ can also be taken as zero, i.e. the asymmetry of the communication delay can be ignored. If t4-t1+μ < w, the device has time to perform local training and send a model parameter update; if t4-t1+μ >= w, it does not.
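A sketch of this check, with all timestamps on a common clock in milliseconds and μ as a small constant allowance (the values are illustrative):

```python
def can_train_and_send(t1_ms, t2_ms, t3_ms, w_ms, mu_ms=2.0):
    """Step C303: with a symmetric link, the update sent at t3 reaches the
    coordinating device at t4 = t3 + (t2 - t1); the device is in time if
    t4 - t1 plus the asymmetry allowance mu is still below the wait w."""
    t4_ms = t3_ms + (t2_ms - t1_ms)   # estimated arrival time at the coordinator
    return t4_ms - t1_ms + mu_ms < w_ms
```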
Step C304: if it is determined that there is time to perform local training and send a model parameter update, determine that there is time to participate in the current model update;
If the participating device determines that it has time to perform local training and send a model parameter update, it determines that it has time to participate in the current model update.
Step C305: if it is determined that there is no time to perform local training and send a model parameter update, determine that there is no time to participate in the current model update.
If the participating device determines that it has no time to perform local training and send a model parameter update, it determines that it has no time to participate in the current model update.
In this embodiment, by carrying the message sending time and the update serial number in the model update request message, the coordinating device enables the participating devices to estimate the network delay of the current model update more precisely, and hence to judge accurately whether they have time to participate in the current model update, so that the coordinating device can adjust the waiting duration of the next model update accurately.
Further, after step C305, the method further includes:
Step C50: judge, according to the waiting duration and the network delay, whether there is time to send a model parameter update;
If the participating device determines that it has no time to participate in the current model update, it can judge from the waiting duration and the network delay determined above whether it has time to send a model parameter update at all. Specifically, the participating device can judge whether the network delay of the current model update is less than the waiting duration: if it is, the device has time to send a model parameter update; if it is not, the device does not, in which case it sends no model parameter update but still performs local training and saves the resulting model parameter update for the next model update. Continuing the example above, the participating device can judge whether 2*(t2-t1)+β < w, where β is the time the device needs for the weighted-average operation and may further include the time to encrypt the model parameter update, for example the time needed to perform homomorphic encryption. When 2*(t2-t1)+β < w, the participating device determines that it has time to send a model parameter update; when 2*(t2-t1)+β >= w, it determines that it does not.
Step C60: if it is determined that there is time to send a model parameter update, obtain the number of consecutive recent model updates in which no model parameter update was sent;
If it determines that it has time to send a model parameter update, the participating device obtains the number of consecutive recent model updates in which it sent no model parameter update and judges whether that number is greater than a preset number. The participating device records this number; here, sending no model parameter update means sending neither an update obtained by local training nor one obtained by local fusion.
The preset number is configured as needed; its purpose is to prevent a participating device from failing to send model parameter updates several times in a row. For example, setting it to 1 prevents a device that sent no model parameter update last time from sending nothing this time as well.
Step C70: when detecting that the number is greater than the preset number, locally fuse the joint model parameter with the previous model parameter update obtained by local training during the last model update, obtaining a second model parameter update;
If the number is greater than the preset number, the participating device locally fuses the previous model parameter update obtained by local training during the last model update with the received joint model parameter, obtaining a second model parameter update. It should be noted that in this embodiment the model parameter update obtained by local fusion is called the second model parameter update, to distinguish it from the first model parameter update obtained by local training in the above embodiments; it should be understood that the first and second model parameter updates differ only in their origin.
Local fusion can be a weighted average of the previous model parameter update X1 and the joint model parameter X2. The weights can be configured case by case; for example, the weight of the previous update can be the ratio a of the data volume owned by this participating device to the data volume owned by all devices participating in the federated learning training, and the weight of the joint model parameter can be (1-a), so the locally fused model parameter update is X1*a + X2*(1-a). Note that local fusion produces the update in an extremely short time, which can be ignored; when it is ignored, the above β can be taken as 0.
Step C80: send the second model parameter update carrying a second time label to the coordinating device, or encrypt the second model parameter update carrying the second time label according to a preset encryption algorithm and then send it to the coordinating device.
The participating device sends the second model parameter update carrying a second time label to the coordinating device, either in the clear or encrypted according to a preset encryption algorithm. The second time label indicates that the device had no time to participate in the current model update; for example, it can be a flag bit set to 0. The preset encryption algorithm can be a homomorphic encryption algorithm or another conventional encryption algorithm.
If the participating device determines that it has no time to send a model parameter update, it sends neither a first nor a second model parameter update, but still performs local training and saves the resulting model parameter update for use in the next model update.
In the scenario where participating devices send second model parameter updates to the coordinating device, the model parameter updates received by the coordinating device comprise both first and second model parameter updates. When fusing them, the coordinating device need not distinguish the two kinds; it performs fusion processing on all received model parameter updates, in the same way as the fusion of first model parameter updates in the above embodiments.
When the coordinating device counts each participating device's participation state for the current model update from the model parameter updates, however, it must distinguish first from second model parameter updates, because first model parameter updates are sent by devices that had time to participate in the current model update while second model parameter updates are sent by devices that did not. It can do so by extracting the time label from each received model parameter update: an update carrying a first time label is a first model parameter update, and one carrying a second time label is a second model parameter update. The coordinating device determines, from the quantity of first time labels, the first quantity of participating devices whose participation state is the had-time state, and, from the quantity of second time labels, the second quantity of participating devices whose participation state is the no-time state, and adjusts the waiting duration of the current model update according to the first quantity or the second quantity to obtain the waiting duration of the next model update.
In this embodiment, a participating device that determines it has no time to participate in the current model update but has time to send a model parameter update, and that has gone several model updates without sending one, performs local fusion to obtain a second model parameter update and sends it with a second time label to the coordinating device. This prevents the coordinating device from treating a device that has not sent updates several times as offline and discarding its data, spares the device from re-authenticating after going offline, which wastes time on both the participating device and the coordinating device, and, with the preset number set to 1, prevents a device from sending no model parameter update twice in a row.
Further, in one embodiment, the step of sending the first model parameter update to the coordinating device if it is determined that there is time to participate in the current model update can include:
Step b: if it is determined that there is time to participate in the current model update, judge whether a preset sending condition is currently met;
If the participating device determines that it has time to participate in the current model update, it judges whether a preset sending condition is currently met. The preset sending condition can be configured in advance, for example that the device's battery level is greater than a preset level, so that the device does not send the first model parameter update when its battery is low, saving the device's power. Alternatively, the condition can be to draw a random number in the range 0 to 1 and compare it with a preset probability, for example 0.5: if the random number is greater, the preset sending condition is met; if not, it is not. In this way a participating device need not send a model parameter update in every model update, reducing the burden on both the participating device and the coordinating device.
Step c: when the preset sending condition is met, send the first model parameter update to the coordinating device.
When the participating device determines that the preset sending condition is met, it sends the first model parameter update to the coordinating device.
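Two sketches of such preset sending conditions, with illustrative thresholds (the patent fixes neither value):

```python
import random

def battery_condition(battery_pct, threshold_pct=20.0):
    """Variant 1: send only when the battery level exceeds a preset level."""
    return battery_pct > threshold_pct

def probabilistic_condition(p=0.5):
    """Variant 2: draw a number in [0, 1) and send only if it exceeds p,
    so each device skips roughly half of the rounds at p = 0.5."""
    return random.random() > p
```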
It should be noted that, in this document, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device comprising that element.
The serial numbers of the above embodiments of the invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realized by means of software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk or an optical disc) and including instructions that cause a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, etc.) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not limit its scope of protection; any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the protection scope of the present invention.
Claims (13)
1. A federated learning model training method, characterized in that the federated learning model training method is applied to a coordinating device, the coordinating device is communicatively connected to a plurality of participating devices, and the federated learning model training method comprises the following steps:
sending a model update request message to each participating device, the model update request message carrying the joint model parameter and the waiting duration of the current federated learning model update;
receiving the model parameter update sent by each participating device, wherein the model parameter update is obtained by each participating device performing local training on a model to be trained according to local data and the joint model parameter, and the model parameter update is sent when each participating device determines, according to the waiting duration, that it has time to participate in the current model update;
performing fusion processing on the model parameter updates to obtain a newest joint model parameter;
counting, according to the model parameter updates, the participation state of each participating device for the current model update, and adjusting the waiting duration according to the statistical result to obtain the waiting duration of the next model update;
carrying the newest joint model parameter and the waiting duration of the next model update in the model update request message of the next model update, until it is detected that the model to be trained is in a converged state, and taking the newest joint model parameter as the final parameter of the model to be trained.
2. The federated learning model training method according to claim 1, characterized in that the step of counting, according to the model parameter updates, the participation state of each participating device for the current model update comprises:
extracting the time label from each model parameter update;
when a first time label is extracted, determining, according to the quantity of first time labels, the first quantity of participating devices whose participation state for the current model update is the had-time state, wherein the participating device carries the first time label when sending the model parameter update obtained by local training.
3. The federated learning model training method according to claim 2, characterized in that after the step of extracting the time label from each model parameter update, the method further comprises:
when a second time label is extracted, determining, according to the quantity of second time labels, the second quantity of participating devices whose participation state for the current model update is the no-time state, wherein the participating device determines according to the waiting duration that it has no time to participate in the current model update but has time to send a model parameter update and, when the number of its consecutive recent unsent model parameter updates is greater than a preset number, locally fuses the joint model parameter with the previous model parameter update obtained by local training during the last model update, obtains the model parameter update, and carries the second time label when sending the model parameter update.
4. The federated learning model training method according to claim 2, characterized in that the step of adjusting the waiting duration according to the statistical result to obtain the waiting duration of the next model update comprises:
when the statistical result is the first quantity, judging whether the first quantity is less than a first preset quantity;
if the first quantity is less than the first preset quantity, increasing the waiting duration to obtain the waiting duration of the next model update;
if the first quantity is not less than the first preset quantity, judging whether the first quantity is greater than a second preset quantity, wherein the first preset quantity is less than the second preset quantity;
if the first quantity is greater than the second preset quantity, reducing the waiting duration to obtain the waiting duration of the next model update.
5. A federated learning model training method, applied to a participating device communicatively connected to a coordinating device, the method comprising the following steps:
receiving a model update request message sent by the coordinating device, and obtaining from the model update request message the joint model parameters and the waiting duration for the current federated learning model update;
performing local training on the model to be trained according to the participating device's local data and the joint model parameters, to obtain a first model parameter update;
determining, according to the waiting duration, whether there is time to participate in the current model update;
if it is determined that there is time to participate in the current model update, sending the first model parameter update to the coordinating device, or sending the first model parameter update carrying a first time label to the coordinating device.
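Taken together, the steps of claim 5 amount to: unpack the request, train locally, and return the result only if it still fits in the waiting duration. A minimal participant-side sketch follows; the message field names and the injected `local_train` and `send_update` helpers are assumptions made for illustration.

```python
import time
from typing import Callable, List

def handle_model_update_request(request: dict,
                                local_train: Callable[[List[float]], List[float]],
                                send_update: Callable[[List[float], str], None]) -> None:
    """One round on the participant side, as a sketch of claim 5.

    `local_train` and `send_update` are injected stand-ins for the device's
    training routine and its transport to the coordinating device.
    """
    joint_params = request["joint_model_parameters"]
    deadline = time.monotonic() + request["waiting_duration"]  # seconds

    first_update = local_train(joint_params)  # local training on the device's own data

    # Participate only if the update can still reach the coordinator in time.
    if time.monotonic() < deadline:
        send_update(first_update, "had_time")  # carries the first time label

# Toy usage: a trivial "training" step and a print statement as the transport.
handle_model_update_request(
    {"joint_model_parameters": [0.5, -0.2], "waiting_duration": 5.0},
    local_train=lambda params: [w * 0.9 for w in params],
    send_update=lambda update, label: print(label, update),
)
```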
6. The federated learning model training method of claim 5, wherein the model update request message further carries a message sending time and an update sequence number of the current model update, and the step of determining, according to the waiting duration, whether there is time to participate in the current model update comprises:
obtaining the message sending time and the update sequence number from the model update request message;
determining the network delay according to the message sending time, the update sequence number, and the receiving time at which the model update request was received;
judging, according to the waiting duration, the estimated local training duration, and the network delay, whether there is time to perform local training and send a model parameter update;
if it is determined that there is time to perform local training and send a model parameter update, determining that there is time to participate in the current model update;
if it is determined that there is no time to perform local training and send a model parameter update, determining that there is no time to participate in the current model update.
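The feasibility test of claim 6 weighs the waiting duration against the device's time budget: the delay the request already consumed in transit, the estimated local training time, and roughly one more delay for the reply. The sketch below assumes approximately synchronized clocks and that the waiting duration is measured from the coordinator's send time; it omits the sequence-number bookkeeping the claim mentions.

```python
def estimate_network_delay(message_sending_time: float, receiving_time: float) -> float:
    """One-way delay of the request, assuming roughly synchronized clocks.

    The claim also uses the update sequence number here (e.g. to match the
    request to its round); this sketch leaves out that bookkeeping.
    """
    return max(0.0, receiving_time - message_sending_time)

def has_time_for_training(waiting_duration: float,
                          estimated_training_duration: float,
                          network_delay: float) -> bool:
    """True if local training plus returning the update fits in the waiting duration.

    The request already consumed one network delay in transit, and the reply
    will consume roughly another one on the way back.
    """
    remaining = waiting_duration - network_delay
    return estimated_training_duration + network_delay <= remaining
```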
7. The federated learning model training method of claim 6, wherein, after the step of determining that there is no time to participate in the current model update, the method further comprises:
judging, according to the waiting duration and the network delay, whether there is time to send a model parameter update;
if it is determined that there is time to send a model parameter update, obtaining the number of consecutive recent rounds in which no model parameter update was sent;
when it is detected that this number exceeds a preset count, locally fusing the joint model parameters with the previous model parameter update obtained by local training in the last model update, to obtain a second model parameter update;
sending the second model parameter update carrying a second time label to the coordinating device, or encrypting the second model parameter update carrying the second time label according to a preset encryption algorithm before sending it to the coordinating device.
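The fallback in claim 7 keeps a silent device visible to the coordinator: after too many missed rounds it sends a cheap fused update instead of a freshly trained one. The claims do not fix the fusion formula; the element-wise weighted average below is one plausible reading, labeled as an assumption.

```python
from typing import List, Optional

def local_fusion(joint_params: List[float],
                 previous_update: List[float],
                 weight: float = 0.5) -> List[float]:
    """Element-wise weighted average as an assumed concrete form of 'local fusion';
    the patent does not specify the formula."""
    return [weight * j + (1.0 - weight) * p
            for j, p in zip(joint_params, previous_update)]

def maybe_build_second_update(joint_params: List[float],
                              previous_update: List[float],
                              missed_rounds: int,
                              preset_times: int) -> Optional[List[float]]:
    """Return a second model parameter update once too many rounds were missed."""
    if missed_rounds > preset_times:
        return local_fusion(joint_params, previous_update)
    return None
```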
8. The federated learning model training method of claim 5, wherein the step of sending the first model parameter update to the coordinating device comprises:
encrypting the first model parameter update according to a preset encryption algorithm, and sending the encrypted first model parameter update to the coordinating device.
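Claim 8 leaves the "preset encryption algorithm" open. As one concrete stand-in, the sketch below serializes the update and encrypts it with symmetric Fernet from the `cryptography` package; production federated learning systems often use homomorphic encryption or secret sharing instead, which this sketch does not attempt.

```python
import json
from typing import List

from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_update(update: List[float], key: bytes) -> bytes:
    """Serialize a model parameter update and encrypt it before transmission."""
    plaintext = json.dumps(update).encode("utf-8")
    return Fernet(key).encrypt(plaintext)

# Usage: the key would be provisioned out of band, not generated per message.
key = Fernet.generate_key()
ciphertext = encrypt_update([0.12, -0.34], key)
```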
9. The federated learning model training method of claim 5, wherein the step of sending the first model parameter update to the coordinating device if it is determined that there is time to participate in the current model update comprises:
if it is determined that there is time to participate in the current model update, judging whether a preset sending condition is currently met;
when the preset sending condition is met, sending the first model parameter update to the coordinating device.
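The "preset sending condition" of claim 9 is likewise unspecified; the sketch below gates the upload on two assumed device-state checks (battery level and network type) purely to make the shape of the test concrete.

```python
def meets_preset_sending_condition(battery_level: float,
                                   on_unmetered_network: bool,
                                   min_battery: float = 0.2) -> bool:
    """Assumed example of a preset sending condition: enough battery and a
    suitable network. The claim leaves the concrete condition open."""
    return battery_level >= min_battery and on_unmetered_network
```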
10. A device, comprising: a memory, a processor, and a federated learning model training program stored on the memory and executable on the processor, wherein the federated learning model training program, when executed by the processor, implements the steps of the federated learning model training method of any one of claims 1 to 4.
11. A device, comprising: a memory, a processor, and a federated learning model training program stored on the memory and executable on the processor, wherein the federated learning model training program, when executed by the processor, implements the steps of the federated learning model training method of any one of claims 5 to 9.
12. A federated learning model training system, comprising: at least one coordinating device and at least one participating device, wherein the coordinating device is the device of claim 10 and the participating device is the device of claim 11.
13. A computer-readable storage medium, having a federated learning model training program stored thereon, wherein the federated learning model training program, when executed by a processor, implements the steps of the federated learning model training method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910538946.6A CN110263908B (en) | 2019-06-20 | 2019-06-20 | Federal learning model training method, apparatus, system and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110263908A true CN110263908A (en) | 2019-09-20 |
CN110263908B CN110263908B (en) | 2024-04-02 |
Family
ID=67920063
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910538946.6A Active CN110263908B (en) | 2019-06-20 | 2019-06-20 | Federal learning model training method, apparatus, system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110263908B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109165725A (en) * | 2018-08-10 | 2019-01-08 | 深圳前海微众银行股份有限公司 | Neural network federation modeling method, equipment and storage medium based on transfer learning |
CN109255444A (en) * | 2018-08-10 | 2019-01-22 | 深圳前海微众银行股份有限公司 | Federal modeling method, equipment and readable storage medium storing program for executing based on transfer learning |
CN109284313A (en) * | 2018-08-10 | 2019-01-29 | 深圳前海微众银行股份有限公司 | Federal modeling method, equipment and readable storage medium storing program for executing based on semi-supervised learning |
CN109635462A (en) * | 2018-12-17 | 2019-04-16 | 深圳前海微众银行股份有限公司 | Model parameter training method, device, equipment and medium based on federation's study |
CN109886417A (en) * | 2019-03-01 | 2019-06-14 | 深圳前海微众银行股份有限公司 | Model parameter training method, device, equipment and medium based on federation's study |
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110674528A (en) * | 2019-09-20 | 2020-01-10 | 深圳前海微众银行股份有限公司 | Federal learning privacy data processing method, device, system and storage medium |
CN110674528B (en) * | 2019-09-20 | 2024-04-09 | 深圳前海微众银行股份有限公司 | Federal learning privacy data processing method, device, system and storage medium |
WO2021056760A1 (en) * | 2019-09-24 | 2021-04-01 | 深圳前海微众银行股份有限公司 | Federated learning data encryption method, apparatus and device, and readable storage medium |
CN112907309A (en) * | 2019-11-19 | 2021-06-04 | 阿里巴巴集团控股有限公司 | Model updating method, resource recommendation method, device, equipment and system |
CN111026436A (en) * | 2019-12-09 | 2020-04-17 | 支付宝(杭州)信息技术有限公司 | Model joint training method and device |
WO2021114933A1 (en) * | 2019-12-09 | 2021-06-17 | 支付宝(杭州)信息技术有限公司 | Model joint training method and apparatus |
WO2021121029A1 (en) * | 2019-12-20 | 2021-06-24 | 深圳前海微众银行股份有限公司 | Training model updating method and system, and agent, server and computer-readable storage medium |
CN111062493A (en) * | 2019-12-20 | 2020-04-24 | 深圳前海微众银行股份有限公司 | Longitudinal federation method, device, equipment and medium based on public data |
CN111062493B (en) * | 2019-12-20 | 2021-06-15 | 深圳前海微众银行股份有限公司 | Longitudinal federation method, device, equipment and medium based on public data |
CN111324440A (en) * | 2020-02-17 | 2020-06-23 | 深圳前海微众银行股份有限公司 | Method, device and equipment for executing automation process and readable storage medium |
CN111400442A (en) * | 2020-02-28 | 2020-07-10 | 深圳前海微众银行股份有限公司 | Resident address analysis method, resident address analysis device, resident address analysis equipment and readable storage medium |
CN111400442B (en) * | 2020-02-28 | 2024-06-04 | 深圳前海微众银行股份有限公司 | Method, device, equipment and readable storage medium for analyzing resident address |
CN111340453A (en) * | 2020-02-28 | 2020-06-26 | 深圳前海微众银行股份有限公司 | Federal learning development method, device, equipment and storage medium |
CN111340453B (en) * | 2020-02-28 | 2024-09-24 | 深圳前海微众银行股份有限公司 | Federal learning development method, device, equipment and storage medium |
CN113377931A (en) * | 2020-03-09 | 2021-09-10 | 香港理工大学深圳研究院 | Language model collaborative learning method, system and terminal of interactive robot |
CN111460528B (en) * | 2020-04-01 | 2022-06-14 | 支付宝(杭州)信息技术有限公司 | Multi-party combined training method and system based on Adam optimization algorithm |
CN111460528A (en) * | 2020-04-01 | 2020-07-28 | 支付宝(杭州)信息技术有限公司 | Multi-party combined training method and system based on Adam optimization algorithm |
WO2021219053A1 (en) * | 2020-04-29 | 2021-11-04 | 深圳前海微众银行股份有限公司 | Federated learning modeling method, apparatus and device, and readable storage medium |
CN111666576B (en) * | 2020-04-29 | 2023-08-04 | 平安科技(深圳)有限公司 | Data processing model generation method and device, and data processing method and device |
CN111666576A (en) * | 2020-04-29 | 2020-09-15 | 平安科技(深圳)有限公司 | Data processing model generation method and device and data processing method and device |
CN111475853A (en) * | 2020-06-24 | 2020-07-31 | 支付宝(杭州)信息技术有限公司 | Model training method and system based on distributed data |
CN111612168A (en) * | 2020-06-30 | 2020-09-01 | 腾讯科技(深圳)有限公司 | Management method and related device for machine learning task |
WO2021155671A1 (en) * | 2020-08-24 | 2021-08-12 | 平安科技(深圳)有限公司 | High-latency network environment robust federated learning training method and apparatus, computer device, and storage medium |
WO2022041947A1 (en) * | 2020-08-24 | 2022-03-03 | 华为技术有限公司 | Method for updating machine learning model, and communication apparatus |
WO2022062724A1 (en) * | 2020-09-27 | 2022-03-31 | 中兴通讯股份有限公司 | Fault prediction method and apparatus, and computer-readable storage medium |
CN112702623A (en) * | 2020-12-18 | 2021-04-23 | 深圳前海微众银行股份有限公司 | Video processing method, device, equipment and storage medium |
WO2022156910A1 (en) * | 2021-01-25 | 2022-07-28 | Nokia Technologies Oy | Enablement of federated machine learning for terminals to improve their machine learning capabilities |
CN113158223A (en) * | 2021-01-27 | 2021-07-23 | 深圳前海微众银行股份有限公司 | Data processing method, device, equipment and medium based on state transition kernel optimization |
WO2022160578A1 (en) * | 2021-01-27 | 2022-08-04 | 深圳前海微众银行股份有限公司 | State transition core optimization-based data processing method, apparatus and device, and medium |
CN114915983A (en) * | 2021-02-07 | 2022-08-16 | 展讯通信(上海)有限公司 | Data acquisition method and device |
CN112994981A (en) * | 2021-03-03 | 2021-06-18 | 上海明略人工智能(集团)有限公司 | Method and device for adjusting time delay data, electronic equipment and storage medium |
CN112994981B (en) * | 2021-03-03 | 2022-05-10 | 上海明略人工智能(集团)有限公司 | Method and device for adjusting time delay data, electronic equipment and storage medium |
WO2022227212A1 (en) * | 2021-04-25 | 2022-11-03 | 平安科技(深圳)有限公司 | Federated learning-based speech representation model training method and apparatus, device, and medium |
CN113094180B (en) * | 2021-05-06 | 2023-10-10 | 苏州联电能源发展有限公司 | Wireless federal learning scheduling optimization method and device |
CN113094180A (en) * | 2021-05-06 | 2021-07-09 | 苏州联电能源发展有限公司 | Wireless federal learning scheduling optimization method and device |
CN113515760A (en) * | 2021-05-28 | 2021-10-19 | 平安国际智慧城市科技股份有限公司 | Horizontal federal learning method, device, computer equipment and storage medium |
CN113515760B (en) * | 2021-05-28 | 2024-03-15 | 平安国际智慧城市科技股份有限公司 | Horizontal federal learning method, apparatus, computer device, and storage medium |
CN113469370B (en) * | 2021-06-22 | 2022-08-30 | 河北工业大学 | Industrial Internet of things data sharing method based on federal incremental learning |
CN113469370A (en) * | 2021-06-22 | 2021-10-01 | 河北工业大学 | Industrial Internet of things data sharing method based on federal incremental learning |
CN113268727A (en) * | 2021-07-19 | 2021-08-17 | 天聚地合(苏州)数据股份有限公司 | Joint training model method, device and computer readable storage medium |
WO2023028907A1 (en) * | 2021-09-01 | 2023-03-09 | Qualcomm Incorporated | Techniques for using relay averaging in federated learning |
CN113837397B (en) * | 2021-09-27 | 2024-02-02 | 平安科技(深圳)有限公司 | Model training method and device based on federal learning and related equipment |
CN113837397A (en) * | 2021-09-27 | 2021-12-24 | 平安科技(深圳)有限公司 | Model training method and device based on federal learning and related equipment |
WO2023103959A1 (en) * | 2021-12-07 | 2023-06-15 | 华为技术有限公司 | Wireless communication method and apparatus |
CN114822863A (en) * | 2022-05-12 | 2022-07-29 | 浙江大学 | Method, apparatus, storage medium, and program product for analyzing medical data based on federated learning system |
WO2023231620A1 (en) * | 2022-06-02 | 2023-12-07 | 华为技术有限公司 | Communication method and apparatus |
WO2024183627A1 (en) * | 2023-03-03 | 2024-09-12 | 华为技术有限公司 | Model update method and communication apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN110263908B (en) | 2024-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110263908A (en) | Federal learning model training method, equipment, system and storage medium | |
Jiang et al. | Multi-agent reinforcement learning for efficient content caching in mobile D2D networks | |
He et al. | Secure social networks in 5G systems with mobile edge computing, caching, and device-to-device communications | |
CN111310932B (en) | Method, device, equipment and readable storage medium for optimizing transverse federal learning system | |
Zhou et al. | Incentive-driven deep reinforcement learning for content caching and D2D offloading | |
He et al. | Integrated networking, caching, and computing for connected vehicles: A deep reinforcement learning approach | |
CN109862610B (en) | D2D user resource allocation method based on deep reinforcement learning DDPG algorithm | |
WO2021232754A1 (en) | Federated learning modeling method and device, and computer-readable storage medium | |
CN111242316B (en) | Longitudinal federal learning model training optimization method, device, equipment and medium | |
WO2021219054A1 (en) | Transverse federated learning system optimization method, apparatus and device, and readable storage medium | |
Huang et al. | A game-theoretic resource allocation approach for intercell device-to-device communications in cellular networks | |
Li et al. | SMDP-based coordinated virtual machine allocations in cloud-fog computing systems | |
CN106411749B (en) | A kind of routing resource for software defined network based on Q study | |
Fu et al. | Learning to compete for resources in wireless stochastic games | |
CN109325584A (en) | Federation's modeling method, equipment and readable storage medium storing program for executing neural network based | |
CN109165725A (en) | Neural network federation modeling method, equipment and storage medium based on transfer learning | |
Yu et al. | INDAPSON: An incentive data plan sharing system based on self-organizing network | |
CN109635462A (en) | Model parameter training method, device, equipment and medium based on federation's study | |
CN109165515A (en) | Model parameter acquisition methods, system and readable storage medium storing program for executing based on federation's study | |
CN109255444A (en) | Federal modeling method, equipment and readable storage medium storing program for executing based on transfer learning | |
WO2023093238A1 (en) | Method and apparatus for performing service processing by using learning model | |
Xia et al. | A reputation-based model for trust evaluation in social cyber-physical systems | |
Fan et al. | Delay-aware resource allocation in fog-assisted IoT networks through reinforcement learning | |
Sun et al. | Heterogeneous-belief based incentive schemes for crowd sensing in mobile social networks | |
Zeng et al. | Trust-based multi-agent imitation learning for green edge computing in smart cities |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||