CN110490738A - Hybrid federated learning method and architecture - Google Patents

Hybrid federated learning method and architecture

Info

Publication number
CN110490738A
CN110490738A (application CN201910720373.9A)
Authority
CN
China
Prior art keywords
group
participant
federated learning
learning model
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910720373.9A
Other languages
Chinese (zh)
Inventor
程勇
董苗波
刘洋
陈天健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd
Priority to CN201910720373.9A
Priority to PCT/CN2019/117518 (WO2021022707A1)
Publication of CN110490738A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 — Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/02 — Banking, e.g. interest calculation or account maintenance
    • G06Q40/04 — Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange

Landscapes

  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Engineering & Computer Science (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a hybrid federated learning method and architecture. The method applies to federated model training with multiple groups of participants. For each group, a first federated learning model is jointly trained on the data sets of the participants within the group. The first federated learning models of all groups are then fused into a second federated learning model, which is sent to the participants in each group. For each group, an updated first federated learning model is obtained by training on the group's data sets starting from the second federated learning model, after which the fusion step is repeated, until model training ends. Applied to financial technology (Fintech), the method can improve the accuracy of federated learning models.

Description

Hybrid federated learning method and architecture
Technical field
The present invention relates to the field of financial technology (Fintech) and to federated learning, and in particular to a hybrid federated learning method and architecture.
Background technique
With the development of computer technology, more and more technologies (big data, distributed computing, blockchain, artificial intelligence, and so on) are being applied in the financial field, and the traditional financial industry is gradually shifting toward financial technology (Fintech). At present, many financial strategy adjustments in the Fintech field rely on the results of federated learning over large volumes of financial transaction data, and such adjustments can directly affect a financial institution's profit and loss. For a financial institution, therefore, the accuracy of its federated learning model is critical.
However, in current federated learning scenarios, the data held by two participants A and B is often complementary and could be used to jointly build a machine learning model, yet the amount of data each holds is still very small. The performance of the resulting joint model then falls short of the expected targets, so its accuracy remains insufficient. In the prior art, the limited accuracy of jointly trained federated models is thus an urgent problem to be solved.
Summary of the invention
The embodiments of the present application provide a hybrid federated learning method and architecture to address the insufficient accuracy of federated learning models in the prior art.
In a first aspect, an embodiment of the present application provides a hybrid federated learning method, applicable to federated model training with multiple groups of participants, where the data sets of the participants within the same group share the same sample objects but have different sample features, while the data sets of participants in different groups share the same sample features but have different sample objects. The method includes: for each group, jointly training a first federated learning model on the data sets of the participants in that group, where during this training each participant exchanges intermediate training results with the other participants in its group; fusing the first federated learning models of all groups into a second federated learning model and sending the second federated learning model to the participants in each group; and, for each group, training on the second federated learning model with the group's data sets to obtain an updated first federated learning model, then returning to the fusion step, until model training ends.
In the above method, because each group's first federated learning model is determined by every in-group participant using the intermediate training results of the other participants in the group, each first federated learning model has already undergone a round of optimization before the models are fused into the second federated learning model. Then, for each group, an updated first federated learning model is obtained from the second federated learning model and the group's data sets, so the final model suitable for every group's participants fully accounts for all of the first federated learning models and is further optimized on top of them. This greatly improves the scalability of federated learning, allows the data of more participants to be considered jointly, and enables federated learning over massive data, thereby increasing its accuracy.
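A minimal Python sketch of this iterate-and-fuse loop (the patent contains no code; `train_group_model`, `fuse_models`, and the toy one-parameter "model" are all illustrative assumptions, with intra-group training reduced to a simple parameter nudge):

```python
def train_group_model(group_data, init_model):
    # Stand-in for one round of intra-group (vertical) training:
    # nudge each parameter halfway toward the group's data mean.
    target = sum(group_data) / len(group_data)
    return {k: v + 0.5 * (target - v) for k, v in init_model.items()}

def fuse_models(models):
    # Horizontal fusion: average each parameter across the group models.
    return {k: sum(m[k] for m in models) / len(models) for k in models[0]}

def hybrid_federated_learning(groups, rounds=10):
    global_model = {"w": 0.0}
    for _ in range(rounds):
        # each group trains its first federated learning model M_j ...
        group_models = [train_group_model(g, global_model) for g in groups]
        # ... which are fused into the second federated learning model M
        global_model = fuse_models(group_models)
        # M is sent back to every group and the loop repeats
    return global_model

model = hybrid_federated_learning([[1.0, 3.0], [2.0, 4.0]])
```

With these toy updates, the fused parameter approaches the mean of the group means (2.5) as rounds accumulate, which mirrors the patent's claim that repeated fuse-and-retrain rounds make the shared model reflect every group's data.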
In an optional embodiment, the preset condition for ending model training includes at least one of the following: the parameters of the second federated learning model converge; the number of updates of the second federated learning model reaches a preset training count; or the training time of the second federated learning model reaches a preset training duration.
The above provides concrete termination conditions: training stops as soon as one or more of them is satisfied, avoiding the resource cost of a federated learning model that trains indefinitely.
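The three stopping conditions can be checked with a small predicate; the threshold values below are placeholders, not values from the patent:

```python
def should_stop(param_delta, n_updates, elapsed_s,
                eps=1e-4, max_updates=100, max_seconds=3600.0):
    # Stop when ANY preset condition is met, mirroring the three listed
    # conditions: parameter convergence, update-count cap, wall-clock cap.
    return (param_delta < eps            # parameters converged
            or n_updates >= max_updates  # preset training count reached
            or elapsed_s >= max_seconds) # preset training duration reached

# Examples: converged / hit update cap / hit time cap / keep training.
print(should_stop(1e-5, 0, 0.0), should_stop(1.0, 100, 0.0),
      should_stop(1.0, 0, 7200.0), should_stop(1.0, 1, 1.0))
```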
In an optional embodiment, each group includes an intra-group coordinator, and the exchange of intermediate training results among in-group participants during training of the first federated learning model proceeds as follows. For any participant in any group, the following training process yields the first federated learning model: the participant sends the intermediate result of training an initial model on its own data set to the other participants; the participant obtains a training result for the initial model from the intermediate results fed back by the other participants, and sends it to the intra-group coordinator; the intra-group coordinator determines an update parameter from the training results of all participants and sends it to each participant; and the participant updates the initial model with the update parameter to obtain the first federated learning model.
In the above method, each participant sends the intermediate result of its locally trained initial model to the other participants and derives its training result from the intermediate results they feed back. The training result therefore fully accounts for the other in-group participants' intermediate results and is more accurate. The intra-group coordinator then determines an update parameter from all participants' training results and sends it to each participant, and each participant updates the initial model accordingly, yielding a more accurate first federated learning model.
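Under the assumption that the "intermediate results" are each participant's partial linear scores and the coordinator's "update parameter" is a gradient step, one plaintext round of this in-group protocol might look like the following sketch (encryption omitted; the patent treats it as optional, and all names here are illustrative):

```python
def vertical_fl_round(XA, XB, y, wA, wB, lr=0.1):
    # (1) each participant computes partial scores on the shared samples
    #     and exchanges them (the "intermediate results")
    uA = [sum(x * w for x, w in zip(row, wA)) for row in XA]
    uB = [sum(x * w for x, w in zip(row, wB)) for row in XB]
    # (2) residuals of a linear model, combining both sides' intermediates
    r = [a + b - t for a, b, t in zip(uA, uB, y)]
    # (3) each side's gradient over its own features; in the patent's flow
    #     the intra-group coordinator aggregates these and returns the update
    n = len(y)
    gA = [sum(ri * row[j] for ri, row in zip(r, XA)) / n for j in range(len(wA))]
    gB = [sum(ri * row[j] for ri, row in zip(r, XB)) / n for j in range(len(wB))]
    # (4) each participant updates its model with the returned gradient
    return ([w - lr * g for w, g in zip(wA, gA)],
            [w - lr * g for w, g in zip(wB, gB)])

# Toy data: A holds feature x1, B holds feature x2 and the labels, y = x1 + x2.
XA, XB, y = [[1.0], [2.0], [3.0]], [[2.0], [1.0], [0.0]], [3.0, 3.0, 3.0]
wA, wB = [0.0], [0.0]
for _ in range(300):
    wA, wB = vertical_fl_round(XA, XB, y, wA, wB)
```

On this toy problem the weights recover the generating model (both approach 1.0), even though neither side ever sees the other's raw features — only the exchanged partial scores.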
In an optional embodiment, fusing the first federated learning models of the groups into the second federated learning model includes: taking a weighted average of the values of each shared parameter across the groups' first federated learning models, and using it as the value of that parameter in the second federated learning model.
In this way, each parameter of the second federated learning model is obtained as a weighted average of the corresponding parameter values in the groups' first federated learning models, so each parameter is determined by its weights and the parameter values of the second federated learning model are more accurate.
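A sketch of the weighted-average fusion; the patent does not specify the weights, so weighting by (assumed) group data sizes is purely illustrative:

```python
def weighted_average_fusion(models, weights):
    # Each shared parameter of the second model is the weighted average of
    # that parameter's values across the groups' first models.
    total = sum(weights)
    return {k: sum(wt * m[k] for m, wt in zip(models, weights)) / total
            for k in models[0]}

M1 = {"w": 1.0, "b": 0.0}   # first federated learning model of group 1
M2 = {"w": 3.0, "b": 2.0}   # first federated learning model of group 2
# Hypothetical weights, e.g. proportional to each group's sample count.
M = weighted_average_fusion([M1, M2], weights=[1.0, 3.0])
```

With weights 1:3 the fused model lands closer to M2 (`w` = 2.5, `b` = 1.5), showing how the weighting lets larger or more trusted groups influence the second model more.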
In an optional embodiment, fusing the first federated learning models into the second federated learning model includes: an inter-group coordinator takes the weighted average of each shared parameter's values across the groups' first federated learning models as that parameter's value in the second federated learning model; the inter-group coordinator sends the second federated learning model to each intra-group coordinator; and each intra-group coordinator forwards the second federated learning model to the participants in its group.
In this way, the inter-group coordinator performs the weighted averaging, which avoids frequent exchanges of learning models among the intra-group coordinators and further improves the efficiency of obtaining the federated learning model.
In a second aspect, the application provides a hybrid federated learning architecture, including multiple first federated learning systems (one per group) and a coordinator. Each first federated learning system includes multiple participants; within a system, the participants' data sets share the same sample objects but different sample features, while across systems the participants' data sets share the same sample features but different sample objects. Each participant jointly trains the group's first federated learning model on the data sets of the in-group participants, exchanging intermediate training results with the other in-group participants during training. The coordinator fuses the first federated learning models of all groups into a second federated learning model and sends it to the participants in each group.
In an optional embodiment, the coordinator is the intra-group coordinator of each first federated learning system, or an inter-group coordinator shared among the first federated learning systems.
In an optional embodiment, the participant sends the intermediate result of training the initial model on its data set to the other participants; the participant also derives the training result of the initial model from the intermediate results fed back by the other participants and sends it to the intra-group coordinator; the intra-group coordinator determines an update parameter from all participants' training results and sends it to each participant; and the participant updates the initial model with the update parameter to obtain the first federated learning model.
For the beneficial effects of the second aspect and its embodiments, refer to those of the first aspect and its embodiments, which are not repeated here.
In a third aspect, an embodiment of the present application provides a computer device, including a program or instructions that, when executed, perform the method of the first aspect and each of its embodiments.
In a fourth aspect, an embodiment of the present application provides a storage medium, including a program or instructions that, when executed, perform the method of the first aspect and each of its embodiments.
Detailed description of the invention
Fig. 1 is a schematic diagram of a hybrid federated learning architecture provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of obtaining the first federated learning model within any one first federated learning system of the hybrid federated learning architecture provided by an embodiment of the present application;
Fig. 3 is a detailed schematic diagram of a hybrid federated learning architecture provided by an embodiment of the present application;
Fig. 4 is another detailed schematic diagram of a hybrid federated learning architecture provided by an embodiment of the present application;
Fig. 5 is a flow diagram of the steps of a hybrid federated learning method provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of obtaining the second federated learning model in a hybrid federated learning architecture provided by an embodiment of the present application.
Specific embodiment
For a better understanding of the above technical solutions, they are described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific features in the embodiments of the present application elaborate on, rather than limit, the technical solutions of the present application, and that, absent conflict, the embodiments and the technical features within them may be combined with one another.
In a financial institution (a bank, insurer, or securities firm), in the course of business operations (such as a bank's loan or deposit business), many financial strategy adjustments rely on the results of federated learning over large volumes of financial transaction data, and such adjustments can directly affect the institution's profit and loss. For a financial institution, therefore, the accuracy of its federated learning model is critical.
Federated learning refers to machine learning carried out by combining different participants (also called parties, data owners, or clients). In federated learning, a participant does not need to expose its own data to other participants or to a coordinator (also called a parameter server or aggregation server), so federated learning protects user privacy well and keeps data secure.
In the prior art, the data held by participants A and B is often complementary and could be used to jointly build a machine learning model, yet the amount of data each holds is still very small, so the performance of the joint model falls short of the expected targets and its accuracy remains insufficient. This also limits the accuracy of models obtained through federated learning, failing to meet the needs of banks and other financial institutions and to guarantee the efficient operation of their businesses.
To this end, an embodiment of the present application provides a hybrid federated learning architecture, a schematic diagram of which is shown in Fig. 1.
The hybrid federated learning architecture of Fig. 1 includes multiple first federated learning systems (one per group) and a coordinator, where each first federated learning system includes multiple participants. Within a first federated learning system, the participants' data sets share the same sample objects but different sample features; across systems, they share the same sample features but different sample objects. Note that Fig. 1 illustrates the case of two participants per first federated learning system, A_j and B_j (j a positive integer less than or equal to K, K a positive integer). The number of participants per system is not limited to two, and different systems may have the same or different numbers of participants.
Each participant is used to jointly train the group's first federated learning model on the data sets of the in-group participants; during this training, each participant exchanges intermediate training results with the other participants in its group.
The coordinator is used to fuse the first federated learning models of all groups into a second federated learning model and send the second federated learning model to the participants in each group.
It should be noted that the goal of the hybrid federated learning architecture of Fig. 1 is to train a single federated learning model: the model finally obtained, suitable for the participants of every group, is the second federated learning model of the last training round. Everything from the start of training to its end is a parameter optimization process for that model; the first and second federated learning models of earlier stages are intermediate models whose parameters keep being updated, not the final output. The final output is the second federated learning model obtained in the last training round.
In the architecture of Fig. 1, the coordinator is either the intra-group coordinator of each first federated learning system or an inter-group coordinator shared among the first federated learning systems.
As shown in Fig. 2, let the first participant be any participant in any first federated learning system. The first participant and the intra-group coordinator can obtain the first federated learning model in the following way (hereafter the first federated learning mode):
(1) The first participant sends the intermediate result of training the initial model on its data set to the other participants. (2) The first participant derives the training result of the initial model from the intermediate results fed back by the other participants and sends it to the intra-group coordinator. (3) The intra-group coordinator determines an update parameter from all participants' training results and sends it to each participant. (4) The first participant updates the initial model with the update parameter to obtain the first federated learning model. Note that the schematic of Fig. 2 illustrates the process with only two participants in the first federated learning system: the first participant and a second participant (i.e., the other participant), which performs the same steps as the first. The present application places no limit on the number of participants in a first federated learning system, which is not elaborated further here.
The training process shown in Fig. 2 is one sub-training process of the overall process by which the architecture of Fig. 1 trains the federated learning model suitable for every group's participants; the first federated learning model here is the interim federated learning model produced by that sub-training process.
The first federated learning process suits the case where the participants' data features overlap little but their users overlap heavily: the shared users, whose data features differ between participants, are extracted for joint machine learning training. For example, consider two participants A and B in the same region, where A is a bank and B is an e-commerce platform. A and B have many users in common in the region, but their businesses differ, so the user data features they record differ and may in fact be complementary. In such a scenario, the first federated learning method can help A and B build a joint machine learning prediction model and provide better service to their customers.
To help A and B model jointly, a coordinator C is needed. First part: participants A and B perform encrypted sample alignment. Since the user groups of the two enterprises A and B do not fully coincide, an encryption-based user-sample alignment technique is used to confirm the users shared by both sides without A or B disclosing its respective data, and without exposing the users that do not overlap, so that the features of the shared users can be combined for modeling.
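A toy stand-in for the encrypted sample alignment, matching IDs through salted hashes so neither side exchanges raw IDs. Real deployments would use a proper private-set-intersection protocol (e.g. based on blind signatures); salted hashing alone is not secure against dictionary attacks, and every name here is an assumption:

```python
import hashlib

def blind_ids(ids, salt):
    # Hash user IDs with a shared salt so raw IDs are never sent in the clear.
    return {hashlib.sha256((salt + i).encode()).hexdigest(): i for i in ids}

def aligned_users(ids_a, ids_b, salt="shared-secret"):
    # Each side publishes only blinded IDs; the intersection of the blinded
    # sets identifies the shared users without revealing non-overlapping ones.
    ha, hb = blind_ids(ids_a, salt), blind_ids(ids_b, salt)
    return sorted(ha[h] for h in ha.keys() & hb.keys())

common = aligned_users(["u1", "u2", "u3"], ["u2", "u3", "u4"])  # -> ["u2", "u3"]
```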
The encrypted model training process of the first federated learning is as follows (the steps illustrate the training process with gradient descent as an example):
Once the shared user group is determined, these data can be used to train a machine learning model. To keep the data confidential during training, encrypted training is carried out through coordinator C. Taking a linear regression model as an example, the training process divides into four steps. Step 1: coordinator C distributes a public key to A and B for encrypting the data that needs to be exchanged during training. Step 2: A and B exchange, in encrypted form, the intermediate results for computing gradients. Step 3: A and B each compute gradient values from the encrypted results; B additionally computes the loss function from its label data; both send their results to coordinator C, which computes the total gradient from the aggregated results and decrypts it. Step 4: coordinator C returns the decrypted gradients to A and B respectively, and A and B update their respective model parameters accordingly. The participants and coordinator iterate these steps until the loss function converges, the model parameters converge, the maximum number of iterations is reached, or the maximum training time is reached, completing the entire model training process.
It should be noted that in both the first and second federated learning processes, encryption and encrypted transmission are optional and depend on the concrete application scenario; not all application scenarios require them.
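Why the encrypted exchange in steps 2-3 works can be seen with a mock additively homomorphic scheme, in which sums of ciphertexts decrypt to sums of plaintexts. This is a deliberately insecure stand-in for real schemes such as Paillier, used only to illustrate the flow:

```python
class MockAHE:
    # Toy additively homomorphic "encryption": ciphertext = plaintext + key.
    # Sums of ciphertexts decrypt to sums of plaintexts. NOT secure.
    def __init__(self, key=12345.0):
        self.key = key

    def enc(self, x):
        return x + self.key

    def dec(self, c, n_terms=1):
        # n_terms ciphertexts were added, so n_terms copies of the key remain.
        return c - n_terms * self.key

    @staticmethod
    def add(c1, c2):
        return c1 + c2   # addition happens entirely on ciphertexts

he = MockAHE()
cA = he.enc(0.7)   # participant A's encrypted partial gradient term
cB = he.enc(1.3)   # participant B's encrypted partial gradient term
# Coordinator C aggregates the ciphertexts and decrypts the total gradient.
total = he.dec(MockAHE.add(cA, cB), n_terms=2)
```

The coordinator recovers 0.7 + 1.3 = 2.0 without ever seeing either participant's individual contribution in the clear, which is exactly the property the encrypted step-2/step-3 exchange relies on.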
In practice, even when the data held by participants A and B is complementary and a joint machine learning model can be built, the amount of data each holds is often very small, and the joint model's performance cannot reach the expected targets. In particular, the power of deep learning rests on massive data, as does the performance of ensemble learning methods such as XGBoost. In real application scenarios, when vertical federated learning is used to build a deep learning or ensemble model, the problem of A and B holding too little data must be solved.
Specifically, with the hybrid federated learning architecture of Fig. 1, the process of obtaining a federated learning model suitable for every group's participants can be as follows.
First, note that across first federated learning systems, the participants' data sets share the same sample features but different sample objects. For example, two banks in different regions have user groups drawn from their respective regions with little mutual overlap, yet their businesses are very similar and the user data features they record largely coincide. Fusing the groups' first federated learning models into a second federated learning model can then help the two banks build a joint model to predict their customers' behavior.
If the data held by participants A1, B1, A2, and B2 is too small, the models M1 and M2 obtained through vertical federated learning may both perform poorly and miss the expected targets, whereas the model M built by horizontal federated learning through joint coordinators C1 and C2 is likely to perform substantially better and meet the expected requirements.
A possible practical application scenario is as follows. The data jointly held by participants (A_i, B_i) and the data jointly held by (A_j, B_j) have the same data features (same feature space) but different users (non-overlapping sample/ID space), while participants A_j and B_j hold data on the same users (same sample/ID space) but with different data features (different feature space). That is, in practice, the pairs (A_i, B_i) and (A_j, B_j) can jointly perform horizontal federated learning, while A_j and B_j can jointly perform vertical federated learning, where i, j = 1, 2 and i ≠ j.
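The two pairing rules can be stated as small predicates over (hypothetical) participant metadata; the data-set descriptions below are invented for illustration:

```python
def can_federate_vertically(p, q):
    # Same sample IDs, disjoint features -> in-group (vertical) partners.
    return p["users"] == q["users"] and not set(p["features"]) & set(q["features"])

def can_federate_horizontally(p, q):
    # Same features, disjoint sample IDs -> cross-group (horizontal) partners.
    return p["features"] == q["features"] and not set(p["users"]) & set(q["users"])

# Group 1: a bank (A1) and an e-commerce platform (B1) sharing users u1, u2;
# group 2: a bank in another region (A2) with the same features as A1.
A1 = {"users": ["u1", "u2"], "features": ["income", "deposits"]}
B1 = {"users": ["u1", "u2"], "features": ["purchases", "clicks"]}
A2 = {"users": ["u3", "u4"], "features": ["income", "deposits"]}
```

Here A1 and B1 qualify as vertical (in-group) partners while A1 and A2 qualify as horizontal (cross-group) partners, matching the hybrid split the architecture assumes.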
When the coordinator is the intra-group coordinator of each first federated learning system, as shown in Fig. 3, one possible implementation of the hybrid federated learning architecture includes two first federated learning systems (Fig. 3 illustrates two, but the number of first federated learning systems is not limited to two). Coordinators C1 and C2 are the intra-group coordinators, and they fuse the first federated learning models of the groups into the second federated learning model as follows:
(a) Coordinator C1 trains the first federated learning model M1 with participants A1 and B1; at the same time, coordinator C2 trains the first federated learning model M2 with participants A2 and B2. The specific training process of the first federated learning model can refer to the vertical federated learning architecture and process illustrated in Fig. 2.
(b) Coordinators C1 and C2 send the first federated learning models M1 and M2 to each other.
(c) Coordinators C1 and C2 each perform model fusion, for example taking the weighted average of the parameter values of models M1 and M2 as the corresponding parameter values of the second federated learning model M.
(d) Coordinators C1 and C2 distribute the second federated learning model M to participants A1, B1, A2, and B2, respectively.
(e) Coordinator C1 continues training with participants A1 and B1 on the basis of the second federated learning model M and updates the first federated learning model M1; at the same time, coordinator C2 continues training with participants A2 and B2 on the basis of the second federated learning model M and updates the first federated learning model M2. This process can also follow the vertical federated learning architecture and process illustrated in Fig. 2.
Steps (a) to (e) are iterated until the second federated learning model M converges, the maximum number of iterations is reached, or the maximum model training time is reached.
After the second federated learning model M is trained, coordinator C1 distributes it to participants A1 and B1, and coordinator C2 distributes it to participants A2 and B2. Participants A1, B1, A2, and B2 finally obtain the same second federated learning model M.
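The peer-to-peer fusion of steps (a) to (e) can be sketched as follows. This is a minimal illustration only: plain Python dicts stand in for model parameters, and the parameter names and fusion weights are invented for the example, not taken from the patent.

```python
def fuse(models, weights):
    """Weighted-average the values of the same parameter across models
    to form the second federated learning model (step (c))."""
    fused = {}
    for name in models[0]:
        fused[name] = sum(w * m[name] for m, w in zip(models, weights))
    return fused

# Coordinators C1 and C2 hold first models M1 and M2 (toy parameters).
M1 = {"w": 0.8, "b": 0.2}
M2 = {"w": 0.4, "b": 0.6}

# Step (b): exchange models; step (c): each side fuses identically,
# e.g. weighting each group by its share of the training data.
weights = [0.5, 0.5]
M = fuse([M1, M2], weights)
# Both coordinators now hold the same M to distribute (steps (d)-(e)).
```

Because both coordinators apply the same deterministic fusion to the same pair of models, no third party is needed for them to agree on M, which matches the two-system case described below.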
When there are only two first federated learning systems, their coordinators can exchange the first federated learning models directly without involving a third party, which saves system resources and overhead.
In the architecture shown in Fig. 3, the goal is to train a single federated learning model by continuously optimizing and updating its parameters. The final output is the model M obtained in the last training round. M1, M2, and M are all updated in each round of training; except for the M output in the last round, the M1, M2, and M produced in each round are intermediate models of the training process.
When the coordinator is the inter-group coordinator between the first federated learning systems, as shown in Fig. 4, in one possible implementation the hybrid federated learning architecture includes K first federated learning systems, where K is an integer greater than or equal to 2. The intra-group coordinators C1 to CK and the inter-group coordinator C0 fuse the first federated learning models of each group to obtain the second federated learning model, as follows:
(a) Coordinator Cj trains the first federated learning model Mj with participants Aj and Bj, j = 1, 2, ..., K. The specific process can refer to the architecture and process illustrated in Fig. 2.
(b) Coordinator Cj sends the first federated learning model Mj to the inter-group coordinator C0, j = 1, 2, ..., K.
(c) The inter-group coordinator C0 performs model fusion on the received first federated learning models, for example taking the weighted average of the parameter values of the first federated learning models M1 to MK, to obtain the second federated learning model M suitable for the participants of every group.
(d) The inter-group coordinator C0 distributes the updated second federated learning model M to each coordinator Cj, j = 1, 2, ..., K. In another possible implementation, the inter-group coordinator C0 distributes the updated second federated learning model M directly to participants Aj and Bj, j = 1, 2, ..., K.
(e) Coordinator Cj forwards the updated second federated learning model M to participants Aj and Bj, j = 1, 2, ..., K.
(f) Coordinator Cj continues training the first federated learning model with participants Aj and Bj on the basis of the second federated learning model M, and updates the first federated learning model Mj, j = 1, 2, ..., K. The specific process can refer to the federated learning architecture and model training process illustrated in Fig. 2.
Steps (a) to (f) are iterated until the second federated learning model M converges, the maximum number of iterations is reached, or the maximum training time is reached.
After the second federated learning model M is trained, the inter-group coordinator C0 distributes the trained second federated learning model M to each coordinator Cj, which then distributes it to participants Aj and Bj, j = 1, 2, ..., K. Participants Aj and Bj finally obtain the same second federated learning model M, j = 1, 2, ..., K. In another possible implementation, the inter-group coordinator C0 distributes the trained second federated learning model M directly to participants Aj and Bj, j = 1, 2, ..., K.
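With an inter-group coordinator C0, steps (a) to (f) amount to a federated-averaging-style loop. The sketch below shows only the control flow; `group_train` is an invented stand-in for the intra-group vertical training of Fig. 2, and the local targets, learning rate, and thresholds are assumptions for illustration.

```python
def group_train(model, group_id, lr=0.5):
    """Stand-in for intra-group training (steps (a)/(f)): each group
    nudges the shared model toward a hypothetical local optimum."""
    target = [1.0, -1.0][group_id % 2]  # invented per-group optima
    return {k: v + lr * (target - v) for k, v in model.items()}

def c0_fuse(models, weights):
    """Step (c): inter-group coordinator C0 weighted-averages M1..MK."""
    return {k: sum(w * m[k] for m, w in zip(models, weights))
            for k in models[0]}

K = 4
M = {"w": 0.8}                   # second federated learning model
weights = [1.0 / K] * K          # equal weighting, for the sketch
for rnd in range(30):            # iterate steps (a)-(f)
    firsts = [group_train(M, j) for j in range(K)]   # steps (a)/(f)
    M_new = c0_fuse(firsts, weights)                 # steps (b)-(c)
    delta = abs(M_new["w"] - M["w"])
    M = M_new                    # steps (d)-(e): redistribute M
    if delta < 1e-6:             # convergence of M terminates the loop
        break
```

Here two groups pull toward 1.0 and two toward -1.0, so the fused model settles at their weighted compromise near 0, illustrating why the fused M can serve every group even when the groups' data differ.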
In the architecture shown in Fig. 4, the goal is likewise to train a single federated learning model by continuously optimizing and updating its parameters. The final output is the model M obtained in the last training round. The models Mj and M are all updated in each round of training; except for the M output in the last round, the Mj and M produced in each round are intermediate models of the training process.
In the above embodiments, where the coordinator is either the intra-group coordinator of each first federated learning system or the inter-group coordinator between the first federated learning systems, the hierarchical federated learning model training of the hybrid federated learning system includes two approaches: (1) the participants and the intra-group coordinator form a first federated learning subsystem and train the first federated learning model Mj; the intra-group coordinators of the two groups then jointly train the second federated learning model M; (2) the intra-group coordinators of multiple groups and the inter-group coordinator jointly train the second federated learning model M. In both approaches, the intra-group or inter-group coordinator distributes the trained second federated learning model to the participants, so each participant finally obtains a second federated learning model that aggregates the training of every first federated learning subsystem.
When there are multiple first federated learning systems, the inter-group coordinator can distribute the global model directly to each participant without relaying through the coordinators of the first federated learning subsystems, which saves communication overhead, reduces communication latency, and can accelerate model training.
In the embodiments of the present application, a first federated learning system for hybrid federated learning may include two or more participants. Moreover, message transmission between participants and coordinators, between participants, and between coordinators and the global coordinator may be encrypted, for example using homomorphic encryption techniques, or may be unencrypted. Such message transmission includes data association messages, gradient information, model parameter updates, model performance test results, model training trigger commands, and so on.
A hybrid federated learning method proposed by the present application is illustrated below through Fig. 5 in combination with the architecture shown in Fig. 1. The method is suitable for federated model training with multiple groups of participants, where the data sets of the participants within the same group contain the same sample objects and different sample features, and the data sets of participants in different groups contain the same sample features and different sample objects. The method comprises the following steps:
Step 501: for each group, jointly train the first federated learning model according to the data sets of the participants in the group.
Step 502: fuse the first federated learning models of each group to obtain the second federated learning model, and send the second federated learning model to the participants in each group.
Step 503: for each group, train the updated first federated learning model according to the second federated learning model and the data sets of the participants in the group, and return to the step of fusing the first federated learning models of each group to obtain the second federated learning model, until model training ends.
It should be noted that the goal of steps 501 to 503 is to train a single federated learning model, namely the second federated learning model output in the final round. Returning to step 502 until training ends is the process of continuously optimizing and updating the parameters of the federated learning model; the federated learning models generated along the way during steps 501 to 503 are intermediate products of the second federated learning model output in the final round.
In step 501, during the training of the first federated learning model, each participant in a group exchanges intermediate training results with the other participants in the group. For any participant in any group, the following training process is executed to obtain the first federated learning model:
The participant sends intermediate results of an initial model trained on its own data set to the other participants; according to the intermediate results fed back by the other participants, the participant obtains a training result of the initial model and sends it to the intra-group coordinator; the intra-group coordinator determines update parameters according to the training results of each participant and sends them to each participant; the participant updates the initial model according to the update parameters to obtain the first federated learning model.
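The intra-group exchange described above can be illustrated with a toy two-participant vertical setting. Everything concrete here is invented for the example: the data, the linear model y_hat = wA*xA + wB*xB, and the learning rate. Each participant holds different features of the same samples, the partial predictions are the exchanged intermediate results, and the gradient update plays the role of the coordinator's update parameters.

```python
# Toy vertical federated round: participant A holds feature xA,
# participant B holds feature xB and the label y, for the same samples.
xA, xB, y = [1.0, 2.0], [2.0, 1.0], [5.0, 4.0]
wA, wB, lr = 0.0, 0.0, 0.05

for _ in range(200):
    # Each participant computes a partial score on its own data set and
    # sends it to the other participant (the intermediate result).
    uA = [wA * x for x in xA]
    uB = [wB * x for x in xB]
    # Using the other side's intermediate result, each participant forms
    # its training result: the squared-error gradient w.r.t. its own
    # weight, which would be sent to the intra-group coordinator.
    resid = [a + b - t for a, b, t in zip(uA, uB, y)]
    gA = sum(r * x for r, x in zip(resid, xA)) / len(y)
    gB = sum(r * x for r, x in zip(resid, xB)) / len(y)
    # The coordinator determines the update parameters; each participant
    # updates its part of the first federated learning model.
    wA -= lr * gA
    wB -= lr * gB
```

Note that neither participant ever sees the other's raw features, only partial predictions, which is the point of exchanging intermediate results rather than data.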
In step 502, one possible approach is to take the weighted average of the values of the same parameter across the first federated learning models of each group as the value of that parameter in the second federated learning model.
In another possible implementation, the inter-group coordinator takes the weighted average of the values of the same parameter across the first federated learning models of each group as the value of that parameter in the second federated learning model; the inter-group coordinator then sends the second federated learning model to the intra-group coordinator of each group, and the intra-group coordinator sends the second federated learning model to the participants in its group.
Specifically, this can be carried out through the second mode of federated learning, i.e., horizontal federated learning. Horizontal federated learning is suitable for cases where the participants' data features overlap substantially while their users overlap little: the portions of the participants' data whose features are identical but whose users are not entirely the same are taken out for joint machine learning. For example, consider two banks in different regions: their user groups come from their respective regions and intersect very little, but their businesses are very similar, so a large portion of the recorded user data features are identical. Horizontal federated learning can be used to help the two banks build a joint model to predict their customer behavior.
In the example federated learning system architecture shown in Fig. 6, in step 1, when intra-group coordinator A has completed a local model parameter update within its group, it sends the locally obtained model parameter update to the inter-group coordinator. Intra-group coordinator A may transmit the model parameter update in encrypted form, for example using homomorphic encryption. The model parameters may be the parameters of the federated learning model, for example the weight parameters of the connections between neural network nodes; alternatively, the joint model parameters may be gradient information of the federated learning model, for example the gradient information in a neural network gradient descent algorithm. In step 2, the inter-group coordinator fuses the received model parameter updates from the different intra-group coordinators, for example by taking a weighted average. In step 3, the inter-group coordinator distributes the fused second federated learning model parameter update (also called the global model parameters) back to each intra-group coordinator; this transmission may also be encrypted. In step 4, the intra-group coordinator can use the received second federated learning model parameters either as the starting model (starting point) for local model training, or as the updated parameters of the first federated learning model, that is, continuing to train on the basis of the first federated learning model.
The intra-group coordinators and the inter-group coordinator iterate the above steps until the loss function or the model parameters converge, the maximum number of iterations is reached, or the maximum training time is reached, at which point the entire model training process is complete.
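The text mentions protecting the transmitted parameter updates, for example with homomorphic encryption. As a self-contained, standard-library-only stand-in, the sketch below uses pairwise additive masking (the core idea of secure aggregation, a different but related technique): each intra-group coordinator's update is masked before transmission, yet the masks cancel when the inter-group coordinator sums the updates. The seed, values, and equal weighting are invented for the example.

```python
import random

def mask_updates(updates, seed=42):
    """Pairwise additive masking: for each pair (i, j) with i < j, a
    shared random mask r is added to i's update and subtracted from
    j's, so all masks cancel in the inter-group coordinator's sum.
    (A real deployment would derive r from a secret shared by i and j;
    the single seeded RNG here is purely for illustration.)"""
    rng = random.Random(seed)
    masked = [dict(u) for u in updates]
    for i in range(len(updates)):
        for j in range(i + 1, len(updates)):
            for k in masked[i]:
                r = rng.uniform(-1, 1)
                masked[i][k] += r
                masked[j][k] -= r
    return masked

updates = [{"w": 0.1}, {"w": 0.3}, {"w": 0.2}]   # per-group updates
masked = mask_updates(updates)
# The inter-group coordinator sees only masked values, yet recovers
# the correct (equal-weight) fused update from their sum.
fused_w = sum(m["w"] for m in masked) / len(masked)
```

The inter-group coordinator thus learns the fused update (here 0.2) without learning any individual group's update, which is the property the encrypted transmission in steps 1 and 3 is meant to provide.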
It should be noted that in step 503 the preset termination condition for ending model training includes at least one of the following: the parameters of the second federated learning model converge; the number of updates of the second federated learning model is greater than or equal to a preset number of training rounds; the training time of the second federated learning model is greater than or equal to a preset training duration.
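The three termination conditions of step 503 can be combined into a single check, sketched below; the function and threshold names (`eps`, `max_rounds`, `max_seconds`) are invented for illustration and are not named in the patent.

```python
def should_stop(param_delta, rounds, elapsed_s,
                eps=1e-4, max_rounds=100, max_seconds=3600.0):
    """Step 503 termination: parameter convergence, OR the number of
    updates reaching the preset count, OR the training time reaching
    the preset duration."""
    return (param_delta < eps) or (rounds >= max_rounds) \
        or (elapsed_s >= max_seconds)

should_stop(1e-5, 3, 12.0)    # stops: parameters converged
should_stop(0.5, 100, 12.0)   # stops: round budget exhausted
```

Since the conditions are joined by OR, any one of them ends training, matching "at least one of the following" in the text.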
In the hybrid federated learning method and architecture proposed by the present application, federated learning model training is performed hierarchically: the first federated learning model of each first federated learning system is trained first, and the first federated learning models are then fused horizontally to obtain the second federated learning model. The method and architecture of the present application can therefore make use of the data owned by multiple participants, the first federated learning systems scale well, and the problem of individual participants owning too little data can be effectively solved.
An embodiment of the present application provides a computer device, comprising a program or instructions which, when executed, perform the hybrid federated learning method provided by the embodiments of the present application and any of its optional variants.
An embodiment of the present application provides a storage medium, comprising a program or instructions which, when executed, perform the hybrid federated learning method provided by the embodiments of the present application and any of its optional variants.
Finally, it should be noted that those skilled in the art will appreciate that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage, and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Obviously, those skilled in the art can make various modifications and variations to the present application without departing from its scope. If these modifications and variations fall within the scope of the claims of the present application and their technical equivalents, the present application is intended to include them as well.

Claims (10)

1. A hybrid federated learning method, characterized in that it is suitable for federated model training with multiple groups of participants, wherein the data sets of the participants within the same group contain the same sample objects and different sample features, and the data sets of participants in different groups contain the same sample features and different sample objects; the method comprises:
for each group, jointly training a first federated learning model according to the data sets of the participants in the group, wherein during the training of the first federated learning model each participant in the group exchanges intermediate training results with the other participants in the group; fusing the first federated learning models of each group to obtain a second federated learning model, and sending the second federated learning model to the participants in each group; for each group, training an updated first federated learning model according to the second federated learning model and the data sets of the participants in the group, and returning to the step of fusing the first federated learning models of each group to obtain the second federated learning model, until model training ends.
2. The method according to claim 1, characterized in that the preset termination condition for ending model training includes at least one of the following: the parameters of the second federated learning model converge; the number of updates of the second federated learning model is greater than or equal to a preset number of training rounds; the training time of the second federated learning model is greater than or equal to a preset training duration.
3. The method according to claim 1, characterized in that each group includes an intra-group coordinator, and that the exchange of intermediate training results between each participant and the other participants in the group during the training of the first federated learning model comprises:
for any participant in any group, executing the following training process to obtain the first federated learning model, comprising:
the participant sending intermediate results of an initial model trained on the participant's data set to the other participants;
the participant obtaining a training result of the initial model according to the intermediate results fed back by the other participants, and sending it to the intra-group coordinator;
the intra-group coordinator determining update parameters according to the training results of each participant and sending them to each participant;
the participant updating the initial model according to the update parameters to obtain the first federated learning model.
4. The method according to any one of claims 1 to 3, characterized in that fusing the first federated learning models of each group to obtain the second federated learning model comprises:
taking the weighted average of the values of the same parameter across the first federated learning models of each group as the value of that parameter in the second federated learning model.
5. The method according to any one of claims 1 to 3, characterized in that fusing the first federated learning models of each group to obtain the second federated learning model comprises:
taking, by an inter-group coordinator, the weighted average of the values of the same parameter across the first federated learning models of each group as the value of that parameter in the second federated learning model;
sending, by the inter-group coordinator, the second federated learning model to the intra-group coordinator of each group.
6. A hybrid federated learning architecture, characterized by comprising: multiple groups of first federated learning systems and a coordinator; wherein each group of first federated learning system includes multiple participants; the data sets of the participants within the same first federated learning system contain the same sample objects and different sample features; the data sets of the participants of different first federated learning systems contain the same sample features and different sample objects;
any participant is configured to jointly train a first federated learning model according to the data sets of the participants in its group, wherein during the training of the first federated learning model each participant in the group exchanges intermediate training results with the other participants in the group;
the coordinator is configured to fuse the first federated learning models of each group to obtain a second federated learning model, and to send the second federated learning model to the participants in each group.
7. The architecture according to claim 6, characterized in that the coordinator is the intra-group coordinator of each first federated learning system, or the coordinator is an inter-group coordinator between the first federated learning systems.
8. The architecture according to claim 7, characterized in that the participant is configured to send intermediate results of an initial model trained on the participant's data set to the other participants;
the participant is further configured to obtain a training result of the initial model according to the intermediate results fed back by the other participants, and to send it to the intra-group coordinator;
the intra-group coordinator is further configured to determine update parameters according to the training results of each participant and to send them to each participant;
the participant is further configured to update the initial model according to the update parameters to obtain the first federated learning model.
9. A computer device, characterized by comprising a program or instructions which, when executed, cause the method according to any one of claims 1 to 5 to be performed.
10. A storage medium, characterized by comprising a program or instructions which, when executed, cause the method according to any one of claims 1 to 5 to be performed.
CN201910720373.9A 2019-08-06 2019-08-06 Hybrid federated learning method and architecture Pending CN110490738A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910720373.9A CN110490738A (en) 2019-08-06 2019-08-06 A kind of federal learning method of mixing and framework
PCT/CN2019/117518 WO2021022707A1 (en) 2019-08-06 2019-11-12 Hybrid federated learning method and architecture


Publications (1)

Publication Number Publication Date
CN110490738A true CN110490738A (en) 2019-11-22

Family

ID=68549883





Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI732557B (en) * 2019-12-09 2021-07-01 大陸商支付寶(杭州)信息技術有限公司 Model joint training method and device based on blockchain
CN111177249A (en) * 2019-12-10 2020-05-19 浙江大学 Multi-data-source data visualization method and device based on the federated learning concept
CN111177249B (en) * 2019-12-10 2022-05-17 浙江大学 Multi-data-source data visualization method and device based on the federated learning concept
CN111222646A (en) * 2019-12-11 2020-06-02 深圳逻辑汇科技有限公司 Design method and device of federated learning mechanism and storage medium
CN111178538A (en) * 2019-12-17 2020-05-19 杭州睿信数据科技有限公司 Federated learning method and device for vertical data
CN111125779A (en) * 2019-12-17 2020-05-08 山东浪潮人工智能研究院有限公司 Blockchain-based federated learning method and device
CN111178538B (en) * 2019-12-17 2023-08-15 杭州睿信数据科技有限公司 Federated learning method and device for vertical data
WO2021120951A1 (en) * 2019-12-20 2021-06-24 深圳前海微众银行股份有限公司 Knowledge transfer method, apparatus and device based on federated learning, and medium
CN111241567A (en) * 2020-01-16 2020-06-05 深圳前海微众银行股份有限公司 Longitudinal federated learning method, system and storage medium based on secret sharing
CN111241567B (en) * 2020-01-16 2023-09-01 深圳前海微众银行股份有限公司 Data sharing method, system and storage medium in longitudinal federated learning
CN111325352A (en) * 2020-02-20 2020-06-23 深圳前海微众银行股份有限公司 Model updating method, device, equipment and medium based on longitudinal federated learning
CN111325352B (en) * 2020-02-20 2021-02-19 深圳前海微众银行股份有限公司 Model updating method, device, equipment and medium based on longitudinal federated learning
CN111352799A (en) * 2020-02-20 2020-06-30 中国银联股份有限公司 Inspection method and device
CN111369042A (en) * 2020-02-27 2020-07-03 山东大学 Wireless traffic prediction method based on weighted federated learning
CN111369042B (en) * 2020-02-27 2021-09-24 山东大学 Wireless traffic prediction method based on weighted federated learning
CN111260061B (en) * 2020-03-09 2022-07-19 厦门大学 Differential noise adding method and system in federated learning gradient exchange
CN111260061A (en) * 2020-03-09 2020-06-09 厦门大学 Differential noise adding method and system in federated learning gradient exchange
CN111081337A (en) * 2020-03-23 2020-04-28 腾讯科技(深圳)有限公司 Collaborative task prediction method and computer readable storage medium
CN111081337B (en) * 2020-03-23 2020-06-26 腾讯科技(深圳)有限公司 Collaborative task prediction method and computer readable storage medium
CN111461874A (en) * 2020-04-13 2020-07-28 浙江大学 Credit risk control system and method based on federated mode
WO2021259366A1 (en) * 2020-06-24 2021-12-30 Jingdong Technology Holding Co., Ltd. Federated doubly stochastic kernel learning on vertical partitioned data
CN111475853A (en) * 2020-06-24 2020-07-31 支付宝(杭州)信息技术有限公司 Model training method and system based on distributed data
CN111476376A (en) * 2020-06-24 2020-07-31 支付宝(杭州)信息技术有限公司 Alliance learning method, alliance learning device and alliance learning system
US11636400B2 (en) 2020-06-24 2023-04-25 Jingdong Digits Technology Holding Co., Ltd. Federated doubly stochastic kernel learning on vertical partitioned data
WO2021120676A1 (en) * 2020-06-30 2021-06-24 平安科技(深圳)有限公司 Model training method for federated learning network, and related device
US11588907B2 (en) 2020-08-21 2023-02-21 Huawei Technologies Co., Ltd. System and methods for supporting artificial intelligence service in a network
WO2022037239A1 (en) * 2020-08-21 2022-02-24 Huawei Technologies Co.,Ltd. System and methods for supporting artificial intelligence service in a network
US11283609B2 (en) 2020-08-21 2022-03-22 Huawei Technologies Co., Ltd. Method and apparatus for supporting secure data routing
US11842260B2 (en) 2020-09-25 2023-12-12 International Business Machines Corporation Incremental and decentralized model pruning in federated machine learning
CN112232518B (en) * 2020-10-15 2024-01-09 成都数融科技有限公司 Lightweight distributed federated learning system and method
CN112232518A (en) * 2020-10-15 2021-01-15 成都数融科技有限公司 Lightweight distributed federated learning system and method
CN112148437B (en) * 2020-10-21 2022-04-01 深圳致星科技有限公司 Computation task acceleration processing method, device and equipment for federated learning
CN112148437A (en) * 2020-10-21 2020-12-29 深圳致星科技有限公司 Computation task acceleration processing method, device and equipment for federated learning
WO2022095523A1 (en) * 2020-11-03 2022-05-12 华为技术有限公司 Method, apparatus and system for managing machine learning model
WO2022094888A1 (en) * 2020-11-05 2022-05-12 浙江大学 Decision-tree-oriented longitudinal federated learning method
CN112396189B (en) * 2020-11-27 2023-09-01 中国银联股份有限公司 Method and device for constructing a federated learning model by multiple parties
CN112396189A (en) * 2020-11-27 2021-02-23 中国银联股份有限公司 Method and device for multi-party construction of a federated learning model
CN112217706A (en) * 2020-12-02 2021-01-12 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium
WO2022116725A1 (en) * 2020-12-02 2022-06-09 腾讯科技(深圳)有限公司 Data processing method, apparatus, device, and storage medium
WO2022144000A1 (en) * 2020-12-31 2022-07-07 京东科技信息技术有限公司 Federated learning model training method and apparatus, and electronic device
CN113051606A (en) * 2021-03-11 2021-06-29 佳讯飞鸿(北京)智能科技研究院有限公司 Block chain mutual communication method of intelligent agent
CN112990488B (en) * 2021-03-16 2024-03-26 香港理工大学深圳研究院 Federated learning method based on machine heterogeneity
CN112990488A (en) * 2021-03-16 2021-06-18 香港理工大学深圳研究院 Federated learning method based on machine heterogeneity
CN113704810A (en) * 2021-04-01 2021-11-26 华中科技大学 Federated learning-oriented cross-chain consensus method and system
CN113704810B (en) * 2021-04-01 2024-04-26 华中科技大学 Federated learning-oriented cross-chain consensus method and system
CN113689003A (en) * 2021-08-10 2021-11-23 华东师范大学 Secure hybrid federated learning framework and method for removing the third party
CN113689003B (en) * 2021-08-10 2024-03-22 华东师范大学 Hybrid federated learning framework and method for securely removing the third party
WO2023050778A1 (en) * 2021-09-30 2023-04-06 中兴通讯股份有限公司 Model training method and system, and electronic device and computer-readable storage medium
WO2023087549A1 (en) * 2021-11-16 2023-05-25 浙江大学 Efficient, secure and less-communication longitudinal federated learning method
CN114186694A (en) * 2021-11-16 2022-03-15 浙江大学 Efficient, secure and low-communication longitudinal federated learning method
CN114221957A (en) * 2021-11-30 2022-03-22 中国电子科技网络信息安全有限公司 Country management system
CN114090983A (en) * 2022-01-24 2022-02-25 亿景智联(北京)科技有限公司 Heterogeneous federated learning platform communication method and device
CN114648131A (en) * 2022-03-22 2022-06-21 中国电信股份有限公司 Federal learning method, device, system, equipment and medium
WO2023208043A1 (en) * 2022-04-29 2023-11-02 索尼集团公司 Electronic device and method for wireless communication system, and storage medium

Also Published As

Publication number Publication date
WO2021022707A1 (en) 2021-02-11

Similar Documents

Publication Publication Date Title
CN110490738A (en) A hybrid federated learning method and architecture
Kang et al. Communication-efficient and cross-chain empowered federated learning for artificial intelligence of things
CN112733967B (en) Model training method, device, equipment and storage medium for federal learning
Kaur et al. Scalability in blockchain: Challenges and solutions
CN111125779A (en) Blockchain-based federated learning method and device
CN110417558A (en) Signature verification method and device, storage medium and electronic device
CN108009823A (en) Distributed invocation method and system for computing power resources based on blockchain smart contract
CN110443375A (en) A federated learning method and device
CN110210233A (en) Joint mapping method, apparatus, storage medium and computer equipment for a prediction model
Lihu et al. A proof of useful work for artificial intelligence on the blockchain
CN104820945A (en) Online social network information transmission maximization method based on community structure mining algorithm
Chitra et al. Agent-based simulations of blockchain protocols illustrated via kadena’s chainweb
CN112016954A (en) Resource allocation method and device based on blockchain network technology and electronic equipment
Lagutin et al. Secure open federation of IoT platforms through interledger technologies-the SOFIE approach
Lin et al. DRL-based adaptive sharding for blockchain-based federated learning
CN116627970A (en) Data sharing method and device based on blockchain and federal learning
Gao et al. Gradientcoin: A peer-to-peer decentralized large language models
CN115034836A (en) Model training method and related device
CN114491616A (en) Blockchain and homomorphic encryption-based federated learning method and application
Petruzzi et al. Experiments with social capital in multi-agent systems
Huang et al. Edge resource pricing and scheduling for blockchain: A stackelberg game approach
CN108881421A (en) Cloud service data auditing method based on blockchain
CN112101577A (en) XGBoost-based cross-sample federated learning and testing method, system, device and medium
CN115640305B (en) Fair and reliable federal learning method based on blockchain
Nguyen et al. Blockchain as a service for multi-access edge computing: A deep reinforcement learning approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination