CN112329947A - Federal learning incentive method and system based on differential evolution - Google Patents


Info

Publication number
CN112329947A
Authority
CN
China
Prior art keywords
participant
differential evolution
federated learning
period
individual
Prior art date
Legal status
Pending
Application number
CN202011170752.4A
Other languages
Chinese (zh)
Inventor
麦伟杰
沈凤山
危明铸
袁峰
Current Assignee
Guangzhou Institute of Software Application Technology Guangzhou GZIS
Original Assignee
Guangzhou Institute of Software Application Technology Guangzhou GZIS
Priority date
Filing date
Publication date
Application filed by Guangzhou Institute of Software Application Technology Guangzhou GZIS filed Critical Guangzhou Institute of Software Application Technology Guangzhou GZIS
Priority to CN202011170752.4A priority Critical patent/CN112329947A/en
Priority to PCT/CN2021/074276 priority patent/WO2022088541A1/en
Publication of CN112329947A publication Critical patent/CN112329947A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G06N20/20: Ensemble learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/12: Computing arrangements based on biological models using genetic models
    • G06N3/126: Evolutionary algorithms, e.g. genetic algorithms or genetic programming

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Physiology (AREA)
  • Biomedical Technology (AREA)
  • Genetics & Genomics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the invention provides a federated learning incentive method and system based on differential evolution. Using the strong global optimization and local search capabilities of differential evolution (DE), the method minimizes, as time t advances (taking one month as a period), the difference (expected loss) between the revenue each participant obtains from the federation and the revenue it should obtain, and minimizes the "expected loss and waiting time" across participants. Each participant's actual revenue in federated learning is thereby automatically balanced against its expected return, effectively encouraging participants to provide reliable data so that federated learning runs stably over the long term. Dynamic adjustment of the total revenue of federated learning and of each participant's revenue is effectively realized, the goal of sustainable operation is maximized, unfairness among participants is minimized, and dependence on manual intervention is avoided.

Description

Federal learning incentive method and system based on differential evolution
Technical Field
The embodiment of the invention relates to the technical field of information, in particular to a federal learning incentive method and system based on differential evolution.
Background
Federated Learning refers to a machine learning framework that effectively helps a plurality of nodes (representing individuals or organizations) jointly train a model while meeting data privacy protection requirements. Under a federated learning framework, the server issues model parameters to a plurality of nodes; each node feeds its local training samples into the model for one round of training and, after training, computes the gradient based on the training result. Subsequently, the server computes the sum of the nodes' gradients under a Secure Aggregation (SA) protocol.
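For illustration only, the following minimal Python sketch shows one such round: the server issues parameters, each node computes a local gradient, and the server aggregates. The function names and the least-squares loss are assumptions of this sketch, and plain averaging stands in for a real Secure Aggregation protocol.

    import numpy as np

    def local_gradient(params, X, y):
        # Hypothetical local step: gradient of a least-squares loss on one node's data.
        return X.T @ (X @ params - y) / len(y)

    def federated_round(params, nodes, lr=0.1):
        # The server issues `params` to every node; each node trains on its local
        # samples (X, y) and returns a gradient. A real deployment would aggregate
        # the gradients under a Secure Aggregation (SA) protocol so the server never
        # sees any single node's gradient; plain averaging stands in here.
        grads = [local_gradient(params, X, y) for X, y in nodes]
        return params - lr * np.mean(grads, axis=0)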
In general, the data volume required to train an artificial intelligence application model is very large. In practice, however, data related to major emergencies is "small data" scattered across different institutions or regions: such data is small in scale, lacks important information such as labels or some feature values, or is privacy data protected by law. This phenomenon is called a data island. Because of it, the federated learning process needs the joint participation of all parties to train an accurate and reliable model. How to keep participants continuously engaged in federated learning, however, is an important challenge, and the key to this goal is to formulate a reward method that shares the profit generated by the federation with the participants fairly and equitably. In the existing approach, a data operator, held by an industry alliance or a key government unit, has a project team develop a shared exchange toolset and platform responsible for the convergence and management of data, while charging data users a fee at a fixed proportion; the data operator and the data users form a complete industry system in which each unit pays the related fees in the course of using the data, and an incentive mechanism is established in the form of fund returns. However, such an incentive method can hardly distribute the benefits of federated learning to each participant fairly and dynamically as time passes, and it involves a great deal of manual intervention.
Disclosure of Invention
The embodiment of the invention provides a federated learning incentive method and system based on differential evolution, which effectively realize dynamic adjustment of the total revenue of federated learning and the revenue of each participant, maximize the goal of sustainable operation, minimize unfairness among participants, and avoid dependence on manual intervention.
In a first aspect, an embodiment of the present invention provides a federated learning incentive method based on differential evolution, including:
Step S1, obtaining the expected loss offset of participant i in the t-th period within the federated learning operation period T:
[Equations (1) and (2), shown as images in the original publication]
wherein U_i(t) is the benefit of participant i in the t-th period; B(t) is the total profit; C_i(t) is the cost required for participant i to contribute data to the federation in the t-th period; Y_i(t) is the difference between the gains; and Q_i(t) represents the time queue of waiting for federal payment;
Step S2, initializing the maximum profit round T and the profit B(t), with Y_i(t) = 0 and Q_i(t) = 0; setting the scaling factor and crossover factor of the differential evolution algorithm; and encoding the revenue of each participant into a population and recording the initial fitness f(t)';
Step S3, obtaining C_i(t) and Q_i(t) of each participant;
Step S4, taking f(t) as the objective function and U_i(t), Y_i(t), Q_i(t), and λ_i(t) as constraints, performing differential evolution to obtain the minimum expected loss and waiting time.
Preferably, in step S1:
[Equation (3): definition of U_i(t), shown as an image in the original publication]
Y_i(t) is a queue system:
Y_i(t+1) = max[Y_i(t) + C_i(t) - u_i(t), 0]
Q_i(t) is a time queue:
Q_i(t+1) = max[Q_i(t) + λ_i(t) - u_i(t), 0]
preferably, in step S2, the scaling factor F of the differential evolution algorithm is set to 0.5, and the crossover factor CR is set to 0.5;
the method encodes the income of each participant into a population form, records and obtains an initial fitness f (t)', and specifically comprises the following steps:
the number of participants is n, and the revenue of each participant is encoded into the formation of the population:
Figure BDA0002747206190000031
wherein the attribute dimension owned by each participant is D; y when t is equal to 0i(t)、Qi(t)、Ci(t)、λiThe value of (t) is substituted into the expected loss offset and the initial fitness f (t)' value is recorded.
Preferably, obtaining C_i(t) and Q_i(t) of each participant specifically comprises:
for i = 1 to n, where i denotes a participant, i.e., a population individual:
if participant i contributes data d_i(t) > 0 to the federation, computing C_i(t) and Q_i(t);
if i provides no data, then C_i(t) = 0.
Preferably, taking f(t) as the objective function and U_i(t), Y_i(t), Q_i(t), and λ_i(t) as constraints, performing differential evolution specifically comprises:
Step S41, in the current generation period t, for each individual u_{i,t}, randomly selecting three individual vectors u_{r1,t}, u_{r2,t}, u_{r3,t} from the current population, where r1 ≠ r2 ≠ r3 ≠ i, and r1, r2, r3 ∈ {1, 2, ..., n} are random integers;
performing the mutation operation according to the following formula to produce the mutant individual V_{i,t}:
V_{i,t} = u_{r1,t} + F·(u_{r2,t} - u_{r3,t})
Step S42, performing random recombination crossover of the components of the target vector u_{i,t} and the mutant vector V_{i,t}:
S_{j,i,t} = V_{j,i,t} if rand_j(0,1) ≤ CR or j = j_rand, otherwise u_{j,i,t}, for j = 1, ..., D
Step S43, based on individual fitness values, comparing the trial vector S_{i,t} with the target vector u_{i,t}; when the fitness of the trial individual S_i is better than that of the target individual u_i, selecting S_i to enter the next generation of evolution, and otherwise selecting u_i:
u_{i,t+1} = S_{i,t} if S_{i,t} is fitter than u_{i,t}, otherwise u_{i,t}
Preferably, the method further comprises the following steps:
Step S5, updating the participants' values in round t, and updating the values of Y_i(t) and Q_i(t) according to the updated values.
Preferably, the method further comprises the following steps:
Step S6, taking the maximum number of objective-function evaluations as the termination condition of the algorithm; if the condition is met, outputting the optimal individual, whose value is the optimal round scheme solution; otherwise, setting t = t + 1 and returning to step S42.
In a second aspect, an embodiment of the present invention provides a federated learning incentive system based on differential evolution, including:
an expected loss module, configured to obtain the expected loss offset of participant i in the t-th period within the federated learning operation period T:
[Equations (1) and (2), shown as images in the original publication]
wherein U_i(t) is the benefit of participant i in the t-th period; B(t) is the total profit; C_i(t) is the cost required for participant i to contribute data to the federation in the t-th period; Y_i(t) is the difference between the gains; and Q_i(t) represents the time queue of waiting for federal payment;
an initialization module, configured to initialize the maximum profit round T and the profit B(t), with Y_i(t) = 0 and Q_i(t) = 0; set the scaling factor and crossover factor of the differential evolution algorithm; and encode the revenue of each participant into a population, recording the initial fitness f(t)';
a participant calculation module, configured to obtain C_i(t) and Q_i(t) of each participant;
a differential evolution processing module, configured to take f(t) as the objective function and U_i(t), Y_i(t), Q_i(t), and λ_i(t) as constraints, and perform differential evolution to obtain the minimum expected loss and waiting time.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps of the differential evolution-based federated learning incentive method according to the embodiment of the first aspect of the present invention.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the differential evolution-based federated learning incentive method according to an embodiment of the first aspect of the present invention.
According to the federated learning incentive method and system based on differential evolution provided by the embodiment of the invention, the strong global optimization and local search capabilities of DE are used so that, as time t advances (taking one month as a period), the difference (expected loss) between the revenue each participant obtains from the federation and the revenue it should obtain is minimized, and the "expected loss and waiting time" across participants is minimized; each participant's actual revenue in federated learning is automatically balanced against its expected return, effectively encouraging participants to provide reliable data so that federated learning runs stably over the long term; dynamic adjustment of the total revenue of federated learning and the revenue of each participant is effectively realized, the goal of sustainable operation is maximized, unfairness among participants is minimized, and dependence on manual intervention is avoided.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow diagram of a federated learning incentive method based on differential evolution, according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a server according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the embodiment of the present application, the term "and/or" is only one kind of association relationship describing an associated object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone.
The terms "first" and "second" in the embodiments of the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a system, product or apparatus that comprises a list of elements or components is not limited to only those elements or components but may alternatively include other elements or components not expressly listed or inherent to such product or apparatus. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Fig. 1 illustrates a federated learning incentive method based on differential evolution according to an embodiment of the present invention, which includes:
Step S1, obtaining the expected loss offset of participant i in the t-th period within the federated learning operation period T:
[Equations (1) and (2), shown as images in the original publication]
wherein U_i(t) is the benefit of participant i in the t-th period; B(t) is the total profit; C_i(t) is the cost required for participant i to contribute data d_i(t) (assumed to be already available) to the federation in the t-th period; Y_i(t) is the difference between the gains; and Q_i(t) represents the time queue of waiting for federal payment;
Step S2, initializing the maximum profit round T and the profit B(t), with Y_i(t) = 0 and Q_i(t) = 0; setting the scaling factor and crossover factor of the differential evolution algorithm; and encoding the revenue of each participant into a population and recording the initial fitness f(t)';
Step S3, obtaining C_i(t) and Q_i(t) of each participant;
Step S4, taking f(t) as the objective function and U_i(t), Y_i(t), Q_i(t), and λ_i(t) as constraints, performing differential evolution to obtain the minimum expected loss and waiting time.
Preferably, in step S1:
[Equation (3): definition of U_i(t), shown as an image in the original publication]
Y_i(t) is a queue system:
Y_i(t+1) = max[Y_i(t) + C_i(t) - u_i(t), 0]   (4)
Q_i(t) is a time queue:
Q_i(t+1) = max[Q_i(t) + λ_i(t) - u_i(t), 0]   (5)
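A minimal sketch of the two queue updates (4) and (5) follows; it is illustrative only, and it assumes that Y, Q, C, lam (the λ_i values) and u are per-participant numpy arrays, with the variable names being assumptions of the sketch.

    import numpy as np

    def update_queues(Y, Q, C, lam, u):
        # Formula (4): Y_i(t+1) = max[Y_i(t) + C_i(t) - u_i(t), 0]
        Y_next = np.maximum(Y + C - u, 0.0)
        # Formula (5): Q_i(t+1) = max[Q_i(t) + lam_i(t) - u_i(t), 0]
        Q_next = np.maximum(Q + lam - u, 0.0)
        return Y_next, Q_next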
in addition to the above embodiment, in step S2, the scaling factor F of the differential evolution algorithm is set to 0.5, and the crossover factor CR is set to 0.5;
the method encodes the income of each participant into a population form, records and obtains an initial fitness f (t)', and specifically comprises the following steps:
the number of participants is n, and the revenue of each participant is encoded into the formation of the population:
Figure BDA0002747206190000064
wherein the attribute dimension owned by each participant is D; y when t is equal to 0i(t)、Qi(t)、Ci(t)、λiThe value of (t) is substituted for the formula (1), and the value of the initial fitness f (t)' is recorded.
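A sketch of the initialization of step S2 follows. Because formula (1) appears only as an image in the publication, the fitness is taken here as a caller-supplied callable; the uniform sampling of initial revenues in [0, B(0)] and all names are assumptions of this sketch.

    import numpy as np

    def initialize_population(n, D, B0, fitness, seed=None):
        # Encode the n participants' revenues as an n x D population matrix
        # and record the initial fitness f(t)' of every individual.
        rng = np.random.default_rng(seed)
        pop = rng.uniform(0.0, B0, size=(n, D))  # candidate revenue allocations
        Y = np.zeros(n)                          # Y_i(0) = 0
        Q = np.zeros(n)                          # Q_i(0) = 0
        f0 = np.array([fitness(ind) for ind in pop])
        return pop, Y, Q, f0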
On the basis of the above embodiment, obtaining C_i(t) and Q_i(t) of each participant specifically comprises:
for i = 1 to n, where i denotes a participant, i.e., a population individual:
if participant i contributes data d_i(t) > 0 to the federation, computing C_i(t) and Q_i(t);
if i provides no data, then C_i(t) = 0.
On the basis of the above embodiment, taking f(t) as the objective function and U_i(t), Y_i(t), Q_i(t), and λ_i(t) as constraints, performing differential evolution specifically comprises:
Step S41, in the current generation period t, for each individual u_{i,t}, randomly selecting three individual vectors u_{r1,t}, u_{r2,t}, u_{r3,t} from the current population, where r1 ≠ r2 ≠ r3 ≠ i, and r1, r2, r3 ∈ {1, 2, ..., n} are random integers;
performing the mutation operation according to the following formula to produce the mutant individual V_{i,t}:
V_{i,t} = u_{r1,t} + F·(u_{r2,t} - u_{r3,t})   (6)
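Formula (6) is the classic DE/rand/1 mutation. A sketch follows; the function name and the list-based index sampling are assumptions of the sketch.

    import numpy as np

    def mutate(pop, i, F=0.5, rng=None):
        # Formula (6): V_i = u_r1 + F * (u_r2 - u_r3), with r1, r2, r3
        # distinct random indices, none of them equal to i.
        rng = np.random.default_rng(rng)
        candidates = [j for j in range(len(pop)) if j != i]
        r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
        return pop[r1] + F * (pop[r2] - pop[r3])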
Step S42, performing random recombination crossover of the components of the target vector u_{i,t} and the mutant vector V_{i,t}:
S_{j,i,t} = V_{j,i,t} if rand_j(0,1) ≤ CR or j = j_rand, otherwise u_{j,i,t}, for j = 1, ..., D   (7)
the cross operation of DE is mainly to increase the potential diversity of the population, usually to the target vector ui,tAnd a variant variation vector Vi,tIs randomly recombined, but the experimental vector S must be ensuredi,tAt least one component being derived from the variation vector Vi,tThe other components are controlled by the parameter CR. The crossover operation was performed as follows (7).
Step S43, based on individual fitness values, comparing the trial vector S_{i,t} with the target vector u_{i,t}; when the fitness of the trial individual S_i is better than that of the target individual u_i, selecting S_i to enter the next generation of evolution, and otherwise selecting u_i:
u_{i,t+1} = S_{i,t} if S_{i,t} is fitter than u_{i,t}, otherwise u_{i,t}   (8)
Selection operation: following a greedy selection scheme, the selection in DE is based on the fitness value of the individual (in the invention, the benefit of the participants); in essence, the fitness of the trial vector S_{i,t} is compared with that of the target vector u_{i,t}. That is, when the trial individual S_i is better than the target individual u_i, S_i is selected into the next generation of evolution; otherwise u_i is selected. The selection operation is computed according to formula (8).
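A sketch of the greedy selection of formula (8) follows; it assumes the objective f(t) is to be minimized, so a lower fitness value counts as "better".

    def select(target, trial, fitness):
        # Formula (8): keep the trial individual S_i only if its fitness is
        # at least as good as that of the target individual u_i (here:
        # no larger, under the minimization assumption).
        f_target, f_trial = fitness(target), fitness(trial)
        return (trial, f_trial) if f_trial <= f_target else (target, f_target)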
On the basis of the above embodiment, the method further includes:
Step S5, updating the participants' values in round t, and updating the values of Y_i(t) and Q_i(t) according to the updated values.
On the basis of the above embodiment, the method further includes:
Step S6, taking the maximum number of objective-function evaluations as the termination condition of the algorithm, with MAX_FES = 5000 × D, where D is the dimension of the variable U; if the condition is met, outputting the optimal individual, whose value is the optimal round scheme solution; otherwise, setting t = t + 1 and returning to step S42.
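Putting steps S41 to S43 and the termination condition together, and reusing the mutate, crossover, and select sketches above, one generation and the outer loop might look as follows. The handling of the U_i(t), Y_i(t), Q_i(t), λ_i(t) constraints inside `fitness` (e.g. as penalty terms) and the evaluation counting are assumptions of this sketch, not the patent's specification.

    import numpy as np

    def de_generation(pop, fitness, F=0.5, CR=0.5, rng=None):
        # One pass of steps S41-S43 over the whole population.
        rng = np.random.default_rng(rng)
        new_pop = pop.copy()
        for i in range(len(pop)):
            mutant = mutate(pop, i, F, rng)                 # step S41, formula (6)
            trial = crossover(pop[i], mutant, CR, rng)      # step S42, formula (7)
            new_pop[i], _ = select(pop[i], trial, fitness)  # step S43, formula (8)
        return new_pop

    def run_de(pop, fitness, F=0.5, CR=0.5, rng=None):
        # Step S6: stop once MAX_FES = 5000 * D objective evaluations are
        # reached, where D is the dimension of the individuals.
        rng = np.random.default_rng(rng)
        n, D = pop.shape
        max_fes, fes = 5000 * D, 0
        while fes < max_fes:
            pop = de_generation(pop, fitness, F, CR, rng)
            fes += 2 * n  # select() evaluates target and trial once each
        return min(pop, key=fitness)  # best individual: the optimal round scheme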
The embodiment of the invention also provides a federated learning incentive system based on differential evolution, built on the differential evolution-based federated learning incentive method of the above embodiments and comprising:
an expected loss module, configured to obtain the expected loss offset of participant i in the t-th period within the federated learning operation period T:
[Equations (1) and (2), shown as images in the original publication]
wherein U_i(t) is the benefit of participant i in the t-th period; B(t) is the total profit; C_i(t) is the cost required for participant i to contribute data to the federation in the t-th period; Y_i(t) is the difference between the gains; and Q_i(t) represents the time queue of waiting for federal payment;
an initialization module, configured to initialize the maximum profit round T and the profit B(t), with Y_i(t) = 0 and Q_i(t) = 0; set the scaling factor and crossover factor of the differential evolution algorithm; and encode the revenue of each participant into a population, recording the initial fitness f(t)';
a participant calculation module, configured to obtain C_i(t) and Q_i(t) of each participant;
a differential evolution processing module, configured to take f(t) as the objective function and U_i(t), Y_i(t), Q_i(t), and λ_i(t) as constraints, and perform differential evolution to obtain the minimum expected loss and waiting time.
Based on the same concept, an embodiment of the present invention further provides a server, as shown in Fig. 2. The server may include a processor 810, a communication interface 820, a memory 830, and a communication bus 840, wherein the processor 810, the communication interface 820, and the memory 830 communicate with each other via the communication bus 840. The processor 810 may invoke logic instructions in the memory 830 to perform the steps of the differential evolution-based federated learning incentive method described in the embodiments above.
In addition, the logic instructions in the memory 830 may be implemented in software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Based on the same concept, embodiments of the present invention further provide a non-transitory computer-readable storage medium storing a computer program, where the computer program includes at least one code segment that is executable by a master device to control the master device to implement the steps of the differential evolution-based federated learning incentive method according to the above embodiments.
Based on the same technical concept, the embodiment of the present application further provides a computer program, which is used to implement the above method embodiment when the computer program is executed by the main control device.
The program may be stored in whole or in part on a storage medium packaged with the processor, or in part or in whole on a memory not packaged with the processor.
Based on the same technical concept, the embodiment of the present application further provides a processor, and the processor is configured to implement the above method embodiment. The processor may be a chip.
In summary, the federated learning incentive method and system based on differential evolution provided by the embodiments of the present invention use the strong global optimization and local search capabilities of DE so that, as time t advances in the federated learning process (taking one month as a period), the difference (expected loss) between the revenue each participant obtains from the federation and the revenue it should obtain is minimized, and the "expected loss and waiting time" across participants is minimized; each participant's actual revenue in federated learning is automatically balanced against its expected return, effectively encouraging participants to provide reliable data so that federated learning runs stably over the long term; dynamic adjustment of the total revenue of federated learning and the revenue of each participant is effectively realized, the goal of sustainable operation is maximized, unfairness among participants is minimized, and dependence on manual intervention is avoided.
The embodiments of the present invention can be arbitrarily combined to achieve different technical effects.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the present application are generated, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid state disk), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium includes: various media capable of storing program codes, such as ROM or RAM, magnetic or optical disks, etc.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A federated learning incentive method based on differential evolution is characterized by comprising the following steps:
step S1, obtaining the expected loss offset of participant i in the t-th period within the federated learning operation period T:
[Equations (1) and (2), shown as images in the original publication]
wherein U_i(t) is the benefit of participant i in the t-th period; B(t) is the total profit; C_i(t) is the cost required for participant i to contribute data to the federation in the t-th period; Y_i(t) is the difference between the gains; and Q_i(t) represents the time queue of waiting for federal payment;
step S2, initializing the maximum profit round T and the profit B(t), with Y_i(t) = 0 and Q_i(t) = 0; setting the scaling factor and crossover factor of the differential evolution algorithm; and encoding the revenue of each participant into a population and recording the initial fitness f(t)';
step S3, obtaining C_i(t) and Q_i(t) of each participant;
step S4, taking f(t) as the objective function and U_i(t), Y_i(t), Q_i(t), and λ_i(t) as constraints, performing differential evolution to obtain the minimum expected loss and waiting time.
2. The differential evolution-based federated learning incentive method according to claim 1, wherein in the step S1:
[Equation (3): definition of U_i(t), shown as an image in the original publication]
Y_i(t) is a queue system:
Y_i(t+1) = max[Y_i(t) + C_i(t) - u_i(t), 0]
Q_i(t) is a time queue:
Q_i(t+1) = max[Q_i(t) + λ_i(t) - u_i(t), 0].
3. the differential evolution-based federal learning incentive method as claimed in claim 2, wherein in step S2, a scaling factor F of 0.5 and a cross factor CR of 0.5 are set for the differential evolution algorithm;
encoding the revenue of each participant into a population and recording the initial fitness f(t)' specifically comprises:
the number of participants is n, and the revenue of each participant is encoded to form the population:
[Population encoding, shown as an image in the original publication: the n participants' revenue vectors, each of dimension D, form the population]
where the attribute dimension owned by each participant is D; when t = 0, the values of Y_i(t), Q_i(t), C_i(t), and λ_i(t) are substituted into the expected loss offset, and the initial fitness f(t)' is recorded.
4. The federated learning incentive method based on differential evolution of claim 1, wherein obtaining C_i(t) and Q_i(t) of each participant specifically comprises:
for i = 1 to n, where i denotes a participant, i.e., a population individual:
if participant i contributes data d_i(t) > 0 to the federation, computing C_i(t) and Q_i(t);
if i provides no data, then C_i(t) = 0.
5. The differential evolution-based federated learning incentive method according to claim 3, wherein taking f(t) as the objective function and U_i(t), Y_i(t), Q_i(t), and λ_i(t) as constraints, performing differential evolution specifically comprises:
step S41, in the current generation period t, for each individual u_{i,t}, randomly selecting three individual vectors u_{r1,t}, u_{r2,t}, u_{r3,t} from the current population, where r1 ≠ r2 ≠ r3 ≠ i, and r1, r2, r3 ∈ {1, 2, ..., n} are random integers;
performing the mutation operation according to the following formula to produce the mutant individual V_{i,t}:
V_{i,t} = u_{r1,t} + F·(u_{r2,t} - u_{r3,t})
step S42, performing random recombination crossover of the components of the target vector u_{i,t} and the mutant vector V_{i,t}:
S_{j,i,t} = V_{j,i,t} if rand_j(0,1) ≤ CR or j = j_rand, otherwise u_{j,i,t}, for j = 1, ..., D
step S43, based on individual fitness values, comparing the trial vector S_{i,t} with the target vector u_{i,t}; when the fitness of the trial individual S_i is better than that of the target individual u_i, selecting S_i to enter the next generation of evolution, and otherwise selecting u_i:
u_{i,t+1} = S_{i,t} if S_{i,t} is fitter than u_{i,t}, otherwise u_{i,t}
6. The differential evolution-based federated learning incentive method of claim 5, further comprising:
step S5, updating the participants' values in round t, and updating the values of Y_i(t) and Q_i(t) according to the updated values.
7. The differential evolution-based federated learning incentive method of claim 6, further comprising:
step S6, taking the maximum number of objective-function evaluations as the termination condition of the algorithm; if the condition is met, outputting the optimal individual, whose value is the optimal round scheme solution; otherwise, setting t = t + 1 and returning to step S42.
8. A federated learning incentive system based on differential evolution, comprising:
an expected loss module, configured to obtain the expected loss offset of participant i in the t-th period within the federated learning operation period T:
[Equations (1) and (2), shown as images in the original publication]
wherein U_i(t) is the benefit of participant i in the t-th period; B(t) is the total profit; C_i(t) is the cost required for participant i to contribute data to the federation in the t-th period; Y_i(t) is the difference between the gains; and Q_i(t) represents the time queue of waiting for federal payment;
an initialization module, configured to initialize the maximum profit round T and the profit B(t), with Y_i(t) = 0 and Q_i(t) = 0; set the scaling factor and crossover factor of the differential evolution algorithm; and encode the revenue of each participant into a population, recording the initial fitness f(t)';
a participant calculation module, configured to obtain C_i(t) and Q_i(t) of each participant;
a differential evolution processing module, configured to take f(t) as the objective function and U_i(t), Y_i(t), Q_i(t), and λ_i(t) as constraints, and perform differential evolution to obtain the minimum expected loss and waiting time.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of the differential evolution based federated learning incentive method of any of claims 1 to 7.
10. A non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor, performs the steps of the differential evolution based federated learning incentive method of any of claims 1-7.
CN202011170752.4A 2020-10-28 2020-10-28 Federal learning incentive method and system based on differential evolution Pending CN112329947A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011170752.4A CN112329947A (en) 2020-10-28 2020-10-28 Federal learning incentive method and system based on differential evolution
PCT/CN2021/074276 WO2022088541A1 (en) 2020-10-28 2021-01-29 Differential evolution-based federated learning incentive method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011170752.4A CN112329947A (en) 2020-10-28 2020-10-28 Federal learning incentive method and system based on differential evolution

Publications (1)

Publication Number Publication Date
CN112329947A true CN112329947A (en) 2021-02-05

Family

ID=74296344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011170752.4A Pending CN112329947A (en) 2020-10-28 2020-10-28 Federal learning incentive method and system based on differential evolution

Country Status (2)

Country Link
CN (1) CN112329947A (en)
WO (1) WO2022088541A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113157434A (en) * 2021-02-26 2021-07-23 西安电子科技大学 Excitation method and system for user node of horizontal federated learning system
CN113656833A (en) * 2021-08-09 2021-11-16 浙江工业大学 Privacy stealing defense method based on evolutionary computation under vertical federal architecture
CN113837368A (en) * 2021-09-27 2021-12-24 中国太平洋保险(集团)股份有限公司 Control method and device for evaluating data value of each participant in federal learning
CN114217933A (en) * 2021-12-27 2022-03-22 北京百度网讯科技有限公司 Multi-task scheduling method, device, equipment and storage medium
CN115345317A (en) * 2022-08-05 2022-11-15 北京交通大学 Fair reward distribution method based on fairness theory and oriented to federal learning

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116384502B (en) * 2022-09-09 2024-02-20 京信数据科技有限公司 Method, device, equipment and medium for calculating contribution of participant value in federal learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180089587A1 (en) * 2016-09-26 2018-03-29 Google Inc. Systems and Methods for Communication Efficient Distributed Mean Estimation
CN110363305B (en) * 2019-07-17 2023-09-26 深圳前海微众银行股份有限公司 Federal learning method, system, terminal device and storage medium
CN110490335A (en) * 2019-08-07 2019-11-22 深圳前海微众银行股份有限公司 A kind of method and device calculating participant's contribution rate
CN110910158A (en) * 2019-10-08 2020-03-24 深圳逻辑汇科技有限公司 Federal learning revenue allocation method and system
CN111222646B (en) * 2019-12-11 2021-07-30 深圳逻辑汇科技有限公司 Design method and device of federal learning mechanism and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113157434A (en) * 2021-02-26 2021-07-23 西安电子科技大学 Excitation method and system for user node of horizontal federated learning system
CN113157434B (en) * 2021-02-26 2024-05-07 西安电子科技大学 Method and system for exciting user nodes of transverse federal learning system
CN113656833A (en) * 2021-08-09 2021-11-16 浙江工业大学 Privacy stealing defense method based on evolutionary computation under vertical federal architecture
CN113837368A (en) * 2021-09-27 2021-12-24 中国太平洋保险(集团)股份有限公司 Control method and device for evaluating data value of each participant in federal learning
CN114217933A (en) * 2021-12-27 2022-03-22 北京百度网讯科技有限公司 Multi-task scheduling method, device, equipment and storage medium
CN115345317A (en) * 2022-08-05 2022-11-15 北京交通大学 Fair reward distribution method based on fairness theory and oriented to federal learning

Also Published As

Publication number Publication date
WO2022088541A1 (en) 2022-05-05

Similar Documents

Publication Publication Date Title
CN112329947A (en) Federal learning incentive method and system based on differential evolution
Liu et al. Fedcoin: A peer-to-peer payment system for federated learning
WO2021114821A1 (en) Isolation forest model construction and prediction method and device based on federated learning
Liu et al. Competing bandits in matching markets
Jiao et al. Toward an automated auction framework for wireless federated learning services market
CN115270001B (en) Privacy protection recommendation method and system based on cloud collaborative learning
CN108647988B (en) Advertisement information processing system, method and device and electronic equipment
CN111340565B (en) Information recommendation method, device, equipment and storage medium
CN112948885B (en) Method, device and system for realizing privacy protection of multiparty collaborative update model
CN108510315A (en) A kind of resource issuing method and relevant device
Alfantoukh et al. Multi-stakeholder consensus decision-making framework based on trust: a generic framework
CN116627970A (en) Data sharing method and device based on blockchain and federal learning
Chen et al. A Mechanism Design Approach for Multi-party Machine Learning
Narang et al. Design of trusted B2B market platforms using permissioned blockchains and game theory
Petruzzi et al. A generic social capital framework for optimising self-organised collective action
Shorish Blockchain state machine representation
CN111753386B (en) Data processing method and device
Saxena et al. Social network analysis of the caste-based reservation system in India
CN113761070A (en) Block chain intelligence data sharing excitation method, system, equipment and medium
CN114819197A (en) Block chain alliance-based federal learning method, system, device and storage medium
CN114625977A (en) Service recommendation method and device based on federal learning and related medium
CN114580661A (en) Data processing method and device based on federal learning and computer equipment
JP6852193B2 (en) How to determine the central vertex in social networks, devices, devices and storage media
CN111324913A (en) Collaborative authoring method, platform and computer-readable storage medium based on block chain
CN111260468A (en) Block chain based data operation method and related equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination