CN111538598A - Federated learning modeling method, device, equipment and readable storage medium
- Publication number: CN111538598A (application CN202010360246.5A)
- Authority: CN (China)
- Prior art keywords: model, model training, completed, federated, task
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Abstract
The application discloses a federated learning modeling method, device, equipment and readable storage medium. The federated learning modeling method comprises the following steps: coordinating, based on each model training time period, each model training participating device corresponding to each to-be-completed model training task to carry out a preset federated learning modeling process, so as to complete each to-be-completed model training task. The method and the device solve the technical problem that the utilization rate of the coordinator's computing resources in a federated learning system is low.
Description
Technical Field
The application relates to the field of artificial intelligence in financial technology (Fintech), and in particular to a federated learning modeling method, device and equipment, and a readable storage medium.
Background
With the continuous development of financial technology, especially internet technology and finance, more and more technologies (such as distributed computing, blockchain, artificial intelligence and the like) are being applied in the financial field. At the same time, the financial industry places higher requirements on these technologies, for example higher requirements on the distribution of the financial industry's backlog of tasks.
With the continuous development of computer software and artificial intelligence, federated learning is applied in more and more fields. In a federated learning scenario, a model is usually trained jointly by multiple federated learning participants, with a coordinator coordinating the participants during model training, for example by computing a weighted average of the gradients sent by each participant in each round of federated aggregation. However, while the participants perform local iterative training, the coordinator occupies computing resources without needing to execute any computation task; that is, the coordinator's computing resources are wasted during each participant's local iterative training, which reduces the coordinator's computing resource utilization. The prior art therefore has the technical problem that the utilization rate of the coordinator's computing resources in a federated learning system is low.
Disclosure of Invention
The application mainly aims to provide a federated learning modeling method, device and equipment, and a readable storage medium, so as to solve the technical problem in the prior art that the utilization rate of the coordinator's computing resources in a federated learning system is low.
In order to achieve the above object, the present application provides a federated learning modeling method. The federated learning modeling method is applied to a first device and includes:
negotiating and interacting with each second device associated with the first device, determining each model training task to be completed, and determining each model training participating device corresponding to each model training task to be completed in each second device;
and obtaining the model training time period corresponding to each to-be-completed model training task, and coordinating, based on each model training time period, each model training participating device corresponding to each to-be-completed model training task to perform a preset federated learning modeling process, so as to complete each to-be-completed model training task.
Optionally, the step of determining, in each of the second devices, each model training participating device corresponding to each of the to-be-completed model training tasks includes:
obtaining model training information corresponding to each model training task;
and determining, based on each piece of model training information, each model training participating device corresponding to each to-be-completed model training task by carrying out intention confirmation interaction with each second device.
Optionally, the model training information includes model index information,
the step of determining each model training participating device corresponding to each to-be-completed model training task by performing intention confirming interaction with each second device based on each model training information includes:
respectively sending the model index information to each second device, so that each second device determines each target model training task participating in each model training task based on the acquired model training demand information and each model index information, and generates first determination information corresponding to each target model training task;
and determining each model training participating device corresponding to each model training task to be completed based on each first determination information fed back by each second device.
Optionally, the model training information includes model training time information,
the step of determining each model training participating device corresponding to each to-be-completed model training task by performing intention confirming interaction with each second device based on each model training information includes:
respectively sending the model training time information to each second device, so that each second device determines each target model training task participating in each model training task based on the acquired training time limit information and each model training time information, and generates second determination information corresponding to each target model training task;
and determining each model training participating device corresponding to each model training task to be completed based on each piece of second determination information fed back by each piece of second equipment.
Optionally, in each model training time period, respectively receiving local model parameters sent by each model training participating device corresponding to the model training time period, and calculating the latest federated model parameters based on a preset aggregation rule;
determining whether the latest federated model parameters meet a preset training task end condition;
if the latest federated model parameters meet the preset training task end condition, sending the latest federated model parameters to each second device so that each second device updates its local model;
and if the latest federated model parameters do not meet the preset training task end condition, respectively sending the latest federated model parameters to each model training participating device, so that each participating device updates its federated participation model, and recalculating the latest federated model parameters until the latest federated model parameters meet the preset training task end condition.
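The four optional steps above describe a per-round aggregation loop on the coordinator side. A minimal Python sketch of that loop follows; every name here (run_training_period, aggregate, end_condition_met, and the device methods) is an illustrative assumption, not the patent's implementation:

```python
# Illustrative coordinator-side (first device) sketch of the four optional
# steps above. All names are hypothetical placeholders.

def run_training_period(participants, all_second_devices, aggregate, end_condition_met):
    """Coordinate one to-be-completed model training task in its time period."""
    while True:
        # Receive local model parameters from each participating device.
        local_params = [dev.receive_local_parameters() for dev in participants]

        # Calculate the latest federated model parameters with a preset
        # aggregation rule (e.g. weighted averaging).
        latest = aggregate(local_params)

        if end_condition_met(latest):
            # End condition met: send the final parameters to every second
            # device so each one updates its local model.
            for dev in all_second_devices:
                dev.send_latest_parameters(latest)
            return latest

        # End condition not met: return the parameters to the participating
        # devices so they update their federated participation models and
        # run another round of local iterative training.
        for dev in participants:
            dev.send_latest_parameters(latest)
```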
In order to achieve the above object, the present application further provides a federated learning modeling method. This federated learning modeling method is applied to a second device and includes:
interacting with the first device, determining model training information, and acquiring device state information, so as to determine, based on the device state information, whether to participate in the to-be-completed model training task corresponding to the model training information;
and if participating in the to-be-completed model training task, executing a preset federated learning modeling process through coordinated interaction with the first device, so as to complete the to-be-completed model training task.
Optionally, the step of executing a preset federated learning modeling process through coordinated interaction with the first device includes:
determining the model to be trained corresponding to the to-be-completed model training task, performing iterative training on the model to be trained until it reaches a preset number of iterations, and acquiring local model parameters corresponding to the model to be trained;
sending the local model parameters to the first device, so that the first device calculates the latest federated model parameters based on the local model parameters;
and receiving the latest federated model parameters fed back by the first device, and updating the model to be trained based on the latest federated model parameters until the local model reaches a preset training end condition, obtaining a target modeling model corresponding to the to-be-completed model training task.
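For reference, a participant-side sketch of the three steps above, under the same caveat that every name (train_one_round, the coordinator helpers, the end-condition check) is a hypothetical placeholder rather than the patent's API:

```python
# Illustrative participant-side (second device) sketch of the three steps above.

def participate(model, coordinator, preset_iterations, end_condition_met):
    while True:
        # Iteratively train the model to be trained for the preset number
        # of iterations.
        for _ in range(preset_iterations):
            model.train_one_round()

        # Send the local model parameters so the first device can calculate
        # the latest federated model parameters.
        coordinator.send(model.parameters())

        # Receive the latest federated model parameters and update the model.
        model.set_parameters(coordinator.receive())

        if end_condition_met(model):
            return model  # target modeling model for the completed task
```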
The application also provides a federated learning modeling apparatus. The federated learning modeling apparatus is a virtual apparatus applied to the first device, and it includes:
the negotiation module is used for carrying out negotiation interaction with each second device associated with the first device, determining each model training task to be completed, and determining each model training participating device corresponding to each model training task to be completed in each second device;
and the coordination module is used for acquiring the model training time period corresponding to each to-be-completed model training task, and coordinating, based on each model training time period, each model training participating device corresponding to each to-be-completed model training task to carry out a preset federated learning modeling process, so as to complete each to-be-completed model training task.
Optionally, the negotiation module includes:
the acquisition unit is used for acquiring model training information corresponding to each model training task;
and the determining unit is used for determining each model training participating device corresponding to each model training task to be completed by carrying out intention confirmation interaction with each second device based on each model training information.
Optionally, the determining unit includes:
the first sending subunit is configured to send each piece of model index information to each piece of second equipment, so that each piece of second equipment determines, in each piece of model training task, each target model training task involved in each piece of model training task based on the acquired model training requirement information and each piece of model index information, and generates first determination information corresponding to each piece of target model training task;
and the first determining subunit is configured to determine, based on each piece of first determination information fed back by each piece of second equipment, each piece of model training participating equipment corresponding to each to-be-completed model training task.
Optionally, the determining unit further includes:
a second sending subunit, configured to send each piece of model training time information to each piece of second equipment, so that each piece of second equipment determines, in each piece of model training task, each target model training task that participates in based on the obtained training time limit information and each piece of model training time information, and generates second determination information corresponding to each target model training task;
and the second determining subunit is configured to determine, based on each piece of second determination information fed back by each piece of second equipment, each piece of model training participating equipment corresponding to each to-be-completed model training task.
Optionally, the coordination module comprises:
the calculation unit is used for respectively receiving, in each model training time period, the local model parameters sent by each model training participating device corresponding to that time period, and calculating the latest federated model parameters based on a preset aggregation rule;
the first judging unit is used for determining whether the latest federated model parameters meet a preset training task end condition;
the updating unit is used for sending, if the latest federated model parameters meet the preset training task end condition, the latest federated model parameters to each second device to update the local model of each second device;
and the second judging unit is used for respectively sending, if the latest federated model parameters do not meet the preset training task end condition, the latest federated model parameters to each model training participating device, so that each participating device updates its federated participation model to recalculate the latest federated model parameters until the latest federated model parameters meet the preset training task end condition.
In order to achieve the above object, the present application further provides a federated learning modeling apparatus. This federated learning modeling apparatus is applied to the second device and includes:
the interaction module is used for interacting with the first device, determining model training information, and acquiring device state information, so as to determine, based on the device state information, whether to participate in the to-be-completed model training task corresponding to the model training information;
and the federated learning modeling module is used for executing, if participating in the to-be-completed model training task, a preset federated learning modeling process through coordinated interaction with the first device, so as to complete the to-be-completed model training task.
Optionally, the federated learning modeling module includes:
the iterative training unit is used for determining the model to be trained corresponding to the to-be-completed model training task, performing iterative training on the model to be trained until it reaches a preset number of iterations, and acquiring local model parameters corresponding to the model to be trained;
the sending unit is used for sending the local model parameters to the first device, so that the first device calculates the latest federated model parameters based on the local model parameters;
and the updating unit is used for receiving the latest federated model parameters fed back by the first device, and updating the model to be trained based on the latest federated model parameters until the local model reaches a preset training end condition, obtaining a target modeling model corresponding to the to-be-completed model training task.
The application also provides federated learning modeling equipment. The federated learning modeling equipment is physical equipment and includes: a memory, a processor, and a program of the federated learning modeling method that is stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the federated learning modeling method described above.
The present application also provides a readable storage medium having stored thereon a program implementing the federated learning modeling method, the program implementing the steps of the federated learning modeling method described above when executed by a processor.
The method performs negotiation interaction with each second device associated with the first device to determine each to-be-completed model training task, determines, among the second devices, each model training participating device corresponding to each to-be-completed model training task, then obtains the model training time period corresponding to each to-be-completed model training task, and coordinates, based on each model training time period, each model training participating device corresponding to each to-be-completed model training task to carry out a preset federated learning modeling process, so as to complete each to-be-completed model training task. That is, before federated learning modeling is performed, each to-be-completed model training task is determined by interacting with each second device, and the model training participating devices and model training time period corresponding to each task are then determined. The coordinator can therefore coordinate, based on the model training time periods, the participating devices of each to-be-completed model training task to carry out the preset federated learning modeling process and complete each task. While the participating devices of one to-be-completed model training task are performing local iterative training, the coordinator can coordinate the participating devices of other to-be-completed model training tasks to carry out federated learning modeling. This avoids the situation in which the coordinator occupies computing resources without executing any computation task while the participants perform local iterative training, so the coordinator's computing resources are fully utilized and their utilization rate is improved, which solves the technical problem that the utilization rate of the coordinator's computing resources in the federated learning system is low.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below; it is obvious that those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a first embodiment of the federated learning modeling method of the present application;
FIG. 2 is a schematic flow chart of a second embodiment of the federated learning modeling method of the present application;
fig. 3 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the federated learning modeling method of the present application, referring to fig. 1, the federated learning modeling method is applied to a first device and includes:
step S10, performing negotiation interaction with each second device associated with the first device, determining each model training task to be completed, and determining each model training participating device corresponding to each model training task to be completed in each second device;
In this embodiment, it should be noted that one to-be-completed model training task corresponds to one or more model training participating devices, the first device is the coordinator of horizontal federated learning, the second devices are the participants of horizontal federated learning, the model training participating devices are the second devices participating in the to-be-completed training task, and a to-be-completed model training task is a task of performing model training based on horizontal federated learning, where one to-be-completed model training task may be used to train one or more target models, and one target model may also be obtained by executing one or more to-be-completed model training tasks.
Optionally, the second device may choose to execute the to-be-completed model training task in a preset trusted execution environment, for example Intel SGX (Software Guard Extensions).
Negotiation interaction is performed with each second device associated with the first device to determine each to-be-completed model training task, and each model training participating device corresponding to each to-be-completed model training task is determined among the second devices. Specifically, negotiation interaction is performed with each second device associated with the first device to determine each to-be-completed model training task and the model training information of each to-be-completed model training task, and each model training participating device corresponding to each to-be-completed model training task is then determined among the second devices based on the model training information.
In step S10, the step of determining, in each second device, each model training participating device corresponding to each to-be-completed model training task includes:
step S11, obtaining model training information corresponding to each model training task;
In this embodiment, it should be noted that the model training information includes model name information, a model training time period, and the like, where the model name information is an identifier of the corresponding model to be trained, such as a code or a character string, and the model training time period is the estimated time required for model training.
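The model training information described here can be pictured as a small record. The field names below are illustrative assumptions only; the patent does not define a concrete data structure:

```python
# Hypothetical record for the model training information described above.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class ModelTrainingInfo:
    model_index_info: str   # identifier (code/string) of the to-be-completed task
    model_name_info: str    # identifier of the corresponding model to be trained
    period_start: datetime  # start of the estimated model training time period
    period_end: datetime    # end of the estimated model training time period
```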
Step S12, based on each piece of model training information, determining each piece of model training participating equipment corresponding to each to-be-completed model training task by performing intention confirmation interaction with each second equipment.
In this embodiment, each model training participating device corresponding to each to-be-completed model training task is determined, based on each piece of model training information, through willingness-confirmation interaction with each second device. Specifically, each piece of model training information is sent to each second device; the second device acquires its device state information and determines, based on that state information, whether to participate in the to-be-completed model training task corresponding to each piece of model training information. If the second device determines to participate in a to-be-completed model training task, it feeds determination information corresponding to that task back to the first device. The first device receives each piece of determination information, identifies the second device corresponding to each piece of determination information as a model training participating device, and thereby obtains the one or more model training participating devices corresponding to each to-be-completed model training task.
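As a rough illustration of this willingness-confirmation exchange, the following Python sketch groups the second devices by the task they agree to join. The message helpers (send, receive_determination_info) are assumptions, not an API specified by the patent:

```python
# Hypothetical sketch of the willingness-confirmation interaction above.

from collections import defaultdict

def confirm_participants(second_devices, model_training_infos):
    """Group second devices by the to-be-completed task they agree to join."""
    participants = defaultdict(list)
    for device in second_devices:
        # The first device sends each piece of model training information.
        for info in model_training_infos:
            device.send(info)
        # The device replies with determination information naming the tasks
        # it is willing to participate in, based on its device state.
        for task_id in device.receive_determination_info():
            participants[task_id].append(device)
    return participants
```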
Wherein the model training information includes model index information,
the step of determining each model training participating device corresponding to each to-be-completed model training task by performing intention confirming interaction with each second device based on each model training information includes:
step A10, sending each piece of model index information to each piece of second equipment respectively, so that each piece of second equipment determines each target model training task participating in each model training task based on the acquired model training requirement information and each piece of model index information respectively, and generates first determination information corresponding to each target model training task;
In this embodiment, it should be noted that the model index information is identification information, such as a code or a character string, of the corresponding to-be-completed model training task, and the first determination information indicates that the second device has determined to participate in the to-be-completed model training task corresponding to the model index information. The first determination information may be willingness information, local model parameter information, or local model gradient information separately replied by the second device, indicating that the second device is willing to participate in the corresponding to-be-completed model training task. Each to-be-completed training task corresponds to a model training time period in which the task is executed.
Each piece of model index information is sent to each second device, so that each second device determines, based on the acquired model training demand information and each piece of model index information, the target model training tasks it will participate in among the model training tasks, and generates first determination information corresponding to each target model training task. Specifically, within a preset time length before each model training time period starts, the model index information corresponding to that model training time period is broadcast to each second device. The second device determines the corresponding to-be-completed model training task based on the model index information and decides whether to participate in it based on its acquired current device running state, which includes the currently available computing resources. If the second device determines to participate in the to-be-completed model training task, it feeds first determination information back to the first device to indicate its participation; if it determines not to participate, it ignores the model index information and waits to receive the next piece of model index information.
Step A20, determining, based on each piece of first determination information fed back by each second device, each model training participating device corresponding to each to-be-completed model training task.
In this embodiment, each model training participating device corresponding to each to-be-completed model training task is determined based on each piece of first determination information fed back by each second device. Specifically, before each model training time period starts, each piece of first determination information corresponding to the to-be-completed model training task sent by each second device is received, and each second device that sent first determination information is taken as a model training participating device, where one piece of first determination information corresponds to one second device and to one model training participating device.
Wherein the model training information includes model training time information,
the step of determining each model training participating device corresponding to each to-be-completed model training task by performing intention confirming interaction with each second device based on each model training information includes:
step B10, sending each piece of model training time information to each piece of second equipment, so that each piece of second equipment determines each target model training task involved in each piece of model training task based on the obtained training time limit information and each piece of model training time information, and generates second determination information corresponding to each target model training task;
in this embodiment, it should be noted that the second determination information is information indicating that the second device determines to participate in the to-be-completed model training task corresponding to the model training time information, each to-be-completed training task corresponds to one model training time period for executing a task, and the second determination information is sent to the first device by the second device before the model training time period corresponding to the to-be-completed model training task corresponding to the second determination information begins.
Each piece of model training time information is sent to each second device, so that each second device determines, based on the acquired training time limit information and each piece of model training time information, the target model training tasks it will participate in, and generates second determination information corresponding to each target model training task. Specifically, before the model training time period corresponding to each to-be-completed model training task, each piece of model training time information is sent to each second device, so that each second device acquires its training time limit information, which indicates whether the second device has idle time and sufficient computing resources to participate in the to-be-completed model training task within the model training time period. Each second device then determines, based on the training time limit information and the model training time information, whether to participate in the to-be-completed model training task corresponding to the model training time information. If it determines to participate, it feeds second determination information back to the first device to indicate its participation; if not, it ignores the model training time information and waits to receive the next piece of model training time information.
Step B20, determining, based on each piece of second determination information fed back by each second device, each model training participating device corresponding to each to-be-completed model training task.
In this embodiment, each model training participating device corresponding to each to-be-completed model training task is determined based on each piece of second determination information fed back by each second device. Specifically, before each model training time period starts, each piece of second determination information corresponding to the to-be-completed model training task sent by each second device is received, and each second device that sent second determination information is taken as a model training participating device, where one piece of second determination information corresponds to one second device and to one model training participating device.
Step S20, obtaining a model training time period corresponding to each to-be-completed model training task, and coordinating each model training participating device corresponding to each to-be-completed model training task to perform a preset federal learning modeling process based on each model training time period, so as to complete each to-be-completed model training task.
In this embodiment, it should be noted that the preset federated learning modeling procedure is the procedure for performing federated learning, and the model training time periods include a first model training time period and a second model training time period.
The model training time period corresponding to each to-be-completed model training task is obtained, and, based on each model training time period, each model training participating device corresponding to each to-be-completed model training task is coordinated to carry out the preset federated learning modeling process so as to complete each to-be-completed model training task. Specifically, the model training time period corresponding to each to-be-completed model training task is obtained; within each model training time period, the local model parameters sent by each corresponding model training participating device are received, and the latest federated model parameters corresponding to the local model parameters are calculated based on a preset aggregation rule, where the preset aggregation rule includes weighted averaging, summation and the like. Whether the latest federated model parameters reach a preset training-task completion condition is then determined. If the latest federated model parameters reach the training-task completion condition, they are sent to each second device so that each second device updates its local model based on them; if they do not, they are respectively sent to each model training participating device so that each participating device updates its local model, federated learning is performed again based on the updated local models, and the latest federated model parameters are recalculated until they reach the training-task completion condition. The training-task completion condition includes convergence of the loss function, the model reaching a maximum number of iterations, and the like. If the model training time periods have an intersection period, the first device determines, according to the order in which the complete sets of local model parameters for each to-be-completed model training task are received within the intersection period, the time sequence for calculating the latest federated model parameters of each task. For example, if the to-be-completed model training tasks include task A and task B, and the first device receives all the local model parameters sent by task A's participating devices at 9:07 but receives all the local model parameters sent by task B's participating devices at 9:09, then the first device preferentially calculates the latest federated model parameters for task A and afterwards calculates those for task B.
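The intersection-period rule above is essentially earliest-complete-first scheduling: the task whose full set of local parameters arrived first (task A at 9:07) is aggregated before a later one (task B at 9:09). A small hypothetical sketch of that ordering, with all names assumed:

```python
# Hypothetical sketch of earliest-complete-first ordering of aggregations.

import heapq
from itertools import count

def aggregate_in_arrival_order(completed):
    """completed: iterable of (arrival_time, task) pairs; earliest first."""
    tie = count()  # tie-breaker so equal arrival times never compare tasks
    queue = [(arrival, next(tie), task) for arrival, task in completed]
    heapq.heapify(queue)
    while queue:
        _, _, task = heapq.heappop(queue)
        task.calculate_latest_federated_parameters()
```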
Optionally, the first device may choose to perform, in the preset trusted execution environment, the step of calculating the latest federated model parameters corresponding to the local model parameters based on the preset aggregation rule.
The step of coordinating, based on each model training time period, each model training participating device corresponding to each to-be-completed model training task to perform the preset federated learning modeling process includes the following steps:
Step S21, in each model training time period, respectively receiving the local model parameters sent by each model training participating device corresponding to that time period, and calculating the latest federated model parameters based on a preset aggregation rule;
In this embodiment, it should be noted that the local model parameters include model network parameters, gradient information and the like, where the model network parameters are the network parameters of the local model owned by the model training participating device after that local model has been iteratively trained a preset number of times. For example, assuming the local model is the linear model Y = β₀ + β₁X₁ + β₂X₂ + … + βₙXₙ, the network parameters are the vector (β₀, β₁, β₂, …, βₙ).
Within each model training time period, the local model parameters sent by each model training participating device corresponding to that time period are received, and the latest federated model parameters are calculated based on the preset aggregation rule. Specifically, within each model training time period, the local model parameters sent by each corresponding model training participating device are received, where each set of local model parameters is obtained by the model training participating device iteratively training its federated participation model a preset number of times, the federated participation model being the participating device's local model. The local model parameters are then weighted and averaged based on the preset aggregation rule to obtain the latest federated model parameters.
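A minimal sketch of this weighted-averaging aggregation rule for parameter vectors such as (β₀, β₁, …, βₙ); the per-participant weights (for example, sample counts) are an illustrative assumption, since the patent does not fix how they are chosen:

```python
# Minimal sketch of weighted averaging of local parameter vectors.

def weighted_average(param_vectors, weights):
    """Aggregate local parameter vectors into the latest federated vector."""
    total = sum(weights)
    dim = len(param_vectors[0])
    return [
        sum(w * vec[i] for vec, w in zip(param_vectors, weights)) / total
        for i in range(dim)
    ]

# Example: two participants, equal weight.
latest = weighted_average([[0.1, 2.0, -1.0], [0.3, 1.0, -3.0]], [1, 1])
# latest == [0.2, 1.5, -2.0]
```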
Step S22, determining whether the latest federated model parameters meet a preset training task end condition;
In this embodiment, it should be noted that the preset training task end condition includes the training reaching a maximum number of iterations, the loss function converging, and the like.
Whether the latest federated model parameters meet the preset training task end condition is determined. Specifically, if the difference between the latest federated model parameters and the latest federated model parameters of the previous round is smaller than a preset difference threshold, it is determined that the latest federated model parameters have reached the preset training task end condition.
Step S23, if the latest federated model parameters meet the preset training task end condition, sending the latest federated model parameters to each second device so that each second device updates its local model;
In this embodiment, if the latest federated model parameters meet the preset training task end condition, they are sent to each second device so that each second device updates its own local model. Specifically, if the latest federated model parameters meet the preset training task end condition, they are sent to each second device so that each second device replaces the corresponding model parameters in its local model with the latest federated model parameters.
Step S24, if the latest federated model parameters do not meet the preset training task end condition, respectively sending the latest federated model parameters to each model training participating device so that each participating device updates its federated participation model, and recalculating the latest federated model parameters until they meet the preset training task end condition.
In this embodiment, if the latest federated model parameters do not meet the preset training task end condition, they are respectively sent to each model training participating device so that each participating device updates its own federated participation model, and the latest federated model parameters are recalculated until they meet the preset training task end condition. Specifically, if the latest federated model parameters do not meet the preset training task end condition, they are respectively sent to each model training participating device, so that each participating device updates its federated participation model based on them and iteratively trains the updated federated participation model. When the number of training iterations reaches the preset number, the local model parameters of the iteratively trained federated participation model are re-acquired and sent to the first device, so that the first device can recalculate the latest federated model parameters based on the re-acquired local model parameters sent by each second device and the preset aggregation rule, until the latest federated model parameters meet the preset training task end condition.
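The difference-threshold end condition described in step S22 can be sketched as a small check, complementing the coordinator loop shown earlier; the threshold value and names are assumptions for illustration:

```python
# Hypothetical sketch of the difference-threshold end condition of step S22:
# training ends when successive federated parameter vectors differ by less
# than a preset threshold.

def end_condition_met(latest, previous, threshold=1e-4):
    """True if the parameter change from the previous round is small enough."""
    diff = max(abs(a - b) for a, b in zip(latest, previous))
    return diff < threshold
```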
In this embodiment, each to-be-completed model training task is determined through negotiation interaction with each second device associated with the first device, each model training participating device corresponding to each to-be-completed model training task is determined among the second devices, the model training time period corresponding to each to-be-completed model training task is then obtained, and, based on each model training time period, each model training participating device corresponding to each to-be-completed model training task is coordinated to carry out the preset federated learning modeling process so as to complete each to-be-completed model training task. That is, this embodiment provides a method for performing federated learning in a time-division manner: before federated learning modeling is performed, the to-be-completed model training tasks are determined by interacting with each second device, the model training participating devices and model training time period corresponding to each task are then determined, and the coordinator can coordinate, based on the model training time periods, the participating devices of each to-be-completed model training task to carry out the preset federated learning modeling process and complete each task. While the participating devices of one to-be-completed model training task perform local iterative training, the coordinator can coordinate the participating devices of other to-be-completed model training tasks to carry out federated learning modeling. This avoids the situation in which the coordinator consumes computing resources without needing to execute any computation task while each participant performs local iterative training, so the coordinator's computing resources are fully utilized and their utilization rate is improved, thereby solving the technical problem that the utilization rate of the coordinator's computing resources in the federated learning system is low.
Further, referring to fig. 2, based on the first embodiment of the present application, in another embodiment of the present application, the federated learning modeling method is applied to a second device, and the federated learning modeling method includes:
Step C10, interacting with the first device, determining model training information, and acquiring device state information, so as to determine, based on the device state information, whether to participate in the to-be-completed model training task corresponding to the model training information;
In this embodiment, the model training information includes model index information and model training time information, and the device state information includes the available computing resources of the second device, where the available computing resources are the computing resources that the second device can call within the model training time period corresponding to the to-be-completed model training task.
Before step C10, the second device performs negotiation interaction with the first device to determine each to-be-completed model training task.
The second device interacts with the first device, determines the model training information, and acquires its device state information, so as to determine, based on the device state information, whether to participate in the to-be-completed model training task corresponding to the model training information. Specifically, the second device negotiates and interacts with the first device to obtain the model training information, determines its available computing resources, and then determines whether the available computing resources satisfy the to-be-completed model training task corresponding to the model training information. If the available computing resources satisfy the task, the second device determines to participate in it; if they do not, it determines not to participate. For example, if the to-be-completed model training task would occupy 50% of all the second device's computing resources but the second device can currently call only 40% of its computing resources, the available computing resources do not satisfy the task, and the second device determines not to participate.
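The 50%/40% example above reduces to a simple threshold check; a hypothetical sketch:

```python
# Hypothetical sketch of the participation decision in the example above.

def decide_participation(available_fraction, required_fraction):
    """Return True if the second device should join the training task."""
    return available_fraction >= required_fraction

decide_participation(0.40, 0.50)  # False: do not participate
decide_participation(0.60, 0.50)  # True: participate
```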
Step C20, if participating in the to-be-completed model training task, executing a preset federated learning modeling process through coordinated interaction with the first device, so as to complete the to-be-completed model training task.
In this embodiment, if the second device participates in the to-be-completed model training task, it executes the preset federated learning modeling process through coordinated interaction with the first device so as to complete the task. Specifically, if participating, the second device determines the model to be trained corresponding to the to-be-completed model training task, iteratively trains it until the preset number of training iterations is reached, acquires the local model parameters of the iteratively trained model, and sends them to the first device, so that the first device calculates the latest federated model parameters based on the local model parameters sent by each second device and broadcasts them to the second devices. The second device receives the latest federated model parameters, updates the model to be trained based on them, and judges whether the updated model meets a preset iteration end condition. If it does, the to-be-completed model training task is judged complete; if it does not, the model to be trained is iteratively trained again so that the first device recalculates the latest federated model parameters and the model is updated again, until the updated model meets the preset iteration end condition.
In step C20, the step of executing a preset federated learning modeling process through coordinated interaction with the first device includes:
step C21, determining a model to be trained corresponding to the model training task to be completed, and performing iterative training on the model to be trained until the number of times of iteration of the model to be trained reaches a preset number of times, and acquiring local model parameters corresponding to the model to be trained;
in this embodiment, a model to be trained corresponding to the model training task to be completed is determined, iterative training is performed on the model to be trained until the model to be trained reaches a preset iteration number, and a local model parameter corresponding to the model to be trained is obtained, specifically, the model to be trained corresponding to the model training task to be completed is determined, iterative training and updating are performed on the model to be trained until the model to be trained reaches the preset iteration number, and the local model parameter of the model to be trained after the iterative training and updating is extracted.
Step C22, sending the local model parameters to the first device, so that the first device calculates the latest federated model parameters based on the local model parameters;
In this embodiment, the local model parameters are sent to the first device so that the first device calculates the latest federated model parameters based on them. Specifically, the local model parameters are sent to the first device, so that the first device calculates, based on the local model parameters sent by each associated second device and through a preset aggregation rule, the latest federated model parameters corresponding to the local model parameters, where the preset aggregation rule includes weighted averaging, summation and the like.
Step C23, receiving the latest federated model parameters fed back by the first device, and updating the model to be trained based on the latest federated model parameters until the local model reaches a preset training end condition, obtaining a target modeling model corresponding to the to-be-completed model training task.
In this embodiment, the latest federated model parameters fed back by the first device are received, and the model to be trained is updated based on them until the local model reaches the preset training end condition, yielding the target modeling model corresponding to the to-be-completed model training task. Specifically, the latest federated model parameters fed back by the first device are received, and the local model parameters in the model to be trained are replaced with the latest federated model parameters to obtain the updated model. Whether the updated model meets a preset iterative-training end condition is then judged: if it does, the updated model is taken as the target modeling model; if it does not, the model is iteratively trained, replaced and updated again until the replaced and updated model meets the preset iterative-training end condition.
In this embodiment, model training information is determined through interaction with the first device, device state information is acquired to determine, based on it, whether to participate in the to-be-completed model training task corresponding to the model training information, and, if participating, a preset federated learning modeling process is executed through coordinated interaction with the first device so as to complete the task. That is, this embodiment provides a modeling method based on federated learning: before federated learning modeling is performed, the second device negotiates with the first device, obtains its own device running state, and determines whether to participate in the to-be-completed model training task corresponding to the model training information. If it decides to participate, it carries out coordinated interaction with the first device and executes the preset federated learning modeling process to complete the task. Because the second device can autonomously choose whether to participate in a to-be-completed model training task before each round of federated learning modeling, a foundation is laid for solving the technical problem that the utilization rate of the coordinator's computing resources in a federated learning system is low.
Referring to fig. 3, fig. 3 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 3, the federated learning modeling equipment may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection and communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001 described above.
Optionally, the federated learning modeling equipment may further include a rectangular user interface, a network interface, a camera, RF (Radio Frequency) circuits, a sensor, an audio circuit, a WiFi module, and the like. The rectangular user interface may comprise a display screen (Display) and an input sub-module such as a keyboard (Keyboard), and optionally may also comprise a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
Those skilled in the art will appreciate that the federated learning modeling equipment structure shown in fig. 3 does not constitute a limitation of the federated learning modeling equipment, which may include more or fewer components than shown, or combine some components, or have a different arrangement of components.
As shown in fig. 3, the memory 1005, which is a type of computer storage medium, may include an operating system, a network communication module, and a federated learning modeling program. The operating system is a program that manages and controls the hardware and software resources of the federated learning modeling equipment, and supports the running of the federated learning modeling program as well as other software and/or programs. The network communication module is used to enable communication between the components within the memory 1005, as well as with other hardware and software in the federated learning modeling system.
In the federal learning modeling apparatus shown in fig. 3, the processor 1001 is configured to execute the federal learning modeling program stored in the memory 1005 to implement the steps of any one of the above-described federal learning modeling methods.
The specific implementation of the federal learning modeling device of the application is basically the same as that of each embodiment of the federal learning modeling method, and is not described herein again.
The embodiment of the present application further provides a federal learning modeling device, which is applied to the first device, and the federal learning modeling device includes:
the negotiation module is used for carrying out negotiation interaction with each second device associated with the first device, determining each model training task to be completed, and determining each model training participating device corresponding to each model training task to be completed in each second device;
and the coordination module is used for acquiring model training time periods corresponding to the model training tasks to be completed, coordinating each model training participating device corresponding to each model training task to be completed to carry out a preset federal learning modeling process based on each model training time period, so as to complete each model training task to be completed.
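As a rough illustration of why per-task training time periods raise coordinator utilization, the sketch below runs several to-be-completed tasks back to back inside their negotiated windows. TrainingTask, run_task, and the schedule-by-start-time rule are assumed details, not the patented procedure itself.

```python
# Hedged sketch of the coordination module's scheduling idea: the first
# device serves several to-be-completed tasks inside their negotiated
# training time periods. All names are assumptions for illustration.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class TrainingTask:
    name: str
    period: Tuple[int, int]   # negotiated (start, end) training window
    participants: List[str]   # second devices that confirmed participation


def schedule_tasks(tasks: List[TrainingTask],
                   run_task: Callable[[TrainingTask], None]) -> None:
    """Coordinate each task within its own window, one after another."""
    # Serving the windows in start order keeps the coordinator busy across
    # many tasks instead of idling between the rounds of a single task.
    for task in sorted(tasks, key=lambda t: t.period[0]):
        run_task(task)


if __name__ == "__main__":
    tasks = [
        TrainingTask("credit-model", (10, 20), ["dev-a", "dev-b"]),
        TrainingTask("fraud-model", (0, 10), ["dev-b", "dev-c"]),
    ]
    schedule_tasks(
        tasks,
        lambda t: print(f"coordinating {t.name} with {t.participants}"),
    )
```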
Optionally, the negotiation module includes:
the acquisition unit is used for acquiring model training information corresponding to each model training task;
and the determining unit is used for determining each model training participating device corresponding to each model training task to be completed by carrying out intention confirmation interaction with each second device based on each model training information.
Optionally, the determining unit includes:
the first sending subunit is configured to send each piece of model index information to each piece of second equipment, so that each piece of second equipment determines, in each piece of model training task, each target model training task involved in each piece of model training task based on the acquired model training requirement information and each piece of model index information, and generates first determination information corresponding to each piece of target model training task;
and the first determining subunit is configured to determine, based on each piece of first determination information fed back by each piece of second equipment, each piece of model training participating equipment corresponding to each to-be-completed model training task.
Optionally, the determining unit further includes:
a second sending subunit, configured to send each piece of model training time information to each second device, so that each second device determines, among the model training tasks, each target model training task to participate in based on the acquired training time limit information and the model training time information, and generates second determination information corresponding to each target model training task;
and the second determining subunit is configured to determine, based on the second determination information fed back by each second device, each model training participating device corresponding to each to-be-completed model training task.
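Both sending/determining subunit pairs above follow the same broadcast-and-confirm pattern: the first device distributes per-task information (model index information or model training time information) and collects the devices' determination messages. The sketch below abstracts that pattern; collect_participants and accepts are invented names, and the toy acceptance rule is purely illustrative.

```python
# Assumed sketch of the intent-confirmation exchange; not an API from
# the patent. Each device feeds back determination information only for
# the tasks it decides to join.
from typing import Callable, Dict, Iterable, List


def collect_participants(
    devices: Iterable[str],
    task_infos: Dict[str, dict],
    accepts: Callable[[str, dict], bool],
) -> Dict[str, List[str]]:
    """Map each to-be-completed task to the devices that confirmed it."""
    participants: Dict[str, List[str]] = {t: [] for t in task_infos}
    for device in devices:
        for task_id, info in task_infos.items():
            if accepts(device, info):
                participants[task_id].append(device)
    return participants


if __name__ == "__main__":
    infos = {"task-1": {"deadline": 10}, "task-2": {"deadline": 3}}
    # Toy rule: a device joins tasks whose deadline is not too tight.
    result = collect_participants(
        ["dev-a", "dev-b"], infos, lambda d, i: i["deadline"] >= 5
    )
    print(result)  # {'task-1': ['dev-a', 'dev-b'], 'task-2': []}
```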
Optionally, the coordination module comprises:
the calculation unit is used for receiving, in each model training time period, the local model parameters sent by each model training participating device corresponding to that time period, and calculating the latest federal model parameters based on a preset aggregation rule;
the first judging unit is used for determining whether the latest federal model parameters meet preset training task ending conditions or not;
the updating unit is used for sending the latest federal model parameters to each second device to update the local model of each second device if the latest federal model parameters meet the preset training task ending conditions;
and the second judging unit is used for, if the latest federal model parameters do not meet the preset training task ending conditions, respectively sending the latest federal model parameters to each model training participating device, so that each model training participating device updates its respective federal participation model and the latest federal model parameters are recalculated, until the latest federal model parameters meet the preset training task ending conditions.
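The text leaves both the "preset aggregation rule" and the "preset training task ending conditions" open. A common concrete choice is sample-weighted parameter averaging (FedAvg-style) with a convergence test, sketched below under that assumption.

```python
# Sketch under an assumption: weighted averaging and a convergence test
# are common stand-ins for the unspecified aggregation rule and end
# condition; the patent does not mandate either.
import numpy as np


def aggregate(local_params: list, sample_counts: list) -> np.ndarray:
    """Sample-weighted average of the participants' parameter vectors."""
    total = float(sum(sample_counts))
    weighted = [p * (n / total) for p, n in zip(local_params, sample_counts)]
    return np.sum(weighted, axis=0)


def task_finished(latest: np.ndarray, previous, tol: float = 1e-4) -> bool:
    """One possible end condition: successive parameters stop moving."""
    return previous is not None and np.linalg.norm(latest - previous) < tol


if __name__ == "__main__":
    params = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
    latest = aggregate(params, sample_counts=[100, 300])
    print(latest)                       # [2.5 3.5]
    print(task_finished(latest, None))  # False: first round, keep training
```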
The specific implementation of the federal learning modeling apparatus of the application is basically the same as that of each embodiment of the federal learning modeling method, and is not described herein again.
In order to achieve the above object, an embodiment of the present application further provides a federal learning modeling apparatus, where the federal learning modeling apparatus is applied to a second device, and the federal learning modeling apparatus includes:
the interaction module is used for interacting with the first equipment, determining model training information and acquiring equipment state information so as to determine whether to participate in a to-be-completed model training task corresponding to the model training information based on the equipment state information;
and the federal learning modeling module is used for executing a preset federal learning modeling process to complete the model training task to be completed through carrying out coordination interaction with the first equipment if participating in the model training task to be completed.
Optionally, the federal learning modeling module includes:
the iterative training unit is used for determining a model to be trained corresponding to the to-be-completed model training task, iteratively training the model to be trained until the number of iterations reaches a preset number, and acquiring the local model parameters corresponding to the model to be trained;
a sending unit, configured to send the local model parameter to the first device, so that the first device calculates a latest federated model parameter based on the local model parameter;
and the updating unit is used for receiving the latest federal model parameters fed back by the first equipment, updating the model to be trained based on the latest federal model parameters until the local model reaches a preset training end condition, and obtaining a target modeling model corresponding to the model training task to be completed.
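Assuming a simple least-squares model so that the iterative training is concrete, the sketch below walks through one participant's loop: the preset number of local iterations, a (simulated) upload, and an update from the latest federal model parameters. The model, the transport, and the end condition are all stand-ins, not details from the text.

```python
# Minimal sketch of the second device's side of one federated round,
# with a least-squares model as an assumed example.
import numpy as np


def local_train(params: np.ndarray, X: np.ndarray, y: np.ndarray,
                lr: float = 0.1, preset_iters: int = 5) -> np.ndarray:
    """Run the preset number of local iterations (gradient descent)."""
    for _ in range(preset_iters):
        grad = X.T @ (X @ params - y) / len(y)
        params = params - lr * grad
    return params


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(32, 2))
    y = X @ np.array([1.5, -0.5])
    params = np.zeros(2)
    for federated_round in range(20):
        local = local_train(params, X, y)  # preset local iterations
        # With a single participant, the "latest federal model parameters"
        # the first device would feed back are just the local parameters.
        latest = local
        if np.linalg.norm(latest - params) < 1e-6:  # stand-in end condition
            break
        params = latest  # update the local model with the federal parameters
    print(params)  # approaches [1.5, -0.5]
```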
The specific implementation of the federal learning modeling apparatus of the application is basically the same as that of each embodiment of the federal learning modeling method, and is not described herein again.
The embodiment of the application provides a readable storage medium, and the readable storage medium stores one or more programs, which can be executed by one or more processors for implementing the steps of any one of the above-mentioned federal learning modeling methods.
The specific implementation of the readable storage medium of the application is substantially the same as that of each embodiment of the federal learning modeling method, and is not described herein again.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.
Claims (10)
1. A federated learning modeling method, applied to a first device, wherein the federated learning modeling method comprises the following steps:
negotiating and interacting with each second device associated with the first device, determining each model training task to be completed, and determining each model training participating device corresponding to each model training task to be completed in each second device;
and obtaining a model training time period corresponding to each to-be-completed model training task, and coordinating each model training participating device corresponding to each to-be-completed model training task to perform a preset federal learning modeling process based on each model training time period so as to complete each to-be-completed model training task.
2. The federal learning modeling method as claimed in claim 1, wherein the step of determining, in each of the second devices, each model training participant device corresponding to each of the to-be-completed model training tasks includes:
obtaining model training information corresponding to each model training task;
and determining each model training participation device corresponding to each model training task to be completed by carrying out intention confirmation interaction with each second device based on each model training information.
3. The federated learning modeling method of claim 2, wherein the model training information includes model index information,
the step of determining each model training participating device corresponding to each to-be-completed model training task by performing intention confirming interaction with each second device based on each model training information includes:
respectively sending the model index information to each second device, so that each second device determines, among the model training tasks, each target model training task to participate in based on the acquired model training demand information and the model index information, and generates first determination information corresponding to each target model training task;
and determining each model training participating device corresponding to each model training task to be completed based on each first determination information fed back by each second device.
4. The federal learning modeling method as claimed in claim 2, wherein the model training information includes model training time information,
the step of determining each model training participating device corresponding to each to-be-completed model training task by performing intention confirming interaction with each second device based on each model training information includes:
respectively sending the model training time information to each second device, so that each second device determines, among the model training tasks, each target model training task to participate in based on the acquired training time limit information and the model training time information, and generates second determination information corresponding to each target model training task;
and determining each model training participating device corresponding to each model training task to be completed based on each piece of second determination information fed back by each piece of second equipment.
5. The federal learning modeling method as claimed in claim 1, wherein the step of coordinating each model training participating device corresponding to each model training task to be completed to perform a preset federal learning modeling procedure based on each model training time period comprises:
in each model training time period, receiving local model parameters sent by each model training participating device corresponding to the model training time period respectively, and calculating the latest federal model parameters based on a preset aggregation rule;
determining whether the latest federal model parameters meet preset training task end conditions;
if the latest federal model parameters meet the preset training task end conditions, the latest federal model parameters are sent to the second devices so that the second devices can update respective local models;
and if the latest federal model parameters do not meet the preset training task end conditions, respectively sending the latest federal model parameters to each model training participating device, so that each model training participating device updates its respective federal participation model, and recalculating the latest federal model parameters until the latest federal model parameters meet the preset training task end conditions.
6. A federated learning modeling method, applied to a second device, wherein the federated learning modeling method comprises the following steps:
interacting with the first equipment, determining model training information, and acquiring equipment state information so as to determine whether to participate in a to-be-completed model training task corresponding to the model training information based on the equipment state information;
and if the model training task to be completed is participated, executing a preset federal learning modeling process by coordinating and interacting with the first equipment so as to complete the model training task to be completed.
7. The federal learning modeling method as claimed in claim 6, wherein said step of executing a predetermined federal learning modeling procedure through a coordinated interaction with said first device comprises:
determining a model to be trained corresponding to the model training task to be completed, and performing iterative training on the model to be trained until the model to be trained reaches a preset iteration number, and acquiring local model parameters corresponding to the model to be trained;
sending the local model parameters to the first device, so that the first device can calculate the latest federal model parameters based on the local model parameters;
and receiving the latest federal model parameters fed back by the first equipment, updating the model to be trained based on the latest federal model parameters until the local model reaches a preset training end condition, and obtaining a target modeling model corresponding to the model training task to be completed.
8. A federated learning modeling apparatus, characterized in that the federated learning modeling apparatus comprises:
the negotiation module is used for carrying out negotiation interaction with each second device associated with the first device, determining each model training task to be completed, and determining each model training participating device corresponding to each model training task to be completed in each second device;
and the coordination module is used for acquiring model training time periods corresponding to the model training tasks to be completed, coordinating each model training participating device corresponding to each model training task to be completed to carry out a preset federal learning modeling process based on each model training time period, so as to complete each model training task to be completed.
9. A federal learning modeling apparatus, characterized in that the federal learning modeling apparatus comprises: a memory, a processor, and a program stored on the memory for implementing the federal learning modeling method,
the memory is used for storing a program for realizing the federal learning modeling method;
the processor is configured to execute a program implementing the federal learning modeling method to implement the steps of the federal learning modeling method as claimed in any of claims 1 to 5 or 6 to 7.
10. A readable storage medium having stored thereon a program for implementing a federal learning modeling method, the program being executed by a processor to implement the steps of the federal learning modeling method as claimed in any of claims 1 to 5 or 6 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010360246.5A CN111538598A (en) | 2020-04-29 | 2020-04-29 | Federal learning modeling method, device, equipment and readable storage medium |
PCT/CN2021/090823 WO2021219053A1 (en) | 2020-04-29 | 2021-04-29 | Federated learning modeling method, apparatus and device, and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010360246.5A CN111538598A (en) | 2020-04-29 | 2020-04-29 | Federal learning modeling method, device, equipment and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111538598A (en) | 2020-08-14 |
Family
ID=71979068
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010360246.5A Pending CN111538598A (en) | 2020-04-29 | 2020-04-29 | Federal learning modeling method, device, equipment and readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111538598A (en) |
WO (1) | WO2021219053A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114139731A (en) * | 2021-12-03 | 2022-03-04 | 深圳前海微众银行股份有限公司 | Longitudinal federated learning modeling optimization method, apparatus, medium, and program product |
CN114168295A (en) * | 2021-12-10 | 2022-03-11 | 深圳致星科技有限公司 | Hybrid architecture system and task scheduling method based on historical task effect |
CN114492179B (en) * | 2022-01-13 | 2024-09-17 | 工赋(青岛)科技有限公司 | Information processing system, method, apparatus, device, and storage medium |
CN115345317B (en) * | 2022-08-05 | 2023-04-07 | 北京交通大学 | Fair reward distribution method facing federal learning based on fairness theory |
CN115577876A (en) * | 2022-09-27 | 2023-01-06 | 广西综合交通大数据研究院 | Network freight platform freight note-taking punctual prediction method based on block chain and federal learning |
CN116055335B (en) * | 2022-12-21 | 2023-12-19 | 深圳信息职业技术学院 | Internet of vehicles intrusion detection model training method based on federal learning, intrusion detection method and equipment |
CN115987985B (en) * | 2022-12-22 | 2024-02-27 | 中国联合网络通信集团有限公司 | Model collaborative construction method, center cloud, edge node and medium |
CN116186341B (en) * | 2023-04-25 | 2023-08-15 | 北京数牍科技有限公司 | Federal graph calculation method, federal graph calculation device, federal graph calculation equipment and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180089587A1 (en) * | 2016-09-26 | 2018-03-29 | Google Inc. | Systems and Methods for Communication Efficient Distributed Mean Estimation |
CN110263908B (en) * | 2019-06-20 | 2024-04-02 | 深圳前海微众银行股份有限公司 | Federal learning model training method, apparatus, system and storage medium |
CN111538598A (en) * | 2020-04-29 | 2020-08-14 | 深圳前海微众银行股份有限公司 | Federal learning modeling method, device, equipment and readable storage medium |
- 2020-04-29: application CN202010360246.5A filed in China; published as CN111538598A (status: pending)
- 2021-04-29: international application PCT/CN2021/090823 filed; published as WO2021219053A1 (application filing)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190171978A1 (en) * | 2017-12-06 | 2019-06-06 | Google Llc | Systems and Methods for Distributed On-Device Learning with Data-Correlated Availability |
CN109670684A (en) * | 2018-12-03 | 2019-04-23 | 北京顺丰同城科技有限公司 | The dispatching method and electronic equipment of goods stock based on time window |
CN110442457A (en) * | 2019-08-12 | 2019-11-12 | 北京大学深圳研究生院 | Model training method, device and server based on federation's study |
CN110598870A (en) * | 2019-09-02 | 2019-12-20 | 深圳前海微众银行股份有限公司 | Method and device for federated learning |
Non-Patent Citations (2)
Title |
---|
Yong Cheng: "A Communication Efficient Collaborative Learning Framework for Distributed Features", arXiv.org, 24 December 2019 (2019-12-24) *
Xie Feng; Bian Jianling; Wang Nan; Zheng Qian: "Application of Federated Learning in the Artificial Intelligence Field of the Ubiquitous Power Internet of Things", China High-Tech, no. 23, 1 December 2019 (2019-12-01) *
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021219053A1 (en) * | 2020-04-29 | 2021-11-04 | 深圳前海微众银行股份有限公司 | Federated learning modeling method, apparatus and device, and readable storage medium |
US11588907B2 (en) | 2020-08-21 | 2023-02-21 | Huawei Technologies Co., Ltd. | System and methods for supporting artificial intelligence service in a network |
US11283609B2 (en) | 2020-08-21 | 2022-03-22 | Huawei Technologies Co., Ltd. | Method and apparatus for supporting secure data routing |
WO2022037239A1 (en) * | 2020-08-21 | 2022-02-24 | Huawei Technologies Co.,Ltd. | System and methods for supporting artificial intelligence service in a network |
CN112164224A (en) * | 2020-09-29 | 2021-01-01 | 杭州锘崴信息科技有限公司 | Traffic information processing system, method, device and storage medium for information security |
CN112232518B (en) * | 2020-10-15 | 2024-01-09 | 成都数融科技有限公司 | Lightweight distributed federal learning system and method |
CN112232519B (en) * | 2020-10-15 | 2024-01-09 | 成都数融科技有限公司 | Joint modeling method based on federal learning |
CN112232518A (en) * | 2020-10-15 | 2021-01-15 | 成都数融科技有限公司 | Lightweight distributed federated learning system and method |
CN112232519A (en) * | 2020-10-15 | 2021-01-15 | 成都数融科技有限公司 | Joint modeling method based on federal learning |
WO2022108529A1 (en) * | 2020-11-19 | 2022-05-27 | 脸萌有限公司 | Model construction method and apparatus, and medium and electronic device |
CN114548472A (en) * | 2020-11-26 | 2022-05-27 | 新智数字科技有限公司 | Resource allocation method, device, readable medium and electronic equipment |
CN112650583A (en) * | 2020-12-23 | 2021-04-13 | 新智数字科技有限公司 | Resource allocation method, device, readable medium and electronic equipment |
CN112700013A (en) * | 2020-12-30 | 2021-04-23 | 深圳前海微众银行股份有限公司 | Parameter configuration method, device, equipment and storage medium based on federal learning |
WO2022156910A1 (en) * | 2021-01-25 | 2022-07-28 | Nokia Technologies Oy | Enablement of federated machine learning for terminals to improve their machine learning capabilities |
CN112994981B (en) * | 2021-03-03 | 2022-05-10 | 上海明略人工智能(集团)有限公司 | Method and device for adjusting time delay data, electronic equipment and storage medium |
CN113011602B (en) * | 2021-03-03 | 2023-05-30 | 中国科学技术大学苏州高等研究院 | Federal model training method and device, electronic equipment and storage medium |
CN113011602A (en) * | 2021-03-03 | 2021-06-22 | 中国科学技术大学苏州高等研究院 | Method and device for training federated model, electronic equipment and storage medium |
CN112994981A (en) * | 2021-03-03 | 2021-06-18 | 上海明略人工智能(集团)有限公司 | Method and device for adjusting time delay data, electronic equipment and storage medium |
CN113191090A (en) * | 2021-05-31 | 2021-07-30 | 中国银行股份有限公司 | Block chain-based federal modeling method and device |
CN113469377A (en) * | 2021-07-06 | 2021-10-01 | 建信金融科技有限责任公司 | Federal learning auditing method and device |
WO2023111150A1 (en) * | 2021-12-16 | 2023-06-22 | Nokia Solutions And Networks Oy | Machine-learning agent parameter initialization in wireless communication network |
WO2023125760A1 (en) * | 2021-12-30 | 2023-07-06 | 维沃移动通信有限公司 | Model training method and apparatus, and communication device |
WO2023143082A1 (en) * | 2022-01-26 | 2023-08-03 | 展讯通信(上海)有限公司 | User device selection method and apparatus, and chip and module device |
WO2023148012A1 (en) * | 2022-02-02 | 2023-08-10 | Nokia Solutions And Networks Oy | Iterative initialization of machine-learning agent parameters in wireless communication network |
Also Published As
Publication number | Publication date |
---|---|
WO2021219053A1 (en) | 2021-11-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111538598A (en) | Federal learning modeling method, device, equipment and readable storage medium | |
CN110782042B (en) | Method, device, equipment and medium for combining horizontal federation and vertical federation | |
CN110198244B (en) | Heterogeneous cloud service-oriented resource configuration method and device | |
CN109343942B (en) | Task scheduling method based on edge computing network | |
CN113157422A (en) | Cloud data center cluster resource scheduling method and device based on deep reinforcement learning | |
CN111242316A (en) | Longitudinal federated learning model training optimization method, device, equipment and medium | |
CN111428884A (en) | Federal modeling method, device and readable storage medium based on forward law | |
US20240176906A1 (en) | Methods, apparatuses, and systems for collaboratively updating model by multiple parties for implementing privacy protection | |
CN111428883A (en) | Federal modeling method, device and readable storage medium based on backward law | |
CN112486658B (en) | Task scheduling method and device for task scheduling | |
CN112003903A (en) | Cluster task scheduling method and device and storage medium | |
WO2021217340A1 (en) | Ai-based automatic design method and apparatus for universal smart home scheme | |
CN106293947A (en) | GPU CPU mixing resource allocation system and method under virtualization cloud environment | |
CN111652382B (en) | Data processing method, device and equipment based on block chain and storage medium | |
CN114612212A (en) | Business processing method, device and system based on risk control | |
CN113946389A (en) | Federal learning process execution optimization method, device, storage medium, and program product | |
CN108289115B (en) | Information processing method and system | |
CN111401566A (en) | Machine learning training method and system | |
CN115129481B (en) | Computing resource allocation method and device and electronic equipment | |
CN114139731A (en) | Longitudinal federated learning modeling optimization method, apparatus, medium, and program product | |
US20230214261A1 (en) | Computing power sharing-related exception reporting and handling methods and devices, storage medium, and terminal apparatus | |
US11281890B2 (en) | Method, system, and computer-readable media for image correction via facial ratio | |
CN113568730A (en) | Constraint scheduling method and device for heterogeneous tasks and related products | |
CN113095911A (en) | Order processing method and device, electronic equipment and computer readable medium | |
CN103916426B (en) | A kind of paxos examples update method, equipment and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||