WO2021219053A1 - Federated learning modeling method, apparatus, device, and readable storage medium - Google Patents
Federated learning modeling method, apparatus, device, and readable storage medium
- Publication number
- WO2021219053A1 (PCT/CN2021/090823)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- model training
- training
- completed
- federated
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Definitions
- This application relates to the field of artificial intelligence in financial technology (Fintech), and in particular to a federated learning modeling method, apparatus, device, and readable storage medium.
- The main purpose of this application is to provide a federated learning modeling method, apparatus, device, and readable storage medium, aiming to solve the technical problem in the prior art of low utilization of the coordinator's computing resources in a federated learning system.
- To achieve the above purpose, this application provides a federated learning modeling method, the federated learning modeling method being applied to a first device and including:
- negotiating and interacting with each second device associated with the first device to determine each model training task to be completed, and determining, among the second devices, the model training participating devices corresponding to each model training task to be completed;
- obtaining the model training time period corresponding to each model training task to be completed, and, based on each model training time period, coordinating the model training participating devices corresponding to each model training task to be completed to perform a preset federated learning modeling process, so as to complete each model training task to be completed.
- To achieve the above purpose, the present application also provides a federated learning modeling method, the federated learning modeling method being applied to a second device and including:
- interacting with the first device to determine model training information, and acquiring device status information, so as to determine, based on the device status information, whether to participate in the model training task to be completed corresponding to the model training information;
- if participating in the model training task to be completed, executing the preset federated learning modeling process through coordinated interaction with the first device, so as to complete the model training task to be completed.
- The present application also provides a federated learning modeling apparatus, the federated learning modeling apparatus being a virtual apparatus applied to a first device, and the federated learning modeling apparatus including:
- a negotiation module, used to negotiate and interact with each second device associated with the first device, determine each model training task to be completed, and determine, among the second devices, the model training participating devices corresponding to each model training task to be completed;
- a coordination module, used to obtain the model training time period corresponding to each model training task to be completed and, based on each model training time period, coordinate the model training participating devices corresponding to each model training task to be completed to perform a preset federated learning modeling process, so as to complete each model training task to be completed.
- To achieve the above purpose, the present application also provides a federated learning modeling apparatus, the federated learning modeling apparatus being applied to a second device, and the federated learning modeling apparatus further including:
- an interaction module, configured to interact with the first device, determine model training information, and obtain device status information, so as to determine, based on the device status information, whether to participate in the model training task to be completed corresponding to the model training information;
- a federated learning modeling module, configured to, if participating in the model training task to be completed, perform a preset federated learning modeling process through coordinated interaction with the first device, so as to complete the model training task to be completed.
- The present application also provides a federated learning modeling device. The federated learning modeling device is a physical device and includes a memory, a processor, and a program of the federated learning modeling method stored on the memory and executable on the processor; when the program of the federated learning modeling method is executed by the processor, the steps of the federated learning modeling method described above can be implemented.
- The present application also provides a readable storage medium. The readable storage medium stores a program for implementing the federated learning modeling method, and when the program of the federated learning modeling method is executed by a processor, the steps of the federated learning modeling method described above are implemented.
- This application determines each model training task to be completed through negotiation and interaction with each second device associated with the first device, determines, among the second devices, the model training participating devices corresponding to each model training task to be completed, then obtains the model training time period corresponding to each task and, based on each model training time period, coordinates the participating devices corresponding to each task to perform a preset federated learning modeling process, so as to complete each task. That is, this application provides a method for performing federated learning in a time-division manner: before federated learning modeling is performed, the model training tasks that need to be executed are determined by interacting with each of the second devices, and the model training participating devices and model training time period corresponding to each task are then determined; the coordinator can then, based on the model training time periods, separately coordinate the participating devices corresponding to each model training task to be completed to perform the preset federated learning modeling process to complete each task. In other words, while the model participating devices of one training task are performing local iterative training, the coordinator can coordinate the participating devices corresponding to other model training tasks to be completed to perform federated learning modeling, thereby avoiding the situation in which the coordinator occupies computing resources without executing computing tasks while the federated participants perform local iterative training; the goal of making full use of the coordinator's computing resources is achieved and their utilization improved, which solves the technical problem of low utilization of the coordinator's computing resources in a federated learning system.
- Fig. 1 is a schematic flowchart of the first embodiment of the federated learning modeling method of this application;
- Fig. 2 is a schematic flowchart of the second embodiment of the federated learning modeling method of this application;
- Fig. 3 is a schematic diagram of the device structure of the hardware operating environment involved in the solutions of the embodiments of this application.
- The embodiment of the present application provides a federated learning modeling method. In the first embodiment of the federated learning modeling method of this application, referring to Fig. 1, the federated learning modeling method is applied to the first device, and the federated learning modeling method includes:
- Step S10: negotiating and interacting with each second device associated with the first device to determine each model training task to be completed, and determining, among the second devices, the model training participating devices corresponding to each model training task to be completed;
- In this embodiment, it should be noted that one model training task to be completed corresponds to one or more model training participating devices; the first device is the coordinator of horizontal federated learning, and the second device is a participant of horizontal federated learning; a model training participating device is a second device that participates in the training task to be completed; the model training task to be completed is a task of performing model training based on horizontal federated learning, where one model training task to be completed can be used to train one or more target models, and one target model can also be obtained by executing one or more model training tasks to be completed.
- Optionally, the second device may choose to execute the model training task to be completed in a preset trusted execution environment, for example, Intel SGX (Intel Software Guard Extensions).
- Negotiation and interaction are performed with each second device associated with the first device to determine each model training task to be completed, and to determine, among the second devices, the model training participating devices corresponding to each model training task to be completed. Specifically, the first device negotiates and interacts with each associated second device to determine each model training task to be completed and the model training information of each such task, and then, based on the model training information, determines, among the second devices, the model training participating devices corresponding to each model training task to be completed.
- Specifically, in step S10, the step of determining, among the second devices, the model training participating devices corresponding to each model training task to be completed includes:
- Step S11: acquiring model training information corresponding to each of the model training tasks;
- In this embodiment, it should be noted that the model training information includes model name information, a model training time period, and the like, where the model name information is an identifier of the corresponding model to be trained, for example, a code or a character string, and the model training time period is the estimated time required for model training.
- Step S12: based on each of the model training information, determining each of the model training participating devices corresponding to each of the model training tasks to be completed through willingness confirmation interaction with each of the second devices.
- In this embodiment, each of the model training participating devices corresponding to each model training task to be completed is determined through willingness confirmation interaction with each of the second devices. Specifically, each piece of model training information is sent to each second device, so that each second device acquires its device status information and, based on that device status information, determines whether to participate in the model training task to be completed corresponding to each piece of model training information; if a second device determines to participate in the model training task to be completed, it feeds back determination information corresponding to that task to the first device; the first device then receives each piece of determination information, identifies the second device corresponding to each piece of determination information as a model training participating device, and counts the one or more model training participating devices corresponding to each model training task to be completed.
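- To make the willingness confirmation interaction concrete, the following is a minimal first-device-side sketch; the message layout and the `TrainingTask`, `broadcast`, and `receive_confirmations` names are illustrative assumptions and are not defined by this application.

```python
# Minimal sketch of the willingness confirmation interaction (first-device side).
# TrainingTask, broadcast, and receive_confirmations are hypothetical helpers,
# not APIs defined by the patent text.
from dataclasses import dataclass, field

@dataclass
class TrainingTask:
    index: str                      # model index information (a code or string)
    time_period: tuple              # (start, end) of the model training time period
    participants: list = field(default_factory=list)

def recruit_participants(tasks, second_devices, broadcast, receive_confirmations):
    """Send each task's training information to every second device and record
    which devices feed back determination information for which task."""
    for task in tasks:
        broadcast(second_devices, {"index": task.index,
                                   "time_period": task.time_period})
    for device_id, task_index in receive_confirmations():
        for task in tasks:
            if task.index == task_index:
                task.participants.append(device_id)   # device becomes a participant
    return tasks
```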
- The model training information includes model index information.
- Based on each of the model training information, the step of determining each of the model training participating devices corresponding to each of the model training tasks to be completed through willingness confirmation interaction with each of the second devices includes:
- Step A10: sending each piece of the model index information to each of the second devices, so that each second device, based on the acquired model training requirement information and each piece of model index information, determines, among the model training tasks, each target model training task it will participate in, and generates first determination information corresponding to each target model training task;
- In this embodiment, it should be noted that the model index information is identification information of the corresponding model training task to be completed, for example, a code or a character string. The first determination information is information indicating that the second device has determined to participate in the model training task to be completed corresponding to the model index information, where the first determination information may be willingness information replied separately by the second device, local model parameter information, local model gradient information, or the like, to indicate that the second device is willing to participate in the corresponding model training task to be completed; each training task to be completed corresponds to a model training time period for executing that task.
- Each piece of model index information is sent to each of the second devices, so that each second device, based on the acquired model training requirement information and each piece of model index information, determines the target model training tasks it will participate in and generates the corresponding first determination information. Specifically, within a preset time period before the start of each model training time period, the model index information corresponding to that model training time period is broadcast to each second device, so that the second device determines the corresponding model training task to be completed based on the model index information and, based on its acquired current device operating state (where the device operating state includes the currently available computing resources), determines whether to participate in that task; if the second device determines to participate, it feeds back the first determination information to the first device to indicate its participation; if it determines not to participate, the model index information is ignored and the device waits to receive the next piece of model index information.
- Step A20: determining each of the model training participating devices corresponding to each of the model training tasks to be completed based on the first determination information fed back by each of the second devices.
- In this embodiment, each of the model training participating devices corresponding to each model training task to be completed is determined based on the first determination information fed back by each of the second devices. Specifically, before the start of each model training time period, each piece of first determination information corresponding to the model training task to be completed that corresponds to the model index information is received from the second devices, and each second device that sent a piece of first determination information serves as a model training participating device, where one piece of first determination information corresponds to one second device, which corresponds to one model training participating device.
- The model training information includes model training time information.
- Based on each of the model training information, the step of determining each of the model training participating devices corresponding to each of the model training tasks to be completed through willingness confirmation interaction with each of the second devices includes:
- Step B10: sending each piece of the model training time information to each of the second devices, so that each second device, based on the acquired training time limit information and each piece of model training time information, determines, among the model training tasks, each target model training task it will participate in, and generates second determination information corresponding to each target model training task;
- In this embodiment, it should be noted that the second determination information is information indicating that the second device has determined to participate in the model training task to be completed corresponding to the model training time information; each training task to be completed corresponds to a model training time period for executing that task, and the second determination information is sent by the second device to the first device before the model training time period corresponding to the model training task to be completed that corresponds to the second determination information begins.
- Each piece of model training time information is sent to each of the second devices, so that each second device, based on the acquired training time limit information and each piece of model training time information, determines the target model training tasks it will participate in and generates the corresponding second determination information. Specifically, before the model training time period corresponding to each model training task to be completed, each piece of model training time information is sent to each second device, so that each second device acquires its training time limit information, where the training time limit information indicates whether the second device has free time and sufficient computing resources within the model training time period to participate in the model training task to be completed; each second device then determines, based on the training time limit information and the model training time information, whether to participate in the model training task to be completed corresponding to the model training time information; if it determines to participate, it feeds back the second determination information to the first device to indicate its participation; if it determines not to participate, the model training time information is ignored and the device waits to receive the next piece of model training time information.
- Step B20: determining each of the model training participating devices corresponding to each of the model training tasks to be completed based on the second determination information fed back by each of the second devices.
- In this embodiment, each of the model training participating devices corresponding to each model training task to be completed is determined based on the second determination information fed back by each of the second devices. Specifically, before the start of each model training time period, each piece of second determination information corresponding to the model training task to be completed that corresponds to the model training time information is received from the second devices, and each second device that sent a piece of second determination information serves as a model training participating device, where one piece of second determination information corresponds to one second device, which corresponds to one model training participating device.
- Step S20: obtaining the model training time period corresponding to each model training task to be completed, and, based on each model training time period, coordinating each of the model training participating devices corresponding to each model training task to be completed to perform the preset federated learning modeling process, so as to complete each model training task to be completed.
- In this embodiment, it should be noted that the preset federated learning modeling process is the process of performing federated learning, and each model training time period includes a first model training time period and a second model training time period.
- The model training time period corresponding to each model training task to be completed is obtained, and, based on each model training time period, the model training participating devices corresponding to each task are coordinated to perform the preset federated learning modeling process so as to complete each task. Specifically, the model training time period corresponding to each model training task to be completed is obtained, and within each model training time period the local model parameters sent by the corresponding model training participating devices are received; the latest federated model parameters corresponding to the local model parameters are then calculated based on preset aggregation rules, where the preset aggregation rules include weighted averaging, summation, and the like. It is then determined whether the latest federated model parameters meet a preset training task completion condition: if they do, the latest federated model parameters are sent to each of the second devices, so that each second device updates its own local model based on them; if they do not, the latest federated model parameters are sent to each of the model training participating devices, so that each participating device updates its own local model, performs federated learning again based on the updated local model, and the latest federated model parameters are recalculated until they meet the training task completion condition, where the training task completion condition includes convergence of the loss function, the model reaching the maximum number of iterations, and the like.
- If the model training time periods have an intersection time period, then within that intersection the first device determines the order in which it calculates the latest federated model parameters of the respective tasks according to the order in which it finishes receiving the local model parameters corresponding to each task. For example, assuming the model training tasks to be completed include task A and task B, if the first device has received all the local model parameters sent by task A's participating devices at 9:07 and all the local model parameters sent by task B's participating devices at 9:09, the first device first calculates the latest federated model parameters corresponding to task A and then calculates those corresponding to task B.
- In an embodiment, the first device may choose to execute, in the preset trusted execution environment, the step of calculating the latest federated model parameters corresponding to the local model parameters based on the preset aggregation rules.
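- As an illustration of the time-division ordering described above, the sketch below aggregates tasks in the order in which their complete sets of local model parameters arrive (so a task finishing at 9:07 is aggregated before one finishing at 9:09); the tuple layout and the `aggregate_fn` parameter are assumptions for illustration only.

```python
# Illustrative scheduler: when training time periods intersect, aggregate each
# task's parameters in the order the full parameter sets were received.
import heapq

def schedule_aggregations(completion_events, aggregate_fn):
    """completion_events: iterable of (finish_time, task_id, local_params);
    aggregate_fn computes the latest federated model parameters for one task."""
    queue = list(completion_events)
    heapq.heapify(queue)                          # earliest completion first
    results = {}
    while queue:
        _, task_id, local_params = heapq.heappop(queue)
        results[task_id] = aggregate_fn(local_params)
    return results

# Mirroring the patent's task A / task B example:
# schedule_aggregations([("09:07", "A", params_a), ("09:09", "B", params_b)], aggregate)
```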
- Specifically, the step of separately coordinating, based on each of the model training time periods, each of the model training participating devices corresponding to each of the model training tasks to be completed to perform the preset federated learning modeling process includes:
- Step S21: within each model training time period, respectively receiving the local model parameters sent by each of the model training participating devices corresponding to that model training time period, and calculating the latest federated model parameters based on preset aggregation rules;
- In this embodiment, it should be noted that the local model parameters include model network parameters, gradient information, and the like, where the model network parameters are the network parameters of the local model after the model training participating device has iteratively trained its own local model a preset number of times; for example, if the local model is the linear model Y = β0 + β1X1 + β2X2 + ... + βnXn, the network parameters are the vector (β0, β1, β2, ..., βn).
- Within each model training time period, the local model parameters sent by each of the corresponding model training participating devices are received, and the latest federated model parameters are calculated based on preset aggregation rules. Specifically, within each model training time period, the local model parameters sent by the corresponding model training participating devices are received, where each set of local model parameters is obtained by the model training participating device performing a preset number of iterative training rounds on the federated participation model corresponding to those local model parameters, the federated participation model being the local model of that participating device; then, based on the preset aggregation rules, the local model parameters are weighted and averaged to obtain the latest federated model parameters.
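- A minimal sketch of the weighted averaging mentioned above, assuming each participating device reports a parameter vector together with a weight (for example its local sample count); weighting by sample count is a FedAvg-style assumption, since this application only states that the aggregation rules include weighted averaging, summation, and the like.

```python
# Illustrative weighted-average aggregation of local model parameters.
def aggregate(local_params):
    """local_params: list of (weight, parameter_vector) pairs, e.g. with the
    weight being the number of local training samples on that device."""
    total_weight = sum(weight for weight, _ in local_params)
    dim = len(local_params[0][1])
    latest = [0.0] * dim
    for weight, vector in local_params:
        for i, value in enumerate(vector):
            latest[i] += (weight / total_weight) * value
    return latest   # the latest federated model parameters
```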
- Step S22: determining whether the latest federated model parameters meet a preset training task termination condition;
- In this embodiment, it should be noted that the preset training task termination condition includes training reaching the maximum number of iterations, convergence of the loss function during training, and the like. Specifically, if the difference between the latest federated model parameters and the previous round's latest federated model parameters is smaller than a preset difference threshold, the latest federated model parameters are determined to have reached the preset training task termination condition; additionally, it may be set that the termination condition is determined to be reached only when this difference falls below the preset threshold for a preset number of consecutive rounds.
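- The parameter-difference test just described can be sketched as follows; the infinity norm and the `patience` mechanism for consecutive rounds are illustrative assumptions.

```python
# Illustrative termination check: the task ends when the change in the latest
# federated model parameters stays below a threshold for `patience` consecutive
# rounds, or when a maximum number of rounds is reached.
def training_finished(latest, previous, history, round_no=0,
                      threshold=1e-4, patience=3, max_rounds=100):
    """latest/previous: parameter vectors; history: list of per-round booleans."""
    if round_no >= max_rounds:
        return True
    diff = max(abs(a - b) for a, b in zip(latest, previous))  # infinity norm
    history.append(diff < threshold)
    return len(history) >= patience and all(history[-patience:])
```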
- Step S23: if the latest federated model parameters meet the preset training task termination condition, sending the latest federated model parameters to each of the second devices, so that each of the second devices updates its own local model;
- In this embodiment, if the latest federated model parameters meet the preset training task termination condition, the latest federated model parameters are sent to each of the second devices so that each second device updates its own local model. Specifically, if the termination condition is met, the latest federated model parameters are sent to each second device, so that each second device, based on the latest federated model parameters, replaces and updates the corresponding model parameters in its local model with the latest federated model parameters.
- Step S24: if the latest federated model parameters do not meet the preset training task termination condition, sending the latest federated model parameters to each of the model training participating devices, so that each of the model participating devices updates its own federated participation model and the latest federated model parameters are recalculated, until the latest federated model parameters meet the preset training task termination condition.
- In this embodiment, if the latest federated model parameters do not meet the preset training task termination condition, the latest federated model parameters are sent to each of the model training participating devices, so that each participating device updates its own federated participation model and the latest federated model parameters are recalculated until the termination condition is met. Specifically, if the latest federated model parameters do not meet the preset training task termination condition, they are sent to each model training participating device, so that each participating device updates its own federated participation model based on the latest federated model parameters and performs iterative training on the updated federated participation model; when the number of iterative training rounds reaches the preset number, the local model parameters of the iteratively trained federated participation model are re-acquired, and the recalculated local model parameters are sent to the first device, so that the first device recalculates the latest federated model parameters based on the recalculated local model parameters sent by the second devices and the preset aggregation rules, until the latest federated model parameters meet the preset training task termination condition.
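- Putting steps S21 to S24 together, one task's coordinator-side loop might look like the sketch below; `collect_local_params` and `send_to` stand in for whatever transport the system uses and, like the loop itself, are assumptions rather than this application's prescribed implementation.

```python
# Illustrative coordinator-side loop for a single model training task.
def coordinate_task(participants, collect_local_params, send_to,
                    aggregate_fn, finished_fn):
    previous, history, round_no = None, [], 0
    while True:
        local_params = collect_local_params(participants)        # step S21
        latest = aggregate_fn(local_params)
        if previous is not None and finished_fn(latest, previous, history,
                                                round_no=round_no):   # step S22
            for device in participants:                          # step S23
                send_to(device, latest)                          # final parameters
            return latest
        for device in participants:                              # step S24
            send_to(device, latest)                              # next training round
        previous, round_no = latest, round_no + 1
```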
- In this embodiment, through negotiation and interaction with each second device associated with the first device, each model training task to be completed is determined, and the model training participating devices corresponding to each task are determined among the second devices; the model training time period corresponding to each task is then obtained, and, based on each model training time period, the participating devices corresponding to each task are coordinated to perform the preset federated learning modeling process so as to complete each task. That is, this embodiment provides a method for performing federated learning in a time-division manner: before federated learning modeling is performed, the model training tasks that need to be executed are determined by interacting with each of the second devices, and the model training participating devices and model training time period corresponding to each task are then determined; the coordinator can then, based on the model training time periods, separately coordinate the participating devices of each task to perform the preset federated learning modeling process to complete each task. In other words, while the model participating devices of one training task are performing local iterative training, the coordinator can coordinate the participating devices corresponding to other model training tasks to be completed to perform federated learning modeling, thereby avoiding the situation in which the coordinator occupies and consumes computing resources without executing computing tasks while the federated participants perform local iterative training. The goal of making full use of the coordinator's computing resources is thus achieved and their utilization improved, which solves the technical problem of low utilization of the coordinator's computing resources in a federated learning system.
- Further, referring to Fig. 2, based on the first embodiment of this application, in another embodiment the federated learning modeling method is applied to a second device, and the federated learning modeling method includes:
- Step C10: interacting with the first device to determine model training information, and acquiring device status information, so as to determine, based on the device status information, whether to participate in the model training task to be completed corresponding to the model training information;
- In this embodiment, the model training information includes model index information and model training time information, and the device status information includes the available computing resources of the second device, where the available computing resources are the computing resources that the second device can call during the model training time period corresponding to the model training task to be completed.
- Before step C10 is performed, the second device negotiates and interacts with the first device to determine each model training task to be completed.
- The second device interacts with the first device to determine the model training information and acquires its device status information, so as to determine, based on the device status information, whether to participate in the model training task to be completed corresponding to the model training information. Specifically, the second device negotiates and interacts with the first device, acquires the model training information, and determines its available computing resources; it then judges whether the available computing resources satisfy the model training task to be completed corresponding to the model training information: if they do, it determines to participate in the task; if they do not, it determines not to participate. For example, if the model training task to be completed requires 50% of all computing resources of the second device, while the available computing resources the second device can call amount to 40%, the available computing resources do not satisfy the task, and the device determines not to participate in it.
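- The participation decision above reduces to a comparison of required and available resources; the sketch below makes it explicit, with the single-percentage resource model being an illustrative simplification.

```python
# Illustrative participation decision based on available computing resources.
def decide_participation(required_share, available_share):
    """required_share: fraction of the device's resources the task needs;
    available_share: fraction callable within the model training time period."""
    return available_share >= required_share

# The example above: the task needs 50% but only 40% can be called -> decline.
assert decide_participation(0.50, 0.40) is False
```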
- Step C20: if participating in the model training task to be completed, executing a preset federated learning modeling process through coordinated interaction with the first device, so as to complete the model training task to be completed.
- In this embodiment, if the second device participates in the model training task to be completed, it executes the preset federated learning modeling process through coordinated interaction with the first device so as to complete the task. Specifically, if it participates in the task, it determines the model to be trained corresponding to the task and performs iterative training on the model to be trained; when the model to be trained reaches the preset number of iterative training rounds, it obtains the local model parameters of the iteratively trained model and sends them to the first device, so that the first device calculates the latest federated model parameters based on the local model parameters sent by each second device and broadcasts the latest federated model parameters to each second device; the second device then receives the latest federated model parameters, updates the model to be trained based on them, and judges whether the updated model to be trained meets a preset iteration end condition: if it does, the model training task to be completed is determined to be completed; if it does not, iterative training is performed on the model to be trained again, so that the first device recalculates the latest federated model parameters and the model to be trained is updated anew, until the updated model to be trained meets the preset iteration end condition.
- Specifically, in step C20, the step of executing the preset federated learning modeling process through coordinated interaction with the first device includes:
- Step C21: determining the model to be trained corresponding to the model training task to be completed, performing iterative training on the model to be trained until it reaches a preset number of iterations, and obtaining the local model parameters corresponding to the model to be trained;
- In this embodiment, the model to be trained corresponding to the model training task to be completed is determined and iteratively trained until it reaches the preset number of iterations, and its corresponding local model parameters are obtained. Specifically, the model to be trained is determined and updated through iterative training until the preset number of iterations is reached, and the local model parameters of the iteratively trained and updated model are extracted.
- Step C22: sending the local model parameters to the first device, so that the first device calculates the latest federated model parameters based on the local model parameters;
- In this embodiment, the local model parameters are sent to the first device so that the first device calculates the latest federated model parameters based on them. Specifically, the local model parameters are sent to the first device, so that the first device, based on the local model parameters sent by the associated second devices and through preset aggregation rules, calculates the latest federated model parameters corresponding to the local model parameters, where the preset aggregation rules include weighted averaging, summation, and the like.
- Step C23: receiving the latest federated model parameters fed back by the first device, and, based on the latest federated model parameters, updating the model to be trained until the local model reaches a preset training end condition, to obtain the target modeling model corresponding to the model training task to be completed.
- In this embodiment, the latest federated model parameters fed back by the first device are received, and the model to be trained is updated based on them until the local model reaches the preset training end condition, obtaining the target modeling model corresponding to the model training task to be completed. Specifically, the latest federated model parameters fed back by the first device are received, the local model parameters in the model to be trained are replaced and updated with the latest federated model parameters to obtain the replaced and updated model to be trained, and it is judged whether the replaced and updated model meets a preset iterative training end condition: if it does, the replaced and updated model to be trained is taken as the target modeling model; if it does not, iterative training is performed on the model to be trained again so as to replace and update it, until the replaced and updated model to be trained meets the preset iterative training end condition.
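- Steps C21 to C23 amount to the participant-side loop sketched below; `train_k_iterations`, `send_to_coordinator`, `receive_latest`, and the model's parameter accessors are hypothetical stand-ins, since this application does not prescribe a trainer or transport API.

```python
# Illustrative participant-side loop (steps C21-C23).
def participate(model, k_iterations, train_k_iterations,
                send_to_coordinator, receive_latest, end_condition):
    while True:
        model = train_k_iterations(model, k_iterations)   # C21: local iterations
        send_to_coordinator(model.parameters())           # C22: upload local params
        latest = receive_latest()                         # C23: latest federated params
        model.load_parameters(latest)                     # replace-and-update
        if end_condition(model):
            return model                                  # the target modeling model
```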
- In this embodiment, through interaction with the first device, the model training information is determined and the device status information is acquired, so as to determine, based on the device status information, whether to participate in the model training task to be completed corresponding to the model training information; then, if the device participates in the task, the preset federated learning modeling process is executed through coordinated interaction with the first device so as to complete the task. That is, this embodiment provides a federated-learning-based modeling method: before federated learning modeling is performed, the second device determines, by negotiating and interacting with the first device and acquiring its own device operating status, whether to participate in the model training task to be completed corresponding to the model training information; if it determines to participate, it can coordinate and interact with the first device and execute the preset federated learning modeling process to complete the task. In other words, before each round of federated learning modeling, the second device can independently choose whether to participate in the model training task to be completed, which lays a foundation for solving the technical problem of low utilization of the coordinator's computing resources in a federated learning system.
- FIG. 3 is a schematic diagram of the device structure of the hardware operating environment involved in the solution of the embodiment of the present application.
- the federated learning modeling device may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002.
- the communication bus 1002 is used to implement connection and communication between the processor 1001 and the memory 1005.
- the memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory.
- the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
- the federated learning modeling device may also include a rectangular user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and so on.
- the rectangular user interface may include a display screen (Display) and an input sub-module such as a keyboard (Keyboard); optionally, the rectangular user interface may also include a standard wired interface and a wireless interface.
- the network interface can optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
- those skilled in the art can understand that the structure of the federated learning modeling device shown in FIG. 3 does not constitute a limitation on the federated learning modeling device, and the device may include more or fewer components than shown in the figure, a combination of certain components, or a different arrangement of components.
- the memory 1005 as a computer storage medium may include an operating system, a network communication module, and a federated learning modeling program.
- the operating system is a program that manages and controls the hardware and software resources of the federated learning modeling device, and supports the running of the federated learning modeling program and other software and/or programs.
- the network communication module is used to realize the communication between the components in the memory 1005 and the communication with other hardware and software in the federated learning modeling system.
- the processor 1001 is used to execute the federated learning modeling program stored in the memory 1005 to implement the steps of the federated learning modeling method described in any one of the above.
- the specific implementation of the federated learning modeling device of the present application is basically the same as each embodiment of the above-mentioned federated learning modeling method, and will not be repeated here.
- An embodiment of the present application also provides a federated learning modeling apparatus, the federated learning modeling apparatus being applied to a first device and including:
- a negotiation module, used to negotiate and interact with each second device associated with the first device, determine each model training task to be completed, and determine, among the second devices, the model training participating devices corresponding to each model training task to be completed;
- a coordination module, used to obtain the model training time period corresponding to each model training task to be completed and, based on each model training time period, coordinate the model training participating devices corresponding to each model training task to be completed to perform a preset federated learning modeling process, so as to complete each model training task to be completed.
- In an embodiment, the negotiation module includes:
- an acquiring unit, configured to acquire the model training information corresponding to each of the model training tasks;
- a determining unit, configured to determine each of the model training participating devices corresponding to each of the model training tasks to be completed through willingness confirmation interaction with each of the second devices, based on each of the model training information.
- the determining unit includes:
- a first sending subunit, configured to send each piece of the model index information to each of the second devices, so that each second device, based on the acquired model training requirement information and each piece of model index information, determines each target model training task it participates in among the model training tasks and generates first determination information corresponding to each target model training task;
- the first determination subunit is configured to determine each of the model training participating devices corresponding to each of the model training tasks to be completed based on each of the first determination information fed back by each of the second devices.
- the determining unit further includes:
- a second sending subunit, configured to send each piece of the model training time information to each of the second devices, so that each second device, based on the acquired training time limit information and each piece of model training time information, determines each target model training task it participates in among the model training tasks and generates second determination information corresponding to each target model training task;
- the second determination subunit is configured to determine each of the model training participating devices corresponding to each of the model training tasks to be completed based on the respective second determination information fed back by each of the second devices.
- the coordination module includes:
- a calculation unit, configured to respectively receive, within each of the model training time periods, the local model parameters sent by each of the model training participating devices corresponding to that model training time period, and to calculate the latest federated model parameters based on preset aggregation rules;
- a first determining unit, configured to determine whether the latest federated model parameters meet a preset training task termination condition;
- an update unit, configured to send the latest federated model parameters to each of the second devices if the latest federated model parameters meet the preset training task termination condition, so that each of the second devices updates its own local model;
- a second determining unit, configured to send the latest federated model parameters to each of the model training participating devices if the latest federated model parameters do not meet the preset training task termination condition, so that each of the model participating devices updates its own federated participation model and the latest federated model parameters are recalculated, until the latest federated model parameters meet the preset training task termination condition.
- The specific implementation of the federated learning modeling apparatus of the present application is basically the same as the embodiments of the above-mentioned federated learning modeling method, and will not be repeated here.
- An embodiment of the present application further provides a federated learning modeling apparatus, the federated learning modeling apparatus being applied to a second device and including:
- an interaction module, configured to interact with the first device, determine model training information, and obtain device status information, so as to determine, based on the device status information, whether to participate in the model training task to be completed corresponding to the model training information;
- a federated learning modeling module, configured to, if participating in the model training task to be completed, perform a preset federated learning modeling process through coordinated interaction with the first device, so as to complete the model training task to be completed.
- the federated learning modeling module includes:
- an iterative training unit, used to determine the model to be trained corresponding to the model training task to be completed, perform iterative training on the model to be trained until it reaches a preset number of iterations, and obtain the local model parameters corresponding to the model to be trained;
- a sending unit configured to send the local model parameters to the first device, so that the first device can calculate the latest federated model parameters based on the local model parameters
- an update unit, configured to receive the latest federated model parameters fed back by the first device and, based on the latest federated model parameters, update the model to be trained until the local model reaches a preset training end condition, to obtain the target modeling model corresponding to the model training task to be completed.
- the specific implementation of the federated learning modeling device of the present application is basically the same as each embodiment of the above-mentioned federated learning modeling method, and will not be repeated here.
- The embodiments of the present application also provide a readable storage medium. The readable storage medium stores one or more programs, and the one or more programs may further be executed by one or more processors to implement the steps of the federated learning modeling method described in any one of the above.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
A federated learning modeling method, apparatus, device, and readable storage medium. The federated learning modeling method includes: performing negotiation and interaction with each second device associated with a first device to determine each model training task to be completed, and determining, among the second devices, the model training participating devices corresponding to each model training task to be completed; then obtaining the model training time period corresponding to each model training task to be completed and, based on the model training time periods, coordinating the model training participating devices corresponding to each model training task to be completed to perform a preset federated learning modeling process, so as to complete each model training task to be completed.
Description
Priority Information
This application claims priority to the Chinese patent application with application number 202010360246.5, filed on April 29, 2020, the entire contents of which are incorporated herein by reference.
This application relates to the field of artificial intelligence in financial technology (Fintech), and in particular to a federated learning modeling method, apparatus, device, and readable storage medium.
With the continuous development of financial technology, especially Internet-based finance, more and more technologies (such as distributed computing, blockchain, and artificial intelligence) are being applied in the financial field, but the financial industry also places higher requirements on these technologies, for example higher requirements on the distribution of the industry's corresponding pending tasks.
With the continuous development of computer software and artificial intelligence, the application fields of federated learning have become increasingly broad. In a federated learning scenario, a model is usually trained jointly by multiple federated learning participants, while a coordinator coordinates the participants' model training, for example by computing a weighted average of the gradients sent by the participants in each federated round. However, while the participants perform local iterative training, the coordinator occupies computing resources without executing any computing task; that is, the coordinator's computing resources are wasted during the participants' local iterative training, which lowers the utilization of the coordinator's computing resources. In other words, the prior art suffers from the technical problem of low utilization of the coordinator's computing resources in a federated learning system.
发明内容
本申请的主要目的在于提供一种联邦学习建模方法、装置、设备及可读存储介质,旨在解决现有技术中联邦学习系统里协调者计算资源利用率低的技术问题。
为实现上述目的,本申请提供一种联邦学习建模方法,所述联邦学习建 模方法应用于第一设备,所述联邦学习建模方法包括:
与所述第一设备关联的各第二设备进行协商交互,确定各待完成模型训练任务,并在各所述第二设备中确定各所述待完成模型训练任务分别对应的各模型训练参与设备;
获取各所述待完成模型训练任务对应的模型训练时间段,并基于各所述模型训练时间段,协调各所述待完成模型训练任务分别对应的各所述模型训练参与设备进行预设联邦学习建模流程,以完成各所述待完成模型训练任务。
为实现上述目的,本申请还提供一种联邦学习建模方法,所述联邦学习建模方法应用于第二设备,所述联邦学习建模方法包括:
与所述第一设备进行交互,确定模型训练信息,并获取设备状态信息,以基于所述设备状态信息,确定是否参与所述模型训练信息对应的待完成模型训练任务;
若参与所述待完成模型训练任务,则通过与所述第一设备进行协调交互,执行预设联邦学习建模流程,以完成所述待完成模型训练任务。
本申请还提供一种联邦学习建模装置,所述联邦学习建模装置为虚拟装置,且所述联邦学习建模装置应用于第一设备,所述联邦学习建模装置包括:
协商模块,用于与所述第一设备关联的各第二设备进行协商交互,确定各待完成模型训练任务,并在各所述第二设备中确定各所述待完成模型训练任务分别对应的各模型训练参与设备;
协调模块,用于获取各所述待完成模型训练任务对应的模型训练时间段,并基于各所述模型训练时间段,协调各所述待完成模型训练任务分别对应的各所述模型训练参与设备进行预设联邦学习建模流程,以完成各所述待完成模型训练任务。
为实现上述目的,本申请还提供一种联邦学习建模装置,所述联邦学习建模装置应用于第二设备,所述联邦学习建模装置还包括:
交互模块,用于与所述第一设备进行交互,确定模型训练信息,并获取设备状态信息,以基于所述设备状态信息,确定是否参与所述模型训练信息对应的待完成模型训练任务;
联邦学习建模模块,用于若参与所述待完成模型训练任务,则通过与所述第一设备进行协调交互,执行预设联邦学习建模流程,以完成所述待完成 模型训练任务。
本申请还提供一种联邦学习建模设备,所述联邦学习建模设备为实体设备,所述联邦学习建模设备包括:存储器、处理器以及存储在所述存储器上并可在所述处理器上运行的所述联邦学习建模方法的程序,所述联邦学习建模方法的程序被处理器执行时可实现如上述的联邦学习建模方法的步骤。
本申请还提供一种可读存储介质,所述可读存储介质上存储有实现联邦学习建模方法的程序,所述联邦学习建模方法的程序被处理器执行时实现如上述的联邦学习建模方法的步骤。
本申请通过与所述第一设备关联的各第二设备进行协商交互,确定各待完成模型训练任务,并在各所述第二设备中确定各所述待完成模型训练任务分别对应的各模型训练参与设备,进而获取各所述待完成模型训练任务对应的模型训练时间段,并基于各所述模型训练时间段,协调各所述待完成模型训练任务分别对应的各所述模型训练参与设备进行预设联邦学习建模流程,以完成各所述待完成模型训练任务。也即,本申请提供了一种基于时分的方式进行联邦学习的方法,也即,在进行联邦学习建模之前,通过与各所述第二设备进行交互,确定需要执行的各待完成模型训练任务,进而确定每一所述待完成模型训练任务对应的各模型训练参与设备和模型训练时间段,进而协调者可基于各所述模型训练时间段,分别协调每一所述待完成模型训练任务对应的各所述模型训练参与设备进行预设联邦学习建模流程,以完成各所述待完成模型训练任务,也即,在一所述待完成训练模型的各个模型参与设备正在进行本地迭代训练时,协调者可协调其他待完成模型训练任务对应的各模型训练参与设备进行联邦学习建模,进而避免了在各个联邦参与方进行本地迭代训练时,协调者无需执行计算任务但占用计算资源的情况发生,进而达到了充分利用协调者的计算资源的目的,提高了协调者的计算资源的利用率,所以,解决了联邦学习系统里协调者计算资源利用率低的技术问题。
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实 施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为本申请联邦学习建模方法第一实施例的流程示意图;
图2为本申请联邦学习建模方法第二实施例的流程示意图;
图3为本申请实施例方案涉及的硬件运行环境的设备结构示意图。
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
应当理解,此处所描述的具体实施例仅用以解释本申请,并不用于限定本申请。
本申请实施例提供一种联邦学习建模方法,在本申请联邦学习建模方法的第一实施例中,参照图1,所述联邦学习建模方法应用于第一设备,所述联邦学习建模方法包括:
步骤S10,与所述第一设备关联的各第二设备进行协商交互,确定各待完成模型训练任务,并在各所述第二设备中确定各所述待完成模型训练任务分别对应的各模型训练参与设备;
在本实施例中,需要说明的是,一所述待完成模型训练任务对应一个或者多个模型训练参与设备,所述第一设备为横向联邦学习的协调者,所述第二设备为横向联邦学习的参与者,所述模型训练参与设备为参与所述待完成训练任务的第二设备,所述待完成模型训练任务为基于横向联邦学习进行模型训练的任务,其中,一个待完成模型训练任务可用于训练一个或者多个目标模型,一个所述目标模型也可基于执行一个或者多个待完成模型训练任务而获得。
可选的,所述第二设备可选择在预设可信执行环境中执行所述待完成模型训练任务,例如,英特尔的SGX(Intel Software Guard Extensions)等。
与所述第一设备关联的各第二设备进行协商交互,确定各待完成模型训练任务,并在各所述第二设备中确定各所述待完成模型训练任务分别对应的各模型训练参与设备,具体地,与所述第一设备关联的各第二设备进行协商 交互,确定各待完成模型训练任务和每一所述待完成模型训练任务的模型训练信息,进而基于所述模型训练信息,在各所述第二设备中确定每一所述待完成模型训练任务对应的各模型训练参与设备。
其中,在步骤S10中,所述在各所述第二设备中确定各所述待完成模型训练任务分别对应的各模型训练参与设备的步骤包括:
步骤S11,获取各所述模型训练任务对应的模型训练信息;
在本实施例中,需要说明的是,所述模型训练信息包括模型名称信息、模型训练时间段等,其中,所述模型名称信息为对应的待训练模型的标识,例如,编码、字符串等,所述模型训练时间段为预估的模型训练所需时间信息。
步骤S12,基于各所述模型训练信息,通过与各所述第二设备进行意愿确认交互,确定各所述待完成模型训练任务分别对应的各所述模型训练参与设备。
在本实施例中,基于各所述模型训练信息,通过与各所述第二设备进行意愿确认交互,确定各所述待完成模型训练任务分别对应的各所述模型训练参与设备,具体地,分别将各所述模型训练信息发送每一所述第二设备,以供各所述第二设备获取设备状态信息,并基于设备状态信息,分别确定是否参与各所述模型训练信息对应的待完成模型训练任务,若确定参与所述待完成模型训练任务,则向所述第一设备反馈所述待完成模型训练任务对应的确定信息,进而所述第一设备分别接收各确定信息,并将每一所述确定信息对应的第二设备标识为模型训练参与设备,并统计各所述待完成模型训练任务对应的一个或者多个模型训练参与设备。
其中,所述模型训练信息包括模型索引信息,
所述基于各所述模型训练信息,通过与各所述第二设备进行意愿确认交互,确定各所述待完成模型训练任务分别对应的各所述模型训练参与设备的步骤包括:
步骤A10,将各所述模型索引信息分别发送至各所述第二设备,以供各所述第二设备分别基于获取的模型训练需求信息和各所述模型索引信息,在各所述模型训练任务中确定参与的各目标模型训练任务,并生成各所述目标模型训练任务对应的第一确定信息;
在本实施例中,需要说明的是,所述模型索引信息为对应的待完成模型训练任务的标识信息,例如,编码或者字符串等,所述第一确定信息为表明所述第二设备确定参与所述模型索引信息对应的待完成模型训练任务的信息,其中,所述第一确定信息可为所述第二设备单独回复的意愿信息、本地模型参数信息或者本地模型梯度信息等,以表明所述第二设备愿意参与对应的待完成模型训练任务,各所述待完成训练任务均对应一个执行任务的模型训练时间段。
将各所述模型索引信息分别发送至各所述第二设备,以供各所述第二设备分别基于获取的模型训练需求信息和各所述模型索引信息,在各所述模型训练任务中确定参与的各目标模型训练任务,并生成各所述目标模型训练任务对应的第一确定信息,具体地,在每一所述模型训练时间段开始前的预设时长内向各所述第二设备广播所述模型训练时间段对应的模型索引信息,以供所述第二设备基于所述模型索引信息确定对应的待完成模型训练任务,并基于获取的当前设备运行状态,其中,所述设备运行状态包括当前可用计算资源,进而确定是否参与所述待完成模型训练任务,若确定参与所述待完成模型训练任务,则向所述第一设备反馈第一确定信息,以表明参与所述待完成模型训练任务,若确定不参与所述待完成模型训练任务,则忽略所述模型索引信息,并等待接收下一所述模型索引信息。
步骤A20,基于各所述第二设备分别反馈的各第一确定信息,确定各所述待完成模型训练任务分别对应的各所述模型训练参与设备。
在本实施例中,基于各所述第二设备分别反馈的各第一确定信息,确定各所述待完成模型训练任务分别对应的各所述模型训练参与设备,具体地,每一所述模型训练时间段开始前,分别接收各所述第二设备发送的所述模型索引信息对应的所述待完成模型训练任务对应的各所述第一确定信息,并将发送每一所述第一确定信息的各第二设备作为所述模型训练参与设备,其中,一所述第一确定信息对应一所述第二设备对应一所述模型训练参与设备。
其中,所述模型训练信息包括模型训练时间信息,
所述基于各所述模型训练信息,通过与各所述第二设备进行意愿确认交互,确定各所述待完成模型训练任务分别对应的各所述模型训练参与设备的步骤包括:
步骤B10,将各所述模型训练时间信息分别发送至各所述第二设备,以供各所述第二设备分别基于获取的训练时间限制信息和各所述模型训练时间信息,在各所述模型训练任务中确定参与的各目标模型训练任务,并生成各所述目标模型训练任务对应的第二确定信息;
在本实施例中,需要说明的是,所述第二确定信息为表明所述第二设备确定参与所述模型训练时间信息对应的待完成模型训练任务的信息,各所述待完成训练任务均对应一个执行任务的模型训练时间段,且所述第二确定信息将由所述第二设备在所述第二确定信息对应的待完成模型训练任务对应的模型训练时间段开始之前发送至所述第一设备。
将各所述模型训练时间信息分别发送至各所述第二设备,以供各所述第二设备分别基于获取的训练时间限制信息和各所述模型训练时间信息,在各所述模型训练任务中确定参与的各目标模型训练任务,并生成各所述目标模型训练任务对应的第二确定信息,具体地,在每一所述待完成模型训练任务对应的模型训练时间段之前,将各所述模型训练时间信息分别发送至各所述第二设备,以供各所述第二设备分别获取训练时间限制信息,其中,所述训练时间限制信息为表明所述第二设备在所述模型训练时间段内是否有空闲时间和足够的计算资源参与所述待完成模型训练任务,进各所述第二设备将基于所述训练限制信息和所述模型训练时间信息,确定是否参与所述模型训练时间信息对应的待完成模型训练任务,若确定参与所述待完成模型训练任务,则向所述第一设备反馈第二确定信息,以表明参与所述待完成模型训练任务,若确定不参与所述待完成模型训练任务,则忽略所述模型训练时间信息,并等待接收下一所述模型训练时间信息。
步骤B20,基于各所述第二设备分别反馈的各第二确定信息,确定各所述待完成模型训练任务分别对应的各所述模型训练参与设备。
在本实施例中,基于各所述第二设备分别反馈的各第二确定信息,确定各所述待完成模型训练任务分别对应的各所述模型训练参与设备,具体地,每一所述模型训练时间段开始前,分别接收各所述第二设备发送的所述模型训练时间信息对应的所述待完成模型训练任务对应的各所述第二确定信息,并将发送各每一所述第二确定信息的第二设备作为所述模型训练参与设备,其中,一所述第一确定信息对应一所述第二设备对应一所述模型训练参与设 备。
步骤S20,获取各所述待完成模型训练任务对应的模型训练时间段,并基于各所述模型训练时间段,协调各所述待完成模型训练任务分别对应的各所述模型训练参与设备进行预设联邦学习建模流程,以完成各所述待完成模型训练任务。
在本实施例中,需要说明的是,所述预设联邦学习建模流程为进行联邦学习的流程,各所述模型训练时间段包括第一模型训练时间段和第二模型训练时间段。
获取各所述待完成模型训练任务对应的模型训练时间段,并基于各所述模型训练时间段,协调各所述待完成模型训练任务分别对应的各所述模型训练参与设备进行预设联邦学习建模流程,以完成各所述待完成模型训练任务,具体地,获取各所述待完成模型训练任务对应的模型训练时间段,并在每一所述模型训练时间段内,接收对应的各所述模型训练参与设备发送的本地模型参数,并基于预设聚合规则,计算各所述本地模型参数对应的最新联邦模型参数,其中,所述预设聚合规则包括加权求平均、求和等,并确定所述最新联邦模型参数是否达到预设训练任务完成条件,若所述最新联邦模型参数达到所述训练任务完成条件,则将所述最新联邦模型参数分别发送至各所述第二设备,以供各所述第二设备基于所述最新联邦模型参数更新各自的本地模型,若最新联邦模型参数未达到所述训练任务完成条件,则将所述最新联邦模型参数分别发送至各所述模型训练参与设备,以供各所述模型训练参与设备更新各自的本地模型,并基于更新后的本地模型重新进行联邦学习,重新计算最新联邦模型参数,直至所述最新联邦模型参数达到训练任务完成条件,其中,所述训练任务完成条件包括损失函数收敛、模型达到最大迭代次数等,其中,若各所述模型训练时间段存在交集时间段,则在所述交集时间段内,所述第一设备将根据接收每一所述待完成模型训练任务对应的各本地模型参数的时间先后顺序,确定计算各所述待完成模型训练任务对应的最新联邦模型参数的先后顺序,例如,假设各所述待完成模型训练任务包括任务A和任务B,则所述第一设备在9点零7分,已全部接收所述任务A对应的模型训练参与设备发送的各本地模型参数,在9点零9分,已全部接收任务B对应的模型训练参与设备发送的各本地模型参数,则所述第一设备优先计算 任务A对应的最新联邦模型参数,再计算任务B对应的最新联邦模型参数。
在一实施例中,所述第一设备可选择在所述预设可信执行环境执行基于预设聚合规则,计算各所述本地模型参数对应的最新联邦模型参数的步骤。
其中,所述基于各所述模型训练时间段,分别协调每一所述待完成模型训练任务对应的各所述模型训练参与设备进行预设联邦学习建模流程的步骤包括:
步骤S21,在各所述模型训练时间段内,分别接收所述模型训练时间段对应的各所述模型训练参与设备发送的本地模型参数,并基于预设聚合规则,计算最新联邦模型参数;
在本实施例中,需要说明的是,所述本地模型参数包括模型网络参数和梯度信息等,其中,所述模型网络参数为所述模型训练参与设备对自身持有的本地模型迭代训练预设次数后,迭代训练后的所述本地模型的网络参数,例如假设所述本地模型为线性模型Y=β
0+β
1X
1+β
2X
2+…+β
nX
n,则所述网络参数为向量(β
0,β
1,β
2,…,β
n)。
在各所述模型训练时间段内,分别接收所述模型训练时间段对应的各所述模型训练参与设备发送的本地模型参数,并基于预设聚合规则,计算最新联邦模型参数,具体地,每一所述模型训练时间段内,接收所述模型训练时间段对应的各所述模型训练参与设备发送的本地模型参数,其中,每一所述本地模型参数均为所述模型训练参与设备对所述本地模型参数对应的联邦参与模型进行预设次数的迭代训练获得的,其中,所述联邦参与模型为所述模型训练参与设备的本地模型,进而基于预设聚合规则,对各所述本地模型参数进行加权求平均,获得所述最新联邦模型参数。
步骤S22,确定所述最新联邦模型参数是否满足预设训练任务结束条件;
在本实施例中,需要说明的是,所述预设训练任务结束条件包括训练达到最大迭代次数、损失函数训练收敛等。
确定所述最新联邦模型参数是否满足预设训练任务结束条件,具体地,若所述最新联邦模型参数与所述上一轮最新联邦模型参数的差值小于预设差值阀值,则判定所述最新联邦模型参数达到所述预设训练任务结束条件,
步骤S23,若所述最新联邦模型参数满足所述预设训练任务结束条件,则将所述最新联邦模型参数发送至各所述第二设备,以供各所述第二设备更新 各自的本地模型;
在本实施例中,若所述最新联邦模型参数满足所述预设训练任务结束条件,则将所述最新联邦模型参数发送至各所述第二设备,以供各所述第二设备更新各自的本地模型,具体地,若所述最新联邦模型参数满足所述预设训练任务结束条件,则将所述最新联邦模型参数发送至各所述第二设备,以供各所述第二设备基于所述最新联邦模型参数,对本地模型中对应的模型参数进行替换更新为所述最新联邦模型参数。
另外地,还可设置若所述最新联邦模型参数与所述上一轮最新联邦模型参数的差值小于预设差值阀值的情况连续出现预设次数,则判定所述最新联邦模型参数达到所述预设训练任务结束条件。
步骤S23,若所述最新联邦模型参数不满足所述预设训练任务结束条件,则将所述最新联邦模型参数分别发送至各所述模型训练参与设备,以供各所述模型参与设备更新各自的联邦参与模型,以重新计算所述最新联邦模型参数,直至所述最新联邦模型参数满足所述预设训练任务结束条件。
在本实施例中,若所述最新联邦模型参数不满足所述预设训练任务结束条件,则将所述最新联邦模型参数分别发送至各所述模型训练参与设备,以供各所述模型参与设备更新各自的联邦参与模型,以重新计算所述最新联邦模型参数,直至所述最新联邦模型参数满足所述预设训练任务结束条件,具体地,若所述最新联邦模型参数不满足所述预设训练任务结束条件,则将所述最新联邦模型参数分别发送至各所述模型训练参与设备,以供每一所述模型训练参与设备基于所述最新联邦模型参数,更新各自持有的联邦参与模型,并对更新后的联邦参与模型进行迭代训练,进而当迭代训练的次数达到预设迭代训练次数时,重新获取迭代训练后的所述联邦参与模型的本地模型参数,并将重新计算的各所述本地模型参数发送至所述第一设备,以供所述第一设备基于各所述第二设备发送的重新计算的各本地模型参数和所述预设聚合规则,重新计算所述最新联邦模型参数,直至所述最新联邦模型参数满足所述预设训练任务结束条件。
本实施例通过与所述第一设备关联的各第二设备进行协商交互,确定各待完成模型训练任务,并在各所述第二设备中确定各所述待完成模型训练任务分别对应的各模型训练参与设备,进而获取各所述待完成模型训练任务对 应的模型训练时间段,并基于各所述模型训练时间段,协调各所述待完成模型训练任务分别对应的各所述模型训练参与设备进行预设联邦学习建模流程,以完成各所述待完成模型训练任务。也即,本实施例提供了一种基于时分的方式进行联邦学习的方法,也即,在进行联邦学习建模之前,通过与各所述第二设备进行交互,确定需要执行的各待完成模型训练任务,进而确定每一所述待完成模型训练任务对应的各模型训练参与设备和模型训练时间段,进而协调者可基于各所述模型训练时间段,分别协调每一所述待完成模型训练任务对应的各所述模型训练参与设备进行预设联邦学习建模流程,以完成各所述待完成模型训练任务,也即,在一所述待完成训练模型的各个模型参与设备正在进行本地迭代训练时,协调者可协调其他待完成模型训练任务对应的各模型训练参与设备进行联邦学习建模,进而避免了在各个联邦参与方进行本地迭代训练时,协调者无需执行计算任务和消耗计算资源的情况发生,进而达到了充分利用协调者的计算资源的目的,提高了协调者的计算资源的利用率,所以,解决了联邦学习系统里协调者计算资源利用率低的技术问题。
Further, referring to Fig. 2, based on the first embodiment of this application, in another embodiment of this application the federated learning modeling method is applied to a second device, and the federated learning modeling method includes:
Step C10: interacting with the first device, determining model training information, and obtaining device status information, so as to determine, based on the device status information, whether to participate in the model training task to be completed corresponding to the model training information.
In this embodiment, the model training information includes model index information and model training time information, and the device status information includes the available computing resources of the second device, where the available computing resources are the computing resources the second device can call upon within the model training time period corresponding to the model training task to be completed.
Before step C10 is performed, the second device negotiates and interacts with the first device to determine the model training tasks to be completed.
The second device interacts with the first device, determines the model training information, and obtains the device status information, so as to determine, based on the device status information, whether to participate in the model training task to be completed corresponding to the model training information. Specifically, it negotiates and interacts with the first device to obtain the model training information, determines its available computing resources, and then judges whether the available computing resources satisfy the model training task to be completed corresponding to the model training information: if they do, it determines to participate in the task; if they do not, it determines not to participate. For example, if the model training task to be completed would occupy 50% of all of the second device's computing resources, while the available computing resources the second device can call upon amount to 40%, then the available computing resources do not satisfy the model training task to be completed, and the second device determines not to participate in it.
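Step C10's resource test reduces to a single comparison; the following sketch reproduces the worked example above (the function name and the fractional-share representation are assumptions made for illustration).

```python
def decide_participation(required_share: float, available_share: float) -> bool:
    """Join only when the computing resources the second device can call
    upon during the training time period cover what the task needs."""
    return available_share >= required_share

# The example above: the task needs 50% but only 40% is callable -> decline.
assert decide_participation(0.50, 0.40) is False
assert decide_participation(0.50, 0.60) is True
```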
Step C20: if participating in the model training task to be completed, performing a preset federated learning modeling process through coordinated interaction with the first device, so as to complete the model training task to be completed.
In this embodiment, if the second device participates in the model training task to be completed, it performs the preset federated learning modeling process through coordinated interaction with the first device so as to complete the task. Specifically, if it participates in the model training task to be completed, it determines the model to be trained corresponding to the task and iteratively trains that model until the preset number of training iterations is reached; it then obtains the local model parameters of the iteratively trained model and sends them to the first device, so that the first device computes the latest federated model parameters based on the local model parameters sent by the second devices and broadcasts the latest federated model parameters to each second device. The second device then receives the latest federated model parameters, updates the model to be trained based on them, and judges whether the updated model meets a preset iteration end condition: if it does, the model training task to be completed is judged to be finished; if it does not, the model to be trained is iteratively trained anew, so that the first device recomputes the latest federated model parameters and the model to be trained is updated again, until the updated model meets the preset iteration end condition.
In step C20, the step of performing the preset federated learning modeling process through coordinated interaction with the first device includes:
Step C21: determining the model to be trained corresponding to the model training task to be completed, and iteratively training the model to be trained until it reaches a preset number of iterations, obtaining the local model parameters corresponding to the model to be trained.
In this embodiment, the model to be trained corresponding to the model training task to be completed is determined and iteratively trained until it reaches the preset number of iterations, and the local model parameters corresponding to the model to be trained are obtained. Specifically, the model to be trained corresponding to the model training task to be completed is determined and iteratively trained and updated until it reaches the preset number of iterations, and the local model parameters of the iteratively trained and updated model are then extracted.
Step C22: sending the local model parameters to the first device, so that the first device computes the latest federated model parameters based on the local model parameters.
In this embodiment, the local model parameters are sent to the first device so that it computes the latest federated model parameters based on them. Specifically, the local model parameters are sent to the first device, so that the first device, based on the local model parameters sent by each associated second device, computes via the preset aggregation rule the latest federated model parameters corresponding to the local model parameters, where the preset aggregation rule includes weighted averaging, summation, and the like.
Step C23: receiving the latest federated model parameters fed back by the first device, and updating the model to be trained based on the latest federated model parameters, until the local model meets a preset training end condition, obtaining the target modeling model corresponding to the model training task to be completed.
In this embodiment, the latest federated model parameters fed back by the first device are received, and the model to be trained is updated based on them until the local model meets the preset training end condition, obtaining the target modeling model corresponding to the model training task to be completed. Specifically, the latest federated model parameters fed back by the first device are received, the local model parameters in the model to be trained are replaced and updated with the latest federated model parameters to obtain the updated model to be trained, and it is judged whether the updated model to be trained meets a preset iterative-training end condition: if it does, the updated model to be trained is taken as the target modeling model; if it does not, the model to be trained is iteratively trained anew so as to be replaced and updated again, until the updated model to be trained meets the preset iterative-training end condition.
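Steps C21 to C23 together form the participant-side loop. The following is a minimal sketch of that loop, assuming the send/receive pair with the first device is abstracted into a single `exchange` callable and the model is a plain parameter container; all names here are illustrative.

```python
def participant_rounds(model, train_step, exchange, preset_iterations, finished):
    """Second-device side of steps C21-C23.

    model             : dict with a "params" list of local model parameters
    train_step(model) : performs one local training update in place
    exchange(params)  : sends local params to the first device and returns
                        the latest federated model parameters it feeds back
    finished(model)   : the preset iterative-training end condition
    """
    while True:
        for _ in range(preset_iterations):
            train_step(model)               # local iterative training (C21)
        latest = exchange(model["params"])  # send and receive (C22/C23)
        model["params"] = list(latest)      # replace-update the local params
        if finished(model):
            return model                    # the target modeling model
```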
In this embodiment, the second device interacts with the first device, determines the model training information, and obtains device status information, so as to determine, based on the device status information, whether to participate in the model training task to be completed corresponding to the model training information; if it participates in the task, it performs the preset federated learning modeling process through coordinated interaction with the first device so as to complete the task. That is, this embodiment provides a federated learning modeling method in which, before federated learning modeling begins, the second device determines, through negotiation and interaction with the first device and by obtaining its own device operating status, whether to participate in the model training task to be completed corresponding to the model training information; if it determines to participate, it can coordinate and interact with the first device to perform the preset federated learning modeling process and complete the model training task. In other words, before each round of federated learning modeling the second device can autonomously choose whether to join the model training task to be completed, which lays the foundation for solving the technical problem of the low utilization rate of the coordinator's computing resources in a federated learning system.
Referring to Fig. 3, Fig. 3 is a schematic structural diagram of the equipment in the hardware operating environment involved in the solutions of the embodiments of this application.
As shown in Fig. 3, the federated learning modeling equipment may include a processor 1001 (for example a CPU), a memory 1005, and a communication bus 1002. The communication bus 1002 is used to implement connection and communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory, such as a magnetic disk memory. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
In one embodiment, the federated learning modeling equipment may further include a user interface, a network interface, a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and the like. The user interface may include a display (Display) and an input sub-module such as a keyboard (Keyboard), and optionally may also include standard wired and wireless interfaces. The network interface may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
Those skilled in the art will understand that the structure of the federated learning modeling equipment shown in Fig. 3 does not constitute a limitation on the equipment, which may include more or fewer components than shown, or combine certain components, or arrange the components differently.
As shown in Fig. 3, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, and a federated learning modeling program. The operating system is a program that manages and controls the hardware and software resources of the federated learning modeling equipment and supports the running of the federated learning modeling program as well as other software and/or programs. The network communication module is used to implement communication between the components inside the memory 1005, as well as communication with other hardware and software in the federated learning modeling system.
In the federated learning modeling equipment shown in Fig. 3, the processor 1001 is used to execute the federated learning modeling program stored in the memory 1005 to implement the steps of the federated learning modeling method described in any one of the above.
The specific implementation of the federated learning modeling equipment of this application is substantially the same as the embodiments of the federated learning modeling method described above and will not be repeated here.
An embodiment of this application further provides a federated learning modeling device, the federated learning modeling device being applied to a first device and including:
a negotiation module, used to negotiate and interact with each second device associated with the first device, determine each model training task to be completed, and determine, among the second devices, the model training participating devices corresponding to each model training task to be completed;
a coordination module, used to obtain the model training time period corresponding to each model training task to be completed and, based on each model training time period, coordinate the model training participating devices corresponding to each model training task to be completed to perform a preset federated learning modeling process, so as to complete each model training task to be completed.
In one embodiment, the negotiation module includes:
an obtaining unit, used to obtain the model training information corresponding to each model training task;
a determining unit, used to determine, based on each piece of model training information and through willingness-confirmation interaction with each second device, the model training participating devices corresponding to each model training task to be completed.
In one embodiment, the determining unit includes:
a first sending sub-unit, used to send each piece of model index information to each second device, so that each second device, based on the model training demand information it obtains and each piece of model index information, determines among the model training tasks the target model training tasks in which it will participate and generates first determination information corresponding to each target model training task;
a first determining sub-unit, used to determine, based on the pieces of first determination information fed back by the second devices, the model training participating devices corresponding to each model training task to be completed.
In one embodiment, the determining unit further includes:
a second sending sub-unit, used to send each piece of model training time information to each second device, so that each second device, based on the training time limit information it obtains and each piece of model training time information, determines among the model training tasks the target model training tasks in which it will participate and generates second determination information corresponding to each target model training task;
a second determining sub-unit, used to determine, based on the pieces of second determination information fed back by the second devices, the model training participating devices corresponding to each model training task to be completed.
In one embodiment, the coordination module includes:
a computing unit, used to receive, within each model training time period, the local model parameters sent by the model training participating devices corresponding to that model training time period, and to compute the latest federated model parameters based on a preset aggregation rule;
a first judging unit, used to determine whether the latest federated model parameters meet a preset training task end condition;
an updating unit, used to send the latest federated model parameters to each second device if they meet the preset training task end condition, so that each second device updates its own local model;
a second judging unit, used to send the latest federated model parameters to each model training participating device if they do not meet the preset training task end condition, so that each model training participating device updates its own federated participating model and the latest federated model parameters are recomputed, until the latest federated model parameters meet the preset training task end condition.
The specific implementation of this federated learning modeling device of this application is substantially the same as the embodiments of the federated learning modeling method described above and will not be repeated here.
To achieve the above purposes, an embodiment of this application further provides a federated learning modeling device, the federated learning modeling device being applied to a second device and including:
an interaction module, used to interact with the first device, determine model training information, and obtain device status information, so as to determine, based on the device status information, whether to participate in the model training task to be completed corresponding to the model training information;
a federated learning modeling module, used to perform, if participating in the model training task to be completed, a preset federated learning modeling process through coordinated interaction with the first device, so as to complete the model training task to be completed.
In one embodiment, the federated learning modeling module includes:
an iterative training unit, used to determine the model to be trained corresponding to the model training task to be completed, and to iteratively train the model to be trained until it reaches a preset number of iterations, obtaining the local model parameters corresponding to the model to be trained;
a sending unit, used to send the local model parameters to the first device, so that the first device computes the latest federated model parameters based on the local model parameters;
an updating unit, used to receive the latest federated model parameters fed back by the first device and update the model to be trained based on them, until the local model meets a preset training end condition, obtaining the target modeling model corresponding to the model training task to be completed.
The specific implementation of this federated learning modeling device of this application is substantially the same as the embodiments of the federated learning modeling method described above and will not be repeated here.
An embodiment of this application provides a readable storage medium storing one or more programs, which can further be executed by one or more processors to implement the steps of the federated learning modeling method described in any one of the above.
The specific implementation of the readable storage medium of this application is substantially the same as the embodiments of the federated learning modeling method described above and will not be repeated here.
The above are only preferred embodiments of this application and do not thereby limit the scope of its patent; any equivalent structural or process transformation made using the contents of the specification and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of this application.
Claims (20)
- A federated learning modeling method, wherein the federated learning modeling method is applied to a first device and includes: negotiating and interacting with each second device associated with the first device, determining each model training task to be completed, and determining, among the second devices, the model training participating devices corresponding to each model training task to be completed; obtaining the model training time period corresponding to each model training task to be completed, and, based on each model training time period, coordinating the model training participating devices corresponding to each model training task to be completed to perform a preset federated learning modeling process, so as to complete each model training task to be completed.
- The federated learning modeling method of claim 1, wherein the step of determining, among the second devices, the model training participating devices corresponding to each model training task to be completed includes: obtaining the model training information corresponding to each model training task; determining, based on each piece of model training information and through willingness-confirmation interaction with each second device, the model training participating devices corresponding to each model training task to be completed.
- The federated learning modeling method of claim 2, wherein the model training information includes model index information.
- The federated learning modeling method of claim 3, wherein the step of determining, based on each piece of model training information and through willingness-confirmation interaction with each second device, the model training participating devices corresponding to each model training task to be completed includes: sending each piece of model index information to each second device, so that each second device, based on the model training demand information it obtains and each piece of model index information, determines among the model training tasks the target model training tasks in which it will participate and generates first determination information corresponding to each target model training task; determining, based on the pieces of first determination information fed back by the second devices, the model training participating devices corresponding to each model training task to be completed.
- The federated learning modeling method of claim 2, wherein the model training information includes model training time information.
- The federated learning modeling method of claim 5, wherein the step of determining, based on each piece of model training information and through willingness-confirmation interaction with each second device, the model training participating devices corresponding to each model training task to be completed includes: sending each piece of model training time information to each second device, so that each second device, based on the training time limit information it obtains and each piece of model training time information, determines among the model training tasks the target model training tasks in which it will participate and generates second determination information corresponding to each target model training task; determining, based on the pieces of second determination information fed back by the second devices, the model training participating devices corresponding to each model training task to be completed.
- The federated learning modeling method of claim 1, wherein the step of coordinating, based on each model training time period, the model training participating devices corresponding to each model training task to be completed to perform the preset federated learning modeling process includes: within each model training time period, receiving the local model parameters sent by the model training participating devices corresponding to that model training time period, and computing the latest federated model parameters based on a preset aggregation rule; determining whether the latest federated model parameters meet a preset training task end condition; if the latest federated model parameters meet the preset training task end condition, sending the latest federated model parameters to each second device, so that each second device updates its own local model; if the latest federated model parameters do not meet the preset training task end condition, sending the latest federated model parameters to each model training participating device, so that each model training participating device updates its own federated participating model and the latest federated model parameters are recomputed, until the latest federated model parameters meet the preset training task end condition.
- The federated learning modeling method of claim 3, wherein the model index information is identification information of the corresponding model training task to be completed.
- The federated learning modeling method of claim 4, wherein the first determination information is willingness information replied to individually by the second device, local model parameter information, or local model gradient information.
- The federated learning modeling method of claim 6, wherein the second determination information is information indicating that the second device has determined to participate in the model training task to be completed corresponding to the model training time information.
- The federated learning modeling method of claim 1, wherein the preset federated learning modeling process is the process of performing federated learning, and the model training time periods include a first model training time period and a second model training time period.
- A federated learning modeling method, wherein the federated learning modeling method is applied to a second device and includes: interacting with the first device, determining model training information, and obtaining device status information, so as to determine, based on the device status information, whether to participate in the model training task to be completed corresponding to the model training information; if participating in the model training task to be completed, performing a preset federated learning modeling process through coordinated interaction with the first device, so as to complete the model training task to be completed.
- The federated learning modeling method of claim 12, wherein the step of performing a preset federated learning modeling process through coordinated interaction with the first device includes: determining the model to be trained corresponding to the model training task to be completed, and iteratively training the model to be trained until it reaches a preset number of iterations, obtaining the local model parameters corresponding to the model to be trained; sending the local model parameters to the first device, so that the first device computes the latest federated model parameters based on the local model parameters; receiving the latest federated model parameters fed back by the first device, and updating the model to be trained based on the latest federated model parameters, until the local model meets a preset training end condition, obtaining the target modeling model corresponding to the model training task to be completed.
- The federated learning modeling method of claim 13, wherein the step of determining the model to be trained corresponding to the model training task to be completed, and iteratively training the model to be trained until it reaches a preset number of iterations, obtaining the local model parameters corresponding to the model to be trained, includes: determining the model to be trained corresponding to the model training task to be completed, iteratively training and updating the model to be trained until it reaches the preset number of iterations, and extracting the local model parameters of the iteratively trained and updated model to be trained.
- The federated learning modeling method of claim 13, wherein the step of sending the local model parameters to the first device, so that the first device computes the latest federated model parameters based on the local model parameters, includes: sending the local model parameters to the first device, so that the first device, based on the local model parameters sent by each associated second device, computes via a preset aggregation rule the latest federated model parameters corresponding to the local model parameters.
- The federated learning modeling method of claim 15, wherein the preset aggregation rule includes weighted averaging and summation.
- The federated learning modeling method of claim 13, wherein the step of receiving the latest federated model parameters fed back by the first device, and updating the model to be trained based on the latest federated model parameters, until the local model meets a preset training end condition, obtaining the target modeling model corresponding to the model training task to be completed, includes: receiving the latest federated model parameters fed back by the first device, replacing and updating the local model parameters in the model to be trained with the latest federated model parameters to obtain the updated model to be trained, and judging whether the updated model to be trained meets a preset iterative-training end condition; if the updated model to be trained meets the preset iterative-training end condition, taking the updated model to be trained as the target modeling model; if the updated model to be trained does not meet the preset iterative-training end condition, iteratively training the model to be trained anew so as to replace and update it again, until the updated model to be trained meets the preset iterative-training end condition.
- A federated learning modeling device, wherein the federated learning modeling device includes: a negotiation module, used to negotiate and interact with each second device associated with the first device, determine each model training task to be completed, and determine, among the second devices, the model training participating devices corresponding to each model training task to be completed; a coordination module, used to obtain the model training time period corresponding to each model training task to be completed and, based on each model training time period, coordinate the model training participating devices corresponding to each model training task to be completed to perform a preset federated learning modeling process, so as to complete each model training task to be completed.
- Federated learning modeling equipment, wherein the federated learning modeling equipment includes a memory, a processor, and a program stored on the memory for implementing the federated learning modeling method; the memory is used to store the program implementing the federated learning modeling method; and the processor is used to execute the program implementing the federated learning modeling method, so as to implement the steps of the federated learning modeling method of any one of claims 1 to 17.
- A readable storage medium, wherein a program implementing a federated learning modeling method is stored on the readable storage medium, and the program is executed by a processor to implement the steps of the federated learning modeling method of any one of claims 1 to 17.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010360246.5 | 2020-04-29 | ||
CN202010360246.5A CN111538598A (zh) | 2020-04-29 | 2020-04-29 | Federated learning modeling method, device, equipment and readable storage medium
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021219053A1 true WO2021219053A1 (zh) | 2021-11-04 |
Family
ID=71979068
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/090823 WO2021219053A1 (zh) | 2020-04-29 | 2021-04-29 | 联邦学习建模方法、装置、设备及可读存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111538598A (zh) |
WO (1) | WO2021219053A1 (zh) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111538598A (zh) * | 2020-04-29 | 2020-08-14 | 深圳前海微众银行股份有限公司 | Federated learning modeling method, device, equipment and readable storage medium
US11283609B2 (en) | 2020-08-21 | 2022-03-22 | Huawei Technologies Co., Ltd. | Method and apparatus for supporting secure data routing
US11588907B2 (en) | 2020-08-21 | 2023-02-21 | Huawei Technologies Co., Ltd. | System and methods for supporting artificial intelligence service in a network
CN112164224A (zh) * | 2020-09-29 | 2021-01-01 | 杭州锘崴信息科技有限公司 | Information-secure traffic information processing system, method, equipment and storage medium
CN112232518B (zh) * | 2020-10-15 | 2024-01-09 | 成都数融科技有限公司 | Lightweight distributed federated learning system and method
CN112232519B (zh) * | 2020-10-15 | 2024-01-09 | 成都数融科技有限公司 | Joint modeling method based on federated learning
CN112434818B (zh) * | 2020-11-19 | 2023-09-26 | 脸萌有限公司 | Model construction method, apparatus, medium and electronic equipment
CN114548472A (zh) * | 2020-11-26 | 2022-05-27 | 新智数字科技有限公司 | Resource allocation method, apparatus, readable medium and electronic equipment
CN112650583B (zh) * | 2020-12-23 | 2024-07-02 | 新奥新智科技有限公司 | Resource allocation method, apparatus, readable medium and electronic equipment
CN112700013A (zh) * | 2020-12-30 | 2021-04-23 | 深圳前海微众银行股份有限公司 | Parameter configuration method, apparatus, equipment and storage medium based on federated learning
EP4282135A1 (en) * | 2021-01-25 | 2023-11-29 | Nokia Technologies Oy | Enablement of federated machine learning for terminals to improve their machine learning capabilities
CN112994981B (zh) * | 2021-03-03 | 2022-05-10 | 上海明略人工智能(集团)有限公司 | Method and apparatus for adjusting delayed data, electronic equipment and storage medium
CN113011602B (zh) * | 2021-03-03 | 2023-05-30 | 中国科学技术大学苏州高等研究院 | Federated model training method, apparatus, electronic equipment and storage medium
CN113191090A (zh) * | 2021-05-31 | 2021-07-30 | 中国银行股份有限公司 | Blockchain-based federated modeling method and apparatus
CN113469377B (zh) * | 2021-07-06 | 2023-01-13 | 建信金融科技有限责任公司 | Federated learning auditing method and apparatus
FI20216284A1 (en) * | 2021-12-16 | 2023-06-17 | Nokia Solutions & Networks Oy | Parameter initialization for machine learning agents in wireless communication networks
CN116432018A (zh) * | 2021-12-30 | 2023-07-14 | 维沃移动通信有限公司 | Model training method, apparatus and communication equipment
CN116567702A (zh) * | 2022-01-26 | 2023-08-08 | 展讯通信(上海)有限公司 | User equipment selection method, apparatus, chip and module equipment
FI20225086A1 (en) * | 2022-02-02 | 2023-08-03 | Nokia Solutions & Networks Oy | Iterative initialization of machine-learning agent parameters in wireless communication network
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11488054B2 (en) * | 2017-12-06 | 2022-11-01 | Google Llc | Systems and methods for distributed on-device learning with data-correlated availability |
CN109670684B (zh) * | 2018-12-03 | 2021-03-19 | 北京顺丰同城科技有限公司 | Time-window-based freight vehicle scheduling method and electronic equipment
CN110442457A (zh) * | 2019-08-12 | 2019-11-12 | 北京大学深圳研究生院 | Federated-learning-based model training method, apparatus and server
- 2020-04-29: Chinese application CN202010360246.5A filed (published as CN111538598A); legal status: active, Pending
- 2021-04-29: PCT application PCT/CN2021/090823 filed (published as WO2021219053A1); legal status: active, Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190340534A1 (en) * | 2016-09-26 | 2019-11-07 | Google Llc | Communication Efficient Federated Learning |
CN110263908A (zh) * | 2019-06-20 | 2019-09-20 | 深圳前海微众银行股份有限公司 | Federated learning model training method, equipment, system and storage medium
CN110598870A (zh) * | 2019-09-02 | 2019-12-20 | 深圳前海微众银行股份有限公司 | Federated learning method and apparatus
CN111538598A (zh) * | 2020-04-29 | 2020-08-14 | 深圳前海微众银行股份有限公司 | Federated learning modeling method, device, equipment and readable storage medium
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114139731A (zh) * | 2021-12-03 | 2022-03-04 | 深圳前海微众银行股份有限公司 | Longitudinal federated learning modeling optimization method, equipment, medium and program product
CN114168295A (zh) * | 2021-12-10 | 2022-03-11 | 深圳致星科技有限公司 | Hybrid architecture system and task scheduling method based on historical task effects
CN114492179A (zh) * | 2022-01-13 | 2022-05-13 | 工赋(青岛)科技有限公司 | Information processing system, method, apparatus, equipment and storage medium
CN115345317A (zh) * | 2022-08-05 | 2022-11-15 | 北京交通大学 | Fairness-theory-based fair reward allocation method for federated learning
CN115577876A (zh) * | 2022-09-27 | 2023-01-06 | 广西综合交通大数据研究院 | Blockchain- and federated-learning-based method for predicting waybill punctuality on a network freight platform
CN116055335A (zh) * | 2022-12-21 | 2023-05-02 | 深圳信息职业技术学院 | Federated-learning-based Internet-of-Vehicles intrusion detection model training method, intrusion detection method and equipment
CN116055335B (zh) * | 2022-12-21 | 2023-12-19 | 深圳信息职业技术学院 | Federated-learning-based Internet-of-Vehicles intrusion detection model training method, intrusion detection method and equipment
CN115987985A (zh) * | 2022-12-22 | 2023-04-18 | 中国联合网络通信集团有限公司 | Model collaborative construction method, central cloud, edge node and medium
CN115987985B (zh) * | 2022-12-22 | 2024-02-27 | 中国联合网络通信集团有限公司 | Model collaborative construction method, central cloud, edge node and medium
CN116186341A (zh) * | 2023-04-25 | 2023-05-30 | 北京数牍科技有限公司 | Federated graph computation method, apparatus, equipment and storage medium
CN116186341B (zh) * | 2023-04-25 | 2023-08-15 | 北京数牍科技有限公司 | Federated graph computation method, apparatus, equipment and storage medium
Also Published As
Publication number | Publication date |
---|---|
CN111538598A (zh) | 2020-08-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021219053A1 (zh) | Federated learning modeling method, device, equipment and readable storage medium | |
WO2023005133A1 (zh) | Federated learning modeling optimization method, equipment, readable storage medium and program product | |
WO2021083276A1 (zh) | Method, apparatus, equipment and medium for combining horizontal and vertical federation | |
WO2019174595A1 (zh) | Resource configuration method, apparatus, terminal and storage medium | |
CN113157422A (zh) | Deep-reinforcement-learning-based cloud data center cluster resource scheduling method and apparatus | |
CN114298322B (zh) | Federated learning method and apparatus, system, electronic equipment, computer-readable medium | |
US20240176906A1 (en) | Methods, apparatuses, and systems for collaboratively updating model by multiple parties for implementing privacy protection | |
CN111429142B (zh) | Data processing method, apparatus and computer-readable storage medium | |
CN111428884A (zh) | Forward-method-based federated modeling method, equipment and readable storage medium | |
CN111898768A (zh) | Data processing method, apparatus, equipment and medium | |
CN113645637B (zh) | Ultra-dense network task offloading method, apparatus, computer equipment and storage medium | |
CN113163006A (zh) | Task offloading method and system based on cloud-edge collaborative computing | |
CN106293947B (zh) | GPU-CPU hybrid resource allocation system and method in a virtualized cloud environment | |
CN111338808B (zh) | Collaborative computing method and system | |
CN114065864A (zh) | Federated learning method, federated learning apparatus, electronic equipment and storage medium | |
CN112541570A (zh) | Multi-model training method, apparatus, electronic equipment and storage medium | |
CN108650248A (zh) | Creative value-adding system based on blockchain and AR visualization technology | |
KR102590112B1 (ko) | Coding- and incentive-based mechanism for distributed machine learning training in the Internet of Things environment | |
CN108289115B (zh) | Information processing method and system | |
WO2023109246A1 (zh) | Breakpoint-privacy-protection-oriented method, apparatus, equipment and medium | |
US11281890B2 (en) | Method, system, and computer-readable media for image correction via facial ratio | |
WO2020206696A1 (zh) | Application cleaning method, apparatus, storage medium and electronic equipment | |
CN115334321B (zh) | Method and apparatus for obtaining access popularity of a video stream, electronic equipment and medium | |
CN111443806A (zh) | Interactive task control method, apparatus, electronic equipment and storage medium | |
CN113139764A (zh) | Order dispatching method, apparatus, storage medium and electronic equipment | |
Legal Events
- Code 121: EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21796649; Country of ref document: EP; Kind code of ref document: A1)
- Code NENP: Non-entry into the national phase (Ref country code: DE)
- Code 122: EP: PCT application non-entry in European phase (Ref document number: 21796649; Country of ref document: EP; Kind code of ref document: A1)