CN110598870A - Method and device for federated learning

Info

Publication number: CN110598870A (application CN201910824202.0A)
Authority: CN (China)
Prior art keywords: participants, participant, coordinator, report, federated learning
Legal status: granted; active
Other languages: Chinese (zh)
Other versions: CN110598870B (granted)
Inventors: 程勇, 衣志昊, 刘洋, 陈天健
Assignee: WeBank Co Ltd
Priority date: 2019-09-02

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Abstract

The invention discloses a method and device for federated learning. In the method, a coordinator receives reports from a plurality of participants and, according to those reports, determines the participants that meet a preset condition as the participants in federated learning, where a report characterizes a participant's expected available resources; the coordinator then conducts federated learning model training with those participants. When the method is applied in financial technology (Fintech), participants that do not meet the expected available resource condition are excluded as far as possible, which reduces the impact of the participants' transmission efficiency on the performance of the federated learning model while the coordinator conducts federated learning with the selected participants.

Description

Method and device for federated learning
Technical Field
The invention relates to the field of financial technology (Fintech) and the field of federated learning, and in particular to a method and device for federated learning.
Background
With the development of computer technology, more and more technologies (big data, distributed computing, blockchain, artificial intelligence, etc.) are being applied in the financial field, and the traditional financial industry is gradually shifting toward financial technology (Fintech). Currently, many financial strategies in the Fintech field are adjusted based on the results of federated learning over large amounts of financial transaction data, and adjusting those strategies is likely to affect a financial institution's profit and loss. The performance of the federated learning model is therefore critical to a financial institution.
When the number of federated learning participants is large, and especially when the participants are mobile terminals, the participants differ greatly from one another: for example, the online time of many participants is irregular and unstable, and their connections easily drop or are interrupted. This disturbs the training of the federated learning model, so the performance of the resulting model may fail to meet the predetermined requirement. This is the problem to be solved.
Disclosure of Invention
The embodiments of the present application provide a federated learning method and device, solving the prior-art problem that the performance of the obtained federated learning model cannot meet preset requirements.
In a first aspect, an embodiment of the present application provides a federated learning method, including: a coordinator receives reports from a plurality of participants; the coordinator determines, according to the reports, the participants meeting a preset condition as the participants in federated learning, where a report characterizes a participant's expected available resources; and the coordinator conducts federated learning model training with the participants in federated learning.
In this method, the coordinator receives the reports of the plurality of participants and screens out the participants whose reports meet the preset condition. Because the preset condition captures each participant's expected available resources, the influence of the participants' transmission efficiency on the performance of the federated learning model is reduced while the coordinator conducts federated learning with the selected participants.
In an alternative embodiment, the report of a participant in federated learning includes that participant's expected idle period, and the method further comprises: the coordinator takes the common time period within the expected idle periods of the participants in federated learning as the time period for federated learning model training.
In this way, because the coordinator schedules training inside the common time period of the participants' expected idle periods, federated learning training takes place while all participants are idle, which helps the training run smoothly.
In an alternative embodiment, the coordinator sends a model update request to the participants in federated learning; the model update request instructs those participants to conduct federated learning model training over the common time period.
In this way, after receiving the model update request, the participants learn the common time period for federated learning and can exchange information accordingly during the federated learning process.
In an alternative embodiment, a participant's report includes at least one operation index, and each operation index corresponds to a preset weight value. The coordinator determines whether the participant's report satisfies the preset condition as follows: the coordinator determines the score of each operation index according to the operation indexes and a preset scoring rule; the coordinator determines the participant's report score from the score of each operation index and its corresponding preset weight value; and the coordinator determines whether the report satisfies the preset condition according to the report score, the preset condition being that the report score is greater than or equal to a preset score threshold.
In this way, the score of each operation index is determined from the indexes and a preset scoring rule, and the participant's report score is then determined from those scores and their preset weight values, which in turn decides whether the report satisfies the preset condition. Because the coordinator can set each index's weight according to its importance in the specific situation, the influence of every operation index is considered comprehensively, improving the accuracy of deciding whether a participant's report meets the preset condition.
In an alternative embodiment, the reports of the plurality of participants serve as applications by those participants to join federated learning.
In this way, the plurality of participants actively push reports to inform the coordinator that they wish to join federated learning, from which the coordinator selects the participants that will take part.
In an alternative embodiment, the reports of the plurality of participants include operational conditions of the plurality of participants.
In this way, the reports of the multiple participants include their operating conditions, which makes it convenient for the coordinator to select suitable participants.
In a second aspect, the present application provides a federated learning device, comprising: a receiving module, configured to receive reports from a plurality of participants; and a processing module, configured to determine, according to the reports, the participants meeting a preset condition as the participants in federated learning, where a report characterizes a participant's expected available resources, and to conduct federated learning model training with the participants in federated learning.
In an alternative embodiment, the report of the participant participating in the federal learning includes an expected idle period for the participant participating in the federal learning, and the processing module is further configured to: and taking the common time period in the idle time period expected by the participants participating in the federal learning as the time period for the federal learning model training.
In an optional embodiment, the processing module is further configured to: sending a model update request to the participants participating in federated learning; the model update request instructs the participants participating in federated learning to conduct federated learning model training over the common time period.
In an alternative embodiment, a participant's report includes at least one operation index, and each operation index corresponds to a preset weight value. The processing module is specifically configured to determine whether the participant's report satisfies the preset condition as follows: determine the score of each operation index according to the operation indexes and a preset scoring rule; determine the participant's report score from the determined scores and their corresponding preset weight values; and determine whether the report satisfies the preset condition according to the report score, the preset condition being that the report score is greater than or equal to a preset score threshold.
In an alternative embodiment, the report of the plurality of participants is used to instruct the plurality of participants to apply for participation in federal learning.
In an alternative embodiment, the reports of the plurality of participants include operational conditions of the plurality of participants.
For the advantages of the second aspect and the embodiments of the second aspect, reference may be made to the advantages of the first aspect and the embodiments of the first aspect, which are not described herein again.
In a third aspect, an embodiment of the present application provides a computer device, which includes a program or instructions, and when the program or instructions are executed, the computer device is configured to perform the method of each embodiment of the first aspect and the first aspect.
In a fourth aspect, an embodiment of the present application provides a storage medium, which includes a program or instructions, and when the program or instructions are executed, the program or instructions are configured to perform the method of the first aspect and the embodiments of the first aspect.
Drawings
Fig. 1 is a schematic diagram of an architecture for federal learning provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a process of federated learning provided in the embodiment of the present application;
fig. 3 is a schematic flowchart illustrating steps of a federated learning method according to an embodiment of the present application;
fig. 4 is a timing diagram illustrating a federated learning method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a federated learning device according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solutions, the technical solutions will be described in detail below with reference to the drawings and the specific embodiments of the specification, and it should be understood that the specific features in the embodiments and examples of the present application are detailed descriptions of the technical solutions of the present application, but not limitations of the technical solutions of the present application, and the technical features in the embodiments and examples of the present application may be combined with each other without conflict.
Federated learning is a way of doing machine learning by federating different participants. As shown in fig. 1, one joint model parameter update in federated learning consists of two steps: (a) each participant (also called a data owner, or client) trains the machine learning model using only its own locally held data and sends its model parameter update, e.g., model weights or gradient information, to a coordinator (also called a parameter server, or aggregation server); (b) the coordinator fuses the model parameter updates received from the different participants (e.g., takes a weighted average) and redistributes the fused update to the individual participants. In federated learning, the participants never need to expose their own data to the other participants or to the coordinator, so federated learning protects user privacy well and safeguards data security.
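As a concrete illustration of step (b), the sketch below fuses per-participant parameter updates by a sample-count-weighted average; the function name, dictionary layout, and weighting-by-sample-count choice are assumptions for illustration, not prescribed by the patent.

```python
from typing import Dict, List

def fuse_updates(updates: List[Dict[str, float]],
                 sample_counts: List[int]) -> Dict[str, float]:
    """Step (b): fuse the participants' model parameter updates by a
    weighted average, weighting each participant by its local sample count.
    """
    total = sum(sample_counts)
    fused = {name: 0.0 for name in updates[0]}
    for update, count in zip(updates, sample_counts):
        for name, value in update.items():
            fused[name] += value * count / total
    return fused  # redistributed to every participant in the next round

# Two participants report updates for a two-parameter model.
print(fuse_updates([{"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 1.0}], [100, 300]))
# {'w': 2.5, 'b': 0.75}
```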
When the participants' data features overlap heavily but their users overlap little, the part of the data in which the participants share the same user data features but the users are not identical is taken out for joint machine learning (this mode of joint machine learning is hereinafter referred to as the first federated learning mode).
For example, consider two banks in different regions: their user groups come from their respective regions, so the intersection of the user groups is very small, yet their businesses are very similar and the recorded user data features are the same. The first federated learning mode can help the two banks build a joint model to predict customer behavior. As another example, a mobile user input prediction (input autocompletion and recommendation) model can be built with the first federated learning mode by combining many mobile terminals; likewise, a mobile user search keyword prediction (keyword autocompletion and recommendation) model can be built by combining many mobile terminals in the same way. The first federated learning mode can also be used in edge computing and the Internet of Things (IoT), to address the problem that IoT devices (e.g., video sensor nodes and wireless cameras) lack sufficient bandwidth to upload large amounts of data.
As shown in fig. 2, in the first step of a model parameter update in this federated learning mode, one possible procedure is that the coordinator first sends a model update request (model_update_request) message to the participants. The model update request message can play two main roles: (a) it informs a participant to start a model parameter update, i.e., it acts as the start signal for training the participant's local model; (b) it can also carry the latest joint model parameters held by the coordinator, i.e., it can be used to distribute the latest joint model parameters to the participants. The joint model parameters may be the parameters of the federated learning model, e.g., the weights of the connections between the nodes of a neural network; alternatively, they may be gradient information of the federated learning model, e.g., the gradient information in a neural network gradient descent algorithm. A participant may use the joint model parameters as the starting point of local model training in order to continue training its local model.
The coordinator may send the model update request message to each participant separately using point-to-point communication, or it may send the message to multiple participants simultaneously using multicast or broadcast.
After the coordinator finishes sending the model update request message, it enters a waiting state, waiting to receive the model parameter updates sent by the participants.
After receiving the model update request message, a participant A can obtain the latest joint model parameters from the message and perform, or continue, model training locally using the data it owns. The coordinator-distributed joint model parameters received by participant A can serve as the initial values for participant A's local machine learning model training, e.g., as initial model parameters or initial gradient information.
After participant A completes the model parameter update locally, it can send the locally obtained model parameter update to the coordinator. Participant A may send the update in encrypted form, for example using homomorphic encryption techniques.
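A participant-side sketch of this exchange, under the caveat that all names are illustrative: the handler takes the coordinator's joint parameters as the starting point, trains locally, and optionally encrypts the update (standing in for a scheme such as homomorphic encryption) before returning it.

```python
from typing import Callable, Dict, Optional

def on_model_update_request(joint_params: Dict[str, float],
                            local_data: list,
                            train_fn: Callable[[Dict[str, float], list], Dict[str, float]],
                            encrypt_fn: Optional[Callable] = None) -> Dict[str, float]:
    """Handle a model_update_request: treat the coordinator's joint model
    parameters as the starting point, train locally on the participant's
    own data, and return the local update (optionally encrypted first).
    """
    local_update = train_fn(joint_params, local_data)  # local training step
    if encrypt_fn is not None:                         # e.g. homomorphic encryption
        local_update = encrypt_fn(local_update)
    return local_update                                # sent back to the coordinator
```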
The joint model constructed by using the first federated learning manner may be a conventional machine learning model (e.g., linear regression, support vector machine, etc.) or various deep learning models (i.e., deep neural network models).
When the first federated learning mode is applied to jointly build a machine learning model across many mobile terminals (for example, smartphones), the participants are mobile terminals and their number is generally large, e.g., thousands or even hundreds of thousands of participants. In an Internet of Things application scenario, the participants of the first federated learning mode may be a large number of wireless sensor nodes, for example wireless cameras.
In such a mobile internet scenario, managing the training process of the first federated learning mode, including selecting participants, setting the model training start and end times, testing model performance, and (periodically) retraining the model (periodic model updates), is a complex management task for the coordinator. In particular, unlike fixed terminals and servers, a mobile terminal's online time is random: its network connection may be interrupted at any moment, possibly for a long time. If the online times of many participants in a selected group are irregular and unstable, e.g., their connections easily drop or break, the training of the first-mode model is disturbed: the training may fail to complete, the model may not converge, or the performance of the obtained model may not meet the preset requirement.
For mobile terminals, participating in federated learning introduces additional overhead, including communication overhead (e.g., mobile network traffic), network bandwidth, power consumption, computing resource overhead, and storage resource overhead. With the popularity of 4G networks and Wi-Fi, traffic overhead is no longer a major concern for mobile terminal users. However, network speed, battery life, and computing resources remain key concerns for mobile terminal users and key indicators of user experience. If the coordinator selects an unsuitable user, or an unsuitable time, for first-mode model training, the computing resources consumed by local model training, the network resources consumed by sending model parameter updates, the participant's power consumption, and so on are likely to seriously degrade the mobile terminal's user experience, and the user may well decline to participate in the next round of training. For example, if the coordinator enlists a user's mobile phone for first-mode model training while the user is playing an online game on that phone, the user will likely stop participating once gameplay is visibly affected. If mobile terminals often refuse to participate in, or interrupt, ongoing first-mode model training, the coordinator cannot complete the intended model training.
Furthermore, APP management and monitoring software is common on current mobile terminals. If a federated-learning-based APP is found to occupy large amounts of communication or computing resources, or to consume a great deal of power, harming user experience, the monitoring software may alert the user, who may choose to delete the APP. This hinders the adoption of federated-learning-based APPs and the commercialization of federated learning.
In the course of a financial institution's business operations (banking, insurance, or securities institutions, with businesses such as a bank's loan and deposit services), the adjustment of many financial strategies in the Fintech field depends on the results of federated learning over large amounts of financial transaction data, and adjusting those strategies is likely to affect the institution's profit and loss. When the number of federated learning participants is large, especially when they are mobile terminals, the participants differ greatly: many participants' online times are irregular and unstable, and their connections easily drop or break, which disturbs the training of the federated learning model, so the performance of the obtained model may not meet the predetermined requirement. This does not satisfy the needs of financial institutions such as banks and cannot guarantee the efficient operation of their services.
To this end, as shown in fig. 3, an embodiment of the present application provides a method for federated learning.
Step 301: the coordinator receives reports of multiple participants.
Step 302: The coordinator determines, according to the reports of the participants, the participants meeting the preset condition as the participants participating in federated learning.
Specifically, a report may include an operating condition or an available resource condition; the report characterizes the resources a participant expects to have available. The available resource condition may cover information such as the participant's computing resources, the participant's power, and the participant's network bandwidth.
Step 303: The coordinator conducts federated learning model training with the participants participating in federated learning.
In the method of steps 301 to 303, the coordinator receives the reports of the plurality of participants and screens out the participants whose reports meet the preset condition. Because the preset condition captures each participant's expected available resources, the influence of the participants' transmission efficiency on the performance of the federated learning model is reduced while the coordinator conducts federated learning with the selected participants.
In step 301, the report of each participant includes, but is not limited to, at least one of: the difference between the participant's current performance index and its expected performance index; the amount of data the participant owns; the participant's power; the participant's computing resources; and the participant's network bandwidth. These are only examples of what a report may contain.
In an alternative embodiment of step 301, the reports of the plurality of participants serve as applications by those participants to join federated learning.
In another alternative embodiment of step 301, the reports of the plurality of participants include operational conditions of the plurality of participants.
In an alternative implementation of step 302, a participant's report includes at least one operation index, and each operation index corresponds to a preset weight value. The coordinator determines whether the participant's report satisfies the preset condition as follows: the coordinator determines the score of the at least one operation index according to the operation indexes and a preset scoring rule; the coordinator determines the participant's report score from the score of the at least one operation index and the corresponding preset weight value; and the coordinator determines, from the report score, whether the report satisfies the preset condition, the preset condition being that the report score is greater than or equal to a preset score threshold.
Note that the score of the at least one operation index is defined as follows: (1) when the at least one operation index contains only one index, its score is the score of that single index; (2) when the at least one operation index contains two or more indexes, its score is the combination of the scores of all of those indexes.
Likewise, the score of the at least one operation index and its corresponding preset weight value mean the following: (1) when the at least one operation index contains only one index, they are the score and preset weight value of that single index; (2) when the at least one operation index contains two or more indexes, they are the combination of each index's score and its corresponding preset weight value.
In this way, the score of each operation index is determined from the indexes and a preset scoring rule, the participant's report score is then determined from those scores and the corresponding preset weight values, and this decides whether the report satisfies the preset condition. Because the coordinator can set each index's weight according to its importance in the specific situation, the influence of every operation index is considered comprehensively, improving the accuracy of deciding whether a participant's report meets the preset condition.
For example, suppose the coordinator considers two factors: the participants' network bandwidth and the participants' power. The coordinator regards network bandwidth as more important, so it sets the network bandwidth weight to 10 and the power weight to 8. Participant 1's network bandwidth is 6 MB/s (megabytes per second) and participant 2's is 8 MB/s; participant 1's power is 1600 mAh and participant 2's is 1000 mAh. The preset scoring rule for network bandwidth is that, in the range 0-10 MB/s, the bandwidth score is 10 × (bandwidth / 10 MB/s); the preset scoring rule for power is that, in the range 0-2000 mAh, the power score is 10 × (power / 2000 mAh). Participant 1's report score is therefore 10 × 6 + 8 × 8 = 60 + 64 = 124, and participant 2's report score is 10 × 8 + 8 × 5 = 80 + 40 = 120. Assuming a preset score threshold of 110, the reports of both participant 1 and participant 2 meet the preset condition.
In addition, a participant may be scored using operation indexes such as the difference between the participant's current and expected performance indexes, the amount of data the participant owns, and the participant's computing resources. For example, the preset scoring rule for the difference between the current and expected performance indexes can score by the ratio of that difference to the expected performance index, via a mapping from ratio to score; the preset scoring rule for the participant's data amount can score by the participant's share of the total data held by all participants in the federated learning model training, with a higher share yielding a higher score; and, since computing resources are usually bounded with a known upper limit, a mapping from the participant's computing resources to a score can likewise be established.
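The following sketch reproduces the numbers in the bandwidth/power example above; the rule table, field names, and function are illustrative assumptions rather than the patent's fixed interface.

```python
# Weighted report scoring under the example rules above (illustrative names).
SCORING_RULES = {
    # index name -> (preset weight, rule mapping a raw value to a 0-10 score)
    "bandwidth_mb_s": (10, lambda v: 10 * v / 10.0),    # 0-10 MB/s range
    "power_mah":      (8,  lambda v: 10 * v / 2000.0),  # 0-2000 mAh range
}

def report_score(report: dict) -> float:
    """Report score = sum over indexes of (index score x preset weight)."""
    return sum(weight * rule(report[name])
               for name, (weight, rule) in SCORING_RULES.items())

threshold = 110
for name, report in [("participant 1", {"bandwidth_mb_s": 6, "power_mah": 1600}),
                     ("participant 2", {"bandwidth_mb_s": 8, "power_mah": 1000})]:
    score = report_score(report)
    print(name, score, "meets condition:", score >= threshold)
# participant 1 124.0 meets condition: True
# participant 2 120.0 meets condition: True
```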
In steps 301 to 303, the time period for federated learning may be determined according to the following alternative embodiment:
The report of a participant participating in federated learning includes that participant's expected idle period, and the method further comprises: the coordinator takes the common time period within the expected idle periods of the participants in federated learning as the time period for federated learning model training.
In this way, because the coordinator uses the common time period within the participants' expected idle periods as the period of federated learning, training is carried out while all participants in federated learning are idle, which safeguards the participants' available resources during the federated learning process.
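To make the scheduling concrete, here is a minimal sketch (an illustration, not the patent's prescribed algorithm) that treats each expected idle period as a (start, end) interval and intersects them to obtain the common training period.

```python
from typing import List, Optional, Tuple

def common_idle_period(periods: List[Tuple[float, float]]) -> Optional[Tuple[float, float]]:
    """Intersect the expected idle periods (start, end) of the selected
    participants; the intersection, if non-empty, is the common time
    period used for federated learning model training.
    """
    start = max(p[0] for p in periods)  # latest participant to become idle
    end = min(p[1] for p in periods)    # earliest participant to become busy
    return (start, end) if start < end else None

# Participants idle over 1:00-5:00, 2:00-6:00 and 3:00-7:00 share 3:00-5:00.
print(common_idle_period([(1, 5), (2, 6), (3, 7)]))  # (3, 5)
```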
In an alternative implementation of steps 301 to 303, the coordinator sends a model update request to the participants in federated learning; the model update request instructs those participants to conduct federated learning model training over the common time period.
In this way, after receiving the model update request, the participants learn the common time period for federated learning and can exchange information accordingly during the federated learning process.
A method for federated learning provided in an embodiment of the present application is described in detail below with reference to fig. 4.
The core idea of the technical solution provided by the application is that a mobile terminal (or IoT device) actively chooses to apply to the coordinator to participate in horizontal federated learning model training, and actively reports its operating status and network connection conditions. After the coordinator receives the applications and reports of multiple mobile terminals (or IoT devices), it selects participants from these candidates and determines the start time of federated learning model training according to the reported contents. In other words, the coordinator gathers the candidate participants, and their information, for horizontal federated learning through the active applications and reports of the mobile terminals (or IoT devices).
As illustrated in fig. 4, after the coordinator (also referred to as a parameter server or aggregation server) finishes starting up, it sends the mobile terminals or IoT devices a coordinator-ready (coordinator_ready) message. The coordinator_ready message mainly informs a mobile terminal or IoT device that it may start sending a device report to the coordinator, and the message may also carry key information. The coordinator may send the coordinator_ready message to a mobile terminal or IoT device through point-to-point communication, or to multiple mobile terminals or IoT devices simultaneously through multicast or broadcast. For convenience of description, a mobile terminal or IoT device to which this may apply is hereinafter simply called a device; other possible devices, such as fixed wireless terminals, are also included. As shown in fig. 4, device D does not send a device report to the coordinator; device C is not selected to participate in horizontal federated learning model training; only devices A and B are selected to participate.
After receiving the coordinator_ready message sent by the coordinator, a device may choose to send a device report (device_report) message to the coordinator. The mobile terminal or IoT device actively sends a device report to the coordinator, thereby applying to participate in horizontal federated learning model training and reporting its status. By receiving the device reports sent by multiple devices, the coordinator collects the candidate participants' information, selects the participants for horizontal federated learning model training from the candidates, and determines the start time of model training according to the contents of the device reports. The coordinator then sends a model update request message to the selected participants.
Specifically, the "device _ report" message may have two roles.
The first role is that a device applies to the coordinator via a "device _ report" message that it wishes to add lateral federal learning model training, e.g., the coordinator may consider a device to send to the coordinator. The "device _ report" message indicates that the device wishes to engage in lateral federal learning model training.
The second role is that a device reports some of its operating conditions to the coordinator via a "device _ report" message. The device operation status, i.e. the "device _ report" message, may include one or more of the following information: the urgency with which the device needs to update the model (e.g., the difference between the performance of the existing model and the predetermined performance index, the larger the difference indicates that federal learning is needed, the more urgent the model is updated), the amount of data that the device has that can be used for the training of the federal learning model (i.e., the size of the training set), the amount of data that the device has that can be used for testing the performance of the federal learning model (i.e., the size of the test set), the power of the device (e.g., the device is in an active power supply state), the computing resources of the device (e.g., the device is idle), the network connection and bandwidth of the device (e.g., in a Wi-Fi high-speed connection), and the time that the current operating condition of the device can be maintained (. The time that the current operation condition of the equipment can be kept can be understood as the time that the equipment can stably participate in the training of the transverse federal learning model.
A device may estimate its operating status from statistics of its historical usage and operating conditions; for example, a smartphone may find that it is typically idle at 12 o'clock every night, keeps a Wi-Fi connection, and is plugged in or has sufficient power.
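One possible shape for such a device_report message, as a sketch; the field names are assumptions mapped to the items listed above, not defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class DeviceReport:
    """Illustrative layout of a device_report message."""
    model_gap: float       # gap between current and expected performance index
    train_set_size: int    # data volume usable for model training
    test_set_size: int     # data volume usable for model performance testing
    power_mah: int         # battery level (or active power supply proxy)
    cpu_idle: bool         # whether computing resources are currently idle
    bandwidth_mb_s: float  # network connection bandwidth (e.g. over Wi-Fi)
    stable_hours: float    # how long the current operating state should hold
```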
After receiving the device_report messages sent by the devices, i.e., after collecting candidate device information through the device reports, the coordinator may select some or all of the devices to participate in horizontal federated learning model training; for example, the coordinator may sort the devices by data amount and network bandwidth and then select among them.
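A minimal sketch of that selection step, assuming each report carries a data amount and a bandwidth field (both names hypothetical), ranking by data amount and breaking ties by bandwidth.

```python
def select_participants(reports: dict, k: int) -> list:
    """Rank candidate devices by reported data volume, breaking ties by
    network bandwidth, and keep the top k (one possible selection rule)."""
    ranked = sorted(reports,
                    key=lambda d: (reports[d]["data"], reports[d]["bandwidth"]),
                    reverse=True)
    return ranked[:k]

reports = {"A": {"data": 5000, "bandwidth": 8.0},
           "B": {"data": 5000, "bandwidth": 6.0},
           "C": {"data": 1200, "bandwidth": 9.0}}
print(select_participants(reports, 2))  # ['A', 'B']
```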
After selecting the participants, the coordinator may determine the start time of federated learning model training according to the time during which these participants can subsequently participate stably. The coordinator may take the latest idle start time among the selected participants as the start time of horizontal federated learning model training; for example, if three participants A, B, and C will become idle at 1, 2, and 3 o'clock in the morning respectively and can then participate in training, the coordinator may choose to start horizontal federated learning model training at 3 o'clock in the morning.
Further, the coordinator may determine the total training time and the end time of training; for example, the total duration of model training may be 2 hours.
After determining the start time of federated learning model training, the coordinator may send a model_update_request message to the selected devices, instructing each participant to start, or continue, local model training. The coordinator can send the model_update_request message to each selected device by point-to-point unicast, or to multiple selected devices simultaneously by multicast. The coordinator may also broadcast the model_update_request message to multiple selected devices simultaneously; in this implementation, the broadcast message needs to carry the IDs of the selected devices, so as to distinguish the selected devices from those not selected.
In the technical solution provided by the application, a mobile terminal or IoT device actively sends the coordinator an application to participate in horizontal federated learning together with its own report (the device report). This lets the coordinator collect candidate device information simply by receiving device reports, effectively avoiding the need to actively query every participant's report and reducing the coordinator's operation and maintenance burden. Especially in scenarios where the coordinator would otherwise have to query the reports of tens of thousands or even hundreds of thousands of mobile terminals, the coordinator's operation and maintenance costs can be reduced markedly.
Because each device actively reports its own status (the device report), the coordinator can fully understand every device's operating state and network connection condition, and can set the start time of federated learning model training according to the selected participants' idle times and network conditions. This ensures enough time to complete the federated learning model training and effectively avoids situations where some participants' network connections are interrupted mid-training.
A device actively reporting its status (the device report) and applying to participate in horizontal federated learning model training respects the mobile terminal's own volition, minimizing the impact on the mobile terminal's user experience.
As shown in fig. 5, the present application provides a federated learning device, comprising: a receiving module, configured to receive reports from a plurality of participants; and a processing module, configured to determine, according to the reports, the participants meeting a preset condition as the participants in federated learning, where a report characterizes a participant's expected available resources, and to conduct federated learning model training with the participants in federated learning.
In an alternative embodiment, the report of the participant participating in the federal learning includes an expected idle period of the participant participating in the federal learning, and the processing module 502 is further configured to: and taking the common time period in the idle time period expected by the participants participating in the federal learning as the time period for the federal learning model training.
In an optional implementation, the processing module 502 is further configured to: sending a model update request to the participants participating in federated learning; the model update request instructs the participants participating in federated learning to conduct federated learning model training over the common time period.
In an alternative embodiment, a participant's report includes at least one operation index, and each operation index corresponds to a preset weight value. The processing module 502 is specifically configured to determine whether the participant's report satisfies the preset condition as follows: determine the score of each operation index according to the operation indexes and a preset scoring rule; determine the participant's report score from the scores of the operation indexes and the corresponding preset weight values; and determine whether the report satisfies the preset condition according to the report score, the preset condition being that the report score is greater than or equal to a preset score threshold.
In an alternative embodiment, the report of the plurality of participants is used to instruct the plurality of participants to apply for participation in federal learning.
In an alternative embodiment, the reports of the plurality of participants include operational conditions of the plurality of participants.
Embodiments of the present application provide a computer device, which includes a program or an instruction, and when the program or the instruction is executed, the program or the instruction is configured to execute a method for federated learning and any optional method provided in embodiments of the present application.
Embodiments of the present application provide a storage medium, which includes a program or an instruction, and when the program or the instruction is executed, the program or the instruction is used to execute a method for federated learning and any optional method provided in embodiments of the present application.
Finally, it should be noted that: as will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for federated learning, comprising:
the coordinator receives reports of a plurality of participants;
the coordinator determines participants meeting preset conditions according to the reports of the multiple participants and takes the participants as participants participating in federal learning; wherein the report characterizes an expected available resource profile for the participant;
and the coordinator carries out federated learning model training through the participants participating in federated learning.
2. The method of claim 1, wherein the report of the participant participating in federated learning includes an expected idle period for the participant participating in federated learning, the method further comprising:
and the coordinator takes the common time period in the idle time period expected by the participants participating in the federal learning as the time period for the federal learning model training.
3. The method of claim 2, wherein the coordinator determines the participants meeting the preset condition according to the reports of the plurality of participants, and after the participants participating in the federal learning are determined, before the coordinator performs the federal learning model training by the participants participating in the federal learning, the method further comprises:
the coordinator sends a model update request to the participants participating in the federated learning; the model update request instructs the participants participating in federated learning to conduct federated learning model training over the common time period.
4. The method of any of claims 1 to 3, wherein the participant report includes at least one operational indicator, each operational indicator corresponding to a predetermined weight value; the coordinator determines whether the report of the participant satisfies a preset condition in the following manner:
the coordinator determines the score of at least one operation index according to at least one operation index and a preset scoring rule;
the coordinator determines the report score of the participant according to the score of the at least one running index and the corresponding preset weight value;
the coordinator determines whether the report of the participant meets a preset condition according to the report score of the participant; the preset condition is that the report score of the participant is greater than or equal to a preset score threshold value.
5. The method of any of claims 1 to 3, wherein the report of the plurality of participants is used to instruct the plurality of participants to apply for participation in federal learning.
6. A method as claimed in any one of claims 1 to 3, wherein the reports of the plurality of participants include operating conditions of the plurality of participants.
7. A federated learning device, comprising:
a receiving module for receiving reports of a plurality of participants;
the processing module is used for determining the participants meeting the preset conditions according to the reports of the multiple participants, and the participants are used as the participants participating in the federal learning; wherein the report characterizes an expected available resource profile for the participant; and for conducting federal learning model training by the participants participating in federal learning.
8. The apparatus of claim 7, wherein the report of the participant participating in federated learning includes an expected idle period for the participant participating in federated learning, the processing module further to:
and taking the common time period in the idle time period expected by the participants participating in the federal learning as the time period for the federal learning model training.
9. A computer device comprising a program or instructions that, when executed, perform the method of any of claims 1 to 4.
10. A storage medium comprising a program or instructions which, when executed, perform the method of any one of claims 1 to 4.
Priority application: CN201910824202.0A, filed 2019-09-02 by WeBank Co Ltd

Publications

CN110598870A (application), published 2019-12-20
CN110598870B (grant), published 2024-04-30; legal status: Active


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110047230A1 (en) * 2006-11-17 2011-02-24 Mcgee Steven J Method / process / procedure to enable: The Heart Beacon Rainbow Force Tracking
CN104618381A (en) * 2015-02-06 2015-05-13 百度在线网络技术(北京)有限公司 Information interaction method and device
CN105303365A (en) * 2015-10-16 2016-02-03 东华大学 Office schedule planning method based on intelligent terminals
CN108460567A (en) * 2017-02-20 2018-08-28 百度在线网络技术(北京)有限公司 Method, apparatus, device and storage medium for determining activity time
CN108683743A (en) * 2018-05-21 2018-10-19 陕西师范大学 Method for allocating data-collection tasks among multiple mobile terminals
CN109189825A (en) * 2018-08-10 2019-01-11 深圳前海微众银行股份有限公司 Federated learning model building method based on horizontal data partitioning, server and medium
CN109299728A (en) * 2018-08-10 2019-02-01 深圳前海微众银行股份有限公司 Federated learning method, system and readable storage medium

Cited By (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325572A (en) * 2020-01-21 2020-06-23 深圳前海微众银行股份有限公司 Data processing method and device
CN111325572B (en) * 2020-01-21 2024-05-03 深圳前海微众银行股份有限公司 Data processing method and device
CN111275491A (en) * 2020-01-21 2020-06-12 深圳前海微众银行股份有限公司 Data processing method and device
WO2021147486A1 (en) * 2020-01-21 2021-07-29 深圳前海微众银行股份有限公司 Data processing method and apparatus
CN111275491B (en) * 2020-01-21 2023-12-26 深圳前海微众银行股份有限公司 Data processing method and device
WO2021147373A1 (en) * 2020-01-23 2021-07-29 华为技术有限公司 Method and device for implementing model update
WO2021185197A1 (en) * 2020-03-18 2021-09-23 索尼集团公司 Apparatus and method for federated learning, and storage medium
CN111428885B (en) * 2020-03-31 2021-06-04 深圳前海微众银行股份有限公司 User indexing method in federated learning and federated learning device
CN111428885A (en) * 2020-03-31 2020-07-17 深圳前海微众银行股份有限公司 User indexing method in federated learning and federated learning device
WO2021201370A1 (en) * 2020-03-31 2021-10-07 한국전자기술연구원 Federated learning resource management apparatus and system, and resource efficiency method therefor
WO2021197388A1 (en) * 2020-03-31 2021-10-07 深圳前海微众银行股份有限公司 User indexing method in federated learning and federated learning device
EP4120631A4 (en) * 2020-04-08 2023-08-09 Beijing Bytedance Network Technology Co., Ltd. Network connection method and device for training participant end of common training model
US11811864B2 (en) 2020-04-08 2023-11-07 Douyin Vision Co., Ltd. Network connection method and device for training participant end of common training model
WO2021219053A1 (en) * 2020-04-29 2021-11-04 深圳前海微众银行股份有限公司 Federated learning modeling method, apparatus and device, and readable storage medium
CN111522669A (en) * 2020-04-29 2020-08-11 深圳前海微众银行股份有限公司 Method, device, equipment and readable storage medium for optimizing horizontal federated learning system
CN111538598A (en) * 2020-04-29 2020-08-14 深圳前海微众银行股份有限公司 Federated learning modeling method, device, equipment and readable storage medium
GB2595849A (en) * 2020-06-02 2021-12-15 Nokia Technologies Oy Collaborative machine learning
CN111539731A (en) * 2020-06-19 2020-08-14 支付宝(杭州)信息技术有限公司 Blockchain-based federated learning method and device, and electronic equipment
CN111985649A (en) * 2020-06-22 2020-11-24 华为技术有限公司 Data processing method and device based on federated learning
US11429903B2 (en) 2020-06-24 2022-08-30 Jingdong Digits Technology Holding Co., Ltd. Privacy-preserving asynchronous federated learning for vertical partitioned data
WO2021259357A1 (en) * 2020-06-24 2021-12-30 Jingdong Technology Holding Co., Ltd. Privacy-preserving asynchronous federated learning for vertical partitioned data
WO2022014731A1 (en) * 2020-07-14 2022-01-20 엘지전자 주식회사 Scheduling method and device for aircomp-based federated learning
CN114039864A (en) * 2020-07-21 2022-02-11 中国移动通信有限公司研究院 Multi-device cooperation model generation method, device and equipment
CN111880568B (en) * 2020-07-31 2024-08-02 深圳前海微众银行股份有限公司 Unmanned aerial vehicle automatic control optimization training method, device, equipment and storage medium
CN111880568A (en) * 2020-07-31 2020-11-03 深圳前海微众银行股份有限公司 Optimization training method, device, equipment and storage medium for automatic control of unmanned aerial vehicle
WO2021174883A1 (en) * 2020-09-22 2021-09-10 平安科技(深圳)有限公司 Voiceprint identity-verification model training method, apparatus, medium, and electronic device
WO2022077232A1 (en) * 2020-10-13 2022-04-21 北京小米移动软件有限公司 Wireless communication method and apparatus, communication device, and storage medium
WO2021190638A1 (en) * 2020-11-24 2021-09-30 平安科技(深圳)有限公司 Federated modelling method based on non-uniformly distributed data, and related device
CN113255924A (en) * 2020-11-25 2021-08-13 中兴通讯股份有限公司 Federal learning participant selection method, device, equipment and storage medium
WO2022110975A1 (en) * 2020-11-25 2022-06-02 中兴通讯股份有限公司 Federated learning participant selection method and apparatus, and device and storage medium
CN112418439A (en) * 2020-11-25 2021-02-26 脸萌有限公司 Model usage method, device, storage medium and equipment
CN112418439B (en) * 2020-11-25 2023-09-26 脸萌有限公司 Model usage method, device, storage medium and equipment
WO2022110248A1 (en) * 2020-11-30 2022-06-02 华为技术有限公司 Federated learning method, device and system
CN112508205B (en) * 2020-12-04 2024-07-16 中国科学院深圳先进技术研究院 Federated learning scheduling method, device and system
CN112508205A (en) * 2020-12-04 2021-03-16 中国科学院深圳先进技术研究院 Method, device and system for scheduling federated learning
CN114626542A (en) * 2020-12-11 2022-06-14 新智数字科技有限公司 Joint learning monitoring method and device, and readable medium
CN112671613A (en) * 2020-12-28 2021-04-16 深圳市彬讯科技有限公司 Federated learning cluster monitoring method, device, equipment and medium
CN114723441A (en) * 2021-01-05 2022-07-08 中国移动通信有限公司研究院 Method, device and equipment for constraining behaviors of demander and participant
CN112766514B (en) * 2021-01-22 2021-12-24 支付宝(杭州)信息技术有限公司 Method, system and device for joint training of machine learning model
CN112766514A (en) * 2021-01-22 2021-05-07 支付宝(杭州)信息技术有限公司 Method, system and device for joint training of machine learning model
WO2022156910A1 (en) * 2021-01-25 2022-07-28 Nokia Technologies Oy Enablement of federated machine learning for terminals to improve their machine learning capabilities
US11711348B2 (en) 2021-02-22 2023-07-25 Begin Ai Inc. Method for maintaining trust and credibility in a federated learning environment
US11755954B2 (en) 2021-03-11 2023-09-12 International Business Machines Corporation Scheduled federated learning for enhanced search
CN113537518A (en) * 2021-07-19 2021-10-22 哈尔滨工业大学 Model training method, device, equipment and storage medium based on federated learning
WO2023040958A1 (en) * 2021-09-18 2023-03-23 大唐移动通信设备有限公司 Federated learning group processing method and apparatus, and functional entity
CN113836809B (en) * 2021-09-26 2023-12-01 上海万向区块链股份公司 Cross-industry data joint modeling method and system based on blockchain and federated learning
CN113836809A (en) * 2021-09-26 2021-12-24 上海万向区块链股份公司 Cross-industry data joint modeling method and system based on blockchain and federated learning
WO2023071789A1 (en) * 2021-10-26 2023-05-04 展讯通信(上海)有限公司 Federated learning method and apparatus, and communication method and apparatus
WO2023143082A1 (en) * 2022-01-26 2023-08-03 展讯通信(上海)有限公司 User device selection method and apparatus, and chip and module device
CN115021883B (en) * 2022-07-13 2022-12-27 北京物资学院 Signaling mechanism for application of federated learning in wireless cellular systems
CN115021883A (en) * 2022-07-13 2022-09-06 北京物资学院 Signaling mechanism for application of federated learning in wireless cellular systems

Also Published As

Publication number Publication date
CN110598870B (en) 2024-04-30

Similar Documents

Publication Publication Date Title
CN110598870B (en) Federated learning method and device
CN110443375B (en) Method and device for federated learning
Kuang et al. Offloading decision methods for multiple users with structured tasks in edge computing for smart cities
Wang et al. A novel reputation-aware client selection scheme for federated learning within mobile environments
Xia et al. Federated-learning-based client scheduling for low-latency wireless communications
Duan et al. Motivating smartphone collaboration in data acquisition and distributed computing
CN103444232B (en) Smart connection manager
Zhao et al. Social-aware incentive mechanism for vehicular crowdsensing by deep reinforcement learning
US10491535B2 (en) Adaptive data synchronization
CN114650227B (en) Network topology construction method and system in hierarchical federated learning scenario
Pei et al. Blockchain-enabled dynamic spectrum access: cooperative spectrum sensing, access and mining
US12033044B2 (en) Interactive and dynamic mapping engine (IDME)
CN114358307A (en) Federated learning method and device based on differential privacy
CN114548416A (en) Data model training method and device
Zhao et al. An incentive mechanism for big data trading in end-edge-cloud hierarchical federated learning
CN115174404A (en) Multi-device federated learning system based on SDN networking
CN117392483A (en) Album classification model training acceleration method, system and medium based on reinforcement learning
Brik et al. GSS-VF: A game-theoretic approach for service discovery in fog network of vehicles
CN116954926A (en) Server resource allocation method and device
CN116843016A (en) Federated learning method, system and medium based on reinforcement learning under mobile edge computing network
CN114401192B (en) Multi-SDN controller cooperative training method
CN112911620B (en) Information processing method and device, electronic equipment and storage medium
Arouj et al. Towards Energy-Aware Federated Learning via Collaborative Computing Approach
CN112396151B (en) Rumor event analysis method, device, equipment and computer-readable storage medium
CN102348239B (en) Service-based consultation method in mobile ad-hoc networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant