CN112636989B - Method and device for federated learning communication - Google Patents


Info

Publication number
CN112636989B
CN112636989B (application CN202011640351.0A)
Authority
CN
China
Prior art keywords
modeling
participating
devices
central server
groups
Prior art date
Legal status
Active
Application number
CN202011640351.0A
Other languages
Chinese (zh)
Other versions
CN112636989A (en)
Inventor
樊明璐
徐安滢
Current Assignee
Agricultural Bank of China
Original Assignee
Agricultural Bank of China
Priority date
Filing date
Publication date
Application filed by Agricultural Bank of China
Priority to CN202011640351.0A
Publication of CN112636989A
Application granted
Publication of CN112636989B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/04: Network management architectures or arrangements
    • H04L 41/046: Network management architectures or arrangements comprising network management agents or mobile agents therefor
    • H04L 41/14: Network analysis or design
    • H04L 41/145: Network analysis or design involving simulating, designing, planning or modelling of a network

Abstract

Embodiments of this application disclose a method and device for federated learning communication. The method comprises: a plurality of modeling participant devices send modeling requests carrying target information to a central server; the devices receive grouping information by which the central server divides them into a plurality of participating groups according to the target information; within each participating group, the devices run an election algorithm to determine the group's proxy communication device; the devices send their encryption parameters to the proxy communication device of their group, which forwards them to the central server; and the devices receive, via their group's proxy communication device, the central server's processing result for the encryption parameters. Because the central server communicates only with the proxy communication device of each participating group, the overall network communication overhead is reduced, the heavy communication load on the central server is relieved, and network communication efficiency is improved.

Description

Method and device for federated learning communication
Technical Field
The application relates to the field of computers, in particular to a method and a device for federated learning communication.
Background
Federated learning (Federated Learning) is an emerging foundational artificial-intelligence technique for updating models on terminal devices. Its design goal is to carry out efficient machine learning among multiple parties (devices) or computing nodes while guaranteeing information security during large-scale data exchange, protecting the privacy of terminal-device data, and ensuring legal compliance. A federated learning communication network generally includes a central server, which may be called the coordinator, and a plurality of devices participating in federated learning, which may be called the participants. While a model is being built by federated learning, these devices must continuously establish communication connections with the central server; that is, the central server communicates frequently with many devices. As a result, the central server's network communication load is high, the communication overhead of the whole network is large, and network communication efficiency is low.
In summary, in the current federated learning modeling process, the central server's network communication load is high and the overall communication overhead is large, resulting in low network communication efficiency.
Disclosure of Invention
To solve the prior-art problems that, during federated learning modeling, the central server's network communication load is very high, the communication overhead of the whole network is very large, and network communication efficiency is low, embodiments of this application provide a federated learning communication method that reduces overall network communication overhead and relieves the central server's heavy communication load.
The embodiment of the application provides a federated learning communication method, which comprises the following steps:
a plurality of modeling participating devices send modeling requests to a central server, wherein the modeling requests carry target information of the modeling participating devices;
the plurality of modeling participation devices receive grouping information that the central server divides the plurality of modeling participation devices into a plurality of participation groups according to the target information;
the plurality of modeling participant devices determine agent communication devices of the participant groups by utilizing an algorithm in the participant groups;
the plurality of modeling participant devices send encryption parameters to the agent communication devices of the participating groups, so that the agent communication devices of the participating groups send the encryption parameters to the central server;
and the plurality of modeling participant devices receive, from the proxy communication devices of their participating groups, the central server's processing result for the encryption parameters.
Optionally, the target information includes location information, communication bandwidth, and survival status information;
the plurality of modeling participant devices receiving grouping information that the central server divides the plurality of modeling participant devices into a plurality of participation groups according to the objective information comprises:
the central server equally divides the plurality of modeling participant devices into a plurality of participant groups according to the location information, communication bandwidth and survival status information of the plurality of modeling participant devices.
Optionally, the determining, by the plurality of modeling participant devices, of the proxy communication device of the participating group using an algorithm within the participating group includes:
the modeling participant devices within each participating group score one another with an algorithm according to agent-influence factors, and the highest-scoring modeling participant device is selected as the proxy communication device of that participating group.
Optionally, the method further includes:
and the central server monitors the proxy communication devices of the plurality of participating groups; if a proxy communication device fails or its term of office ends, the central server instructs the participating group in which that device is located to reinitiate the election process and determine a new proxy communication device.
Optionally, before the plurality of modeling participant devices send the encryption parameters to the proxy communication device of the participating group, the method further includes:
the proxy communication devices of the plurality of participating groups send acknowledgement proxy messages to the central server and other modeling participating devices within the plurality of participating groups.
Optionally, when the plurality of modeling participating devices receive grouping information that the central server divides the plurality of modeling participating devices into a plurality of participating groups according to the target information, the plurality of modeling participating devices also receive a modeling public key sent by the central server;
the plurality of modeling participant devices transmitting the encryption parameters to the proxy communication device of the participating group in which they are located includes:
the modeling participating device encrypts the intermediate parameters of the modeling participating device by using the modeling public key to obtain the encrypted parameters of the modeling participating device;
the plurality of modeling participant devices send the encryption parameters to the proxy communication devices of the participating groups in which the plurality of modeling participant devices are located.
Optionally, the method further includes:
the proxy communication devices of the participating groups receive the survival state information of other modeling participating devices of the participating group, if the survival state information of the other modeling participating devices is not received within the preset time, the loss of contact information of the other modeling participating devices which lose communication connection is confirmed, and the proxy communication devices of the participating groups send the loss of contact information to the central server.
The embodiment of the application further provides a federated learning communication method, which comprises the following steps:
the method comprises the steps that a central server receives modeling requests sent by a plurality of modeling participation devices, wherein the modeling requests carry target information of the modeling participation devices;
the central server divides the modeling participation devices into a plurality of participation groups according to the target information to obtain grouping information;
the central server sending the grouping information to the plurality of modeling participant devices;
the central server receives encryption parameters sent by a plurality of agent communication devices participating in a group;
the central server processes the encryption parameters to obtain a processing result;
the central server transmits the processing result to the agent communication devices of the plurality of participating groups.
The embodiment of the present application further provides a communication device for federated learning, the device includes:
the first sending unit is used for sending a modeling request to the central server, wherein the modeling request carries target information of the plurality of modeling participating devices;
a first receiving unit configured to receive grouping information in which the central server divides the plurality of modeling participation devices into a plurality of participation groups according to the target information;
the determining unit is used for determining the proxy communication equipment of the participating group by utilizing an algorithm in the participating group;
a second sending unit, configured to send the encryption parameters to the proxy communication device of the participating group in which the device is located, so that the proxy communication devices of the participating groups send the encryption parameters to the central server;
and the second receiving unit is used for receiving the processing result of the encryption parameter from the central server, which is sent by the proxy communication equipment of the participating group.
The embodiment of the application further provides a communication device for federated learning, the device includes:
the modeling system comprises a first receiving unit, a second receiving unit and a third receiving unit, wherein the first receiving unit is used for receiving modeling requests sent by a plurality of modeling participating devices, and the modeling requests carry target information of the plurality of modeling participating devices;
the dividing unit is used for dividing the modeling participation devices into a plurality of participation groups according to the target information to obtain grouping information;
a first transmitting unit configured to transmit the grouping information to the plurality of modeling participant devices;
a second receiving unit, configured to receive encryption parameters sent by a plurality of proxy communication devices participating in a group;
the processing unit is used for processing the encryption parameters to obtain a processing result;
and a second sending unit, configured to send the processing result to the proxy communication devices participating in the group.
Compared with the prior art, the method has the advantages that:
the embodiment of the application provides a federated learning communication method, which comprises the following steps: a plurality of modeling participating devices send modeling requests to a central server, wherein the modeling requests carry target information of the modeling participating devices; the plurality of modeling participation devices receive grouping information that the central server divides the plurality of modeling participation devices into a plurality of participation groups according to the target information; the plurality of modeling participant devices determine agent communication devices of the participant groups by utilizing an algorithm in the participant groups; the plurality of modeling participant devices send encryption parameters to the agent communication devices of the participating groups in which they are located, so that the agent communication devices of the participating groups send the encryption parameters to the central server; and the plurality of modeling participation devices receive the processing result of the encryption parameter from the central server, which is sent by the agent communication device of the participation group. Therefore, according to the embodiment of the application, the plurality of modeling participation devices are grouped, the proxy communication device of each participation group is determined, and the central server only carries out network communication with the proxy communication device of each participation group, so that the overall network communication overhead is saved, the high network communication load of the central server is relieved, and the network communication efficiency is improved.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments described in the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an embodiment of a federated learning communication method provided in the present application;
fig. 2 is a schematic diagram of network communication connections of a participant, an agent and a coordinator in a federated learning communication method provided in the present application;
FIG. 3 is a flow diagram of another federated learning communication method embodiment that is provided herein;
FIG. 4 is a block diagram illustrating an embodiment of a federated learning communication device;
fig. 5 is a block diagram of another embodiment of a federal learning communications device provided in the present application.
Detailed Description
To make the technical solutions of the present application better understood, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present application.
As described in the Background, with the rapid development of computing, data application scenarios continually seek a mode that can break the limits of data silos, maximize the value of multi-party data, and achieve win-win outcomes; federated learning arose to meet this need. In a scenario with a central node, such as horizontal federated learning, raw data is kept on local remote terminal devices, and a global model can be built only through continuous interaction with a central server.
Based on this, the embodiment of the application provides a federated learning communication method, by grouping a plurality of modeling participating devices and determining the proxy communication device of each participating group, the central server only performs network communication with the proxy communication device of each participating group, thereby saving the overall network communication overhead, relieving the high network communication load of the central server, and improving the network communication efficiency.
Referring to fig. 1, the figure is a flowchart of a federated learning communication method provided in an embodiment of the present application.
The federal learning communication method provided by the embodiment comprises the following steps:
step S101: and the modeling participating devices send modeling requests to the central server, wherein the modeling requests carry target information of the modeling participating devices.
In the embodiment of this application, the plurality of modeling participant devices are the participants in federated learning, and the central server is the coordinator. The modeling participant devices send modeling requests to the central server, and the requests carry the devices' target information; that is, the participants send modeling requests, carrying their own information, to the coordinator. Specifically, the target information may include location information, communication bandwidth, and survival status information. The location information is the participant's geographic location; the communication bandwidth is the participant's bandwidth or communication delay; the survival status information is the working status the participant reports to the central server at regular intervals, and a participant that fails to send it within a preset time is deemed to be down and excluded from modeling. The target information may also include the amount of data a participant owns, the time it needs to update model parameters, its network configuration, its priority, and so on.
Step S102: the plurality of modeling participant devices receive grouping information that the central server divides the plurality of modeling participant devices into a plurality of participating groups according to the objective information.
In the embodiment of this application, the modeling participant devices receive the grouping information from the central server; that is, the participants receive the grouping information sent by the coordinator, which the coordinator obtains by grouping the participants according to their target information into a plurality of participating groups, each of which may contain two or more participants. Fig. 2 shows how a plurality of participants are divided into participating groups.
The coordinator is configured to equally divide the plurality of participants into a plurality of participating groups according to the location information, the communication bandwidth, and the survival status information of the plurality of participants. Specifically, the coordinator may also use the data volume owned by the participant, the time required by the participant to perform model parameter update, the network configuration information of the participant, the priority of the participant, and the like as the reference factors of the grouping.
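The equal-division step can be sketched as below. This is a minimal illustration under stated assumptions, not the patent's actual algorithm: the `Participant` fields (`location`, `bandwidth_mbps`, `alive`) are hypothetical names for the target information described above, and "proximity" is approximated by a simple sort on location.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    device_id: str
    location: tuple          # (latitude, longitude) of the device
    bandwidth_mbps: float    # reported communication bandwidth
    alive: bool              # survival status from the latest report

def partition_into_groups(participants, num_groups):
    """Evenly divide live participants into groups, keeping
    geographically close devices together by sorting on location."""
    live = [p for p in participants if p.alive]
    live.sort(key=lambda p: p.location)      # crude proximity ordering
    size = -(-len(live) // num_groups)       # ceiling division
    return [live[i:i + size] for i in range(0, len(live), size)]
```

A real coordinator would also weigh bandwidth, data volume, and priority when balancing groups, as the text notes.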
Step S103: the plurality of modeling participant devices determine the proxy communication devices of the participant group by using an algorithm in the participant group.
In the embodiment of this application, each modeling participant device determines, from the grouping information, the participating group in which it is located. The modeling participant devices within each participating group then determine the group's proxy communication device by running an algorithm; that is, the participants in each group elect the group's agent. Specifically, the algorithm may be the Raft algorithm.
It should be noted that, the modeling participant devices in the plurality of participating groups score each other by using an algorithm according to the agent influence factors, and the modeling participant device with the highest score is selected as the agent communication device of the participating group. Specifically, the agent influencing factors may include network communication bandwidth, computing resources, mutual distance, and the like.
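The mutual-scoring election can be sketched as follows. The patent names the influence factors (bandwidth, computing resources, mutual distance) but no concrete formula, so the 0.5/0.3/0.2 weights, the dictionary fields, and the 1-D positions are illustrative assumptions.

```python
def mean_distance(member, group):
    """Average distance from one member to the rest of its group
    (1-D positions here; a real system would use geographic distance)."""
    others = [m for m in group if m is not member]
    return sum(abs(member["pos"] - o["pos"]) for o in others) / max(len(others), 1)

def elect_agent(group):
    """Score every member on agent-influence factors and return the
    highest scorer as the group's proxy communication device."""
    def score(m):
        return (0.5 * m["bandwidth"]      # network communication bandwidth
                + 0.3 * m["compute"]      # computing resources
                - 0.2 * mean_distance(m, group))  # closeness to peers
    return max(group, key=score)
```

In the patent's scheme the members score one another and a consensus algorithm such as Raft settles the vote; the single-pass `max` above only illustrates the scoring side.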
It should be noted that after each participating group has elected its agent communication device through the algorithm, that device may send a confirm-agent message to the central server and to the other modeling participant devices in the group to establish its identity as the agent.
It is noted that the agent communication device of each participating group serves for a term of office; when the term ends, a new agent communication device must be elected. The central server monitors the agent communication devices of the participating groups, and if an agent communication device fails or its term ends, the central server instructs that device's participating group to reinitiate the election process and determine a new agent communication device.
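The coordinator-side monitoring just described reduces to a small predicate per agent; the term length and heartbeat timeout values below are illustrative assumptions, not values from the patent.

```python
def needs_reelection(elected_at, last_heartbeat, now,
                     term_seconds=300.0, heartbeat_timeout=10.0):
    """Return True when a group's agent must be replaced: either its
    term of office has expired or it has stopped reporting (failure)."""
    term_expired = now - elected_at >= term_seconds
    presumed_failed = now - last_heartbeat >= heartbeat_timeout
    return term_expired or presumed_failed
```

When this returns True, the central server would instruct the group to run a fresh election as in step S103.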
Step S104: the plurality of modeling participant devices send encryption parameters to the proxy communication devices of the participating groups in which they are located so that the proxy communication devices of the participating groups send the encryption parameters to the central server.
In an embodiment of the application, a plurality of modeling participant devices send encryption parameters to the proxy communication devices of the participating groups in which they are located, and the proxy communication devices of each participating group send the encryption parameters to the central server. Referring to fig. 2, a schematic diagram of network communication connections of participants, agents and coordinators is shown.
When the plurality of modeling participant devices receive grouping information in which the central server divides the plurality of modeling participant devices into a plurality of participant groups based on the target information, the plurality of modeling participant devices also receive a modeling public key transmitted from the central server.
In practical application, the modeling participant devices each perform local modeling on their own sample data to obtain their intermediate parameters. Each device then homomorphically encrypts its intermediate parameters with the modeling public key received from the central server, producing its encryption parameters, and sends them to the proxy communication device of its participating group. After the proxy communication device of each group has received the encryption parameters of the modeling participant devices in the group, it sends them to the central server. Specifically, the intermediate parameters include gradient values and loss values.
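The description does not name the homomorphic scheme. The Paillier cryptosystem is a common additively homomorphic choice for aggregating gradients, so a toy version can be sketched; the tiny fixed primes below are for illustration only (real deployments use primes of 1024 bits or more), and plaintexts are assumed to be integer-scaled gradient values.

```python
import math
import random

def paillier_keygen(p=10_007, q=10_009):
    """Toy Paillier key pair from two small primes."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # valid because we take g = n + 1
    return (n,), (n, lam, mu)         # (public key), (private key)

def encrypt(pub, m):
    """E(m) = (n+1)^m * r^n mod n^2, with random r coprime to n."""
    (n,) = pub
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(priv, c):
    """D(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    n, lam, mu = priv
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

def add_ciphertexts(pub, c1, c2):
    """Multiplying ciphertexts adds the underlying plaintexts."""
    (n,) = pub
    return c1 * c2 % (n * n)
```

Gradients and loss values are floats in practice, so they would be scaled to fixed-point integers before encryption; the coordinator decrypts only the aggregate, never an individual participant's parameters.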
It should be noted that after the modeling participant devices are divided into participating groups, each device periodically sends its survival status information to its group's proxy communication device throughout each modeling round. That is, the proxy communication device of each participating group receives the survival status information of the group's other modeling participant devices; if a device's survival status information is not received within the preset time, that device is deemed to have lost its communication connection, the proxy communication device sends the loss-of-contact information to the central server, and the modeling process abandons the disconnected device. The preset time is configured in advance; for example, it may be 1 second.
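The proxy-side timeout check amounts to comparing each member's last report time against the preset window; the data shapes here are illustrative assumptions.

```python
def lost_contact(last_seen, now, preset_time=1.0):
    """Return device ids whose survival-status report has not arrived
    within `preset_time` seconds; the proxy forwards this list to the
    central server and the round proceeds without those devices."""
    return sorted(dev for dev, t in last_seen.items() if now - t > preset_time)
```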
Step S105: and the plurality of modeling participation devices receive the processing result of the encryption parameter from the central server, which is sent by the agent communication device of the participation group.
In the embodiment of the application, the central server processes the encryption parameters to obtain a processing result, and sends the processing result to the proxy communication device of each participating group, and the proxy communication device of each participating group sends the processing result to the plurality of modeling participating devices of the participating group.
In practical application, the coordinator receives each group's encryption parameters from the group's agent, combines them into total encryption parameters, and decrypts these to obtain the total parameters; the coordinator then judges whether the federated learning modeling is finished. If the iteration count of the federated learning modeling exceeds a threshold or the total parameters have converged, the coordinator notifies the agent of each participating group to end the modeling, each agent in turn notifies the participants in its group to stop, and the federated learning modeling ends. Otherwise the next iteration proceeds: the coordinator sends the total parameters to each group's agent, each agent distributes them to the participants in its group, each participant updates its model parameters on receipt, and steps S104 and S105 repeat until the modeling is finished. Specifically, combining the encryption parameters may be a weighted average of the encryption parameters.
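The coordinator's merge-and-check step can be sketched in plaintext; in the real protocol the averaging happens on ciphertexts and only the result is decrypted. Weighting by per-group data volume, and the threshold and tolerance values, are illustrative assumptions.

```python
def weighted_average(group_params, weights):
    """Combine per-group parameter vectors into the total parameters."""
    total = sum(weights)
    dim = len(group_params[0])
    return [sum(w * p[i] for p, w in zip(group_params, weights)) / total
            for i in range(dim)]

def modeling_finished(old_params, new_params, iteration,
                      max_iterations=100, tolerance=1e-6):
    """End federated modeling when the iteration budget is spent or the
    total parameters have converged."""
    delta = max(abs(a - b) for a, b in zip(old_params, new_params))
    return iteration >= max_iterations or delta < tolerance
```

If `modeling_finished` is False, the coordinator would push the averaged parameters back down through each group's agent for the next round, mirroring the loop over steps S104 and S105.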
The embodiment of the application provides a federated learning communication method, which comprises the following steps: a plurality of modeling participating devices send modeling requests to a central server, wherein the modeling requests carry target information of the modeling participating devices; the plurality of modeling participation devices receive grouping information that the central server divides the plurality of modeling participation devices into a plurality of participation groups according to the target information; the plurality of modeling participant devices determine agent communication devices of the participant groups by utilizing an algorithm in the participant groups; the plurality of modeling participant devices send encryption parameters to the agent communication devices of the participating groups, so that the agent communication devices of the participating groups send the encryption parameters to the central server; and the plurality of modeling participation devices receive the processing result of the encryption parameter from the central server, which is sent by the agent communication device of the participation group. Therefore, according to the embodiment of the application, the plurality of modeling participation devices are grouped, the proxy communication device of each participation group is determined, and the central server only carries out network communication with the proxy communication device of each participation group, so that the overall network communication overhead is saved, the high network communication load of the central server is relieved, and the network communication efficiency is improved. In addition, the method of the embodiment of the application supports the communication process in the heterogeneous environment, timely acquires the survival state information of each participant, and guarantees the smooth modeling process. 
The participating groups are divided according to reference factors such as geographic location, reducing the communication delay caused by distance. Participants within each group compete with one another and are scored on agent-influence factors such as network bandwidth, computing resources, and mutual distance; the highest scorer becomes the agent. Each agent serves a fixed term of office, which keeps the election process fair.
Referring to fig. 3, this figure is a flowchart of a federated learning communication method provided in the embodiment of the present application.
The federal learning communication method provided by the embodiment comprises the following steps:
Step S301: the central server receives modeling requests sent by a plurality of modeling participating devices, where the modeling requests carry target information of the modeling participating devices.
In an embodiment of the application, the plurality of modeling participating devices are the participants in federated learning, and the central server is the coordinator. The modeling participating devices send modeling requests to the central server, and the modeling requests carry the target information of those devices; that is, the participants send modeling requests carrying their own information to the coordinator. Specifically, the target information may include location information, communication bandwidth, and survival status information. The location information is the participant's geographic location; the communication bandwidth is the participant's bandwidth or, equivalently, its communication delay; the survival status information is the working state that the participant periodically reports to the central server, and if it is not received within a preset time, the participant is considered down and cannot take part in the modeling. The target information may also include the amount of data a participant owns, the time the participant requires for a model parameter update, the participant's network configuration information, the participant's priority, and so on.
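The target information carried by a modeling request can be pictured as a small record. A minimal Python sketch follows; the field names are illustrative assumptions, since the embodiment only names the categories (location, bandwidth, liveness, data volume, priority):

```python
from dataclasses import dataclass, field
import time

@dataclass
class ModelingRequest:
    # Target information a participant attaches to its modeling request.
    # Field names are illustrative; the embodiment only enumerates the
    # categories (location, bandwidth, liveness, data volume, priority).
    device_id: str
    location: tuple          # (latitude, longitude)
    bandwidth_mbps: float    # communication bandwidth of the participant
    data_volume: int = 0     # optional factor: samples held locally
    priority: int = 0        # optional factor: scheduling priority
    last_heartbeat: float = field(default_factory=time.time)

    def is_alive(self, now=None, timeout_s=60.0):
        # A participant that has not reported its working state within the
        # preset time is treated as down and excluded from modeling.
        now = time.time() if now is None else now
        return (now - self.last_heartbeat) < timeout_s
```

The `is_alive` check mirrors the survival-status rule above: missing the preset reporting window marks the participant as down.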
Step S302: the central server divides the modeling participating devices into a plurality of participating groups according to the target information to obtain grouping information.
Step S303: the central server sends the grouping information to the plurality of modeling participant devices.
In an embodiment of the present application, the central server sends the grouping information to the plurality of modeling participating devices, that is, the coordinator sends the grouping information to the multiple participants. The grouping information describes how the coordinator has grouped the participants into multiple participating groups according to their target information, where each participating group may contain two or more participants. Fig. 2 shows a schematic diagram of the participants divided into participating groups.
The coordinator equally divides the plurality of participants into a plurality of participating groups according to the location information, communication bandwidth, and survival status information of the participants. The coordinator may also use the data volume owned by a participant, the time the participant requires for a model parameter update, the participant's network configuration information, the participant's priority, and so on as additional grouping factors.
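As a rough illustration of the equal division, the following sketch sorts participants on a coarse geographic key and slices the ordered list into contiguous chunks. This is an assumption standing in for whatever proximity measure the coordinator actually applies; the point is that nearby participants land in the same group and group sizes stay close to equal:

```python
import math

def divide_into_groups(participants, group_count):
    """Equally divide participants into participating groups.

    `participants` is a list of (device_id, (lat, lon)) pairs. Sorting on
    the coordinate pair is a crude stand-in for a real proximity measure;
    contiguous slices then keep nearby participants together.
    """
    ordered = sorted(participants, key=lambda p: p[1])
    size = math.ceil(len(ordered) / group_count)  # near-equal group size
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]
```

A production coordinator would cluster on all the reference factors (bandwidth, liveness, priority) rather than coordinates alone.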
Step S304: the central server receives the encryption parameters sent by the agent communication devices of the plurality of participating groups.
In an embodiment of the application, the plurality of modeling participating devices send their encryption parameters to the agent communication devices of the participating groups in which they are located, and the agent communication device of each participating group forwards those encryption parameters to the central server. Referring to fig. 2, a schematic diagram of the network communication connections among participants, agents, and the coordinator is shown.
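The agent's relay role can be sketched as a small buffer that collects every group member's encrypted parameters and uploads them in one call, so the server holds one connection per group instead of one per participant. The class and `upload_fn` callback are illustrative assumptions, not part of the patent:

```python
class GroupAgent:
    """Group-side relay: buffers the encrypted parameters of each member of
    its participating group and uploads them to the central server in one
    batch. `upload_fn` stands in for the real server transport."""

    def __init__(self, member_ids, upload_fn):
        self.expected = set(member_ids)
        self.buffer = {}
        self.upload_fn = upload_fn

    def receive(self, device_id, encrypted_params):
        if device_id not in self.expected:
            raise ValueError(f"{device_id} is not a member of this group")
        self.buffer[device_id] = encrypted_params
        if set(self.buffer) == self.expected:  # every member has reported
            self.upload_fn(dict(self.buffer))  # single round trip per group
            self.buffer.clear()
```

Waiting for the whole group before uploading is one possible policy; an agent could equally stream parameters as they arrive.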
Step S305: the central server processes the encryption parameters to obtain a processing result.
Step S306: the central server transmits the processing result to the agent communication devices of the plurality of participating groups.
In the embodiment of the application, the central server processes the encryption parameters to obtain a processing result and sends the processing result to the agent communication device of each participating group; the agent communication device of each participating group then distributes the processing result to the modeling participating devices in that group.
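A typical federated processing step is an element-wise average of the collected parameter vectors. The sketch below shows this on plaintext lists for clarity; in the embodiment the parameters arrive encrypted, and under an additively homomorphic scheme (an assumption here, not stated in the patent) the same sum could be taken on ciphertexts before a single decryption:

```python
def aggregate(batches):
    """Coordinator-side processing: element-wise average of the parameter
    vectors collected from all group agents. Each batch is one agent's
    upload, mapping device_id -> parameter vector."""
    updates = [vec for batch in batches for vec in batch.values()]
    n = len(updates)
    return [sum(component) / n for component in zip(*updates)]
```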
Therefore, according to the embodiment of the application, the plurality of modeling participating devices are grouped, an agent communication device is determined for each participating group, and the central server carries out network communication only with the agent communication device of each group; this reduces overall network communication overhead, relieves the heavy network communication load on the central server, and improves communication efficiency. In addition, the method supports communication in heterogeneous environments and obtains the survival status information of each participant in time, ensuring that the modeling process proceeds smoothly. The participating groups are divided according to reference factors such as geographic location, which reduces communication delay caused by long distances. Within each participating group, the participants compete for the agent role: they score one another on agent influence factors such as network bandwidth, computing resources, and mutual distance, and the participant with the highest score becomes the agent. Each agent serves for a limited term, which guarantees fairness in the election process.
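The intra-group election described above can be sketched as a weighted score over the named agent influence factors. The weights and the linear form are illustrative assumptions; the embodiment does not fix a scoring formula:

```python
def elect_agent(members, weights=(0.5, 0.4, 0.1)):
    """Intra-group agent election: members are scored on the agent
    influence factors -- bandwidth, computing resources, and mean distance
    to the other members -- and the highest score wins.

    `members` maps device_id -> (bandwidth_mbps, compute_score, mean_dist_km).
    """
    w_bw, w_cpu, w_dist = weights

    def score(bw, compute, dist):
        # Distance counts against a candidate: a central, well-connected
        # member makes the cheapest relay to the rest of the group.
        return w_bw * bw + w_cpu * compute - w_dist * dist

    return max(members, key=lambda d: score(*members[d]))
```

Re-running the election when the winner's term expires, as the method requires, just means calling this again with refreshed measurements.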
Based on the method for communication of federated learning provided by the above embodiment, the embodiment of the present application further provides a communication device for federated learning, and the working principle thereof is described in detail below with reference to the accompanying drawings.
Referring to fig. 4, this figure is a block diagram of a structure of a federated learning communication device according to an embodiment of the present application.
The federated learning communication device 400 provided in this embodiment includes:
a first sending unit 410, configured to send a modeling request to a central server, where the modeling request carries target information of the multiple modeling participating devices;
a first receiving unit 420, configured to receive grouping information indicating how the central server divides the plurality of modeling participating devices into a plurality of participating groups according to the target information;
a determining unit 430, configured to determine, by using an algorithm within the participating group, the agent communication device of the participating group;
a second sending unit 440, configured to send the encryption parameters to the agent communication device of the participating group, so that the agent communication device sends the encryption parameters to the central server;
a second receiving unit 450, configured to receive, via the agent communication device of the participating group, the processing result of the encryption parameters from the central server.
Based on the method for communication of federated learning provided by the above embodiment, the embodiment of the present application further provides a communication device for federated learning, and the working principle thereof is described in detail below with reference to the accompanying drawings.
Referring to fig. 5, this figure is a block diagram of a structure of a federated learning communication device according to an embodiment of the present application.
The federated learning communication device 500 provided in this embodiment includes:
a first receiving unit 510, configured to receive modeling requests sent by multiple modeling participating devices, where the modeling requests carry target information of the multiple modeling participating devices;
a dividing unit 520, configured to divide the multiple modeling participation devices into multiple participation groups according to the target information, so as to obtain grouping information;
a first transmitting unit 530, configured to transmit the grouping information to the plurality of modeling participant devices;
a second receiving unit 540, configured to receive the encryption parameters sent by the agent communication devices of the plurality of participating groups;
a processing unit 550, configured to process the encryption parameters to obtain a processing result;
a second sending unit 560, configured to send the processing result to the agent communication devices of the plurality of participating groups.
When introducing elements of various embodiments of the present application, the articles "a," "an," "the," and "said" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements.
It should be noted that, a person skilled in the art can understand that all or part of the processes in the above method embodiments can be implemented by a computer program to instruct related hardware, where the program can be stored in a computer readable storage medium, and when executed, the program can include the processes in the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
All the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, it is relatively simple to describe, and reference may be made to some descriptions of the method embodiment for relevant points. The above-described apparatus embodiments are merely illustrative, and the units and modules described as separate components may or may not be physically separate. In addition, some or all of the units and modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The foregoing is directed to embodiments of the present application and it is noted that numerous modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application and are intended to be within the scope of the present application.

Claims (9)

1. A method for federated learning communication, the method comprising:
a plurality of modeling participating devices send modeling requests to a central server, wherein the modeling requests carry target information of the plurality of modeling participating devices;
the plurality of modeling participation devices receive grouping information that the central server divides the plurality of modeling participation devices into a plurality of participation groups according to the target information;
the plurality of modeling participating devices determine, within the participating groups in which they are located, the agent communication devices of the participating groups by using an algorithm, wherein the determining comprises: the modeling participating devices in each participating group score one another according to agent influence factors by using the algorithm, and the modeling participating device with the highest score is elected as the agent communication device of the participating group;
the plurality of modeling participant devices send encryption parameters to the agent communication devices of the participating groups, so that the agent communication devices of the participating groups send the encryption parameters to the central server;
and the plurality of modeling participating devices receive, via the agent communication devices of the participating groups in which they are located, the processing result of the encryption parameters from the central server.
2. The method of claim 1, wherein the target information comprises location information, communication bandwidth, and survival status information;
the plurality of modeling participating devices receiving grouping information in which the central server divides the plurality of modeling participating devices into a plurality of participating groups according to the target information comprises:
the central server equally dividing the plurality of modeling participating devices into a plurality of participating groups according to the location information, communication bandwidth, and survival status information of the plurality of modeling participating devices.
3. The method of claim 1, further comprising:
the central server monitors the agent communication devices of the plurality of participating groups, and if an agent communication device fails or its agent term ends, the central server instructs the participating group in which that agent communication device is located to reinitiate the election process to determine a new agent communication device.
4. The method of claim 1, wherein before the plurality of modeling participant devices send encryption parameters to the proxy communication device of the participating group in which they are located, the method further comprises:
the proxy communication devices of the plurality of participating groups send acknowledgement proxy messages to the central server and other modeling participating devices within the plurality of participating groups.
5. The method according to claim 1, wherein when the plurality of modeling participating devices receive the grouping information in which the central server divides the plurality of modeling participating devices into a plurality of participating groups according to the target information, they also receive a modeling public key sent by the central server;
the plurality of modeling participant devices sending encryption parameters to the proxy communication device of the participating group includes:
each modeling participating device encrypts its intermediate parameters with the modeling public key to obtain its encryption parameters;
the plurality of modeling participating devices send the encryption parameters to the agent communication devices of the participating groups in which they are located.
6. The method of claim 1, further comprising:
the agent communication device of each participating group receives survival status information from the other modeling participating devices in the participating group; if the survival status information of another modeling participating device is not received within a preset time, the agent communication device determines loss-of-contact information indicating that the device has lost its communication connection, and sends the loss-of-contact information to the central server.
7. A method for federated learning communication, the method comprising:
the method comprises the steps that a central server receives modeling requests sent by a plurality of modeling participation devices, wherein the modeling requests carry target information of the modeling participation devices;
the central server divides the modeling participation devices into a plurality of participation groups according to the target information to obtain grouping information;
the central server sends the grouping information to the plurality of modeling participating devices; the central server receives encryption parameters sent by the agent communication devices of the plurality of participating groups, wherein the agent communication device of each participating group is determined by the modeling participating devices in the group scoring one another according to agent influence factors by using an algorithm, with the highest-scoring modeling participating device elected as the agent communication device of the participating group;
the central server processes the encryption parameters to obtain a processing result;
and the central server sends the processing result to the agent communication equipment of the plurality of participating groups.
8. A federated learning communication device, the device comprising:
the system comprises a first sending unit, a first processing unit and a second sending unit, wherein the first sending unit is used for sending a modeling request to a central server, and the modeling request carries target information of a plurality of modeling participating devices;
a first receiving unit configured to receive grouping information in which the central server divides the plurality of modeling participation devices into a plurality of participation groups according to the target information;
a determining unit, configured to determine, within the participating group by using an algorithm, the agent communication device of the participating group in which the plurality of modeling participating devices are located, wherein the determining comprises: the modeling participating devices in the participating group score one another according to agent influence factors by using the algorithm, and the modeling participating device with the highest score is elected as the agent communication device of the participating group;
the second sending unit is used for sending the encryption parameters to the agent communication equipment of the participating group, so that the agent communication equipment of the participating group sends the encryption parameters to the central server;
and the second receiving unit is used for receiving the processing result of the encryption parameter from the central server, which is sent by the proxy communication equipment of the participating group.
9. A federated learning communication device, the device comprising:
the modeling system comprises a first receiving unit, a second receiving unit and a processing unit, wherein the first receiving unit is used for receiving modeling requests sent by a plurality of modeling participating devices, and the modeling requests carry target information of the plurality of modeling participating devices;
the dividing unit is used for dividing the modeling participation devices into a plurality of participation groups according to the target information to obtain grouping information;
a first transmitting unit configured to transmit the grouping information to the plurality of modeling participant apparatuses;
the second receiving unit is used for receiving encryption parameters sent by the agent communication devices of the plurality of participating groups, wherein the agent communication devices of the plurality of participating groups are the modeling participating devices in the plurality of participating groups, score mutually according to agent influence factors by using an algorithm, and determine the modeling participating device with the highest score to be selected as the agent communication device of the participating group;
the processing unit is used for processing the encryption parameters to obtain a processing result;
and a second sending unit, configured to send the processing result to the proxy communication devices participating in the group.
CN202011640351.0A 2020-12-31 2020-12-31 Method and device for federated learning communication Active CN112636989B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011640351.0A CN112636989B (en) 2020-12-31 2020-12-31 Method and device for federated learning communication

Publications (2)

Publication Number Publication Date
CN112636989A CN112636989A (en) 2021-04-09
CN112636989B true CN112636989B (en) 2022-12-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant