CN112990276A - Federated learning method, apparatus, device and storage medium based on self-organizing clusters - Google Patents
- Publication number
- CN112990276A (application number CN202110193253.5A; granted publication CN112990276B)
- Authority
- CN
- China
- Prior art keywords
- user equipment
- cluster
- model
- aggregation
- preset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06N20/20—Ensemble learning (G: Physics › G06: Computing; Calculating or Counting › G06N: Computing arrangements based on specific computational models › G06N20/00: Machine learning)
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting (G: Physics › G06: Computing › G06F: Electric digital data processing › G06F18/00: Pattern recognition › G06F18/20: Analysing › G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation)
- G06F18/23—Clustering techniques (G: Physics › G06: Computing › G06F: Electric digital data processing › G06F18/00: Pattern recognition › G06F18/20: Analysing)
- Y02D30/70—Reducing energy consumption in wireless communication networks (Y: General tagging of new technological developments › Y02: Technologies or applications for mitigation or adaptation against climate change › Y02D: Climate change mitigation technologies in information and communication technologies [ICT], i.e. ICT aiming at the reduction of their own energy use › Y02D30/00: Reducing energy consumption in communication networks)
Abstract
The application relates to the technical field of artificial intelligence, and discloses a federated learning method, apparatus, computer device and computer-readable storage medium based on self-organizing clusters. The method comprises the following steps: acquiring the broadcast signals sent by each user equipment to generate a corresponding cluster; determining a target user equipment in the cluster and taking it as the central node; receiving the model parameters sent by each user equipment and sending them to the central node; acquiring the aggregated model parameters obtained after the central node performs aggregated federated learning on the model parameters; and sending the aggregated model parameters to each user equipment to update the model parameters of the preset model in each user equipment. This provides joint FL model training without a predetermined centralized cloud server and effectively avoids the single point of failure of such a server.
Description
Technical Field
The application relates to the technical field of artificial intelligence, and in particular to a federated learning method and apparatus based on self-organizing clusters, a computer device, and a computer-readable storage medium.
Background
In conventional machine learning training, data is usually stored centrally: a central entity must first collect data from various sources before learning can begin, which raises a series of security and privacy problems. Federated Learning (FL) provides a distributed alternative: terminal devices and a centralized server iteratively exchange the parameters of a learning model until the global FL model converges to a certain accuracy level, and at no point does the data itself migrate from the terminal devices to the server. This makes FL a promising training mode for machine learning.
Although FL shows great advantages in protecting data privacy while enabling collaborative machine learning, it still faces problems. Because FL relies on a centralized server to iteratively exchange and aggregate parameters with the participating clients during training, the entire FL process fails if that server is physically damaged or attacked.
Disclosure of Invention
The application mainly aims to provide a federated learning method, apparatus, computer device and computer-readable storage medium based on self-organizing clusters, so as to solve the technical problem that, in existing centralized-server training, physical damage to or an attack on the centralized server causes the whole FL process to fail.
In a first aspect, the present application provides a self-organizing cluster-based federated learning method, which includes the following steps:
acquiring broadcast signals sent by each user equipment to generate corresponding clusters;
determining target user equipment in the cluster according to the cluster, and taking the target user equipment as a central node;
receiving model parameters sent by each user equipment, and sending each model parameter to the central node;
acquiring the aggregated model parameters obtained after the central node performs aggregated federated learning on the model parameters;
and sending the aggregation model parameters to each user equipment, and updating the model parameters of the preset models in each user equipment.
In a second aspect, the present application further provides a federated learning apparatus based on self-organizing clusters, including:
the generating module is used for acquiring broadcast signals sent by each user equipment and generating corresponding clusters;
the determining module is used for determining target user equipment in the cluster according to the cluster and taking the target user equipment as a central node;
the receiving and sending module is used for receiving the model parameters sent by each piece of user equipment and sending each model parameter to the central node;
the acquisition module is used for acquiring the aggregated model parameters returned after the central node performs aggregated federated learning on the model parameters;
and the updating module is used for sending the aggregation model parameters to each user equipment and updating the model parameters of the preset model in each user equipment.
In a third aspect, the present application also provides a computer device comprising a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the steps of the federated learning method based on self-organizing clusters as described above.
In a fourth aspect, the present application further provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the federated learning method based on self-organizing clusters as described above.
The application provides a federated learning method, apparatus, computer device and computer-readable storage medium based on self-organizing clusters. A corresponding cluster is generated by acquiring the broadcast signals sent by each user equipment; a target user equipment in the cluster is determined and taken as the central node; the model parameters sent by each user equipment are received and forwarded to the central node; the aggregated model parameters obtained after the central node performs aggregated federated learning on the model parameters are acquired; and the aggregated model parameters are sent to each user equipment to update the model parameters of the preset model in each user equipment. Joint FL model training is thus provided without a predetermined centralized cloud server, which effectively avoids the single point of failure of such a server.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a federated learning method based on self-organizing clusters provided in an embodiment of the present application;
FIG. 2 is a flow diagram illustrating sub-steps of the federated learning method based on self-organizing clusters of FIG. 1;
FIG. 3 is a flow diagram illustrating sub-steps of the federated learning method based on self-organizing clusters of FIG. 1;
fig. 4 is a schematic flowchart of another federated learning method based on self-organizing clusters according to an embodiment of the present application;
fig. 5 is a schematic block diagram of a federated learning apparatus based on self-organizing clusters according to an embodiment of the present application;
fig. 6 is a block diagram schematically illustrating a structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
The embodiment of the application provides a federated learning method and apparatus based on self-organizing clusters, a computer device, and a computer-readable storage medium. The federated learning method based on self-organizing clusters can be applied to a computer device, which can be an electronic device such as a notebook computer or a desktop computer.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic flowchart of a federal learning method based on a self-organizing cluster according to an embodiment of the present application.
As shown in fig. 1, the federated learning method based on self-organizing clusters includes steps S101 to S105.
Step S101, acquiring broadcast signals sent by each user equipment, and generating corresponding clusters.
Exemplarily, the broadcast signals sent by each user equipment are acquired, and the user equipments whose broadcast signals were acquired are taken as a cluster. For example, the broadcast signals of 10 user equipments are acquired within a preset time period, and those 10 user equipments are regarded as one cluster. Each broadcast signal includes identification information: after the identifier carried in each broadcast signal is acquired, the ID number of each user equipment is obtained from the identifier, and the user equipments corresponding to those ID numbers form the cluster.
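As a minimal sketch of this step (not the patented implementation; `receive_broadcast` is a hypothetical callback that returns one decoded broadcast signal or None on timeout, and the `device_id` field name is likewise an assumption), the cluster can be formed by collecting the IDs of every device heard within the preset time period:

```python
import time

def collect_cluster(receive_broadcast, window_seconds=5.0):
    """Treat every user device whose broadcast signal arrives within a
    preset time window as a member of one cluster."""
    cluster = set()
    deadline = time.time() + window_seconds
    while True:
        remaining = deadline - time.time()
        if remaining <= 0:
            break
        signal = receive_broadcast(timeout=remaining)  # hypothetical radio API
        if signal is not None:
            # the ID number is carried in the broadcast's identification info
            cluster.add(signal["device_id"])
    return cluster
```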
In an embodiment, specifically referring to fig. 2, step S101 includes: substep S1011 to substep S1013.
In sub-step S1011, a corresponding graph group is generated according to the acquired broadcast signal transmitted by each user equipment.
Exemplarily, the broadcast signal transmitted by each user equipment is acquired, and a corresponding graph group is generated from these broadcast signals. A graph group (graph community) is a subset of vertices that are more closely connected to one another than to the other vertices of the network. For example, the broadcast signals transmitted by each user equipment are acquired, the user equipments that transmitted broadcast signals to each other are connected, and the corresponding graph group is generated, where each vertex is one user equipment.
In an embodiment, the generating a corresponding graph group according to the acquired broadcast signals sent by each user equipment includes: acquiring broadcast signals sent by each user equipment; determining the associated information among the user equipment through the broadcast signals sent by the user equipment; and generating a corresponding graph group through the association information among the user equipment.
Exemplarily, the broadcast signals sent by each user equipment are acquired, and the association information between the user equipments is determined from these signals: for each broadcast signal, the user equipments identified as having received it are determined, which establishes an association relationship between the sender and each receiver. For example, a first broadcast signal sent by a first user equipment is acquired; if it is determined from the first broadcast signal that a second user equipment received it, association information exists between the first and second user equipments. Likewise, if a third user equipment is detected to have received both a first broadcast signal from a first user equipment and a second broadcast signal from a second user equipment, the first and second user equipments are each associated with the third user equipment; and if a second and a third user equipment are both detected to have received the first broadcast signal, the first user equipment has an association relationship with each of them. Once the association information between the user equipments is acquired, each user equipment is taken as one vertex, and the vertices are connected according to the association information, generating the corresponding graph group.
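The graph construction can be sketched as follows. This is an illustrative implementation under the assumption that the association information has already been reduced to `(sender, receiver)` pairs; it uses the `networkx` library for the graph structure, which the patent does not prescribe:

```python
import networkx as nx

def build_graph_group(associations):
    """Build the graph group: each user device is a vertex, and an edge
    connects two devices whenever one received the other's broadcast."""
    graph = nx.Graph()
    for sender, receiver in associations:
        graph.add_edge(sender, receiver)
    return graph

# e.g. a third device received broadcasts from a first and a second device,
# so the first and the second device are each associated with the third
graph = build_graph_group([("UE1", "UE3"), ("UE2", "UE3")])
```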
Substep S1012, determining whether the user equipments are in the same cluster by performing aggregation calculation on the vertices of the user equipments in the graph group.
Illustratively, when the graph group is generated, the vertex of each user equipment in the graph group is obtained, and whether the user equipments are in the same cluster is determined by performing an aggregation calculation on these vertices. For example, each user equipment is a vertex in the generated graph group; an aggregation calculation is performed on the vertices of a first and a second user equipment, and if the two vertices are connected, the first and second user equipments are determined to be in the same cluster; if they are not connected, the two user equipments are determined not to be in the same cluster.
In an embodiment, the determining whether the user equipments are in the same cluster by performing an aggregation calculation on the vertices of the user equipments in the graph group includes: calculating over the vertices of the user equipments in the graph group through a preset aggregation formula to obtain the aggregation parameter of each user equipment; if the aggregation parameter is a first preset threshold, determining that the user equipments are in the same cluster; and if the aggregation parameter is a second preset threshold, determining that the user equipments are not in the same cluster.
Exemplarily, a preset aggregation formula is obtained to calculate over the vertices of the user equipments in the graph group and obtain the aggregation parameter corresponding to each user equipment. For example, the preset aggregation formula is the modularity

M = (1/2L) · Σᵢⱼ [Aᵢⱼ − (Kᵢ·Kⱼ)/(2L)] · δ(Cᵢ, Cⱼ)

where M is the preset modularity value, L is the number of edges contained in the graph group, N is the number of vertices, Kᵢ is the degree of vertex i, Kⱼ is the degree of vertex j, Aᵢⱼ is the corresponding entry of the adjacency matrix, Cᵢ is the cluster of vertex i, Cⱼ is the cluster of vertex j, and δ is the Kronecker delta function. The δ(Cᵢ, Cⱼ) parameter is obtained through this preset aggregation formula.
After the aggregation parameter is obtained, whether the user equipments are in the same cluster is determined through the aggregation parameter. If the aggregation parameter is the first preset threshold, the user equipments are determined to be in the same cluster; if it is the second preset threshold, they are determined not to be in the same cluster. For example, when the aggregation parameter is δ(Cᵢ, Cⱼ): if δ(Cᵢ, Cⱼ) = 1, user equipment j and user equipment i are determined to be in the same cluster; if δ(Cᵢ, Cⱼ) = 0, user equipment j and user equipment i are determined not to be in the same cluster.
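A worked example of the aggregation calculation, assuming the preset aggregation formula is the modularity reconstructed above (the dense adjacency-matrix representation is an assumption of this sketch):

```python
import numpy as np

def modularity(adj, labels):
    """M = (1/2L) * sum_ij [A_ij - K_i*K_j/(2L)] * delta(C_i, C_j).
    adj: (N, N) symmetric 0/1 adjacency matrix; labels[i] = C_i."""
    degrees = adj.sum(axis=1)               # K_i: degree of each vertex
    two_l = adj.sum()                       # 2L: every edge is counted twice
    delta = np.equal.outer(labels, labels)  # delta(C_i, C_j)
    return float(((adj - np.outer(degrees, degrees) / two_l) * delta).sum() / two_l)

def aggregation_parameter(labels, i, j):
    """delta(C_i, C_j): 1 (first preset threshold) if user equipments i and j
    are in the same cluster, 0 (second preset threshold) otherwise."""
    return int(labels[i] == labels[j])

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # a 3-vertex path graph
labels = np.array([0, 0, 1])                       # vertices 0 and 1 share a cluster
print(modularity(adj, labels))              # -0.125
print(aggregation_parameter(labels, 0, 1))  # 1: same cluster
```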
And a substep S1013 of determining the user equipments in the same cluster, and using the user equipments in the same cluster as a cluster.
Exemplarily, when the user equipments belonging to the same cluster are identified, they are taken together as one cluster. For example, when multiple user devices are found to be in the same cluster, those devices are regarded as one cluster.
Step S102, according to the cluster, determining target user equipment in the cluster, and taking the target user equipment as a central node.
Exemplarily, when the cluster is obtained, one of the user equipments in the cluster is determined as the target user equipment, and the target user equipment is taken as the central node. For example, the cluster includes a first user device, a second user device, and so on; the first user device is determined to be the target user device, and the number of target user devices is one.
In an embodiment, specifically referring to fig. 3, step S102 includes: substeps 1021 to substep S1022.
And a substep S1021, obtaining social centrality information of each user equipment in the cluster.
Exemplarily, when the cluster is obtained, the social centrality information of each user equipment in the cluster is acquired. Higher social centrality means closer social relationships with the other nodes, and it is used as the selection criterion for the central node.
In an embodiment, the obtaining social centrality information of each user equipment in the cluster includes: obtaining social relations among the user equipment in the cluster; obtaining social centrality vector information of each user equipment through the social relationship; and calculating the social centrality vector information of each user equipment to obtain the social centrality information of each user equipment.
Exemplarily, the social relationships between the user equipments in the cluster are obtained. For example, when it is acquired that a first user equipment is connected to a second and a third user equipment respectively, those two connection relationships are taken as the social relationship of the first user equipment. The social centrality vector information of each user equipment is then obtained from its social relationship: if the first user equipment is connected to the second and third user equipments respectively, its social centrality vector information is S₁ = (S₂, S₃); if it is connected to the second, third and fourth user equipments respectively, its social centrality vector information is S₁ = (S₂, S₃, S₄). Finally, the social centrality vector information of each user equipment is evaluated to obtain its social centrality information: for S₁ = (S₂, S₃) the social centrality information of the first user equipment is 2, and for S₁ = (S₂, S₃, S₄) it is 3.
And a substep S1022, determining a corresponding target user equipment according to the social centrality information of each user equipment, and using the target user equipment as a central node.
Exemplarily, when the social centrality information of each user equipment is obtained, the target user equipment is determined by comparing the social centrality information of the user equipments. For example, when the social centrality information of a first user equipment is 3 and that of a second user equipment is 4, the second user equipment is determined to be the target user equipment and is used as the central node.
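Sub-steps S1021 and S1022 can be sketched together as follows (an illustrative implementation, not the patented one; how ties between equally central devices are broken is an assumption here, since `max` simply keeps the first one found):

```python
import networkx as nx

# the cluster's graph group, as built in the earlier sketch
graph = nx.Graph([("UE1", "UE3"), ("UE2", "UE3")])

def social_centrality(g):
    """Social centrality of a device = the length of its social centrality
    vector, i.e. the number of devices it is directly connected to."""
    return {node: g.degree(node) for node in g.nodes}

def pick_central_node(g):
    """The device with the highest social centrality becomes the target
    user equipment, i.e. the central node of the cluster."""
    centrality = social_centrality(g)
    return max(centrality, key=centrality.get)

# UE3 is connected to two devices, so it is chosen as the central node
print(pick_central_node(graph))  # -> "UE3"
```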
Step S103, receiving the model parameters sent by each user equipment, and sending each model parameter to the central node.
Exemplarily, each user equipment includes a preset model, such as a preset neural network model, a deep learning model, or a pre-trained language model. When the model parameters of the current preset model sent by each user equipment are received, they are forwarded to the central node.
Step S104, acquiring the aggregated model parameters obtained after the central node performs aggregated federated learning on the model parameters.
Exemplarily, the central node includes a preset aggregation federated model. An upload request is sent to the central node, the encryption public key sent back by the central node is received, the model parameters of each preset model are encrypted with this public key, and the encrypted model parameters are sent to the central node. When the central node receives the encrypted model parameters, it decrypts each of them to obtain the decrypted model parameters of each preset model. Each model parameter is then learned through the preset aggregation federated model in the central node to obtain the corresponding aggregated model parameters. The aggregation federated model includes an aggregated horizontal federated model, an aggregated vertical federated model, an aggregated federated transfer model, and the like.
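As a hedged sketch of the aggregation itself, assuming the aggregation federated model reduces to plain federated averaging of equally shaped parameter vectors (the upload request and the public-key encryption exchange described above are omitted for brevity):

```python
import numpy as np

def aggregate_parameters(model_params, weights=None):
    """Weighted element-wise average, at the central node, of the model
    parameters uploaded by the user equipments in the cluster."""
    if weights is None:
        weights = [1.0 / len(model_params)] * len(model_params)
    return sum(w * np.asarray(p) for w, p in zip(weights, model_params))

# three devices upload their local parameters; the central node averages them
aggregated = aggregate_parameters(
    [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
)
print(aggregated)  # [3. 4.] -- sent back to every device as the update
```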
It should be noted that federated learning refers to a method of machine learning modeling performed jointly by different clients or participants. In federated learning, a client does not need to expose its own data to other clients or to the coordinator (also called the server), so federated learning can protect user privacy and guarantee data security well, and can solve the data-island problem. Federated learning has the following advantages: data isolation, so that data is never leaked to the outside, meeting the requirements of user privacy protection and data security; the quality of the federated model is not damaged and negative transfer does not occur, so the federated model performs better than isolated, independently trained models; and the clients can exchange information and model parameters in encrypted form while remaining independent, and all benefit from the shared growth.
Step S105, the aggregation model parameters are sent to each user equipment, and the model parameters of the preset model in each user equipment are updated.
Exemplarily, after each model parameter has been learned through the preset aggregation federated model in the central node and the corresponding aggregated model parameters have been obtained, the aggregated model parameters are sent to each user equipment, and the model parameters of the preset model in each user equipment are updated.
In the embodiment of the application, a corresponding cluster is generated by acquiring the broadcast signal of each user equipment; the target user equipment in the cluster is determined and, acting as the central node, receives the model parameters of each user equipment for aggregated federated learning; and the aggregated model parameters are used to update the model parameters of the preset model in each user equipment. Joint FL model training is thus provided without a predetermined centralized cloud server, which effectively avoids the single point of failure of such a server.
Referring to fig. 4, fig. 4 is a schematic flowchart of another federated learning method based on self-organizing clusters according to an embodiment of the present application.
As shown in fig. 4, the federated learning method based on self-organizing clusters includes steps S201 to S203.
Step S201, determining whether the preset model is in a convergence state.
Illustratively, it is determined whether the preset model is in a convergence state. For example, the aggregated model parameter is compared with the previously recorded aggregated model parameter; if they are the same, or their difference is smaller than a preset difference, the preset model is determined to be in a convergence state.
And step S202, if the preset model is in a convergence state, taking the preset model as a corresponding aggregation model.
Exemplarily, if the aggregated model parameter is the same as the previously recorded aggregated model parameter, or their difference is smaller than the preset difference, the preset model is taken as the corresponding aggregation model.
Step S203, if the preset model is not in a convergence state, receiving the second model parameters sent by each user equipment, and training the preset model with the second model parameters.
Exemplarily, if it is determined that the preset model is not in a convergence state, the second model parameters of the preset model in each user equipment are obtained, aggregated federated learning is performed on the second model parameters through the central node, and the second aggregated model parameters are obtained. The second aggregated model parameters are then sent to each user equipment to update the model parameters of the preset model in each user equipment.
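A sketch of the convergence test and retraining loop of steps S201 to S203, with the tolerance `tol` standing in for the preset difference (its value is an assumption, and the toy parameter sequence merely stands in for successive aggregation rounds):

```python
import numpy as np

def is_converged(current, previous, tol=1e-4):
    """The preset model is in a convergence state when the new aggregated
    parameters equal the previously recorded ones, or differ from them by
    less than the preset difference."""
    if previous is None:
        return False
    return bool(np.max(np.abs(np.asarray(current) - np.asarray(previous))) < tol)

# toy sequence standing in for successive aggregated model parameters
rounds = [np.array([1.0, 2.0]), np.array([1.4, 2.2]), np.array([1.40005, 2.20002])]
previous = None
for current in rounds:
    if is_converged(current, previous):
        print("converged: the preset model becomes the aggregation model")
        break
    previous = current  # not converged: keep training on second model parameters
```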
In the embodiment of the application, whether the preset model is in a convergence state is detected, and the preset model continues to be trained while it is not. This ensures that the final model has converged and effectively prevents inaccurate predictions from a model that is not yet in a convergence state.
Referring to fig. 5, fig. 5 is a schematic block diagram of a federated learning apparatus based on self-organizing clusters according to an embodiment of the present application.
As shown in fig. 5, the federated learning apparatus 400 based on self-organizing clusters includes: a generating module 401, a determining module 402, a receiving and sending module 403, an obtaining module 404 and an updating module 405.
A generating module 401, configured to obtain broadcast signals sent by each ue, and generate a corresponding cluster;
a determining module 402, configured to determine, according to the cluster, a target user equipment in the cluster, and use the target user equipment as a central node;
a receiving and sending module 403, configured to receive the model parameters sent by each piece of user equipment, and send each model parameter to the central node;
an obtaining module 404, configured to obtain the aggregated model parameters returned by the central node after performing aggregated federated learning on each model parameter;
an updating module 405, configured to send the aggregation model parameter to each user equipment, and update the model parameter of the preset model in each user equipment.
Wherein, the generating module 401 is specifically further configured to:
generating a corresponding graph group according to the acquired broadcast signals sent by each user equipment;
determining whether the user equipments are in the same cluster by performing an aggregation calculation on the vertices of the user equipments in the graph group;
and determining the user equipment in the same cluster, and taking the user equipment in the same cluster as a cluster.
Wherein, the generating module 401 is specifically further configured to:
acquiring broadcast signals sent by each user equipment;
determining the associated information among the user equipment through the broadcast signals sent by the user equipment;
and generating a corresponding graph group through the association information among the user equipment.
Wherein, the generating module 401 is specifically further configured to:
calculating over the vertices of each user equipment in the graph group through a preset aggregation formula to obtain the aggregation parameter of each user equipment;
if the aggregation parameter is a first preset threshold, determining that the user equipments are in the same cluster;
and if the aggregation parameter is a second preset threshold, determining that the user equipments are not in the same cluster.
Wherein the determining module 402 is further specifically configured to:
acquiring social centrality information of each user equipment in the cluster;
and determining corresponding target user equipment according to the social centrality information of each user equipment, and taking the target user equipment as a central node.
Wherein the determining module 402 is further specifically configured to:
obtaining social relations among the user equipment in the cluster;
obtaining social centrality vector information of each user equipment through the social relationship;
and calculating the social centrality vector information of each user equipment to obtain the social centrality information of each user equipment.
Wherein, the federated learning apparatus based on self-organizing clusters is further specifically configured to:
determining whether the preset model is in a convergence state;
if the preset model is in a convergence state, taking the preset model as a corresponding aggregation model;
and if the preset model is not in a convergence state, receiving second model parameters sent by each user equipment, and training the preset model through the second model parameters.
It should be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and of the modules and units described above may refer to the corresponding processes in the foregoing embodiments of the federated learning method based on self-organizing clusters, and are not described herein again.
The apparatus provided by the above embodiments may be implemented in the form of a computer program, which can be run on a computer device as shown in fig. 6.
Referring to fig. 6, fig. 6 is a schematic block diagram illustrating a structure of a computer device according to an embodiment of the present disclosure. The computer device may be a terminal.
As shown in fig. 6, the computer device includes a processor, a memory, and a network interface connected by a system bus, wherein the memory may include a nonvolatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program includes program instructions that, when executed, cause a processor to perform any of the federated learning methods based on self-organizing clusters.
The processor is used for providing calculation and control capability and supporting the operation of the whole computer equipment.
The internal memory provides an environment for the execution of a computer program on a non-volatile storage medium, which when executed by the processor, causes the processor to perform any of the self-organizing cluster-based federated learning methods.
The network interface is used for network communication, such as sending assigned tasks and the like. Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
It should be understood that the Processor may be a Central Processing Unit (CPU), and the Processor may be other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. Wherein a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Wherein, in one embodiment, the processor is configured to execute a computer program stored in the memory to implement the steps of:
acquiring broadcast signals sent by each user equipment to generate corresponding clusters;
determining target user equipment in the cluster according to the cluster, and taking the target user equipment as a central node;
receiving model parameters sent by each user equipment, and sending each model parameter to the central node;
acquiring the aggregated model parameters obtained after the central node performs aggregated federated learning on the model parameters;
and sending the aggregation model parameters to each user equipment, and updating the model parameters of the preset models in each user equipment.
In an embodiment, when implementing the acquiring of broadcast signals sent by each user equipment and the generating of a corresponding cluster, the processor is configured to implement:
generating a corresponding graph group according to the acquired broadcast signals sent by each user equipment;
determining whether the user equipments are in the same cluster by performing an aggregation calculation on the vertices of the user equipments in the graph group;
and determining the user equipment in the same cluster, and taking the user equipment in the same cluster as a cluster.
In an embodiment, when the processor generates a corresponding graph group according to the acquired broadcast signal sent by each user equipment, the processor is configured to implement:
acquiring broadcast signals sent by each user equipment;
determining the associated information among the user equipment through the broadcast signals sent by the user equipment;
and generating a corresponding graph group through the association information among the user equipment.
In one embodiment, when implementing the determining whether the user equipments are in the same cluster by performing an aggregation calculation on the vertices of the user equipments in the graph group, the processor is configured to implement:
calculating over the vertices of each user equipment in the graph group through a preset aggregation formula to obtain the aggregation parameter of each user equipment;
if the aggregation parameter is a first preset threshold, determining that the user equipments are in the same cluster;
and if the aggregation parameter is a second preset threshold, determining that the user equipments are not in the same cluster.
In an embodiment, when implementing the determining, according to the cluster, of a target user equipment in the cluster and the taking of the target user equipment as the central node, the processor is configured to implement:
acquiring social centrality information of each user equipment in the cluster;
and determining corresponding target user equipment according to the social centrality information of each user equipment, and taking the target user equipment as a central node.
In one embodiment, when implementing the obtaining of the social centrality information of each of the user equipments in the cluster, the processor is configured to implement:
obtaining social relations among the user equipment in the cluster;
obtaining social centrality vector information of each user equipment through the social relationship;
and calculating the social centrality vector information of each user equipment to obtain the social centrality information of each user equipment.
In an embodiment, after sending the aggregated model parameters to each of the user equipments and updating the model parameters of the preset model in each user equipment, the processor is configured to implement:
determining whether the preset model is in a convergence state;
if the preset model is in a convergence state, taking the preset model as a corresponding aggregation model;
and if the preset model is not in a convergence state, receiving second model parameters sent by each user equipment, and training the preset model through the second model parameters.
Embodiments of the present application further provide a computer-readable storage medium having a computer program stored thereon. The computer program includes program instructions, and the method implemented when the program instructions are executed may refer to the various embodiments of the federated learning method based on self-organizing clusters of the present application.
The computer-readable storage medium may be an internal storage unit of the computer device described in the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the computer device.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
The blockchain referred to by the application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated by cryptographic methods, where each data block contains the information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments. While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A federated learning method based on self-organizing clusters is characterized by comprising the following steps:
acquiring broadcast signals sent by each user equipment to generate corresponding clusters;
determining target user equipment in the cluster according to the cluster, and taking the target user equipment as a central node;
receiving model parameters sent by each user equipment, and sending each model parameter to the central node;
acquiring the aggregated model parameters obtained after the central node performs aggregated federated learning on the model parameters;
and sending the aggregation model parameters to each user equipment, and updating the model parameters of the preset models in each user equipment.
2. The federated learning method based on self-organizing clusters as claimed in claim 1, wherein the acquiring broadcast signals sent by each user equipment and generating the corresponding cluster comprises:
generating a corresponding graph group according to the acquired broadcast signals sent by each user equipment;
determining whether the user equipments are in the same cluster by performing an aggregation calculation on the vertices of the user equipments in the graph group;
and determining the user equipment in the same cluster, and taking the user equipment in the same cluster as a cluster.
3. The federated learning method based on self-organizing clusters as claimed in claim 2, wherein the generating a corresponding graph group according to the acquired broadcast signals sent by each user equipment comprises:
acquiring broadcast signals sent by each user equipment;
determining the associated information among the user equipment through the broadcast signals sent by the user equipment;
and generating a corresponding graph group through the association information among the user equipment.
4. The federated learning method based on self-organizing clusters as claimed in claim 2, wherein the determining whether the user equipments are in the same cluster by performing an aggregation calculation on the vertices of the user equipments in the graph group comprises:
calculating over the vertices of each user equipment in the graph group through a preset aggregation formula to obtain the aggregation parameter of each user equipment;
if the aggregation parameter is a first preset threshold, determining that the user equipments are in the same cluster;
and if the aggregation parameter is a second preset threshold, determining that the user equipments are not in the same cluster.
5. The federated learning method based on self-organizing clusters as claimed in claim 1, wherein the determining a target user equipment in the cluster according to the cluster and taking the target user equipment as a central node comprises:
acquiring social centrality information of each user equipment in the cluster;
and determining corresponding target user equipment according to the social centrality information of each user equipment, and taking the target user equipment as a central node.
6. The federated learning method based on self-organizing clusters as claimed in claim 5, wherein the obtaining social centrality information of each of the user equipments in the cluster comprises:
obtaining social relations among the user equipment in the cluster;
obtaining social centrality vector information of each user equipment through the social relationship;
and calculating the social centrality vector information of each user equipment to obtain the social centrality information of each user equipment.
7. The federated learning method based on self-organizing clusters as claimed in claim 1, wherein, after sending the aggregated model parameters to each of the user equipments and updating the model parameters of the preset model in each user equipment, the method further comprises:
determining whether the preset model is in a convergence state;
if the preset model is in a convergence state, taking the preset model as a corresponding aggregation model;
and if the preset model is not in a convergence state, receiving second model parameters sent by each user equipment, and training the preset model through the second model parameters.
8. A federated learning apparatus based on self-organizing clusters, comprising:
the generating module is used for acquiring broadcast signals sent by each user equipment and generating corresponding clusters;
the determining module is used for determining target user equipment in the cluster according to the cluster and taking the target user equipment as a central node;
the receiving and sending module is used for receiving the model parameters sent by each piece of user equipment and sending each model parameter to the central node;
the acquisition module is used for acquiring the aggregation model parameters returned after the central node performs aggregation federal learning on the model parameters;
and the updating module is used for sending the aggregation model parameters to each user equipment and updating the model parameters of the preset model in each user equipment.
9. A computer device comprising a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the steps of the federated learning method based on self-organizing clusters as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the federated learning method based on self-organizing clusters as claimed in any one of claims 1 to 7.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202110193253.5A (CN112990276B) | 2021-02-20 | 2021-02-20 | Federated learning method, apparatus, device and storage medium based on self-organizing clusters |
| PCT/CN2021/097409 (WO2022174533A1) | 2021-02-20 | 2021-05-31 | Federated learning method and apparatus based on self-organized cluster, device, and storage medium |
Publications (2)
| Publication Number | Publication Date |
| --- | --- |
| CN112990276A | 2021-06-18 |
| CN112990276B | 2023-07-21 |

Family ID: 76393764
Family Applications (1)
| Application Number | Priority Date | Filing Date | Status |
| --- | --- | --- | --- |
| CN202110193253.5A | 2021-02-20 | 2021-02-20 | Active (granted) |

Country/region publications: CN112990276B (en); WO2022174533A1 (en)
Families Citing this family (1)
| Publication | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN117808126B | 2024-02-29 | 2024-05-28 | 浪潮电子信息产业股份有限公司 | Machine learning method, device, equipment, federated learning system and storage medium |
Citations (5)
| Publication | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN109271416A | 2018-09-03 | 2019-01-25 | 中国平安人寿保险股份有限公司 | Time management recommendation method, electronic device and readable storage medium |
| WO2020229684A1 | 2019-05-16 | 2020-11-19 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Concepts for federated learning, client classification and training data similarity measurement |
| CN111600707A | 2020-05-15 | 2020-08-28 | 华南师范大学 | Decentralized federated machine learning method under privacy protection |
| CN112200263A | 2020-10-22 | 2021-01-08 | 国网山东省电力公司电力科学研究院 | Self-organizing federated clustering method applied to power distribution internet of things |
| CN112329940A | 2020-11-02 | 2021-02-05 | 北京邮电大学 | Personalized model training method and system combining federated learning and user portraits |

Family Cites Families (4)
| Publication | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| US11475350B2 | 2018-01-22 | 2022-10-18 | Google Llc | Training user-level differentially private machine-learned models |
| CN111212110B | 2019-12-13 | 2022-06-03 | 清华大学深圳国际研究生院 | Blockchain-based federated learning system and method |
| CN111966698B | 2020-07-03 | 2023-06-13 | 华南师范大学 | Blockchain-based trusted federated learning method, system, device and medium |
| CN112232527B | 2020-09-21 | 2024-01-23 | 北京邮电大学 | Safe distributed federated deep learning method |
Non-Patent Citations (1)
- JONY0917: "聚类分析(二)：图团体检测" (Cluster analysis (II): graph community detection). Retrieved from the Internet: https://blog.csdn.net/gaofeipaopaotang/article/details/80094656
Cited By (10)
| Publication | Priority date | Publication date(s) | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN113255937A / CN113255937B | 2021-06-28 | 2021-08-13 / 2021-11-09 | 江苏奥斯汀光电科技股份有限公司 | Federated learning method and system for different intelligent agents in an intelligent workshop |
| CN113487041A / CN113487041B | 2021-07-15 | 2021-10-08 / 2024-05-07 | Oppo广东移动通信有限公司; 深圳市与飞科技有限公司 | Horizontal federated learning method, device and storage medium |
| CN113469373A / CN113469373B | 2021-08-17 | 2021-10-01 / 2023-06-30 | 北京神州新桥科技有限公司 | Model training method, system, equipment and storage medium based on federated learning |
| CN113723509A / CN113723509B | 2021-08-30 | 2021-11-30 / 2024-01-16 | 平安科技(深圳)有限公司 | Follow-up monitoring method and device based on federated reinforcement learning and related equipment |
| CN114662340A / CN114662340B | 2022-04-29 | 2022-06-24 / 2023-02-28 | 烟台创迹软件有限公司 | Weighing model scheme determination method and device, computer equipment and storage medium |
Also Published As
| Publication Number | Publication Date |
| --- | --- |
| CN112990276B | 2023-07-21 |
| WO2022174533A1 | 2022-08-25 |
Legal Events
| Code | Title |
| --- | --- |
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |