CN113487041A - Horizontal federal learning method, device and storage medium - Google Patents


Publication number
CN113487041A
CN113487041A (application CN202110801990.9A; granted as CN113487041B)
Authority
CN
China
Prior art keywords
ith
model
node
local
target
Prior art date
Legal status
Granted
Application number
CN202110801990.9A
Other languages
Chinese (zh)
Other versions
CN113487041B (en
Inventor
侯宪龙
Current Assignee
Shenzhen Hefei Technology Co ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110801990.9A
Publication of CN113487041A
Application granted
Publication of CN113487041B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques
    • G06F 18/232: Non-hierarchical techniques
    • G06F 18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213: Non-hierarchical techniques using statistics or function optimisation, with fixed number of clusters, e.g. K-means clustering


Abstract

The embodiments of the present application disclose a horizontal federated learning method, apparatus, and storage medium, belonging to the technical field of machine learning. In the horizontal federated learning method, the ith client node can update the previous round's global model according to local data to obtain an ith local model; calculate the target cluster category to which the ith local model belongs; and upload the model and category to the ith miner node, so that the ith miner node aggregates the local models of the same target cluster category to generate the current round's global model, which the client node then downloads from the ith miner node. Because a clustering method is adopted, the miner nodes aggregate local models of the same category when updating the global model, so that the updated global model performs better, interference from local models with larger deviations is avoided, and the update efficiency of horizontal federated learning is improved in practical application scenarios where local models differ significantly.

Description

Horizontal federated learning method, apparatus and storage medium
Technical Field
The embodiments of the present application relate to the technical field of machine learning, and in particular to a horizontal federated learning method, apparatus, and storage medium.
Background
Horizontal Federated Learning (HFL) is widely used as a method in which terminals and a cloud cooperatively train a machine learning model.
In the related art, a central server is arranged in the cloud, and each user uses a terminal. The central server establishes a horizontal federated learning system; the terminals participate in the system independently and jointly update the global model in the system.
Disclosure of Invention
The embodiments of the present application provide a horizontal federated learning method, apparatus, and storage medium. The technical solution is as follows:
According to one aspect of the present application, a horizontal federated learning method is provided, applied to an ith client node in a horizontal federated learning system, where i is a positive integer. The method includes:
acquiring a (k-1)th-round global model, where the (k-1)th-round global model is obtained after the (k-1)th round of training in the horizontal federated learning process, and k is a positive integer greater than or equal to 2;
updating the (k-1)th-round global model according to local data to obtain an ith local model;
calculating the target cluster category to which the ith local model belongs;
uploading the ith local model and the target cluster category to an ith miner node, so that the ith miner node aggregates the local models belonging to the target cluster category into a kth-round global model, where the ith miner node belongs to a blockchain system; and
downloading the kth-round global model from the ith miner node.
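The client-side steps above can be sketched in code. The following is a minimal illustrative sketch, not the patent's implementation: the toy linear model, the single gradient step in `local_update`, and the nearest-centroid rule in `cluster_category` are assumptions standing in for the unspecified local training and clustering procedures, and all names are hypothetical.

```python
from typing import Dict, List, Tuple

Model = Dict[str, float]  # toy stand-in for model parameters

def local_update(global_model: Model, local_data: List[Tuple[float, float]],
                 lr: float = 0.1) -> Model:
    """Update the round-(k-1) global model with local data: one gradient step
    on a toy linear model y ~ w*x + b under squared loss (illustrative only)."""
    w, b = global_model["w"], global_model["b"]
    n = len(local_data)
    gw = sum(2 * (w * x + b - y) * x for x, y in local_data) / n
    gb = sum(2 * (w * x + b - y) for x, y in local_data) / n
    return {"w": w - lr * gw, "b": b - lr * gb}

def cluster_category(model: Model, centroids: List[Model]) -> int:
    """Assign the local model to the nearest cluster centroid (squared L2)."""
    def dist(a: Model, c: Model) -> float:
        return (a["w"] - c["w"]) ** 2 + (a["b"] - c["b"]) ** 2
    return min(range(len(centroids)), key=lambda j: dist(model, centroids[j]))
```

The client would then upload the returned model and category index to its miner node and later download the aggregated round-k model.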
According to another aspect of the present application, a horizontal federated learning method is provided, applied to an ith miner node in a blockchain system, where i is a positive integer. The method includes:
receiving an ith local model and a target cluster category uploaded by an ith client node;
storing the ith local model into a candidate block;
downloading, based on the target cluster category, the local models corresponding to the target cluster category from broadcast data of other miner nodes in the blockchain system, and storing them in the candidate block;
in response to a target miner node in the blockchain system completing a proof-of-work (PoW) operation, aggregating the local models corresponding to the target cluster category that are stored in the generation block produced by the target miner node to obtain a kth-round global model, where the generation block is the candidate block of the target miner node at the time it completes the PoW operation; and
sending the kth-round global model to the ith client node.
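The miner-side aggregation step can be illustrated as follows. This is a hedged sketch under the assumption that "aggregate" means parameter averaging over the matching local models (the text above does not fix the aggregation formula), and the candidate-block representation is invented for illustration.

```python
from typing import Dict, List

Model = Dict[str, float]

def aggregate_by_category(
    candidate_block: List[Dict],  # entries: {"model": Model, "category": int}
    target_category: int,
) -> Model:
    """Average only the local models whose cluster category matches the target,
    producing the round-k global model for that category."""
    selected = [e["model"] for e in candidate_block if e["category"] == target_category]
    if not selected:
        raise ValueError("no local model in the candidate block matches the category")
    keys = selected[0].keys()
    return {k: sum(m[k] for m in selected) / len(selected) for k in keys}
```

Note how a model in a different category (e.g. one with a large deviation) is simply excluded from the average, which is the interference-avoidance effect the abstract describes.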
According to another aspect of the present application, a horizontal federated learning apparatus is provided, applied to an ith client node in a horizontal federated learning system, where i is a positive integer. The apparatus includes:
a model acquisition module, configured to acquire a (k-1)th-round global model, where the (k-1)th-round global model is obtained after the (k-1)th round of training in the horizontal federated learning process, and k is a positive integer greater than or equal to 2;
a model updating module, configured to update the (k-1)th-round global model according to local data to obtain an ith local model;
a category calculation module, configured to calculate the target cluster category to which the ith local model belongs;
a data uploading module, configured to upload the ith local model and the target cluster category to an ith miner node, so that the ith miner node aggregates the local models belonging to the target cluster category into a kth-round global model, where the ith miner node belongs to a blockchain system; and
a first downloading module, configured to download the kth-round global model from the ith miner node.
According to another aspect of the present application, a horizontal federated learning apparatus is provided, applied to an ith miner node in a blockchain system, where i is a positive integer. The apparatus includes:
a data receiving module, configured to receive the ith local model and the target cluster category uploaded by the ith client node;
a model storage module, configured to store the ith local model into a candidate block;
a second downloading module, configured to download, based on the target cluster category, the local models corresponding to the target cluster category from broadcast data of other miner nodes in the blockchain system, and store them in the candidate block;
a model aggregation module, configured to, in response to a target miner node in the blockchain system completing a proof-of-work (PoW) operation, aggregate the local models corresponding to the target cluster category that are stored in the generation block produced by the target miner node to obtain a kth-round global model, where the generation block is the candidate block of the target miner node at the time it completes the PoW operation; and
a model sending module, configured to send the kth-round global model to the ith client node.
According to another aspect of the present application, a client node is provided, including a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the horizontal federated learning method provided in the various aspects of the present application as applied to an ith client node in a horizontal federated learning system.
According to another aspect of the present application, a miner node is provided, including a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the horizontal federated learning method provided in the various aspects of the present application as applied to an ith miner node in a blockchain system.
According to another aspect of the present application, a computer-readable storage medium is provided, in which at least one instruction is stored; the instruction is loaded and executed by a processor to implement the horizontal federated learning method provided in the various aspects of the present application as applied to an ith client node in a horizontal federated learning system.
According to another aspect of the present application, a computer-readable storage medium is provided, in which at least one instruction is stored; the instruction is loaded and executed by a processor to implement the horizontal federated learning method provided in the various aspects of the present application as applied to an ith miner node in a blockchain system.
According to one aspect of the present application, a computer program product is provided, including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the horizontal federated learning method provided in the various aspects of the present application as applied to an ith client node in a horizontal federated learning system.
According to one aspect of the present application, a computer program product is provided, including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the horizontal federated learning method provided in the various aspects of the present application as applied to an ith miner node in a blockchain system.
The technical solutions provided in the embodiments of the present application may bring the following beneficial effects:
In the horizontal federated learning method, the ith client node can update the previous round's global model according to local data to obtain an ith local model; calculate the target cluster category to which the ith local model belongs; and upload the model and category to the ith miner node, so that the ith miner node aggregates the local models of the same target cluster category to generate the current round's global model, which the client node then downloads from the ith miner node, completing the current round of the horizontal federated learning model update. Because clustering is performed locally at the client node, the miner node aggregates only local models of the same category during the global model update, so that the updated global model performs better, interference from local models with larger deviations is avoided, and the update efficiency of horizontal federated learning is improved in practical application scenarios where local models differ significantly.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a block diagram of a client node according to an exemplary embodiment of the present application;
Fig. 2 is a block diagram of a miner node according to an exemplary embodiment of the present application;
Fig. 3 is a system architecture diagram of a horizontal federated learning system in the related art;
Fig. 4 is a system architecture diagram of a horizontal federated learning system according to an embodiment of the present application;
Fig. 5 is a flowchart of a horizontal federated learning method according to an exemplary embodiment of the present application;
Fig. 6 is a flowchart of a horizontal federated learning method according to an exemplary embodiment of the present application;
Fig. 7 is a flowchart of a horizontal federated learning method according to an exemplary embodiment of the present application;
Fig. 8 is a block diagram of a horizontal federated learning apparatus according to an exemplary embodiment of the present application;
Fig. 9 is a block diagram of a horizontal federated learning apparatus according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application, as recited in the appended claims.
In the description of the present application, it should be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. It should also be noted that, unless otherwise explicitly specified or limited, the term "connected" is to be interpreted broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; and direct or indirect through an intermediary. The specific meanings of the above terms in the present application can be understood by those of ordinary skill in the art on a case-by-case basis. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
As used herein, the term "if" is optionally interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" is optionally interpreted as "upon determining", "in response to determining", "upon detecting (the stated condition or event)", or "in response to detecting (the stated condition or event)", depending on the context.
It is noted that the use of personally identifiable information should follow privacy policies and practices that are recognized as meeting or exceeding industry or governmental requirements for maintaining user privacy. In particular, when personally identifiable information is managed and processed, the nature of its authorized use should be explicitly specified to the user, so as to minimize the risk of inadvertent or unauthorized access or use.
For example, the horizontal federated learning method disclosed in the embodiments of the present application may be applied to a client node, where the client node includes a display screen and a computing unit. Client nodes may include cell phones, tablets, laptops, desktops, all-in-one computers, and the like.
Referring to fig. 1, fig. 1 is a block diagram of a client node according to an exemplary embodiment of the present application. As shown in fig. 1, the client node includes a processor 120 and a memory 140; the memory 140 stores at least one instruction, which is loaded and executed by the processor 120 to implement the horizontal federated learning method of the various method embodiments of the present application.
In the present application, the client node 100 is an electronic device having data computation capability. The client node 100 acquires a (k-1)th-round global model, where the (k-1)th-round global model is obtained after the (k-1)th round of training in the horizontal federated learning process, and k is a positive integer greater than or equal to 2; updates the (k-1)th-round global model according to the local data to obtain an ith local model; calculates the target cluster category to which the ith local model belongs; uploads the ith local model and the target cluster category to the ith miner node, so that the ith miner node aggregates the local models belonging to the same target cluster category to generate a kth-round global model, where the ith miner node belongs to a blockchain system; and downloads the kth-round global model from the ith miner node.
The processor 120 may include one or more processing cores. The processor 120 connects the various parts of the client node 100 using various interfaces and lines, and performs the various functions of the client node 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 140 and invoking data stored in the memory 140. Optionally, the processor 120 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 120 may integrate one or more of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 120 and may instead be implemented by a separate chip.
The Memory 140 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 140 includes a non-transitory computer-readable medium. The memory 140 may be used to store instructions, programs, code sets, or instruction sets. The memory 140 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing various method embodiments described below, and the like; the storage data area may store data and the like referred to in the following respective method embodiments.
Referring to fig. 2, fig. 2 is a block diagram of a miner node according to an exemplary embodiment of the present application. As shown in fig. 2, the miner node includes a processor 220 and a memory 240; the memory 240 stores at least one instruction, which is loaded and executed by the processor 220 to implement the horizontal federated learning method of the various method embodiments of the present application.
In the present application, the miner node 200 is an electronic device having data computation capability. It should be noted that, since miner nodes are nodes in the blockchain system, they participate in the blockchain's mining activity; the parallel computing performance of the miner node 200 can therefore be relatively high. The miner node 200 receives the ith local model and the target cluster category uploaded by the ith client node; stores the ith local model into a candidate block; downloads, based on the target cluster category, the local models corresponding to the target cluster category from broadcast data of other miner nodes in the blockchain system and stores them into the candidate block; in response to a target miner node in the blockchain system completing a proof-of-work (PoW) operation, aggregates the local models corresponding to the target cluster category that are stored in the generation block produced by the target miner node to obtain a kth-round global model, where the generation block is the candidate block of the target miner node at the time it completes the PoW operation; and sends the kth-round global model to the ith client node.
The processor 220 may include one or more processing cores. The processor 220 connects the various parts of the miner node 200 using various interfaces and lines, and performs the various functions of the miner node 200 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 240 and invoking data stored in the memory 240. Optionally, the processor 220 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 220 may integrate one or more of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 220 and may instead be implemented by a separate chip.
The Memory 240 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 240 includes a non-transitory computer-readable medium. The memory 240 may be used to store instructions, programs, code sets, or instruction sets. The memory 240 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing various method embodiments described below, and the like; the storage data area may store data and the like referred to in the following respective method embodiments.
Referring to fig. 3, fig. 3 is a system architecture diagram of a horizontal federated learning system in the related art. In fig. 3, the horizontal federated learning system 300 includes a first client node 311, a second client node 312, and a central server 320. It should be noted that the horizontal federated learning system 300 may include any number of client nodes; the two client nodes shown in fig. 3 are merely exemplary.
In fig. 3, a first client node 311 and a second client node 312 are used to obtain a global model of the current round from a central server 320. After obtaining the global model of the current round, the first client node 311 and the second client node 312 each train the global model of the current round using the locally stored data, and obtain a first local model and a second local model. The first local model and the second local model are then transmitted back to the central server 320 by the respective client node to which they belong. The central server 320 will aggregate the first local model and the second local model to get the global model for the next round.
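For context, the central-server aggregation described above is commonly implemented as FedAvg-style weighted averaging. The sketch below shows that standard scheme from the literature, not this patent's method; the weighting by local sample count and all names are assumptions.

```python
from typing import Dict, List, Tuple

Model = Dict[str, float]

def fedavg(updates: List[Tuple[Model, int]]) -> Model:
    """Central-server aggregation: weight each client's local model by its
    number of local samples, then average (FedAvg-style)."""
    total = sum(n for _, n in updates)
    keys = updates[0][0].keys()
    return {k: sum(m[k] * n for m, n in updates) / total for k in keys}
```

With this scheme every local model enters the same average, which is exactly why a single badly deviating client or a compromised server affects all participants, motivating the design that follows.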
As can be seen from the above scheme in the related art, the role of the central server is crucial. If the central server is damaged or fails, the client nodes may not obtain a correct global model, and the horizontal federated learning system cannot operate normally. Likewise, if the data in the central server is tampered with, the central server issues the tampered global model to each client node; the client nodes then cannot work effectively according to that global model and may even produce erroneous operation results, causing losses. Therefore, the central server in the related art requires intensive maintenance to ensure the security of the system.
In view of the risks and challenges faced by the horizontal federated learning system in the related art, the present application provides a horizontal federated learning method combined with blockchain technology. Referring to fig. 4, fig. 4 is a system architecture diagram of a horizontal federated learning system according to an embodiment of the present application. Fig. 4 includes a horizontal federated learning system 410 and a blockchain system 420.
Optionally, the horizontal federated learning system 410 includes a number of client nodes, which may range from several to hundreds of thousands; the specific number of client nodes is not limited in the embodiments of the present application. Illustratively, the client nodes in the horizontal federated learning system 410 are represented in fig. 4 by four client nodes: the 1st client node 411, the 2nd client node 412, the 3rd client node 413, and the 4th client node 414.
Similarly, the blockchain system 420 includes a number of miner nodes, which may range from several to hundreds of thousands; the specific number of miner nodes is not limited in the embodiments of the present application. Illustratively, the miner nodes in the blockchain system 420 are represented in fig. 4 by four miner nodes: the 1st miner node 421, the 2nd miner node 422, the 3rd miner node 423, and the 4th miner node 424.
In the system shown in fig. 4, the blockchain system 420 replaces the central server of the related art, so that the global model can no longer be easily tampered with. Meanwhile, because the global model is maintained by the blockchain, even if some miner nodes fail, most client nodes can still receive the correct next-round global model, and the normal operation of the horizontal federated learning system is not affected.
Based on the framework provided in fig. 4, the horizontal federated learning method of the present application is introduced below and described in detail in the following sections.
Referring to fig. 5, fig. 5 is a flowchart of a horizontal federated learning method according to an exemplary embodiment of the present application. The horizontal federated learning method can be applied to the ith client node in the horizontal federated learning system, where i is a positive integer. In fig. 5, the horizontal federated learning method includes:
Step 510, acquiring a (k-1)th-round global model, where the (k-1)th-round global model is obtained after the (k-1)th round of training in the horizontal federated learning process, and k is a positive integer greater than or equal to 2.
In the embodiment of the present application, the ith client node may be a personal device actually used by the user. For example, the ith client node may be a smartphone, a computer, or another device used by the user whose computing performance meets the computing performance requirements for local devices in horizontal federated learning.
It should be noted that a client node may verify its computational performance before joining the horizontal federated learning system provided in the present application; if the performance requirement is met, the client node may apply to join the system. Optionally, the process of the client node joining the horizontal federated learning system may be confirmed by the user through a user interface (UI) operation, or by the user agreeing, as part of a bundled option, to participate in a performance improvement plan or another program for improving the performance of the local model.
In the present application, the ith client node acquires the (k-1)th-round global model. In one possible mode, the (k-1)th-round global model is already stored locally; in this case, the ith client node only needs to obtain it from local storage. In another possible mode, the (k-1)th-round global model is not stored locally, and the ith client node needs to obtain it from the corresponding miner node, namely the miner node that corresponded to the ith client node when the global model was updated in the (k-1)th round of the horizontal federated learning system.
Step 520, updating the (k-1)th-round global model according to the local data to obtain the ith local model.
In the present application, once the ith client node has obtained the (k-1)th-round global model, it can update the model locally: the (k-1)th-round global model is updated using local data, thereby obtaining the ith local model.
In one possible approach, the ith client node may have stored therein the algorithms or formulas required to update the global model. In different application scenarios, the algorithm or formula required for updating the global model is different. In the present application, one possible update formula is shown, please see equation (1).
wi^k = argmin over w of { Fi(w) + (λ/2) · ||w − w^(k−1)||² }    (1)

In equation (1), wi^k represents the ith local model obtained in the kth round, and Fi represents the loss function of the ith client node. For example, the loss function may be the mean squared error (MSE) or the cross entropy. Illustratively, the choice of the loss function depends on the downstream task performed by the machine learning model obtained through the horizontal federal learning training. In one possible approach, the cross entropy may be selected as the loss function if the downstream task is a classification task, and the mean squared error (MSE) may be selected if the downstream task is a regression task.

The term Fi(w) means that the distance between the update result of the model and the label should be minimized as much as possible.

The term (λ/2) · ||w − w^(k−1)||² is a regularization term, which is used for controlling the parameters of the updated ith local model to be as close as possible to the parameters of the (k−1)th round global model, thereby enhancing the convergence capability of the model.
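Illustratively, the local update of equation (1) may be sketched in code as gradient descent on a mean squared error loss plus the regularization term that keeps the updated parameters close to the previous-round global model. The local data, the learning rate, and the regularization coefficient lam below are illustrative assumptions, not values specified by the present application:

```python
import numpy as np

def local_update(w_global, X, y, lam=0.1, lr=0.01, steps=200):
    """Sketch of the local update: minimize MSE(X @ w, y) plus a term that
    pulls w toward the previous-round global model w_global."""
    w = w_global.copy()
    n = len(y)
    for _ in range(steps):
        grad = (2.0 / n) * X.T @ (X @ w - y)   # gradient of the MSE loss term
        grad += lam * (w - w_global)           # gradient of the regularization term
        w -= lr * grad
    return w

# Toy local data of the ith client node: y = 2 * x
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
w_prev = np.array([0.0])                       # round-(k-1) global model
w_local = local_update(w_prev, X, y)           # the ith local model
```

With lam > 0, the result stays slightly closer to w_prev than the pure least-squares solution would, which reflects the convergence-enhancing role of the regularization term.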
Step 530, calculating the target cluster category to which the ith local model belongs.
In the application, after the ith client node obtains the ith local model, it can calculate the target cluster category to which the ith local model belongs. In one possible approach, the ith client node has a specified feature algorithm stored locally. The ith client node can call the stored feature algorithm, take the ith local model as input, and obtain the target cluster category to which the ith local model belongs after the operation.
Illustratively, in the horizontal federal learning method provided by the application, the number of cluster categories can be set according to the actual application scenario. In one possible approach, the number of cluster categories may be a positive integer, e.g., 2, 4, 8, or 16. The number of cluster categories is not limited in this application.
Step 540, uploading the ith local model and the target cluster category to the ith miner node, so that the ith miner node aggregates the local models belonging to the same target cluster category into the kth round global model, where the ith miner node belongs to the blockchain system.
In the application, after the ith client node obtains the ith local model and the target cluster category, the ith client node can perform information interaction with the ith miner node.
It should be noted that the ith miner node belongs to the blockchain system, and the ith miner node is a miner node that has successfully mined a block in the blockchain system. In one possible design of the present application, after a miner node that successfully mines appears in the blockchain system, that miner node is qualified to execute the horizontal federal learning method provided in the present application. In the kth round of model updating of horizontal federal learning, the miner node may randomly select, from among the client nodes that have not yet been bound to a miner node, a client node to bind with. The client node bound by the ith miner node is the ith client node.
It can be seen that the ith client node and the ith miner node have a binding relationship in advance. Thus, the ith client node can upload the ith local model and the target cluster category to the ith miner node. Thereafter, the ith miner node can aggregate the local models belonging to the same target cluster category into the kth round global model in the blockchain system.
Step 550, downloading the kth round global model from the ith miner node.
In the design of the application, after the ith miner node is successfully aggregated to obtain the kth round global model, the kth round global model is sent to the corresponding ith client node. Accordingly, the ith client node downloads the kth round global model from the ith miner node.
In one possible approach, the ith client node may be designed to listen for information from the ith miner node, and to receive the kth round global model sent by the ith miner node when it detects that the ith miner node has sent information.
In another possible approach, the ith client node does not listen for information of the ith miner node. The ith miner node stores the communication address of the ith client node. And the ith miner node directly sends the kth round global model to the communication address so as to enable the ith client node to obtain the global model which is updated in the current round, namely the kth round global model.
In summary, in the horizontal federated learning method provided in this embodiment, the ith client node can update the global model of the previous round according to the local data to obtain the ith local model; and calculating the target cluster category to which the ith local model belongs, uploading the target cluster category to the ith miner node so that the ith miner node aggregates the local models of the same target cluster category to generate a global model of the current round, and then downloading the global model of the current round from the ith miner node to finish the model updating process of the current round of horizontal federal learning. Therefore, the method carries out local clustering, so that the miner nodes can aggregate the local models of the same category in the process of updating the global model, the updated global model has better performance, the interference of the local model with larger deviation is avoided, and the updating efficiency of the horizontal federal learning in the actual application scene with larger local model difference is improved.
Referring to fig. 6, fig. 6 is a flowchart of a method for horizontal federal learning according to an exemplary embodiment of the present application. The horizontal federal learning method can be applied to the ith miner node in the blockchain system shown above. In fig. 6, the horizontal federal learning method includes:
Step 610, receiving the ith local model and the target cluster category uploaded by the ith client node.
In the application, after the ith miner node is bound with the ith client node in advance, the ith local model uploaded by the ith client node and the target cluster category to which the ith local model belongs can be received.
Step 620, storing the ith local model in the candidate block.
It should be noted that, after receiving the ith local model and the target cluster category, the ith miner node can store the ith local model in the candidate block. The candidate block is a block maintained by the ith miner node itself, which has not yet been published in the blockchain system and which is not yet full.
Step 630, based on the target cluster category, downloading a local model corresponding to the target cluster category from the broadcast data of other miners' nodes in the blockchain system and storing the local model in the candidate block.
Optionally, after storing the ith local model in the candidate block, the ith miner node may broadcast the ith local model and the target cluster category in the blockchain system so that the ith local model can be saved by other miner nodes storing the local models of the target cluster category.
In the present application, the ith miner node can receive the broadcast data of other miner nodes in the block chain system. If the cluster type in the broadcast data is the same as the target cluster type, the ith miner node can download a local model corresponding to the cluster type from the broadcast data and store the local model into a candidate block.
Step 640, in response to a target miner node in the blockchain system completing the proof-of-work (PoW) operation, aggregating to obtain the kth round global model according to the local models corresponding to the target cluster category stored in a generation block generated by the target miner node; the generation block is the candidate block of the target miner node when the PoW operation is completed.
It should be noted that the local model corresponding to the target cluster category includes the ith local model.
In this application, a target miner node in the blockchain system completes the proof-of-work (PoW) operation. At this time, the target miner node has the right to send its generation block to the other miner nodes in the whole system, where the generation block is the candidate block of the target miner node when the PoW operation is completed. After the target miner node sends the generation block to the other miner nodes in the whole system, all the miner nodes in the blockchain system aggregate the local models corresponding to the target cluster category stored in the generation block to obtain the kth round global model.
In one possible approach, the ith miner node may use the algorithm of equation (2) to obtain the kth round global model.
w^k = (1/n) · Σ_{j=1..n} wj^k    (2)

In equation (2), w^k represents the kth round global model, n represents the number of local models corresponding to the target cluster category stored in the generation block, and wj^k represents each local model corresponding to the target cluster category.
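Illustratively, the aggregation of equation (2) is an element-wise mean over the local models of the same target cluster category; the parameter values below are illustrative:

```python
import numpy as np

def aggregate(local_models):
    """Sketch of equation (2): the kth round global model is the mean of the
    n local models of the target cluster category stored in the generation block."""
    return np.mean(np.stack(local_models), axis=0)

# Three local models of the same target cluster category (illustrative values)
cluster_models = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([2.0, 0.0])]
w_k = aggregate(cluster_models)                # the kth round global model
```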
It should be noted that, in the present application, after a miner node finds a new block, it broadcasts a message to the other miner nodes in the blockchain system, so that the other miner nodes determine whether the blockchain has forked. If the blockchain in the blockchain system has forked, all the miner nodes empty the content of their respective candidate blocks and re-obtain the corresponding local models and the cluster categories to which the local models belong from the corresponding client nodes.
Step 650, sending the k round global model to the ith client node.
In the present application, the ith miner node can send the kth round global model to the ith client node.
In one possible approach, the ith miner node knows the communication address of the ith client node corresponding to the ith miner node in advance. In this scenario, the ith miner node sends the kth round global model to the ith client node according to the communication address.
In another possible approach, the ith miner node does not know the communication address of the ith client node corresponding to the ith miner node. In this scenario, the ith miner node may send the kth round global model in a broadcast manner. Correspondingly, the ith client node knows the identification of the ith miner node and receives the kth round global model according to the identification of the ith miner node.
In another possible approach, the ith miner node does not know the communication address of the ith client node corresponding to the ith miner node. In this scenario, the ith miner node may pre-establish a communication link with the ith client node over which the ith miner node sends the kth round global model to the ith client node.
In summary, in this embodiment, the ith miner node in the blockchain system can receive the ith local model and the target cluster category sent by the ith client node, store the ith local model in a local candidate block, then download the local models corresponding to the target cluster category from other miner nodes based on the target cluster category and store them in the candidate block, and, when there is a target miner node that has completed the proof-of-work (PoW) operation in the blockchain system, aggregate the local models corresponding to the target cluster category in the generation block of the target miner node to obtain the kth round global model, and then send the kth round global model to the ith client node. Therefore, the method realizes a horizontal federal learning method combined with the blockchain system, can ensure that the global model is not tampered with, and can avoid the phenomenon that the whole horizontal federal learning method stops running after a central server fails. Meanwhile, the local models belonging to the same category can be aggregated, so that the interference of local models with larger deviations when the global model is generated is avoided, and the updating efficiency of horizontal federal learning in actual application scenarios with larger local model differences is improved.
Referring to fig. 7, fig. 7 is a flowchart of a method for horizontal federal learning according to an exemplary embodiment of the present application. The horizontal federal learning method can be completed by the cooperation of the ith client node and the ith miner node, wherein i is a positive integer. In fig. 7, the horizontal federal learning method includes:
Step 710, the ith client node obtains the (k-1)th round global model.
In this application, the execution process of step 710 may refer to the execution process of step 510, and details are not described in this embodiment of the application.
And step 720, the ith client node updates the k-1 th round global model according to the local data to obtain the ith local model.
In this application, the execution process of step 720 may refer to the execution process of step 520, and the embodiments of this application are not described again.
In step 731, the ith client node obtains the locally stored clustering algorithm.
In the application, the ith client node can store the clustering algorithm locally in advance according to the downstream service requirement. The clustering algorithm is used for clustering local models with similar characteristics into one class.
In one possible approach, the clustering algorithm may be a SimHash clustering algorithm.
In step 732, the ith client node obtains a target cluster category according to the ith local model based on a clustering algorithm.
Optionally, based on a clustering algorithm, the ith local model is mapped to a hash vector with length n, the hash vector is a one-dimensional vector used for representing the target cluster category, and each bit of the hash vector is 0 or 1.
In the application, if the clustering algorithm adopted by the ith client node is the SimHash clustering algorithm, the ith client node can calculate the target clustering category according to equation (3).
hi = [ sign(x1^T · wi), sign(x2^T · wi), …, sign(xn^T · wi) ]    (3)

In equation (3), hi represents the target cluster category. Optionally, hi is a hash vector, which is a one-dimensional vector. The length of hi is n, and each bit in the hash vector is either 0 or 1; here sign(z) is taken as 1 when z ≥ 0 and as 0 otherwise.

In equation (3), xj is a random unit-norm vector of size m × 1, and wi is the parameter vector of the ith local model, of size m × 1.

The process of computing the hash vector by equation (3) is illustrated below by way of an example. Let n be 2, x1 = [0.1, 0.3, 0.5]^T, and x2 = [-0.2, -0.4, 0.6]^T. For a local model wi satisfying x1^T · wi ≥ 0 and x2^T · wi < 0, the hash vector hi of the ith client node is [1, 0].

In this example, the hash vector may have four categories: [0, 0], [0, 1], [1, 0], and [1, 1]. That is, the target cluster category includes four types in total.
In the application, the SimHash clustering algorithm can map similar local models to the same hash vector with a high probability, and the local models generated in the current round can thus be clustered into a plurality of categories based on the SimHash clustering algorithm.
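Illustratively, the SimHash-style clustering of equation (3) may be sketched as follows, reusing the projection vectors x1 and x2 from the example above; the local model parameter vectors are illustrative assumptions:

```python
import numpy as np

def simhash_category(w, projections):
    """Sketch of equation (3): bit j is 1 when the dot product of the model
    parameter vector w with the random projection vector x_j is non-negative."""
    return [1 if float(x @ w) >= 0.0 else 0 for x in projections]

# Projection vectors from the example (normalizing them would not change the signs)
x1 = np.array([0.1, 0.3, 0.5])
x2 = np.array([-0.2, -0.4, 0.6])

w_a = np.array([1.0, 2.0, 3.0])     # two similar local models...
w_b = np.array([1.1, 1.9, 3.2])
w_c = np.array([-1.0, -2.0, -3.0])  # ...and a very different one

cat_a = simhash_category(w_a, [x1, x2])   # [1, 1]
cat_b = simhash_category(w_b, [x1, x2])   # [1, 1] -- same category as w_a
cat_c = simhash_category(w_c, [x1, x2])   # [0, 0] -- a different category
```

This reflects the property that similar local models are mapped to the same hash vector with a high probability.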
In step 740, the ith client node uploads the ith local model and the target cluster category to the ith miner node.
In this application, the execution process of step 740 may refer to the execution process of step 540, and the embodiment of this application is not described again.
Optionally, when the ith client node uploads the ith local model and the target cluster category, the ith timestamp may also be uploaded to the ith miner node. And the ith timestamp is used for indicating the moment when the ith local model completes training.
Correspondingly, the ith miner node receives the ith local model and the target cluster category uploaded by the ith client node. Optionally, the ith miner node can also receive the ith timestamp uploaded by the ith client node.
Step 750, the ith miner node sends reward data to the ith client node, the reward data being used to provide the redemption item.
Illustratively, the reward data may be used for redeeming physical items in real life, or for redeeming virtual items, which is not limited in the embodiments of the present application.
Accordingly, the ith client node receives reward data sent by the ith miner node.
Step 761, the ith miner node sends the ith local model to the (i+1)th miner node in the blockchain system, so that the (i+1)th miner node verifies the authenticity of the ith local model.
Step 762, in response to the ith local model being true, the ith miner node stores the ith local model in a candidate block.
In the application, the ith miner node can send the received ith local model to other miner nodes for verification. One possible verification method is to send the local model to the neighboring miner node for verification, and if the local model passes the verification, the local model is true. When the ith local model is true, the ith miner node stores the ith local model in a local candidate block.
Step 770, based on the target cluster category, the ith miner node downloads the local model corresponding to the target cluster category from the broadcast data of other miner nodes in the blockchain system and stores the local model in the candidate block.
It should be noted that the ith miner node executes the proof-of-work (PoW) operation in response to its candidate block satisfying the aggregation trigger condition. The aggregation trigger condition comprises at least one of the following: the candidate block is full; or, the time elapsed since the ith local model was stored is longer than a second threshold.
Wherein the second threshold is a time length threshold.
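Illustratively, the aggregation trigger condition may be sketched as the following check; the block capacity and the threshold values are illustrative assumptions:

```python
import time

def should_start_pow(candidate_block, capacity, stored_at, second_threshold):
    """Sketch of the aggregation trigger condition: start the proof-of-work
    operation when the candidate block is full, or when more than
    second_threshold seconds have elapsed since the ith local model was stored."""
    block_full = len(candidate_block) >= capacity
    timed_out = (time.time() - stored_at) > second_threshold
    return block_full or timed_out

full = should_start_pow(["m1", "m2"], 2, time.time(), 60.0)     # True: block full
waiting = should_start_pow(["m1"], 2, time.time(), 60.0)        # False: neither condition
stale = should_start_pow(["m1"], 2, time.time() - 120.0, 60.0)  # True: timed out
```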
In the present application, the execution process of step 770 may refer to the execution process of step 630, and is not described herein again.
In this application, after the ith miner node performs step 770, step 781 and step 782 may be performed, or step 783 and step 784 may be performed.
In step 781, in response to the target miner node being the ith miner node, the candidate block is determined to be a generation block.
Optionally, in a case that the ith miner node is the target miner node, the ith miner node determines its own candidate block as the generation block. In the present application, a generated block is a block in a block chain corresponding to a target cluster category; the block chains corresponding to different cluster types are different.
Step 782, aggregating to obtain the kth round global model according to the local models corresponding to the target cluster category stored in the generation block.
Step 783, in response to the target miner node being a miner node other than the ith miner node in the blockchain system, acquiring the generation block of the target miner node.
Step 784, aggregating to obtain the kth round global model according to the local models corresponding to the target cluster category stored in the generation block.
Illustratively, the processes shown in the above step 782 and step 784 may alternatively be performed as the following step (a1) and step (a2).
Step (a1), calculating the mean value of the local models corresponding to the target cluster category.
Step (a2), determining the mean value as the kth round global model.
In a possible manner, the ith miner node may obtain the kth round global model by using an algorithm of equation (2), and details may be referred to the embodiment shown in fig. 6, which is not described herein again.
Step 790, in response to the number of local models used for generating the k-th round global model being greater than the first threshold, sending the k-th round global model to the ith client node.
In this example, the first threshold may be a percentage, such as 25%, 30%, 40%, 45%, etc., which is not limited in this application.
Correspondingly, when the number of local models for generating the kth round global model is larger than a first threshold value, the ith client node downloads the kth round global model from the ith miner node.
In this example, after the ith client node receives the kth round global model, the kth round update ends. It should be noted that, because the present application adopts a clustering method, local client nodes belonging to different categories will correspond to different blockchains. For example, if the application includes 4 categories, the miners' nodes maintain 4 blockchains, so that the client nodes belonging to the 4 categories can obtain the kth round global model belonging to their own category, and the kth round global model of each category has a good training effect.
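Illustratively, the per-category blockchains described above may be sketched as a mapping from each cluster category to its own chain; the data structures and names below are illustrative assumptions:

```python
from collections import defaultdict

# One chain per cluster category, e.g. the 4 categories of a 2-bit hash vector.
chains = defaultdict(list)   # category (as a tuple of bits) -> list of blocks

def publish_global_model(category, round_k, global_model):
    """Append the aggregated round-k global model to the chain of its category,
    so that client nodes of that category read the model from their own chain."""
    chains[tuple(category)].append({"round": round_k, "model": global_model})

publish_global_model([1, 0], 3, [0.5, 0.25])  # kth round model of category [1, 0]
publish_global_model([1, 1], 3, [0.9, 0.1])   # kth round model of category [1, 1]
```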
In summary, the horizontal federated learning method provided in this embodiment enables a horizontal federated learning system using the blockchain technique not only to mitigate the problems that horizontal federated learning cannot continue to run when a central server fails and that the global model is easy to tamper with, but also, through the clustering method, to allow client nodes with large differences in real life to participate in horizontal federated learning, thereby efficiently training global models of the respective categories.
Optionally, the method and the device can perform clustering operation in each round, so that the local models with the same characteristics are assigned to one category every time, the training effect of each round is good, and the training efficiency of each round of horizontal federal learning is improved.
Optionally, since the ith client node can obtain reward data, the method can increase the enthusiasm of client nodes for participation, and helps recruit client nodes to join the open network, thereby improving the operating performance of the horizontal federal learning system.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 8, fig. 8 is a block diagram of a horizontal federal learning device according to an exemplary embodiment of the present application. The horizontal federal learning device is applied to the ith client node in the horizontal federal learning system, wherein i is a positive integer. The horizontal federal learning device can be implemented as all or part of a client node in software, hardware, or a combination of both. The device includes:
and the model obtaining module 810 is used for obtaining a k-1 th round global model, wherein the k-1 th round global model is obtained after a k-1 th round of training in the transverse federal learning process, k is greater than or equal to 2, and k is a positive integer.
And the model updating module 820 is used for updating the k-1 th round global model according to local data to obtain an ith local model.
A category calculating module 830, configured to calculate a category of the target cluster to which the ith local model belongs.
The data uploading module 840 is configured to upload the ith local model and the target cluster category to an ith miner node, so that the ith miner node aggregates the local models belonging to the target cluster category to generate a kth round global model, and the ith miner node belongs to a block chain system.
A first downloading module 850, configured to download the kth round global model from the ith miner node.
In an alternative embodiment, the category calculating module 830 is configured to obtain a locally stored clustering algorithm; and obtaining the target clustering category according to the ith local model based on the clustering algorithm.
In an optional embodiment, the class calculation module 830 is configured to map the ith local model to a hash vector with a length of n based on the clustering algorithm, where the hash vector is a one-dimensional vector used for representing the target clustering class, and each bit of the hash vector is 0 or 1.
In an alternative embodiment, the first downloading module 850 is configured to download the kth round global model from the ith miner node in response to the number of local models used for generating the kth round global model being greater than a first threshold.
In an optional embodiment, the apparatus further includes a timestamp uploading module, configured to upload an ith timestamp to the ith miner node, where the ith timestamp is used to indicate a time when the ith local model completes training.
In an optional embodiment, the apparatus further comprises a reward receiving module for receiving reward data sent by the ith miner node, wherein the reward data is used for providing the redeemed goods.
In summary, in the horizontal federal learning device provided in this embodiment, the ith client node can update the global model of the previous round according to the local data to obtain the ith local model; and calculating the target cluster category to which the ith local model belongs, uploading the target cluster category to the ith miner node so that the ith miner node aggregates the local models of the same target cluster category to generate a global model of the current round, and then downloading the global model of the current round from the ith miner node to finish the model updating process of the current round of horizontal federal learning. Therefore, the method carries out local clustering, so that the miner nodes can aggregate the local models of the same category in the process of updating the global model, the updated global model has better performance, the interference of the local model with larger deviation is avoided, and the updating efficiency of the horizontal federal learning in the actual application scene with larger local model difference is improved.
Referring to fig. 9, fig. 9 is a block diagram of a horizontal federal learning device according to an exemplary embodiment of the present application. The horizontal federal learning device is applied to the ith miner node in a blockchain system, wherein i is a positive integer. The horizontal federal learning device can be implemented as all or part of a miner node in software, hardware, or a combination of both. The device includes:
and the data receiving module 910 is configured to receive the ith local model and the target cluster category uploaded by the ith client node.
A model storage module 920, configured to store the ith local model in a candidate block.
A second downloading module 930, configured to download, from the broadcast data of other miners nodes in the blockchain system, a local model corresponding to the target cluster category based on the target cluster category, and store the local model in the candidate block.
A model aggregation module 940, configured to: in response to a target miner node in the blockchain system completing the proof-of-work (PoW) operation, aggregate to obtain the kth round global model according to the local models corresponding to the target cluster category stored in a generation block generated by the target miner node; the generation block is the candidate block of the target miner node when the PoW operation is completed.
A model sending module 950, configured to send the kth round global model to the ith client node.
In an optional embodiment, the model storage module 920 is configured to send the ith local model to an i +1 th miner node in the blockchain system, so that the i +1 th miner node verifies whether the ith local model is true or false; in response to the ith local model being true, storing the ith local model into the candidate block.
In an alternative embodiment, the model aggregation module 940 is configured to determine the candidate block as a generation block in response to the target miner node being the ith miner node; and aggregating to obtain a k-th round global model according to the local model corresponding to the target cluster category stored in the generating block. Or, the model aggregation module 940 is configured to, in response to that a target miner node is a miner node other than the ith miner node in the blockchain system, obtain a generation block of the target miner node; and aggregating to obtain a k-th round global model according to the local model corresponding to the target cluster category stored in the generating block.
In an optional embodiment, the apparatus further comprises an execution module for executing the workload attestation PoW operation in response to a candidate block existing in the blockchain system satisfying an aggregation trigger condition; wherein the aggregation trigger condition comprises at least one of: the candidate block is full; or, the time elapsed since storing the ith local model is longer than a second threshold.
In an alternative embodiment, the generating block in the apparatus is a block in a block chain corresponding to the target cluster category; the block chains corresponding to different cluster types are different.
In an optional embodiment, the model aggregation module 940 is configured to calculate a mean of local models corresponding to the target cluster category; determining the mean as the k-th round global model.
In an alternative embodiment, the model sending module 950 is configured to send the kth round global model to the ith client node in response to the number of local models used for generating the kth round global model being greater than a first threshold.
In an optional embodiment, the apparatus further includes a timestamp receiving module, configured to receive an ith timestamp uploaded by the ith client node, where the ith timestamp is used to indicate a time when the ith local model completes training.
In an optional embodiment, the apparatus further comprises a reward transmission module for transmitting reward data to the ith client node, the reward data for providing a redemption item.
In summary, in this embodiment, the ith miner node in the blockchain system can receive the ith local model and the target cluster category sent by the ith client node, store the ith local model in a local candidate block, then download the local models corresponding to the target cluster category from other miner nodes based on the target cluster category and store them in the candidate block, and, when there is a target miner node that has completed the proof-of-work (PoW) operation in the blockchain system, aggregate the local models corresponding to the target cluster category in the generation block of the target miner node to obtain the kth round global model, and then send the kth round global model to the ith client node. Therefore, the device realizes a horizontal federal learning method combined with the blockchain system, can ensure that the global model is not tampered with, and can avoid the phenomenon that the whole horizontal federal learning method stops running after a central server fails. Meanwhile, the local models belonging to the same category can be aggregated, so that the interference of local models with larger deviations when the global model is generated is avoided, and the updating efficiency of horizontal federal learning in actual application scenarios with larger local model differences is improved.
The present application provides a computer readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the horizontal federated learning method applied to the ith client node in the horizontal federated learning system as described herein.
The present application provides a computer readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the horizontal federated learning method applied to the ith miner node in a blockchain system as described herein.
It should be noted that: when the horizontal federated learning apparatus in the above embodiments executes the horizontal federated learning method, the division into the above functional modules is merely illustrative. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the horizontal federated learning apparatus and the horizontal federated learning method provided by the embodiments belong to the same concept; their specific implementation processes are described in detail in the method embodiments and are not repeated here.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the implementation of the present application and is not intended to limit the present application, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (21)

1. A horizontal federated learning method, applied to an ith client node in a horizontal federated learning system, wherein i is a positive integer, the method comprising:
acquiring a (k-1)th round global model, wherein the (k-1)th round global model is obtained after a (k-1)th round of training in the horizontal federated learning process, k is greater than or equal to 2, and k is a positive integer;
updating the (k-1)th round global model according to local data to obtain an ith local model;
calculating the target cluster category to which the ith local model belongs;
uploading the ith local model and the target cluster category to an ith miner node, so that the ith miner node aggregates the local models belonging to the target cluster category into a kth round global model, wherein the ith miner node belongs to a blockchain system; and
downloading the kth round global model from the ith miner node.
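As an illustration only (the claims do not fix the model family or the local training procedure), the local-update step of the client-side flow above might look like the following sketch, which uses a linear model and plain gradient descent as a stand-in for whatever training the client actually runs:

```python
def local_update(global_model, local_data, lr=0.1):
    """Update the (k-1)th round global model on local data to obtain the
    ith local model. Stand-in training: gradient descent on squared error
    for a linear model y = w . x."""
    w = list(global_model)
    for x, y in local_data:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

# One local round: data generated by y = 2x, starting from the round's global model.
local_data = [([1.0], 2.0)] * 50
ith_local_model = local_update([0.0], local_data)
```

The resulting ith local model would then be assigned a target cluster category and uploaded to the ith miner node.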
2. The method of claim 1, wherein the calculating the target cluster category to which the ith local model belongs comprises:
acquiring a locally stored clustering algorithm; and
obtaining the target cluster category according to the ith local model based on the clustering algorithm.
3. The method according to claim 2, wherein the deriving the target cluster category according to the ith local model based on the clustering algorithm comprises:
based on the clustering algorithm, mapping the ith local model to a hash vector of length n, wherein the hash vector is a one-dimensional vector used for representing the target cluster category, and each bit of the hash vector is 0 or 1.
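Claim 3 leaves the hash construction open. One plausible realization of a length-n 0/1 hash vector is signed random projections (a locality-sensitive hash), sketched below; the shared seed is an assumption introduced so that every client maps similar models to similar categories.

```python
import random

def model_to_hash_vector(params, n=8, seed=42):
    """Map a flat model-parameter vector to an n-bit hash vector.
    Each bit is the sign of the projection onto a shared random hyperplane."""
    rng = random.Random(seed)  # shared seed: all nodes use the same hyperplanes
    bits = []
    for _ in range(n):
        plane = [rng.gauss(0.0, 1.0) for _ in params]
        dot = sum(p * h for p, h in zip(params, plane))
        bits.append(1 if dot >= 0 else 0)
    return bits

category = model_to_hash_vector([0.5, -1.2, 3.3])
```

Models whose parameter vectors point in similar directions tend to agree on most bits, so the hash vector can serve directly as the target cluster category.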
4. The method according to any one of claims 1 to 3, wherein the downloading the kth round global model from the ith miner node comprises:
downloading the kth round global model from the ith miner node in response to the number of local models used to generate the kth round global model being greater than a first threshold.
5. The method of any of claims 1 to 3, further comprising:
uploading an ith timestamp to the ith miner node, wherein the ith timestamp is used for indicating the moment when the ith local model completes training.
6. The method of any of claims 1 to 3, further comprising:
receiving reward data sent by the ith miner node, wherein the reward data is used for providing redeemed goods.
7. A horizontal federated learning method, applied to an ith miner node in a blockchain system, wherein i is a positive integer, the method comprising:
receiving an ith local model and a target cluster category uploaded by an ith client node;
storing the ith local model into a candidate block;
downloading, based on the target cluster category, local models corresponding to the target cluster category from broadcast data of other miner nodes in the blockchain system, and storing the local models in the candidate block;
in response to a target miner node in the blockchain system completing a proof-of-work (PoW) operation, aggregating the local models corresponding to the target cluster category stored in a generated block of the target miner node to obtain a kth round global model, wherein the generated block is the candidate block of the target miner node at the time the PoW operation is completed; and
sending the kth round global model to the ith client node.
8. The method of claim 7, wherein storing the ith local model in a candidate block comprises:
sending the ith local model to an (i+1)th miner node in the blockchain system, so that the (i+1)th miner node verifies the authenticity of the ith local model; and
in response to the ith local model being verified as authentic, storing the ith local model in the candidate block.
9. The method of claim 7, wherein, in response to a target miner node in the blockchain system completing the proof-of-work (PoW) operation, the aggregating of the kth round global model according to the local models corresponding to the target cluster category stored in the generated block of the target miner node comprises:
in response to the target miner node being the ith miner node, determining the candidate block as the generated block; and
aggregating the local models corresponding to the target cluster category stored in the generated block to obtain the kth round global model;
or,
in response to the target miner node being a miner node in the blockchain system other than the ith miner node, obtaining the generated block of the target miner node; and
aggregating the local models corresponding to the target cluster category stored in the generated block to obtain the kth round global model.
10. The method of claim 9, further comprising:
in response to a candidate block in the blockchain system satisfying an aggregation trigger condition, performing the proof-of-work (PoW) operation;
wherein the aggregation trigger condition comprises at least one of: the candidate block being full; or the time elapsed since storing the ith local model exceeding a second threshold.
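The trigger of claim 10 can be checked in a few lines; the capacity and threshold values below are placeholders for illustration, not values from the patent:

```python
import time

def should_start_pow(stored_models, capacity, stored_at, second_threshold_s):
    """Claim 10's aggregation trigger: start PoW when the candidate block is
    full, or when the ith local model has been waiting longer than the
    second threshold (in seconds)."""
    block_full = len(stored_models) >= capacity
    waited_too_long = (time.monotonic() - stored_at) > second_threshold_s
    return block_full or waited_too_long
```

Either condition alone suffices, so a sparsely populated block still gets aggregated eventually rather than stalling the round.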
11. The method of claim 9, wherein the generated block is a block in a blockchain corresponding to the target cluster category, and different cluster categories correspond to different blockchains.
12. The method according to any one of claims 7 to 10, wherein the aggregating to obtain the kth round global model comprises:
calculating the mean of the local models corresponding to the target cluster category; and
determining the mean as the kth round global model.
13. The method according to any one of claims 7 to 10, wherein the sending the kth round global model to the ith client node comprises:
in response to the number of local models used to generate the kth round global model being greater than a first threshold, sending the kth round global model to the ith client node.
14. The method according to any one of claims 7 to 10, further comprising:
receiving an ith timestamp uploaded by the ith client node, wherein the ith timestamp is used for indicating the moment when the ith local model completes training.
15. The method according to any one of claims 7 to 10, further comprising:
sending reward data to the ith client node, the reward data being used for providing redeemable items.
16. A horizontal federated learning device, applied to an ith client node in a horizontal federated learning system, wherein i is a positive integer, the device comprising:
a model acquisition module, configured to acquire a (k-1)th round global model, wherein the (k-1)th round global model is obtained after a (k-1)th round of training in the horizontal federated learning process, k is greater than or equal to 2, and k is a positive integer;
a model updating module, configured to update the (k-1)th round global model according to local data to obtain an ith local model;
a category calculation module, configured to calculate the target cluster category to which the ith local model belongs;
a data uploading module, configured to upload the ith local model and the target cluster category to an ith miner node, so that the ith miner node aggregates the local models belonging to the target cluster category into a kth round global model, wherein the ith miner node belongs to a blockchain system; and
a first downloading module, configured to download the kth round global model from the ith miner node.
17. A horizontal federated learning device, applied to an ith miner node in a blockchain system, wherein i is a positive integer, the device comprising:
a data receiving module, configured to receive the ith local model and the target cluster category uploaded by the ith client node;
a model storage module, configured to store the ith local model in a candidate block;
a second downloading module, configured to download, based on the target cluster category, local models corresponding to the target cluster category from broadcast data of other miner nodes in the blockchain system, and store the local models in the candidate block;
a model aggregation module, configured to, in response to a target miner node in the blockchain system completing a proof-of-work (PoW) operation, aggregate the local models corresponding to the target cluster category stored in a generated block of the target miner node to obtain a kth round global model, wherein the generated block is the candidate block of the target miner node at the time the PoW operation is completed; and
a model sending module, configured to send the kth round global model to the ith client node.
18. A client node, comprising a processor, a memory coupled to the processor, and program instructions stored in the memory, wherein the program instructions, when executed by the processor, implement the horizontal federated learning method according to any one of claims 1 to 6.
19. A miner node, comprising a processor, a memory coupled to the processor, and program instructions stored in the memory, wherein the processor, when executing the program instructions, implements the horizontal federated learning method according to any one of claims 7 to 15.
20. A computer readable storage medium having program instructions stored thereon, wherein the program instructions, when executed by a processor, implement the horizontal federated learning method according to any one of claims 1 to 6.
21. A computer readable storage medium having program instructions stored thereon, wherein the program instructions, when executed by a processor, implement the horizontal federated learning method according to any one of claims 7 to 15.
CN202110801990.9A 2021-07-15 2021-07-15 Horizontal federated learning method, device and storage medium Active CN113487041B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110801990.9A CN113487041B (en) 2021-07-15 2021-07-15 Horizontal federated learning method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110801990.9A CN113487041B (en) 2021-07-15 2021-07-15 Horizontal federated learning method, device and storage medium

Publications (2)

Publication Number Publication Date
CN113487041A true CN113487041A (en) 2021-10-08
CN113487041B CN113487041B (en) 2024-05-07

Family

ID=77939595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110801990.9A Active CN113487041B (en) 2021-07-15 2021-07-15 Transverse federal learning method, device and storage medium

Country Status (1)

Country Link
CN (1) CN113487041B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023160309A1 (en) * 2022-02-28 2023-08-31 华为技术有限公司 Federated learning method and related device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180373988A1 (en) * 2017-06-27 2018-12-27 Hcl Technologies Limited System and method for tuning and deploying an analytical model over a target eco-system
CN111355739A (en) * 2020-03-06 2020-06-30 深圳前海微众银行股份有限公司 Data transmission method, device, terminal equipment and medium for horizontal federal learning
CN111931242A (en) * 2020-09-30 2020-11-13 国网浙江省电力有限公司电力科学研究院 Data sharing method, computer equipment applying same and readable storage medium
CN112181971A (en) * 2020-10-27 2021-01-05 华侨大学 Edge-based federated learning model cleaning and equipment clustering method, system, equipment and readable storage medium
CN112508075A (en) * 2020-12-01 2021-03-16 平安科技(深圳)有限公司 Horizontal federation-based DBSCAN clustering method and related equipment thereof
CN112527273A (en) * 2020-12-18 2021-03-19 平安科技(深圳)有限公司 Code completion method, device and related equipment
CN112714106A (en) * 2020-12-17 2021-04-27 杭州趣链科技有限公司 Block chain-based federal learning casual vehicle carrying attack defense method
CN112712182A (en) * 2021-03-29 2021-04-27 腾讯科技(深圳)有限公司 Model training method and device based on federal learning and storage medium
CN112990276A (en) * 2021-02-20 2021-06-18 平安科技(深圳)有限公司 Federal learning method, device, equipment and storage medium based on self-organizing cluster


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jia Yanyan; Zhang Zhao; Feng Jian; Wang Chunkai: "Application of Federated Learning Models in Classified Data Processing" (联邦学习模型在涉密数据处理中的应用), Journal of China Academy of Electronics and Information Technology (中国电子科学研究院学报), no. 01 *


Also Published As

Publication number Publication date
CN113487041B (en) 2024-05-07

Similar Documents

Publication Publication Date Title
CN110084377B (en) Method and device for constructing decision tree
CN110442652B (en) Cross-chain data processing method and device based on block chain
CN110365491B (en) Service processing method, device, equipment, storage medium and data sharing system
US20210304201A1 (en) Transaction verification method and apparatus, storage medium, and electronic device
CN108881312A (en) Intelligent contract upgrade method, system and relevant device and storage medium
WO2019001139A1 (en) Method and device for running chaincode
CN113627085B (en) Transverse federal learning modeling optimization method, equipment and medium
CN112527912B (en) Data processing method and device based on block chain network and computer equipment
CN113505882B (en) Data processing method based on federal neural network model, related equipment and medium
CN109146490A (en) block generation method, device and system
CN113505520A (en) Method, device and system for supporting heterogeneous federated learning
CN111711655A (en) Block chain-based electronic data evidence storing method, system, storage medium and terminal
CN110601896A (en) Data processing method and equipment based on block chain nodes
WO2023284387A1 (en) Model training method, apparatus, and system based on federated learning, and device and medium
CN114140075B (en) Service processing method, device, medium and electronic equipment
CN112418259A (en) Method for configuring real-time rules based on user behaviors in live broadcast process, computer equipment and readable storage medium
CN114186256A (en) Neural network model training method, device, equipment and storage medium
US20240005165A1 (en) Machine learning model training method, prediction method therefor, apparatus, device, computer-readable storage medium, and computer program product
CN112631884A (en) Pressure measurement method and device based on data synchronization, computer equipment and storage medium
CN113487041B (en) Horizontal federated learning method, device and storage medium
US20240160505A1 (en) Method of processing agreement task
CN116629379A (en) Federal learning aggregation method and device, storage medium and electronic equipment
CN115859371A (en) Privacy calculation method based on block chain, electronic device and storage medium
CN115328786A (en) Automatic testing method and device based on block chain and storage medium
CN114764389A (en) Heterogeneous simulation test platform of joint learning system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20230728

Address after: 1301, Office Building T2, Qianhai China Resources Financial Center, No. 55 Guiwan Fourth Road, Nanshan Street, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong Province, 518052

Applicant after: Shenzhen Hefei Technology Co.,Ltd.

Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
GR01 Patent grant