CN113487041B - Horizontal federated learning method, device, and storage medium - Google Patents

Horizontal federated learning method, device, and storage medium

Info

Publication number
CN113487041B
CN113487041B (application CN202110801990.9A)
Authority
CN
China
Prior art keywords
ith
model
node
target
local model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110801990.9A
Other languages
Chinese (zh)
Other versions
CN113487041A (en)
Inventor
侯宪龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hefei Technology Co ltd
Original Assignee
Shenzhen Hefei Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Hefei Technology Co ltd
Priority to CN202110801990.9A
Publication of CN113487041A
Application granted
Publication of CN113487041B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the present application disclose a horizontal federated learning method, device, and storage medium, belonging to the technical field of machine learning. In the horizontal federated learning method, the ith client node updates the previous round's global model according to local data to obtain the ith local model, calculates the target cluster category to which the ith local model belongs, and uploads both to the ith miner node, so that the ith miner node can generate this round's global model by aggregating the local models belonging to the same target cluster category; the ith client node then downloads this round's global model from the ith miner node. Because a clustering method is adopted, the miner nodes aggregate only local models of the same category when updating the global model: the updated global model performs better, interference from local models with large deviations is avoided, and the update efficiency of horizontal federated learning is improved in practical application scenarios where the local models differ considerably.

Description

Horizontal federated learning method, device, and storage medium
Technical Field
The embodiments of the present application relate to the technical field of machine learning, and in particular to a horizontal federated learning method, device, and storage medium.
Background
Horizontal federated learning is widely used as a method by which terminals and a cloud server cooperatively train a machine learning model.
In the related art, a central server is set up in the cloud, and the terminals are the devices used by individual users. The central server establishes a horizontal federated learning system, and the terminals participate autonomously to update the global model in the horizontal federated learning system.
Disclosure of Invention
The embodiments of the present application provide a horizontal federated learning method, device, and storage medium. The technical solutions are as follows:
According to an aspect of the present application, there is provided a horizontal federated learning method applied to an ith client node in a horizontal federated learning system, i being a positive integer, the method including:
acquiring a (k-1)th round global model, where the (k-1)th round global model is the global model obtained after the (k-1)th round of training in the horizontal federated learning process, k is greater than or equal to 2, and k is a positive integer;
updating the (k-1)th round global model according to local data to obtain an ith local model;
calculating a target cluster category to which the ith local model belongs;
uploading the ith local model and the target cluster category to an ith miner node, so that the ith miner node aggregates the local models belonging to the target cluster category into a kth round global model, the ith miner node belonging to a blockchain system; and
downloading the kth round global model from the ith miner node.
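The client-side steps above can be sketched as one round of orchestration. This is an illustrative outline only; the callables `get_global`, `train_local`, `cluster_of`, `upload`, and `download` are hypothetical stand-ins for the node's actual training and networking logic, not part of the patent.

```python
def client_round(k, get_global, train_local, cluster_of, upload, download):
    """One round of the ith client node's flow (hypothetical callables)."""
    prev_global = get_global(k - 1)         # acquire the (k-1)th round global model
    local_model = train_local(prev_global)  # update it with local data -> ith local model
    category = cluster_of(local_model)      # target cluster category of the local model
    upload(local_model, category)           # send both to the ith miner node
    return download(k)                      # download the kth round global model
```

The helper signatures are arbitrary; the point is the fixed order of the five steps within a round.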
According to another aspect of the present application, there is provided a horizontal federated learning method applied to an ith miner node in a blockchain system, i being a positive integer, the method comprising:
receiving an ith local model and a target cluster category uploaded by an ith client node;
storing the ith local model into a candidate block;
downloading, based on the target cluster category, local models corresponding to the target cluster category from broadcast data of other miner nodes in the blockchain system and storing them into the candidate block;
in response to a target miner node in the blockchain system completing a proof-of-work (PoW) operation, aggregating the local models corresponding to the target cluster category stored in the generated block produced by the target miner node to obtain a kth round global model, where the generated block is the candidate block of the target miner node at the time it completes the proof-of-work (PoW) operation; and
sending the kth round global model to the ith client node.
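The aggregation step above, averaging only the local models whose cluster label equals the target cluster category stored in the generated block, can be sketched as follows. Representing the block contents as a list of (category, parameter-vector) pairs is an assumption for illustration; the patent does not prescribe a storage format.

```python
def aggregate_cluster(block_models, target_category):
    """Average the parameter vectors whose cluster label equals target_category."""
    selected = [params for category, params in block_models
                if category == target_category]
    if not selected:
        raise ValueError("no local model matches the target cluster category")
    n = len(selected)
    # coordinate-wise mean over the selected local models only
    return [sum(coords) / n for coords in zip(*selected)]
```

Models of other categories (the `(1, ...)` entry in the test below) are simply excluded from the average, which is how large-deviation local models are kept from interfering.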
According to another aspect of the present application, there is provided a horizontal federated learning apparatus for use in an ith client node in a horizontal federated learning system, i being a positive integer, the apparatus comprising:
a model acquisition module, configured to acquire a (k-1)th round global model, where the (k-1)th round global model is the global model obtained after the (k-1)th round of training in the horizontal federated learning process, k is greater than or equal to 2, and k is a positive integer;
a model updating module, configured to update the (k-1)th round global model according to local data to obtain an ith local model;
a category calculation module, configured to calculate a target cluster category to which the ith local model belongs;
a data uploading module, configured to upload the ith local model and the target cluster category to an ith miner node, so that the ith miner node generates a kth round global model by aggregating the local models belonging to the target cluster category, the ith miner node belonging to a blockchain system; and
a first downloading module, configured to download the kth round global model from the ith miner node.
According to another aspect of the present application, there is provided a horizontal federated learning apparatus for use in an ith miner node in a blockchain system, i being a positive integer, the apparatus comprising:
a data receiving module, configured to receive an ith local model and a target cluster category uploaded by an ith client node;
a model storage module, configured to store the ith local model into a candidate block;
a second downloading module, configured to download, based on the target cluster category, local models corresponding to the target cluster category from broadcast data of other miner nodes in the blockchain system and store them into the candidate block;
a model aggregation module, configured to, in response to a target miner node in the blockchain system completing a proof-of-work (PoW) operation, aggregate the local models corresponding to the target cluster category stored in the generated block produced by the target miner node to obtain a kth round global model, where the generated block is the candidate block of the target miner node at the time it completes the proof-of-work (PoW) operation; and
a model sending module, configured to send the kth round global model to the ith client node.
According to another aspect of the present application, there is provided a client node comprising a processor and a memory, the memory having stored therein at least one instruction that is loaded and executed by the processor to implement the horizontal federated learning method, as provided by the aspects of the present application, applied in an ith client node in a horizontal federated learning system.
According to another aspect of the present application, there is provided a miner node comprising a processor and a memory, the memory having stored therein at least one instruction that is loaded and executed by the processor to implement the horizontal federated learning method, as provided by the aspects of the present application, applied in an ith miner node in a blockchain system.
According to another aspect of the present application, there is provided a computer-readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the horizontal federated learning method, as provided by the aspects of the present application, applied in an ith client node in a horizontal federated learning system.
According to another aspect of the present application, there is provided a computer-readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the horizontal federated learning method, as provided by the aspects of the present application, applied in an ith miner node in a blockchain system.
According to one aspect of the present application, there is provided a computer program product including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the horizontal federated learning method, as provided by the aspects of the present application, applied in an ith client node in a horizontal federated learning system.
According to one aspect of the present application, there is provided a computer program product including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the horizontal federated learning method, as provided by the aspects of the present application, applied in an ith miner node in a blockchain system.
The technical solutions provided by the embodiments of the present application have at least the following beneficial effects:
In the horizontal federated learning method, the ith client node updates the previous round's global model according to local data to obtain the ith local model, calculates the target cluster category to which the ith local model belongs, and uploads both to the ith miner node, so that the ith miner node can generate this round's global model by aggregating the local models belonging to the same target cluster category; the ith client node then downloads this round's global model from the ith miner node, completing one round of model updating in horizontal federated learning. Because clustering is performed locally on the client node, the miner node aggregates only local models of the same category when updating the global model; the updated global model therefore performs better, interference from local models with large deviations is avoided, and the update efficiency of horizontal federated learning is improved in practical application scenarios where the local models differ considerably.
Drawings
In order to more clearly describe the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments of the present application will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a block diagram of a client node according to an exemplary embodiment of the present application;
FIG. 2 is a block diagram of a miner node according to an exemplary embodiment of the present application;
FIG. 3 is a system architecture diagram of a horizontal federated learning system in the related art;
FIG. 4 is a system architecture diagram of a horizontal federated learning system according to an embodiment of the present application;
FIG. 5 is a flowchart of a horizontal federated learning method provided by an exemplary embodiment of the present application;
FIG. 6 is a flowchart of a horizontal federated learning method provided by an exemplary embodiment of the present application;
FIG. 7 is a flowchart of a horizontal federated learning method provided by an exemplary embodiment of the present application;
FIG. 8 is a block diagram of a horizontal federated learning apparatus according to an exemplary embodiment of the present application;
FIG. 9 is a block diagram of a horizontal federated learning apparatus according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
In the description of the present application, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. It should also be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical or electrical connection; and it may be a direct connection or an indirect connection through an intermediate medium. The specific meanings of the above terms in the present application can be understood by those of ordinary skill in the art on a case-by-case basis. Furthermore, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
As used herein, the term "if" is optionally interpreted as "when," "upon," "in response to determining," or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" is optionally interpreted as "in response to determining" or "in response to detecting (the stated condition or event)," depending on the context.
It should be noted that the use of personally identifiable information should follow privacy policies and practices that are recognized as meeting or exceeding industry or government requirements for maintaining user privacy. In particular, the personally identifiable information should clearly indicate to the user the nature of authorized use during administration and processing to minimize the risk of unintended or unauthorized access or use.
The horizontal federated learning method according to the embodiments of the present application may be applied to a client node, where the client node has a display screen and an arithmetic unit. The client node may include a cell phone, a tablet computer, a laptop computer, a desktop computer, and the like.
Referring to FIG. 1, FIG. 1 is a block diagram of a client node according to an exemplary embodiment of the present application. As shown in FIG. 1, the client node includes a processor 120 and a memory 140, and at least one instruction is stored in the memory 140, where the instruction is loaded and executed by the processor 120 to implement the horizontal federated learning method according to the various method embodiments of the present application.
In the present application, the client node 100 is an electronic device with data computation capability. The client node 100 acquires a (k-1)th round global model, where the (k-1)th round global model is the global model obtained after the (k-1)th round of training in the horizontal federated learning process, k is greater than or equal to 2, and k is a positive integer; updates the (k-1)th round global model according to local data to obtain an ith local model; calculates the target cluster category to which the ith local model belongs; uploads the ith local model and the target cluster category to the ith miner node, so that the ith miner node can generate a kth round global model by aggregating the local models belonging to the target cluster category, the ith miner node belonging to a blockchain system; and downloads the kth round global model from the ith miner node.
The processor 120 may include one or more processing cores. The processor 120 uses various interfaces and lines to connect the various parts of the client node 100, and performs the various functions of the client node 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 140 and invoking the data stored in the memory 140. Optionally, the processor 120 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 120 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; and the modem is used to handle wireless communication. It will be appreciated that the modem may instead not be integrated into the processor 120 and be implemented by a separate chip.
The memory 140 may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory 140 includes a non-transitory computer-readable storage medium. The memory 140 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 140 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described below, and the like; and the data storage area may store the data involved in the various method embodiments described below.
Referring to FIG. 2, FIG. 2 is a block diagram of a miner node according to an exemplary embodiment of the present application. As shown in FIG. 2, the miner node includes a processor 220 and a memory 240, where the memory 240 stores at least one instruction, and the instruction is loaded and executed by the processor 220 to implement the horizontal federated learning method according to the various method embodiments of the present application.
In the present application, the miner node 200 is an electronic device with data computation capability. It should be noted that, since the miner node is a node in the blockchain system, the parallel computing performance of the miner node 200 may be relatively high. The miner node 200 receives the ith local model and the target cluster category uploaded by the ith client node; stores the ith local model in a candidate block; downloads, based on the target cluster category, local models corresponding to the target cluster category from broadcast data of other miner nodes in the blockchain system and stores them into the candidate block; in response to a target miner node in the blockchain system completing the proof-of-work (PoW) operation, aggregates the local models corresponding to the target cluster category stored in the generated block produced by the target miner node to obtain the kth round global model, where the generated block is the candidate block of the target miner node at the time it completes the proof-of-work (PoW) operation; and sends the kth round global model to the ith client node.
The processor 220 may include one or more processing cores. The processor 220 uses various interfaces and lines to connect the various parts of the miner node 200, and performs the various functions of the miner node 200 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 240 and invoking the data stored in the memory 240. Optionally, the processor 220 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 220 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; and the modem is used to handle wireless communication. It will be appreciated that the modem may instead not be integrated into the processor 220 and be implemented by a separate chip.
The memory 240 may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory 240 includes a non-transitory computer-readable storage medium. The memory 240 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 240 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described below, and the like; and the data storage area may store the data involved in the various method embodiments described below.
Referring to FIG. 3, FIG. 3 is a system architecture diagram of a horizontal federated learning system in the related art. In FIG. 3, the horizontal federated learning system 300 includes a first client node 311, a second client node 312, and a central server 320. It should be noted that the horizontal federated learning system 300 may include several client nodes; only two of them are shown in FIG. 3 by way of example.
In FIG. 3, the first client node 311 and the second client node 312 obtain the current round's global model from the central server 320. After obtaining it, the first client node 311 and the second client node 312 each train the current round's global model with locally stored data, obtaining a first local model and a second local model, respectively. The first local model and the second local model are then transmitted back to the central server 320 by their respective client nodes. The central server 320 aggregates the first local model and the second local model to obtain the next round's global model.
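In this related-art setup, the central server's aggregation is typically a plain coordinate-wise average over all received local models, with no clustering. A minimal sketch, assuming equal weights for all clients:

```python
def federated_average(local_models):
    """Related-art aggregation: coordinate-wise mean over ALL local models."""
    n = len(local_models)
    return [sum(model[j] for model in local_models) / n
            for j in range(len(local_models[0]))]
```

Because every local model enters the average, a single client whose data deviates strongly can pull the global model away from the others, which is the problem the clustering in the present application addresses.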
As can be seen from the above solution in the related art, the role of the central server is critical. If the central server is damaged or fails, the client nodes may not obtain a correct global model, and the horizontal federated learning system cannot operate normally. Likewise, if the data in the central server is tampered with, the central server distributes the tampered global model to each client node, so the client nodes cannot work effectively according to the global model and may even produce erroneous operation results, causing harm. Therefore, the central server in the related art requires intensive maintenance to keep the system secure.
In view of the risks and challenges faced by the horizontal federated learning system in the related art, the present application provides a horizontal federated learning method combined with blockchain technology. Referring to FIG. 4, FIG. 4 is a system architecture diagram of a horizontal federated learning system according to an embodiment of the present application. FIG. 4 includes a horizontal federated learning system 410 and a blockchain system 420.
Optionally, the horizontal federated learning system 410 includes a number of client nodes, which may range from several to hundreds of thousands; embodiments of the present application do not limit the specific number of client nodes. Illustratively, four client nodes are shown in FIG. 4 to represent the client nodes in the horizontal federated learning system 410. The four client nodes are the 1st client node 411, the 2nd client node 412, the 3rd client node 413, and the 4th client node 414, respectively.
Similarly, the blockchain system 420 includes a number of miner nodes, which may range from several to hundreds of thousands; embodiments of the present application do not limit the specific number of miner nodes. Illustratively, four miner nodes are shown in FIG. 4 to represent the miner nodes in the blockchain system 420. The four miner nodes are the 1st miner node 421, the 2nd miner node 422, the 3rd miner node 423, and the 4th miner node 424, respectively.
In the system shown in FIG. 4, the blockchain system 420 replaces the central server of the related art, so that the global model can no longer be easily tampered with. Meanwhile, because the global model is maintained by the blockchain, even if some miner nodes fail, most client nodes can still receive the correct next-round global model, and the normal operation of the horizontal federated learning system is not affected.
The present application will be described based on the framework provided in FIG. 4; details of the horizontal federated learning method provided by the present application are described below.
Referring to FIG. 5, FIG. 5 is a flowchart of a horizontal federated learning method according to an exemplary embodiment of the present application. The horizontal federated learning method can be applied to the ith client node in the horizontal federated learning system, where i is a positive integer. In FIG. 5, the horizontal federated learning method includes:
Step 510, acquiring a (k-1)th round global model, where the (k-1)th round global model is the global model obtained after the (k-1)th round of training in the horizontal federated learning process, k is greater than or equal to 2, and k is a positive integer.
In the embodiments of the present application, the ith client node may be a personal device actually used by a user. For example, the ith client node may be a smartphone, a computer, or another device whose computing capability meets the computing performance requirements placed on local devices in horizontal federated learning.
It should be noted that, before joining the horizontal federated learning system provided by the present application, a client node may undergo verification of its computing performance; if the computing performance requirements are met, the client node may apply to join the horizontal federated learning system. Optionally, the process of a client node joining the horizontal federated learning system may be confirmed by the user through a user interface (UI), or by the user agreeing to participate in a performance improvement plan or the like to collectively improve the performance of the local model.
In the present application, the ith client node acquires the (k-1)th round global model. In one possible way, the (k-1)th round global model is already stored locally; in this case, the ith client node only needs to obtain it locally. In another possible way, the (k-1)th round global model is not yet stored locally, and the ith client node needs to obtain it from the corresponding miner node. Here, the corresponding miner node is the miner node that corresponded to the ith client node when the global model was updated in the (k-1)th round of the horizontal federated learning system.
Step 520, updating the (k-1)th round global model according to local data to obtain an ith local model.
In the present application, once the ith client node has obtained the (k-1)th round global model, it can update the model locally: the (k-1)th round global model is updated with local data to obtain the ith local model.
In one possible way, the ith client node stores the algorithms or formulas needed to update the global model; these differ across application scenarios. One possible update formula is shown in formula (1):

    F_i(w) = ℓ(w; D_i) + (μ/2) · ‖w − w^(k−1)‖²    (1)

In formula (1), F_i denotes the local loss function of the ith client node, and ℓ(w; D_i) denotes the loss of the model parameters w on the local data D_i. For example, the loss function may be the mean squared error (MSE) or the cross entropy. Illustratively, the choice of loss function depends on the downstream task performed by the machine learning model obtained through horizontal federated learning: in one possible way, cross entropy is selected if the downstream task is a classification task, and MSE is selected if the downstream task is a regression task. The term ℓ(w; D_i) indicates that the distance between the current model's output and the labels is to be minimized as far as possible. The term (μ/2) · ‖w − w^(k−1)‖² is a regularization term whose role is to keep the parameters of the ith local model updated this round as close as possible to the parameters of the (k-1)th round global model w^(k−1), thereby strengthening the convergence of the model.
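The regularized objective described here can be evaluated numerically as below. The coefficient name `mu` and the representation of model parameters as flat float vectors are illustrative assumptions, not part of the patent.

```python
def proximal_objective(task_loss, params, prev_global, mu=0.01):
    """Local objective: task loss plus (mu/2) * ||w - w^(k-1)||^2.

    task_loss   -- scalar loss of the model on the local data (MSE, cross entropy, ...)
    params      -- current local model parameters w
    prev_global -- (k-1)th round global model parameters w^(k-1)
    mu          -- regularization strength (assumed name)
    """
    prox = 0.5 * mu * sum((w - g) ** 2 for w, g in zip(params, prev_global))
    return task_loss + prox
```

With mu = 0 this reduces to the plain task loss; larger mu pulls the local update more strongly toward the previous global model.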
In step 530, the target cluster class to which the i-th local model belongs is calculated.
In the present application, after obtaining the ith local model, the ith client node can calculate the target cluster category to which the ith local model belongs. In one possible approach, a specified clustering algorithm is stored locally on the ith client node. The ith client node can invoke the stored clustering algorithm with the ith local model as input, and after the computation obtain the target cluster category to which the ith local model belongs.
Illustratively, in the horizontal federated learning method provided by the present application, the number of cluster categories can be set according to the actual application scenario. In one possible approach, the number of cluster categories may be any positive integer, e.g., 2, 4, 8, or 16. The present application does not limit the number of cluster categories.
Step 540, uploading the ith local model and the target cluster category to the ith miner node, so that the ith miner node generates the kth round global model by aggregating local models of the same target cluster category; the ith miner node belongs to the blockchain system.
In the application, after the ith client node obtains the ith local model and the target clustering category, the ith client node can perform information interaction with the ith miner node.
It should be noted that the ith miner node belongs to the blockchain system, i.e., it is a miner node in the blockchain system. In one possible design of the present application, any miner node present in the blockchain system is qualified to perform the horizontal federated learning method provided herein. In the kth round of model updating, a miner node may randomly select and bind to a client node from among the client nodes that have not yet been bound to a miner node; the client node that the ith miner node selects and binds to is the ith client node.
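The random per-round binding described above can be sketched as follows; the one-to-one pairing, the identifier types, and the function name are assumptions made for illustration only.

```python
import random

def bind_miners_to_clients(miner_ids, client_ids, seed=None):
    """Each miner node randomly selects a client node from among the
    client nodes that have not yet been bound to a miner node.

    Returns a dict mapping miner id -> bound client id; each client is
    bound at most once (assumed one-to-one pairing).
    """
    rng = random.Random(seed)
    unbound = list(client_ids)
    rng.shuffle(unbound)  # random selection among the unbound clients
    return dict(zip(miner_ids, unbound))
```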
It can be seen that the ith client node and the ith miner node are bound in advance. Thus, the ith client node can upload the ith local model and the target cluster category to the ith miner node. Thereafter, in the blockchain system, the ith miner node can aggregate local models belonging to the same target cluster category into the kth round global model.
Step 550, downloading the kth round of global model from the ith miner node.
In the design of the present application, after the ith miner node successfully aggregates the kth round global model, it sends that model to the corresponding ith client node. Accordingly, the ith client node downloads the kth round global model from the ith miner node.
In one possible approach, the ith client node may be configured to monitor messages from the ith miner node; upon detecting that the ith miner node has transmitted information, it receives the kth round global model sent by the ith miner node.
In another possible way, the ith client node does not monitor the information of the ith miner node. The ith miner node holds the communication address of the ith client node. The ith miner node directly sends the kth round of global model to the communication address so that the ith client node obtains the global model with the updated round, namely the kth round of global model.
In summary, according to the horizontal federated learning method provided in this embodiment, the ith client node can update the previous round's global model according to local data to obtain the ith local model, calculate the target cluster category to which the ith local model belongs, and upload both to the ith miner node, so that the ith miner node can generate this round's global model by aggregating the local models belonging to the same target cluster category; the ith client node then downloads this round's global model from the ith miner node, completing one round of model updating in horizontal federated learning. Because clustering is performed locally, the miner nodes aggregate only local models of the same category when updating the global model. The updated global model therefore performs better, interference from local models with large deviations is avoided, and the updating efficiency of horizontal federated learning is improved in practical scenarios where local models differ greatly.
Referring to fig. 6, fig. 6 is a flowchart of a lateral federal learning method according to an exemplary embodiment of the present application. The lateral federal learning method can be applied to the ith miner node in the blockchain system shown above. In fig. 6, the lateral federal learning method includes:
step 610, the ith local model and target cluster category uploaded by the ith client node are received.
In the present application, since the ith miner node is bound to the ith client node in advance, it can receive the ith local model uploaded by the ith client node together with the target cluster category to which the ith local model belongs.
Step 620, store the ith local model into the candidate block.
It should be noted that, after receiving the ith local model and the target cluster category, the ith miner node can store the ith local model in the candidate block. The candidate block is a block maintained by the ith miner node itself; it has not been published to the rest of the blockchain system and is not yet full.
Step 630, based on the target cluster category, downloading the local model corresponding to the target cluster category from the broadcast data of other miner nodes in the blockchain system and storing the local model in the candidate block.
Optionally, after storing the ith local model in the candidate block, the ith local model and the target cluster class may be broadcast in the blockchain system so that other miner nodes storing local models of the target cluster class can save the ith local model.
In the present application, the ith miner node is able to receive broadcast data from other miner nodes in the blockchain system. If the clustering type in the broadcast data is the same as the target clustering type, the ith miner node can download a local model corresponding to the clustering type from the broadcast data and store the local model into the candidate block.
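The candidate-block bookkeeping described in steps 620 and 630 might be sketched as below; the `capacity` field, the storage layout, and the class name are assumptions, since the patent does not specify a block format.

```python
class CandidateBlock:
    """Minimal sketch of a miner node's unpublished candidate block.

    Local models are stored together with the cluster category they
    belong to, so models matching the target cluster category can be
    collected for aggregation later.
    """
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = []  # list of (cluster_category, local_model)

    def is_full(self):
        return len(self.entries) >= self.capacity

    def store(self, category, model):
        # used both for the miner's own ith local model (step 620) and
        # for models downloaded from other miners' broadcasts (step 630)
        if self.is_full():
            raise RuntimeError("candidate block is full")
        self.entries.append((tuple(category), model))

    def models_for(self, category):
        # all stored local models matching the target cluster category
        return [m for c, m in self.entries if c == tuple(category)]
```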
Step 640, in response to a target miner node in the blockchain system completing the proof-of-work (PoW) operation, aggregating a kth round global model from the local models corresponding to the target cluster category stored in the generation block produced by the target miner node; the generation block is the candidate block of the target miner node at the time the PoW operation is completed.
It should be noted that the local models corresponding to the target cluster category include the ith local model.
In the present application, a target miner node in the blockchain system completes the proof-of-work (PoW) operation. At that point, the target miner node has the right to send its generation block to the other miner nodes in the system, where the generation block is the candidate block of the target miner node at the time the PoW operation is completed. After the target miner node sends the generation block to the other miner nodes, all miner nodes in the blockchain system aggregate the local models corresponding to the target cluster category stored in the generation block to obtain the kth round global model.
In one possible approach, the ith miner node may use the algorithm of equation (2) to derive the kth round global model.
In formula (2), w_k denotes the kth round global model, n denotes the number of local models corresponding to the target cluster category stored in the generation block, and the summed terms denote the individual local models corresponding to the target cluster category; that is, w_k is the mean of those n local models.
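Formula (2) as described here — the kth round global model as the element-wise mean of the n local models of the target cluster category — can be sketched as:

```python
import numpy as np

def aggregate_global_model(local_models):
    """Aggregate per formula (2): the kth round global model w_k is the
    element-wise mean of the n local models stored in the generation
    block for the target cluster category."""
    if not local_models:
        raise ValueError("no local models to aggregate")
    return np.mean(np.stack(local_models), axis=0)
```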
It should be noted that, after each new block is found, the miner node in the present application broadcasts a message to the other miner nodes in the blockchain system to determine whether the blockchain has forked. If a fork occurs, all miner nodes empty the contents of their candidate blocks and re-acquire the corresponding local models, together with the cluster categories to which they belong, from their corresponding client nodes.
Step 650, send the kth round of global model to the ith client node.
In the present application, the ith miner node can send the kth round global model to the ith client node.
In one possible approach, the ith miner node knows in advance the communication address of its corresponding ith client node. In this scenario, the ith miner node sends the kth round global model to the ith client node according to that communication address.
In another possible way, the ith miner node is unaware of the communication address of the ith client node corresponding to the ith miner node. In this scenario, the ith miner node may send the kth round of global models in a broadcast manner. Correspondingly, the ith client node knows the identity of the ith miner node and receives the kth round of global model according to the identity of the ith miner node.
In another possible way, the ith miner node is unaware of the communication address of the ith client node corresponding to the ith miner node. In this scenario, the ith miner node may pre-establish a communication link with the ith client node over which the ith miner node sends the kth round of global model to the ith client node.
In summary, this embodiment enables the ith miner node in the blockchain system to receive the ith local model and the target cluster category sent by the ith client node, store the ith local model in a local candidate block, download the local models corresponding to the target cluster category from other miner nodes based on that category and store them in the candidate block, and, when a target miner node that has completed the proof-of-work (PoW) operation exists in the blockchain system, aggregate the local models corresponding to the target cluster category in the target miner node's generation block into the kth round global model and send it to the ith client node. The present application thus realizes a horizontal federated learning method combined with a blockchain system, which both ensures that the global model cannot be tampered with and avoids the entire process grinding to a halt after a central server failure. Meanwhile, because only local models belonging to the same category are aggregated, interference from local models with large deviations during global model generation is avoided, improving the updating efficiency of horizontal federated learning in practical scenarios where local models differ greatly.
Referring to fig. 7, fig. 7 is a flowchart of a lateral federal learning method according to an exemplary embodiment of the present application. The horizontal federation learning method can be completed by the cooperation of an ith client node and an ith miner node, wherein i is a positive integer. In fig. 7, the lateral federal learning method includes:
In step 710, the ith client node obtains the kth-1 round of global model.
In the present application, the execution of step 710 may refer to the execution of step 510, and the embodiments of the present application are not repeated.
In step 720, the ith client node updates the kth-1 round of global model according to the local data to obtain the ith local model.
In the present application, the execution of step 720 may refer to the execution of step 520, and the embodiments of the present application are not described in detail.
In step 731, the ith client node obtains a locally stored clustering algorithm.
In the present application, the ith client node can locally store a clustering algorithm in advance according to downstream service requirements. The clustering algorithm is used to group local models with similar characteristics into one category.
In one possible approach, the clustering algorithm may be the SimHash clustering algorithm.
In step 732, the ith client node obtains a target cluster class from the ith local model based on the clustering algorithm.
Optionally, based on the clustering algorithm, the ith local model is mapped to a hash vector of length n; the hash vector is a one-dimensional vector representing the target cluster category, and each bit of the hash vector is 0 or 1.
In the application, if the clustering algorithm adopted by the ith client node is SimHash clustering algorithm, the ith client node can calculate and obtain the target clustering category according to the formula (3).
In formula (3), the left-hand side denotes the target cluster category. Optionally, it is a hash vector: a one-dimensional vector of length n in which each bit is either 0 or 1. In formula (3), x_j is a random unit-norm vector of size m×1.
The process of calculating the hash vector by formula (3) is explained below through an example. Let n = 2, x_1 = [0.1, 0.3, 0.5]^T, and x_2 = [-0.2, -0.4, -0.6]^T. Under these conditions, each bit j of the hash vector of the ith client node is obtained from the sign of the inner product of x_j with the parameters of the ith local model: the bit is 1 if the inner product is non-negative and 0 otherwise.
In this example, the hash vector can take four values in total: [0, 0], [0, 1], [1, 0], and [1, 1]. That is, there are four possible target cluster categories.
In the application, simHash clustering algorithm can map similar local models with higher probability into the same hash vector, and cluster the local models generated in this round into a plurality of categories based on SimHash clustering algorithm.
Step 740, the ith client node uploads the ith local model and target cluster categories to the ith miner node.
In the present application, the execution of step 740 may refer to the execution of step 540, and the embodiments of the present application are not described in detail.
Optionally, when the ith client node uploads the ith local model and the target cluster category, the ith timestamp may also be uploaded to the ith miner node. Wherein the ith timestamp is used to indicate the time at which the ith local model completes training.
Correspondingly, the ith miner node receives the ith local model and the target cluster category uploaded by the ith client node. Optionally, the ith miner node is further capable of receiving the ith timestamp uploaded by the ith client node.
In step 761, the ith miner node sends the ith local model to the (i+1) th miner node in the blockchain system, so that the (i+1) th miner node verifies the authenticity of the ith local model.
In response to the ith local model being true, the ith miner node stores the ith local model in the candidate block, step 762.
In the present application, the ith miner node can send the received ith local model to other miner nodes for verification. One possible verification method is to send the local model to an adjacent miner node; if the verification succeeds, the local model is declared to be true. When the ith local model is true, the ith miner node stores it in its local candidate block.
Based on the target cluster category, the ith miner node downloads the local model corresponding to the target cluster category from the broadcast data of other miner nodes in the blockchain system and stores it in the candidate block, step 770.
It should be noted that the ith miner node performs the proof-of-work (PoW) operation in response to a candidate block in the blockchain system satisfying the aggregation trigger condition, where the aggregation trigger condition includes at least one of the following: the candidate block is full; or the time elapsed since the ith local model was stored exceeds the second threshold.
Wherein the second threshold is a time length threshold.
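The aggregation trigger condition above can be sketched as a simple predicate; the 30-second default for the second threshold is an assumption made purely for illustration.

```python
import time

def should_run_pow(block_full, stored_at, now=None, second_threshold=30.0):
    """Aggregation trigger condition: run the PoW operation when the
    candidate block is full OR the time elapsed since the ith local
    model was stored exceeds the second threshold (a duration).
    """
    now = time.time() if now is None else now
    return block_full or (now - stored_at) > second_threshold
```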
In the present application, the execution of step 770 may refer to the execution of step 630, and will not be described herein.
In the present application, after the i-th miner node performs the completion of step 770, steps 781 and 782 may be performed, or steps 783 and 784 may be performed.
In response to the target miner node being the ith miner node, the candidate block is determined to be a generation block, step 781.
Alternatively, in the case where the ith miner node itself is the target miner node, the ith miner node determines the own candidate block as the generation block. In the application, the generated block is a block in a block chain corresponding to the target cluster category; the blockchains corresponding to different cluster categories are different.
And 782, aggregating to obtain a kth round of global model according to the local model corresponding to the target cluster category stored in the generation block.
In response to the target miner node being a miner node in the blockchain system other than the ith miner node, a generation block for the target miner node is obtained 783.
And 784, according to the local model corresponding to the target cluster category stored in the generation block, the kth round of global model is obtained through aggregation.
Illustratively, the processes shown in steps 782 and 784 above may alternatively be accomplished through steps (a1) and (a2).
And (a 1) calculating the average value of the local model corresponding to the target cluster category.
And (a 2) determining the mean value as a kth round of global model.
In a possible manner, the ith miner node may use the algorithm of the formula (2) to obtain the kth round of global model, and details may refer to the embodiment shown in fig. 6, which is not described herein.
Step 790, in response to the number of local models used to generate the kth round global model being greater than the first threshold, the ith miner node transmits the kth round global model to the ith client node.
In this example, the first threshold may be a percentage, such as 25%, 30%, 40%, 45%, etc., as the application is not limited in this regard.
Correspondingly, when the number of local models for generating the kth round of global models is larger than a first threshold value, the kth round of global models are downloaded from the ith miner node by the ith client node.
In this example, the kth round of updating ends when the ith client node receives the kth round global model. It should be noted that, because the present application adopts a clustering method, client nodes belonging to different categories correspond to different blockchains. For example, if four categories are included, the miner nodes maintain four blockchains, so that the client nodes of each of the four categories can obtain the kth round global model of their own category, which further improves the training effect of each category's kth round global model.
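The per-category bookkeeping described above — one blockchain per cluster category — can be sketched as follows; representing each chain as a plain list keyed by its hash vector is an assumption for illustration only.

```python
from itertools import product

def init_chains(n_bits):
    """One blockchain per cluster category: with an n-bit hash vector
    there are 2**n categories, so the miner maintains 2**n chains."""
    return {bits: [] for bits in product((0, 1), repeat=n_bits)}

chains = init_chains(2)  # 4 categories -> 4 chains, as in the example above
```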
In summary, the horizontal federated learning method provided in this embodiment enables a horizontal federated learning system that adopts blockchain technology to mitigate both the problem that horizontal federated learning cannot continue after a central server failure and the problem that the global model is easily tampered with; moreover, through the clustering method, client nodes with large real-world differences can participate in horizontal federated learning, so that the global model of each category to which the client nodes belong is trained truly and efficiently.
Optionally, the present application can perform the clustering operation in every round, so that local models with the same characteristics are grouped into one category each time; every round thus trains well, improving the training efficiency of each round of horizontal federated learning.
Optionally, the application can improve the enthusiasm of the participation of the client node, and is helpful for recruiting the participation of the client node in the public network so as to improve the system operation performance of the transverse federal learning system.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Referring to fig. 8, fig. 8 is a block diagram illustrating a lateral federal learning device according to an exemplary embodiment of the present application. The horizontal federation learning device is applied to an ith client node in a horizontal federation learning system, wherein i is a positive integer. The lateral federal learning means may be implemented as all or part of the client node by software, hardware, or a combination of both. The device comprises:
The model obtaining module 810 is configured to obtain a kth-1 round of global model, where the kth-1 round of global model is a global model obtained after the kth-1 round of training in the horizontal federal learning process, k is greater than or equal to 2, and k is a positive integer.
And the model updating module 820 is used for updating the kth-1 round of global model according to the local data to obtain an ith local model.
And a class calculation module 830, configured to calculate a target cluster class to which the ith local model belongs.
And the data uploading module 840 is configured to upload the i local model and the target cluster category to an i miner node, so that the i miner node aggregates the local models belonging to the target cluster category into a k-th round global model, and the i miner node belongs to a blockchain system.
A first downloading module 850, configured to download the kth round of global model from the ith miner node.
In an alternative embodiment, the class calculation module 830 is configured to obtain a locally stored clustering algorithm; and obtaining the target clustering category according to the ith local model based on the clustering algorithm.
In an alternative embodiment, the class calculation module 830 is configured to map, based on the clustering algorithm, the ith local model to a hash vector with a length n, where the hash vector is a one-dimensional vector for representing the target cluster class, and each bit of the hash vector is 0 or 1.
In an alternative embodiment, the first downloading module 850 is configured to download the kth round of global models from the ith miner node in response to the number of local models used to generate the kth round of global models being greater than a first threshold.
In an alternative embodiment, the apparatus further comprises a timestamp uploading module configured to upload an ith timestamp to the ith miner node, where the ith timestamp is used to indicate a time when the ith local model completes training.
In summary, according to the horizontal federated learning device provided in this embodiment, the ith client node can update the previous round's global model according to local data to obtain the ith local model, calculate the target cluster category to which the ith local model belongs, and upload both to the ith miner node, so that the ith miner node can generate this round's global model by aggregating the local models belonging to the same target cluster category; the ith client node then downloads this round's global model from the ith miner node, completing one round of model updating in horizontal federated learning. Because clustering is performed locally, the miner nodes aggregate only local models of the same category when updating the global model. The updated global model therefore performs better, interference from local models with large deviations is avoided, and the updating efficiency of horizontal federated learning is improved in practical scenarios where local models differ greatly.
Referring to fig. 9, fig. 9 is a block diagram illustrating a lateral federal learning device according to an exemplary embodiment of the present application. The transverse federal learning device is applied to an ith miner node in a block chain system, wherein i is a positive integer. The lateral federal learning means may be implemented as all or part of a mineworker node by software, hardware, or a combination of both. The device comprises:
The data receiving module 910 is configured to receive the ith local model and the target cluster category uploaded by the ith client node.
A model storage module 920, configured to store the ith local model into a candidate block.
And a second downloading module 930, configured to download, from broadcast data of other miner nodes in the blockchain system, a local model corresponding to the target cluster category based on the target cluster category, and store the local model in the candidate block.
The model aggregation module 940 is configured to, in response to a target miner node in the blockchain system completing the proof-of-work (PoW) operation, aggregate a kth round global model from the local models corresponding to the target cluster category stored in the generation block produced by the target miner node; the generation block is the candidate block of the target miner node when the PoW operation is completed.
A model sending module 950, configured to send the kth round of global models to the ith client node.
In an alternative embodiment, the model storage module 920 is configured to send the i-th local model to an i+1-th miner node in the blockchain system, so that the i+1-th miner node verifies authenticity of the i-th local model; in response to the ith local model being true, the ith local model is stored into the candidate block.
In an alternative embodiment, the model aggregation module 940 is configured to determine the candidate block as a generation block in response to the target miner node being the i-th miner node; and according to the local model corresponding to the target cluster category stored in the generation block, the kth round of global model is obtained through aggregation. Or the model aggregation module 940 is configured to obtain a generation block of the target miner node in response to the target miner node being a miner node other than the ith miner node in the blockchain system; and according to the local model corresponding to the target cluster category stored in the generation block, the kth round of global model is obtained through aggregation.
In an alternative embodiment, the apparatus further comprises an execution module for executing the proof of work PoW operation in response to the presence of candidate blocks in the blockchain system satisfying an aggregate trigger condition; wherein the aggregation trigger condition includes at least one of: the candidate block is full; or, the elapsed time from the storage of the ith local model is longer than a second threshold.
In an optional embodiment, the generating block in the apparatus is a block in a blockchain corresponding to the target cluster category; the blockchains corresponding to different cluster categories are different.
In an optional embodiment, the model aggregation module 940 is configured to calculate a mean value of the local model corresponding to the target cluster class; and determining the mean value as the kth round of global model.
In an alternative embodiment, the model sending module 950 is configured to send the kth round of global models to the ith client node in response to the number of local models used to generate the kth round of global models being greater than a first threshold.
In an alternative embodiment, the apparatus further includes a timestamp receiving module, configured to receive an ith timestamp uploaded by the ith client node, where the ith timestamp is used to indicate a time when the ith local model completes training.
In summary, this embodiment enables the ith miner node in the blockchain system to receive the ith local model and the target cluster category sent by the ith client node, store the ith local model in a local candidate block, download the local models corresponding to the target cluster category from other miner nodes based on that category and store them in the candidate block, and, when a target miner node that has completed the proof-of-work (PoW) operation exists in the blockchain system, aggregate the local models corresponding to the target cluster category in the target miner node's generation block into the kth round global model and send it to the ith client node. The present application thus realizes a horizontal federated learning method combined with a blockchain system, which both ensures that the global model cannot be tampered with and avoids the entire process grinding to a halt after a central server failure. Meanwhile, because only local models belonging to the same category are aggregated, interference from local models with large deviations during global model generation is avoided, improving the updating efficiency of horizontal federated learning in practical scenarios where local models differ greatly.
The present application provides a computer readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement a lateral federation learning method as the present application is applied in an i-th client node in a lateral federation learning system.
The present application provides a computer readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement a lateral federal learning method as applied in an ith miner node in a blockchain system.
It should be noted that: in the transverse federation learning device provided in the above embodiment, when the transverse federation learning method is executed, only the division of the above functional modules is used for illustration, in practical application, the above functional allocation may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the transverse federal learning device provided in the above embodiment and the transverse federal learning method embodiment belong to the same concept, and specific implementation processes thereof are detailed in the method embodiment and are not described herein again.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above embodiments are merely exemplary embodiments of the present application and are not intended to limit the present application, and any modifications, equivalent substitutions, improvements, etc. that fall within the spirit and principles of the present application should be included in the scope of the present application.

Claims (19)

1. A method for horizontal federal learning, applied to an ith client node in a horizontal federal learning system, i being a positive integer, the method comprising:
acquiring a (k-1)th-round global model, wherein the (k-1)th-round global model is the global model obtained after the (k-1)th round of training in the horizontal federated learning process, k is greater than or equal to 2, and k is a positive integer;
updating the (k-1)th-round global model according to local data to obtain an ith local model;
calculating a target cluster category to which the ith local model belongs;
uploading the ith local model and the target cluster category to an ith miner node, the ith miner node belonging to a blockchain system, wherein the ith miner node is configured to: receive the ith local model and the target cluster category uploaded by the ith client node; store the ith local model into a candidate block; based on the target cluster category, download local models corresponding to the target cluster category from broadcast data of other miner nodes in the blockchain system and store them into the candidate block; when a target miner node that has completed a proof-of-work (PoW) operation exists in the blockchain system, obtain a kth-round global model by aggregating the local models corresponding to the target cluster category stored in the generation block produced by the target miner node, the generation block being the candidate block held by the target miner node when it completes the PoW operation; and send the kth-round global model to the ith client node; and
downloading the kth-round global model from the ith miner node.
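The client-side round recited in claim 1 can be sketched as follows. This is an illustrative sketch only: the linear-model training step, the `cluster_category` function, and the `Miner` interface (`upload`, `download_global`) are assumptions for demonstration, not structures defined by the patent.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    # One gradient step on local data (linear model, squared loss).
    # Assumption: the patent does not fix a particular training procedure.
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def cluster_category(weights, n_bits=4, seed=0):
    # Placeholder for the claimed "target cluster category": signs of
    # random projections of the parameter vector (a SimHash-style sketch).
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, weights.size))
    return tuple((planes @ weights.ravel() > 0).astype(int))

class Miner:
    # Minimal stand-in for the ith miner node's interface (hypothetical).
    def __init__(self):
        self.received = []

    def upload(self, model, category):
        self.received.append((model, category))

    def download_global(self):
        # Aggregate the stored local models by element-wise averaging.
        return np.mean([m for m, _ in self.received], axis=0)

def client_round(global_weights, local_data, miner):
    # Claim 1 in order: update global model locally, compute the target
    # cluster category, upload both, then download the new global model.
    local_model = local_update(global_weights, local_data)
    category = cluster_category(local_model)
    miner.upload(local_model, category)
    return miner.download_global()
```

With a single client the downloaded model equals that client's local model; with several clients per category the miner's average plays the role of the kth-round global model.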
2. The method according to claim 1, wherein the calculating a target cluster category to which the ith local model belongs comprises:
acquiring a locally stored clustering algorithm; and
obtaining the target cluster category from the ith local model based on the clustering algorithm.
3. The method according to claim 2, wherein the obtaining the target cluster category from the ith local model based on the clustering algorithm comprises:
mapping, based on the clustering algorithm, the ith local model to a hash vector of length n, the hash vector being a one-dimensional vector representing the target cluster category, and each bit of the hash vector being either 0 or 1.
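One way to realize such an n-bit hash vector is a random-hyperplane (SimHash-style) locality-sensitive hash over the flattened model parameters, so that similar local models tend to fall into the same cluster category. This is an assumption for illustration; claim 3 only requires some mapping to an n-bit 0/1 vector.

```python
import numpy as np

def model_hash_vector(weights, n=8, seed=42):
    # Map a model's parameter vector to an n-bit hash vector (each bit 0 or 1).
    # A shared seed means every node uses the same hyperplanes, so identical
    # or similar models yield the same cluster category.
    rng = np.random.default_rng(seed)
    flat = np.asarray(weights, dtype=float).ravel()
    hyperplanes = rng.standard_normal((n, flat.size))
    # Sign of each projection gives one bit of the one-dimensional hash vector.
    return (hyperplanes @ flat > 0).astype(int)
```

Because each bit is the sign of a projection, the hash is invariant under positive rescaling of the model, which is one reason sign-based hashes are a common choice for grouping models by direction rather than magnitude.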
4. The method according to any one of claims 1 to 3, wherein the downloading the kth-round global model from the ith miner node comprises:
downloading the kth-round global model from the ith miner node in response to the number of local models used to generate the kth-round global model being greater than a first threshold.
5. The method according to any one of claims 1 to 3, further comprising:
uploading an ith timestamp to the ith miner node, the ith timestamp indicating the moment at which training of the ith local model is completed.
6. A horizontal federated learning method, applied to an ith miner node in a blockchain system, i being a positive integer, the method comprising:
receiving an ith local model and a target cluster category uploaded by an ith client node;
storing the ith local model into a candidate block;
based on the target cluster category, downloading local models corresponding to the target cluster category from broadcast data of other miner nodes in the blockchain system and storing them into the candidate block;
when a target miner node that has completed a proof-of-work (PoW) operation exists in the blockchain system, obtaining a kth-round global model by aggregating the local models corresponding to the target cluster category stored in the generation block produced by the target miner node, the generation block being the candidate block held by the target miner node when it completes the PoW operation; and
sending the kth-round global model to the ith client node.
7. The method according to claim 6, wherein the storing the ith local model into a candidate block comprises:
transmitting the ith local model to an (i+1)th miner node in the blockchain system so that the (i+1)th miner node verifies the authenticity of the ith local model; and
storing the ith local model into the candidate block in response to the ith local model being verified as authentic.
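The cross-verification in claim 7 could, for example, be a content-digest comparison at the next miner node. The patent does not specify how authenticity is checked, so the digest scheme and both function names below are assumptions for illustration.

```python
import hashlib
import json

def model_digest(model_weights):
    # Deterministic content digest of the model parameters, advertised
    # alongside the model when it is forwarded (hypothetical scheme).
    payload = json.dumps([float(w) for w in model_weights]).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def verify_at_next_miner(received_weights, advertised_digest):
    # The (i+1)th miner recomputes the digest over what it received and
    # compares it with the digest advertised by the ith miner node.
    return model_digest(received_weights) == advertised_digest
```

In a deployed system a signature over the digest (binding it to the client's identity) would typically replace the bare hash; the bare hash only detects transmission tampering, not impersonation.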
8. The method according to claim 6, wherein the obtaining, when a target miner node that has completed the proof-of-work (PoW) operation exists in the blockchain system, a kth-round global model by aggregating the local models corresponding to the target cluster category stored in the generation block produced by the target miner node comprises:
determining the candidate block as the generation block in response to the target miner node being the ith miner node, and
obtaining the kth-round global model by aggregating the local models corresponding to the target cluster category stored in the generation block;
or,
acquiring the generation block of the target miner node in response to the target miner node being a miner node other than the ith miner node in the blockchain system, and
obtaining the kth-round global model by aggregating the local models corresponding to the target cluster category stored in the generation block.
9. The method according to claim 8, further comprising:
executing the proof-of-work (PoW) operation in response to a candidate block in the blockchain system satisfying an aggregation trigger condition;
wherein the aggregation trigger condition comprises at least one of: the candidate block being full; or the time elapsed since the ith local model was stored exceeding a second threshold.
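The aggregation trigger condition of claim 9 reduces to a simple "full or timed out" predicate. A minimal sketch, in which the candidate-block representation, parameter names, and capacity semantics are all illustrative assumptions:

```python
import time

def should_run_pow(candidate_block, capacity, first_store_time,
                   second_threshold_s, now=None):
    # Claim 9's trigger: start the PoW operation when the candidate block
    # is full OR the time elapsed since the ith local model was stored
    # exceeds the second threshold (in seconds).
    now = time.monotonic() if now is None else now
    block_full = len(candidate_block) >= capacity
    timed_out = (now - first_store_time) > second_threshold_s
    return block_full or timed_out
```

The timeout branch prevents a sparsely populated cluster category from stalling a training round indefinitely while waiting for its block to fill.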
10. The method according to claim 8, wherein the generation block is a block in a blockchain corresponding to the target cluster category, and different cluster categories correspond to different blockchains.
11. The method according to any one of claims 6 to 9, wherein the obtaining a kth-round global model by aggregating comprises:
calculating the mean of the local models corresponding to the target cluster category; and
determining the mean as the kth-round global model.
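Claim 11's aggregation step is plain equal-weight parameter averaging (FedAvg with uniform weights). A minimal sketch; weighting by local dataset size would be a variation the claim does not recite:

```python
import numpy as np

def aggregate_global_model(local_models):
    # kth-round global model = element-wise mean of the local models that
    # share the target cluster category, as recited in claim 11.
    stacked = np.stack([np.asarray(m, dtype=float) for m in local_models])
    return stacked.mean(axis=0)
```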
12. The method according to any one of claims 6 to 9, wherein the sending the kth-round global model to the ith client node comprises:
sending the kth-round global model to the ith client node in response to the number of local models used to generate the kth-round global model being greater than a first threshold.
13. The method according to any one of claims 6 to 9, further comprising:
receiving an ith timestamp uploaded by the ith client node, the ith timestamp indicating the moment at which training of the ith local model is completed.
14. A horizontal federated learning apparatus, applied to an ith client node in a horizontal federated learning system, i being a positive integer, the apparatus comprising:
a model acquisition module, configured to acquire a (k-1)th-round global model, wherein the (k-1)th-round global model is the global model obtained after the (k-1)th round of training in the horizontal federated learning process, k is greater than or equal to 2, and k is a positive integer;
a model updating module, configured to update the (k-1)th-round global model according to local data to obtain an ith local model;
a category calculation module, configured to calculate a target cluster category to which the ith local model belongs;
a data uploading module, configured to upload the ith local model and the target cluster category to an ith miner node, the ith miner node belonging to a blockchain system, wherein the ith miner node is configured to: receive the ith local model and the target cluster category uploaded by the ith client node; store the ith local model into a candidate block; based on the target cluster category, download local models corresponding to the target cluster category from broadcast data of other miner nodes in the blockchain system and store them into the candidate block; when a target miner node that has completed a proof-of-work (PoW) operation exists in the blockchain system, obtain a kth-round global model by aggregating the local models corresponding to the target cluster category stored in the generation block produced by the target miner node, the generation block being the candidate block held by the target miner node when it completes the PoW operation; and send the kth-round global model to the ith client node; and
a first downloading module, configured to download the kth-round global model from the ith miner node.
15. A horizontal federated learning apparatus, applied to an ith miner node in a blockchain system, i being a positive integer, the apparatus comprising:
a data receiving module, configured to receive an ith local model and a target cluster category uploaded by an ith client node;
a model storage module, configured to store the ith local model into a candidate block;
a second downloading module, configured to download, based on the target cluster category, local models corresponding to the target cluster category from broadcast data of other miner nodes in the blockchain system and store them into the candidate block;
a model aggregation module, configured to, when a target miner node that has completed a proof-of-work (PoW) operation exists in the blockchain system, obtain a kth-round global model by aggregating the local models corresponding to the target cluster category stored in the generation block produced by the target miner node, the generation block being the candidate block held by the target miner node when it completes the PoW operation; and
a model sending module, configured to send the kth-round global model to the ith client node.
16. A client node, comprising a processor, a memory coupled to the processor, and program instructions stored on the memory which, when executed by the processor, implement the horizontal federated learning method according to any one of claims 1 to 5.
17. A miner node, comprising a processor, a memory coupled to the processor, and program instructions stored on the memory which, when executed by the processor, implement the horizontal federated learning method according to any one of claims 6 to 13.
18. A computer-readable storage medium having program instructions stored therein which, when executed by a processor, implement the horizontal federated learning method according to any one of claims 1 to 5.
19. A computer-readable storage medium having program instructions stored therein which, when executed by a processor, implement the horizontal federated learning method according to any one of claims 6 to 13.
CN202110801990.9A 2021-07-15 2021-07-15 Transverse federal learning method, device and storage medium Active CN113487041B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110801990.9A CN113487041B (en) 2021-07-15 2021-07-15 Transverse federal learning method, device and storage medium


Publications (2)

Publication Number Publication Date
CN113487041A CN113487041A (en) 2021-10-08
CN113487041B true CN113487041B (en) 2024-05-07

Family

ID=77939595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110801990.9A Active CN113487041B (en) 2021-07-15 2021-07-15 Transverse federal learning method, device and storage medium

Country Status (1)

Country Link
CN (1) CN113487041B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116702918A (en) * 2022-02-28 2023-09-05 华为技术有限公司 Federal learning method and related equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111355739A (en) * 2020-03-06 2020-06-30 深圳前海微众银行股份有限公司 Data transmission method, device, terminal equipment and medium for horizontal federal learning
CN111931242A (en) * 2020-09-30 2020-11-13 国网浙江省电力有限公司电力科学研究院 Data sharing method, computer equipment applying same and readable storage medium
CN112181971A (en) * 2020-10-27 2021-01-05 华侨大学 Edge-based federated learning model cleaning and equipment clustering method, system, equipment and readable storage medium
CN112508075A (en) * 2020-12-01 2021-03-16 平安科技(深圳)有限公司 Horizontal federation-based DBSCAN clustering method and related equipment thereof
CN112527273A (en) * 2020-12-18 2021-03-19 平安科技(深圳)有限公司 Code completion method, device and related equipment
CN112712182A (en) * 2021-03-29 2021-04-27 腾讯科技(深圳)有限公司 Model training method and device based on federal learning and storage medium
CN112714106A (en) * 2020-12-17 2021-04-27 杭州趣链科技有限公司 Block chain-based federal learning casual vehicle carrying attack defense method
CN112990276A (en) * 2021-02-20 2021-06-18 平安科技(深圳)有限公司 Federal learning method, device, equipment and storage medium based on self-organizing cluster

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11823067B2 (en) * 2017-06-27 2023-11-21 Hcl Technologies Limited System and method for tuning and deploying an analytical model over a target eco-system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jia Yanyan; Zhang Zhao; Feng Jian; Wang Chunkai. Application of federated learning models in classified data processing. Journal of China Academy of Electronics and Information Technology. 2020, (No. 01), full text. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230728

Address after: 1301, Office Building T2, Qianhai China Resources Financial Center, No. 55 Guiwan Fourth Road, Nanshan Street, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong Province, 518052

Applicant after: Shenzhen Hefei Technology Co.,Ltd.

Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant