CN114611721A - Federated learning method, apparatus, device and medium based on a sharded blockchain - Google Patents

Federated learning method, apparatus, device and medium based on a sharded blockchain

Info

Publication number
CN114611721A
CN114611721A (application CN202210257885.8A)
Authority
CN
China
Prior art keywords
common
fragment
node
nodes
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210257885.8A
Other languages
Chinese (zh)
Inventor
余荣
蔡礼斌
陈涵
康嘉文
王思明
谭北海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202210257885.8A
Publication of CN114611721A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/20 Ensemble learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3409 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/04 Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Software Systems (AREA)
  • Marketing (AREA)
  • Artificial Intelligence (AREA)
  • Technology Law (AREA)
  • Strategic Management (AREA)
  • Computer Hardware Design (AREA)
  • Economics (AREA)
  • Quality & Reliability (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Development Economics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a federated learning method, apparatus, device and medium based on a sharded blockchain. The method comprises the following steps: the master node in a common shard builds a behavior record table from the verification behaviors of the ordinary nodes in the same common shard, and determines a behavior score for each ordinary node from the behavior record table; after the master node in a common shard issues a computing task, it records the response delay of the ordinary nodes in the same common shard, and determines a performance score for each ordinary node from the response delay; and the master node in the leader shard re-assigns the ordinary nodes of each common shard based on their behavior scores and performance scores. Because the behavior score and the performance score are derived from each ordinary node's verification behavior and response delay, and the ordinary nodes of the common shards are re-assigned on that basis, the performance difference between shards is reduced.

Description

Federated learning method, apparatus, device and medium based on a sharded blockchain
Technical Field
The invention relates to the technical field of blockchain and federated learning, and in particular to a federated learning method, apparatus, device and medium based on a sharded blockchain.
Background
Federated learning is an emerging artificial-intelligence technique for building a shared model between mobile terminals and a server, so that data resources are used effectively at large scale while user privacy and security are preserved. It is a distributed machine-learning method: each participant trains on its local data, uploads the updated parameters to a server, and the server aggregates those updates into global parameters. Compared with traditional machine learning, federated learning improves learning efficiency and makes it possible to integrate and use data scattered across organizations while meeting data-privacy requirements.
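For illustration only, the following is a minimal sketch of the server-side aggregation step described above, in the style of federated averaging; the function name and the weighting of each participant's update by its sample count are assumptions made for this example and are not taken from the disclosure.

from typing import Dict, List

import numpy as np


def federated_average(updates: List[Dict[str, np.ndarray]],
                      sample_counts: List[int]) -> Dict[str, np.ndarray]:
    """Aggregate locally trained parameters into global parameters."""
    total = float(sum(sample_counts))
    global_params = {}
    for name in updates[0]:
        # Weight each participant's update by its share of the training data.
        global_params[name] = sum((n / total) * u[name]
                                  for u, n in zip(updates, sample_counts))
    return global_params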
A blockchain is a new distributed infrastructure and computing paradigm that verifies and stores data in a chained block data structure, generates and updates data through a distributed node consensus algorithm, secures data transmission and access with cryptography, and programs and operates on data with smart contracts composed of automated script code. Because a blockchain is decentralized, traceable, tamper-proof, programmable and collectively maintained, its distributed storage architecture can serve as the base architecture for federated learning: by designing a protocol on top of the blockchain, model aggregation can be run at the clients, the consistency of model parameters among the federated-learning participants and the security and reliability of parameter synchronization and sharing can be guaranteed, and a reasonable incentive mechanism in the blockchain also offers a technical means of motivating participants to cooperate in training the federated-learning model.
Sharding was originally proposed to optimize large centralized databases: the data of a large database is divided into multiple shards that are distributed to different servers, which improves the overall performance of the database. Inspired by this, developers proposed applying sharding to blockchain architectures. Any change to the ledger in a blockchain requires all nodes in the network to reach consensus; this public transparency and tamper-resistance reduce the cost of trust in multi-party scenarios, but every transaction must go through transaction generation, block construction, block competition and block broadcasting before it is recorded on chain, and it is only finally confirmed after most nodes have signed it. This complex consensus process guarantees security but limits the scalability of the blockchain and becomes the bottleneck of blockchain network performance. With sharding, the nodes in a single shard only carry part of the work of the whole network and all shards work in parallel, which increases the capacity of the whole network.
Existing federated-learning systems that introduce sharding mainly adopt a random sharding strategy. Because the performance of the nodes in a blockchain system differs, random sharding produces large performance differences between shards, which harms the overall execution rate of the system.
Disclosure of Invention
The application provides a federated learning method, apparatus, device and medium based on a sharded blockchain, in which the ordinary nodes of each common shard are determined from the behavior scores and performance scores of the ordinary nodes, so that performance differences among the common shards are reduced and the overall throughput of the system is improved.
To this end, the application adopts the following technical solution:
In a first aspect, the present application provides a federated learning method based on a sharded blockchain, comprising:
the master node in a common shard builds a behavior record table based on the verification behaviors of the ordinary nodes in the same common shard, and determines a behavior score for each ordinary node based on the behavior record table;
after the master node in a common shard issues a computing task, it records the response delay of the ordinary nodes in the same common shard, and determines a performance score for each ordinary node based on the response delay;
and the master node in the leader shard distributes the ordinary nodes whose behavior scores are below a threshold evenly across the common shards, and distributes the remaining ordinary nodes evenly across the common shards according to their performance scores.
According to an implementable manner of the first aspect of the present application, the member nodes in the leader shard and the master node in each common shard are determined by a PoW consensus mechanism.
According to an implementable manner of the first aspect of the present application, building, by the master node in a common shard, a behavior record table based on the verification behaviors of the ordinary nodes in the same common shard comprises:
the master node in each common shard receives a base model issued by the leader shard and distributes it to a plurality of smart terminals, the base model being constructed by the leader shard from the federated-learning task in a smart contract;
the smart terminals train the base model to obtain corresponding gradient data, pack the gradient data into a plurality of transactions and submit the transactions to the common shard;
and the ordinary nodes in the common shard verify the transactions, and the master node in the same common shard records the verification behavior of each ordinary node to build the behavior record table.
According to an implementable manner of the first aspect of the present application, verifying the plurality of transactions by the ordinary nodes in the common shard and recording, by the master node in the same common shard, the verification behavior of each ordinary node to build the behavior record table comprises:
the ordinary nodes in the common shard verify the plurality of transactions and determine the legal transactions among them;
and the master node in the common shard determines the malicious behavior of each ordinary node from its verification behavior on the legal transactions in the same common shard, and records the malicious behavior of each ordinary node in the behavior record table.
According to an implementable manner of the first aspect of the present application, verifying the plurality of transactions by the ordinary nodes in the common shard and determining the legal transactions among them comprises:
the ordinary nodes in the common shard verify the plurality of transactions;
and a transaction whose number of successful verifications exceeds a preset count is taken as a legal transaction.
In a second aspect, the present application provides a federated learning apparatus based on a sharded blockchain, comprising:
a behavior score determining module, configured to build, through the master node in a common shard, a behavior record table based on the verification behaviors of the ordinary nodes in the same common shard, and to determine a behavior score for each ordinary node based on the behavior record table;
a performance score determining module, configured to record the response delay of the ordinary nodes in the same common shard after the master node in the common shard issues a computing task, and to determine a performance score for each ordinary node based on the response delay;
and an ordinary node determining module, configured to distribute, through the master node in the leader shard, the ordinary nodes whose behavior scores are below a threshold evenly across the common shards, and to distribute the remaining ordinary nodes evenly across the common shards according to their performance scores.
In a third aspect, an embodiment of the present application provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of any of the above federated learning methods based on a sharded blockchain.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the above federated learning methods based on a sharded blockchain.
Compared with the prior art, the federated learning method, apparatus, device and medium based on a sharded blockchain provided by the present application determine a behavior score and a performance score from the verification behavior and the response delay of each ordinary node, and re-assign the ordinary nodes of the common shards based on those scores. This reduces the performance difference among shards, so that each common shard takes a similar time to execute the same task, and the overall working speed of the system is improved.
Drawings
Fig. 1 is a flowchart of a federated learning method based on a sharded blockchain according to a preferred embodiment of the present application;
Fig. 2 is a block diagram of a federated learning apparatus based on a sharded blockchain according to a preferred embodiment of the present application;
Fig. 3 is a diagram of the overall working framework of a system based on a sharded blockchain according to a preferred embodiment of the present application.
Detailed Description
To make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The terms "comprises," "comprising," or any other variation thereof, in the description and claims of this application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments of the present application will be described in further detail with reference to the drawings attached hereto.
As shown in fig. 1, the present application discloses a federated learning method based on a sharded blockchain, which comprises:
s1, the main node in the common fragment constructs a behavior record table based on the verification behavior of the common node in the same common fragment, and determines the behavior score of the common node based on the behavior record table;
s2, after the main node in the common fragment sends the calculation task, recording the response time delay of the common node in the same common fragment, and determining the performance score of the common node based on the response time delay;
s3, the main node in the leader fragment uniformly distributes the general nodes with the behavior scores lower than the threshold value to each general fragment, and distributes the rest general nodes to each general fragment according to the performance scores.
Specifically, the member nodes of the common shards and the leader shard (i.e., the master nodes and the ordinary nodes) are all blockchain nodes. The master node of each common shard and the member nodes of the leader shard are determined in advance through a PoW consensus mechanism, and the ordinary nodes of the common shards are determined randomly in the initial training iteration of federated learning.
Specifically, the response delay is the time between the moment the master node of a common shard issues a computing task and the moment it receives the response of an ordinary node. The performance score of an ordinary node may be determined from the response delay as follows:
the computing performance of the general nodes is evaluated according to the response time delay of the general nodes, and the general nodes are classified into a superior performance class, a medium performance class and a poor performance class according to the evaluation result, wherein the performance scores corresponding to the general nodes in the superior performance class, the medium performance class and the poor performance class are respectively 3, 2 and 1 (the response time delay of the general nodes in the superior performance class is the shortest, and the response time delay of the general nodes in the poor performance class is the longest).
Specifically, the master node in the leader shard distributes the ordinary nodes whose behavior scores are below the threshold evenly across the common shards, and distributes the remaining ordinary nodes evenly across the common shards according to their performance scores.
Specifically, as shown in fig. 3, the member nodes of the leader shard and the master nodes of the common shards are determined in advance by a PoW consensus mechanism. In the initial training iteration of federated learning the ordinary nodes of the common shards are determined randomly; in later iterations the performance score and behavior score of every ordinary node, obtained from the initial iteration of training, determine which common shard each ordinary node belongs to. During the first training iteration, a user issues a federated-learning task through a smart contract. The leader shard constructs a base model from the federated-learning task in the smart contract and issues it to the master node of each common shard. The master node of each common shard then selects several smart terminals to train the base model; each smart terminal packs the gradient data obtained from training into a transaction and submits it to the corresponding common shard. The ordinary nodes in the common shard verify the received transactions; the master node in the common shard extracts the gradient data from the verified transactions, tests each piece of gradient data with a test data set, determines a weight for each piece of gradient data from the test result, aggregates the gradient data into a local model (i.e., local gradient data), packs it into a transaction and submits it. The master node of the leader shard receives the transactions uploaded by the common shards and thereby obtains the local model (i.e., local gradient data) of each common shard; it tests each local model with a test data set, determines a weight for each local model from the test result, and performs weighted aggregation to generate a global model (i.e., global gradient data). If the generated global model does not meet the requirement of the federated-learning task issued by the user, the smart terminals take the global model as the base model of a new training iteration and train it with their local data, and this continues until the finally generated global model meets the requirement of the federated-learning task.
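For illustration only, the following sketch shows the kind of test-weighted aggregation described above for a master node (the leader shard performs the same operation one level up on the local models); weighting each gradient update in proportion to its test accuracy is an assumption, since the disclosure states only that weights are derived from test results before weighted aggregation.

from typing import Callable, Dict, List

import numpy as np


def weighted_aggregate(gradients: List[Dict[str, np.ndarray]],
                       evaluate: Callable[[Dict[str, np.ndarray]], float]
                       ) -> Dict[str, np.ndarray]:
    """Test each gradient update, derive a weight from the result, then aggregate."""
    accuracies = [evaluate(g) for g in gradients]   # test result per update
    total = sum(accuracies) or 1.0                  # avoid division by zero
    weights = [a / total for a in accuracies]
    aggregated = {}
    for name in gradients[0]:
        aggregated[name] = sum(w * g[name] for w, g in zip(weights, gradients))
    return aggregated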
In this embodiment, the ordinary nodes in a common shard need to verify the transactions they receive. In the federated-learning system, the master node of each common shard receives the base model issued by the leader shard and selects several smart terminals to train it, and each smart terminal trains the base model with its local data. Because the local data of the smart terminals differ, the gradient data they obtain also differ. Each smart terminal packs its gradient data into a transaction and sends it to the corresponding common shard. When the common shard receives a transaction uploaded by a smart terminal, the ordinary nodes in that shard verify it; only after the transaction has passed verification (it is a legal transaction once more than a certain number of ordinary nodes in the shard have verified it successfully) can its gradient data be used in the aggregation that produces the aggregated model. The master node in the common shard records the verification behavior of the ordinary nodes to form the behavior record table of the shard. Here the verification behavior means the behavior of an ordinary node when verifying legal transactions: a node's verification of a transaction may pass or fail, and if an ordinary node fails to verify a legal transaction, that verification is a malicious behavior. The master node of the shard counts the malicious behaviors of each ordinary node and records them in the behavior record table, so the table mainly records the number of malicious behaviors of each ordinary node, and the master node can score the behavior of each ordinary node from the table. Ordinary nodes whose behavior scores are below the threshold have a large impact on shard performance, so when nodes are assigned to the common shards, the ordinary nodes with behavior scores below the threshold are picked out and distributed evenly across the common shards, ensuring that each common shard contains a similar number of low-behavior-score nodes, and the remaining ordinary nodes are distributed evenly across the common shards according to their performance scores. Determining the ordinary nodes of the common shards from both the behavior score and the performance score further reduces the performance difference among the common shards, so that each common shard takes a similar time to execute the same task, and the overall working speed of the system is improved.
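For illustration only, the following sketch shows one way the master node of the leader shard could carry out this reassignment; the round-robin placement after sorting is an assumption, since the disclosure requires only that the low-scoring nodes, and then the remaining nodes ordered by performance score, are spread evenly across the common shards.

from typing import Dict, List


def reassign_nodes(behavior_scores: Dict[str, int],
                   performance_scores: Dict[str, int],
                   num_shards: int,
                   behavior_threshold: int) -> List[List[str]]:
    """Re-determine the ordinary nodes of each common shard."""
    shards: List[List[str]] = [[] for _ in range(num_shards)]
    low = sorted(n for n, s in behavior_scores.items() if s < behavior_threshold)
    low_set = set(low)
    rest = [n for n in behavior_scores if n not in low_set]
    # First spread the low-behavior-score nodes evenly across the shards.
    for i, node in enumerate(low):
        shards[i % num_shards].append(node)
    # Then spread the remaining nodes evenly, ordered by performance score.
    rest.sort(key=lambda n: performance_scores[n], reverse=True)
    for i, node in enumerate(rest):
        shards[i % num_shards].append(node)
    return shards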
In an embodiment, the member nodes in the leader shard and the master node in each common shard are determined by a PoW consensus mechanism.
In this embodiment, because the computational and storage load of the member nodes in the leader shard and of the master nodes in the common shards during federated learning is larger than that of the other blockchain nodes, nodes with strong storage and computing performance are selected as the member nodes of the leader shard and the master nodes of the common shards.
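For illustration only, the following sketch shows a proof-of-work puzzle of the kind that favors nodes with strong computing performance: faster hardware finds a valid nonce sooner. The SHA-256 hash and the leading-zero difficulty target are assumptions, since the disclosure names only a PoW consensus mechanism.

import hashlib


def solve_pow(node_id: str, difficulty: int = 4) -> int:
    """Return the first nonce whose SHA-256 digest has `difficulty` leading hex zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{node_id}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce  # nodes with more compute reach this point sooner
        nonce += 1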
In an embodiment, building, by the master node in a common shard, a behavior record table based on the verification behaviors of the ordinary nodes in the same common shard comprises:
the master node in each common shard receives a base model issued by the leader shard and distributes it to a plurality of smart terminals, the base model being constructed by the leader shard from the federated-learning task in a smart contract;
the smart terminals train the base model to obtain corresponding gradient data, pack the gradient data into a plurality of transactions and submit the transactions to the common shard;
and the ordinary nodes in the common shard verify the plurality of transactions, and the master node in the same common shard records the verification behavior of each ordinary node to build the behavior record table.
In this embodiment, the verification behavior of an ordinary node in a common shard refers to its behavior when verifying received transactions, which has two possible outcomes: verification passed or verification failed. During federated learning the ordinary nodes of a common shard receive a large number of transactions, and if the master node built the behavior record table from the verification behavior of the ordinary nodes on all transactions, the workload of the system would grow greatly. Since basing the behavior score on only part of the transactions does not affect its accuracy, in order to reduce the workload of the master node in the common shard, the behavior record table is built only from the transactions packed by the smart terminals, that is, the transactions formed by packing the gradient data that the smart terminals obtained by training the base model issued by the master node.
In an embodiment, verifying the plurality of transactions by the ordinary nodes in the common shard and recording, by the master node in the same common shard, the verification behavior of each ordinary node to build the behavior record table comprises:
the ordinary nodes in the common shard verify the plurality of transactions and determine the legal transactions among them;
and the master node in the common shard determines the malicious behavior of each ordinary node from its verification behavior on the legal transactions in the same common shard, and records the malicious behavior of each ordinary node in the behavior record table.
In this embodiment, a legal transaction, that is, a transaction that passes verification in a common shard, can be determined from the verification behavior of all the ordinary nodes in the shard on that transaction. If the verification behavior of every ordinary node on every transaction were recorded in the behavior record table, the storage pressure on the master node of the shard would be enormous. Therefore, to relieve the storage pressure on the master node, the behavior record table only records the malicious behaviors of the ordinary nodes in the shard (an ordinary node is charged with one malicious behavior each time it fails to verify a legal transaction).
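For illustration only, the following sketch shows a behavior record table of the kind described above, kept by the master node of a shard; the scoring formula (a fixed base value minus a penalty per malicious behavior) is an assumption, since the disclosure does not specify how the malicious-behavior count maps to a behavior score.

from collections import defaultdict
from typing import Dict, Iterable


class BehaviorRecordTable:
    """Per-shard record of malicious verification behaviors, kept by the master node."""

    def __init__(self) -> None:
        self.malicious_counts: Dict[str, int] = defaultdict(int)

    def record_legal_transaction(self, rejected_by: Iterable[str]) -> None:
        # Each ordinary node that failed to verify a legal transaction
        # is charged with one malicious behavior.
        for node in rejected_by:
            self.malicious_counts[node] += 1

    def behavior_score(self, node: str, base: int = 100, penalty: int = 10) -> int:
        return max(0, base - penalty * self.malicious_counts[node])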
In one embodiment, verifying the plurality of transactions by the ordinary nodes in the common shard and determining the legal transactions among them comprises:
the ordinary nodes in the common shard verify the plurality of transactions;
and a transaction whose number of successful verifications exceeds a preset count is taken as a legal transaction.
In this embodiment, the ordinary nodes in a common shard verify the received transactions, and the gradient data in the verified transactions is then aggregated to generate the local model (i.e., local gradient data). To guarantee that enough gradient data is used when aggregating the local model, a transaction is determined to be a legal transaction only when the number of ordinary nodes that verified it successfully exceeds a preset count, where the preset count is set in advance.
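For illustration only, the following sketch shows this legality rule; the vote-collection interface (a mapping from node identifier to verification result) is an assumption.

from typing import Dict


def is_legal_transaction(verification_votes: Dict[str, bool], preset_count: int) -> bool:
    """A transaction is legal once more than `preset_count` ordinary nodes verified it."""
    passes = sum(1 for passed in verification_votes.values() if passed)
    return passes > preset_count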
As shown in fig. 2, the present application further discloses a federated learning apparatus based on a sharded blockchain, which comprises:
a behavior score determining module 201, configured to build, through the master node in a common shard, a behavior record table based on the verification behaviors of the ordinary nodes in the same common shard, and to determine a behavior score for each ordinary node based on the behavior record table;
a performance score determining module 202, configured to record the response delay of the ordinary nodes in the same common shard after the master node in the common shard issues a computing task, and to determine a performance score for each ordinary node based on the response delay;
and an ordinary node determining module 203, configured to distribute, through the master node in the leader shard, the ordinary nodes whose behavior scores are below the threshold evenly across the common shards, and to distribute the remaining ordinary nodes evenly across the common shards according to their performance scores.
In one embodiment, the behavior score determining module 201 includes:
the basic model issuing unit is used for receiving the basic models issued by the leader fragments through the main nodes in the common fragments and issuing the basic models to the intelligent terminals, wherein the basic models are constructed by the leader fragments based on the federal learning task in the intelligent contract;
a transaction generating unit, configured to train the base model through the plurality of smart terminals to obtain corresponding gradient data, pack the gradient data into a plurality of transactions and submit the transactions to the common shard;
and a behavior record table building unit, configured to verify the plurality of transactions through the ordinary nodes in the common shard, and to record, through the master node in the same common shard, the verification behavior of each ordinary node to build the behavior record table.
In one embodiment, the behavior score determining module 201 includes:
a legal transaction determining unit, configured to verify the plurality of transactions through the ordinary nodes in the common shard and to determine the legal transactions among them;
and a malicious behavior recording unit, configured to determine, through the master node in the common shard, the malicious behavior of each ordinary node from its verification behavior on the legal transactions in the same common shard, and to record the malicious behavior of each ordinary node in the behavior record table.
In one embodiment, the behavior score determining module 201 includes:
a transaction verifying unit, configured to verify the plurality of transactions through the ordinary nodes in the common shard;
and a legal transaction determining unit, configured to take a transaction whose number of successful verifications exceeds a preset count as a legal transaction.
In one embodiment, an electronic device, specifically a computer device, is provided, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, it implements the steps of any of the above federated learning methods based on a sharded blockchain.
In one embodiment, a computer-readable storage medium is provided, in which a computer program is stored; when executed by a processor, the computer program implements the steps of the federated learning method based on a sharded blockchain.
The foregoing is a preferred embodiment of the present application. It should be noted that those skilled in the art may make various improvements and refinements without departing from the principle of the present application, and such improvements and refinements also fall within the protection scope of the present application.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a ROM (Read-Only Memory), a RAM (Random Access Memory), or the like.

Claims (8)

1. A federated learning method based on a sharded blockchain, characterized by comprising the following steps:
the master node in a common shard builds a behavior record table based on the verification behaviors of the ordinary nodes in the same common shard, and determines a behavior score for each ordinary node based on the behavior record table;
after the master node in a common shard issues a computing task, it records the response delay of the ordinary nodes in the same common shard, and determines a performance score for each ordinary node based on the response delay;
and the master node in the leader shard distributes the ordinary nodes whose behavior scores are below a threshold evenly across the common shards, and distributes the remaining ordinary nodes evenly across the common shards according to their performance scores.
2. The federated learning method based on a sharded blockchain according to claim 1, wherein the member nodes in the leader shard and the master node in each common shard are determined by a PoW consensus mechanism.
3. The federated learning method based on a sharded blockchain according to claim 1, wherein building, by the master node in a common shard, a behavior record table based on the verification behaviors of the ordinary nodes in the same common shard comprises:
the master node in each common shard receives a base model issued by the leader shard and distributes it to a plurality of smart terminals, the base model being constructed by the leader shard from the federated-learning task in a smart contract;
the smart terminals train the base model to obtain corresponding gradient data, pack the gradient data into a plurality of transactions and submit the transactions to the common shard;
and the ordinary nodes in the common shard verify the plurality of transactions, and the master node in the same common shard records the verification behavior of each ordinary node to build the behavior record table.
4. The federated learning method based on a sharded blockchain according to claim 3, wherein verifying the plurality of transactions by the ordinary nodes in the common shard and recording, by the master node in the same common shard, the verification behavior of each ordinary node to build the behavior record table comprises:
the ordinary nodes in the common shard verify the plurality of transactions and determine the legal transactions among them;
and the master node in the common shard determines the malicious behavior of each ordinary node from its verification behavior on the legal transactions in the same common shard, and records the malicious behavior of each ordinary node in the behavior record table.
5. The federated learning method based on a sharded blockchain according to claim 4, wherein verifying the plurality of transactions by the ordinary nodes in the common shard and determining the legal transactions among them comprises:
the ordinary nodes in the common shard verify the plurality of transactions;
and a transaction whose number of successful verifications exceeds a preset count is taken as a legal transaction.
6. A federated learning apparatus based on a sharded blockchain, characterized by comprising:
a behavior score determining module, configured to build, through the master node in a common shard, a behavior record table based on the verification behaviors of the ordinary nodes in the same common shard, and to determine a behavior score for each ordinary node based on the behavior record table;
a performance score determining module, configured to record the response delay of the ordinary nodes in the same common shard after the master node in the common shard issues a computing task, and to determine a performance score for each ordinary node based on the response delay;
and an ordinary node determining module, configured to distribute, through the master node in the leader shard, the ordinary nodes whose behavior scores are below a threshold evenly across the common shards, and to distribute the remaining ordinary nodes evenly across the common shards according to their performance scores.
7. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the federated learning method based on a sharded blockchain according to any one of claims 1 to 5.
8. A computer-readable storage medium in which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the federated learning method based on a sharded blockchain according to any one of claims 1 to 5.
CN202210257885.8A 2022-03-14 2022-03-14 Federated learning method, apparatus, device and medium based on a sharded blockchain Pending CN114611721A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210257885.8A CN114611721A (en) 2022-03-14 2022-03-14 Federal learning method, device, equipment and medium based on partitioned block chain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210257885.8A CN114611721A (en) 2022-03-14 2022-03-14 Federal learning method, device, equipment and medium based on partitioned block chain

Publications (1)

Publication Number Publication Date
CN114611721A true CN114611721A (en) 2022-06-10

Family

ID=81862415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210257885.8A Pending CN114611721A (en) 2022-03-14 2022-03-14 Federal learning method, device, equipment and medium based on partitioned block chain

Country Status (1)

Country Link
CN (1) CN114611721A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115310137A (en) * 2022-10-11 2022-11-08 深圳市深信信息技术有限公司 Secrecy method and related device of intelligent settlement system


Similar Documents

Publication Publication Date Title
CN109508982B (en) Random parallel Byzantine fault-tolerant consensus method of block chain main chain and parallel multiple sub-chains
Cao et al. Toward on-device federated learning: A direct acyclic graph-based blockchain approach
Wang et al. A platform-free proof of federated learning consensus mechanism for sustainable blockchains
CN112257873A (en) Training method, device, system, equipment and storage medium of machine learning model
CN112712182B (en) Model training method and device based on federal learning and storage medium
CN110417558A (en) Verification method and device, the storage medium and electronic device of signature
Yin et al. A blockchain-based incremental update supported data storage system for intelligent vehicles
CN114626547A (en) Group collaborative learning method based on block chain
CN110177124A (en) Identity identifying method and relevant device based on block chain
CN109447803B (en) Alliance chain accounting method, equipment, alliance chain and storage medium
CN114244835A (en) Decentralized self-adaptive collaborative training method and device based on block chain
Wang et al. Blockchain assisted federated learning for enabling network edge intelligence
CN114611721A (en) Federated learning method, apparatus, device and medium based on a sharded blockchain
CN112118138A (en) System and method for implementing block chain consensus mechanism
CN107181774A (en) Data movement between distributive data center
Ma et al. Blockchain-escorted distributed deep learning with collaborative model aggregation towards 6G networks
Cui et al. A secure and decentralized DLaaS platform for edge resource scheduling against adversarial attacks
Yang et al. A hybrid consensus algorithm for master–slave blockchain in a multidomain conversation system
CN111695701B (en) System for realizing data set construction processing based on federal learning and construction generation method thereof
Wu et al. Virtual-time-accelerated emulation for blockchain network and application evaluation
Al-Musharaf et al. Improving blockchain consensus mechanism via network clusters
CN115310137A (en) Secrecy method and related device of intelligent settlement system
CN115687526A (en) Seismic data model sharing method based on block chain and federal learning
Zhang et al. TBDD: A New Trust-based, DRL-driven Framework for Blockchain Sharding in IoT
Feng et al. Crbft: An optimized blockchain algorithm for edge-based iot system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination