CN113704810A - Federated learning oriented cross-chain consensus method and system - Google Patents

Federated learning oriented cross-chain consensus method and system

Info

Publication number
CN113704810A
Authority
CN
China
Prior art keywords
cluster
consensus
module
cross
update
Prior art date
Legal status
Granted
Application number
CN202110497514.2A
Other languages
Chinese (zh)
Other versions
CN113704810B (en)
Inventor
肖江
戴小海
李辉楚吴
余辰
金海
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Publication of CN113704810A
Priority to US17/644,425 (published as US20220318688A1)
Application granted
Publication of CN113704810B
Legal status: Active
Anticipated expiration

Classifications

    • G06F 21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G06F 16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 21/64: Protecting data integrity, e.g. using checksums, certificates or signatures
    • G06N 20/00: Machine learning
    • G06N 20/20: Ensemble learning
    • G06N 7/01: Probabilistic graphical models, e.g. probabilistic networks
    • G06Q 40/04: Trading; exchange, e.g. stocks, commodities, derivatives or currency exchange


Abstract

The invention relates to a federal learning oriented cross-chain consensus method and system, wherein the method at least comprises the following steps: performing single-chain federal learning in the cluster and counting local update information; sending update consensus information to the second federation for cross-cluster gradient exchange; receiving a confirmation result of the cross-cluster gradient update consensus fed back by the second federation; and updating a local model based on the confirmation result. According to the method, after the update consensus is achieved, rewards and punishments are applied according to the contribution of each cluster representative, which incentivizes the cluster representatives among the computing nodes to vote honestly, so that the participants actively assist in model updates.

Description

Federated learning oriented cross-chain consensus method and system
Technical Field
The invention relates to the technical field of federal learning, and in particular to a federal learning oriented cross-chain consensus method and system.
Background
With the growing importance of privacy and security in recent years, federal learning has been widely applied in fields with high requirements for data privacy, such as finance, medical care, insurance, credit investigation, services, automatic driving and indoor positioning. For example, patent CN112418520A discloses a credit card transaction risk prediction method based on federal learning, for transaction risk prediction in the financial field. Patent CN112201342A discloses a method, device, equipment and storage medium for medical auxiliary diagnosis based on federal learning, for medical auxiliary diagnosis in the medical field; patent CN112446791A discloses a car insurance scoring method, device, equipment and storage medium based on federal learning, for scoring in the insurance field; patent CN112153650A discloses a reliable federal learning method and system based on terminal reputation in wireless networks, for federal learning in the credit investigation field; patent CN111899076A discloses an aviation service customization system and method based on a federal learning technology platform, for customizing aviation services in the service field; patent CN111290381A discloses a federal learning experimental system based on unmanned vehicles, for federal learning in the field of automatic driving; patent CN110632554A discloses an indoor positioning method, device, terminal device and medium based on federal learning, for federal learning applications in the indoor positioning field.
Due to industry competition, privacy and security concerns, complex administrative procedures and similar problems, data is difficult to share even among different departments of the same company, creating the problem of data islands. Federal learning allows all participants to perform machine learning under the coordination of a cooperative server without the data ever leaving the local premises, achieving an effect equivalent to machine learning on a centralized data set.
For such a central-server-based collaborative learning architecture, there are currently four major challenges: (1) when the coordination server crashes, the federal learning of the participating parties may terminate; (2) a malicious coordination server can reversely infer the distribution of the original data from the update information provided by each node, so there is a risk of privacy disclosure; (3) some malicious participants can poison the global model by submitting poor-quality update parameters; (4) due to the lack of an incentive mechanism, the participants have no motivation to actively assist in model updates. An ideal collaborative learning framework should be decentralized, tamper-proof, and equipped with an incentive mechanism to guarantee continuous updates. Since these requirements are exactly the dominant properties of blockchains, Blockchain-based Federated Learning (BFL) has naturally received great attention.
The Tsinghua Shenzhen International Graduate School team provides a decentralized control function and a key management function using smart contracts, so that the federal learning system cannot be stopped by the crash of a single node. For example, patent CN111212110A discloses a blockchain-based federal learning system and method; the system includes: a model training module for updating a machine learning model in the federal learning process and aggregating the change values of the machine learning model; a smart contract module for providing the decentralized control function and the key management function during federal learning; and an IPFS-protocol-based storage module for providing a decentralized storage mechanism for the intermediate information of the federal learning process. The model training module, the blockchain-based smart contract module and the IPFS-based storage module run simultaneously on each node participating in federal learning. Complete decentralization of the whole system is realized: the failure and exit of any node does not prevent the other nodes from continuing federal learning, so the robustness is stronger.
The Zhejiang University of Technology team builds a noise committee in each round of training based on blockchain technology. The noise committee members add noise to the local models, making it difficult for malicious nodes to infer the original data feature distributions. For example, CN112434280A discloses a blockchain-based federal learning defense method, which includes: the participants establish smart contracts with an authority; the registered participants obtain models from the blockchain, carry out local training, upload the trained local models and corresponding training times to the corresponding block nodes, and broadcast them to the blockchain; a noise committee is constructed for each registered participant, and the noise committee adds noise to the local model of the corresponding registered participant to obtain an updated model; a verification committee is established for all registered participants, the prediction reliability and authenticity of each updated model is verified according to the data set and the training time, and the updated models that pass verification are recorded in a new block node; and the authority acquires all verified updated models from the block nodes and aggregates them to obtain an aggregated model, which is broadcast to the blockchain for the next round's registered participants to download for local training.
The WeBank team provides a double-committee verification mechanism, so that poisoning attacks by malicious nodes can be discovered and avoided in time. For example, CN111723946A discloses a federal learning method and apparatus applied to a blockchain, in which: a first committee node acquires first local model information from any non-committee node; the first committee node determines its first verification result for the non-committee node according to its local verification data set and the first local model information; the first committee node sends the first verification result to each second committee node; and if the first committee node determines that the committee nodes agree on the first local model information, the federal learning model is updated at least according to the first local model information. When the method is applied to financial technology (Fintech), the federal learning model is only trained with first local model information that has reached agreement, so that collusion among blockchain nodes can be discovered in time.
The Shandong Inspur Artificial Intelligence Research Institute team provides an incentive method that rewards participants according to the training data they provide during training. For example, CN110827147A discloses a consortium-chain-based federal learning incentive method in the technical field of blockchains, whose technical scheme is: a consortium chain is constructed by transaction subjects and an operation subject; the user groups of the transaction subjects on the consortium chain are encrypted and aligned, and a common user group and common feature dimensions are determined; the operation subject trains the machine learning model using the determined common user group and common feature dimensions until the loss function converges and model training is complete; and the operation subject scores the credit of the behavior generated on the consortium chain, maps the credit points to transaction costs, and incentivizes each transaction subject to maintain the ledger through the transaction costs. As another example, CN111125779A discloses a blockchain-based federal learning method and apparatus, which includes: determining a blockchain; the coordinator node creates a federal learning task according to the original model data sent by each participant node; training data obtained by local training of the participant nodes is received; parameters to be updated are sent to the other participant nodes according to the training data, so that the other participant nodes update their model parameters accordingly; and after model training is finished, reward resources are issued according to the training data provided by each participant node during training, and the rewards are written into the blockchain. Compared with the traditional mode, the mutual-trust problem of all parties is effectively solved; all parties participating in federal learning jointly negotiate to generate the coordinator node, improving the transparency of the process; the whole-process federal learning data is recorded in the blockchain, ensuring the traceability of data operations; and all parties are encouraged to participate actively through reward resources, improving their enthusiasm.
However, the above prior art approaches do not take into account the low consensus and learning efficiency caused by the large number of participants faced by the BFL architecture itself. When the data provided by existing participants is sparse and insufficient to support the training task of the federated learning model, a straightforward approach is to expand the number of participants. Since all participants need to communicate with each other by broadcasting, the communication frequency required for achieving consensus increases with the number of participants, resulting in high communication overhead and low consensus efficiency.
Furthermore, on the one hand, there are differences in understanding among those skilled in the art; on the other hand, the inventors studied a large number of documents and patents when making the present invention, and space does not permit listing all of their details and contents. However, the present invention is by no means lacking these prior art features; on the contrary, the present invention already possesses all the features of the relevant prior art, and the applicant reserves the right to add related prior art to the background.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a federal learning oriented cross-chain consensus method, which at least comprises the following steps:
performing single-chain federal learning in the cluster and counting local update information;
sending updated consensus information to the second federation for cross-cluster gradient exchange;
receiving a confirmation result of the cross-cluster gradient update consensus fed back by the second federation;
and updating a local model based on the confirmation result.
Preferably, the method for performing intra-cluster single-chain federal learning and counting local update information includes:
and sending the local updating information calculated based on the BFL model in the first federation to a first computing node in the cluster so as to carry out updating fusion consensus in the cluster.
When the coordination server crashes, the federal learning of the participants of the present invention does not terminate. Even when a small proportion of the computing nodes crash or act maliciously, the consensus mechanism of the invention can still maintain data and operation consistency across the majority of computing nodes. The blockchain-system-based federal learning of the present invention enables decentralized model updating and therefore does not terminate when a coordination server crashes.
Preferably, the method further comprises: under the condition that a cluster representative in a first federation sends the updated information after fusion consensus to a second federation, at least one second computing node in the second federation carries out second verification consensus on the updated information and feeds back a confirmation result of the cross-cluster gradient update consensus to the first federation. The main technical means adopted by the invention comprise: (1) performing cluster segmentation based on organization; (2) model exchange between clusters only contains fused data, so that the privacy of a single computing node is hidden.
In the prior art, a malicious coordination server can reversely infer the distribution of the original data from the update information provided by each computing node, creating a risk of privacy disclosure; the method and system avoid this risk. In the prior art, the single-cluster federal learning technique constructs all computing nodes into one large cluster, and model updates must be broadcast among all computing nodes of that cluster. Aiming at this defect, the multi-cluster cross-federation consensus mechanism constructs the computing nodes of the same organization into a number of small clusters, and only fused model updates are exchanged among the small clusters, so that the model data of a single computing node is hidden to a certain extent and privacy leakage of a single computing node is avoided.
Preferably, the method further comprises: the cluster representative in the first federation judges the local update result and the non-local update result fed back by the second federation according to a fusion judgment mechanism, to obtain a judgment result on whether to update the local model.
Preferably, the method for performing intra-cluster federal learning and achieving consensus further comprises:
randomly selecting a computing node in each federal cluster as a cluster representative to participate in a cross-federal consensus process;
the cluster representative votes based on the verification results of at least two verifications and determines whether to accept the update consensus;
after the update consensus is achieved, rewards and punishments are applied based on the consensus results of the at least two verifications by the cluster representative.
Preferably, the selection mode of the cluster representation is as follows:
the tenure of each cluster representative is r rounds, and when the tenure ends, each computing node in the cluster applies to become the next-term representative by issuing a blockchain transaction; the candidates are sorted according to mortgaged asset value, and the top-t candidates form a nomination pool; a representative is selected from the nomination pool by a random algorithm.
Preferably, the voting based on the verification results of the at least two verifications and the determining whether to accept the update consensus comprises:
the cluster representative performs a first verification at round r-1 and a second verification at round r, and votes to accept all past operations if the second verification result is better than the first.
Preferably, the step of awarding rewards and punishments based on the consensus results of at least two verifications by the cluster representative comprises: denoting the consensus results of the two verifications by the cluster representative as t_a and t_b respectively;

according to t_a, t_b and the mortgaged asset value provided by the cluster representative, the cluster representative is rewarded or punished:

$$ rp = \begin{cases} -v, & t_b - t_a > \lambda \\ 0, & 0 < t_b - t_a \le \lambda \\ \mu v, & t_a > t_b \end{cases} $$

When t_b is greater than t_a and the difference is greater than λ, the operation of the cluster representative is useless and the mortgage it provided is confiscated;

when t_b is greater than t_a but the difference is less than λ, there are some errors in the operation of the cluster representative; the result is useless, but its mortgage is not confiscated;

when t_a is greater than t_b, the operation of the cluster representative contributes to the model update, and a reward in a certain proportion is given according to the mortgaged asset value provided by the cluster representative and its contribution. Here rp represents the reward/punishment result, v the mortgaged asset value, λ the result gap threshold, and μ the reward proportion.
The invention also provides a federal learning oriented cross-chain consensus system, which at least comprises: a cross-cluster federal learning module and a cross-cluster consensus module, wherein a first computing node in a first federation performs federal learning in the cluster and counts local update information based on the cross-cluster federal learning module,
the cross-cluster consensus module sends update consensus information of the local update information to a second federation,
the cross-cluster consensus module receives a confirmation result of the cross-cluster gradient update consensus fed back by the second federation;
and the cross-cluster federal learning module updates a local model based on the confirmation result.
Preferably, the cross-cluster consensus module further comprises a fusion mechanism module, and the fusion mechanism module executes program steps comprising: and the cluster representative in the first federation judges the local updating result and the non-local updating result fed back by the second federation according to a fusion judging mechanism to obtain a judging result of whether to update the local model.
In the prior art, some malicious participants can poison the global model by submitting poor-quality update parameters. Against this defect, the method introduces a fusion mechanism module into the consensus process: before update parameters are accepted, they are first judged locally, and blockchain consensus is performed based on the judgment results of the computing nodes. Only update parameters that pass the consensus are accepted by each computing node. Poor update parameters cause the judgment result of each honest node to be "not accepted", so the final consensus result is also "not accepted".
Preferably, the program executed by the cross-cluster federal learning module includes: and sending the local updating information calculated based on the BFL model in the first federation to a first computing node in the cluster so as to carry out updating fusion consensus in the cluster.
Preferably, under the condition that a cluster representative in a first federation sends the updated information after fusion consensus to a second federation, the cross-cluster consensus module of at least one second computing node in the second federation performs second verification consensus on the updated information and feeds back a confirmation result of the cross-cluster gradient update consensus to the first federation.
Preferably, the first fusion judgment module judges the local update result and the non-local update result fed back by the second federation according to a fusion judgment mechanism, to obtain a judgment result on whether to update the local model.
Preferably, the cross-cluster consensus module comprises a representative election module, a consensus module, and an incentive/penalty module.
The representative election module randomly selects a computing node in each federal cluster as a cluster representative to participate in a cross-federal consensus process;
the consensus module determines whether to accept the updated consensus based on a result of the cluster representative voting based on the verification results of the at least two verifications,
after updating the consensus, the incentive/penalty module awards and penalizes based on at least two verified consensus results represented by the cluster.
Preferably, the program steps performed by the representative election module are:
the tenure of each cluster representative is r rounds, and when the tenure ends, each computing node in the cluster applies to become the next-term representative by issuing a blockchain transaction;
the candidates are sorted according to the mortgaged asset value, and the top-t candidates form a nomination pool; a representative is selected from the nomination pool by a random algorithm.
According to the method, after the update consensus is achieved, rewards and punishments are applied according to the contribution of each cluster representative, which incentivizes the cluster representatives among the computing nodes to vote honestly, so that the participants actively assist in model updates.
Preferably, the consensus module performs the program steps of: the cluster representative performs a first verification at round r-1 and a second verification at round r, and votes to accept all past operations if the second verification result is better than the first.
Preferably, the incentive/penalty module performs the program steps of:
denoting the consensus results of the two verifications by the cluster representative as t_a and t_b respectively;

according to t_a, t_b and the mortgaged asset value provided by the cluster representative, the cluster representative is rewarded or punished:

$$ rp = \begin{cases} -v, & t_b - t_a > \lambda \\ 0, & 0 < t_b - t_a \le \lambda \\ \mu v, & t_a > t_b \end{cases} $$

When t_b is greater than t_a and the difference is greater than λ, the operation of the cluster representative is useless and the mortgage it provided is confiscated;

when t_b is greater than t_a but the difference is less than λ, there are some errors in the operation of the cluster representative; the result is useless, but its mortgage is not confiscated;

when t_a is greater than t_b, the operation of the cluster representative contributes to the model update, and a reward in a certain proportion is given according to the mortgaged asset value provided by the cluster representative and its contribution. Here rp represents the reward/punishment result, v the mortgaged asset value, λ the result gap threshold, and μ the reward proportion.
Drawings
FIG. 1 is a logical schematic of cross-federal learning between federal clusters of two medical internet of things of the present invention;
FIG. 2 is a schematic diagram of the cross-federal cluster gradient fusion of the present invention;
FIG. 3 is a cross-federated cluster consensus workflow diagram of the present invention;
fig. 4 is a schematic structural diagram of one embodiment of the present invention.
List of reference numerals
10: a first compute node; 20: a first CFL module; 21: a first learning module; 22: a first update module; 30: a first CC module; 31: a first representative election module; 32: a first consensus module; 33: a first incentive/penalty module; 34: a first fusion judgment module; 40: a second computing node; 50: a second CFL module; 51: a second learning module; 52: a second update module; 60: a second CC module; 61: a second representative election module; 62: a second consensus module; 63: a second incentive/penalty module; 64: and a second fusion judgment module.
Detailed Description
The following detailed description is made with reference to the accompanying drawings.
Aiming at the defects of the prior art, the invention provides a federal learning-oriented cross-chain consensus method and a system.
The invention discloses a federal learning oriented cross-chain consensus system, which at least comprises a CFL module (cross-cluster federal learning module) and a CC module (cross-cluster consensus module). Each computing node is provided with both a CFL module and a CC module. The hardware of the CFL module and the CC module in the invention can be one or more of an application-specific integrated chip, a server and a server group.
In the present invention, the CFL module refers to a Cross-cluster Federated Learning module. The CFL module is provided with a BFL model for single-federation cluster learning and a CFL model for cross-federation cluster learning.
The Cross-Cluster Consensus module may be referred to as the CC module for short. The CC module is arranged to achieve efficient and secure cross-federation cluster learning using blockchain cross-chain technology.
A cluster refers to the group of computing nodes owned by a single participant: for example, in the medical internet of things, all the sensors, servers and computers in a certain hospital.
The BFL model is described below.
Suppose there are K computing nodes in a cluster, and the k-th node needs to process s_k samples, 1 ≤ k ≤ K. Let w_k be the parameters of the learning model; the goal of training the local learning model is to minimize the objective function g(w_k), as follows:

$$ g(w_k) = \frac{1}{s_k} \sum_{i=1}^{s_k} f(x_i, y_i; w_k) \qquad (1) $$

where f(x_i, y_i; w_k) represents the loss function, x_i and y_i respectively represent sample i and the corresponding label, and w_k represents the parameters of the learning model. According to the method, the model performance of each federal cluster can be improved by minimizing the federal learning objective function g(w). Let $s = \sum_{k=1}^{K} s_k$ be the total number of samples in the cluster; g(w) is calculated as follows:

$$ g(w) = \sum_{k=1}^{K} \frac{s_k}{s} \, g(w_k) \qquad (2) $$
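As an illustrative sketch (not part of the patent), formulas (1) and (2) can be evaluated in Python as follows; the squared-error loss standing in for f, and all function names, are assumptions:

```python
import numpy as np

def local_objective(X, y, w):
    """Formula (1): mean loss over the node's s_k samples; the squared
    error (x_i . w - y_i)^2 is an assumed stand-in for the loss f."""
    return np.mean((X @ w - y) ** 2)

def cluster_objective(datasets, w_list):
    """Formula (2): sample-count-weighted sum of the K local objectives."""
    s = sum(len(y) for _, y in datasets)  # total samples s in the cluster
    return sum((len(y) / s) * local_objective(X, y, w)
               for (X, y), w in zip(datasets, w_list))
```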
preferably, the learning method of the BFL model of the present invention at least includes: BFL gradient fusion and BFL model fusion.
BFL gradient fusion refers to: each computing node uploads its local model gradient to the chain, the fusion gradient is computed after consensus, and the local learning model is updated with the fusion gradient.

Specifically, each computing node computes the local model gradient at round t:

$$ \nabla g(w_k^t) = \frac{1}{s_k} \sum_{i=1}^{s_k} \nabla f(x_i, y_i; w_k^t) \qquad (3) $$

Since there is no centralized server for data fusion in the CFL module, each computing node fuses its local gradient with the gradients of the other nodes by formula (4), obtaining the BFL fusion gradient at round t:

$$ \nabla g(w^t) = \sum_{k=1}^{K} \frac{s_k}{s} \, \nabla g(w_k^t) \qquad (4) $$

Let η denote the learning rate; the parameter model obtained after the t-th round of training is as follows:

$$ w^{t+1} = w^t - \eta \, \nabla g(w^t) \qquad (5) $$
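A minimal Python sketch of the gradient fusion of formulas (4) and (5); the NumPy representation and the function names are illustrative assumptions, not part of the patent:

```python
import numpy as np

def bfl_fuse_gradients(local_grads, sample_counts):
    """Formula (4): fuse local gradients, each weighted by the node's
    share s_k / s of the cluster's samples."""
    s = sum(sample_counts)
    return sum((s_k / s) * g for g, s_k in zip(local_grads, sample_counts))

def bfl_step(w, local_grads, sample_counts, eta=0.01):
    """Formula (5): one gradient-descent step with the fused gradient."""
    return w - eta * bfl_fuse_gradients(local_grads, sample_counts)

# Three nodes holding 100, 50 and 150 samples report their local gradients.
w = np.zeros(4)
grads = [np.ones(4), 2 * np.ones(4), 0.5 * np.ones(4)]
w = bfl_step(w, grads, [100, 50, 150])
```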
preferably, a federal learning model updating module is arranged in the CFL module, and may be one or more of an application specific integrated chip, a server and a server group.
A federal learning model update module: the method aims to jointly train the local model in each node by fusing model parameters of each calculation node in the federation.
BFL model fusion refers to: each computing node records the updated model on the blockchain and updates the local model through the fusion module.

Specifically, each computing node first updates its local model by formula (6), and the models are then fused by formula (7):

$$ w_k^{t+1} = w_k^t - \eta \, \nabla g(w_k^t) \qquad (6) $$

$$ \bar{w}^{t+1} = \sum_{k=1}^{K} \frac{s_k}{s} \, w_k^{t+1} \qquad (7) $$

The local learning model on each computing node is then updated:

$$ w_k^{t+1} \leftarrow \bar{w}^{t+1} \qquad (8) $$
in order to realize the cross-federal cluster learning, the CFL model fusion method at least comprises a CFL gradient fusion method and a CFL model fusion method.
CFL gradient fusion method: assume that M BFL clusters participate in the learning process, with n_m samples within the m-th cluster, 1 ≤ m ≤ M, and let $n = \sum_{m=1}^{M} n_m$. The cross-cluster fusion gradient is then calculated as:

$$ \nabla g(w^t)_{\mathrm{CFL}} = \sum_{m=1}^{M} \frac{n_m}{n} \, \nabla g(w_m^t) \qquad (9) $$

where ∇g(w_m^t) is the fusion gradient of each cluster calculated according to formula (4). Finally, substituting ∇g(w^t)_CFL for the ∇g(w^t) obtained during the t-th round of learning in formula (5) implements the multi-federation cluster update.

The CFL model fusion method: the CFL fused model $\bar{w}^{t+1}_{\mathrm{CFL}}$ is calculated as in formula (10); replacing $\bar{w}^{t+1}$ of formula (8) with $\bar{w}^{t+1}_{\mathrm{CFL}}$ realizes CFL cross-federation cluster model fusion:

$$ \bar{w}^{t+1}_{\mathrm{CFL}} = \sum_{m=1}^{M} \frac{n_m}{n} \, \bar{w}_m^{t+1} \qquad (10) $$
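The cross-cluster fusion of formulas (9) and (10) is the same weighted average taken over clusters instead of nodes. A hypothetical sketch, assuming n_m counts each cluster's samples:

```python
import numpy as np

def cfl_fuse(cluster_values, cluster_sizes):
    """Formulas (9)/(10): weight each cluster's fused gradient (or fused
    model) by its share n_m / n of the total sample count."""
    n = sum(cluster_sizes)
    return sum((n_m / n) * v for v, n_m in zip(cluster_values, cluster_sizes))

# Fused gradients reported by the representatives of two federations.
grad_A = np.array([0.2, -0.1])
grad_B = np.array([0.4, 0.3])
cfl_grad = cfl_fuse([grad_A, grad_B], [300, 100])   # formula (9)
# Substituting cfl_grad into formula (5) gives the multi-federation
# update; the same call fuses the clusters' models for formula (10).
```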
In order to further reduce the communication overhead of the CFL module and ensure the effectiveness of updating the CFL model, the CFL module further comprises a fusion judgment module which can be one or more of an application specific integrated chip, a server and a server group.
The fusion judgment module judges, based on a fusion judgment mechanism, whether the CFL cross-cluster fusion gradient $\nabla g(w^t)_{\mathrm{CFL}}$ or the CFL fused model $\bar{w}^{t+1}_{\mathrm{CFL}}$ of each learning round is available for local model updates.
The judgment method of the fusion judgment mechanism comprises the following steps:
each computing node in the cluster verifies the received CFL cross-cluster fusion gradient $\nabla g(w^t)_{\mathrm{CFL}}$ or CFL fused model $\bar{w}^{t+1}_{\mathrm{CFL}}$ and records the result of the verification (either agreeing or disagreeing to the update) on the local chain. If the percentage of nodes in a cluster that agree to the update exceeds a threshold δ, the cluster agrees to the update.
The cluster then uploads the updated decision to the representative nodes of the other clusters via the randomly elected representative node.
And finally, each representative node judges whether to adopt the round of updating according to the decision results of other clusters and the local verification condition.
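A sketch of this two-level judgment under assumed encodings (votes as 0/1, remote cluster decisions as booleans; the helper names and the default δ are hypothetical):

```python
def cluster_agrees(votes, delta=0.5):
    """A cluster agrees to the update when the fraction of its computing
    nodes that voted 'agree' exceeds the threshold delta."""
    return sum(votes) / len(votes) > delta

def adopt_round_update(local_votes, remote_decisions, delta=0.5):
    """A representative adopts this round's update only if its own
    cluster agrees and the decisions uploaded by the representatives of
    the other clusters agree as well."""
    return cluster_agrees(local_votes, delta) and all(remote_decisions)

# 4 of 5 local nodes agree, and both remote clusters reported agreement.
assert adopt_round_update([1, 1, 1, 1, 0], [True, True])
```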
The CC module includes at least three modules: the representative election module, the consensus module and the incentive/penalty module. The hardware of the representative election module, the consensus module and the incentive/penalty module can be one or more of an application-specific integrated chip, a server and a server group.
A representative election module: this module randomly selects a computing node in each federation cluster to serve as the cluster representative participating in the cross-federation consensus process. The federation cluster elects a representative once every r rounds of the consensus process, i.e., the tenure of each representative is r rounds. At the end of the tenure, each computing node in the cluster may apply to become the next-term representative by issuing a blockchain transaction. Candidates must mortgage certain assets when applying.
Suppose the value of the asset mortgaged by the i-th candidate is v_i. All candidates are sorted according to v_i, and the top-t candidates form a nomination pool. Finally, a representative is selected from the nomination pool by a random algorithm.
The random algorithm used by the present invention determines the representative by taking the hash value of the last block modulo t. All transaction records are recorded in the blockchain, making the representative selection process verifiable and traceable.
The data format of the transaction record is <type, promoter_id, mortgage, term_num, sign>. type is an enumeration variable referring to the transaction type; promoter_id is the unique identity tag of the transaction proposer; mortgage, term_num and sign denote the mortgaged value, the current consensus period and the signature of the node, respectively.
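A hypothetical Python sketch of the election: applications carry the documented record fields, the top-t mortgages form the nomination pool, and the last block's hash modulo the pool size selects the representative. The dictionary encoding and the choice of SHA-256 are assumptions:

```python
import hashlib

def elect_representative(applications, t, last_block_hash):
    """applications: candidate transactions in the documented format
    <type, promoter_id, mortgage, term_num, sign>. The t candidates with
    the highest mortgage form the nomination pool; the hash of the last
    block modulo the pool size then picks the representative, so the
    selection is verifiable from on-chain data."""
    pool = sorted(applications, key=lambda a: a["mortgage"], reverse=True)[:t]
    h = int(hashlib.sha256(last_block_hash).hexdigest(), 16)
    return pool[h % len(pool)]["promoter_id"]

apps = [
    {"type": "apply", "promoter_id": "n1", "mortgage": 50, "term_num": 7, "sign": "s1"},
    {"type": "apply", "promoter_id": "n2", "mortgage": 80, "term_num": 7, "sign": "s2"},
    {"type": "apply", "promoter_id": "n3", "mortgage": 65, "term_num": 7, "sign": "s3"},
]
rep = elect_representative(apps, t=2, last_block_hash=b"<hash of latest block>")
```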
The consensus mode of the consensus module consists of r-1 rounds of CFL update fusion and 1 round of the Two-Phase Cross-chain Consensus mechanism (2PCC), achieving secure cross-chain federal learning. The two phases are a preparation phase and a confirmation/rollback phase.
Preparation phase: in this phase each cluster representative performs two verifications: the model is verified for the first time over the previous r-1 rounds, and the update of round r is verified a second time. If the second verification result is better than the first, the cluster representative votes to accept all past operations; otherwise, the cluster representative votes to reject all past operations. In addition, each cluster elects a cluster representative during the preparation phase.
The transaction data format of this phase is <type, promoter_id, grad, hash_sample, round_num, sign>, where round_num is the number of update rounds and grad is the update gradient. If the model-updating method is adopted, the grad field can be changed to a model field to carry the updated model.
Confirmation/rollback phase: this phase determines whether to accept the current update operation based on the results of the preparation phase. Specifically, if the consensus results of both the local and the remote clusters are "accept", the operations of the past cycle are confirmed and the update is applied. Otherwise, the operations of the previous cycle are not accepted and the models roll back to the previous state. Note that a rollback is not a branch on the blockchain: all past operations are retained on the chain.
The transaction data format of this phase is <type, format, body, round_num, sign>, where body is the verification content (e.g., the verification gradient) and format is an enumeration variable indicating the type of body.
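A compact sketch of the two phases, under the assumption (not stated in the patent) that verification results are losses where lower is better; all names are illustrative:

```python
def prepare_vote(first_result, second_result):
    """Preparation phase: a representative verifies the model of the
    previous r-1 rounds (first_result) and the round-r update
    (second_result), and votes to accept all past operations only if
    the second verification is better -- here, a lower loss."""
    return second_result < first_result

def confirm_or_rollback(local_vote, remote_votes, model, snapshot):
    """Confirmation/rollback phase: confirm the past cycle's operations
    only if the local and all remote clusters accepted; otherwise
    restore the previous model state. The rollback does not branch the
    chain -- all past operations remain recorded."""
    return model if (local_vote and all(remote_votes)) else snapshot
```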
The incentive/penalty module is used for urging the cluster representatives to act honestly through rewards and punishments. In the preparation phase, each cluster representative provides two verification results to participate in the consensus. The consensus results of the two verifications by a given cluster representative can be denoted t_a and t_b respectively. The incentive/penalty module rewards or punishes the cluster representative according to t_a, t_b and the mortgage v provided by this representative:

$$ rp = \begin{cases} -v, & t_b - t_a > \lambda \\ 0, & 0 < t_b - t_a \le \lambda \\ \mu v, & t_a > t_b \end{cases} $$

When t_b is greater than t_a and the difference is greater than λ, the representative's operation can be considered useless and the mortgage it provided is confiscated; when t_b is greater than t_a but the difference is less than λ, there are some errors in the representative's operation, the result is useless, but its mortgage is not confiscated; when t_a is greater than t_b, the representative's operation contributes to the model update, and a reward in a certain proportion μ can be given according to the mortgage it provided and the contribution amount. λ refers to the result gap threshold; rp represents the reward/punishment result, v the mortgaged asset value, and μ the reward proportion.
The transaction data format of this phase is <type, promoter_id, app_tx_id, value, sign>, where app_tx_id denotes the previously proposed transaction application tag, and value is the reward/punishment result rp.
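A direct transcription of the piecewise rule into Python; treating t_a and t_b as losses (lower is better) and the default λ and μ values are assumptions:

```python
def reward_punishment(t_a, t_b, v, lam=0.05, mu=0.1):
    """t_a / t_b: consensus results of the representative's first and
    second verifications (treated here as losses, lower is better);
    v: mortgaged asset value; lam: result gap threshold; mu: reward
    proportion. The lam and mu defaults are arbitrary placeholders."""
    if t_b - t_a > lam:    # clearly worse: the mortgage is confiscated
        return -v
    if t_b >= t_a:         # slightly worse: useless, but mortgage kept
        return 0.0
    return mu * v          # t_a > t_b: contributed, proportional reward

assert reward_punishment(t_a=0.30, t_b=0.25, v=100) == 10.0   # reward
assert reward_punishment(t_a=0.30, t_b=0.40, v=100) == -100   # confiscation
```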
As shown in fig. 2-4, a federal learning oriented cross-chain consensus system is presented.
In the federally-learning-oriented cross-chain consensus system, a plurality of first computing nodes 10 in a first federation A and a plurality of second computing nodes 40 in a second federation B establish communication connection in a wired or wireless mode.
A number of first computing nodes 10 in the first federation A establish communication connections with each other for intra-cluster federal learning within the first federation. A first CFL module 20 and a first CC module 30 are provided within the first computing node 10.
The first CFL module 20 includes at least a first learning module 21. The first learning module 21 is internally provided with a BFL model for single federal cluster learning and a CFL model for cross-federal cluster learning, and can execute a program of the federal cluster learning model and update a local learning model.
Preferably, the first CFL module 20 further includes a first updating module 22 for jointly training the local model in each node by fusing the model parameters of each computing node in the federation.
The first CC module 30 includes a first representative election module 31, a first consensus module 32, and a first incentive/penalty module 33. The first CC module 30 also includes a first convergence decision module 34.
Likewise, the structural composition of the second federation B is similar or identical to that of the first federation a.
As shown in fig. 4, several second computing nodes 40 in the second federation B establish communication connections with each other for intra-cluster federation learning within the second federation. A second CFL module 50 and a second CC module 60 are disposed within the second computing node 40.
The second CFL module 50 includes at least a second learning module 51. The second learning module 51 is provided with a BFL model for single federal cluster learning and a CFL model for cross-federal cluster learning, and can execute a program of the federal cluster learning model and update a local learning model.
Preferably, the second CFL module 50 further includes a second updating module 52 for jointly training the local model in each node by fusing the model parameters of each computing node in the federation.
The second CC module 60 includes a second representative election module 61, a second consensus module 62, and a second incentive/penalty module 63. The second CC module 60 also includes a second fusion judgment module 64.
S1: federal learning within a cluster.
A first computing node in the first federation performs federal learning in the cluster and counts local update information based on the cross-cluster federal learning module.
The local learning model on each first computing node is updated by the first CFL module via the BFL gradient fusion method or the BFL model fusion method.
S2: consensus is formed within the cluster.
The first CFL module of each first computing node in the first federation A transmits the update information to the other first computing nodes in the cluster for intra-cluster update fusion consensus, and records the result on the first CFL module of the first federation A. The intra-cluster consensus can be realized with a traditional single-blockchain consensus method, such as Practical Byzantine Fault Tolerance (PBFT).
S3: gradient exchange is performed across the clusters.
Among the clusters in the first federation A, the cluster representative selected by the first representative election module 31 in the first CC module 30 transmits the update information of its cluster to the second federation B through the communication network.
Preferably, the first representative election module 31 randomly selects a first computing node in each federation cluster as the cluster representative to participate in the cross-federation consensus process.
Each cluster representative of the first federation A sends the local update result and the non-local update result sent by the cluster representatives of the second federation B to the first fusion judgment module 34.

The first fusion judgment module 34 judges whether to perform the local model update based on the local first update result and the update result sent by the cluster representative of the second federation B.
Specifically, the first fusion judgment module 34 judges, based on the fusion judgment mechanism, whether the CFL cross-cluster fusion gradient $\nabla g(w^t)_{\mathrm{CFL}}$ or the CFL fused model $\bar{w}^{t+1}_{\mathrm{CFL}}$ of each learning round is available for local model updates.

The first CC module in each first computing node in the cluster verifies the received CFL cross-cluster fusion gradient $\nabla g(w^t)_{\mathrm{CFL}}$ or CFL fused model $\bar{w}^{t+1}_{\mathrm{CFL}}$ and records the result of the verification (either agreeing or disagreeing to the update) on the local chain. If the ratio of computing nodes agreeing to the update in a cluster exceeds the threshold δ, the first fusion judgment module 34 confirms that the cluster agrees to the update. For an update decision in which the ratio of computing nodes agreeing to the update is below the threshold δ, the fusion judgment module judges that the update is abandoned and informs each computing node to abandon it. The cluster then uploads the decision on whether to update to the cluster representatives of the other clusters via the randomly elected cluster representative.
And finally, each cluster representative judges whether to adopt the round of updating according to the decision results of other cluster representatives and the local verification condition.
After the first fusion judgment module 34 determines the update decision, the preparation phase of the consensus is entered.
The cluster representative of the first federation A sends the update decision information to the second CC module of the second federation over the communication network, and the first CC module receives the confirmation result of the cross-cluster gradient update consensus fed back by the second federation.
S4: cross-cluster gradient update consensus.
Each second computing node 40 in the second federation B performs a second verification consensus on the update information sent by the first CC module 30, and records the result on the second CFL module in the second federation B. The second verification consensus, comprising the preparation phase and the confirmation/rollback phase, is performed by the second CC module; its period is shown in the grey area of fig. 3.
Each cluster elects a cluster representative during the preparation phase. Each cluster representative performs two verifications: the model is verified for the first time during the previous r-1 rounds of the consensus process, and the r-th round's consensus update is verified a second time. If the second verification result is better than the first, the cluster representative votes to accept all past operations; otherwise, the cluster representative votes to reject all past operations.
In the confirmation/rollback phase, the second consensus module 62 determines whether to accept the current update operation according to the consensus result of the preparation phase. Specifically, if the consensus results of both the local and the remote clusters are "accept", the second computing node confirms the operations of the past cycle and applies the update. Otherwise, the operations of the first computing node in the last cycle are not accepted, and the BFL model in the second CFL module 50 of each second computing node rolls back to the previous state. Note that a rollback is not a branch on the blockchain: all past operations are retained on the chain.
S5: confirmation result exchange.

The cluster representative of the second federation B passes the consensus result agreeing to the update to the first CC module of the first federation A over the communication network. Specifically, based on the update information and the update decision sent by the second CC module, the second CFL module in each second computing node of the second federation B performs its update via the CFL gradient fusion method and the CFL model fusion method.
And the cross-cluster federal learning module updates a local model based on the confirmation result.
S6: confirmation result consensus. The first federation A receives the second federation B's agreement to the update result and, after verifying the agreement, records it on the CFL module of the first federation A. Rewards and punishments are applied to each cluster representative through the incentive/penalty module in the CC module and recorded on the CFL chain.
Specifically, the incentive/penalty module is used for urging the cluster representatives to act honestly through rewards and punishments. In the preparation phase, each cluster representative provides two verification results to participate in the consensus; the consensus results of the two verifications by a given cluster representative can be denoted t_a and t_b respectively. The incentive/penalty module rewards or punishes the cluster representative according to t_a, t_b and the mortgage v provided by this representative:

$$ rp = \begin{cases} -v, & t_b - t_a > \lambda \\ 0, & 0 < t_b - t_a \le \lambda \\ \mu v, & t_a > t_b \end{cases} $$

When t_b is greater than t_a and the difference is greater than λ, the representative's operation can be considered useless and the mortgage it provided is confiscated; when t_b is greater than t_a but the difference is less than λ, there are some errors in the representative's operation, the result is useless, but its mortgage is not confiscated; when t_a is greater than t_b, the representative's operation contributes to the model update, and a reward in a certain proportion μ can be given according to the mortgage it provided and the contribution amount.
Fig. 3 shows the federal election step to determine the cluster representative θ.
A1: local model training. Each federation performs local federal learning and training.
A2: federation representative election. Each federation elects at least one cluster representative θ through the representative election module.
A3: a representative is elected through n rounds of the consensus process. A fuse-or-abandon update decision is determined based on the fusion judgment mechanism, i.e., the inter-federation update gradient is either discarded or fused.
A4: the cross-chain consensus preparation phase is carried out, in which each cluster representative verifies twice through n+k-1 rounds of elections: the model is verified for the first time over the previous r-1 rounds, and the update of round r is verified a second time.

A5: the cross-chain consensus confirmation/rollback phase is carried out, judging whether to accept the current update operation according to the result of the preparation phase. If accepted, the confirmation phase is entered; if rejected, the operations of the previous cycle are not accepted and the models roll back to the previous state.
It should be noted that the above-mentioned embodiments are exemplary, and that those skilled in the art, having benefit of the present disclosure, may devise various arrangements that are within the scope of the present disclosure and that fall within the scope of the invention. It should be understood by those skilled in the art that the present specification and figures are illustrative only and are not limiting upon the claims. The scope of the invention is defined by the claims and their equivalents.
The present specification encompasses multiple inventive concepts, such as those indicated by "preferably", "according to a preferred embodiment" or "optionally", each indicating that the respective paragraph discloses a separate concept; the applicant reserves the right to submit divisional applications according to each inventive concept.

Claims (10)

1. A federal learning oriented cross-chain consensus method, the method comprising at least:
performing single-chain federal learning in the cluster and counting local update information;
sending updated consensus information to the second federation for cross-cluster gradient exchange;
receiving a confirmation result of the cross-cluster gradient update consensus fed back by the second federation;
and updating a local model based on the confirmation result.
2. The federal learning oriented cross-chain consensus method as claimed in claim 1, wherein the method for performing intra-cluster single-chain federal learning and counting local update information comprises:
and sending the local updating information calculated based on the BFL model in the first federation to a first computing node in the cluster so as to carry out updating fusion consensus in the cluster.
3. The federal learning oriented cross-chain consensus method of claim 2, further comprising:
under the condition that a cluster representative in a first federation sends the updated information after fusion consensus to a second federation, at least one second computing node in the second federation carries out second verification consensus on the updated information and feeds back a confirmation result of the cross-cluster gradient update consensus to the first federation.
4. The federal learning oriented cross-chain consensus method of claim 1, further comprising:
and the cluster representative in the first federation judges the local updating result and the non-local updating result fed back by the second federation according to a fusion judging mechanism to obtain a judging result of whether to update the local model.
5. The federal learning oriented cross-chain consensus method as claimed in any one of claims 1 to 4, wherein performing intra-cluster federal learning and reaching consensus further comprises:
randomly selecting a computing node in each federal cluster as a cluster representative to participate in a cross-federation consensus process;
the cluster representative voting based on the verification results of at least two verifications and determining whether to accept the update consensus; and
performing reward and punishment based on the consensus results of the at least two verifications by the cluster representative after the update consensus is reached.
6. The federal learning oriented cross-chain consensus method of claim 5, wherein the cluster representative is selected as follows:
the tenure of each cluster representative is r rounds; when the tenure ends, each computing node in the cluster applies to become a next-term representative by issuing a blockchain transaction;
the candidates are sorted by mortgaged asset value, the top t candidates form a nomination pool, and a representative is selected from the nomination pool by a random algorithm.
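A minimal Python sketch of this election, assuming each candidate is a dict carrying its mortgaged asset value; random.choice stands in for whichever random algorithm the system actually uses.

    import random

    def elect_representative(candidates, t, rng=random):
        # Sort candidates by mortgaged asset value, descending,
        # and let the top t form the nomination pool.
        ranked = sorted(candidates, key=lambda c: c["mortgage"], reverse=True)
        pool = ranked[:t]
        # Draw one representative from the nomination pool at random.
        return rng.choice(pool)

    # Usage sketch:
    nodes = [{"id": i, "mortgage": m} for i, m in enumerate([5, 12, 3, 9, 7])]
    representative = elect_representative(nodes, t=3)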
7. The federal learning oriented cross-chain consensus method of claim 5, wherein the step of voting and determining whether to accept the update consensus based on the verification results of at least two verifications comprises:
the cluster representative performing a first verification in round r - 1 and a second verification in round r; and
in the case where the second verification result is better than the first, the cluster representative voting to accept all past operations.
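The voting rule of claim 7 reduces to comparing the two verification results. A one-function sketch, under the assumption that lower scores are better (e.g. a validation loss):

    def accept_past_operations(t_a, t_b):
        # t_a: first verification (round r-1); t_b: second verification
        # (round r). Vote to accept all past operations only if the
        # second result is better than the first.
        return t_b < t_a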
8. The federal learning oriented cross-chain consensus method of claim 5, wherein the step of performing reward and punishment based on the consensus results of at least two verifications by the cluster representative comprises:
denoting the consensus results of the two verifications by the cluster representative as t_a and t_b, respectively;
rewarding or punishing the cluster representative according to t_a, t_b and the mortgaged asset value it provides:

    rp = -v,       if t_b - t_a > λ
    rp = 0,        if 0 < t_b - t_a ≤ λ
    rp = ρ · v,    if t_a ≥ t_b

when t_b is greater than t_a and the difference is greater than λ, the operation of the cluster representative is useless and its mortgage is confiscated;
when t_b is greater than t_a but the difference is less than λ, the operation of the cluster representative contains some errors; the result is useless, but its mortgage is not confiscated;
when t_a is greater than t_b, the operation of the cluster representative contributes to the model update and is rewarded with a certain proportion of the contribution value and the mortgaged asset value it provides;
wherein rp denotes the reward-punishment result, v the mortgaged asset value, and ρ the reward proportion.
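A direct transcription of the piecewise rule above into Python; the parameter names mirror the claim's symbols, and treating the boundary case t_b - t_a = λ as "no confiscation" is an assumption.

    def reward_punish(t_a, t_b, v, rho, lam):
        # t_a, t_b: first/second verification results; v: mortgaged
        # asset value; rho: reward proportion; lam: tolerance threshold.
        if t_b - t_a > lam:
            return -v       # useless operation: mortgage confiscated
        if t_b > t_a:
            return 0.0      # minor errors: useless, but no confiscation
        return rho * v      # contribution: proportional reward

    # Usage sketch: an improving representative (t_a > t_b) earns rho * v.
    rp = reward_punish(t_a=0.8, t_b=0.5, v=100.0, rho=0.1, lam=0.2)  # 10.0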
9. A federal learning oriented cross-chain consensus system, the system comprising at least a cross-cluster federal learning module and a cross-cluster consensus module, wherein:
a first computing node within a first federation performs intra-cluster federal learning and counts local update information based on the cross-cluster federal learning module;
the cross-cluster consensus module sends update consensus information of the local update information to a second federation;
the cross-cluster consensus module receives a confirmation result of the cross-cluster gradient update consensus fed back by the second federation; and
the cross-cluster federal learning module updates a local model based on the confirmation result.
10. The federal learning oriented cross-chain consensus system of claim 9, wherein the cross-cluster consensus module further comprises a fusion mechanism module, and
the fusion mechanism module evaluates the local update result and the non-local update result fed back by the second federation according to a fusion decision mechanism, to obtain a decision result on whether to update the local model.
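To make the module decomposition of claims 9 and 10 concrete, a minimal Python sketch follows; the module interfaces and the acceptance rule inside FusionMechanismModule are assumptions rather than the patented design.

    class FusionMechanismModule:
        def decide(self, local_result, remote_result, lam=0.1):
            # Fusion decision mechanism (claim 10): keep the non-local
            # update only if it does not degrade the local result by
            # more than a tolerance lam (lower scores assumed better).
            return remote_result <= local_result + lam

    class CrossClusterConsensusModule:
        def __init__(self, second_federation):
            self.second_federation = second_federation
            self.fusion = FusionMechanismModule()

        def exchange(self, update_info):
            # Send update consensus information, then collect the
            # confirmation result fed back by the second federation.
            self.second_federation.submit(update_info)
            return self.second_federation.confirmation_result()

    class CrossClusterFLModule:
        def __init__(self, consensus):
            self.consensus = consensus
            self.model = {}

        def run_round(self, local_update):
            # Forward the local update and apply it to the local model
            # only upon a positive confirmation result.
            if self.consensus.exchange(local_update):
                self.model.update(local_update)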
CN202110497514.2A 2021-04-01 2021-05-07 Federal learning-oriented cross-chain consensus method and system Active CN113704810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/644,425 US20220318688A1 (en) 2021-04-01 2021-12-15 Method and system for cross-chain consensus oriented to federated learning

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202110359146 2021-04-01
CN2021103591465 2021-04-01
CN202110391401 2021-04-07
CN2021103914014 2021-04-07

Publications (2)

Publication Number Publication Date
CN113704810A (en) 2021-11-26
CN113704810B CN113704810B (en) 2024-04-26

Family

ID=78647880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110497514.2A Active CN113704810B (en) 2021-04-01 2021-05-07 Federal learning-oriented cross-chain consensus method and system

Country Status (2)

Country Link
US (1) US20220318688A1 (en)
CN (1) CN113704810B (en)

Also Published As

Publication number Publication date
US20220318688A1 (en) 2022-10-06
CN113704810B (en) 2024-04-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant