CN112540926A - Blockchain-based federated learning method with fair resource allocation - Google Patents


Info

Publication number
CN112540926A
Authority
CN
China
Prior art keywords
participants
block chain
model
team
registered
Prior art date
Legal status
Pending
Application number
CN202011499218.8A
Other languages
Chinese (zh)
Inventor
汪小益
邱炜伟
吴琛
张帅
匡立中
胡麦芳
张珂杰
黄方蕾
詹士潇
谢杨洁
蔡亮
李伟
Current Assignee
Hangzhou Qulian Technology Co Ltd
Original Assignee
Hangzhou Qulian Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Qulian Technology Co Ltd
Priority to CN202011499218.8A
Publication of CN112540926A


Classifications

    • G06F11/3684 — Test management for test design, e.g. generating new test cases
    • G06F11/3688 — Test management for test execution, e.g. scheduling of test suites
    • G06F16/27 — Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
    • G06F21/602 — Providing cryptographic facilities or services
    • G06F21/64 — Protecting data integrity, e.g. using checksums, certificates or signatures
    • G06N3/045 — Combinations of networks
    • G06N3/08 — Learning methods


Abstract

The invention discloses a blockchain-based federated learning method with fair resource allocation, comprising the following steps: establishing an ordinary block node for each participant in the blockchain, establishing a super block node for the collaborator, and establishing a smart contract between the participants and the collaborator through the blockchain; after the smart contract is established, each registered participant obtains the model from the blockchain, trains it locally, stores the trained model locally, and uploads desensitized sample data to the blockchain; the registered participants are then divided into teams according to their test accuracy on each other's desensitized sample data, each team elects a sub-collaborator, and the sub-collaborator organizes federated learning among the registered participants in its team to obtain the corresponding federated learning model. By introducing blockchain technology, the method solves the problem of uneven resource allocation among federated learning participants, and at the same time gives the learning model a good defense against free-rider attacks.

Description

Blockchain-based federated learning method with fair resource allocation
Technical Field
The invention relates to the technical field of blockchain and federated learning, in particular to a blockchain-based federated learning method with fair resource allocation.
Background
With the continued development of distributed computing, the many devices in a distributed network generate large amounts of data. Keeping the private data of these distributed devices mutually isolated creates a data-silo problem, while directly storing and processing that data in a centralized manner raises data privacy and security concerns. Unlike traditional machine learning based on aggregated data sharing, centralized storage, and centralized processing, Google proposed privacy-preserving federated learning in 2016: each distributed device only exchanges locally trained model information with a server, private data never leaves the device during the process, and user privacy is thereby protected to a great extent. WeBank extended the notion of federated learning, dividing it into horizontal federated learning, vertical federated learning, and federated transfer learning according to differences in the feature distributions of the data. In horizontal federated learning, the participants' datasets have highly overlapping feature dimensions but little sample overlap. Compared with vertical federated learning, the model information the server issues to the different clients in each round of horizontal federated learning is identical, which creates a fairness problem in resource allocation: a participant who contributes a large volume of high-quality data and a participant who contributes little, poor-quality data ultimately obtain a model of the same performance, which dampens the participants' enthusiasm. In a realistic setting that protects the participants' data privacy, the fairness of resource allocation among the training participants must also be guaranteed.
The fairness problem of federated learning can be effectively addressed by introducing a decentralization mechanism, where decentralization is implemented with a blockchain. Blockchain technology is a new distributed infrastructure and computing paradigm that uses a blockchain data structure to verify and store data, distributed node consensus algorithms to generate and update data, cryptography to secure data transmission and access, and smart contracts composed of automated script code to program and manipulate data. The core of blockchain technology is a decentralized distributed ledger, which is tamper-resistant and traceable. Blockchains can be classified into public chains, consortium chains, and private chains. A consortium chain adopts a hybrid networking mechanism and retains partial control over the nodes in the network. It keeps public-chain characteristics such as partial transparency, openness, and tamper resistance while adding permission management and identity authentication, and is therefore widely favored, mainly for blockchain applications in data security, trusted authentication, and the like.
All participants in existing horizontal federated learning obtain the same model, and existing blockchain-based federated learning mainly adopts a globally synchronous training mode, which still does not solve the unfair resource allocation in the joint training process. Given the wide practical application of horizontal federated learning across many distributed computing devices, the data resources the devices contribute to federated training and the performance of the models they finally obtain greatly influence the participants' enthusiasm, so the fairness of resource allocation in such federated learning needs to be studied.
Disclosure of Invention
In view of the above, the invention aims to provide a blockchain-based federated learning method with fair resource allocation, which solves the problem of uneven resource allocation among federated learning participants by introducing blockchain technology, while also giving the learning model a good defense against free-rider attacks.
To achieve this purpose, the invention provides the following technical scheme:
a block chain-based fair resource allocation federal learning method comprises the following steps:
establishing an ordinary block node for each participant in the blockchain, establishing a super block node for the collaborator, and establishing a smart contract between the participants and the collaborator through the blockchain;
after the smart contract is established, each registered participant obtains the model from the blockchain, trains it locally, stores the trained model locally, and uploads desensitized sample data to the blockchain;
and dividing the registered participants into teams according to their test accuracy on the desensitized sample data, wherein each team elects a sub-collaborator and the sub-collaborator organizes federated learning among the registered participants in the team to obtain the corresponding federated learning model.
Preferably, after the smart contract is established, the registered collaborator who established the smart contract initializes the model through the super block node and broadcasts it to the blockchain; after the ordinary block nodes obtain the model from the blockchain, each registered participant downloads the model through its corresponding ordinary block node, trains it locally, and then stores it locally;
meanwhile, each registered participant uploads desensitized sample data, obtained by desensitizing its local sample data, to its corresponding ordinary block node and broadcasts it to the blockchain.
Preferably, when the teams are divided, each registered participant tests the desensitized sample data of the other registered participants using its local model parameters and uploads the test results to the blockchain;
each registered participant recovers the test results on its own desensitized sample data, ranks the recovered results by accuracy against its local labels, and uploads the ranking to the blockchain;
and the registered collaborator scores the rankings uploaded by all registered participants and divides the teams according to the score values.
Preferably, when the registered collaborator scores the rankings uploaded by all registered participants, scores positively correlated with the test accuracy are assigned in order, i.e. the higher the accuracy, the higher the score;
the registered collaborator also sums the score values of each registered participant to obtain a total accuracy score, which is stored in the ledger;
and the registered participants are divided into a plurality of teams according to the distribution of the total accuracy scores.
Preferably, when dividing the registered participants into a plurality of teams according to the distribution of the total accuracy scores, a plurality of scores are randomly set as cluster centers, the registered participants are assigned to teams according to the distance between each total accuracy score and the cluster centers, one registered participant is then reselected from each team as the updated cluster center, and this step is iterated until convergence to obtain the final teams.
Preferably, each registered participant's test results on the desensitized sample data of the other registered participants are uploaded to the corresponding ordinary block node and broadcast to the blockchain;
each registered participant recovers the test results on its desensitized sample data from its corresponding ordinary block node, and likewise uploads its ranking to its corresponding ordinary block node and broadcasts it to the blockchain;
and the registered collaborator downloads all rankings from the blockchain through the corresponding super node, scores them, and divides the teams.
Preferably, the registered participants in each team compete to become the sub-collaborator according to their own computing power, the accuracy of their historical prediction results, and their network conditions.
Preferably, the desensitized sample data, test result data, and ranking data uploaded by the registered participants are encrypted before upload.
Preferably, the sub-collaborator, acting as a temporary server within the team, aggregates the model parameters of all registered participants in the team and, after obtaining the aggregated model parameters, distributes them to the registered participants in the team for continued training.
Preferably, when the sub-collaborator aggregates the model parameters of the registered participants, one of two modes is used:
the first mode: average aggregation, i.e. averaging the model parameters of all registered participants to obtain the aggregated model parameters;
the second mode: weighted aggregation, i.e. assigning a weight to each registered participant's model parameters and then computing the weighted sum of all registered participants' model parameters to obtain the aggregated model parameters.
Compared with the prior art, the invention provides at least the following beneficial effects:
In the blockchain-based federated learning method with fair resource allocation provided by the invention, the registered participants test each other's desensitized sample data, accuracy scores are computed from the test results, the registered participants are grouped according to those scores, and each group performs federated learning independently and shares its own federated learning model. Through this fair resource-allocation mechanism, each participant's data-resource quality is effectively balanced against its contribution to the overall model, so that the final model performance matches the contribution, while strong robustness and a certain fault-tolerance capability are also ensured.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the blockchain-based federated learning method with fair resource allocation according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of team determination in the blockchain-based federated learning method with fair resource allocation according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of team training in the blockchain-based federated learning method with fair resource allocation according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
The method aims to solve the problem of uneven resource allocation among federated learning participants. This embodiment provides a blockchain-based federated learning method with fair resource allocation, based on the following technical conception: the central server of federated learning is removed by introducing the decentralization technology of the blockchain, and the resulting server-free model framework can effectively perform team training, i.e. participants with different data qualities train jointly in separate groups, so that each participant finally obtains a model whose performance matches its data quality.
Fig. 1 is a flowchart of the blockchain-based federated learning method with fair resource allocation according to an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
step 1, establishing common block nodes for participants in a block chain, establishing super block nodes for collaborators, and establishing intelligent contracts between the participants and the collaborators through the block chain.
In this embodiment, two types of nodes are designed in the blockchain. The first type is the participant information-uploading node, named the ordinary block node (node for short); there are usually m of them, where m is the number of participants, i.e. each participant corresponds to one node. The second type is the collaborator information-uploading node, named the super node, usually a single node, which mainly carries information such as model structures and message mechanisms. The participants behind the nodes train locally and independently, and upload and download model information after local training.
Before the federated learning task is performed, a smart contract needs to be established between the participants and the collaborator through the blockchain. The smart contract is constructed as follows:
(1) The participants register and authenticate their identity information, and the collaborator establishes permission management. The participants register their information online; the collaborator collects the registration information, records each participant's id, and requests the participants to perform identity authentication.
(2) The participants and the collaborator establish a data treaty. Through a question-and-answer mechanism, the participants jointly determine the type of data to be trained, each prepares its own parameters and training sample data x_1, x_2, x_3 … x_m, and together they negotiate a deterministic model structure. The collaborator records the model type and designs the training model structure M*.
(3) The collaborator publishes a message contract. The collaborator issues a message codebook, which includes the collaborator's data-acceptance status message inf_g, the participant's request-to-send-data message inf_s, the collaborator's request-to-receive-data message inf_r, and so on. The codebook is used to structure the information interaction between the collaborator and the participants and to monitor each party's status in real time. Throughout the interaction, all participants and the collaborator run a message-listening model that monitors message transmissions in real time.
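As an illustration only, the message codebook can be rendered as a small set of typed messages. The sketch below is a minimal Python version of this convention; the names MessageType, Message, and make_message are assumptions of this description, not part of the patented method.

```python
# Minimal sketch of the message codebook from step (3); the type names and
# fields are illustrative assumptions, not part of the patent's specification.
from dataclasses import dataclass
from enum import Enum
import time


class MessageType(Enum):
    INF_G = "inf_g"  # collaborator: data-acceptance status
    INF_S = "inf_s"  # participant: request to send data
    INF_R = "inf_r"  # collaborator: request to receive data


@dataclass
class Message:
    sender_id: str
    msg_type: MessageType
    payload: bytes = b""
    timestamp: float = 0.0


def make_message(sender_id: str, msg_type: MessageType, payload: bytes = b"") -> Message:
    """Build a codebook message; a real node would sign and broadcast it."""
    return Message(sender_id, msg_type, payload, timestamp=time.time())


# Example: participant 'p1' announces it wants to upload desensitized data.
msg = make_message("p1", MessageType.INF_S)
```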
(4) Collaborator information-encryption phase. The collaborator first compresses the model data information into a fixed-size string using a digital digest technique, then encrypts the digest using asymmetric cryptography to obtain a digital signature. After signing, the collaborator broadcasts the complete encrypted data and the digital signature to the miners, and the miners verify the information with the collaborator's public key. If verification succeeds, the interaction was indeed initiated by the sender and the information has not been tampered with; otherwise verification fails. This prevents the collaborator's data information from being tampered with.
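A minimal sketch of this digest-and-sign flow, assuming Python's `cryptography` package; the patent does not name a concrete algorithm, so RSA-PSS with SHA-256 is used here purely as an example:

```python
# Illustrative digest + asymmetric-signature flow for step (4); RSA-PSS and
# SHA-256 are assumptions, the patent only specifies "digest + asymmetric encryption".
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Collaborator key pair (in practice generated once and kept securely).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

model_data = b"serialized model structure M*"

# Sign: the library hashes model_data (the fixed-size digest) and signs it.
signature = private_key.sign(
    model_data,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# A miner verifies with the collaborator's public key; InvalidSignature is
# raised if the data or the signature was tampered with.
public_key.verify(
    signature,
    model_data,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```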
Step 2, after the smart contract is established, the registered participants obtain the model from the blockchain, perform local training, upload their model parameters to the blockchain, and upload desensitized sample data to the blockchain.
After the smart contract is established, the participants under the contract are the registered participants, and the collaborator under the contract is the registered collaborator. The collaborator first performs an initialization preparation phase: it initializes the model, runs it in a trusted execution environment, encrypts the model structure information M* using a hash mechanism (Hash), and uploads the encrypted model M* to the super node corresponding to the collaborator.
In particular, the registered collaborator publishes the model structure, typically a multilayer perceptron (MLP) or a convolutional neural network (CNN). The registered participants download the model structure M* from the super node, train the downloaded structure M* with their local sample data to obtain local models, and each participant saves its own local model.
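For instance, a published model structure M* could be as simple as the following PyTorch MLP; the layer sizes are arbitrary placeholders, since the patent only states that an MLP or CNN is typical:

```python
# Illustrative model structure M*; the dimensions are placeholder assumptions.
import torch.nn as nn

def build_model_structure(in_dim: int = 784, hidden: int = 128, classes: int = 10) -> nn.Module:
    """A small MLP of the kind the collaborator might publish as M*."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden),
        nn.ReLU(),
        nn.Linear(hidden, hidden),
        nn.ReLU(),
        nn.Linear(hidden, classes),
    )

model = build_model_structure()  # each registered participant trains this locally
```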
After local training, the registered participants also upload desensitized sample data. In this embodiment, a registered participant desensitizes its data features, puts them on chain through its node, and then sends a completion message. The registered participants use a K-Anonymity algorithm to desensitize the sample data; the desensitized sample data d* is put on chain and the request-to-upload message inf_s is sent. After receiving the desensitized sample data sent by the registered participants, the collaborator sends the acceptance message inf_g to notify each registered participant.
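A minimal sketch of this desensitization step, assuming pandas and a single numeric quasi-identifier; real K-Anonymity generalizes several quasi-identifiers jointly, so this only illustrates the idea:

```python
# Toy K-Anonymity-style desensitization: coarsen a quasi-identifier until every
# generalized value is shared by at least k records. Column names are assumptions.
import pandas as pd

def k_anonymize_column(df: pd.DataFrame, col: str, k: int, bin_width: int = 5) -> pd.DataFrame:
    """Generalize `col` into ever-wider bins until each bin holds >= k rows."""
    out = df.copy()
    width = bin_width
    while True:
        out[col] = (df[col] // width) * width  # e.g. age 23 -> 20 for width 5
        if out[col].value_counts().min() >= k:
            return out
        width *= 2  # coarsen further until k-anonymity holds

data = pd.DataFrame({"age": [21, 22, 23, 24, 35, 36, 37, 38],
                     "label": [0, 1, 0, 1, 1, 0, 1, 0]})
desensitized = k_anonymize_column(data, "age", k=2)  # d* to be put on chain
```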
Step 3, dividing the registered participants into teams according to their test accuracy on the desensitized sample data.
When the registered collaborator has counted that the desensitized sample data of all registered participants is ready, it issues a test command. On receiving the message, every registered member downloads the desensitized data from the chain nodes, tests it with its trained local model, and puts the test results back on chain.
In this embodiment, the registered participants test the desensitized sample data with their trained local models and, on completing a test, record the prediction information pre_i. The prediction information pre_i is encrypted with a hash algorithm (Hash) before it is put on chain. After completing the on-chain operation, each participant sends a test-completion message inf_s to the collaborator.
After the test results are on chain, the registered participants recover the test results, rank the results uploaded by the other members by accuracy using their local labels, and broadcast the rankings.
The collaborator monitors the test-data recovery messages in real time and, once the test results of all registered participants have been collected, sends a test-data verification command to the registered participants. Each registered participant downloads from the corresponding nodes the other registered participants' test results on its own desensitized sample data, verifies the test accuracy against its local labels, ranks the accuracy information, and publishes the ranking as identity-accuracy vectors (id_i : acc_i), where i = 1, 2, 3 … m, id_i is the identification code of member i, and acc_i is member i's test accuracy. Each registered participant uploads its accuracy information to the chain and publishes it to the other nodes through the broadcast mechanism. When a registered participant has finished publishing its accuracy information, it sends a publish-completion message to the collaborator.
After receiving the published rankings of all registered participants, the collaborator begins statistical processing. The collaborator scores the identity-accuracy vector (id_i : acc_i) published by each participant: with m participants in total, the highest-ranked accuracy receives m points and the others decrease in order, and the scored vectors are recorded in the ledger. Finally, the collaborator sums the scores by participant id, i.e. it consults the historical ledger to total the accuracy scores each participant received from the other participants; participant i is finished after m−1 iterations, and the collaborator books and stores each participant's scored id-accuracy vector.
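The scoring rule just described can be stated compactly in code. The sketch below assumes each published ranking arrives as a list of (id, accuracy) pairs; the function name and data layout are illustrative:

```python
# Sketch of the collaborator's scoring: each ranking of m members awards
# m points to the most accurate member, m-1 to the next, and so on; the
# per-ranking scores are then summed by participant id.
from collections import defaultdict

def score_rankings(rankings: list[list[tuple[str, float]]]) -> dict[str, int]:
    """rankings: one identity-accuracy vector (id_i : acc_i) per publishing participant."""
    totals: dict[str, int] = defaultdict(int)
    for vector in rankings:
        ordered = sorted(vector, key=lambda p: p[1], reverse=True)
        m = len(ordered)
        for rank, (member_id, _acc) in enumerate(ordered):
            totals[member_id] += m - rank  # top accuracy earns m points
    return dict(totals)

# Example: two participants each publish a vector over the same three members.
rankings = [
    [("p1", 0.91), ("p2", 0.85), ("p3", 0.60)],
    [("p1", 0.88), ("p3", 0.70), ("p2", 0.65)],
]
total_scores = score_rankings(rankings)  # booked to the ledger by the collaborator
```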
After the total accuracy scores are obtained, the teams are divided. The collaborator performs a density-based partition of the scored vectors, splitting apart members whose scores differ widely, with k set as the number of training groups chosen by the collaborator.
Randomly select k team training centers {a_1, a_2, a_3 … a_k}. The participants' score vectors are {v_1, v_2, v_3 … v_m}, where each participant-side score vector v_m ∈ R^n.
Define the distance between each score vector and each center point as

d_ij(a, v) = || v_i − a_j ||
Then reselect each member's team center:

a(i) = argmin_j { d_ij(a, v) }
The process is iterated until the separation between the classes no longer grows, at which point iteration stops.
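A minimal sketch of this team division over the total accuracy scores, treating each score as a one-dimensional point; the value of k, the convergence test, and the mean-based center update are illustrative assumptions (the patent reselects a registered participant as each center):

```python
# Toy k-means-style team division over total accuracy scores; the distance
# d_ij is plain Euclidean distance, matching the formula above.
import random

def divide_teams(scores: dict[str, float], k: int, rounds: int = 100) -> list[list[str]]:
    centers = random.sample(list(scores.values()), k)  # k team training centers a_1..a_k
    for _ in range(rounds):
        teams: list[list[str]] = [[] for _ in range(k)]
        for pid, v in scores.items():
            j = min(range(k), key=lambda c: abs(v - centers[c]))  # argmin d_ij
            teams[j].append(pid)
        new_centers = [
            sum(scores[p] for p in t) / len(t) if t else centers[j]
            for j, t in enumerate(teams)
        ]
        if new_centers == centers:  # stop once the centers no longer move
            return teams
        centers = new_centers
    return teams

scores = {"p1": 28.0, "p2": 27.0, "p3": 12.0, "p4": 11.0}
teams = divide_teams(scores, k=2)  # e.g. [["p1", "p2"], ["p3", "p4"]]
```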
Step 4, each team elects a sub-collaborator, and the sub-collaborator organizes the registered participants in the team for federated learning to obtain the corresponding federated learning model.
In this embodiment, each training team has a sub-collaborator c*, which is responsible for updating the model information. Specifically, a committee mechanism is established among the assigned team members: each participant i in a team publishes its own computing power, the accuracy recorded in the historical ledger, and its network status, and the team elects a sub-collaborator from these. The sub-collaborator is responsible for publishing model-parameter updates, establishing the team message conventions, and distributing the team private key.
During team federated learning, each sub-collaborator publishes its team key to the team's training members. Specifically, the sub-collaborator randomly generates an encrypted set of private-key data and distributes it to each id member in the training team. After receiving the private-key data, each member feeds back that the secret information was verified successfully.
During team federated learning, the team members upload their encrypted model parameters θ, the sub-collaborator aggregates the model parameters and publishes the updated model information, and the next round of model training proceeds; iteration stops once the model error falls within the standard error range.
In each round, every participant in the team hash-encrypts its model information, attaches its private key for the on-chain asset operation, and sends an on-chain success message to the sub-collaborator once its data is on chain. When the sub-collaborator has observed the on-chain messages of all team participants, it decrypts the data and aggregates the decrypted model information; the aggregation strategy is usually federated average aggregation or federated secure aggregation. After aggregation, the sub-collaborator encrypts and uploads the aggregated model information to the block node and sends a training-loading request; on observing the request, the participants in the team download the model for local training. The whole process iterates until the model error falls within the standard error range.
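The round structure of the team training can be sketched as follows. This shows control flow only: the on-chain and encryption steps are reduced to comments, the names train_team, local_step, error_fn, and the tolerance are illustrative, and `aggregate` can be either of the aggregation functions sketched after the formulas below:

```python
# Control-flow sketch of one team's federated training; chain operations and
# encryption are stubbed out, and `aggregate` is any aggregation function.
from typing import Callable, List
import numpy as np

def team_training_round(
    local_models: List[np.ndarray],
    aggregate: Callable[[List[np.ndarray]], np.ndarray],
) -> np.ndarray:
    """One round: members 'upload' parameters, the sub-collaborator aggregates."""
    # In the patent the uploads are hash-encrypted and put on chain first.
    return aggregate(local_models)

def train_team(
    init: np.ndarray,
    local_step: Callable[[np.ndarray], np.ndarray],
    aggregate: Callable[[List[np.ndarray]], np.ndarray],
    n_members: int,
    error_fn: Callable[[np.ndarray], float],
    tol: float = 1e-3,
    max_rounds: int = 50,
) -> np.ndarray:
    model = init
    for _ in range(max_rounds):
        members = [local_step(model) for _ in range(n_members)]  # local training
        model = team_training_round(members, aggregate)          # sub-collaborator step
        if error_fn(model) <= tol:                               # standard error range
            break
    return model
```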
The federated average aggregation operates as follows:

M* = (1/n) Σ_{i=1..n} M_i

where n is the number of participants in the group, M* is the intra-team aggregated model, and M_i is the local model of participant i within the team.
The secure aggregation mode is as follows:

M* = w_1·M_1 + w_2·M_2 + … + w_n·M_n

s.t. Σ_i w_i = 1, w_i ≥ 0.
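Both aggregation modes translate directly into code. As a minimal sketch, each local model M_i is represented here as a flat NumPy parameter vector; the function names are illustrative:

```python
# The two aggregation modes: federated averaging and weighted (secure) aggregation.
from typing import List, Sequence
import numpy as np

def federated_average(models: List[np.ndarray]) -> np.ndarray:
    """M* = (1/n) * sum_i M_i."""
    return sum(models) / len(models)

def weighted_aggregate(models: List[np.ndarray], weights: Sequence[float]) -> np.ndarray:
    """M* = sum_i w_i * M_i with sum_i w_i = 1 and w_i >= 0."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9, "weights must be a convex combination"
    return sum(wi * m for wi, m in zip(w, models))

models = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
avg = federated_average(models)                  # [2.0, 3.0]
wavg = weighted_aggregate(models, [0.25, 0.75])  # [2.5, 3.5]
```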
In the blockchain-based federated learning method with fair resource allocation described above: (1) in the desensitized-sample-data testing stage, the quality of each participant's sample data is evaluated by the registered participants, which effectively prevents the free-rider attacks seen in some federated learning. (2) By introducing decentralization, the central server of federated learning is removed; the data-resource quality of each party is measured at the initial stage, participants with similar data-resource quality are clustered, and training groups of different data-quality levels are formed. In the training stage, each training group elects a coordinator to handle the training process, which defends well against honest-but-curious servers and prevents data leakage or malicious tampering by a server during federated learning. (3) In the data-publishing process, the collaborator's auditing provides a safe and reliable transaction environment and realizes a general secure-sharing framework.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention. It should be understood that they are only preferred embodiments and do not limit the invention; any modification, supplement, or equivalent substitution made within the scope of the principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A blockchain-based federated learning method with fair resource allocation, characterized by comprising the following steps:
establishing an ordinary block node for each participant in the blockchain, establishing a super block node for the collaborator, and establishing a smart contract between the participants and the collaborator through the blockchain;
after the smart contract is established, each registered participant obtains the model from the blockchain, trains it locally, stores the trained model locally, and uploads desensitized sample data to the blockchain;
and dividing the registered participants into teams according to their test accuracy on the desensitized sample data, wherein each team elects a sub-collaborator and the sub-collaborator organizes federated learning among the registered participants in the team to obtain the corresponding federated learning model.
2. The blockchain-based federated learning method with fair resource allocation according to claim 1, wherein, after the smart contract is established, the registered collaborator who established the smart contract initializes the model through the super block node and broadcasts the model to the blockchain; after the ordinary block nodes obtain the model from the blockchain, each registered participant downloads the model through its corresponding ordinary block node, trains it locally, and then stores it locally;
meanwhile, each registered participant uploads desensitized sample data, obtained by desensitizing its local sample data, to its corresponding ordinary block node and broadcasts it to the blockchain.
3. The blockchain-based federated learning method with fair resource allocation according to claim 1, wherein, when the teams are divided, each registered participant tests the desensitized sample data of the other registered participants using its local model parameters and uploads the test results to the blockchain;
each registered participant recovers the test results on its own desensitized sample data, ranks the recovered results by accuracy against its local labels, and uploads the ranking to the blockchain;
and the registered collaborator scores the rankings uploaded by all registered participants and divides the teams according to the score values.
4. The blockchain-based federated learning method with fair resource allocation according to claim 3, wherein, when the registered collaborator scores the rankings uploaded by all registered participants, scores positively correlated with the test accuracy are assigned in order, i.e. the higher the accuracy, the higher the score;
the registered collaborator also sums the score values of each registered participant to obtain a total accuracy score, which is stored in the ledger;
and the registered participants are divided into a plurality of teams according to the distribution of the total accuracy scores.
5. The blockchain-based federated learning method with fair resource allocation according to claim 4, wherein, when dividing the registered participants into a plurality of teams according to the distribution of the total accuracy scores, a plurality of scores are randomly set as cluster centers, the registered participants are divided into teams according to the distance between each total accuracy score and the cluster centers, one registered participant is reselected from each team as the updated cluster center, and this step is iterated until the iteration finishes, yielding the final teams.
6. The blockchain-based federated learning method with fair resource allocation according to claim 3, wherein each registered participant's test results on the desensitized sample data of the other registered participants are uploaded to the corresponding ordinary block node and broadcast to the blockchain;
each registered participant recovers the test results on its desensitized sample data from its corresponding ordinary block node, and likewise uploads its ranking to its corresponding ordinary block node and broadcasts it to the blockchain;
and the registered collaborator downloads all rankings from the blockchain through the corresponding super node, scores them, and divides the teams.
7. The blockchain-based federated learning method with fair resource allocation according to claim 1, wherein the registered participants in each team compete to become the sub-collaborator according to their own computing power, the accuracy of their historical prediction results, and their network conditions.
8. The blockchain-based federated learning method with fair resource allocation according to claim 3, wherein the desensitized sample data, test result data, and ranking data uploaded by the registered participants are encrypted before upload.
9. The blockchain-based federated learning method with fair resource allocation according to claim 1, wherein the sub-collaborator, acting as a temporary server within the team, aggregates the model parameters of all registered participants in the team and, after obtaining the aggregated model parameters, distributes them to the registered participants in the team for continued training.
10. The blockchain-based federated learning method with fair resource allocation according to claim 9, wherein the sub-collaborator aggregates the model parameters of the registered participants in one of two ways:
the first way: average aggregation, i.e. averaging the model parameters of all registered participants to obtain the aggregated model parameters;
the second way: weighted aggregation, i.e. assigning a weight to each registered participant's model parameters and then computing the weighted sum of all registered participants' model parameters to obtain the aggregated model parameters.
CN202011499218.8A 2020-12-17 2020-12-17 Blockchain-based federated learning method with fair resource allocation Pending CN112540926A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011499218.8A CN112540926A (en) 2020-12-17 2020-12-17 Blockchain-based federated learning method with fair resource allocation


Publications (1)

Publication Number Publication Date
CN112540926A true CN112540926A (en) 2021-03-23

Family

ID=75019040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011499218.8A Pending CN112540926A (en) 2020-12-17 2020-12-17 Resource allocation fairness federal learning method based on block chain

Country Status (1)

Country Link
CN (1) CN112540926A (en)


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113468264A (en) * 2021-05-20 2021-10-01 杭州趣链科技有限公司 Block chain based poisoning defense and poisoning source tracing federal learning method and device
CN113468264B (en) * 2021-05-20 2024-02-20 杭州趣链科技有限公司 Block chain-based federal learning method and device for poisoning defense and poisoning traceability
CN113033826A (en) * 2021-05-25 2021-06-25 北京百度网讯科技有限公司 Model joint training method, device, equipment and medium based on block chain
CN113033826B (en) * 2021-05-25 2021-09-10 北京百度网讯科技有限公司 Model joint training method, device, equipment and medium based on block chain
CN113313264B (en) * 2021-06-02 2022-08-12 河南大学 Efficient federal learning method in Internet of vehicles scene
CN113313264A (en) * 2021-06-02 2021-08-27 河南大学 Efficient federal learning method in Internet of vehicles scene
CN113298268A (en) * 2021-06-11 2021-08-24 浙江工业大学 Vertical federal learning method and device based on anti-noise injection
CN113298268B (en) * 2021-06-11 2024-03-19 浙江工业大学 Vertical federal learning method and device based on anti-noise injection
CN113537042A (en) * 2021-07-14 2021-10-22 北京工商大学 Method and system for monitoring shared and updatable Deepfake video content
CN114202397A (en) * 2022-02-17 2022-03-18 浙江君同智能科技有限责任公司 Longitudinal federal learning backdoor defense method based on neuron activation value clustering
CN114758784A (en) * 2022-03-29 2022-07-15 南京理工大学 Method for distributing weight of participants in federal learning based on clustering algorithm
WO2023209414A1 (en) * 2022-04-25 2023-11-02 Telefonaktiebolaget Lm Ericsson (Publ) Methods and apparatus for computing resource allocation
CN114785608A (en) * 2022-05-09 2022-07-22 中国石油大学(华东) Industrial control network intrusion detection method based on decentralized federal learning
CN114785608B (en) * 2022-05-09 2023-08-15 中国石油大学(华东) Industrial control network intrusion detection method based on decentralised federal learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination