CN115296927A - Blockchain-based federated learning trusted fusion incentive method and system - Google Patents
- Publication number
- CN115296927A (application CN202211185889.6A)
- Authority
- CN
- China
- Prior art keywords
- participating
- credit
- node
- blockchain
- participating nodes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/20—Network architectures or network communication protocols for network security for managing network security; network security policies in general
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2458—Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
- G06F16/2471—Distributed queries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1441—Countermeasures against malicious traffic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
Abstract
The invention provides a blockchain-based federated learning trusted fusion incentive method and system, relating to the technical field of the Internet, comprising the following steps: during multi-round federated learning, acquire the base credit score of each participating node and preset node-level thresholds; for each iteration round, update each participating node's credit score in the current round according to its base credit score and its previous-round credit score, determine the node's level by comparing the updated credit score against the node-level thresholds, and isolate participating nodes of the preset level and record them on the blockchain; then construct an entity model of multi-attribute fused data from the participating nodes' credit scores and related attribute information, and store the entity model on the blockchain. The method thereby mitigates free-riding, model poisoning attacks and similar misbehaviour by participating nodes during federated learning, supports trusted entity-level and attribute-level queries about participating nodes, and provides a basis for quantitatively evaluating participating-node behaviour.
Description
Technical Field
The invention belongs to the technical field of the Internet, and in particular relates to a blockchain-based federated learning trusted fusion incentive method and system.
Background
The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.
Internet data processing typically proceeds as follows: one party collects data and transmits it to another party, which collates and fuses it; a third party then acquires the integrated data and builds a model for use by further parties. As awareness of data security and user privacy grows, how to analyze data while strengthening privacy and security protection has become a problem of wide concern in the industry.
Federated learning is a machine learning setting in which multiple participating nodes cooperatively train a model under the coordination of a central server while the training data remains decentralized. The participating nodes use model parameters as the information carrier and can train a model jointly without exchanging raw data. This not only links the data silos scattered across the participating nodes while preserving each node's data privacy, but can also drive cross-domain, enterprise-level big-data collaboration and foster a new ecosystem of shared modelling in the artificial intelligence field.
In practice, however, the participating nodes in federated learning have differing purposes and differing data quality. Without a suitable incentive method, malicious nodes have the opportunity to mount damaging attacks, such as free-riding or model poisoning, and such malicious behaviour may lengthen model training or prevent the model from converging. Existing technical schemes focus on the uneven resource distribution among participating nodes and lack a quantitative evaluation of participating-node behaviour, so they perform poorly in practical applications.
Disclosure of Invention
To solve the above problems, the invention provides a blockchain-based federated learning trusted fusion incentive method and system. By dynamically updating the credit scores of participating nodes during iterative learning, the method counters free-riding, model poisoning attacks and similar misbehaviour during federated model training, achieving efficient and trustworthy training among multiple participating nodes. It further constructs an entity model of multi-attribute fused data so that trusted entity-level and attribute-level queries about participating nodes can be carried out, thereby enabling quantitative evaluation of participating-node behaviour.
In order to achieve the above object, the present invention mainly includes the following aspects:
In a first aspect, an embodiment of the present invention provides a blockchain-based federated learning trusted fusion incentive method, comprising:
during multi-round federated learning, acquiring the base credit score of each participating node and preset node-level thresholds;
for each iteration round, updating each participating node's credit score in the current round according to its base credit score and its previous-round credit score, determining the node's level by comparing the node-level thresholds with the updated credit score, and isolating participating nodes of the preset level and recording them on the blockchain; and constructing an entity model of multi-attribute fused data according to the participating nodes' updated credit scores and related attribute information, and storing the entity model on the blockchain.
In a possible implementation, the base credit score of a participating node is determined from the contribution of its local gradient to the federated global gradient and from the effective information degree of the local gradient; the contribution measures the consistency of the local gradient with the federated global gradient, while the effective information degree measures the gradient difference between the target participating node and the other participating nodes in the federated learning.
In one possible implementation, the contribution is the ratio of the number of parameters whose signs agree between the participating node's local update gradient and the previous round's global update gradient to the total number of parameters of the federated aggregated model.
In one possible embodiment, the effective information degree is determined by:

$E_i^t = \dfrac{1}{(n-1)M} \sum_{j=1,\, j \neq i}^{n} d_{ij}^t$

wherein $E_i^t$ is the effective information degree of participating node $i$ in the $t$-th iteration round, $M$ is the total number of model parameters in the federated learning, $n$ is the total number of participating nodes in the federated learning, and $d_{ij}^t$ is the number of parameters whose signs differ between the round-$t$ gradient updates of participating nodes $i$ and $j$.
In one possible implementation, the credit score of a participating node in the current iteration round is updated according to the following formula:

$m_i^t = \dfrac{(t-1)\, m_i^{t-1} + \hat m_i^t}{t} + \Delta m$

wherein $m_i^t$ is the credit score of participating node $i$ in the $t$-th iteration round, $\Delta m$ is the credit change value, $\hat m_i^t$ is the base credit score, and $t$ is the iteration number.
In one possible implementation, the related attribute information includes the contribution, the effective information degree, the base credit score, and the contribution ratio relations between the participating nodes.
In one possible embodiment, after storing the entity model on the blockchain, the method further includes:
querying, in the entity model on the blockchain, each participating node's contribution in every iteration round, and calculating each participating node's comprehensive contribution over the whole federated learning process; and sending corresponding incentive information to the participating nodes according to their comprehensive contributions.
In one possible implementation, user identities are obtained, and the trusted query permissions over the entity models on the blockchain are determined separately for different user identities.
In a second aspect, an embodiment of the present invention provides a blockchain-based federated learning trusted fusion incentive system, comprising:
an acquisition module, configured to acquire the base credit score of each participating node and preset node-level thresholds during multi-round federated learning;
a storage module, configured to update each participating node's credit score in the current iteration round according to its base credit score and its previous-round credit score, determine the node's level by comparing the node-level thresholds with the updated credit score, and isolate participating nodes of the preset level and record them on the blockchain; and to construct an entity model of multi-attribute fused data according to the participating nodes' updated credit scores and related attribute information, and store the entity model on the blockchain.
In one possible implementation, the system further includes:
an incentive module, configured to query, in the entity model on the blockchain, each participating node's contribution in every iteration round, calculate each participating node's comprehensive contribution over the whole federated learning process, and send corresponding incentive information to the participating nodes according to their comprehensive contributions.
The above one or more technical solutions have the following beneficial effects:
(1) For the fusion scenario of a federated learning model, the credit scores of the participating nodes are dynamically updated during iterative learning, countering free-riding, model poisoning attacks and similar misbehaviour during federated model training, encouraging participating nodes to take part in federated learning honestly and actively, and effectively reducing malicious behaviour.
(2) The invention provides an entity model of multi-attribute fused data, enabling trusted entity-level and attribute-level queries and making the credit-fusion process of the participating nodes transparent.
(3) The method can efficiently query each participating node's contribution within a specified period and calculate its comprehensive contribution to the model over the whole federated learning process, providing a basis for quantitative evaluation of participating-node behaviour.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
Fig. 1 is a schematic flowchart of a blockchain-based federated learning trusted fusion incentive method according to the first embodiment of the present invention;
fig. 2 is a schematic structural diagram of an entity model according to an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit exemplary embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well unless the context clearly indicates otherwise, and the terms "comprises" and/or "comprising" specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
Example one
Referring to fig. 1, this embodiment provides a blockchain-based federated learning trusted fusion incentive method, which specifically includes the following steps:
S101: during multi-round federated learning, acquire the base credit score of each participating node and preset node-level thresholds;
S102: for each iteration round, update each participating node's credit score in the current round according to its base credit score and its previous-round credit score, determine the node's level by comparing the node-level thresholds with the updated credit score, isolate participating nodes of the preset level and record them on the blockchain; and construct an entity model of multi-attribute fused data according to the participating nodes' updated credit scores and related attribute information, and store the entity model on the blockchain.
In a specific implementation, during federated learning a plurality of participating nodes cooperatively train a model under the coordination of a central server. In each iteration round, each node's credit score for the current round is updated from its base credit score and its previous-round credit score, and the updated score is compared against the preset node-level thresholds to determine the node's level. There may be multiple node-level thresholds; in this embodiment, two values a and b are set as node-level thresholds (b greater than 0, a less than or equal to 1), so that the credit scores fall into different intervals corresponding to different participating-node levels, specifically:
The participating nodes of the preset level are isolated and recorded on the blockchain. In a specific application, the identity codes of untrusted participating nodes are stored on-chain as a blacklist, and such nodes are not allowed to take part in subsequent federated learning. This counters free-riding, model poisoning attacks and similar misbehaviour during federated model training and encourages participating nodes to take part in federated learning honestly and actively. An entity model of multi-attribute fused data is then constructed from the participating nodes' credit scores and related attribute information and stored on the blockchain, supporting trusted entity-level and attribute-level queries about the participating nodes and providing a basis for quantitative evaluation of their behaviour.
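The level assignment and blacklist step above can be sketched as follows. The concrete threshold values (a=0.8, b=0.4), the level names, and the representation of the on-chain blacklist as a plain set are illustrative assumptions, not values fixed by the embodiment:

```python
def classify_node(credit, a=0.8, b=0.4):
    """Map a credit score to a node level using two thresholds 0 < b < a <= 1.
    Threshold values and level names here are illustrative assumptions."""
    if credit >= a:
        return "trusted"
    if credit >= b:
        return "ordinary"
    return "untrusted"  # to be isolated and blacklisted on-chain

def update_blacklist(nodes, blacklist, a=0.8, b=0.4):
    """Add the identity codes of untrusted nodes to the blacklist (a set
    standing in for the on-chain record) and return the node IDs still
    allowed to participate in subsequent rounds."""
    active = []
    for node_id, credit in nodes.items():
        if classify_node(credit, a, b) == "untrusted":
            blacklist.add(node_id)
        else:
            active.append(node_id)
    return active
```

A node whose credit score falls below b is excluded from all subsequent rounds, which is what removes the incentive for free-riding or poisoning.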
As an optional implementation, the base credit score of a participating node is determined from the contribution of its local gradient to the federated global gradient and from the effective information degree of the local gradient; the contribution measures the consistency of the local gradient with the federated global gradient, while the effective information degree measures the gradient difference between the target participating node and the other participating nodes. Optionally, the contribution is the ratio of the number of parameters whose signs agree between the participating node's local update gradient and the previous round's global update gradient to the total number of parameters of the federated aggregated model. Specifically, the contribution of participating node $i$ in the $t$-th iteration round is denoted $C_i^t$. Let $\Delta w^{t-1}$ be the global update gradient of iteration $t-1$ and $w_i^t$ be the local update gradient of participating node $i$ in iteration $t$; the contribution is the proportion of parameters at which the local update and the previous round's global update have the same sign:

$C_i^t = \dfrac{\operatorname{count}\!\big(\operatorname{sign}(w_i^t) = \operatorname{sign}(\Delta w^{t-1})\big)}{N}$

wherein $N$ is the total number of parameters of the federated aggregated model $[x_1, x_2, \ldots, x_N]$, $x$ denotes a parameter of a participating node's local training, and $\operatorname{count}(\cdot)$ counts the parameter positions at which the two gradients have the same (positive/negative) sign, so that $0 \le C_i^t \le 1$.
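The sign-agreement contribution measure can be sketched in a few lines; treating a zero entry as positive is an assumption the embodiment does not pin down:

```python
def contribution(local_grad, global_grad_prev):
    """Fraction of parameters whose update sign matches the previous round's
    global update: C_i^t = |{k : sign(w_ik^t) = sign(dw_k^{t-1})}| / N.
    Zero entries are treated as positive (an assumption)."""
    assert len(local_grad) == len(global_grad_prev)
    same = sum(1 for g, G in zip(local_grad, global_grad_prev)
               if (g >= 0) == (G >= 0))
    return same / len(local_grad)
```

A node whose local update points the same way as the aggregate on every coordinate scores 1.0; one that systematically opposes it scores near 0.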
The effective information degree measures the amount of information contained in a participating node's gradient update within one round: the smaller the difference between the gradients provided by a participating node and by the other participating nodes, the less information that node's update carries and the lower its credit score. It is calculated as the average sign-difference ratio between the local update gradients of different participating nodes:

$E_i^t = \dfrac{1}{(n-1)M} \sum_{j=1,\, j \neq i}^{n} d_{ij}^t$

wherein $E_i^t$ is the effective information degree of participating node $i$ in the $t$-th iteration round, $M$ is the total number of model parameters in the federated learning, $n$ is the total number of participating nodes in the federated learning, and $d_{ij}^t$ is the number of parameters whose signs differ between the round-$t$ gradient updates of participating nodes $i$ and $j$.
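The effective information degree can be sketched as a pairwise sign-difference average; as above, treating zero entries as positive is an assumption:

```python
def effective_information(grads, i):
    """Average sign-difference ratio between node i's gradient and every other
    node's gradient: E_i = (1 / ((n-1) * M)) * sum_{j != i} d_ij, where d_ij
    counts parameter positions at which the signs of i's and j's updates differ."""
    n, M = len(grads), len(grads[i])
    diff_total = 0
    for j in range(n):
        if j == i:
            continue
        diff_total += sum(1 for a, b in zip(grads[i], grads[j])
                          if (a >= 0) != (b >= 0))
    return diff_total / ((n - 1) * M)
```

A node that merely echoes everyone else's gradient scores near 0, so copying others' updates earns little credit.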
The base credit score combines the participating node's contribution to the federated global gradient with the effective information degree of its local gradient: the more information a node's uploaded gradient carries and the more that information contributes to the global gradient update, the higher the node's credit score. The base credit score of participating node $i$ in the $t$-th iteration round, denoted $\hat m_i^t$, is calculated as:

$\hat m_i^t = \alpha\, C_i^t + \beta\, E_i^t$

wherein $\alpha$ and $\beta$ are the weights of the contribution $C_i^t$ and the effective information degree $E_i^t$ of node $i$ in the $t$-th iteration, respectively, with $\alpha, \beta \in (0, 1)$ and $\alpha + \beta = 1$.
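The weighted combination can be sketched directly; the default weight value is an arbitrary example, since the embodiment leaves the weights as free parameters:

```python
def base_credit(contrib, eff_info, alpha=0.5):
    """Base credit score as a convex combination of contribution and
    effective information degree: m_hat = alpha*C + (1-alpha)*E, alpha in (0,1).
    The default alpha is an illustrative choice, not fixed by the embodiment."""
    assert 0 < alpha < 1
    return alpha * contrib + (1 - alpha) * eff_info
```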
This completes the calculation of the participating nodes' base credit scores and realizes a preliminary evaluation of their credit.
As an alternative embodiment, the credit score of a participating node in the current iteration round is updated according to the following formula:

$m_i^t = \dfrac{(t-1)\, m_i^{t-1} + \hat m_i^t}{t} + \Delta m$

wherein $m_i^t$ is the credit score of node $i$ in the $t$-th iteration round, $\Delta m$ is the credit change value, $\hat m_i^t$ is the base credit score, and $t$ is the iteration number.
After each iteration round ends, the credit score of each participating node in the current round is updated by combining the current round's base credit score with the previous round's credit score, according to whether the node's behaviour in this iteration was positive, negative, or malicious. The behaviour is judged by whether the participating node uploads its parameters within the specified time or interrupts communication of its own accord.
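One way the update just described could look in code is sketched below. The running-average form, the clamping to [0, 1], and the magnitude of the behaviour-dependent adjustment are all assumptions for illustration; the patent specifies only that the current round's base credit, the previous round's credit, and a behaviour-dependent change value Δm are combined:

```python
def update_credit(prev_credit, base, t, behavior_delta=0.0):
    """Update node credit at round t: running average of base credit scores
    plus a behaviour-dependent delta (positive, zero, or negative), clamped
    to [0, 1]. Form and delta magnitudes are illustrative assumptions."""
    credit = ((t - 1) * prev_credit + base) / t
    return max(0.0, min(1.0, credit + behavior_delta))
```

A node that misses the upload deadline or drops communication would receive a negative `behavior_delta`, pushing it toward the untrusted level.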
The levels of all participating nodes are thus obtained. The credit scores of the nodes taking part in federated learning are dynamically updated to ensure that every node selected for federated learning is more trustworthy. Isolation events are recorded in the blockchain's distributed ledger, and on-chain evidence is stored together with the participating nodes' identity codes. On-chain data is difficult to tamper with, giving strong control over the identity records.
As an optional implementation, as shown in fig. 2, an entity model of multi-attribute fused data is constructed and put on-chain. The related attribute information includes the contribution, the effective information degree, the base credit score, and the contribution ratio relations between the participating nodes. The specific steps are as follows:
(1) Establish the contribution ratio relation $r_i^t$ between participating node $i$ and the other participating nodes in the $t$-th iteration round, $r_i^t = [a_{i1}, a_{i2}, \ldots, a_{i(i-1)}, a_{i(i+1)}, \ldots, a_{il}]$, where $l$ is the total number of participating nodes and $a_{il}$ is the contribution ratio between participating node $i$ and participating node $l$.
Here, the contribution ratio relation $r_i^t$ records the ratio between participating node $i$'s contribution and that of every other participating node in the $t$-th iteration round, and is used to distinguish the relative performance of the participating nodes.
(2) Construct the entity model $m$ of each participating node from the contribution, the effective information degree, the base credit score, the credit score, and the contribution ratio relations: $m = \{c_1, c_2, c_3, c_4, r_i^t\}$, where $c_1$ is the contribution, $c_2$ is the effective information degree, $c_3$ is the base credit score, $c_4$ is the credit score, and $r_i^t$ is the contribution ratio relation between the participating nodes.
(3) Put the data on-chain, i.e. upload the entity model of the multi-attribute fused data to the blockchain.
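The entity-model assembly and on-chain write can be sketched as follows. Modelling the ledger write as a SHA-256 hash of a canonical JSON serialization is an assumption: a real deployment would submit the record through a blockchain client, and the field names below are illustrative:

```python
import hashlib
import json

def build_entity_model(contrib, eff_info, base, credit, contrib_ratios):
    """Assemble the multi-attribute entity model m = {c1, c2, c3, c4, r}."""
    return {"c1_contribution": contrib,
            "c2_effective_information": eff_info,
            "c3_base_credit": base,
            "c4_credit": credit,
            "r_contribution_ratios": contrib_ratios}

def chain_record(node_id, entity):
    """Deterministically serialize the entity model and hash it, standing in
    for the tamper-evident on-chain record of this node's attributes."""
    payload = json.dumps({"node": node_id, "entity": entity}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()
```

Because the serialization is sorted, the same entity model always hashes to the same value, which is the property that makes after-the-fact tampering detectable.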
As an optional embodiment, after storing the entity model on the blockchain, the method further includes:
querying, in the entity model on the blockchain, each participating node's contribution in every iteration round, and calculating each participating node's comprehensive contribution over the whole federated learning process; and sending corresponding incentive information to the participating nodes according to their comprehensive contributions.
In a specific implementation, after the last round of federated learning ends, each participating node's per-round contributions recorded in the blockchain's distributed ledger are queried over the whole federated learning process, and each node's comprehensive contribution is calculated according to a comprehensive contribution model.
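The ledger query and aggregation step can be sketched as below. Averaging the per-round contributions is an assumption; the patent refers to a "comprehensive contribution model" without fixing the aggregation formula, and the ledger is modelled here as a plain list of records:

```python
def comprehensive_contribution(ledger, node_id):
    """Query one node's per-round contributions from the (simulated) ledger
    and average them over all recorded rounds. The averaging rule is an
    illustrative stand-in for the comprehensive contribution model."""
    rounds = [rec["contribution"] for rec in ledger if rec["node"] == node_id]
    return sum(rounds) / len(rounds) if rounds else 0.0
```

Incentive information would then be sent to each node in proportion to this aggregate.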
Optionally, user identities are obtained, and the trusted query permissions over the entity models on the blockchain are determined separately for different user identities, so as to strengthen data privacy and security.
(1) An ordinary user may query all data information of a participating node by entering that node's identity code (entity-level trusted query), or query a single attribute of a participating node by entering the node's identity code and the attribute name (attribute-level trusted query).
(2) An administrator may both query and manage all data information of a participating node by entering that node's identity code.
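The two query granularities and the administrator-only management path can be sketched as follows; the role names and the in-memory dictionary standing in for the on-chain entity models are illustrative assumptions:

```python
def query(models, node_id, attribute=None):
    """Trusted query: with only an identity code, return the whole entity
    record (entity-level); with an attribute name as well, return that single
    field (attribute-level). Both ordinary users and administrators may query."""
    record = models[node_id]
    if attribute is None:
        return dict(record)                # entity-level trusted query
    return {attribute: record[attribute]}  # attribute-level trusted query

def manage(models, role, node_id, updates):
    """Record management is reserved for administrators."""
    if role != "admin":
        raise PermissionError("only administrators may manage node records")
    models[node_id].update(updates)
```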
Example two
An embodiment of the present invention further provides a blockchain-based federated learning trusted fusion incentive system, comprising:
an acquisition module, configured to acquire the base credit score of each participating node and preset node-level thresholds during multi-round federated learning;
a storage module, configured to update each participating node's credit score in the current iteration round according to its base credit score and its previous-round credit score, determine the node's level by comparing the node-level thresholds with the updated credit score, and isolate participating nodes of the preset level and record them on the blockchain; and to construct an entity model of multi-attribute fused data according to the participating nodes' updated credit scores and related attribute information, and store the entity model on the blockchain.
As an optional implementation, the system further comprises:
an incentive module, configured to query, in the entity model on the blockchain, each participating node's contribution in every iteration round, calculate each participating node's comprehensive contribution over the whole federated learning process, and send corresponding incentive information to the participating nodes according to their comprehensive contributions.
The blockchain-based federated learning trusted fusion incentive system provided in this embodiment implements the blockchain-based federated learning trusted fusion incentive method described above; for its specific implementation, reference may be made to the foregoing method embodiment, which is not repeated here.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its protection scope.
Claims (10)
1. A blockchain-based federated learning trusted fusion incentive method, characterized by comprising the following steps:
during multi-round federated learning, acquiring the base credit score of each participating node and preset node-level thresholds;
for each iteration round, updating each participating node's credit score in the current round according to its base credit score and its previous-round credit score, determining the node's level by comparing the node-level thresholds with the updated credit score, and isolating participating nodes of the preset level and recording them on the blockchain; and constructing an entity model of multi-attribute fused data according to the participating nodes' updated credit scores and related attribute information, and storing the entity model on the blockchain.
2. The blockchain-based federated learning trusted fusion incentive method according to claim 1, wherein the base credit score of a participating node is determined from the contribution of its local gradient to the federated global gradient and from the effective information degree of the local gradient; the contribution measures the consistency of the local gradient with the federated global gradient, and the effective information degree measures the gradient difference between the target participating node and the other participating nodes in the federated learning.
3. The blockchain-based federated learning trusted fusion incentive method of claim 2, wherein the contribution degree is the ratio of the number of parameters whose local update gradient sign is consistent with the sign of the previous round's global update gradient to the total number of model parameters aggregated in federated learning.
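Claim 3's ratio can be computed directly from the sign patterns of the two gradient vectors. This NumPy sketch (function name assumed) counts the parameters whose local-update sign matches the previous global update's sign:

```python
import numpy as np

def contribution_degree(local_grad, prev_global_grad):
    """Ratio of sign-consistent parameters to the total parameter count."""
    local = np.asarray(local_grad, dtype=float)
    global_prev = np.asarray(prev_global_grad, dtype=float)
    # A parameter counts as consistent when both updates share a sign.
    consistent = np.sum(np.sign(local) == np.sign(global_prev))
    return float(consistent) / local.size
```

For example, `contribution_degree([1, -2, 3, 4], [2, -1, -3, 5])` matches 3 of 4 signs, giving 0.75.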
4. The blockchain-based federated learning trusted fusion incentive method of claim 2, wherein the effective information degree is determined by:

E_i^t = ( Σ_{j=1, j≠i}^{n} d_ij^t ) / ( M·(n − 1) )

where E_i^t is the effective information degree of participating node i in the t-th iteration, M is the total number of model parameters in federated learning, n is the total number of participating nodes, and d_ij^t is the number of parameters whose signs differ between the round-t gradient updates of participating nodes i and j.
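Under the variable definitions in claim 4, the effective information degree averages, over the other n − 1 nodes, the fraction of the M parameters whose update signs differ between node i and node j. The normalization by M·(n − 1) below is an assumption consistent with those definitions, not a verbatim reproduction of the patent's formula:

```python
import numpy as np

def effective_information_degree(round_grads, i):
    """Sign-difference count between node i and every other node,
    normalized by M * (n - 1).

    round_grads: array of shape (n, M) holding each node's gradient
    update for the current round.
    """
    grads = np.asarray(round_grads, dtype=float)
    n, M = grads.shape
    # d_ij: number of parameters whose signs differ between nodes i and j.
    diff = sum(int(np.sum(np.sign(grads[i]) != np.sign(grads[j])))
               for j in range(n) if j != i)
    return diff / (M * (n - 1))
```

A node whose gradient agrees in sign with everyone else scores 0, so higher values indicate more distinctive (informative) updates.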
5. The blockchain-based federated learning trusted fusion incentive method of claim 1, wherein the credit of a participating node in the current round of iterative learning is updated according to:

m_i^t = m_i^{t−1} + Δm_i^t

where m_i^t is the credit of participating node i in the t-th iteration, Δm_i^t is the credit change value, m_i^0 is the basic credit, and t is the iteration number.
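This update rule amounts to adding a per-round change value to the previous credit, seeded by the basic credit. A minimal sketch follows, with the change values supplied externally (how the patent derives each Δ from the basic credit is not reproduced here):

```python
def update_credit(prev_credit, delta):
    """One step of m_i^t = m_i^{t-1} + delta_t."""
    return prev_credit + delta

def credit_series(base_credit, deltas):
    """Iterate the update over several rounds, starting from the basic credit."""
    history = [base_credit]  # m_i^0 is the basic credit
    for delta in deltas:
        history.append(update_credit(history[-1], delta))
    return history
```

The full history is kept so each round's credit can be compared against the node grade threshold.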
6. The blockchain-based federated learning trusted fusion incentive method of claim 1, wherein the related attribute information comprises the contribution degree, the effective information degree, the basic credit, and the contribution ratio relationships among the participating nodes.
7. The blockchain-based federated learning trusted fusion incentive method of claim 1, further comprising, after storing the entity model on the blockchain:
querying, in the entity model on the blockchain, the contribution degree of each participating node in each round of iterative learning, and calculating the comprehensive contribution degree of each participating node over the whole federated learning process; and sending corresponding incentive information to the participating nodes according to the comprehensive contribution degree.
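The incentive step can be sketched as a query over per-round contribution records followed by a pro-rata split of a reward pool. The record layout and function names are assumptions for illustration:

```python
def comprehensive_contribution(records, node):
    """Sum a node's per-round contribution degrees from queried records."""
    return sum(r["contributions"].get(node, 0.0) for r in records)

def allocate_incentives(records, nodes, total_reward):
    """Split total_reward across nodes in proportion to their
    comprehensive contribution over all rounds."""
    totals = {n: comprehensive_contribution(records, n) for n in nodes}
    pool = sum(totals.values())
    return {n: total_reward * totals[n] / pool for n in nodes}
```

Here `records` stands in for the entity-model entries queried from the chain; isolated nodes naturally receive less because they stop accumulating contribution after isolation.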
8. The blockchain-based federated learning trusted fusion incentive method of claim 7, further comprising: acquiring user identities, and determining, for different user identities, trusted query permissions on the entity model stored on the blockchain.
9. A blockchain-based federated learning trusted fusion incentive system, characterized in that it comprises:
an acquisition module, configured to acquire, in a multi-round federated learning process, the basic credit of each participating node and a preset node grade threshold;
a storage module, configured to: for each round of iterative learning, update the credit of each participating node in the current round according to its basic credit and its credit from the previous round; determine the grade of each participating node by comparing the updated credit with the node grade threshold; isolate participating nodes of a preset grade and record the isolated participating nodes on the blockchain; and construct an entity model of multi-attribute fusion data from the updated credits and the related attribute information of the participating nodes, and store the entity model on the blockchain.
10. The blockchain-based federated learning trusted fusion incentive system of claim 9, further comprising:
an incentive module, configured to query, in the entity model on the blockchain, the contribution degree of each participating node in each round of iterative learning, calculate the comprehensive contribution degree of each participating node over the whole federated learning process, and send corresponding incentive information to the participating nodes according to the comprehensive contribution degree.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211185889.6A CN115296927B (en) | 2022-09-28 | 2022-09-28 | Block chain-based federal learning credible fusion excitation method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211185889.6A CN115296927B (en) | 2022-09-28 | 2022-09-28 | Block chain-based federal learning credible fusion excitation method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115296927A true CN115296927A (en) | 2022-11-04 |
CN115296927B CN115296927B (en) | 2023-01-06 |
Family
ID=83834432
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211185889.6A Active CN115296927B (en) | 2022-09-28 | 2022-09-28 | Block chain-based federal learning credible fusion excitation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115296927B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200193292A1 (en) * | 2018-12-04 | 2020-06-18 | Jinan University | Auditable privacy protection deep learning platform construction method based on block chain incentive mechanism |
CN112348204A (en) * | 2020-11-05 | 2021-02-09 | 大连理工大学 | Safe sharing method for marine Internet of things data under edge computing framework based on federal learning and block chain technology |
CN112784994A (en) * | 2020-12-31 | 2021-05-11 | 浙江大学 | Block chain-based federated learning data participant contribution value calculation and excitation method |
CN113467927A (en) * | 2021-05-20 | 2021-10-01 | 杭州趣链科技有限公司 | Block chain based trusted participant federated learning method and device |
WO2021208720A1 (en) * | 2020-11-19 | 2021-10-21 | 平安科技(深圳)有限公司 | Method and apparatus for service allocation based on reinforcement learning |
CN114154649A (en) * | 2021-12-06 | 2022-03-08 | 浙江师范大学 | High-quality federal learning system and method based on block chain and credit mechanism |
CN114327889A (en) * | 2021-12-27 | 2022-04-12 | 吉林大学 | Model training node selection method for layered federated edge learning |
CN114580658A (en) * | 2021-12-28 | 2022-06-03 | 天翼云科技有限公司 | Block chain-based federal learning incentive method, device, equipment and medium |
CN114626934A (en) * | 2022-02-08 | 2022-06-14 | 天津大学 | Block chain-based multi-level wind control system and control method |
US20220255764A1 (en) * | 2021-02-06 | 2022-08-11 | SoterOne, Inc. | Federated learning platform and machine learning framework |
CN115099417A (en) * | 2022-06-28 | 2022-09-23 | 贵州大学 | Multi-factor federal learning incentive mechanism based on Starkeberg game |
- 2022-09-28 CN CN202211185889.6A patent/CN115296927B/en active Active
Non-Patent Citations (2)
Title |
---|
SHILI HU: "The Blockchain-Based Edge Computing Framework for Privacy-Preserving Federated Learning", 《2021 IEEE INTERNATIONAL CONFERENCE ON BLOCKCHAIN (BLOCKCHAIN)》 * |
LI ZHENG (李铮): "A Data Joint Utilization System Scheme Supporting Privacy and Rights Protection", 《信息与电脑(理论版) (Information & Computer (Theory Edition))》 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116304876A (en) * | 2023-03-27 | 2023-06-23 | 烟台大学 | Block chain-based industrial Internet platform operation method, system and equipment |
CN116304876B (en) * | 2023-03-27 | 2024-01-23 | 烟台大学 | Block chain-based industrial Internet platform operation method, system and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN115296927B (en) | 2023-01-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wang et al. | A blockchain based privacy-preserving incentive mechanism in crowdsensing applications | |
Hu et al. | REPLACE: A reliable trust-based platoon service recommendation scheme in VANET | |
CN114297722B (en) | Privacy protection asynchronous federal sharing method and system based on block chain | |
CN115102763B (en) | Multi-domain DDoS attack detection method and device based on trusted federal learning | |
CN115510494B (en) | Multiparty safety data sharing method based on block chain and federal learning | |
CN112395640A (en) | Industry Internet of things data lightweight credible sharing technology based on block chain | |
CN112416986B (en) | User portrait realizing method and system based on hierarchical personalized federal learning | |
Sedlmeir et al. | Recent developments in blockchain technology and their impact on energy consumption | |
CN115296927B (en) | Block chain-based federal learning credible fusion excitation method and system | |
CN108197959A (en) | A kind of fast verification pond based on block chain, fast verification system and operating method | |
CN105005874A (en) | Examination method and system of network administrator | |
CN116383869A (en) | Agricultural product supply chain credible traceability model based on PBFT consensus mechanism and implementation method | |
CN116258420A (en) | Product quality detection method, device, terminal equipment and medium | |
CN113452681B (en) | Internet of vehicles crowd sensing reputation management system and method based on block chain | |
Montazeri et al. | Distributed mechanism design in continuous space for federated learning over vehicular networks | |
CN113283778A (en) | Layered convergence federated learning method based on security evaluation | |
CN115640305B (en) | Fair and reliable federal learning method based on blockchain | |
CN116451806A (en) | Federal learning incentive distribution method and device based on block chain | |
CN116389478A (en) | Four-network fusion data sharing method based on blockchain and federal learning | |
Krasnokutskaya | Identification and Estimation of Auction Model with Two‐Dimensional Unobserved Heterogeneity | |
CN110517401A (en) | A kind of ballot statistical method of panorama block chain | |
CN114462091A (en) | Block chain crowdsourcing platform design and implementation method for guaranteeing transaction fairness and data privacy | |
CN113946879A (en) | Wide area resource scheduling system based on cloud platform and block chain | |
Wang et al. | Security Research in Personnel Electronic File Management Based on Blockchain Technology | |
CN114584311A (en) | Reputation-based safe dynamic intelligent parking space sharing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||