CN111930698A - Data security sharing method based on Hash diagram and federal learning - Google Patents

Data security sharing method based on Hash diagram and federal learning

Info

Publication number: CN111930698A
Authority: CN (China)
Prior art keywords: node, model, witness, event, round
Legal status: Granted (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202010625680.1A
Other languages: Chinese (zh)
Other versions: CN111930698B (en)
Inventor: 张秀贤
Current assignee: Nanjing Xiaozhuang University (listed assignees may be inaccurate)
Original assignee: Nanjing Xiaozhuang University
Application filed by Nanjing Xiaozhuang University
Priority to CN202010625680.1A; application granted and published as CN111930698B
Current legal status: Active

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06F: ELECTRIC DIGITAL DATA PROCESSING
                • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
                    • G06F16/10: File systems; File servers
                        • G06F16/17: Details of further file system functions
                            • G06F16/176: Support for shared access to files; File sharing support
                    • G06F16/20: Information retrieval of structured data, e.g. relational data
                        • G06F16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
            • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N20/00: Machine learning
                    • G06N20/20: Ensemble learning
        • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
            • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
                • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
                    • G16H50/80: ICT for detecting, monitoring or modelling epidemics or pandemics, e.g. flu

Abstract

The data security sharing method based on hashgraph and federated learning adds verification of the federated-learning local model to the hashgraph consensus algorithm (a blockchain 3.0 technology), preventing dishonest nodes from providing erroneous models, while the federated learning data model is realized by weighted aggregation of the local models. Specifically: 1) verification of the federated-learning local model is added to the hashgraph consensus algorithm, so that dishonest nodes are prevented from providing erroneous models; 2) the hashgraph dishonest-node detection process mainly comprises generating an event, spreading it by gossip communication, and reaching consensus by the virtual voting algorithm, so that the data security sharing model based on hashgraph and federated learning successfully detects dishonest nodes during federated learning.

Description

Data security sharing method based on Hash diagram and federal learning
Technical Field
The invention presents a data security sharing method based on hashgraph and federated learning, suitable for mobile edge computing networks, and belongs to the technical field of information communication.
Background
With COVID-19 spreading out of control, tracing the contacts of COVID-19 patients has become particularly important. Tracing requires statistical analysis of the tracks of COVID-19 patients, the tracks of their close contacts, and the people who may become contacts, which involves the private data of many users; such data cannot simply be uploaded to the cloud for model training. Federated learning solves this problem: a central model is obtained on a server by aggregating models trained locally on the clients [1]. In federated learning, distributed local devices compute local models from local data samples and send them to a central server, which trains the shared model by aggregating the local models from different devices [2]. Because the raw data stay on the local devices throughout training, user privacy is effectively protected. Federated learning thus achieves both data sharing and privacy protection, but it also has limitations. First, there is no guarantee that the learned model is not leaked while being transmitted over the network. Second, dishonest users can harm the learning model by providing low-quality local models. In addition, users lack incentives to contribute their own computing resources and data to federated learning. Finally, there is network overload: the large number of models transmitted simultaneously during federated learning can overload a bandwidth-limited network. In recent years, many researchers have combined federated learning with blockchain to address these problems.
In paper [3], a blockchain stores retrieval data and access rights, preventing malicious users from tampering with the model, and a differential privacy algorithm protects personal data; however, the random noise that differential privacy injects sharply reduces data availability. Paper [4] proposes a new solution combining federated learning with blockchain channels: federated learning requests are made within the same channel to protect the personal privacy of users in different channels, but privacy protection among users within the same channel is not addressed. Paper [5] couples blockchain and federated learning to ensure the privacy of user data: the trained learning model parameters are stored securely and immutably on the blockchain to prevent unauthorized access and malicious behavior. Kim et al. [6] propose a blockchain-based federated learning scheme that uses the natural transaction properties and immutable ledger of the blockchain to provide incentives and to prevent malicious users from altering the model; they also present a joint learning model that converges quickly and stably to the target accuracy, reducing network overload. While there are several such FL-on-blockchain studies, none of them considers dishonest model providers, who strongly affect the accuracy and reliability of the learned model. Detecting whether a client participating in learning is a dishonest model provider has therefore become a pressing problem for the development of the Internet of Things.
Therefore, the invention provides a hashgraph-based method for vetting model providers, eliminating the adverse influence of dishonest model providers on model generation during federated learning.
Disclosure of Invention
The technical problem is as follows: the invention provides a data security sharing method based on hashgraph and federated learning for the COVID-19 epidemic prevention and control process; it removes the adverse effect of dishonest model providers on model generation during federated learning and improves the accuracy of federated training. The invention is applied to the statistical analysis of the movement tracks of COVID-19 patients, the tracks of their close contacts, and the people who may become contacts, so that patient contacts can be counted and traced quickly.
The technical scheme is as follows: the invention provides a data security sharing method based on hashgraph and federated learning for the COVID-19 epidemic prevention and control process, comprising the following steps:
(1) The method of adding verification of the federated-learning local model to the hashgraph consensus algorithm (a blockchain 3.0 technology) to prevent dishonest nodes from providing erroneous models is as follows:

The system is divided into a blockchain platform and a communication network. To reduce consensus time and avoid network overload, the blockchain platform adopts a hashgraph. Specifically, the blockchain platform records local-model retrieval indexes (the raw model parameters remain stored on the local devices), the availability of each local model, and all shared-data events, so that data usage can be tracked for later auditing. The communication network is responsible for data transmission.

Any user who wishes to provide data sharing services may apply to join the blockchain platform. A data sharing requester sends a request to the platform, which checks whether the request has been processed before. If so, the request is forwarded to the node that caches the result, and the cached result is returned to the requester as the response. If not, a new federated learning request, carrying its data categories and incentive scheme, is published on the blockchain, and every node in the blockchain chooses whether to join the federated learning task according to how well its own data matches the requested data and the incentive scheme. All nodes that participate in federated learning are regarded as committee nodes and are responsible for driving consensus on the blockchain.

Each client trains a local model on its local data, propagates the model through the blockchain network with the gossip algorithm, and has the model's accuracy voted on with the virtual voting algorithm. The voting results of all the other users in the blockchain are collected: if more than 1/2 of the participants vote the model reliable, the model provider is considered trustworthy and the model is usable; otherwise the provider is removed from the blockchain. Finally, the reliable local models are aggregated with weights to obtain the federated learning training model.
(2) The hashgraph dishonest-node detection process consists of generating an event, spreading it by gossip communication, and reaching consensus by the virtual voting algorithm, as follows:

(2-1) Generating the dishonest-node-detection event. Each event contains a timestamp, a digital signature, the hash of the node's own parent event (the self-parent), the hash of the other node's parent event, and the event content: the data type, the local model index, the number of approval votes received, and whether the local model is valid.

(2-2) The gossip protocol. After a local node generates an event, it randomly selects another node as the destination and sends it the data that the local node knows but the selected node does not. When a node receives data containing new information, it first executes all new, not-yet-executed transactions, checks the availability of the local model and votes on it, and then repeats the same process with another randomly chosen node, until every node has received the event.
(2-3) The virtual voting algorithm is divided into determining the round, determining the famous witnesses and collecting model-reliability votes, and determining the round received and the consensus timestamp:

Determining the round: the first event sent by a node is a witness event, and it begins a round r for that node. Suppose that after receiving event X from node A, node B selects node C as the receiver, creates an event Y (containing the data B knows and C does not), and sends Y to C. Before creating Y, node B checks whether a new round should begin: if event X can see most of the witnesses of round r, then Y starts round r+1 and is a witness of round r+1; otherwise Y remains in round r.

Determining the famous witnesses and collecting model-reliability votes: whether a round-r witness is famous is voted on by the round-(r+1) witnesses and tallied by the round-(r+2) witnesses, which also tally whether the local model carried by the round-r witness event is reliable. If node B's round-(r+1) witness can see node A's round-r witness, it casts a "famous" vote for A's witness. Node C's round-(r+2) witness collects, through the round-(r+1) witnesses it can strongly see (node B's or other nodes'), the votes declaring A's witness famous; when the count exceeds 2/3 of the number of nodes, A's witness is famous. When the reliability votes collected for a local model exceed 1/2 of the number of nodes, the local model is valid.

Determining the round received and the consensus timestamp: once every round-r witness has been decided famous or not, an event seen by all the famous round-r witnesses has round received r. For an event x, take at each node that can see x the earliest event through which that node learned of x. For example, if x is created at node A and is visible to nodes A, B, and C, then node A sees x earliest in x itself, and at nodes B and C the first events that carried x to them are taken; the median of the timestamps of these three events is the consensus timestamp of x. The consensus timestamp, the round received, the number of votes the local model obtained, and whether the local model is valid are stored on the blockchain.
(3) The federated learning model is obtained by multiplying every local model by a weighting coefficient, and the weighting coefficient is improved herein as follows. As shown in FIG. 4, the local model w_i(t) is computed with a deep learning algorithm and encrypted with a homomorphic encryption algorithm to obtain w'_i(t), which is sent to the hashgraph for detection. If w'_i(t) is detected as coming from a dishonest provider, then m_i = 0; otherwise m_i = 1. The weighting coefficient of w'_i(t) is:

[weighting-coefficient equation, rendered only as an image in the original; per the claims it combines the data-volume ratio k_i/k and the vote ratio N_i/I, gated by m_i]

where k_i is the local data volume of the i-th model provider, k is the total data volume of all model providers, N_i is the number of approval votes w'_i(t) received, and I is the number of model providers. The federated learning model is:

[federated-learning aggregation equation, rendered only as an image in the original: the weighted combination of the w'_i(t)]
the invention has the advantages that: the invention is applied to the statistical analysis of the moving track of the new crown patient, the action track of the close contacter and the contacters which are possibly the new crown patient in order to quickly count and track the contacters of the new crown patient. The invention realizes the successful detection of the dishonest nodes in the federal learning process. By adding detection on the federal learning local model into the block chain 3.0 technology hashgraph consensus algorithm, an error model provided by a dishonest node can be successfully detected, and the convergence speed of the model is improved. The invention provides a method for realizing a data model of federal learning by carrying out weighted aggregation on a local model, and the accuracy of model training is improved.
The invention provides a data security sharing model based on hashgraph and federal learning, and the data model privacy of local users is protected.
Secondly, a method for detecting the federal learning local model provider based on the hashgraph is provided, the problem that the dishonest model provider has adverse effect on model generation in the federal learning process is solved, the accuracy of federal learning is improved, the learning time is reduced, and network overload is effectively prevented.
And thirdly, the weighting coefficient of the model in the federal learning is improved, and the learning precision is improved.
Drawings
FIG. 1 is the data security sharing model based on hashgraph and federated learning.
FIG. 2 is the event structure.
FIG. 3 is the gossip protocol.
FIG. 4 is the federated learning model diagram.
Detailed Description
The invention provides a data security sharing model based on hashgraph and federated learning for the COVID-19 epidemic prevention and control process, so that dishonest nodes are successfully detected during federated learning. By adding verification of the federated-learning local model to the hashgraph consensus algorithm (a blockchain 3.0 technology), erroneous models provided by dishonest nodes are detected and the convergence speed of the model is improved. The federated learning data model is realized by weighted aggregation of the local models, whose weighting coefficient mainly comprises two ratios: the local model's data volume over the total data volume, and the approval votes it received over the total number of participating model clients; this improves the accuracy of model training.
(1) The method of adding verification of the federated-learning local model to the hashgraph consensus algorithm (a blockchain 3.0 technology) to prevent dishonest nodes from providing erroneous models is as follows:
the system can be divided into two parts of a block chain platform and a communication network, as shown in figure 1. In order to reduce consensus time and avoid network overload, the blockchain platform adopts a hashgraph. In particular, the blockchain platform is used to record local model retrieval (raw model parameters stored in local devices), availability of local models, and all shared data events that can track data usage for further auditing. The communication network is responsible for data communication.
Any user who wishes to provide data sharing services may apply to join the blockchain platform. A data sharing requester sends a request to the platform, which checks whether the request has been processed before. If so, the request is forwarded to the node that caches the result, and the cached result is returned to the requester as the response. If not, a new federated learning request, carrying its data categories and incentive scheme, is published on the blockchain, and every node in the blockchain chooses whether to join the federated learning task according to how well its own data matches the requested data and the incentive scheme. All nodes that participate in federated learning are regarded as committee nodes and are responsible for driving consensus on the blockchain.
Each client trains a local model on its local data, propagates the model through the blockchain network with the gossip algorithm, and has the model's accuracy voted on with the virtual voting algorithm. The voting results of all the other users in the blockchain are collected: if more than 1/2 of the participants vote the model reliable, the model provider is considered trustworthy and the model is usable; otherwise the provider is removed from the blockchain. Finally, the reliable local models are aggregated with weights to obtain the federated learning training model.
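The acceptance rule above, keeping a provider only when more than half of the participants vote its model reliable, can be sketched as follows; the function name and vote counts are illustrative, not part of the patent.

```python
def filter_reliable_models(votes: dict[str, int], n_participants: int) -> set[str]:
    """Return the providers whose local models received more than
    half of the participants' reliability votes; the rest would be
    removed from the blockchain."""
    return {
        provider for provider, n_yes in votes.items()
        if n_yes > n_participants / 2
    }

# yes-votes collected per provider among 6 participants
votes = {"A": 5, "B": 2, "C": 4}
reliable = filter_reliable_models(votes, n_participants=6)
print(sorted(reliable))  # ['A', 'C']
```

Only the models in `reliable` would then enter the weighted aggregation step.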
(2) The hashgraph dishonest-node detection process consists of generating an event, spreading it by gossip communication, and reaching consensus by the virtual voting algorithm, as follows:
(2-1) Generating the dishonest-node-detection event, as shown in FIG. 2. Each event contains a timestamp, a digital signature, the hash of the node's own parent event (the self-parent), the hash of the other node's parent event, and the event content: the data type, the local model index, the number of approval votes received, and whether the local model is valid.
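The event structure of FIG. 2 can be sketched as a small record; all field names here are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass

@dataclass
class Event:
    # header fields
    timestamp: float          # creation time at the local node
    signature: bytes          # creator's digital signature
    self_parent_hash: str     # hash of the creator's previous event
    other_parent_hash: str    # hash of the event received from the peer
    # event content
    data_type: str            # type of the shared data
    model_index: int          # retrieval index of the local model
    votes: int                # approval votes granted so far
    model_valid: bool         # whether the local model is valid

e = Event(timestamp=1.0, signature=b"sig", self_parent_hash="h1",
          other_parent_hash="h2", data_type="trajectory",
          model_index=0, votes=3, model_valid=True)
print(e.votes, e.model_valid)  # 3 True
```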
(2-2) The gossip protocol, as shown in FIG. 3. After a local node generates an event, it randomly selects another node as the destination and sends it the data that the local node knows but the selected node does not. When a node receives data containing new information, it first executes all new, not-yet-executed transactions, checks the availability of the local model and votes on it, and then repeats the same process with another randomly chosen node, until every node has received the event.
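A minimal sketch of this gossip spreading: each node repeatedly picks a random peer and hands over the events it knows that the peer does not, until every node holds the event. The data structures are illustrative; model checking and voting are omitted.

```python
import random

def gossip_round(known: dict[str, set[str]], rng: random.Random) -> None:
    """One gossip step per node: each sender picks a random peer and the
    peer learns every event the sender knows that it did not."""
    nodes = list(known)
    for sender in nodes:
        peer = rng.choice([n for n in nodes if n != sender])
        known[peer] |= known[sender]   # transfer only the missing events

# node A starts with event "x"; B and C know nothing yet
known = {"A": {"x"}, "B": set(), "C": set()}
rng = random.Random(0)
while not all(events >= {"x"} for events in known.values()):
    gossip_round(known, rng)
print(all("x" in events for events in known.values()))  # True
```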
(2-3) The virtual voting algorithm is divided into determining the round, determining the famous witnesses and collecting model-reliability votes, and determining the round received and the consensus timestamp:
Determining the round: the first event sent by a node is a witness event, and it begins a round r for that node. Suppose that after receiving event X from node A, node B selects node C as the receiver, creates an event Y (containing the data B knows and C does not), and sends Y to C. Before creating Y, node B checks whether a new round should begin: if event X can see most of the witnesses of round r, then Y starts round r+1 and is a witness of round r+1; otherwise Y remains in round r.
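A minimal sketch of this round-advance rule; "can see most of the round-r witnesses" is reduced here to a simple majority count, and the names are illustrative assumptions.

```python
def round_of_new_event(parent_round: int, seen_witnesses: int,
                       total_witnesses: int) -> tuple[int, bool]:
    """Return (round, is_witness) for a newly created event whose state
    is in round `parent_round` and which can see `seen_witnesses` of
    that round's `total_witnesses` witnesses."""
    if seen_witnesses > total_witnesses / 2:   # sees most round-r witnesses
        return parent_round + 1, True          # starts round r+1 as its witness
    return parent_round, False                 # stays in round r

print(round_of_new_event(3, 4, 5))  # (4, True)
print(round_of_new_event(3, 2, 5))  # (3, False)
```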
Determining the famous witnesses and collecting model-reliability votes: whether a round-r witness is famous is voted on by the round-(r+1) witnesses and tallied by the round-(r+2) witnesses, which also tally whether the local model carried by the round-r witness event is reliable. If node B's round-(r+1) witness can see node A's round-r witness, it casts a "famous" vote for A's witness. Node C's round-(r+2) witness collects, through the round-(r+1) witnesses it can strongly see (node B's or other nodes'), the votes declaring A's witness famous; when the count exceeds 2/3 of the number of nodes, A's witness is famous. When the reliability votes collected for a local model exceed 1/2 of the number of nodes, the local model is valid.
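The two thresholds above can be stated directly in code; this is a sketch of the decision rules only (vote collection via strongly-seen witnesses is not modelled), with illustrative names.

```python
def witness_is_famous(fame_votes: int, n_nodes: int) -> bool:
    """A witness is famous when its collected fame votes exceed
    two thirds of the number of nodes."""
    return fame_votes > 2 * n_nodes / 3

def local_model_is_valid(reliable_votes: int, n_nodes: int) -> bool:
    """A local model is valid when its reliability votes exceed
    half of the number of nodes."""
    return reliable_votes > n_nodes / 2

print(witness_is_famous(7, 9))      # True  (7 > 6)
print(local_model_is_valid(4, 9))   # False (4 < 4.5)
```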
Determining the round received and the consensus timestamp: once every round-r witness has been decided famous or not, an event seen by all the famous round-r witnesses has round received r. For an event x, take at each node that can see x the earliest event through which that node learned of x. For example, if x is created at node A and is visible to nodes A, B, and C, then node A sees x earliest in x itself, and at nodes B and C the first events that carried x to them are taken; the median of the timestamps of these three events is the consensus timestamp of x. The consensus timestamp, the round received, the number of votes the local model obtained, and whether the local model is valid are stored on the blockchain.
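The consensus-timestamp rule reduces to a median; a sketch with illustrative timestamps (the event itself at node A, and the first events that carried x to nodes B and C):

```python
from statistics import median

def consensus_timestamp(first_carrier_timestamps: list[float]) -> float:
    """Median of the timestamps of the first event through which each
    node that can see x learned of x."""
    return median(first_carrier_timestamps)

# x created at A at t=10.0, first reached B at t=14.0 and C at t=11.0
print(consensus_timestamp([10.0, 14.0, 11.0]))  # 11.0
```

Using the median rather than any single node's clock keeps the agreed timestamp robust to a minority of skewed or adversarial timestamps.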
(3) The federated learning model is obtained by multiplying every local model by a weighting coefficient, and the weighting coefficient is improved herein as follows. As shown in FIG. 4, the local model w_i(t) is computed with a deep learning algorithm and encrypted with a homomorphic encryption algorithm to obtain w'_i(t), which is sent to the hashgraph for detection. If w'_i(t) is detected as coming from a dishonest provider, then m_i = 0; otherwise m_i = 1. The weighting coefficient of w'_i(t) is:

[weighting-coefficient equation, rendered only as an image in the original; per the claims it combines the data-volume ratio k_i/k and the vote ratio N_i/I, gated by m_i]

where k_i is the local data volume of the i-th model provider, k is the total data volume of all model providers, N_i is the number of approval votes w'_i(t) received, and I is the number of model providers. The federated learning model is:

[federated-learning aggregation equation, rendered only as an image in the original: the weighted combination of the w'_i(t)]
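The weighting equations are rendered only as images in the original, so the following is a hypothetical reading, not the patent's exact formula: each surviving model's weight combines its data share k_i/k and its vote share N_i/I, is gated by m_i (0 for a detected dishonest provider), and is normalised so the weights sum to 1.

```python
def federated_aggregate(models, k, N, m):
    """models: local models w'_i(t) as parameter lists; k: per-provider
    data volumes; N: per-model approval votes; m: 0/1 honesty flags
    (0 = detected dishonest provider, excluded from aggregation)."""
    I = len(models)
    k_total = sum(k)
    # assumed combination: data share plus vote share, zeroed for
    # dishonest providers, then normalised to sum to 1
    raw = [mi * (ki / k_total + Ni / I) for ki, Ni, mi in zip(k, N, m)]
    total = sum(raw)
    p = [r / total for r in raw]
    dim = len(models[0])
    return [sum(p[i] * models[i][j] for i in range(I)) for j in range(dim)]

# provider 3 was flagged dishonest (m=0), so only models 1 and 2 count
w = federated_aggregate(models=[[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]],
                        k=[10, 30, 60], N=[3, 2, 3], m=[1, 1, 0])
print([round(x, 4) for x in w])  # [1.4677, 1.4677]
```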

Claims (2)

1. The data security sharing method based on hashgraph and federated learning is characterized in that verification of the federated-learning local model is added to the hashgraph consensus algorithm (a blockchain 3.0 technology), so that dishonest nodes are prevented from providing erroneous models, while the federated learning data model is realized by weighted aggregation of the local models, specifically comprising the following steps:

(1) adding verification of the federated-learning local model to the hashgraph consensus algorithm to prevent dishonest nodes from providing erroneous models, as follows:

each client trains a local model on its local data, propagates the model through the blockchain network with the gossip algorithm, and has the model's accuracy voted on with the virtual voting algorithm; the voting results of all the other users in the blockchain are collected; if more than 1/2 of the participants vote the model reliable, the model provider is considered trustworthy and the model is usable, otherwise the provider is removed from the blockchain; finally, the reliable local models are aggregated with weights to obtain the federated learning training model;
(2) the hashgraph dishonest-node detection process consists of generating an event, spreading it by gossip communication, and reaching consensus by the virtual voting algorithm, as follows:

(2-1) generating the dishonest-node-detection event: each event comprises a timestamp, the hash of the node's own parent event, the hashes of other nodes' parent events, and the event content, which comprises the data type, the local model index, the number of approval votes received, and whether the local model is valid;

(2-2) the gossip protocol: after a local node generates an event, it randomly selects another node as the destination and sends it the data that the local node knows but the selected node does not; when a node receives data containing new information, it first executes all new, not-yet-executed transactions, checks the availability of the local model and votes on it, and then repeats the same process with another randomly chosen node until every node has received the event;
the main process of the virtual voting algorithm (2-3) is mainly divided into: determining the round, determining the famous witnesses, collecting reliable votes of the models, and determining the number of consensus rounds and the consensus time:
determining the turn: the first event sent by a node is a witness event, and the witness event is the beginning of a round (r) of the node; assuming that after receiving an event X sent by a node a, a node B selects a node C as a receiving node, the node B creates an event Y (including data known by the node B and unknown by the node C) and sends the event Y to the node C, before creating the event Y, the node B should check whether a new round needs to be started, if the event X can see most witnesses in round r, the event Y is the start of round r +1, and Y is a witness in round r + 1; otherwise, event Y remains in round r;
secondly, the witness determination and the model reliability voting collection are carried out: when judging whether the witness of the R-th round is a known witness, judging by the witness of the R + 1-th round, and counting whether the witness is the known witness and whether the local model included in the witness event of the R + 2-th round is reliable by the witness of the R + 2-th round; if the witness of the node B of the r +1 round can see the witness of the node A of the r round, the witness of the node B of the r +1 round sends a known witness ticket to the witness of the node A of the r round; the witness of the node C of the round r +2 collects the number of tickets which can be strongly seen by the witness of the node B of the round r +1 (or other nodes) and proves that the node A is a known witness, and when the number of tickets exceeds two thirds of the number of nodes, the witness of the node A is the known witness; when the number of reliable tickets of the collected local model exceeds 1/2 node numbers, the local model is valid;
determining the number of consensus rounds and the consensus time: when the witnesses in the r-th round determine whether the witnesses are known witnesses, the receiving round of the events which can be seen by all the witness in the r-th round is r; event x to each witness node where it is visible, the earliest visible event for x, such as: if the event x is in the node A, the node B and the node B can see the x, the node A can see the x at the earliest, the node A can see the x as the x, the node B can transmit the x to the node B for the first time, the node C and the node B can find out the median of the timestamps of the three events as the consensus timestamp of the event x, and the consensus timestamp, the consensus round number, the vote number obtained by a local model and whether the local model is effective or not are stored in a block chain;
(3) multiplying all local models by a weighting coefficient to obtain a federal learning model, wherein the weighting coefficient mainly comprises the following two parts: the ratio of the local model data volume to the total data volume, and the ratio of the approval ticket number obtained by the local model to the total number of participating model clients.
2. The method of claim 1, wherein the local model w_i(t) is computed using a deep learning algorithm, a homomorphic encryption algorithm is applied to w_i(t) to obtain w′_i(t), and w′_i(t) is sent to the hashgraph for detection; if w′_i(t) is detected to come from a dishonest provider, then m_i = 0, otherwise m_i = 1; the weight coefficient of w′_i(t) is:

p_i = m_i · ( k_i / (2k) + N_i / (2I) )

wherein k_i is the local data volume of the i-th model provider, k is the total data volume of all model providers, N_i is the number of votes obtained by w′_i(t), and I is the number of model providers; the federated learning model is:

w(t+1) = Σ_{i=1}^{I} p_i · w′_i(t)
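The aggregation in claim 2 can be sketched numerically as follows. This is an illustrative sketch under the assumption that the two ratios are averaged with equal 1/2 weights (the original equation is an image, so this normalization is an assumption); plain floats stand in for the encrypted model parameters, and all names are hypothetical.

```python
# Hypothetical sketch of the claim-2 aggregation:
#   p_i = m_i * (k_i / (2k) + N_i / (2I))
# where m_i flags honest providers (0 for dishonest), k_i is provider i's
# local data volume, N_i its approval-vote count, and I the provider count.

def aggregate(models, m, data, votes):
    I = len(models)        # number of model providers
    k = sum(data)          # total data volume across all providers
    weights = [m[i] * (data[i] / (2 * k) + votes[i] / (2 * I))
               for i in range(I)]
    # Federated model: weighted sum of the (stand-in) local models.
    return sum(p * w for p, w in zip(weights, models))

# Three providers; provider 2 was flagged dishonest (m=0), so its model
# contributes nothing regardless of its data volume.
global_model = aggregate(models=[1.0, 2.0, 3.0],
                         m=[1, 1, 0],
                         data=[100, 300, 100],
                         votes=[2, 3, 0])
print(round(global_model, 4))  # 0.4333*1.0 + 0.8*2.0 = 2.0333
```

The m_i flag gives dishonest providers a weight of exactly zero, so a model rejected by the hashgraph detection never enters the federated average.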
CN202010625680.1A 2020-07-01 2020-07-01 Data security sharing method based on hash map and federal learning Active CN111930698B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010625680.1A CN111930698B (en) 2020-07-01 2020-07-01 Data security sharing method based on hash map and federal learning

Publications (2)

Publication Number Publication Date
CN111930698A true CN111930698A (en) 2020-11-13
CN111930698B CN111930698B (en) 2024-03-15

Family

ID=73317444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010625680.1A Active CN111930698B (en) 2020-07-01 2020-07-01 Data security sharing method based on hash map and federal learning

Country Status (1)

Country Link
CN (1) CN111930698B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112395640A (en) * 2020-11-16 2021-02-23 国网河北省电力有限公司信息通信分公司 Industry Internet of things data lightweight credible sharing technology based on block chain
CN112468565A (en) * 2020-11-19 2021-03-09 江苏省测绘资料档案馆 System for managing space data integrity and tracking shared flow based on block chain
CN112749392A (en) * 2021-01-07 2021-05-04 西安电子科技大学 Method and system for detecting abnormal nodes in federated learning
CN113139884A (en) * 2021-03-26 2021-07-20 青岛亿联信息科技股份有限公司 Intelligent building management system method, system, storage medium and electronic equipment
CN113420323A (en) * 2021-06-04 2021-09-21 国网河北省电力有限公司信息通信分公司 Data sharing method and terminal equipment
CN113626530A (en) * 2021-09-03 2021-11-09 杭州复杂美科技有限公司 Block generation method, computer device and storage medium
CN115062320A (en) * 2022-04-26 2022-09-16 西安电子科技大学 Privacy protection federal learning method, device, medium and system of asynchronous mechanism

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110262819A (en) * 2019-06-04 2019-09-20 深圳前海微众银行股份有限公司 A kind of the model parameter update method and device of federal study
WO2019232789A1 (en) * 2018-06-08 2019-12-12 北京大学深圳研究生院 Voting-based consensus method
CN110572253A (en) * 2019-09-16 2019-12-13 济南大学 Method and system for enhancing privacy of federated learning training data
CN111125779A (en) * 2019-12-17 2020-05-08 山东浪潮人工智能研究院有限公司 Block chain-based federal learning method and device

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
KIM, H等: "Blockchained On-Device Federated Learning", IEEE COMMUNICATIONS LETTERS, vol. 24, no. 6, pages 1279 - 1283, XP011792047, DOI: 10.1109/LCOMM.2019.2921755 *
XIUXIAN ZHANG等: "Hashgraph Based Federated Learning for Secure Data Sharing", WISATS 2020: WIRELESS AND SATELLITE SYSTEMS, pages 556 - 565 *
Y. LU等: "Blockchain and Federated Learning for Privacy-Preserved Data Sharing in Industrial IoT", IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, vol. 16, no. 6, pages 4177 - 4186, XP011777066, DOI: 10.1109/TII.2019.2942190 *
YANG CHEN等: "Communication-Efficient Federated Deep Learning with Asynchronous Model Update and Temporally Weighted Aggregation", ARXIV:1903.07424, pages 1 - 10 *
OUYANG LIWEI: "Blockchain-based infectious disease surveillance and early-warning technology", CHINESE JOURNAL OF INTELLIGENT SCIENCE AND TECHNOLOGY, vol. 2, no. 2, pages 135 - 143 *
LEI KAI et al.: "Intelligent ecological network: knowledge-driven future value Internet infrastructure", JOURNAL OF APPLIED SCIENCES, no. 01, pages 156 - 176 *
HAN SICHENG, ZHU XIAORONG, ZHANG XIUXIAN: "An optimized scalable Byzantine fault-tolerant consensus algorithm", CHINESE JOURNAL ON INTERNET OF THINGS, vol. 4, no. 2, pages 18 - 25 *

Also Published As

Publication number Publication date
CN111930698B (en) 2024-03-15

Similar Documents

Publication Publication Date Title
CN111930698A (en) Data security sharing method based on Hash diagram and federal learning
Lyu et al. Towards fair and privacy-preserving federated deep models
CN113794675B (en) Distributed Internet of things intrusion detection method and system based on block chain and federal learning
Lenstra et al. A random zoo: sloth, unicorn, and trx
CN114362987B (en) Distributed voting system and method based on block chain and intelligent contract
Hu et al. Reputation-based distributed knowledge sharing system in blockchain
CN115795518B (en) Block chain-based federal learning privacy protection method
CN114650110A (en) Cooperative spectrum sensing method based on highest node degree clustering
CN107070954B (en) Anonymous-based trust evaluation method
Mundinger et al. Reputation in self-organized communication systems and beyond
Goodrich et al. Privacy-enhanced reputation-feedback methods to reduce feedback extortion in online auctions
Zhang et al. Quantum anonymous voting protocol with the privacy protection of the candidate
CN105245499B (en) A kind of cloud service privacy information exposes evidence collecting method
CN113630398B (en) Joint anti-attack method, client and system in network security
CN115640305A (en) Fair and credible federal learning method based on block chain
Dong et al. Defending Against Malicious Behaviors in Federated Learning with Blockchain
Shin et al. Winnowing: Protecting P2P systems against pollution through cooperative index filtering
Yi Ming et al. Research on block chain defense against malicious attack in federated learning
Wu A distributed trust evaluation model for mobile p2p systems
CN114189332A (en) Continuous group perception excitation method based on symmetric encryption and double-layer truth discovery
Truong et al. BFLMeta: Blockchain-Empowered Metaverse with Byzantine-Robust Federated Learning
Aghania Hybrid tip selection algorithm in IOTA
Lou et al. A collusion-resistant automation scheme for social moderation systems
Cheng et al. Correlation trust authentication model for peer-to-peer networks
Azmi et al. Dynamic reputation based trust management using neural network approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant