CN112182102A - Method and device for processing data in federated learning, electronic device and storage medium - Google Patents

Method and device for processing data in federated learning, electronic device and storage medium

Info

Publication number
CN112182102A
Authority
CN
China
Prior art keywords
data
federated learning
chain
processing
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011009498.XA
Other languages
Chinese (zh)
Inventor
杨文韬
陈昌
易晓春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Zhigui Internet Technology Co ltd
Original Assignee
Xi'an Zhigui Internet Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Zhigui Internet Technology Co ltd
Priority to CN202011009498.XA priority Critical patent/CN112182102A/en
Publication of CN112182102A publication Critical patent/CN112182102A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/57 Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention provides a method and a device for processing data in federated learning, wherein the method includes: acquiring off-chain data and on-chain data, where the off-chain data are source data uploaded by each terminal to a distributed database, and the on-chain data are intermediate parameters, stored on a blockchain, obtained by each terminal participating in federated learning modeling; and carrying out the federated learning process according to the off-chain data and the on-chain data, and updating the established federated learning model, where the federated learning process is conducted in a trusted environment. With the method and device for processing data in federated learning, the electronic device and the storage medium, the source data securely stored in the distributed database and the shared intermediate parameters stored on the blockchain enable the data of each terminal to be shared securely, so that the two types of data can be coordinated and shared safely, producing positive benefits for federated learning training.

Description

Method and device for processing data in federated learning, electronic device and storage medium
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a method and a device for processing data in federated learning, an electronic device, and a storage medium.
Background
Federated learning performs efficient machine learning across multiple participating terminals or computing nodes while guaranteeing information security during big-data exchange, protecting terminal data and personal privacy, and ensuring legal compliance. Its basic idea is that terminal data are never shared: each terminal trains a machine learning model locally and shares only the intermediate parameters of the learning process; a server aggregates these intermediate parameters and feeds them back to the terminals, after which the trained models are shared.
To keep terminal data unshared, therefore, both the storage security and the transmission security of data in federated learning must be guaranteed; however, no satisfactory scheme currently exists, so the data cannot play a positive role in federated learning training.
Disclosure of Invention
In view of the problems in the prior art, embodiments of the present invention provide a method and a device for processing data in federated learning, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present invention provides a method for processing data in federated learning, including:
acquiring off-chain data and on-chain data, where the off-chain data are source data uploaded by each terminal to a distributed database, and the on-chain data are intermediate parameters, stored on a blockchain, obtained by each terminal participating in federated learning modeling;
carrying out the federated learning process according to the off-chain data and the on-chain data, and updating the established federated learning model; wherein the federated learning process is conducted in a trusted environment.
Further, the source data is stored in a distributed hash table (DHT) using an asymmetric encryption algorithm.
Further, the process of storing the intermediate parameters on the chain includes:
encrypting and packaging the intermediate parameters into transaction data, aggregating and partitioning the transaction data, and storing the partitioned transaction data in the corresponding blocks.
Further, after updating the established federated learning model, the method further includes:
applying the updated federated learning model and a non-federated learning model to a preset application scenario for prediction, determining the respective prediction results, and comparing the prediction results to obtain a comparison result.
Further, before acquiring the off-chain data, the method further includes:
performing quality detection on the source data and processing it according to the detection result:
if the detection result is normal, encrypting and storing the source data and sending Token data to the corresponding terminal; if the detection result is abnormal, discarding the source data and sending a discard signal to the corresponding terminal.
Further, the trusted environment is established using the Intel SGX architecture, and accordingly the source data is read from the distributed DHT by the trusted environment.
Further, the transaction data is aggregated and partitioned on a Fabric consortium chain.
In a second aspect, an embodiment of the present invention provides a device for processing data in federated learning, including:
an acquiring module configured to acquire off-chain data and on-chain data, where the off-chain data is source data uploaded by each terminal to a distributed database, and the on-chain data is intermediate parameters, stored on a blockchain, obtained by each terminal participating in federated learning modeling; and
a processing module configured to carry out the federated learning process according to the off-chain data and the on-chain data and to update the established federated learning model, where the federated learning process is conducted in a trusted environment.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the steps of the method for processing data in federated learning described above.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the method for processing data in federated learning described above.
According to the method and device for processing data in federated learning, the electronic device and the storage medium, the source data securely stored in the distributed database and the shared intermediate parameters stored on the blockchain enable the data of each terminal to be shared securely, so that the two types of data can be coordinated and shared safely, producing positive benefits for federated learning training.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flowchart of a method for processing data in federated learning according to an embodiment of the present invention;
FIG. 2 is a block flow diagram of a method for processing data in federated learning according to the present invention;
FIG. 3 is a schematic structural diagram of a device for processing data in federated learning according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of a method for processing data in federated learning according to an embodiment of the present invention. Referring to fig. 1, the method includes:
S11, acquiring off-chain data and on-chain data, where the off-chain data are source data uploaded by each terminal to a distributed database, and the on-chain data are intermediate parameters, stored on a blockchain, obtained by each terminal participating in federated learning modeling;
S12, carrying out the federated learning process according to the off-chain data and the on-chain data, and updating the established federated learning model; wherein the federated learning process is conducted in a trusted environment.
With respect to steps S11 and S12, it should be noted that in the embodiment of the present invention, federated learning performs efficient machine learning across multiple participating terminals or computing nodes while guaranteeing information security during big-data exchange, protecting terminal data and personal privacy, and ensuring legal compliance. Its basic idea is that terminal data are never shared: each terminal trains a machine learning model locally and shares only the intermediate parameters of the learning process; a server aggregates these intermediate parameters and feeds them back to the terminals, after which the trained models are shared.
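By way of illustration only, the following minimal Python sketch shows such a server-side aggregation of intermediate parameters. The sample-count weighting follows the common federated-averaging (FedAvg) formulation and is an assumption; the embodiment does not fix a particular aggregation rule.

```python
import numpy as np

def aggregate(parameters, sample_counts):
    """Weighted average of per-terminal intermediate parameters (FedAvg-style)."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    # Each entry of `parameters` is one terminal's locally trained parameter vector.
    return sum(w * np.asarray(p) for w, p in zip(weights, parameters))

# Hypothetical example: three terminals report intermediate parameters.
params = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
counts = [100, 50, 50]  # local sample sizes (assumed)
global_update = aggregate(params, counts)
print(global_update)  # the server feeds this aggregate back to every terminal
```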
In this embodiment, the intermediate parameters obtained by the terminals participating in federated learning modeling are stored on the blockchain, so that the generated intermediate parameters are securely stored, transmitted, and efficiently shared on the chain. The consortium chain Hyperledger Fabric may be used to aggregate and partition the intermediate parameters.
In this embodiment, the raw data (source data) of each terminal is stored in a distributed database, specifically in a DHT (distributed hash table), and secure storage of the source data is achieved with an asymmetric encryption algorithm.
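A minimal sketch of this off-chain storage step follows, assuming RSA-OAEP from the Python cryptography package and modelling the DHT as a content-addressed dictionary; the embodiment specifies neither the key management nor the concrete DHT implementation.

```python
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Hypothetical key pair; in practice each terminal would manage its own keys.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

dht = {}  # stand-in for the distributed hash table

def store_source_data(data: bytes) -> str:
    """Encrypt source data with the asymmetric key and store it under its content hash."""
    ciphertext = public_key.encrypt(
        data,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    lookup_key = hashlib.sha256(ciphertext).hexdigest()  # DHT key
    dht[lookup_key] = ciphertext
    return lookup_key

record_key = store_source_data(b"terminal-1 training record")
print(record_key, len(dht[record_key]))
```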
In this embodiment, because a trusted storage environment is established, the trusted environment can securely read and update the source data from the DHT dynamically and in real time, and the established federated learning model is updated according to the source data and the on-chain data during the federated learning process; the federated learning process is carried out in the trusted environment, which is established using the Intel SGX architecture.
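The sketch below only simulates this trusted boundary as a Python class, because driving a real Intel SGX enclave would require SDK bindings that the embodiment does not name; `dht`, `private_key` and `record_key` carry over from the DHT sketch above, and `global_update` from the aggregation sketch.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

class TrustedEnvironment:
    """Mock of an SGX-style enclave: decrypted plaintext exists only inside it."""

    def __init__(self, key):
        self._private_key = key  # sealed inside the (simulated) enclave

    def read_source_data(self, lookup_key: str) -> bytes:
        ciphertext = dht[lookup_key]  # dynamic, real-time read from the DHT
        return self._private_key.decrypt(
            ciphertext,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None),
        )

    def update_model(self, lookup_key: str, on_chain_params):
        data = self.read_source_data(lookup_key)
        # Placeholder for one federated training step on the decrypted data.
        print(f"training on {len(data)} bytes with on-chain params {on_chain_params}")

enclave = TrustedEnvironment(private_key)
enclave.update_model(record_key, global_update)
```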
According to the method for processing data in federated learning provided by the embodiment of the invention, the source data securely stored in the distributed database and the shared intermediate parameters stored on the blockchain enable the data of each terminal to be shared securely, so that the two types of data can be coordinated and shared safely, producing positive benefits for federated learning training.
On the basis of the above embodiment, a further embodiment of the method mainly explains the process of storing the intermediate parameters on the chain:
the intermediate parameters are encrypted and packaged into transaction data, the transaction data is aggregated and partitioned, and the partitioned transaction data is stored in the corresponding blocks.
In this regard, it should be noted that before the intermediate parameters can be stored and shared on the blockchain, they need to be aggregated and partitioned on the chain as transaction data, after which the partitioned transaction data is stored in the corresponding blocks. In this embodiment, the consortium chain Hyperledger Fabric may be used to aggregate and partition the intermediate parameters.
With Hyperledger Fabric as the carrier of data sharing, appropriate parameters are set for the scenario on the basis of the Fabric architecture and the transaction flow is optimized, improving transmission efficiency while preserving the decentralized character of the blockchain.
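The following sketch illustrates this packaging step with a simplified in-memory ledger; the symmetric Fernet channel key, the batch size and the block layout are assumptions standing in for the Fabric transaction flow, which the embodiment configures but does not detail.

```python
import hashlib
import json
from cryptography.fernet import Fernet

fernet = Fernet(Fernet.generate_key())  # hypothetical shared channel key

def package_transaction(terminal_id: str, params: list) -> bytes:
    """Encrypt and package one terminal's intermediate parameters as transaction data."""
    payload = json.dumps({"terminal": terminal_id, "params": params})
    return fernet.encrypt(payload.encode())

def build_blocks(transactions: list, batch_size: int = 2) -> list:
    """Aggregate the transactions and partition them into hash-chained blocks."""
    blocks, prev_hash = [], "0" * 64
    for i in range(0, len(transactions), batch_size):
        batch = transactions[i:i + batch_size]
        block_hash = hashlib.sha256(prev_hash.encode() + b"".join(batch)).hexdigest()
        blocks.append({"prev": prev_hash, "hash": block_hash, "txs": batch})
        prev_hash = block_hash
    return blocks

txs = [package_transaction(f"terminal-{i}", [0.1 * i, 0.2 * i]) for i in range(3)]
ledger = build_blocks(txs)
print(len(ledger), ledger[0]["hash"][:16])
```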
On the basis of the above embodiment, a further embodiment of the method includes the following step after updating the established federated learning model:
the updated federated learning model and a non-federated learning model are applied to a preset application scenario for prediction, the respective prediction results are determined and compared to obtain a comparison result, thereby comparing the performance of the experiment against the non-federated learning algorithm and existing federated learning algorithms.
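A minimal sketch of such a comparison follows, assuming scikit-learn logistic regression on synthetic data; the model family, the metric and the data are illustrative, as the embodiment does not fix a particular application scenario.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

X_test, y_test = make_data(200)
X_pool, y_pool = make_data(500)   # proxy for the data reachable via federation
X_local, y_local = make_data(30)  # one terminal's data alone (non-federated)

federated_model = LogisticRegression().fit(X_pool, y_pool)
non_federated_model = LogisticRegression().fit(X_local, y_local)

comparison = {
    "federated": federated_model.score(X_test, y_test),
    "non-federated": non_federated_model.score(X_test, y_test),
}
print(comparison)  # the comparison result of the two prediction runs
```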
On the basis of the above embodiment, a further embodiment of the method further includes, before acquiring the off-chain data:
performing quality detection on the source data and processing it according to the detection result:
if the detection result is normal, the source data is encrypted and stored, and Token data is sent to the corresponding terminal; if the detection result is abnormal, the source data is discarded and a discard signal is sent to the corresponding terminal.
In this regard, it should be noted that these steps automatically monitor the quality of each terminal's data contribution and identify malicious nodes. The "Token" is designed as the medium of an incentive mechanism that rewards high-quality data contributors and reflects each terminal's degree of contribution in the model's training effect; that is, besides receiving "Token", high-quality data contributors also train a better model.
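A sketch of this quality gate is given below, assuming a simple missing-value-rate criterion; the detection rule, the threshold and the Token payload are illustrative, since the embodiment does not specify them.

```python
from typing import Optional

TOKEN_REWARD = 1          # hypothetical reward per accepted contribution
MAX_MISSING_RATE = 0.1    # assumed quality threshold

def check_and_reward(terminal_id: str, records: list) -> Optional[dict]:
    """Quality-detect source data; reward normal data, discard abnormal data."""
    missing = sum(1 for r in records if r is None)
    if records and missing / len(records) <= MAX_MISSING_RATE:
        # Normal: the data would now be encrypted and stored (see the DHT sketch).
        return {"terminal": terminal_id, "token": TOKEN_REWARD}
    print(f"discard signal sent to {terminal_id}")  # abnormal: data is abandoned
    return None

print(check_and_reward("terminal-1", [1.0, 2.0, 3.0]))    # Token issued
print(check_and_reward("terminal-2", [None, None, 3.0]))  # discarded
```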
In the method for processing data in federated learning provided by the above embodiment, the source data securely stored in the distributed database and the shared intermediate parameters stored on the blockchain enable the data of each terminal to be shared securely, so that the two types of data can be coordinated and shared safely, producing positive benefits for federated learning training.
Fig. 3 shows a schematic structural diagram of a device for processing data in federated learning according to an embodiment of the present invention. Referring to fig. 3, the device includes an acquiring module 31 and a processing module 32, where:
the acquiring module 31 is configured to acquire off-chain data and on-chain data, where the off-chain data is source data uploaded by each terminal to a distributed database, and the on-chain data is intermediate parameters, stored on a blockchain, obtained by each terminal participating in federated learning modeling;
the processing module 32 is configured to carry out the federated learning process according to the off-chain data and the on-chain data and to update the established federated learning model, where the federated learning process is conducted in a trusted environment.
In a further embodiment of the device, the source data is stored in a distributed hash table (DHT) using an asymmetric encryption algorithm.
In a further embodiment, the device further includes an uplink module configured to store the intermediate parameters on the chain, specifically:
encrypting and packaging the intermediate parameters into transaction data, aggregating and partitioning the transaction data, and storing the partitioned transaction data in the corresponding blocks.
In a further embodiment, the device further includes a comparison module configured to, after the established federated learning model is updated:
apply the updated federated learning model and a non-federated learning model to a preset application scenario for prediction, determine the respective prediction results, and compare them to obtain a comparison result.
In a further embodiment, the device further includes a monitoring module configured to, before the off-chain data is acquired:
perform quality detection on the source data and process it according to the detection result:
if the detection result is normal, encrypt and store the source data and send Token data to the corresponding terminal; if the detection result is abnormal, discard the source data and send a discard signal to the corresponding terminal.
In a further embodiment, the trusted environment is established using the Intel SGX architecture, and accordingly the source data is read from the distributed DHT by the trusted environment.
In a further embodiment, the transaction data is aggregated and partitioned on a Fabric consortium chain.
Since the device according to the embodiment of the present invention operates on the same principle as the method of the above embodiments, the details are not repeated here.
It should be noted that, in the embodiment of the present invention, the relevant functional modules may be implemented by a hardware processor.
According to the method for processing data in federated learning provided by the embodiment of the invention, the source data securely stored in the distributed database and the shared intermediate parameters stored on the blockchain enable the data of each terminal to be shared securely, so that the two types of data can be coordinated and shared safely, producing positive benefits for federated learning training.
Fig. 4 illustrates the physical structure of an electronic device. As shown in fig. 4, the electronic device may include: a processor 41, a communication interface 42, a memory 43 and a communication bus 44, where the processor 41, the communication interface 42 and the memory 43 communicate with each other through the communication bus 44. The processor 41 may call logic instructions in the memory 43 to perform the following method: acquiring off-chain data and on-chain data, where the off-chain data are source data uploaded by each terminal to a distributed database, and the on-chain data are intermediate parameters, stored on a blockchain, obtained by each terminal participating in federated learning modeling; carrying out the federated learning process according to the off-chain data and the on-chain data, and updating the established federated learning model; wherein the federated learning process is conducted in a trusted environment.
Furthermore, the logic instructions in the memory 43 may be implemented in the form of software functional units and, when sold or used as a stand-alone product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product; the software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
An embodiment of the present invention further provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, performs the method provided in the foregoing embodiments, the method including: acquiring off-chain data and on-chain data, where the off-chain data are source data uploaded by each terminal to a distributed database, and the on-chain data are intermediate parameters, stored on a blockchain, obtained by each terminal participating in federated learning modeling; carrying out the federated learning process according to the off-chain data and the on-chain data, and updating the established federated learning model; wherein the federated learning process is conducted in a trusted environment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for processing data in federated learning, comprising:
acquiring off-chain data and on-chain data, wherein the off-chain data are source data uploaded by each terminal to a distributed database, and the on-chain data are intermediate parameters, stored on a blockchain, obtained by each terminal participating in federated learning modeling; and
carrying out the federated learning process according to the off-chain data and the on-chain data, and updating the established federated learning model; wherein the federated learning process is conducted in a trusted environment.
2. The method for processing data in federated learning according to claim 1, wherein the source data is stored in a distributed hash table (DHT) using an asymmetric encryption algorithm.
3. The method for processing data in federated learning according to claim 1, wherein the process of storing the intermediate parameters on the chain comprises:
encrypting and packaging the intermediate parameters into transaction data, aggregating and partitioning the transaction data, and storing the partitioned transaction data in the corresponding blocks.
4. The method for processing data in federated learning according to claim 1, further comprising, after updating the established federated learning model:
applying the updated federated learning model and a non-federated learning model to a preset application scenario for prediction, determining the respective prediction results, and comparing the prediction results to obtain a comparison result.
5. The method for processing data in federated learning according to claim 1, further comprising, before acquiring the off-chain data:
performing quality detection on the source data and processing it according to the detection result:
if the detection result is normal, encrypting and storing the source data and sending Token data to the corresponding terminal; if the detection result is abnormal, discarding the source data and sending a discard signal to the corresponding terminal.
6. The method for processing data in federated learning according to claim 1, wherein the trusted environment is established using the Intel SGX architecture, and accordingly the source data is read from the distributed DHT by the trusted environment.
7. The method for processing data in federated learning according to claim 3, wherein the transaction data is aggregated and partitioned on a Fabric consortium chain.
8. A device for processing data in federated learning, comprising:
an acquiring module configured to acquire off-chain data and on-chain data, wherein the off-chain data is source data uploaded by each terminal to a distributed database, and the on-chain data is intermediate parameters, stored on a blockchain, obtained by each terminal participating in federated learning modeling; and
a processing module configured to carry out the federated learning process according to the off-chain data and the on-chain data and to update the established federated learning model, wherein the federated learning process is conducted in a trusted environment.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method for processing data in federated learning according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps of the method for processing data in federated learning according to any one of claims 1 to 7.
CN202011009498.XA 2020-09-23 2020-09-23 Method and device for processing data in federated learning, electronic device and storage medium Pending CN112182102A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011009498.XA CN112182102A (en) 2020-09-23 2020-09-23 Method and device for processing data in federated learning, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011009498.XA CN112182102A (en) 2020-09-23 2020-09-23 Method and device for processing data in federated learning, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN112182102A (en) 2021-01-05

Family

ID=73955884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011009498.XA Pending CN112182102A (en) Method and device for processing data in federated learning, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112182102A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113434269A (en) * 2021-06-10 2021-09-24 湖南天河国云科技有限公司 Block chain-based distributed privacy calculation method and device
CN113901505A (en) * 2021-12-06 2022-01-07 北京笔新互联网科技有限公司 Data sharing method and device, electronic equipment and storage medium
CN114328432A (en) * 2021-12-02 2022-04-12 京信数据科技有限公司 Big data federal learning processing method and system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108804928A (en) * 2018-07-09 2018-11-13 武汉工商学院 The secure and trusted block chain and management method of data in a kind of traceability system
CN110428058A (en) * 2019-08-08 2019-11-08 深圳前海微众银行股份有限公司 Federal learning model training method, device, terminal device and storage medium
CN111061982A (en) * 2019-12-11 2020-04-24 电子科技大学 News information publishing and managing system based on block chain
CN111125779A (en) * 2019-12-17 2020-05-08 山东浪潮人工智能研究院有限公司 Block chain-based federal learning method and device
CN111212110A (en) * 2019-12-13 2020-05-29 清华大学深圳国际研究生院 Block chain-based federal learning system and method
CN111522669A (en) * 2020-04-29 2020-08-11 深圳前海微众银行股份有限公司 Method, device and equipment for optimizing horizontal federated learning system and readable storage medium
CN111552986A (en) * 2020-07-10 2020-08-18 鹏城实验室 Block chain-based federal modeling method, device, equipment and storage medium
CN111683117A (en) * 2020-05-11 2020-09-18 厦门潭宏信息科技有限公司 Method, equipment and storage medium
CN111698322A (en) * 2020-06-11 2020-09-22 福州数据技术研究院有限公司 Medical data safety sharing method based on block chain and federal learning


Similar Documents

Publication Publication Date Title
CN112182102A (en) Method and device for processing data in federated learning, electronic device and storage medium
CN111209334B (en) Power terminal data security management method based on block chain
US20210004718A1 (en) Method and device for training a model based on federated learning
CN111901309B (en) Data security sharing method, system and device
CN112506753B (en) Efficient contribution assessment method in federated learning scene
US20220318412A1 (en) Privacy-aware pruning in machine learning
CN113901505B (en) Data sharing method and device, electronic equipment and storage medium
CN113792890B (en) Model training method based on federal learning and related equipment
CN115481441A (en) Difference privacy protection method and device for federal learning
CN110119621B (en) Attack defense method, system and defense device for abnormal system call
CN115168888A (en) Service self-adaptive data management method, device and equipment
Li et al. A practical introduction to federated learning
CN110730186A (en) Token issuing method, accounting node and medium based on block chain
CN116916309A (en) Communication security authentication method, equipment and storage medium
CN111177320A (en) Class case simultaneous judging method, equipment and medium based on block chain
CN116306905A (en) Semi-supervised non-independent co-distributed federal learning distillation method and device
CN114503632A (en) Adaptive mutual trust model for dynamic and diverse multi-domain networks
CN111193706B (en) Identity verification method and device
CN106909832A (en) The installation method and device of a kind of application program
CN114595830B (en) Privacy protection federation learning method oriented to edge computing scene
CN117521150B (en) Data collaborative processing method based on multiparty security calculation
CN116049322B (en) Data sharing platform and method based on privacy calculation
CN116707787A (en) Maintenance method and device for quantum key life cycle of rail transit signal system
CN114338730A (en) Block chain consensus method and system for communication scene of Internet of vehicles
CN118283122A (en) Federal inter-cluster call method, federal inter-cluster call device, blockchain system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination