CN116189874B - Telemedicine system data sharing method based on federal learning and federation chain - Google Patents

Telemedicine system data sharing method based on federal learning and federation chain Download PDF

Info

Publication number
CN116189874B
CN116189874B CN202310202417.5A CN202310202417A CN116189874B CN 116189874 B CN116189874 B CN 116189874B CN 202310202417 A CN202310202417 A CN 202310202417A CN 116189874 B CN116189874 B CN 116189874B
Authority
CN
China
Prior art keywords
data
data provider
model
reputation
reputation value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310202417.5A
Other languages
Chinese (zh)
Other versions
CN116189874A (en
Inventor
欧嵬
李宁
黄子琳
张佳宁
杨佳煜
邵羊飞
郑希萌
陈可
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan University
Original Assignee
Hainan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hainan University filed Critical Hainan University
Priority to CN202310202417.5A priority Critical patent/CN116189874B/en
Publication of CN116189874A publication Critical patent/CN116189874A/en
Application granted granted Critical
Publication of CN116189874B publication Critical patent/CN116189874B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/64Protecting data integrity, e.g. using checksums, certificates or signatures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Primary Health Care (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Public Health (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Epidemiology (AREA)
  • General Business, Economics & Management (AREA)
  • Business, Economics & Management (AREA)
  • Bioethics (AREA)
  • Computer Hardware Design (AREA)
  • Computer And Data Communications (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The application discloses a data sharing method of a telemedicine system based on federation learning and a federation chain. And secondly, the reputation value of the data provider and the soft label output by the model are stored in the alliance chain by utilizing the alliance chain, so that the communication overhead is reduced, the traceability and the non-falsification of the data are ensured, a safe and efficient data sharing method is provided for task publishers of federal learning, and the risk of data leakage in a remote medical system is reduced.

Description

Telemedicine system data sharing method based on federal learning and federation chain
Technical Field
The application relates to the technical field of blockchains, in particular to a data sharing method of a telemedicine system based on federal learning and a federation chain.
Background
Traditional medical techniques require the patient to visit the hospital in person. With the development of telemedicine systems, hospitals can access patient health data through remote wearable medical devices. Therefore, the time for the patient to go to and from the hospital can be reduced, and the diagnosis efficiency of doctors can be improved. Although the remote medical system injects new vitality for clinical scientific research, epidemic prevention and public health management, privacy disclosure is brought. According to the OCR HHS data, the number of victims of medical data leakage is increased by 1.5 times or more compared to 2020. Due to policy and legal restrictions, as well as patient awareness of protecting privacy and security, it is difficult to share data with other institutions from a hospital local server where there is a lack of enough data samples for machine learning, resulting in isolated data islands. Thus, a need exists for a secure data sharing method for telemedicine systems.
Fortunately, federal learning is a distributed computing technique that can address isolated data islands. Traditional machine learning transmits collected medical data to a cloud server for training. Federal learning reduces the risk of data leakage by deploying models on local devices and transmitting model training parameters. Rather than transmitting the patient's medical data directly to a cloud server with high computing power. The method ensures the privacy of the user data and solves the problem of data isolation. However, recent studies have shown that federal learning has some key problems. First, gradient updates may reveal important information for customer training, and an attacker may recover data from gradients uploaded by a local server. Furthermore, due to the huge CNN model, the federal learning training process contains millions of parameters. In the model training process, transmitting these massive model parameters can generate huge communication cost and increase the risk of malicious attacks, an attacker will upload low-quality data and affect the performance of the global model, and the model is dependent on a centralized server aggregation model, but the centralized server is easy to fail. Finally, in telemedicine systems, the medical data tends to be heterogeneous. And only the common information of the training results is interacted on different devices, the local personalized training results can be effectively improved. In view of these problems, there is currently no safe and efficient method.
Disclosure of Invention
The application mainly aims to provide a remote medical system data sharing method based on federation learning and federation chains, which provides a new solution for huge communication overhead of federation learning through knowledge distillation, stores the information in the federation chains in a distributed mode through a federation chain consensus mechanism, and invokes intelligent contracts for selecting nodes in federation learning so as to ensure the quality of uploaded data, thereby solving the practical problems in the prior art.
In order to achieve the above object, the present application provides a telemedicine system data sharing method based on federal learning and federation chains, which mainly includes the following steps:
step 1, a task publisher sets a reputation threshold according to own requirements, calls an intelligent contract to publish a remote medical task after consensus authentication, and generates an creating block, wherein the creating block comprises an initial global model, the number of communication rounds and the reputation threshold;
step 2, the reputation value of the data provider, e.g. each medical institution, is stored in the federation chain, from which it will download a model of the task publisher if the hospital meets the reputation threshold of the task publisher;
step 3, the selected data provider downloads the global model from the federation chain to a local server and trains the model using locally generated telemedicine data;
step 4, after local training, the hospital server generates extracted feature local knowledge, and the hospital local server uploads the soft label to the peer node to generate a new block through consensus authentication;
step 5, after training, the peer node combines the latest reputation value and reputation weight to form a final reputation value, the final reputation value is used as the reputation value of the next task selection data provider, after consensus authentication, the peer node generates a new block, and the reputation value is added into a alliance chain, so that all task publishers can select hospitals with high-quality data through the reputation value to train a model;
step 6, the peer node randomly selected by the alliance chain system carries out aggregation of the global model, a new aggregation global model is added to the alliance chain through consensus, and if the number of communication rounds is not reached, the next round of training is continued;
and 7, finally downloading the optimal global model from the alliance chain by the task publisher.
Further, step 2 the final reputation value of the data provider is determined from the initial reputation value, the latest reputation value and the reputation weight, calculated using the following formula:
wherein the method comprises the steps ofRepresenting the final reputation value of data provider n, < +.>Initial reputation value of data provider n, +.>Rweight representing the most recent reputation value of data provider n n Represented as the nearest reputation value weight of data provider n.
Further, the data provider initial reputation value is calculated as follows:
where i is represented as the data provider newly added to the telemedicine system, λ is represented as the initial reputation value weight of the data provider i, as the federal learning training progresses gradually decreases, reducing the impact of low data quality submitted due to high initial reputation values, N is represented as the number of data providers in the telemedicine system, rvalue n Representing reputation values of other data providers.
Further, the data provider's most recent reputation value is calculated as follows:
where n→ts represents one of the federal learning tasks that the data provider n participates in the system release, where α represents positive interactions (n) +1 and β represents negative interactions (n) +1.
Further, the data provider's recent reputation weight is calculated as follows:
where Ts represents the number of federal learning tasks received by the data provider n and Ts represents the total number of federal learning tasks in the telemedicine system.
Further, step 3 the update of the data provider local training model is according to the following calculation formula:
in each distillation step t, there are n data providers trained using their own local data sets, the local hospital server model being denoted M, where the value of ζ is the learning rate,expressed as an average gradient of model dip, θ represents model parameters during model training, and L represents a local model loss function, such training process typically requires a large number of update steps to converge.
Further, the soft tag transmitted by the data provider in step 4 has the following calculation formula:
wherein x is i Logit representing input of i telemedicine data, wherein Logit is the result output by the neural network and the probability of the log not being normalized, gamma is a softmax function, and a local model Z is output t In order to match peak probabilities among hospitals, it is required that all hospitals have a thinned peak probability of a constant value τ, and the hospitals perform knowledge distillation to update soft labels Z of local models t And sending the message to the peer node for global aggregation.
Further, in step 6, the global model loss function of the peer node is calculated as follows:
global penalty function helps reduce soft target Z for task publishers t And a soft tag Z received from a data provider S The gap between the task publishers is that the data of the task publishers is { D } t Gθ is a parameter of the global model, Z t (D t ) Is a soft label of the global model. Z is Z S (D t ) i For soft labels obtained from data provider i, at the peer node, reference data set D is used t The global model is trained, global knowledge is extracted from the results of the local hospital server, and then the peer node broadcasts its knowledge to the data provider's server, updating the soft target of the global model.
According to the application, according to the task number of the data provider participating in federal learning, the reputation value of other data providers, the latest reputation value and the like, a reputation value evaluation method of the data provider is provided, the data provider is encouraged to share high-quality telemedicine data, and the influence of poisoning attack is reduced. And secondly, the reputation value of the data provider and the soft label output by the model are stored in the alliance chain by utilizing the alliance chain, so that the communication overhead is reduced, the traceability and the non-falsification of the data are ensured, a safe and efficient data sharing method is provided for task publishers of federal learning, and the risk of data leakage in a remote medical system is reduced.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic flow diagram of a method for data sharing in a telemedicine system based on federal learning and federation chains of the present application;
FIG. 2 is a block diagram of a telemedicine system data sharing method based on federal learning and federation chains in accordance with an embodiment of the present application;
the achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The application is further described below with reference to the accompanying drawings.
The embodiment of the application provides a telemedicine system data sharing method based on federal learning and a federation chain, as shown in fig. 1 and 2, which mainly comprises the following steps:
step 1, a task publisher sets a reputation threshold according to own requirements, calls an intelligent contract to publish a remote medical task after consensus authentication, and generates an creating block, wherein the creating block comprises an initial global model, the number of communication rounds and the reputation threshold;
step 2, the reputation value of the data provider, e.g. each medical institution, is stored in the federation chain, from which it will download a model of the task publisher if the hospital meets the reputation threshold of the task publisher;
step 3, the selected data provider downloads the global model from the federation chain to a local server and trains the model using locally generated telemedicine data;
step 4, after local training, the hospital server generates extracted feature local knowledge, and the hospital local server uploads the soft label to the peer node to generate a new block through consensus authentication;
step 5, after training, the peer node combines the latest reputation value and reputation weight to form a final reputation value, the final reputation value is used as the reputation value of the next task selection data provider, after consensus authentication, the peer node generates a new block, and the reputation value is added into a alliance chain, so that all task publishers can select hospitals with high-quality data through the reputation value to train a model;
step 6, the peer node randomly selected by the alliance chain system carries out aggregation of the global model, a new aggregation global model is added to the alliance chain through consensus, and if the number of communication rounds is not reached, the next round of training is continued;
and 7, finally downloading the optimal global model from the alliance chain by the task publisher.
Further, step 2 the final reputation value of the data provider is determined from the initial reputation value, the latest reputation value and the reputation weight, calculated using the following formula:
wherein the method comprises the steps ofRepresenting the final reputation value of data provider n, < +.>Initial reputation value of data provider n, +.>Rweight representing the most recent reputation value of data provider n n Represented as the nearest reputation value weight of data provider n.
Further, the data provider initial reputation value is calculated as follows:
where i is represented as the data provider newly added to the telemedicine system, λ is represented as the initial reputation value weight of the data provider i, as the federal learning training progresses gradually decreases, reducing the impact of low data quality submitted due to high initial reputation values, N is represented as the number of data providers in the telemedicine system, rvalue n Representing reputation values of other data providers.
Further, the data provider's most recent reputation value is calculated as follows:
where n→ts represents one of the federal learning tasks that the data provider n participates in the system release, where α represents positive interactions (n) +1 and β represents negative interactions (n) +1.
Further, the data provider's recent reputation weight is calculated as follows:
where Ts represents the number of federal learning tasks received by the data provider n and Ts represents the total number of federal learning tasks in the telemedicine system.
Further, step 3 the update of the data provider local training model is according to the following calculation formula:
in each distillation step t, there are n data providers trained using their own local data sets, the local hospital server model being denoted M, where the value of ζ is the learning rate,expressed as an average gradient of model dip, θ represents model parameters during model training, and L represents a local model loss function, such training process typically requires a large number of update steps to converge.
Further, the soft tag transmitted by the data provider in step 4 has the following calculation formula:
wherein x is i Logit representing input of i telemedicine data, wherein Logit is the result output by the neural network and the probability of the log not being normalized, gamma is a softmax function, and a local model Z is output t In order to match peak probabilities among hospitals, it is required that all hospitals have a thinned peak probability of a constant value τ, and the hospitals perform knowledge distillation to update soft labels Z of local models t And sending the message to the peer node for global aggregation.
Further, in step 6, the global model loss function of the peer node is calculated as follows:
global penalty function helps reduce soft target Z for task publishers t And a soft tag Z received from a data provider S The gap between the task publishers is that the data of the task publishers is { D } t Gθ is a parameter of the global model, Z t (D t ) Is a soft label of the global model. Z is Z S (D t ) i For soft labels obtained from data provider i, at the peer node, reference data set D is used t The global model is trained, global knowledge is extracted from the results of the local hospital server, and then the peer node broadcasts its knowledge to the data provider's server, updating the soft target of the global model.
It will be appreciated by those skilled in the art that the modules or steps of the application described above may be implemented in a general purpose computing device, they may be concentrated on a single computing device, or distributed across a network of computing devices, they may alternatively be implemented in program code executable by computing devices, so that they may be stored in a memory device for execution by computing devices, and in some cases, the steps shown or described may be performed in a different order than that shown or described, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps within them may be fabricated into a single integrated circuit module for implementation. Thus, the present application is not limited to any specific combination of hardware and software.

Claims (5)

1. A telemedicine system data sharing method based on federal learning and federation chains is characterized in that,
step 1, a task publisher sets a reputation threshold according to own requirements, calls an intelligent contract to publish a remote medical task after consensus authentication, and generates an creating block, wherein the creating block comprises an initial global model, the number of communication rounds and the reputation threshold;
step 2, the reputation value of the data provider, e.g. each medical institution, is stored in the federation chain, from which it will download a model of the task publisher if the hospital meets the reputation threshold of the task publisher;
step 3, the selected data provider downloads the global model from the federation chain to a local server and trains the model using locally generated telemedicine data;
step 4, after local training, the hospital server generates extracted feature local knowledge, and the hospital local server uploads the soft label to the peer node to generate a new block through consensus authentication;
step 5, after training, the peer node combines the latest reputation value and reputation weight to form a final reputation value, the final reputation value is used as the reputation value of the next task selection data provider, after consensus authentication, the peer node generates a new block, and the reputation value is added into a alliance chain, so that all task publishers can select hospitals with high-quality data through the reputation value to train a model;
step 6, the peer node randomly selected by the alliance chain system carries out aggregation of the global model, a new aggregation global model is added to the alliance chain through consensus, and if the number of communication rounds is not reached, the next round of training is continued;
step 7, the task publisher finally downloads the optimal global model from the alliance chain;
the step 3 data provider local training model update is according to the following calculation formula:
in each distillation step t, there are n data providers trained using their own local data sets, the local hospital server model being denoted M, where the value of ζ is the learning rate,expressed as an average gradient of model dip, θ represents model parameters during model training, and L represents a local model loss function, such training process typically requires a large number of update steps to converge;
the soft tag transmitted by the data provider in the step 4 has the following calculation formula:
wherein x is i Logit representing input of i telemedicine data, wherein Logit is the result output by the neural network and is not normalized probability, in order to make peak probability among hospitals uniform, it is required that refined peak probability of all hospitals be constant value tau, and after knowledge distillation, the hospitals will update soft label Z of local model after update t Sending to the peer node for global aggregation;
the global model loss function of the peer node in the step 6 has the following calculation formula:
global penalty function helps reduce soft label Z for task publishers t And a soft tag Z received from a data provider S The gap between the task publishers is that the data of the task publishers is { D } t Gθ is a parameter of the global model, Z t (D t ) Soft labels that are global models; z is Z S (D t ) i For soft labels obtained from data provider i, at the peer node, reference data set D is used t The global model is trained, global knowledge is extracted from the results of the local hospital server, and then the peer node broadcasts its knowledge to the data provider's server, updating the soft target of the global model.
2. The method of claim 1, wherein the final reputation value of the data provider of step 2 is determined from the initial reputation value, the latest reputation value and the reputation weight, and is calculated using the following formula:
wherein the method comprises the steps ofRepresenting the final reputation value of data provider n, < +.>The initial reputation value of data provider n,representing the latest of data provider nReputation value, rweight n Represented as the nearest reputation value weight of data provider n.
3. The method of claim 2 wherein the data provider initial reputation value is calculated as follows:
where i is represented as the data provider newly added to the telemedicine system, λ is represented as the initial reputation value weight of the data provider i, as the federal learning training progresses gradually decreases, reducing the impact of low data quality submitted due to high initial reputation values, N is represented as the number of data providers in the telemedicine system, rvalue n Representing reputation values of other data providers.
4. The method of claim 2 wherein the data provider's most recent reputation value is calculated as follows:
where n→ts represents one of the federal learning tasks that the data provider n participates in the system release, where α represents positive interactions (n) +1 and β represents negative interactions (n) +1.
5. The method of claim 2 wherein the data provider's nearest reputation weight is calculated as follows:
where Ts represents the number of federal learning tasks received by the data provider n and Ts represents the total number of federal learning tasks in the telemedicine system.
CN202310202417.5A 2023-03-03 2023-03-03 Telemedicine system data sharing method based on federal learning and federation chain Active CN116189874B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310202417.5A CN116189874B (en) 2023-03-03 2023-03-03 Telemedicine system data sharing method based on federal learning and federation chain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310202417.5A CN116189874B (en) 2023-03-03 2023-03-03 Telemedicine system data sharing method based on federal learning and federation chain

Publications (2)

Publication Number Publication Date
CN116189874A CN116189874A (en) 2023-05-30
CN116189874B true CN116189874B (en) 2023-11-28

Family

ID=86440327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310202417.5A Active CN116189874B (en) 2023-03-03 2023-03-03 Telemedicine system data sharing method based on federal learning and federation chain

Country Status (1)

Country Link
CN (1) CN116189874B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116895375B (en) * 2023-09-08 2023-12-01 南通大学附属医院 Medical instrument management traceability method and system based on data sharing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348204A (en) * 2020-11-05 2021-02-09 大连理工大学 Safe sharing method for marine Internet of things data under edge computing framework based on federal learning and block chain technology
CN113281048A (en) * 2021-06-25 2021-08-20 华中科技大学 Rolling bearing fault diagnosis method and system based on relational knowledge distillation
CN114048515A (en) * 2022-01-11 2022-02-15 四川大学 Medical big data sharing method based on federal learning and block chain
CN114091667A (en) * 2021-11-22 2022-02-25 北京理工大学 Federal mutual learning model training method oriented to non-independent same distribution data
CN114462624A (en) * 2022-02-11 2022-05-10 博雅正链(北京)科技有限公司 Method for developing credible federal learning based on block chain

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113704810B (en) * 2021-04-01 2024-04-26 华中科技大学 Federal learning-oriented cross-chain consensus method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348204A (en) * 2020-11-05 2021-02-09 大连理工大学 Safe sharing method for marine Internet of things data under edge computing framework based on federal learning and block chain technology
CN113281048A (en) * 2021-06-25 2021-08-20 华中科技大学 Rolling bearing fault diagnosis method and system based on relational knowledge distillation
CN114091667A (en) * 2021-11-22 2022-02-25 北京理工大学 Federal mutual learning model training method oriented to non-independent same distribution data
CN114048515A (en) * 2022-01-11 2022-02-15 四川大学 Medical big data sharing method based on federal learning and block chain
CN114462624A (en) * 2022-02-11 2022-05-10 博雅正链(北京)科技有限公司 Method for developing credible federal learning based on block chain

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shah Zeb.el.《Industrial digital twins at the nexus of NextG wireless networks andcomputational intelligence: A survey》.2022,第1-23页. *
郭俊伦 等.基于知识蒸馏的轻量型神经网络设计 郭俊.2021,第36卷(第4期),第20-24页. *

Also Published As

Publication number Publication date
CN116189874A (en) 2023-05-30

Similar Documents

Publication Publication Date Title
CN116189874B (en) Telemedicine system data sharing method based on federal learning and federation chain
CN107145556B (en) Universal distributed acquisition system
CN103685554A (en) Upgrading method, device and system
US20190392140A1 (en) Security information analysis device, security information analysis method, security information analysis program, security information evaluation device, security information evaluation method, security information analysis system, and recording medium
Yuan et al. Chainsfl: Blockchain-driven federated learning from design to realization
Yuan et al. Decentralized federated learning: A survey and perspective
CN110474870A (en) Network active defensive method, system and computer readable storage medium based on block chain
Rani et al. Blockchain-based IoT enabled health monitoring system
CN109561100A (en) Method and system based on the distributed duplexing energized network attacking and defending with artificial intelligence
Shen et al. Deep Q-network-based heuristic intrusion detection against edge-based SIoT zero-day attacks
Yu et al. IronForge: an open, secure, fair, decentralized federated learning
Lakshmanan et al. An efficient data science technique for IoT assisted healthcare monitoring system using cloud computing
CN111651121A (en) Data logic calculation method and device, electronic equipment and storage medium
Mishra et al. Cogni-Sec: A secure cognitive enabled distributed reinforcement learning model for medical cyber–physical system
Zhou et al. Novel defense schemes for artificial intelligence deployed in edge computing environment
Ren et al. Delayed spiking neural p systems with scheduled rules
Chen et al. Resource-aware knowledge distillation for federated learning
Kim et al. P2P computing for trusted networking of personalized IoT services
CN117834228A (en) Method and device for constructing reinforcement learning honeypot based on BERT model
CN111222885B (en) Data processing request endorsement method and device, computer equipment and storage medium
CN116663049A (en) Medical image segmentation cooperation method based on blockchain network
Das et al. Real-time Context-aware Learning System for IoT Applications
Pan Intelligent Monitoring System for Prison Perimeter Based on Cloud Intelligence Technology
Casadei et al. Combining trust and aggregate computing
CN109800091A (en) Prevent notice from ignoring

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant