CN117714217B - Method and device for a trusted federated intelligent security computing platform - Google Patents

Method and device for a trusted federated intelligent security computing platform

Info

Publication number
CN117714217B
CN117714217B (application CN202410167961.5A)
Authority
CN
China
Prior art keywords
node
parameter
training
parameter server
local
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410167961.5A
Other languages
Chinese (zh)
Other versions
CN117714217A (en)
Inventor
李强
邢刚
鞠卓亚
韩嘉祺
刘文华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Datatang Beijing Technology Co ltd
Hebei Shuyuntang Intelligent Technology Co ltd
Original Assignee
Datatang Beijing Technology Co ltd
Hebei Shuyuntang Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Datatang Beijing Technology Co ltd and Hebei Shuyuntang Intelligent Technology Co ltd
Priority to CN202410167961.5A
Publication of CN117714217A
Application granted
Publication of CN117714217B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/04 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L63/0435 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload wherein the sending and receiving network entities apply symmetric encryption, i.e. same key used for encryption and decryption
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/06 Network architectures or network communication protocols for network security for supporting key management in a packet data network
    • H04L63/061 Network architectures or network communication protocols for network security for supporting key management in a packet data network for key exchange, e.g. in peer-to-peer networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/104 Peer-to-peer [P2P] networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40 Network security protocols
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/50 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols using hash chains, e.g. blockchains or hash trees

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Storage Device Security (AREA)

Abstract

The invention relates to the field of electric digital data processing, and in particular to a method and a device for a trusted federated intelligent security computing platform. In the method, the parameter server and the nodes are connected through a public network, the nodes are not connected to one another, and the parameter server and the nodes are additionally linked through a private blockchain. The parameter server sends the initial parameters and the model structure to each node; the node trains the model on its local data to obtain local parameters and a training log; the node writes the training log into the private chain and sends the local parameters to the parameter server; the parameter server aggregates the local parameters with the parameters stored from the previous aggregation, sends the aggregated parameters back to the node, and writes an aggregation log into the private chain. A node exchanges parameters with the parameter server directly, without waiting for other nodes. The cycle repeats a preset number of times or stops once the training model has stabilized. The invention reduces the risk of leakage in federated learning and improves the security of federated learning.

Description

Method and device for a trusted federated intelligent security computing platform
Technical Field
The invention relates to the field of electric digital data processing, and in particular to a method and a device for a trusted federated intelligent security computing platform.
Background
The value of data is realized through sharing and use. However, relevant laws and regulations impose strict rules on data security, and sensitive data may not leave its original place of storage, so the traditional centralized mode of data use is no longer feasible. Privacy-preserving computation is a scheme that allows data to be used while meeting data-security requirements.
Privacy-preserving computation mainly includes three modes: the trusted execution environment (TEE), secure multi-party computation (MPC), and federated learning (FL). A TEE protects the secure operation of data layer by layer, from the chip through the memory to the OS, isolating a secure environment. Secure multi-party computation encrypts the data through an algorithm, computes on the encrypted data, and finally recovers the computation result. Federated learning deploys the model to the data owners and computes by exchanging parameters, finally obtaining the trained model.
However, federated learning exchanges parameters, and although parameters are not data, they may implicitly contain information about part of the data, so there is a risk of leakage if they are intercepted. In addition, during federated learning a participant may not train on the data it agreed to provide, or may fail to provide the contracted data at all, which creates potential security hazards in federated learning.
Disclosure of Invention
In order to reduce the risk of leakage in federated learning and improve the security of federated learning, the invention provides a method and a device for a trusted federated intelligent security computing platform.
The invention provides a method for a trusted federated intelligent security computing platform, which adopts the following technical scheme:
A method of a trusted federated intelligent security computing platform, comprising the following steps:
The parameter server and the nodes form a communication network through a public network, the parameter server and the nodes form a P2P network through a private chain, the nodes are not connected to one another, and the parameter server stores a training model;
the parameter server encrypts the initial parameters and the model structure of the training model and sends them to the node;
the node decrypts the initial parameters and the model structure, and performs the first local training of the training model on its local data with the initial parameters, obtaining the first local parameters and the first training log information;
the node encrypts the first local parameters and sends them to the parameter server, and writes the first training log information into the private chain;
the parameter server decrypts the first local parameters and aggregates them with the last aggregated parameters stored on the parameter server to obtain the first aggregated parameters, encrypts the first aggregated parameters and sends them to the node, and the parameter server obtains a first aggregation log and writes it into the private chain;
after decrypting the (n-1)-th aggregated parameters sent by the parameter server, the node performs the n-th local training of the training model on the (n-1)-th aggregated parameters and the local data, obtaining the n-th local parameters and the n-th training log information;
the node encrypts the n-th local parameters and sends them to the parameter server, and writes the n-th training log information into the private chain;
the parameter server decrypts the n-th local parameters and aggregates them with the parameters obtained in its last aggregation to obtain the n-th aggregated parameters, which are encrypted and sent to the node; the parameter server obtains an n-th aggregation log and writes it into the private chain;
the cycle runs N times or until the training model reaches a stable state, where N is the preset total number of rounds of interaction between the node and the parameter server, and n is a positive integer with 2 ≤ n ≤ N (a code sketch of this loop follows).
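For illustration, the following minimal sketch (Python) traces this interaction loop, assuming FedAvg-style blending of numpy parameter vectors; encryption, the private chain and the network transport are stubbed out, and all names (ParameterServer, Node, the blending weight, the placeholder training step) are illustrative assumptions rather than part of the claimed method.

    import numpy as np

    class ParameterServer:
        def __init__(self, init_params):
            self.agg_params = init_params            # parameters from the last aggregation

        def aggregate(self, local_params, weight=0.5):
            # Blend the arriving local parameters into the stored aggregate
            # immediately -- no waiting for the other nodes (asynchronous exchange).
            self.agg_params = (1 - weight) * self.agg_params + weight * local_params
            return self.agg_params

    class Node:
        def __init__(self, local_data):
            self.local_data = local_data

        def local_train(self, params, lr=0.1):
            # Placeholder "training": one gradient-like step toward the local data mean.
            return params - lr * (params - self.local_data.mean(axis=0))

    server = ParameterServer(np.zeros(4))
    nodes = [Node(np.random.randn(50, 4) + i) for i in range(3)]

    N = 10                                           # preset total rounds
    for n in range(1, N + 1):
        for node in nodes:                           # each node proceeds independently
            local = node.local_train(server.agg_params)
            server.aggregate(local)                  # per-node, per-round exchange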
In a specific embodiment, the first training log information includes time, node information, data characteristics, and model evaluations.
In a specific embodiment, the first aggregate log includes time, aggregate information, and model evaluation effects.
In a specific implementation, the first training log information and the first aggregation log written into the private chain by each node are periodically analyzed to detect dishonest nodes;
if a dishonest node exists, sending of the initial parameters, the model structure, the first aggregated parameters and the n-th aggregated parameters to the dishonest node is stopped, and an alarm report is output.
In a specific embodiment, the parameter server negotiates a password with each node when encrypting the initial parameters and the model structure, and the password differs between each node and the parameter server.
In a specific implementation, the nodes are classified in advance as trusted nodes, semi-honest nodes and unknown nodes;
if the node is an unknown node, resetting the password each time the parameter server sends data to the node;
If the node is a semi-honest node, resetting the password according to a preset time threshold;
if the node is a trusted node, the password is not reset.
In a specific implementation, the Diffie-Hellman key exchange algorithm is used to determine and reset the password, and a symmetric encryption algorithm uses that password for encryption, achieving secure password exchange over the public network.
In a specific embodiment, the parameter server and the nodes form a C/S architecture: the parameter server governs the learning process, and the nodes supply the data and the computing power.
In a specific embodiment, the P2P network is transparent to the user, and both the training log and the aggregation log are written by the system during the training process.
The invention further provides a secure and trusted federated learning device, which adopts the following technical scheme:
A device of a trusted federated intelligent security computing platform, comprising one or more processors; a memory;
and one or more computer programs, wherein the one or more computer programs are stored in the memory and comprise instructions that, when executed by the processors, cause the device of the trusted federated intelligent security computing platform to perform the method of the trusted federated intelligent security computing platform described above.
In summary, the present invention includes at least one of the following beneficial technical effects:
1. The nodes and the parameter server are connected through a public network, and a secure public-network password exchange technique provides secure, configurable encrypted parameter exchange. Parameters passing between the nodes and the parameter server are hard for a third party to intercept, and even if intercepted they cannot be decrypted, ensuring the security of the federated learning parameter exchange process.
2. The nodes and the parameter server form a P2P network through the private chain, which records training and aggregation logs together with model evaluation results. The model evaluation effect written by a training node is verified by recomputing the model's effect, so it can be judged whether the node has misbehaved, preventing potential security hazards from nodes participating in federated learning.
3. The nodes and the parameter server exchange aggregated parameters promptly, without waiting for all participating nodes to finish training before the training parameters are sent to the server for aggregation. Federated learning with this asynchronous training mechanism improves training speed and avoids training failures caused by communication anomalies.
Drawings
FIG. 1 is a flowchart of steps S100-S300 of the method of a trusted federated intelligent security computing platform.
FIG. 2 is a flowchart of step S400 and the subsequent steps of the method of a trusted federated intelligent security computing platform.
FIG. 3 is a schematic diagram of the federated learning process between the parameter server and the nodes.
Fig. 4 is a key management schematic.
Fig. 5 is a block diagram of a log write private chain.
Detailed Description
To facilitate understanding, some terms are explained below.
Federated learning (FL) is a distributed machine-learning technique. Its core idea is to carry out distributed model training among multiple data sources holding local data, exchanging only the model structure or intermediate results rather than the local data itself, and thereby build a global model based on the virtually fused data. This balances data privacy protection with shared data computation, a new application paradigm of "data available but not visible" and "the data stays put while the model moves".
A blockchain is a distributed database maintained by an intelligent peer-to-peer network that identifies, propagates and records information. It first appeared in cryptocurrency; later designs improved on proof of work and its algorithms, adopting proof of stake and other cryptographic algorithms. The first generation is Bitcoin, followed by the smart-contract blockchain Ethereum. A blockchain generally comprises a six-layer architecture: the data layer, network layer, consensus layer, incentive layer, contract layer and application layer. Blockchains take three forms: public chains, private chains and consortium chains.
A private chain (Private Blockchain) is a blockchain whose access and write permissions are controlled by a single organization or institution.
A public chain (Public Blockchain) is a blockchain whose access and write permissions are open to everyone.
A consortium chain (Consortium Blockchain) is a blockchain whose access and write permissions are open only to the nodes that join the consortium.
The method of the trusted federated intelligent security computing platform provided by the embodiments of the invention can be applied to a server or a terminal. The server may be a physical server, a server cluster or distributed system formed from multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain-name services, security services, a content delivery network (CDN), big data, and artificial-intelligence platforms. The terminal may be a mobile phone, a smartphone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (PAD), user equipment (UE), a handheld device, a vehicle-mounted device, a wearable device, a computing device or another processing device connected to a wireless modem, a mobile station (MS), a mobile terminal, or the like, without limitation here.
The invention is described in further detail below with reference to fig. 1-5.
The method of the trusted federated intelligent security computing platform comprises the following steps:
S100, network architecture.
The parameter server and each node are connected into a communication network through a public network, the parameter server and each node form a P2P network through a private chain, and the P2P network is transparent to users.
A private chain is adopted in consideration of the security of the training log and protection of the training process. When adapting a blockchain implementation, the consensus layer must not be broken, so that logs written to the chain can reach consensus. Since blockchains are open-source software, many chains have been derived by modifying the source code of Bitcoin or Ethereum; in such modifications the consensus mechanism must not be changed, otherwise there is no guarantee that a log written by one node propagates to all nodes.
The parameter server stores the training model and initial parameters, and the node stores local data.
S200, the node accesses a parameter server, and the parameter server sends encrypted initial parameters to the node.
When a node accesses the parameter server, the interaction password between the parameter server and the node is determined through the Diffie-Hellman key exchange algorithm. The parameter server writes the time, the node name and the model structure of the training model into the private chain to record the node's successful access.
The parameter server encrypts the initial parameters and the model structure of the training model with this password and sends them to the node. The password differs between the parameter server and each node, so the password of one node cannot decrypt the encrypted initial parameters and model structure that the parameter server sends to other nodes.
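As a concrete illustration of this step, the sketch below establishes a per-node password by Diffie-Hellman key exchange and uses it with a symmetric cipher, using the pyca/cryptography package; the key size, the HKDF derivation and the AES-GCM cipher are reasonable assumptions, since the description does not fix these details. In practice the DH group parameters would be generated once and published rather than regenerated per node.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import dh
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Shared DH group parameters (generation is slow; done once, then published).
    parameters = dh.generate_parameters(generator=2, key_size=2048)

    # Each side keeps a private key and exchanges only the public half over the
    # public network.
    server_priv = parameters.generate_private_key()
    node_priv = parameters.generate_private_key()

    # Both sides compute the same shared secret from the peer's public key.
    shared = server_priv.exchange(node_priv.public_key())
    assert shared == node_priv.exchange(server_priv.public_key())

    # Derive the per-node 256-bit symmetric key (the "password") from the secret.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"fl-node-password").derive(shared)

    # Encrypt a parameter payload; interception without the key reveals nothing.
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)
    ciphertext = aesgcm.encrypt(nonce, b"serialized initial parameters", None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == b"serialized initial parameters"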
S300, the node acquires initial parameters and performs the first local training.
After the node obtains the encrypted initial parameters and model structure, it decrypts them with the password, and performs the first local training of the training model on the local data with the initial parameters, obtaining the first local parameters and the first training log information. The node encrypts the first local parameters with the password and sends them to the parameter server, and writes the first training log information into the private chain. The first training log information includes the time, node information, data characteristics and model evaluation, recording the contribution of the node's local data to model training.
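A minimal sketch of the training-log entry a node might write at this step follows; the field names mirror the description (time, node information, data characteristics, model evaluation), while the JSON serialization and the SHA-256 transaction hash are assumptions for illustration.

    import hashlib
    import json
    import time

    def make_training_log(node_id, num_rows, eval_score):
        # Assemble the log entry recorded after one round of local training.
        entry = {
            "time": time.time(),
            "node": node_id,
            "data_characteristics": {"rows": num_rows},
            "model_evaluation": eval_score,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        # The chain stores the binary payload; its hash enters the block's Merkle tree.
        return payload, hashlib.sha256(payload).hexdigest()

    payload, tx_hash = make_training_log("node-A", num_rows=50, eval_score=0.83)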
S400, the parameter server decrypts and aggregates the first local parameters. Note that, for convenience of illustration, steps S100-S300 and step S400 with its subsequent steps are split across two separate drawings.
The parameter server decrypts the first local parameters sent by the node with the password and aggregates them with the parameters obtained in its last aggregation, producing the first aggregated parameters. For ease of explanation, take nodes A and B. If node A's first local parameters a reach the parameter server first, and the parameter server has not yet performed any aggregation, it holds the initial parameters and the model structure; it aggregates the initial parameters with node A's first local parameters a to obtain node A's first aggregated parameters z. Node B then sends its first local parameters b to the parameter server, which now holds the first aggregated parameters z and the model structure; it aggregates z with b to obtain node B's first aggregated parameters x.
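The node A / node B walk-through above can be traced in code; a simple averaging rule stands in for the unspecified aggregation function, so the numbers are purely illustrative.

    import numpy as np

    def aggregate(stored, incoming):
        # Blend the newly arrived local parameters into the stored aggregate the
        # moment they arrive -- no waiting for the other node.
        return 0.5 * (stored + incoming)

    initial = np.zeros(4)                  # parameters held by the server at the start

    a = np.array([0.4, -0.2, 0.1, 0.0])    # node A's first local parameters (arrive first)
    z = aggregate(initial, a)              # node A's first aggregated parameters

    b = np.array([0.1, 0.3, -0.1, 0.2])    # node B's first local parameters (arrive later)
    x = aggregate(z, b)                    # node B's first aggregated parameters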
The parameter server generates a first aggregation log based on the first aggregation parameter, and writes the first aggregation log into the private chain. The first aggregate log includes time, aggregate information, and model evaluation effects.
S500, judging whether to reset the password between the parameter server and the node based on the reliability of the node.
Before a node accesses the parameter server, it is classified in advance as a trusted node, a semi-honest node or an unknown node.
Before the parameter server encrypts the first aggregated parameters and transmits them to the node, it decides, based on the node's classification, whether to reset the password between the parameter server and the node.
If the node is a trusted node, the password is not reset.
If the node is a semi-honest node, the password is reset according to a preset time threshold. The threshold may be a time interval such as one week or one month, or a number of interactions between the node and the parameter server, such as 5 or 10.
If the node is an unknown node, the password is reset on every send. The password is reset using the Diffie-Hellman key exchange algorithm, with the same procedure as when the interaction password was determined at the node's first access to the parameter server.
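The reset decision in S500 reduces to a small policy function; in this sketch the threshold is counted in interaction rounds, one of the two options the description allows, and the tier names follow the classification above.

    def should_reset_password(tier, rounds_since_reset, threshold=5):
        if tier == "unknown":
            return True                    # reset on every send
        if tier == "semi-honest":
            return rounds_since_reset >= threshold
        return False                       # trusted: never reset

    assert should_reset_password("unknown", 1)
    assert should_reset_password("semi-honest", 5)
    assert not should_reset_password("trusted", 100)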
S600, the parameter server sends the first aggregation parameter to the node.
If the node is a trusted node or a semi-honest node, the password is not reset, and the parameter server encrypts the first aggregation parameter through the original password and sends the first aggregation parameter to the node.
If the node is an unknown node, resetting the password, encrypting the first aggregation parameter by the parameter server through the new password obtained by password resetting, and sending the encrypted first aggregation parameter to the node.
S700, the interaction between the parameter server and the node is repeated in a loop.
The node decrypts, with the original or the new password, the first aggregated parameters sent by the parameter server, and performs the second local training of the training model on the local data with the first aggregated parameters, obtaining the second local parameters and the second training log information. The node encrypts the second local parameters with the password, sends them to the parameter server, and writes the second training log information into the private chain. The second training log information has the same content types as the first training log information and is not repeated here; it records the contribution of the node's local data to model training.
The parameter server decrypts the second local parameters sent by the nodes through the original passwords or the new passwords, and the parameter server combines the parameters obtained by previous aggregation to aggregate the second local parameters of the plurality of nodes so as to obtain second aggregated parameters. And the parameter server generates a second aggregation log based on the second aggregation parameter, and writes the second aggregation log into the private chain. The content type of the second aggregate log is the same as that of the first aggregate log, and the description is omitted.
The parameter server encrypts the second aggregation parameter through the original password or the new password and then sends the encrypted second aggregation parameter to the node.
In the n-th round of interaction between the parameter server and the node, the node obtains the (n-1)-th aggregated parameters, performs the n-th local model training on the local data, obtains the n-th training log information and the n-th local parameters, encrypts the n-th local parameters, sends them to the parameter server, and writes the n-th training log information into the private chain.
The parameter server decrypts the nth local parameter sent by the nodes through the password, and the parameter server combines the parameters obtained by last aggregation to aggregate the nth local parameters of the plurality of nodes to obtain the nth aggregated parameter. And the parameter server generates an nth aggregation log based on the nth aggregation parameter, and writes the nth aggregation log into the private chain.
This continues until the parameter server and the nodes complete the preset N rounds of interaction, or until the model training effect stabilizes, i.e., its variation falls within a preset fluctuation range. Here n and N are positive integers with 2 ≤ n ≤ N.
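The stopping rule can be sketched as follows: stop after the preset N rounds, or earlier once the round-to-round change in the evaluation metric stays inside the preset fluctuation range; the window size and tolerance are illustrative assumptions.

    def training_stable(history, window=3, tol=1e-3):
        # True when the last `window` round-to-round changes all fall within `tol`.
        if len(history) < window + 1:
            return False
        recent = history[-(window + 1):]
        return all(abs(b - a) < tol for a, b in zip(recent, recent[1:]))

    assert not training_stable([0.50, 0.70, 0.80])
    assert training_stable([0.8000, 0.8004, 0.8001, 0.8002])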
S800, dishonest-node detection is performed, deciding whether the parameter server should send the initial parameters, the model structure, the first aggregated parameters or the n-th aggregated parameters to a node.
The detection period for dishonest nodes may be preset as a time interval such as one week or one month, or as a number of interactions between the node and the parameter server, such as 5 or 10.
When a node accesses the parameter server, dishonest-node detection is performed; if the accessing node is a marked dishonest node, sending of the initial parameters and the model structure to it is stopped; otherwise the subsequent flow continues.
During the interaction between a node and the parameter server, dishonest-node detection is also carried out before the parameter server sends the first or the n-th aggregated parameters to the node.
Based on each node's pre-agreed data type, data quantity and hyper-parameter settings, the improvement in model effect that different data should bring in each round of model training is estimated.
The actual training effect of the node is then obtained; if the improvement in model effect fails to reach the average level, the node may not have used the contracted data, or may not have provided enough data, meaning its training behavior is untrustworthy, and sending of the first or n-th aggregated parameters is stopped. At the same time, an alarm report is output for the dishonest node and manual intervention is recommended.
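A minimal sketch of this check follows, assuming the server recomputes each node's measured improvement from the chained logs and compares it against the expected average level; the threshold semantics are inferred from the description.

    def detect_dishonest(improvements, expected_avg):
        # A node whose measured improvement fails to reach the expected average
        # level may not have used (or fully provided) the contracted data.
        return [node for node, gain in improvements.items() if gain < expected_avg]

    flagged = detect_dishonest({"node-A": 0.040, "node-B": 0.001}, expected_avg=0.020)
    for node in flagged:
        print(f"ALERT: {node} flagged as dishonest; stop sending aggregated parameters")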
It should be noted that each block of the private chain is linked to its parent block, forming a singly linked list structure. A block packs transactions; the hash of every transaction is computed, and the hashes are retained in the Merkle root of the block header. In this way the basic structure of the blockchain is formed. Transactions can be simply divided into log transactions and evaluation transactions: log transactions mainly record information, and evaluation transactions mainly record the model's evaluation effect. Among a transaction's fields, binary text information is written into the input field, i.e., that is where the log or the evaluation is saved. If one round of training is complex, multiple writes, i.e., multiple transactions, may occur.
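The block structure just described (blocks linked by parent hash, transaction hashes kept in a Merkle root in the header) can be sketched as follows; duplicating the odd hash out when pairing is a common Merkle-tree convention and an assumption here.

    import hashlib
    import json

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def merkle_root(tx_hashes):
        level = list(tx_hashes)
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])    # duplicate the odd hash out
            level = [sha256((a + b).encode())
                     for a, b in zip(level[::2], level[1::2])]
        return level[0]

    def make_block(parent_hash, transactions):
        tx_hashes = [sha256(json.dumps(tx, sort_keys=True).encode()) for tx in transactions]
        header = {"parent": parent_hash, "merkle_root": merkle_root(tx_hashes)}
        return {"header": header, "transactions": transactions}

    genesis = make_block("0" * 64, [{"type": "log", "input": "node-A training log"}])
    block1 = make_block(sha256(json.dumps(genesis["header"], sort_keys=True).encode()),
                        [{"type": "evaluation", "input": "model_evaluation=0.83"}])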
This embodiment also discloses a device of the trusted federated intelligent security computing platform, comprising one or more processors; a memory;
and one or more computer programs, wherein the one or more computer programs are stored in the memory and comprise instructions that, when executed by the processors, cause the device of the trusted federated intelligent security computing platform to perform the method of the trusted federated intelligent security computing platform described above.
The above embodiments do not limit the scope of protection of the present invention; all equivalent changes to the structure, shape and principle of the invention shall fall within its scope of protection.

Claims (7)

1. A method of a trusted federated intelligent security computing platform, wherein the method comprises the following steps:
the parameter server and the nodes form a communication network through a public network; the plurality of nodes are not connected to one another, and the parameter server communicates with every node separately, determining a respective initial password used in the subsequent model training; the parameter server is connected to the plurality of nodes in a mesh topology and builds a blockchain in private-chain form, into which the nodes write the log information from model training and the parameter server writes the aggregation log when performing parameter aggregation, forming a P2P network through the private chain;
the parameter server stores a training model; the parameter server encrypts the initial parameters and the model structure of the training model with the initial password and sends them to the node; the node receives and decrypts the initial parameters and the model structure, and performs the first local training of the training model on its local data with the initial parameters, obtaining the first local parameters and the first training log information, wherein the first training log information comprises the time, node information, data characteristics and model evaluation effect; the node encrypts the first local parameters and sends them to the parameter server, and writes the first training log information into the private chain;
the parameter server decrypts the first local parameters and obtains the model training effect of the node's actual model training; if the improvement in model effect fails to reach the average level, the node is indicated to be a dishonest node, parameter exchange with the node is stopped, and an alarm report is output; otherwise, the first local parameters are aggregated with the last aggregated parameters stored on the parameter server to obtain the first aggregated parameters, which are encrypted and sent to the node, and the parameter server obtains a first aggregation log and writes it into the private chain; the first aggregation log comprises the time, aggregation information and model evaluation effect;
after decrypting the (n-1)-th aggregated parameters sent by the parameter server, the node performs the n-th local training of the training model on the (n-1)-th aggregated parameters and the local data, obtaining the n-th local parameters and the n-th training log information; the node encrypts the n-th local parameters and sends them to the parameter server, and writes the n-th training log information into the private chain;
the parameter server decrypts the n-th local parameters and aggregates them with the parameters obtained in its last aggregation to obtain the n-th aggregated parameters, which are encrypted and sent to the node; the parameter server obtains an n-th aggregation log and writes it into the private chain;
the cycle runs N times or until the training model reaches a stable state, where N is the preset total number of rounds of interaction between the node and the parameter server, and n is a positive integer with 2 ≤ n ≤ N.
2. The method of a trusted federated intelligent security computing platform of claim 1, wherein: the password differs between each of the nodes and the parameter server.
3. The method of a trusted federated intelligent security computing platform of claim 2, wherein: the nodes are classified in advance as trusted nodes, semi-honest nodes and unknown nodes;
if the node is an unknown node, the password is reset each time the parameter server sends data to the node;
if the node is a semi-honest node, the password is reset according to a preset time threshold;
if the node is a trusted node, the password is not reset.
4. A method of a trusted federated intelligent security computing platform according to claim 3, wherein: the password is determined and reset using the Diffie-Hellman key exchange algorithm, and a symmetric encryption algorithm is adopted, achieving secure password exchange over the public network.
5. The method of a trusted federated intelligent security computing platform of claim 1, wherein: the parameter server and the nodes form a C/S architecture, the parameter server governs the learning process, and the nodes supply the data and computing power.
6. The method of a trusted federated intelligent security computing platform of claim 1, wherein: the P2P network is transparent to the user, and both the training log and the aggregation log are written by the system during the training process.
7. A device of a trusted federated intelligent security computing platform, wherein: the device comprises one or more processors; a memory;
and one or more computer programs, wherein the one or more computer programs are stored in the memory and comprise instructions that, when executed by the processors, cause the device of the trusted federated intelligent security computing platform to perform the method of the trusted federated intelligent security computing platform of any one of claims 1-6.
CN202410167961.5A 2024-02-06 2024-02-06 Method and device for trusted federated intelligent security computing platform Active CN117714217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410167961.5A CN117714217B (en) 2024-02-06 2024-02-06 Method and device for trusted federated intelligent security computing platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410167961.5A CN117714217B (en) 2024-02-06 2024-02-06 Method and device for trusted federated intelligent security computing platform

Publications (2)

Publication Number Publication Date
CN117714217A CN117714217A (en) 2024-03-15
CN117714217B true CN117714217B (en) 2024-05-28

Family

ID=90150146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410167961.5A Active CN117714217B (en) 2024-02-06 2024-02-06 Method and device for trusted federated intelligent security computing platform

Country Status (1)

Country Link
CN (1) CN117714217B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3672142B1 (en) * 2018-12-20 2021-04-21 Siemens Healthcare GmbH Method and system for securely transferring a data set

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443067A (en) * 2019-07-30 2019-11-12 卓尔智联(武汉)研究院有限公司 Federated model building device and method based on privacy protection, and readable storage medium
CN112214342A (en) * 2020-09-14 2021-01-12 德清阿尔法创新研究院 Efficient error data detection method in federated learning scene
CN112232527A (en) * 2020-09-21 2021-01-15 北京邮电大学 Safe distributed federal deep learning method
CN113609508A (en) * 2021-08-24 2021-11-05 上海点融信息科技有限责任公司 Block chain-based federal learning method, device, equipment and storage medium
CN113992360A (en) * 2021-10-01 2022-01-28 浙商银行股份有限公司 Block chain cross-chain-based federated learning method and equipment
CN114679332A (en) * 2022-04-14 2022-06-28 浙江工业大学 APT detection method of distributed system
CN115766135A (en) * 2022-11-02 2023-03-07 上海交通大学 Network monitoring system and method for federal learning
CN116579442A (en) * 2023-05-11 2023-08-11 北方工业大学 Federal learning system and excitation method for energy block chain

Also Published As

Publication number Publication date
CN117714217A (en) 2024-03-15

Similar Documents

Publication Publication Date Title
US20230023857A1 (en) Data processing method and apparatus, intelligent device, and storage medium
Liang et al. PDPChain: A consortium blockchain-based privacy protection scheme for personal data
CN108830601B (en) Smart city information safe use method and system based on block chain
CN113127916B (en) Data set processing method, data processing method, device and storage medium
Hardin et al. Amanuensis: Information provenance for health-data systems
CN112380578A (en) Edge computing framework based on block chain and trusted execution environment
CN109039578A (en) Secret protection encryption method, information data processing terminal based on homomorphic cryptography
CN113901505B (en) Data sharing method and device, electronic equipment and storage medium
CN111709029A (en) Data operation and privacy transaction method based on block chain and trusted computing network
CN114357492A (en) Medical data privacy fusion method and device based on block chain
CN109995530A (en) A kind of safe distribution database exchange method suitable for movable positioning system
CN113111386A (en) Privacy protection method for block chain transaction data
CN114547209B (en) Data sharing interaction method and system based on block chain
Qin et al. A privacy-preserving blockchain-based tracing model for virus-infected people in cloud
Gao et al. BFR‐SE: A Blockchain‐Based Fair and Reliable Searchable Encryption Scheme for IoT with Fine‐Grained Access Control in Cloud Environment
CN117714217B (en) Method and device for trusted federated intelligent security computing platform
KR102517001B1 (en) System and method for processing digital signature on a blockchain network
Zhang et al. A General Access Architecture for Blockchain-Based Semi-Quantum 6G Wireless Communication and its Application
CN115242554A (en) Data use right transaction method and system based on security sandbox
Zhu et al. Multimedia fusion privacy protection algorithm based on iot data security under network regulations
Zhang et al. OWL: A data sharing scheme with controllable anonymity and integrity for group users
Liu et al. [Retracted] Mathematical Modeling of Static Data Attribute Encryption Based on Big Data Technology
CN114422215B (en) Cross-platform and trusted energy data sharing system and method based on blockchain
Yan et al. Power blockchain guarantee mechanism based on trusted computing
US20230421540A1 (en) Systems and methods for generating secure, encrypted communications using multi-party computations in order to perform blockchain operations in decentralized applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant