CN112166445A - Joint learning method and joint learning equipment based on block chain network - Google Patents


Info

Publication number: CN112166445A
Application number: CN201980007212.3A
Authority: CN (China)
Prior art keywords: joint learning, channel, initial model, application server node
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 路博 (Lu Bo), 潘时林 (Pan Shilin), 谢美伦 (Xie Meilun)
Current Assignee: Huawei Technologies Co Ltd
Original Assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N99/00: Subject matter not provided for in other groups of this subclass


Abstract

The application provides a joint learning method and joint learning equipment based on a blockchain network. The method includes: acquiring initial model parameters, where the initial model parameters are the parameters with which a device participating in joint learning builds an initial model; training the initial model parameters to generate a training result for updating the initial model parameters; and sending the training result to an application server node through a state channel, where the state channel is an off-chain channel between the device and the application server node, located outside the blockchain network. In this technical scheme, a device participating in joint learning feeds back its joint learning training result through the state channel, which can meet the high transaction-throughput requirement of combining joint learning with blockchain technology.

Description

Joint learning method and joint learning equipment based on block chain network
Technical Field
The present application relates to the field of information technology, and more particularly, to a joint learning method and a joint learning device based on a blockchain network.
Background
The proposal of joint learning (also known as federated learning) makes model sharing on mobile devices possible. Joint learning means that all training data remain on the mobile terminal, decoupling machine-learning model training from the cloud storage process. It enables the mobile terminal to train and evolve the model locally, solving the problem that earlier approaches could only distribute a model trained in the cloud and could not train locally. In modern society, the privacy of personal data is increasingly important, and in supervised learning training data is an essential element.
In order to implement process automation of the joint learning method and provide an incentive mechanism for devices participating in joint learning, the industry has considered combining joint learning with blockchain technology. However, this combination requires the joint learning training results to be shared as data at every node in the blockchain. If many mobile devices participate in joint learning, every training result must be shared by every node in the blockchain, and the existing blockchain architecture may then fail to meet the transactions-per-second (TPS) requirement.
Disclosure of Invention
The application provides a joint learning method and joint learning equipment based on a blockchain network; the method feeds back the training result of the joint learning through a state channel and can meet the high-TPS requirement of combining joint learning with blockchain technology.
In a first aspect, a joint learning method based on a blockchain network is provided, including: acquiring initial model parameters, where the initial model parameters are the parameters with which a device participating in joint learning builds an initial model; training the initial model parameters to generate a training result for updating the initial model parameters; and sending the training result to an application server node through a state channel, where the state channel is an off-chain channel between the device and the application server node, located outside the blockchain network.
According to the joint learning method, the device can send its joint learning training result through the state channel; that is, a state channel can be established between the application server node and each device participating in joint learning, so that the training result obtained after a device performs local training does not need to be shared among multiple nodes of the blockchain network, and the high-TPS requirement of combining joint learning with blockchain technology can be met. In addition, sending the training result through the state channel avoids sharing it with every node of the blockchain network, ensuring the data privacy of the training result.
It should be understood that a state channel is an "off-chain" technique for performing transactions and other state updates; that is, a state channel refers to an off-chain channel of the blockchain, data or information sent through the off-chain channel does not need to be shared among the nodes of the blockchain, and the off-chain channel may be established between two nodes.
For example, the state channel may be a unidirectional off-chain payment channel, e.g., established between the application server node and the device participating in joint learning, through which the application server node pays digital currency to the device after receiving the joint learning training result sent by the device.
For example, the state channel may also be a bidirectional off-chain channel, e.g., established between the application server node and the device participating in joint learning, through which the device may send its local joint learning training result to the application server node, and the application server node may pay digital currency to the device after receiving the training result.
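By way of example and not limitation, the following Python sketch illustrates the kind of signed update the two endpoints of such a state channel could exchange off-chain (the message format and field names are illustrative assumptions, not part of the claimed method):

```python
import hashlib
import json

def make_state_update(channel_id: str, nonce: int, balances: dict, sign):
    """Build one off-chain state update for a two-party state channel.

    channel_id -- identifies the channel contract deployed on-chain
    nonce      -- strictly increasing; only the highest-nonce update counts
    balances   -- e.g. {"app_server": 95, "device": 5} in digital currency units
    sign       -- caller-supplied signing function (hypothetical)
    """
    payload = json.dumps(
        {"channel": channel_id, "nonce": nonce, "balances": balances},
        sort_keys=True,
    )
    digest = hashlib.sha256(payload.encode()).hexdigest()
    # The update circulates only between the two endpoints; it is never
    # broadcast to the blockchain nodes, which is what keeps TPS load off-chain.
    return {"payload": payload, "digest": digest, "signature": sign(digest)}
```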
It should be noted that the initial model may be a machine learning model or a computer algorithm model. For example, the initial model may be a neural network model used in machine learning; when the initial model is a neural network model, the initial parameters may be the weight values used to build the neural network model. When the initial model is a computer algorithm model, the initial parameters may be the parameters required to build the algorithm model. The foregoing is illustrative and not limiting of the present application.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: receiving channel address information sent by the application server node, where the channel address information is used to identify the state channel.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: acquiring a joint learning intelligent contract, where the joint learning intelligent contract includes training logic instructions for the initial model parameters;
and the training the initial model parameters and generating a training result for updating the initial model parameters includes: running the joint learning intelligent contract to train the initial model parameters and generate the training result.
In a possible implementation manner, the joint learning intelligent contract may include data information required for joint learning, content requirements of the training result fed back to the application server by the device, and cost information corresponding to the training result fed back by the device.
With reference to the first aspect, in certain implementations of the first aspect, the obtaining a joint learning intelligent contract includes: acquiring the joint learning intelligent contract through the blockchain network, where the joint learning intelligent contract is an off-chain intelligent contract, i.e., an intelligent contract whose execution environment does not belong to the blockchain network.
In this application, the joint learning intelligent contract may be an intelligent contract obtained by deploying an off-chain intelligent contract to the blockchain network, where an off-chain intelligent contract refers to an intelligent contract whose execution environment does not belong to the blockchain network. Deploying the off-chain intelligent contract to the blockchain network can ensure the trustworthiness of the joint learning intelligent contract.
With reference to the first aspect, in certain implementations of the first aspect, the running the joint learning intelligent contract includes: running the joint learning intelligent contract in a joint learning application.
For example, the devices participating in joint learning may install a joint learning application, the application server node may publish the address of the joint learning intelligent contract to the joint learning application, and the devices participating in joint learning may download and run the joint learning intelligent contract in the joint learning application.
With reference to the first aspect, in certain implementations of the first aspect, the running the joint learning intelligent contract in a joint learning application to train the initial model parameters includes: running the joint learning intelligent contract under a Trusted Execution Environment (TEE) to train the initial model parameters.
In this application, the device can run the joint learning intelligent contract under the trusted execution environment, which ensures the security of running the joint learning intelligent contract.
For example, based on system-on-chip (SoC) hardware, the device may provide a three-layer hardware security architecture comprising a Rich Execution Environment (REE), a Trusted Execution Environment (TEE), and a Secure Execution Environment (SEE), where the TEE can execute security-sensitive programs and store security-sensitive data.
For example, the joint learning intelligent contract may be run in a joint learning application under the trusted execution environment (TEE) to train the initial model parameters.
With reference to the first aspect, in certain implementations of the first aspect, after the sending the training result to the application server node through the state channel, the method further includes: receiving digital currency paid by the application server node through the state channel.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: sending a first message to the application server node, the first message indicating that the device is involved in joint learning, the first message including a digital currency wallet address of the device.
In this application, the device may send the application server node a message indicating participation in joint learning and carrying its digital currency wallet address, so as to obtain the corresponding digital currency after sending the training result to the application server node.
It should be understood that the digital currency wallet address may be used to identify the device. When multiple devices participate in joint learning, the application server node distinguishes them by their digital currency wallet addresses, so that after receiving the joint learning training results sent by the devices, it can pay digital currency to each of them.
With reference to the first aspect, in certain implementations of the first aspect, the receiving digital currency paid by the application server node through the state channel includes: receiving, under a Trusted Execution Environment (TEE), the digital currency paid by the application server node through the state channel.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: determining that the joint learning is finished; and sending a first transaction to the blockchain network, where the first transaction is used to instruct closure of the state channel.
It will be appreciated that the device may receive many transactions sent by the application server node, so the device's digital currency balance in the state channel keeps increasing; only the latest transaction needs to be kept. When the device decides to close the state channel, it can sign the latest transaction, send it to the blockchain network, and withdraw the digital currency belonging to it.
In a second aspect, a joint learning method based on a blockchain network is provided, including: sending initial model parameters to a device, where the initial model parameters are the parameters with which the device participating in joint learning builds an initial model; and receiving, through a state channel, a training result for updating the initial model parameters, where the state channel is an off-chain channel between the device and the application server node, located outside the blockchain network.
According to this blockchain-network-based joint learning method, the application server node can receive, through the state channel, the training result sent by a device participating in joint learning; that is, a state channel can be established between the application server node and each device participating in joint learning, so that the training result obtained after a device performs local training does not need to be shared among multiple nodes of the blockchain network, and the high-TPS requirement of combining joint learning with blockchain technology can be met.
It should be understood that a state channel is an "off-chain" technique for performing transactions and other state updates; that is, a state channel refers to an off-chain channel of the blockchain, data or information sent through the off-chain channel does not need to be shared among the nodes of the blockchain, and the off-chain channel may be established between two nodes.
For example, the state channel may be a unidirectional off-chain payment channel, e.g., established between the application server node and the device participating in joint learning, through which the application server node pays digital currency to the device after receiving the joint learning training result sent by the device.
For example, the state channel may also be a bidirectional off-chain channel, e.g., established between the application server node and the device participating in joint learning, through which the device may send its local joint learning training result to the application server node, and the application server node may pay digital currency to the device after receiving the training result.
With reference to the second aspect, in certain implementations of the second aspect, the method further includes: receiving channel address information sent by the blockchain network, where the channel address information is used to identify the state channel; and sending the channel address information to the device.
With reference to the second aspect, in some implementations of the second aspect, before receiving the channel address information sent by the blockchain network, the method further includes: sending a second transaction to the blockchain network, the second transaction being used to deploy the state channel.
In this application, the application server node may send a transaction to deploy an off-chain channel between itself and a device participating in joint learning, so as to receive, through the off-chain channel, the training result for updating the initial model parameters sent by the device. This not only meets the TPS requirement but also protects the data privacy of the training result.
With reference to the second aspect, in certain implementations of the second aspect, the method further includes: sending a third transaction to the blockchain network, where the third transaction is used to deploy a joint learning intelligent contract to the blockchain network, the joint learning intelligent contract is an off-chain intelligent contract whose execution environment does not belong to the blockchain network, and the joint learning intelligent contract includes training logic instructions for the initial model parameters.
In this application, the application server node may deploy an off-chain intelligent contract to the blockchain network, where an off-chain intelligent contract refers to an intelligent contract whose execution environment does not belong to the blockchain network. Deploying the off-chain intelligent contract to the blockchain network can ensure the trustworthiness of the joint learning intelligent contract.
With reference to the second aspect, in some implementations of the second aspect, after receiving the training result through the state channel, the method further includes: paying digital currency to the device through the state channel.
With reference to the second aspect, in certain implementations of the second aspect, before the paying digital currency to the device through the state channel, the method further includes: receiving a first message sent by the device, where the first message is used to indicate that the device participates in joint learning, and the first message includes the digital currency wallet address of the device.
It should be understood that the digital currency wallet address may be used to identify the device. When multiple devices participate in joint learning, the application server node distinguishes them by their digital currency wallet addresses, so that after receiving the joint learning training results sent by the devices, it can pay digital currency to each of them.
In a third aspect, a joint learning apparatus is provided, including: a receiving unit, configured to receive initial model parameters sent by an application server node, where the initial model parameters are the parameters with which a device participating in joint learning builds an initial model; a processing unit, configured to train the initial model parameters and generate a training result for updating the initial model parameters; and a sending unit, configured to send the training result to the application server node through a state channel, where the state channel is an off-chain channel between the device and the application server node, located outside the blockchain network.
With reference to the third aspect, in some implementations of the third aspect, the receiving unit is further configured to receive channel address information sent by the application server node, where the channel address information is used to identify the state channel.
With reference to the third aspect, in some implementations of the third aspect, the receiving unit is further configured to acquire a joint learning intelligent contract, where the joint learning intelligent contract includes training logic instructions for the initial model parameters; and the processing unit is specifically configured to run the joint learning intelligent contract to train the initial model parameters and generate a training result for updating the initial model parameters.
With reference to the third aspect, in some implementations of the third aspect, the receiving unit is specifically configured to acquire the joint learning intelligent contract through the blockchain network, where the joint learning intelligent contract is an off-chain intelligent contract whose execution environment does not belong to the blockchain network.
With reference to the third aspect, in some implementations of the third aspect, the processing unit is specifically configured to run the joint learning intelligent contract in a joint learning application.
With reference to the third aspect, in some implementations of the third aspect, the processing unit is specifically configured to run the joint learning intelligent contract under a Trusted Execution Environment (TEE) to train the initial model parameters.
With reference to the third aspect, in some implementations of the third aspect, the receiving unit is further configured to receive digital currency paid by the application server node through the state channel.
With reference to the third aspect, in some implementations of the third aspect, the sending unit is further configured to: sending a first message to the application server node, the first message indicating that the device is involved in joint learning, the first message including a digital currency wallet address of the device.
With reference to the third aspect, in some implementations of the third aspect, the receiving unit is specifically configured to receive, under a Trusted Execution Environment (TEE), the digital currency paid by the application server node through the state channel.
With reference to the third aspect, in certain implementations of the third aspect, the processing unit is further configured to determine that the training task of the joint learning is finished; and the sending unit is further configured to send a first transaction to the blockchain network, where the first transaction is used to instruct closure of the state channel.
In a fourth aspect, a joint learning apparatus is provided, including: a sending unit, configured to send initial model parameters to a device, where the initial model parameters are the parameters with which the device builds an initial model for participating in joint learning; and a receiving unit, configured to receive, through a state channel, a training result for updating the initial model parameters, where the state channel is an off-chain channel between the device and an application server node, located outside the blockchain network.
With reference to the fourth aspect, in some implementations of the fourth aspect, the receiving unit is further configured to receive channel address information sent by the blockchain network, where the channel address information is used to identify the state channel; and the sending unit is further configured to send the channel address information to the device.
With reference to the fourth aspect, in some implementations of the fourth aspect, the sending unit is further configured to send a second transaction to the blockchain network, the second transaction being used to deploy the state channel.
With reference to the fourth aspect, in some implementations of the fourth aspect, the sending unit is further configured to send a third transaction to the blockchain network, where the third transaction is used to deploy a joint learning intelligent contract to the blockchain network, the joint learning intelligent contract is an off-chain intelligent contract whose execution environment does not belong to the blockchain network, and the joint learning intelligent contract includes training logic instructions for the initial model parameters.
With reference to the fourth aspect, in some implementations of the fourth aspect, the sending unit is further configured to pay digital currency to the device through the state channel.
With reference to the fourth aspect, in some implementations of the fourth aspect, the receiving unit is further configured to receive a first message sent by the device, where the first message is used to indicate that the device participates in joint learning, and the first message includes the digital currency wallet address of the device.
In a fifth aspect, there is provided a joint learning apparatus, including: a processor, a memory for storing a computer program, the processor being configured to invoke and run the computer program from the memory, such that the joint learning apparatus performs the joint learning method of the first aspect and its various possible implementations.
For example, the joint learning device may be a terminal device.
Optionally, the number of the processors is one or more, and the number of the memories is one or more.
Alternatively, the memory may be integral to the processor or provided separately from the processor.
In a sixth aspect, there is provided a joint learning apparatus including: a processor, a memory for storing a computer program, the processor being configured to invoke and run the computer program from the memory, such that the joint learning apparatus performs the joint learning method of the second aspect and its various possible implementations.
For example, the joint learning device may be an application server node.
Optionally, the number of the processors is one or more, and the number of the memories is one or more.
Alternatively, the memory may be integral to the processor or provided separately from the processor.
In a seventh aspect, a computer program product is provided, the computer program product comprising a computer program (also referred to as code or instructions) which, when executed, causes a computer or at least one processor to perform the method of the first aspect and its various implementations described above.
In an eighth aspect, a computer program product is provided, the computer program product comprising a computer program (also referred to as code or instructions) which, when executed, causes a computer or at least one processor to perform the method of the second aspect and its various implementations described above.
In a ninth aspect, a computer-readable medium is provided, which stores a computer program (also referred to as code or instructions) that, when run on a computer or at least one processor, causes the computer or the processor to perform the method of the first aspect and its various implementations.
In a tenth aspect, a computer-readable medium is provided, which stores a computer program (also referred to as code or instructions) that, when run on a computer or at least one processor, causes the computer or the processor to perform the method of the second aspect and its various implementations.
In an eleventh aspect, a chip system is provided, which comprises a processor for enabling a server in a computer to implement the functions referred to in the first aspect and its various implementations.
In a twelfth aspect, a chip system is provided, which comprises a processor for enabling a server in a computer to implement the functions referred to in the second aspect and its various implementations.
Drawings
FIG. 1 is an exemplary flow diagram of a prior art joint learning technique;
FIG. 2 is a schematic flow chart diagram of a joint learning method provided in accordance with an embodiment of the present application;
FIG. 3 is a schematic flow chart diagram of a joint learning method provided in accordance with another embodiment of the present application;
FIG. 4 is a schematic structural diagram of a joint learning apparatus provided according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a joint learning apparatus provided in accordance with another embodiment of the present application;
FIG. 6 is a schematic structural diagram of a joint learning apparatus provided according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a joint learning apparatus provided in accordance with another embodiment of the present application;
fig. 8 is a schematic structural diagram of a joint learning apparatus according to another embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
First, a brief description is given of the concepts of blockchain technology, intelligent contracts, and joint learning involved in the embodiments of the present application.
1. Intelligent contract
An intelligent contract is a collection of code and data, and may also be referred to as a "programmable contract". Generally speaking, an intelligent contract is defined by program code and preset with running conditions; an action is performed when a running condition is triggered. The "intelligence" lies in execution: when a preset condition is reached, the contract runs automatically.
2. Joint learning
Fig. 1 is a schematic flowchart of joint learning, including steps 110 to 140.
110. The model demander sends initial parameters to one or more devices participating in joint learning; the initial parameters are used to provide the one or more devices with the initial model for joint learning.
It should be understood that the initial model may be a machine learning model or a computer algorithm model. For example, the initial model may be a neural network model employed in machine learning; when the initial model is a neural network model, the initial parameters may be the weight values used to build the neural network model. When the initial model is a computer algorithm model, the initial parameters may be the parameters required to build the algorithm model. The foregoing is illustrative and not limiting of the present application.
It should be understood that the model demander may be an application server node, or an application (APP) in the application server node. The device may be a terminal device, e.g., a user equipment, a mobile device, a user terminal, or a wireless communication device. The foregoing is illustrative and not limiting of the present application.
120. The devices participating in joint learning perform local training according to the obtained initial model parameters to obtain a training result for updating the initial model parameters.
130. The devices participating in joint learning feed the training result back to the model demander.
It should be noted that the training result may be obtained by improving the initial model through learning on the private data of the user of the device, and then compressing the changed part of the initial model into a small update package for feedback.
For example, where the initial model is a neural network model, the changed portion of the initial model may be the changed portion of the network weight values of the initial model.
140. The model demander adjusts the model parameters according to the training results fed back by the one or more devices participating in joint learning.
In other words, in a joint learning scenario, a device participating in joint learning may download the current latest initial model, train it locally to improve it, and then compress the changed portions of the model into a small update package. The model update is sent to the model demander using encrypted communication, and the model demander averages the received model updates fed back by the participating devices so as to improve the initial model. In this way, all training data stay on the participating devices, and the user's personal private data used for local training is never sent to the model demander; only the changed part of the model is.
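By way of example and not limitation, the averaging performed by the model demander could look like the following sketch (assuming plain weight vectors and equal weighting of devices; the actual aggregation rule is not fixed by this flow):

```python
def aggregate_updates(initial_weights, device_updates):
    """Average the weight updates fed back by the participating devices and
    apply them to the current model (a federated-averaging-style rule)."""
    n = len(device_updates)
    avg_delta = [sum(column) / n for column in zip(*device_updates)]
    return [w + d for w, d in zip(initial_weights, avg_delta)]

# e.g. two devices feed back small deltas for a 3-parameter model
new_weights = aggregate_updates(
    [0.5, -1.2, 3.0],
    [[0.1, 0.0, -0.2], [0.3, 0.0, 0.0]],
)
# new_weights -> [0.7, -1.2, 2.9]
```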
3. Off-chain payment channel technology
The off-chain payment channel is one of the technical routes for off-chain scaling of blockchains: a general scheme for updating the state related to some transactions outside the blockchain. The core idea is that the parties to certain transactions establish an off-chain communication channel in which the two parties interact; state updates are not submitted to the main-chain miners, and when the state channel needs to be closed, the final state is submitted to the main chain through a Close Channel Transaction and synchronized to the main-chain ledger. Because the intermediate process does not interact with the main chain, the state channel has very high execution efficiency.
4. Blockchain technology
Blockchain technology realizes a chained data structure in which blocks of data and information are connected in chronological order, and distributed storage that is cryptographically guaranteed to be tamper-proof and unforgeable. Data and information in a blockchain are generally referred to as "transactions".
Blockchain technology is not a single technology but a system integrating peer-to-peer transmission, consensus mechanisms, distributed data storage, and cryptographic principles, with the technical characteristics of full openness and tamper resistance.
First, peer-to-peer transmission: the nodes participating in the blockchain are independent and peer-to-peer, and data and information are synchronized between nodes through peer-to-peer transmission technology. The nodes can be different physical machines or different instances in the cloud.
Second, consensus mechanism: the consensus mechanism of the blockchain refers to the process by which the participating nodes reach agreement on specific data and information through interaction under preset logic rules. The consensus mechanism relies on well-designed algorithms, so different consensus mechanisms differ in performance (such as TPS, latency to reach consensus, computing resources consumed, and transmission resources consumed).
Third, distributed data storage: distributed storage in the blockchain means that each participating node stores an independent and complete copy of the data, ensuring that data storage is fully open among the nodes. Unlike traditional distributed storage, which divides the data into multiple parts according to certain rules for backup or synchronous storage, blockchain distributed storage relies on consensus among independent, equal nodes to achieve highly consistent data storage.
Fourth, cryptographic principles: the blockchain usually relies on asymmetric encryption technology to realize trusted information propagation and verification.
The concept of a "block" is to organize one or more data records in block form, and the block size can be customized according to the actual application scenario; a "chain" is a data structure that connects the blocks storing the data records in chronological order using hash (HASH) techniques. Each block comprises a "block header" and a "block body": the block body contains the transaction records packed into the block, and the block header contains the root hash of all transactions in the block and the hash of the previous block. This data structure ensures that data stored on the blockchain cannot be tampered with.
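By way of example and not limitation, the following sketch shows this block layout (a flat hash over the transaction list stands in for a real Merkle root):

```python
import hashlib
import json

def _sha256(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    """Minimal block: the 'block body' holds the packed transaction records;
    the 'block header' holds the root hash over them plus the previous
    block's hash, which is what makes the stored data tamper-evident."""
    header = {"tx_root": _sha256(transactions), "prev_hash": prev_hash}
    return {"header": header, "body": transactions, "hash": _sha256(header)}

genesis = make_block("0" * 64, [{"tx": "deploy joint learning contract"}])
block_1 = make_block(genesis["hash"], [{"tx": "close channel, final balances"}])
```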
In the prior art scheme combining blockchain and joint learning, the training result of each round of training on a device is sent to nodes in the blockchain network for processing; that is, the training result of each device's local training must be shared by multiple nodes in the blockchain. Every training round goes on-chain, and when the devices number tens of millions of mobile phone terminals, the transactions-per-second (TPS) requirement is high and the existing blockchain architecture cannot meet it.
In view of this, the present application provides a method combining blockchain and joint learning, which sends the training result of a device participating in joint learning to the model demander through a state channel; that is, state channels may be established between the application server node and the devices participating in joint learning, so that the training results obtained after the devices perform local training do not need to be shared among multiple nodes of the blockchain network, and the high-TPS requirement of combining joint learning with blockchain technology can be met.
In the present application, a device may be a terminal device, for example, a user equipment, a mobile device, a user terminal, or a wireless communication device. The terminal device may also be a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA), a handheld device with a wireless communication function, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a future 5G network, or a terminal device in a future evolved Public Land Mobile Network (PLMN), which is not limited in this embodiment.
A method 200 for combining blockchains and joint learning provided by the embodiment of the present application is described below with reference to fig. 2, and fig. 2 shows a schematic flow chart of the method 200 for combining blockchains and joint learning provided by the embodiment of the present application, which includes steps 210 to 230.
Step 210, the device receives an initial model parameter sent by the application server node, where the initial model parameter is a parameter used by the device participating in the joint learning to establish an initial model.
It should be noted that the initial model may be a machine learning model or a computer algorithm model. For example, the initial model may be a neural network model used in machine learning; when the initial model is a neural network model, the initial parameters may be the weight values used to build the neural network model. When the initial model is a computer algorithm model, the initial parameters may be the parameters required to build the algorithm model. The foregoing is illustrative and not limiting of the present application.
It should be understood that in the present application, the application server node may be the model demander shown in fig. 1; for example, the application server node may be a server, i.e., a device providing computing services. Alternatively, the role may be played by an application (APP) on the application server node.
It should be understood that in the embodiments of the present application, the device and the application server node may be light nodes, i.e., nodes that hold only the transaction data related to themselves. If only full nodes (nodes holding all transaction data of the whole network) are regarded as nodes of the blockchain network, then the device and the application server node do not belong to the blockchain network; if both full nodes and light nodes are regarded as nodes of the blockchain network, then the device and the application server node belong to the blockchain network.
Step 220, the device trains the initial model parameters and generates a training result for updating the initial model parameters.
The training result may be the data generated when the device improves the initial model by learning on its private data and then compresses the changed part of the initial model.
For example, when the initial model is a neural network model, the training result may be the changed part of the neural network weight values. That is, assuming the initial parameter is the weight value W1 of the neural network model and the weight value after local training is W2, the training result may be the difference between W1 and W2.
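By way of example and not limitation, the extraction of that difference could look like the following sketch (assuming plain weight vectors; the helper is hypothetical):

```python
def weight_delta(w1, w2, eps=1e-8):
    """Return only the changed part of the weights as sparse
    (index, difference) pairs -- the compressed 'update package' the
    device feeds back instead of its raw private training data."""
    return [(i, b - a) for i, (a, b) in enumerate(zip(w1, w2)) if abs(b - a) > eps]

# W1 -> local training -> W2; only the first and third weights changed,
# so the update package carries entries for indices 0 and 2 only
update = weight_delta([0.5, -1.2, 3.0], [0.6, -1.2, 2.8])
```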
In one example, a device participating in joint learning may obtain a joint learning intelligent contract, which may include training logic instructions for the initial model parameters; the device can train the initial model parameters according to the obtained joint learning intelligent contract to generate a training result. The structural design of the joint learning intelligent contract may be as shown in Table 1.
TABLE 1
Joint learning intelligent contract structure
1. Requirements for user data and the data processing flow
2. Model architecture (DNN hyperparameters)
3. Model training logic
4. Content the device must return
5. Cost of a single training round
As shown in table 1, the joint learning intelligent contract may include data information required for joint learning, content requirements of training results fed back to the application server node by the devices participating in joint learning, and cost information corresponding to the training results fed back by the devices participating in joint learning.
It should be understood that the initial model may be a machine learning model, for example a neural network model used in machine learning; when the initial model is a neural network model, the initial parameters may be the weight values for establishing the neural network model, and the training logic instructions of the initial model included in the intelligent contract may be to execute one gradient descent step.
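By way of example and not limitation, the Table 1 structure could be sketched as data plus one gradient-descent training step (all field names are illustrative assumptions):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class JointLearningContract:
    """Sketch of the Table 1 structure; every field name is illustrative."""
    data_requirements: str   # 1. required user data and processing flow
    hyperparameters: dict    # 2. model architecture (DNN hyperparameters)
    return_content: str      # 4. content the device must feed back
    fee_per_training: int    # 5. cost of a single training round

    # 3. model training logic: here, one gradient-descent step on the weights
    def train_step(self, weights: List[float], grads: List[float]) -> List[float]:
        lr = self.hyperparameters.get("learning_rate", 0.01)
        return [w - lr * g for w, g in zip(weights, grads)]
```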
It should be understood that the initial model may also be a computer algorithm model; in that case the initial parameters may be the parameters required to build the algorithm model, and the training logic instructions of the initial model included in the intelligent contract may be the algorithm the computer needs to perform one model calculation. The foregoing is illustrative and not limiting of the present application.
In an embodiment of the application, the joint learning intelligent contract may be an intelligent contract obtained by deploying an off-chain intelligent contract to the blockchain network, where an off-chain intelligent contract refers to an intelligent contract whose execution environment does not belong to the blockchain network.
For example, the application server node may redesign the structure of an off-chain intelligent contract so that it meets the requirements of joint learning; that is, the intelligent contract can serve as the carrier of the joint learning program (for example, the joint learning logic may be a section of code encapsulated in the intelligent contract), and the redesigned off-chain intelligent contract can be regarded as the joint learning intelligent contract. The application server node may send a transaction (e.g., the third transaction) to the blockchain network to deploy the off-chain intelligent contract onto the blockchain network.
Furthermore, the application server node can publish the address of the joint learning intelligent contract so that a third-party organization can audit the joint learning program, ensuring its credibility. It should be appreciated that the address of the joint learning intelligent contract may be a hash value of the contract, and the third-party organization can find the joint learning intelligent contract in the blockchain network based on that hash value. The information published by the application server node may be as shown in Table 2.
TABLE 2
Transaction identification (Transaction ID)
From: application server node address (demander address)
Hash string (Hash String): hash value of the joint learning program code
Data (Data): joint learning intelligent contract program
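By way of example and not limitation, assembling the published record of Table 2 could look like the following sketch (the transaction-ID scheme is an assumption):

```python
import hashlib
import uuid

def make_deploy_transaction(server_address: str, contract_code: bytes) -> dict:
    """Package an off-chain joint learning contract for on-chain deployment;
    the published hash string is what a third-party auditor uses to locate
    and verify the joint learning program code."""
    return {
        "transaction_id": uuid.uuid4().hex,
        "from": server_address,  # application server node (demander) address
        "hash_string": hashlib.sha256(contract_code).hexdigest(),
        "data": contract_code,   # the joint learning intelligent contract program
    }
```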
In one example, a device participating in joint learning may obtain the joint learning intelligent contract through the blockchain network, where the joint learning intelligent contract is the intelligent contract that the application server node deployed to the blockchain network as an off-chain intelligent contract.
For example, the devices participating in joint learning may install a joint learning application, the application server node may publish the address of the joint learning intelligent contract to the joint learning application, and the devices may download the joint learning intelligent contract in the joint learning application.
It should be understood that a joint learning intelligent contract deployed in the blockchain network is shared among the nodes of the blockchain network, which to a certain extent prevents malicious tampering of the contract and ensures its credibility.
In one example, a device participating in joint learning may also obtain the joint learning intelligent contract from the application server node. This application does not limit the specific source from which the device obtains the joint learning intelligent contract, provided its authenticity and credibility are guaranteed.
In an embodiment of the application, after obtaining the joint learning intelligent contract, the device may send a first message to the application server node, the first message being used to indicate that the device participates in joint learning; that is, the device declares its participation by sending the first message. The first message further includes the digital currency wallet address of the device, and after the device sends a local joint learning training result to the application server node, the application server node can pay digital currency according to that wallet address.
It should be understood that the digital currency wallet address may be used to identify the devices participating in joint learning. When multiple devices participate, the application server node distinguishes them by their digital currency wallet addresses, so that after receiving the joint learning training results sent by the devices, it can pay digital currency to each of them.
In one example, upon receiving the digital currency wallet address of the device, the application server node may issue a transaction (e.g., a second transaction) to the blockchain network to deploy an off-chain channel contract and establish an off-chain channel between the application server node and the device, where the off-chain channel is located outside the blockchain network.
Further, the blockchain network sends address information of the state channel to the application server node; the address information is used to identify the state channel.
It should be noted that the channel address information may be an index value, and the application server node may find the established state channel according to the index value. For example, the channel address information may be a hash value of the state channel.
It should be understood that a state channel is an "off-chain" technique for performing transactions and other state updates; that is, a state channel refers to an off-chain channel of the blockchain network, data or information sent through it does not need to be shared among the blockchain nodes, and it may be established between two nodes. For example, the state channel may be a unidirectional off-chain payment channel established between the application server node and the device, through which the application server node pays digital currency to the device after receiving the joint learning training result sent by the device.
For example, the state channel may also be a bidirectional off-chain channel established between the application server node and the device, through which the device may send its local joint learning training result to the application server node, and the application server node may pay digital currency to the device after receiving the training result.
In an embodiment of the present application, based on system-on-chip (SoC) hardware, the device may provide a three-layer hardware security architecture comprising a Rich Execution Environment (REE), a Trusted Execution Environment (TEE), and a Secure Execution Environment (SEE), where the REE runs security-insensitive programs and stores security-insensitive data, the TEE runs security-sensitive programs and stores security-sensitive data, and the SEE runs high-security financial payment programs and stores high-security financial payment data.
By way of example and not limitation, the SoC may provide at least one running environment such as TrustZone, Bowmore, eSE, or inSE, serving for example as the TEE and the eSE.
For example, when the device is a smartphone, the SoC may be configured in the smartphone, and the smartphone provides running environments such as TrustZone, Bowmore, eSE, and inSE through the SoC.
By way of example and not limitation, the SoC may support running an instruction set based on the ARM (Advanced RISC Machine) architecture; such an SoC is referred to as an ARM-based SoC. For example, an ARM-based SoC may be configured on the device, thereby providing the device with the three-layer hardware security architecture.
In one example, the joint learning intelligent contract is run under the trusted execution environment (TEE) to train the initial model parameters.
For example, the device may run the joint learning intelligent contract in a joint learning application under the TEE to train the initial model parameters and generate a training result for updating the initial model parameters.
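Purely as an illustration of that dispatch, the following sketch uses a hypothetical TEE session wrapper (not TrustZone's actual interface) to keep the weights and private data inside the trusted environment:

```python
class TrustedSession:
    """Hypothetical stand-in for a TEE session (e.g. a TrustZone trusted
    application); not a real vendor API."""
    def __enter__(self):
        # enter the secure world: code and data here are isolated from the REE
        return self
    def __exit__(self, *exc):
        # leaving the secure world; secure memory is wiped by the TEE
        return False
    def run(self, fn, *args):
        return fn(*args)

def train_in_tee(train_step, weights, grads):
    """Dispatch one training step into the TEE so the model weights and
    the user's private data never leave the trusted environment."""
    with TrustedSession() as tee:
        return tee.run(train_step, weights, grads)
```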
Step 230, the device sends the training result to the application server node through a state channel, where the state channel is an off-chain channel between the device and the application server node, located outside the blockchain network.
For example, the state channel may be a unidirectional off-chain payment channel established between the application server node and the device, through which the application server node pays digital currency to the device after receiving the joint learning training result sent by the device.
It should be understood that in the embodiments of the present application, digital currency refers to electronic currency that is created, issued, and circulated by means of verification and cryptographic techniques, and is characterized by the use of P2P peer-to-peer network technology for issuing, managing, and circulating the currency.
For example, the state channel may also be a bidirectional off-chain channel established between the application server node and the device, through which the device may send its local joint learning training result to the application server node, and the application server node may pay digital currency to the device after receiving the training result.
The application server node may adjust the initial model parameters according to the received training results; here, "device" may refer to one or more devices participating in joint learning.
In an embodiment of the application, after the device sends the training result to the application server node through the state channel, the application server node may pay digital currency to the device according to the acquired digital currency wallet address of the device.
For example, the application server node may send a transaction to the device through the state channel (e.g., an off-chain payment channel). Suppose the total digital currency in the channel is S (A + B = S), where the application server node's balance is A, the device's balance is B, and the fee for a single training round is t; after the transaction, the application server node's balance is A - t and the device's balance is B + t. The transaction carries the signature of the application server node. After the device also signs the transaction, the transaction can be sent to the blockchain network.
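By way of example and not limitation, that bookkeeping could be sketched as follows (integer currency units and field names are assumptions; note the channel total S stays invariant):

```python
def pay_for_training(state: dict, fee: int) -> dict:
    """One off-chain payment after a received training result:
    A -> A - t and B -> B + t, with the channel total S = A + B unchanged."""
    a, b = state["app_server"], state["device"]
    assert fee <= a, "application server balance cannot go negative"
    new_state = {"app_server": a - fee, "device": b + fee,
                 "nonce": state["nonce"] + 1}
    assert new_state["app_server"] + new_state["device"] == a + b  # S invariant
    return new_state

state = {"app_server": 100, "device": 0, "nonce": 0}
state = pay_for_training(state, fee=5)  # after one round: A = 95, B = 5
```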
In one example, to ensure that the environment receiving the digital currency payment is secure and trusted, the device may receive the digital currency paid by the application server node through the state channel under the trusted execution environment (TEE).
In an embodiment of the application, the device determines that the training task of joint learning is finished; the device may then send a transaction (e.g., the first transaction) to the blockchain network, which is used to indicate that the state channel is to be closed.
It will be appreciated that the device may receive many transactions sent by the application server node, so the device's digital currency balance in the state channel keeps increasing; only the latest transaction needs to be kept. When the device decides to close the state channel, it can sign the latest transaction, send it to the blockchain network, and withdraw the digital currency belonging to it.
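A matching sketch of the close step (again with assumed names; the on-chain submission call is a stand-in):

```python
def close_channel(received_states: list, device_sign, submit_to_chain) -> dict:
    """Only the newest state matters: sign it and submit it as the
    close-channel transaction to withdraw the device's digital currency.

    received_states -- the signed updates received over the channel
    device_sign     -- the device's signing function (hypothetical)
    submit_to_chain -- stand-in for broadcasting to the blockchain network
    """
    latest = max(received_states, key=lambda s: s["nonce"])
    closing_tx = {"type": "close_channel", "state": latest,
                  "device_signature": device_sign(repr(latest))}
    submit_to_chain(closing_tx)
    return closing_tx
```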
In the embodiment of the application, the training parameters generated after the device participates in the joint learning and performs local training can be fed back to the application server node through the state channel. Sending the training result through the state channel can meet the requirement for high transactions per second (TPS) in a scenario that combines joint learning with a blockchain. Moreover, the training result has a certain degree of privacy; sending it through the state channel avoids sharing it with every node in the blockchain network and thereby protects the privacy of the device's data.
Next, a specific flow of the joint learning method in the embodiment of the present application is described with reference to fig. 3.
Fig. 3 is a schematic flowchart of a joint learning method according to an embodiment of the present application. The method shown in fig. 3 includes steps 301 to 314, and the steps 301 to 314 are described in detail below.
It should be understood that the model demander in fig. 3 may be the application server node shown in fig. 2, or an application (APP) in the application server node. The device may be a terminal device, for example, user equipment, a mobile device, a user terminal, or a wireless communication device. The foregoing is illustrative and does not limit the present application.
Step 301, the device locally installs a joint learning APP.
The joint learning APP may collect different personal privacy data for each different joint learning intelligent contract and preprocess the data. At the same time, it receives the initialization model data w provided by the model demander.
In one example, as shown in fig. 4, an ARM-based SoC may provide a three-layer hardware security architecture for the device. The joint learning APP can ensure the security of the data processing flow within the TEE. After the trusted joint learning APP is installed on the device, multiple joint learning intelligent contracts can be loaded at the same time, satisfying the requirements of multiple model demanders.
For example, the different personal privacy data collected by the joint learning APP may be stored in a local personal-privacy-data database. Storing the personal privacy data within the TEE protects it from being stolen by malicious software on the one hand, and on the other hand ensures that the data provided by the user of the device is authentic and therefore of real value to the model demander.
Step 302, the model demander deploys the off-chain intelligent contract onto the chain, where it can be audited, ensuring that it is trustworthy and traceable.
For example, the model demander deploys the intelligent contract onto the chain through a transaction and may at the same time publish the address of the off-chain intelligent contract, for example, to the joint learning APP. In this way, the joint learning program can be audited by a third-party organization, ensuring its credibility.
It should be understood that the address of the off-chain intelligent contract may refer to a hash value of the contract, from which the contract can be located. It should be noted that, unlike an ordinary intelligent contract in a blockchain, the execution environment of this intelligent contract is not on the chain but on the mobile phone side. Combining joint learning with the blockchain and the off-chain intelligent contract guarantees the credibility of the joint learning program. The structural design of the off-chain intelligent contract can be shown in table 1.
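For illustration only, the content-addressing idea behind the contract address can be sketched as follows. The use of SHA-256 is an assumption made for the example; the embodiments only specify that the address may be a hash value of the contract.

```python
# Sketch: derive the off-chain intelligent contract's address as the hash of
# its program bytes, and verify a downloaded copy against that address.
# SHA-256 is an assumed choice; the embodiments only say "a hash value".
import hashlib

def contract_address(contract_bytes: bytes) -> str:
    return hashlib.sha256(contract_bytes).hexdigest()

def verify_download(contract_bytes: bytes, published_address: str) -> bool:
    # The device can confirm that the downloaded program matches the audited
    # address published on the chain before loading it into the joint
    # learning APP.
    return contract_address(contract_bytes) == published_address

program = b"...joint learning contract program bytes..."
address = contract_address(program)
assert verify_download(program, address)
```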
Step 303, the device may receive a recommendation of a joint learning intelligent contract program and download the off-chain intelligent contract program deployed in the blockchain network.
Step 304, the device sends its digital currency wallet address to the model demander and indicates to the model demander that it will take part in the joint learning process.
Step 305, the model demander issues a transaction (e.g., a second transaction) to the blockchain network to deploy the off-link channel contract, establishing a state channel (e.g., an off-link payment channel) with the device.
The state channel established between the application server node and the device may be a one-way off-link payment channel, used by the model demander to pay digital currency to the device; alternatively, it may be a bidirectional off-link channel, used by the device to send training results to the model demander and by the model demander to pay digital currency to the device after receiving those results.
For example, the model demander deposits S into the one-way off-link payment channel. The off-link channel contract requires a transaction carrying both the demander's signature SigA and the device's signature SigB in order to close the payment channel. The total amount of the closing transaction is S, of which the amount output to the demander's address is A and the amount output to the device's address is B, where A + B = S.
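For illustration only, a minimal sketch of the closing rule such an off-link channel contract might enforce is given below. The function signature and the Ed25519 scheme are assumptions, and the dispute and timeout handling that practical payment channels require is omitted.

```python
# Sketch of the channel contract's closing rule: a close is valid only if
# both parties signed the final state and the outputs A and B sum to the
# deposit S. Dispute/timeout logic is intentionally omitted (illustrative).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def can_close(deposit_s: int, amount_a: int, amount_b: int, state_bytes: bytes,
              sig_a: bytes, sig_b: bytes,
              demander_pub: Ed25519PublicKey,
              device_pub: Ed25519PublicKey) -> bool:
    if amount_a + amount_b != deposit_s:  # outputs must account for all of S
        return False
    try:
        demander_pub.verify(sig_a, state_bytes)  # SigA of the model demander
        device_pub.verify(sig_b, state_bytes)    # SigB of the device
    except InvalidSignature:
        return False
    return True
```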
Step 306, the model demander receives channel address information sent by the blockchain network, and the channel address information is used for identifying the state channel.
Step 307, the model demander sends the channel address information to the device.
It should be noted that, in step 305, the model demander sends a transaction to the blockchain network requesting that a state channel be established with the device. After receiving the transaction, the blockchain network deploys the state channel between the model demander and the device, i.e., an off-link channel outside the blockchain network, and sends the address information of the state channel to the model demander. After receiving the address information, the model demander forwards it to the device so that the device can identify the established state channel.
Step 308, the model demander sends the initial model parameters to the device.
For example, the model demander sends the initial model parameters of the joint learning to the device; the initial parameters may be the values of the weights w of a neural network.
Step 309, the device executes the intelligent contract and performs a single training step locally; the local single-step training may use a stochastic gradient descent optimization algorithm to update the value of the parameter w.
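For illustration only, one such local single-step update can be sketched as follows. The linear model and squared loss are assumptions made for the example; the embodiments do not fix a particular model form.

```python
# Sketch of the local single-step stochastic gradient descent update that the
# off-chain intelligent contract could run on the device's private data.
# A linear model with squared loss is an illustrative assumption only.
import numpy as np

def local_sgd_step(w: np.ndarray, x: np.ndarray, y: float,
                   lr: float = 0.01) -> np.ndarray:
    """One SGD step on a single local example (x, y)."""
    pred = x @ w                   # prediction with the current weights w
    grad = 2.0 * (pred - y) * x    # gradient of (pred - y)**2 w.r.t. w
    return w - lr * grad           # updated w, i.e., the training result

# Usage: the device refines the demander's initial weights on private data
w_initial = np.zeros(3)                                  # from the demander
x_private, y_private = np.array([1.0, 2.0, 0.5]), 1.0    # stays on device
w_updated = local_sgd_step(w_initial, x_private, y_private)
```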
In one example, as shown in fig. 4, the execution environment of the off-chain joint learning intelligent contract is a virtual machine (VM). For a given joint learning contract, the joint learning APP uses the user data of the device together with the initial model parameters and runs in the off-chain intelligent contract VM. The execution environment of the off-chain intelligent contract can support not only joint learning programs but also other off-chain intelligent contract programs.
Step 310, the model demander receives the training result sent by the device.
For example, the device may feed back the training result to the model demander through the off-link channel established with the model demander.
Step 311, the model demander pays digital currency to the device via the state channel (e.g., the off-link payment channel).
For example, through the state channel, the model demander sends a transaction to the device. Suppose the total assets in the channel amount to S, of which the demander holds A and the device holds B, and the cost of a single round of training is t. The outputs of this transaction are A - t for the model demander and B + t for the device, where A + B = S. The transaction carries the signature of the model demander; once the device countersigns it, the transaction can be sent to the blockchain network to close the channel and withdraw the digital currency belonging to the device.
In one example, the state channel may operate in a TEE, so that the device is in a secure environment both when receiving the digital currency of the model demander and when sending model data updates to the model demander. When spending digital currency, the transaction needs to be signed within the TEE using the private key of the digital currency wallet.
Step 312, the model demander receives the training results fed back by the devices participating in the joint learning and averages them to obtain the latest model parameters, then returns to step 308 and sends the latest model parameters to the devices participating in the joint learning.
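For illustration only, the averaging performed by the model demander in step 312 can be sketched as follows; equal weighting across devices is an assumption made for the example.

```python
# Sketch of the model demander's aggregation in step 312: average the weight
# updates fed back by the participating devices to obtain the latest model
# parameters. Equal per-device weighting is an assumption.
import numpy as np

def aggregate(training_results: list) -> np.ndarray:
    """Average the devices' updated weights into the latest parameters."""
    return np.mean(np.stack(training_results), axis=0)

# Usage: three devices each return updated weights for the same model
results = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.1, 0.9])]
w_latest = aggregate(results)  # array([1., 1.]), then sent back in step 308
```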
Step 313, the device can view the on-chain digital currency and the off-chain digital currency belonging to the device at any time.
The device may receive many transactions from the model demander, so the device's digital currency balance in the state channel increases monotonically; only the latest transaction needs to be retained.
Step 314, when the joint learning ends and the device decides to close the channel, the device signs the latest transaction, sends it to the blockchain network, and withdraws the digital currency belonging to it.
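For illustration only, the device-side bookkeeping implied by steps 313 and 314 can be sketched as follows, reusing the assumed nonce field from the earlier payment sketch: the device retains only the latest signed channel state and countersigns it to close the channel.

```python
# Sketch: the device keeps only the latest (highest-nonce) channel state it
# has received and countersigns it to close the channel (illustrative only;
# the nonce field is the assumption carried over from the earlier sketch).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class DeviceChannelView:
    def __init__(self, device_key: Ed25519PrivateKey):
        self.device_key = device_key
        self.latest = None  # (nonce, state_bytes, server_sig)

    def on_update(self, nonce: int, state_bytes: bytes, server_sig: bytes):
        # Balances only ever grow in the device's favour, so older states
        # can be discarded; only the newest one matters (step 313).
        if self.latest is None or nonce > self.latest[0]:
            self.latest = (nonce, state_bytes, server_sig)

    def close(self):
        # Countersign the latest state; the doubly signed transaction is then
        # sent to the blockchain network to withdraw the device's digital
        # currency (step 314).
        nonce, state_bytes, server_sig = self.latest
        return state_bytes, server_sig, self.device_key.sign(state_bytes)
```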
It should be noted that the example of fig. 3 is merely to assist those skilled in the art in understanding the embodiments of the present application, and is not intended to limit the embodiments of the present application to the particular scenarios illustrated. It will be apparent to those skilled in the art from the example given in fig. 3 that various equivalent modifications or variations can be made, and such modifications or variations also fall within the scope of the embodiments of the present application.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In the embodiment of the present application, the training parameters generated after the devices participating in the joint learning perform local training may be fed back to the application server node through the state channel, and sending the training result through the state channel can meet the high-TPS requirement of a scenario combining joint learning with a blockchain. It should be understood that the joint learning apparatus of the embodiments of the present application can perform the foregoing methods of the embodiments of the present application; for the specific working processes of the products described below, reference may be made to the corresponding processes in the foregoing method embodiments.
Fig. 5 is a schematic block diagram of a joint learning apparatus 500 provided in an embodiment of the present application. It should be understood that the joint learning apparatus 500 is capable of performing the steps performed by the apparatus in the method of fig. 2 or fig. 3, and will not be described in detail herein to avoid repetition. The joint learning apparatus 500 includes: a receiving unit 510, a processing unit 520 and a transmitting unit 530.
The receiving unit 510 is configured to receive initial model parameters sent by an application server node, where the initial model parameters are parameters used by a device participating in joint learning to establish an initial model; the processing unit 520 is configured to train the initial model parameters and generate a training result for updating the initial model parameters; and the sending unit 530 is configured to send the training result to the application server node through a state channel, where the state channel is an off-link channel between the device and the application server node, and the off-link channel is located outside the blockchain network.
Optionally, as an embodiment, the receiving unit 510 is further configured to: receive channel address information sent by the application server node, where the channel address information is used to identify the state channel.
Optionally, as an embodiment, the receiving unit 510 is further configured to: acquire a joint learning intelligent contract, where the joint learning intelligent contract includes a training logic instruction for the initial model parameters; and the processing unit 520 is specifically configured to: run the joint learning intelligent contract to train the initial model parameters and generate a training result for updating the initial model parameters.
Optionally, as an embodiment, the receiving unit 510 is further specifically configured to: acquire the joint learning intelligent contract through the blockchain network, where the joint learning intelligent contract is an off-chain intelligent contract whose execution environment does not belong to the blockchain network.
Optionally, as an embodiment, the processing unit 520 is specifically configured to: run the joint learning intelligent contract in a joint learning application to train the initial model parameters.
Optionally, as an embodiment, the processing unit 520 is specifically configured to: run the joint learning intelligent contract under a trusted execution environment (TEE) to train the initial model parameters.
Optionally, as an embodiment, the receiving unit 510 is specifically configured to: receive digital currency paid by the application server node over the state channel.
Optionally, as an embodiment, the receiving unit 510 is specifically configured to: receive, under a trusted execution environment (TEE), digital currency paid by the application server node over the state channel.
Optionally, as an embodiment, the sending unit 530 is further configured to: send a first message to the application server node, where the first message indicates that the device participates in the joint learning and includes the digital currency wallet address of the device.
Optionally, as an embodiment, the processing unit 520 is further configured to: determine that the training task of the joint learning has finished; and the sending unit 530 is further configured to: send a first transaction to the blockchain network, where the first transaction is used to indicate that the state channel is to be closed.
It should be appreciated that the joint learning apparatus 500 herein is embodied in the form of a functional unit. The term "unit" herein may be implemented in software and/or hardware, and is not particularly limited thereto. For example, a "unit" may be a software program, a hardware circuit, or a combination of both that implement the above-described functions. The hardware circuitry may include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (e.g., a shared processor, a dedicated processor, or a group of processors) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality. Accordingly, the units of the respective examples described in the embodiments of the present application can be realized in electronic hardware, or a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 6 is a schematic block diagram of a joint learning apparatus 600 provided in an embodiment of the present application. It should be understood that the joint learning apparatus 600 may be an application server node capable of executing the steps executed by the application server node in the method of fig. 2 or fig. 3, and will not be described in detail here in order to avoid repetition. The joint learning apparatus 600 includes: a transmitting unit 610 and a receiving unit 620.
The sending unit 610 is configured to send initial model parameters to a device, where the initial model parameters are parameters used by the device participating in joint learning to establish an initial model; the receiving unit 620 is configured to receive, through a state channel, a training result for updating the initial model parameters, where the state channel is an off-link channel between the device and the application server node, and the off-link channel is located outside the blockchain network.
It is to be understood that the joint learning device 600 may further comprise a processing unit, which may be used to control the receiving unit 620 and the sending unit 610 to perform the relevant steps.
Optionally, as an embodiment, the receiving unit 620 is further configured to: receive channel address information sent by the blockchain network, where the channel address information is used to identify the state channel; and the sending unit 610 is further configured to send the channel address information to the device.
Optionally, as an embodiment, the sending unit 610 is further configured to: send a second transaction to the blockchain network, where the second transaction is used to deploy the state channel.
Optionally, as an embodiment, the sending unit 610 is further configured to: send a third transaction to the blockchain network, where the third transaction is used to deploy a joint learning intelligent contract to the blockchain network; the joint learning intelligent contract is an off-chain intelligent contract whose execution environment does not belong to the blockchain network, and the joint learning intelligent contract includes a training logic instruction for the initial model parameters.
Optionally, as an embodiment, the sending unit 610 is further configured to: pay digital currency to the device through the state channel.
Optionally, as an embodiment, the receiving unit 620 is further configured to: receive a first message sent by the device, where the first message indicates that the device participates in the joint learning and includes the digital currency wallet address of the device.
It should be appreciated that the joint learning apparatus 600 herein is embodied in the form of a functional unit. The term "unit" herein may be implemented in software and/or hardware, and is not particularly limited thereto. For example, a "unit" may be a software program, a hardware circuit, or a combination of both that implement the above-described functions. The hardware circuitry may include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (e.g., a shared processor, a dedicated processor, or a group of processors) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality. Accordingly, the units of the respective examples described in the embodiments of the present application can be realized in electronic hardware, or a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 7 shows a schematic block diagram of a joint learning apparatus 700 according to another embodiment of the present application. The joint learning apparatus 700 may be a terminal apparatus, as shown in fig. 7, and the joint learning apparatus 700 includes a processor 720, a memory 760, a communication interface 740, and a bus 750. The processor 720, the memory 760, and the communication interface 740 communicate via the bus 750, and may also communicate via other means such as wireless transmission. The memory 760 is configured to store instructions and the processor 720 is configured to execute the instructions stored by the memory 760. The memory 760 stores the program code 711, and the processor 720 may call the program code 711 stored in the memory 760 to perform the joint learning method shown in fig. 2 or 3.
For example, the processor 720 may be configured to execute step 220 in fig. 2, training the model parameters to generate a training result, or step 309 in fig. 3, executing the intelligent contract to perform local training.
The memory 760 may include both read-only memory and random access memory, and provides instructions and data to the processor 720. The memory 760 may also include non-volatile random access memory. The memory 760 may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced synchronous DRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
The bus 750 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. For clarity of illustration, however, the various buses are all labeled as the bus 750 in fig. 7.
It should be understood that the joint learning apparatus 700 shown in fig. 7 can implement various processes performed by the apparatus in the method embodiments shown in fig. 2 and fig. 3. The operations and/or functions of the modules in the joint learning apparatus 700 are respectively for implementing the corresponding processes of the apparatus in the above method embodiments. Reference may be made specifically to the description of the above method embodiments, and a detailed description is appropriately omitted herein to avoid redundancy.
Fig. 8 shows a schematic block diagram of a joint learning apparatus 800 according to another embodiment of the present application. The joint learning device 800 may be an application server node, as shown in fig. 8, the joint learning device 800 including a processor 820, a memory 860, a communication interface 840, and a bus 850. The processor 820, the memory 860 and the communication interface 840 communicate with each other through the bus 850, and may also communicate with each other by other means such as wireless transmission. The memory 860 is configured to store instructions and the processor 820 is configured to execute the instructions stored in the memory 860. The memory 860 stores the program code 811 and the processor 820 may call the program code 811 stored in the memory 860 to perform the joint learning method shown in fig. 2 or 3.
The communication interface 840 shown in fig. 8 may correspond to the receiving unit 620 and the sending unit 610 of the joint learning device 600 shown in fig. 6.
The memory 860 may include both read-only memory and random access memory, and provides instructions and data to the processor 820. The memory 860 may also include non-volatile random access memory. The memory 860 may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced synchronous DRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
The bus 850 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. For clarity of illustration, however, the various buses are all labeled as the bus 850 in fig. 8.
It should be understood that the joint learning apparatus 800 shown in fig. 8 can implement the processes performed by the application server node in the method embodiments shown in fig. 2 and fig. 3. The operations and/or functions of the modules in the joint learning apparatus 800 are respectively for implementing the corresponding processes of the application server node in the above method embodiments. Reference may be made specifically to the description of the above method embodiments, and a detailed description is appropriately omitted herein to avoid redundancy.
The present application also provides a computer-readable storage medium, which stores instructions that, when executed on a computer, cause the computer to perform the steps of the joint learning method as shown in fig. 2 and fig. 3.
The present application also provides a computer program product containing instructions which, when run on a computer or at least one processor, cause the computer to perform the steps of the joint learning method shown in fig. 2 and fig. 3.
The application also provides a chip comprising a processor. The processor is configured to read and run a computer program stored in a memory to execute the corresponding operations and/or processes performed in the joint learning method provided by the present application.
Optionally, the chip further comprises a memory, the memory is connected with the processor through a circuit or a wire, and the processor is used for reading and executing the computer program in the memory. Further optionally, the chip further comprises a communication interface, and the processor is connected to the communication interface. The communication interface is used for receiving data and/or information needing to be processed, and the processor acquires the data and/or information from the communication interface and processes the data and/or information. The communication interface may be an input output interface.
In the above embodiments, the processor may include, for example, a Central Processing Unit (CPU), a microprocessor, a microcontroller, or a digital signal processor, and may further include a GPU, an NPU, and an ISP, and the processor may further include necessary hardware accelerators or logic processing hardware circuits, such as an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program according to the present invention. Further, the processor may have the functionality to operate one or more software programs, which may be stored in the memory.
The memory may be a read-only memory (ROM), other types of static storage devices that may store static information and instructions, a Random Access Memory (RAM), or other types of dynamic storage devices that may store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, etc.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean that A exists alone, that both A and B exist, or that B exists alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, at least one of a, b, and c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, any function, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered by the protection scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

  1. A joint learning method based on a blockchain network is characterized by comprising the following steps:
    receiving initial model parameters sent by an application server node, wherein the initial model parameters are parameters used by a device participating in joint learning to establish an initial model;
    training the initial model parameters to generate a training result for updating the initial model parameters;
    and sending the training result to the application server node through a state channel, wherein the state channel is an off-link channel between the device and the application server node, and the off-link channel is located outside the blockchain network.
  2. The method of claim 1, wherein the method further comprises:
    receiving channel address information sent by the application server node, wherein the channel address information is used for identifying the state channel.
  3. The method of claim 1 or 2, wherein the method further comprises:
    acquiring a joint learning intelligent contract, wherein the joint learning intelligent contract comprises a training logic instruction of the initial model parameter;
    the training the initial model parameters and generating a training result for updating the initial model parameters includes:
    running the joint learning intelligent contract to train the initial model parameters, and generating a training result for updating the initial model parameters.
  4. The method of claim 3, wherein obtaining a joint learning intelligence contract comprises:
    acquiring the joint learning intelligent contract through the blockchain network, wherein the joint learning intelligent contract is an off-chain intelligent contract, and the execution environment of the off-chain intelligent contract does not belong to the blockchain network.
  5. The method of claim 3 or 4, wherein the running the joint learning intelligent contract to train the initial model parameters comprises:
    running the joint learning intelligent contract under a Trusted Execution Environment (TEE) to train the initial model parameters.
  6. The method of any of claims 1 to 5, wherein after the sending the training result to the application server node through the state channel, the method further comprises:
    receiving digital currency paid by the application server node over the state channel.
  7. The method of any of claims 1 to 6, wherein prior to the receiving digital currency paid by the application server node over the state channel, the method further comprises:
    sending a first message to the application server node, the first message indicating that the device is involved in joint learning, the first message including a digital currency wallet address of the device.
  8. The method of claim 6 or 7, wherein the receiving digital currency paid by the application server node over the state channel comprises:
    receiving, under a Trusted Execution Environment (TEE), digital currency paid by the application server node over the state channel.
  9. The method of any of claims 1 to 8, further comprising:
    determining that a training task of the joint learning is finished;
    and sending a first transaction to the blockchain network, wherein the first transaction is used for indicating that the state channel is to be closed.
  10. A joint learning method based on a blockchain network is characterized by comprising the following steps:
    sending initial model parameters to a device, wherein the initial model parameters are parameters used by the device participating in joint learning to establish an initial model;
    receiving, through a state channel, a training result for updating the initial model parameters, wherein the state channel is an off-link channel between the device and an application server node, and the off-link channel is located outside the blockchain network.
  11. The method of claim 10, wherein the method further comprises:
    receiving channel address information sent by the blockchain network, wherein the channel address information is used for identifying the state channel;
    and sending the channel address information to the device.
  12. The method of claim 11, wherein prior to receiving the channel address information sent by the blockchain network, the method further comprises:
    sending a second transaction to the blockchain network, wherein the second transaction is used for deploying the state channel.
  13. The method of any of claims 10 to 12, further comprising:
    sending a third transaction to the blockchain network, wherein the third transaction is used for deploying a joint learning intelligent contract to the blockchain network, the joint learning intelligent contract is an off-chain intelligent contract, the execution environment of the off-chain intelligent contract does not belong to the blockchain network, and the joint learning intelligent contract comprises a training logic instruction for the initial model parameters.
  14. The method of any of claims 10 to 13, wherein after the receiving, through the state channel, the training result for updating the initial model parameters, the method further comprises:
    paying digital currency to the device through the state channel.
  15. The method of claim 14, wherein prior to the paying digital currency to the device through the state channel, the method further comprises:
    receiving a first message sent by the device, wherein the first message is used for indicating that the device participates in joint learning, and the first message comprises a digital currency wallet address of the device.
  16. A joint learning device comprising a memory for storing a computer program and a processor for invoking and running the computer program from the memory to perform the method of any one of claims 1 to 9.
  17. A joint learning device comprising a memory for storing a computer program and a processor for invoking and running the computer program from the memory to perform the method of any one of claims 10 to 15.