CN112001502A - Federated learning training method and device robust to high-latency network environments - Google Patents

Federated learning training method and device robust to high-latency network environments

Info

Publication number
CN112001502A
Authority
CN
China
Prior art keywords
target
encrypted data
received
data uploading
time
Prior art date
Legal status
Granted
Application number
CN202010858570.XA
Other languages
Chinese (zh)
Other versions
CN112001502B (en)
Inventor
曾昱为
王健宗
瞿晓阳
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202010858570.XA
Priority to PCT/CN2020/118938 (published as WO2021155671A1)
Publication of CN112001502A
Application granted
Publication of CN112001502B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/20 Ensemble learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0852 Delays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/04 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L 63/0428 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L 63/0442 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload wherein the sending and receiving network entities apply asymmetric encryption, i.e. different keys for encryption and decryption
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Environmental & Geological Engineering (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a federated learning training method and device, computer equipment and a storage medium robust to high-latency network environments, relating to artificial intelligence technology. The method includes: obtaining the current system time and, if encrypted data uploaded by several data uploading terminals has not been received, obtaining the corresponding target data uploading terminals; obtaining the current network delay value of each target data uploading terminal to obtain the maximum network delay value; calculating a delay step according to the maximum network delay value and the unit timing interval step length; and summing the current system time and the delay step to obtain the target system time, and, if the current time is the target system time and target encrypted data uploaded by a target data uploading terminal has still not been received, suspending local federated learning training and resuming it only after all the target encrypted data uploaded by the target data uploading terminals has been received. The method maintains federated learning training efficiency under network latency through delay-based sparse updating.

Description

Federated learning training method and device robust to high-latency network environments
Technical Field
The invention relates to the technical field of artificial intelligence model hosting, and in particular to a federated learning training method and device, computer equipment and a storage medium robust to high-latency network environments.
Background
Federated machine learning is a machine learning framework built on distributed parameter aggregation techniques; it concerns distributed multiple users and the corresponding federated parameter aggregation mechanism. It can effectively help multiple organizations use data and build machine learning models while meeting the requirements of user privacy protection, data security and government regulations. As a distributed machine learning paradigm, federated learning effectively addresses the data-silo problem: participants can jointly build models without sharing data, technically breaking down data silos and enabling AI collaboration.
At present, mainstream federated learning technology is based on the traditional synchronous parameter aggregation technique, which can only guarantee sufficient training efficiency inside a local cluster with low latency and high bandwidth.
However, in federated learning scenarios the training data is typically scattered across many geographical locations (for example, cross-border banking data nodes) separated by long physical distances, so network latency is usually severe; under these conditions conventional synchronous distributed training does not work well. Experimental results show that with a network delay of 100 ms, the training efficiency of the traditional synchronous stochastic gradient descent method already drops from 0.8 to 0.1, a speed loss of a factor of eight; in practice, the network delay between two remote nodes easily reaches several hundred milliseconds, at which point the synchronous stochastic gradient descent method cannot work properly at all.
Disclosure of Invention
The embodiments of the invention provide a federated learning training method and device, computer equipment and a storage medium robust to high-latency network environments, aiming to solve the problem in the prior art that the synchronous stochastic gradient descent method used in federated learning greatly reduces training efficiency when network delay is severe.
In a first aspect, an embodiment of the present invention provides a federated learning training method robust to high-latency network environments, which includes:
acquiring the current system time, and judging whether the encrypted data uploaded by each data uploading terminal has been received;
if the encrypted data uploaded by several data uploading terminals has not been received, acquiring the corresponding target data uploading terminals to form a target data uploading terminal set;
acquiring the current network delay value of each target data uploading terminal in the target data uploading terminal set, to obtain the maximum network delay value among the current network delay values;
calculating a delay step according to the maximum network delay value and a locally stored unit timing interval step length that is called;
summing the current system time and the delay step to obtain the target system time, and, if the current time is the target system time, judging whether the target encrypted data uploaded by the target data uploading terminals has been received; and
if the target encrypted data uploaded by a target data uploading terminal has not been received, suspending local federated learning training until all the target encrypted data uploaded by the target data uploading terminals has been received, and then resuming local federated learning training.
In a second aspect, an embodiment of the present invention provides a federated learning training device robust to high-latency network environments, which includes:
a data receiving and judging unit, configured to acquire the current system time and judge whether the encrypted data uploaded by each data uploading terminal has been received;
a target terminal acquisition unit, configured to acquire the corresponding target data uploading terminals to form a target data uploading terminal set if the encrypted data uploaded by several data uploading terminals has not been received;
a maximum network delay value obtaining unit, configured to obtain the current network delay value of each target data uploading terminal in the target data uploading terminal set, to obtain the maximum network delay value among the current network delay values;
a delay step acquiring unit, configured to calculate a delay step according to the maximum network delay value and a locally stored unit timing interval step length that is called;
a target encrypted data receiving and judging unit, configured to sum the current system time and the delay step to obtain the target system time, and, if the current time is the target system time, judge whether the target encrypted data uploaded by the target data uploading terminals has been received; and
a delay sparse updating unit, configured to suspend local federated learning training if the target encrypted data uploaded by a target data uploading terminal has not been received, and to resume local federated learning training once all the target encrypted data uploaded by the target data uploading terminals has been received.
In a third aspect, an embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the federated learning training method robust to high-latency network environments according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the high-latency network environment robust federated learning training method according to the first aspect.
The embodiments of the invention provide a federated learning training method and device, computer equipment and a storage medium robust to high-latency network environments. The method includes: acquiring the current system time, and judging whether the encrypted data uploaded by each data uploading terminal has been received; if the encrypted data uploaded by several data uploading terminals has not been received, acquiring the corresponding target data uploading terminals to form a target data uploading terminal set; acquiring the current network delay value of each target data uploading terminal in the target data uploading terminal set, to obtain the maximum network delay value among the current network delay values; calculating a delay step according to the maximum network delay value and the locally stored unit timing interval step length that is called; summing the current system time and the delay step to obtain the target system time, and, if the current time is the target system time, judging whether the target encrypted data uploaded by the target data uploading terminals has been received; and, if the target encrypted data uploaded by a target data uploading terminal has not been received, suspending local federated learning training until all the target encrypted data uploaded by the target data uploading terminals has been received, and then resuming local federated learning training. The method maintains federated learning training efficiency in the presence of network latency through delay-based sparse updating.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of the federated learning training method robust to high-latency network environments according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a federated learning training method for high latency network environment robustness according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a high latency network environment robust federated learning training apparatus provided in an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic view of an application scenario of the federated learning training method robust to high-latency network environments according to an embodiment of the present invention, and fig. 2 is a schematic flow chart of that method. The method is applied in a server and is executed by application software installed in the server.
As shown in fig. 2, the method includes steps S110 to S160.
S110: acquiring the current system time, and judging whether the encrypted data uploaded by each data uploading terminal has been received.
In this embodiment, in order to understand the technical solution of the present application more clearly, the terminals involved are described in detail below. The technical solution is described from the perspective of the cloud server.
The first is the cloud server, which is used to receive the gradient parameters of each participant (i.e. each data uploading terminal), perform parameter aggregation locally on the cloud server, and then send the updated gradient parameters back to each participant, thereby realizing federated learning. In a specific implementation, the model trained by federated learning on the cloud server can be applied to face recognition, OCR text recognition, speech recognition and the like.
The second is the data uploading terminals, which are communicatively connected to the cloud server and act as the participants of federated learning; after each terminal has trained its local gradient parameters, it uploads them to the cloud server for parameter aggregation.
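For illustration, a minimal sketch of the server-side aggregation step described above; the averaging rule (a FedAvg-style mean), the function name and the data layout are assumptions made here, not details fixed by this description.

    from typing import Dict, List

    def aggregate_gradients(gradients: List[Dict[str, list]]) -> Dict[str, list]:
        # Average each named gradient across all participating data uploading terminals.
        aggregated: Dict[str, list] = {}
        for name in gradients[0]:
            per_terminal = [g[name] for g in gradients]
            aggregated[name] = [sum(vals) / len(vals) for vals in zip(*per_terminal)]
        return aggregated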
When the data uploading terminals are communicatively connected to the cloud server, the cloud server may not receive the encrypted data uploaded by every data uploading terminal at the same time because of the network delay of each terminal. At this point the cloud server needs to know from which data uploading terminals the encrypted data has already been received, and from which it has not yet been received.
S120: if the encrypted data uploaded by several data uploading terminals has not been received, acquiring the corresponding target data uploading terminals to form a target data uploading terminal set.
In this embodiment, the cloud server determines which data uploading terminals have not yet uploaded encrypted data as follows. A connection terminal list is stored in the cloud server in advance, listing in detail the terminal MAC addresses (or terminal IP addresses) of all data uploading terminals participating in the federated learning model training. Whenever encrypted data uploaded by a data uploading terminal in the connection terminal list is received, an "uploaded in current round" flag is added to the upload flag bit of the corresponding data uploading terminal in the list, and the upload time of the current round's encrypted data is recorded. If the upload flag bit of a data uploading terminal has not been marked for the current round and the recorded upload time for that terminal is still that of the previous round, the connection terminal list indicates that the terminal's encrypted data for the current round has not yet been uploaded to the cloud server. Through this screening process, network delay at a data uploading terminal can be effectively detected in advance.
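A minimal sketch of the screening over the connection terminal list described above; the record layout, field names and round counter below are illustrative assumptions.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TerminalRecord:
        mac_address: str              # or the terminal IP address
        last_uploaded_round: int = -1 # upload flag bit: last round for which data arrived
        last_upload_time: float = 0.0 # recorded upload time of that round's encrypted data

    def mark_received(record: TerminalRecord, current_round: int, upload_time: float) -> None:
        # Add the "uploaded in current round" flag and record the upload time.
        record.last_uploaded_round = current_round
        record.last_upload_time = upload_time

    def pending_terminals(connection_list: List[TerminalRecord], current_round: int) -> List[TerminalRecord]:
        # Target data uploading terminals whose encrypted data for this round is still missing.
        return [r for r in connection_list if r.last_uploaded_round < current_round]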
If the encrypted data uploaded by all the data uploading terminals has been received, the federated learning training of the current round continues until the aggregation parameters of the current round are obtained, and those aggregation parameters are then sent to all the data uploading terminals.
S130: obtaining the current network delay value of each target data uploading terminal in the target data uploading terminal set, to obtain the maximum network delay value among the current network delay values.
In this embodiment, after the set of target data uploading terminals that have not uploaded the encrypted data is obtained in step S120, a current network delay value of each target data uploading terminal may be obtained, and a maximum value of the current network delay values is taken as a maximum network delay value. Then, the maximum network delay value is taken as a reference, and the delay step length can be calculated.
S140: calculating a delay step according to the maximum network delay value and the locally stored unit timing interval step length that is called.
In this embodiment, once the maximum network delay value is known, the locally stored unit timing interval step length is called, and the delay step can be calculated as the maximum network delay value divided by the unit timing interval step length. This delay step serves as an important parameter for delay adjustment by the cloud server in the current round of federated learning.
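A minimal sketch of this computation; rounding up to a whole step and the 50 ms unit interval in the usage note are assumptions made here for illustration.

    import math

    def delay_step(max_network_delay_ms: float, unit_timing_interval_ms: float) -> int:
        # Delay step = maximum network delay value / unit timing interval step length.
        return math.ceil(max_network_delay_ms / unit_timing_interval_ms)

    # For example, with an assumed unit timing interval of 50 ms and a maximum delay of 580 ms:
    # delay_step(580, 50) == 12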
S150: summing the current system time and the delay step to obtain the target system time, and, if the current time is the target system time, judging whether the target encrypted data uploaded by the target data uploading terminals has been received.
In this embodiment, the current system time and the delay step are summed to obtain the target system time. During the period from the current system time to the target system time, the cloud server locally decrypts the encrypted data that has already been uploaded and performs parameter aggregation on it. If parameter aggregation on all the already-uploaded encrypted data is completed within this period but the target data uploading terminals have still not uploaded their encrypted data, the cloud server suspends parameter aggregation and waits until the target system time. At the target system time, the cloud server then checks whether the target encrypted data uploaded by the target data uploading terminals has been received.
S160: if the target encrypted data uploaded by a target data uploading terminal has not been received, suspending local federated learning training until all the target encrypted data uploaded by the target data uploading terminals has been received, and then resuming local federated learning training.
In this embodiment, for example, assume the current system time is t1, the delay step S is 12 (a delay step of 12 corresponding here to a duration of 400 ms), and the target system time is t2 = t1 + S. The cloud server decrypts the encrypted data received during the period from t1 to t2 and then performs parameter aggregation on it. If time t2 is reached but some target data uploading terminals have still not uploaded their target encrypted data, the cloud server performs no further parameter aggregation and waits for the target encrypted data to be uploaded; only after all target data uploading terminals have uploaded their target encrypted data is local federated learning training resumed.
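A minimal sketch of this suspend-and-resume control flow on the cloud server; the polling loop, helper names and poll interval are assumptions made for illustration.

    import time

    def wait_until_all_received(is_received, target_terminals, target_time, poll_interval=0.05):
        # Up to the target system time the server keeps decrypting and aggregating
        # whatever has already arrived (not shown here).
        while time.time() < target_time:
            time.sleep(poll_interval)
        # At the target system time: while any target terminal is still missing,
        # local federated learning training stays suspended.
        while not all(is_received(t) for t in target_terminals):
            time.sleep(poll_interval)
        # All target encrypted data has been received: training can resume.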
The selection of the delay step is therefore related to the magnitude of the network delay value, and a suitable delay step gives the greatest benefit: the higher the network delay value, the larger the delay step. When the network delay value is 100 ms, a delay step S of 4 is best; when it is 500 ms, S = 8 is best; when it is 1000 ms, S = 12 is best; and when it is 5000 ms, S = 20 is best. For network delay values between 100 ms and 500 ms, a delay step S of 8 is used.
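The mapping above can be read as a small lookup; the sketch below encodes only the stated values, and the behaviour between and beyond them is an assumption.

    def recommended_delay_step(network_delay_ms: float) -> int:
        # Delay step S suggested in the description for representative delay values.
        if network_delay_ms <= 100:
            return 4
        if network_delay_ms <= 500:
            return 8   # S = 8 is used for delays between 100 ms and 500 ms
        if network_delay_ms <= 1000:
            return 12
        return 20      # stated as best for 5000 ms; larger delays are an assumption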
In an embodiment, step S160 is followed by:
if the target encrypted data uploaded by the target data uploading terminals is received, adding the received current target encrypted data into the training set of the federated learning model to carry out local federated learning training.
In this embodiment, continuing the above example, if all the target encrypted data uploaded by the target data uploading terminals arrives before t2, the cloud server can then synchronously send the parameter aggregation result to the data uploading terminals for the next round of training. By setting the delay step parameter, the cloud server can first aggregate the parameters that are already available and then add newly received target encrypted data into the aggregation in real time; processing the available part of the aggregation first and only then waiting for new data effectively improves the efficiency of parameter aggregation.
In an embodiment, step S160 is followed by:
obtaining model parameters of the local federated learning model and the local current global aggregation gradient;
calling the stored learning factor to obtain a target data aggregation gradient of the target encrypted data;
according to
ω'(n,t) = ω(n,t) + λ(G(t) - g(t))
calculating the compensated model parameters; wherein ω'(n,t) represents the compensated model parameters, ω(n,t) represents the model parameters of the local federated learning model, λ represents the learning factor, G(t) represents the local current global aggregation gradient, and g(t) represents the target data aggregation gradient.
In this embodiment, since synchronous stochastic gradient descent is no longer used, an error-compensated gradient update term is introduced to offset the error caused by delayed updates and thereby recover, as far as possible, the effect of synchronous stochastic gradient descent. After all data uploading terminals in the current round of federated learning have uploaded their encrypted data, the compensated model parameters are obtained as: compensated model parameters = model parameters of the local federated learning model + learning factor × (local current global aggregation gradient - target data aggregation gradient).
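A minimal sketch of this error-compensated update; the per-parameter dictionary layout and scalar arithmetic are assumptions, while the update rule follows the formula above.

    def compensate(model_params: dict, learning_factor: float,
                   global_agg_grad: dict, target_agg_grad: dict) -> dict:
        # omega'(n,t) = omega(n,t) + lambda * (global aggregation gradient - target data aggregation gradient)
        return {
            name: model_params[name] + learning_factor * (global_agg_grad[name] - target_agg_grad[name])
            for name in model_params
        }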
In an embodiment, after the steps of obtaining the model parameters of the local federated learning model and the local current global aggregation gradient, calling the stored learning factor to obtain the target data aggregation gradient of the target encrypted data, and calculating the compensated model parameters (the model parameters of the local federated learning model + the learning factor × (the local current global aggregation gradient - the target data aggregation gradient)), the method further includes:
calling the locally stored public key, and sending the public key and the compensated model parameters to each data uploading terminal.
In this embodiment, after the current round is completed and the compensated model parameters have been obtained over all the parameter data, the cloud server may send its local public key and the compensated model parameters to each data uploading terminal, so that each terminal can update its local model parameters. In the next round, each data uploading terminal must encrypt its data with this public key before uploading it to the cloud server, which improves data security.
In an embodiment, after the steps of obtaining the model parameters of the local federated learning model and the local current global aggregation gradient, calling the stored learning factor to obtain the target data aggregation gradient of the target encrypted data, and calculating the compensated model parameters (the model parameters of the local federated learning model + the learning factor × (the local current global aggregation gradient - the target data aggregation gradient)), the method further includes:
uploading the compensated model parameters to a blockchain network.
In this embodiment, the cloud server may serve as a blockchain node device and upload the compensated model parameters to a blockchain network, making full use of the tamper-proof property of blockchain data to achieve immutable storage of the compensated model parameters.
The corresponding digest information is obtained from the compensated model parameters; specifically, the digest information is obtained by hashing the compensated model parameters, for example with the SHA-256 algorithm. Uploading the digest information to the blockchain ensures its security and its fairness and transparency for users. The server can download the digest information from the blockchain to verify whether the compensated model parameters have been tampered with. The blockchain referred to in this example is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks associated with one another by cryptographic methods, where each data block contains information on a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. A blockchain may comprise a blockchain underlying platform, a platform product service layer, an application service layer and so on.
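A minimal sketch of computing the digest information with SHA-256 as described above; the JSON serialization of the parameters is an assumption, and the actual upload to the blockchain network is outside the scope of this sketch.

    import hashlib
    import json

    def digest_of(compensated_params: dict) -> str:
        # SHA-256 digest of the compensated model parameters (JSON serialization assumed).
        serialized = json.dumps(compensated_params, sort_keys=True).encode("utf-8")
        return hashlib.sha256(serialized).hexdigest()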
In an embodiment, before step S110, the method further includes:
acquiring the sending time of the last model parameter;
judging whether the time interval between the current system time and the last model parameter sending time is equal to a preset time sequence interval or not;
and if the time interval between the current system time and the last model parameter sending time is equal to the time sequence interval, executing the steps of obtaining the current system time and judging whether encrypted data uploaded by each data uploading terminal is received or not.
In this embodiment, in order to reduce how frequently each data uploading terminal uploads encrypted data to the cloud server, a timing interval may be set. The last model parameter sending time is obtained, and it is judged whether the interval between the current system time and that sending time equals the preset timing interval. If it does, the cloud server accepts encrypted data again only after one full timing interval has elapsed since the data uploading terminals last uploaded encrypted data. In this way, timing-sparse updating is achieved, greatly relieving the pressure on network I/O.
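A minimal sketch of this timing-sparse gate; the time units and the tolerance used for the equality test are assumptions made for illustration.

    def should_collect(now: float, last_send_time: float, timing_interval: float, tol: float = 1e-3) -> bool:
        # Re-enter step S110 (acquire the system time and judge whether encrypted data
        # has been received) only once a full timing interval has elapsed since the
        # last model parameters were sent out.
        return (now - last_send_time) >= timing_interval - tol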
Experimental verification shows that with a network delay of 100 ms, the training efficiency of the traditional synchronous stochastic gradient descent method already drops from 0.8 to 0.1, a speed loss of a factor of eight, and beyond 300 ms training comes almost to a standstill. With the present application, training efficiency and training effect equivalent to those on a 10 ms delay network can still be maintained under network delays of 500 ms to 1000 ms, even in the extreme case.
The method thus maintains federated learning training efficiency in the presence of network latency through delay-based sparse updating.
An embodiment of the invention also provides a federated learning training device robust to high-latency network environments, which is used to execute any embodiment of the aforementioned federated learning training method robust to high-latency network environments. Specifically, referring to fig. 3, fig. 3 is a schematic block diagram of the federated learning training device robust to high-latency network environments according to an embodiment of the present invention. The federated learning training device 100 robust to high-latency network environments may be deployed in a server.
As shown in fig. 3, the federated learning training device 100 robust to high-latency network environments includes: a data receiving and judging unit 110, a target terminal obtaining unit 120, a maximum network delay value obtaining unit 130, a delay step obtaining unit 140, a target encrypted data receiving and judging unit 150, and a delay sparse updating unit 160.
The data receiving and determining unit 110 is configured to obtain a current system time, and determine whether encrypted data uploaded by each data uploading terminal is received.
In this embodiment, when the data uploading terminals are communicatively connected to the cloud server, the cloud server may not receive the encrypted data uploaded by every data uploading terminal at the same time because of the network delay of each terminal. At this point the cloud server needs to know from which data uploading terminals the encrypted data has already been received, and from which it has not yet been received.
The target terminal obtaining unit 120 is configured to, if encrypted data uploaded by a plurality of data uploading terminals is not received, obtain corresponding target data uploading terminals to form a target data uploading terminal set.
In this embodiment, the cloud server determines which data uploading terminals have not yet uploaded encrypted data as follows. A connection terminal list is stored in the cloud server in advance, listing in detail the terminal MAC addresses (or terminal IP addresses) of all data uploading terminals participating in the federated learning model training. Whenever encrypted data uploaded by a data uploading terminal in the connection terminal list is received, an "uploaded in current round" flag is added to the upload flag bit of the corresponding data uploading terminal in the list, and the upload time of the current round's encrypted data is recorded. If the upload flag bit of a data uploading terminal has not been marked for the current round and the recorded upload time for that terminal is still that of the previous round, the connection terminal list indicates that the terminal's encrypted data for the current round has not yet been uploaded to the cloud server. Through this screening process, network delay at a data uploading terminal can be effectively detected in advance.
A maximum network delay value obtaining unit 130, configured to obtain a current network delay value of each target data uploading terminal in the target data uploading terminal set, so as to obtain a maximum network delay value in each current network delay value.
In this embodiment, after the set of target data uploading terminals that have not uploaded the encrypted data is obtained in step S120, a current network delay value of each target data uploading terminal may be obtained, and a maximum value of the current network delay values is taken as a maximum network delay value. Then, the maximum network delay value is taken as a reference, and the delay step length can be calculated.
And a delay step obtaining unit 140, configured to calculate a delay step according to the maximum network delay value and the called unit time interval step of the local storage.
In this embodiment, once the maximum network delay value is known, the locally stored unit timing interval step length is called, and the delay step can be calculated as the maximum network delay value divided by the unit timing interval step length. This delay step serves as an important parameter for delay adjustment by the cloud server in the current round of federated learning.
And the target encrypted data receiving and judging unit 150 is configured to sum the current system time with the delay step to obtain a target system time, and if the current time is the target system time, judge whether the target encrypted data uploaded by the target data uploading terminal is received.
In this embodiment, the current system time and the delay step are summed to obtain the target system time. During the period from the current system time to the target system time, the cloud server locally decrypts the encrypted data that has already been uploaded and performs parameter aggregation on it. If parameter aggregation on all the already-uploaded encrypted data is completed within this period but the target data uploading terminals have still not uploaded their encrypted data, the cloud server suspends parameter aggregation and waits until the target system time. At the target system time, the cloud server then checks whether the target encrypted data uploaded by the target data uploading terminals has been received.
The delay sparse updating unit 160 is configured to suspend local federated learning training if the target encrypted data uploaded by a target data uploading terminal has not been received, and to resume local federated learning training once all the target encrypted data uploaded by the target data uploading terminals has been received.
In this embodiment, for example, assume the current system time is t1, the delay step S is 12 (a delay step of 12 corresponding here to a duration of 400 ms), and the target system time is t2 = t1 + S. The cloud server decrypts the encrypted data received during the period from t1 to t2 and then performs parameter aggregation on it. If time t2 is reached but some target data uploading terminals have still not uploaded their target encrypted data, the cloud server performs no further parameter aggregation and waits for the target encrypted data to be uploaded; only after all target data uploading terminals have uploaded their target encrypted data is local federated learning training resumed.
The selection of the delay step is therefore related to the magnitude of the network delay value, and a suitable delay step gives the greatest benefit: the higher the network delay value, the larger the delay step. When the network delay value is 100 ms, a delay step S of 4 is best; when it is 500 ms, S = 8 is best; when it is 1000 ms, S = 12 is best; and when it is 5000 ms, S = 20 is best. For network delay values between 100 ms and 500 ms, a delay step S of 8 is used.
In one embodiment, the high latency network environment robust federated learning training apparatus 100 further comprises:
a continued-training control unit, used for adding the received current target encrypted data into the training set of the federated learning model to carry out local federated learning training if the target encrypted data uploaded by the target data uploading terminals is received.
In this embodiment, continuing the above example, if all the target encrypted data uploaded by the target data uploading terminals arrives before t2, the cloud server can then synchronously send the parameter aggregation result to the data uploading terminals for the next round of training. By setting the delay step parameter, the cloud server can first aggregate the parameters that are already available and then add newly received target encrypted data into the aggregation in real time; processing the available part of the aggregation first and only then waiting for new data effectively improves the efficiency of parameter aggregation.
In one embodiment, the high latency network environment robust federated learning training apparatus 100 further comprises:
the current global aggregation gradient obtaining unit is used for obtaining model parameters of a local federated learning model and a local current global aggregation gradient;
the target data aggregation gradient acquisition unit is used for calling the stored learning factor and acquiring the target data aggregation gradient of the target encrypted data;
a compensated model parameter calculation unit, configured to calculate the compensated model parameters according to
ω'(n,t) = ω(n,t) + λ(G(t) - g(t))
wherein ω'(n,t) represents the compensated model parameters, ω(n,t) represents the model parameters of the local federated learning model, λ represents the learning factor, G(t) represents the local current global aggregation gradient, and g(t) represents the target data aggregation gradient.
In this embodiment, since synchronous stochastic gradient descent is no longer used, an error-compensated gradient update term is introduced to offset the error caused by delayed updates and thereby recover, as far as possible, the effect of synchronous stochastic gradient descent. After all data uploading terminals in the current round of federated learning have uploaded their encrypted data, the compensated model parameters are obtained as: compensated model parameters = model parameters of the local federated learning model + learning factor × (local current global aggregation gradient - target data aggregation gradient).
In one embodiment, the high latency network environment robust federated learning training apparatus 100 further comprises:
and the model parameter sending unit is used for calling the locally stored public key and sending the public key and the compensated model parameters to each data uploading terminal.
In this embodiment, after the current round is completed and the compensated model parameters have been obtained over all the parameter data, the cloud server may send its local public key and the compensated model parameters to each data uploading terminal, so that each terminal can update its local model parameters. In the next round, each data uploading terminal must encrypt its data with this public key before uploading it to the cloud server, which improves data security.
In one embodiment, the high latency network environment robust federated learning training apparatus 100 further comprises:
a model parameter uplink unit, used for uploading the compensated model parameters to the blockchain network.
In this embodiment, the cloud server may serve as a blockchain node device and upload the compensated model parameters to a blockchain network, making full use of the tamper-proof property of blockchain data to achieve immutable storage of the compensated model parameters.
The corresponding digest information is obtained from the compensated model parameters; specifically, the digest information is obtained by hashing the compensated model parameters, for example with the SHA-256 algorithm. Uploading the digest information to the blockchain ensures its security and its fairness and transparency for users. The server can download the digest information from the blockchain to verify whether the compensated model parameters have been tampered with. The blockchain referred to in this example is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks associated with one another by cryptographic methods, where each data block contains information on a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. A blockchain may comprise a blockchain underlying platform, a platform product service layer, an application service layer and so on.
In one embodiment, the high latency network environment robust federated learning training apparatus 100 further comprises:
a last model parameter sending time obtaining unit, configured to obtain a last model parameter sending time;
the time sequence sparse updating unit is used for judging whether the time interval between the current system time and the last model parameter sending time is equal to a preset time sequence interval or not; and if the time interval between the current system time and the last model parameter sending time is equal to the time sequence interval, executing the steps of obtaining the current system time and judging whether encrypted data uploaded by each data uploading terminal is received or not.
In this embodiment, in order to reduce how frequently each data uploading terminal uploads encrypted data to the cloud server, a timing interval may be set. The last model parameter sending time is obtained, and it is judged whether the interval between the current system time and that sending time equals the preset timing interval. If it does, the cloud server accepts encrypted data again only after one full timing interval has elapsed since the data uploading terminals last uploaded encrypted data. In this way, timing-sparse updating is achieved, greatly relieving the pressure on network I/O.
The device thus maintains federated learning training efficiency in the presence of network latency through delay-based sparse updating.
The above-described high latency network environment robust federated learning training apparatus may be implemented in the form of a computer program that may be run on a computer device as shown in fig. 4.
Referring to fig. 4, fig. 4 is a schematic block diagram of a computer device according to an embodiment of the present invention. The computer device 500 is a server, and the server may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 4, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, may cause the processor 502 to perform the federated learning training method robust to high-latency network environments.
The processor 502 is used to provide computing and control capabilities that support the operation of the overall computer device 500.
The internal memory 504 provides an environment for the execution of the computer program 5032 in the non-volatile storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 can be enabled to perform a robust federal learning training method in a high latency network environment.
The network interface 505 is used for network communication, such as providing transmission of data information. Those skilled in the art will appreciate that the configuration shown in fig. 4 is a block diagram of only a portion of the configuration associated with aspects of the present invention and is not intended to limit the computing device 500 to which aspects of the present invention may be applied, and that a particular computing device 500 may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
The processor 502 is configured to run a computer program 5032 stored in the memory to implement the robust federal learning training method for a high latency network environment disclosed in the embodiments of the present invention.
Those skilled in the art will appreciate that the embodiment of a computer device illustrated in fig. 4 does not constitute a limitation on the specific construction of the computer device, and that in other embodiments a computer device may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. For example, in some embodiments, the computer device may only include a memory and a processor, and in such embodiments, the structures and functions of the memory and the processor are consistent with those of the embodiment shown in fig. 4, and are not described herein again.
It should be understood that, in the embodiment of the present invention, the Processor 502 may be a Central Processing Unit (CPU), and the Processor 502 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. Wherein a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In another embodiment of the invention, a computer-readable storage medium is provided. The computer readable storage medium may be a non-volatile computer readable storage medium. The computer readable storage medium stores a computer program, wherein the computer program, when executed by a processor, implements the robust federated learning training method for a high latency network environment disclosed by embodiments of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, devices and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and that the components and steps of the examples have been described in a functional general in the foregoing description for the purpose of illustrating clearly the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only a logical division, and there may be other divisions when the actual implementation is performed, or units having the same function may be grouped into one unit, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A federated learning training method robust to high-latency network environments, characterized by comprising the following steps:
acquiring the current system time, and judging whether the encrypted data uploaded by each data uploading terminal has been received;
if the encrypted data uploaded by several data uploading terminals has not been received, acquiring the corresponding target data uploading terminals to form a target data uploading terminal set;
acquiring the current network delay value of each target data uploading terminal in the target data uploading terminal set, to obtain the maximum network delay value among the current network delay values;
calculating a delay step according to the maximum network delay value and a locally stored unit timing interval step length that is called;
summing the current system time and the delay step to obtain the target system time, and, if the current time is the target system time, judging whether the target encrypted data uploaded by the target data uploading terminals has been received; and
if the target encrypted data uploaded by a target data uploading terminal has not been received, suspending local federated learning training until all the target encrypted data uploaded by the target data uploading terminals has been received, and then resuming local federated learning training.
2. The federated learning training method robust to high-latency network environments as claimed in claim 1, wherein after the summing of the current system time and the delay step to obtain the target system time and the judging, if the current time is the target system time, whether the target encrypted data uploaded by the target data uploading terminals has been received, the method further comprises:
if the target encrypted data uploaded by the target data uploading terminals is received, adding the received current target encrypted data into the training set of the federated learning model to carry out local federated learning training.
3. The federated learning training method robust to high-latency network environments according to claim 1, wherein after the step of suspending local federated learning training if the target encrypted data uploaded by a target data uploading terminal has not been received, and resuming local federated learning training once all the target encrypted data uploaded by the target data uploading terminals has been received, the method further comprises:
obtaining model parameters of a local federal learning model and a local current global aggregation gradient;
calling the stored learning factor to obtain a target data aggregation gradient of the target encrypted data;
calculating compensated model parameters according to a compensation formula, wherein ω'(n,t) represents the compensated model parameter, ω(n,t) represents the model parameter of the local federated learning model, λ represents the learning factor, one gradient term of the formula represents the local current global aggregation gradient, and the other gradient term represents the target data aggregation gradient.
4. The federated learning training method robust to a high-latency network environment according to claim 3, further comprising:
retrieving a locally stored public key, and sending the public key and the compensated model parameters to each data uploading terminal.
5. The federated learning training method robust to a high-latency network environment according to claim 4, further comprising:
uploading the compensated model parameters to a blockchain network.
6. The federated learning training method robust to a high-latency network environment according to claim 4, wherein before the acquiring the current system time and determining whether encrypted data uploaded by each data uploading terminal has been received, the method further comprises:
acquiring the sending time of the last model parameters;
determining whether the interval between the current system time and the last model parameter sending time equals a preset timing interval; and
if the interval between the current system time and the last model parameter sending time equals the preset timing interval, executing the step of acquiring the current system time and determining whether encrypted data uploaded by each data uploading terminal has been received.
7. A federated learning training device robust to a high-latency network environment, characterized by comprising:
a data receiving and judging unit, configured to acquire the current system time and determine whether encrypted data uploaded by each data uploading terminal has been received;
a target terminal acquisition unit, configured to, if encrypted data uploaded by one or more data uploading terminals has not been received, take the corresponding data uploading terminals as target data uploading terminals to form a target data uploading terminal set;
a maximum network delay value obtaining unit, configured to acquire a current network delay value of each target data uploading terminal in the target data uploading terminal set, and obtain the maximum network delay value among the current network delay values;
a delay step acquiring unit, configured to calculate a delay step according to the maximum network delay value and a retrieved, locally stored unit timing interval step;
a target encrypted data receiving and judging unit, configured to sum the current system time and the delay step to obtain a target system time, and, when the current time reaches the target system time, determine whether the target encrypted data uploaded by the target data uploading terminals has been received; and
a time-delay sparse updating unit, configured to suspend local federated learning training if the target encrypted data uploaded by the target data uploading terminals has not been received, and resume local federated learning training once the target encrypted data uploaded by all target data uploading terminals has been received.
8. The federated learning training device robust to a high-latency network environment according to claim 7, further comprising:
a continuous training control unit, configured to, if the target encrypted data uploaded by a target data uploading terminal has been received, add the received target encrypted data to a training set of the federated learning model for local federated learning training.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the federated learning training method robust to a high-latency network environment according to any one of claims 1 to 6.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the federated learning training method robust to a high-latency network environment according to any one of claims 1 to 6.
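
The timing logic in claims 1, 6, and 7 can be pictured with a short sketch. The Python below is a minimal, hypothetical rendering of the aggregation-side round: helper names such as received_update, ping_delay, pause_training, and resume_training are assumptions made for illustration, and the exact way the delay step is derived from the maximum network delay value and the unit timing interval step is not spelled out in the claims, so rounding up to whole unit steps is only one plausible reading.

import math
import time

UNIT_STEP = 1.0          # locally stored unit timing interval step (seconds); assumed value
TIMING_INTERVAL = 30.0   # preset timing interval between rounds (claim 6); assumed value


def should_start_round(last_send_time):
    # Claim 6: begin a new round only once the interval since the last
    # model-parameter broadcast has reached the preset timing interval.
    return time.time() - last_send_time >= TIMING_INTERVAL


def aggregation_round(terminals, received_update, ping_delay,
                      pause_training, resume_training):
    now = time.time()

    # Claim 1, step 1: which terminals have not delivered encrypted data yet?
    targets = [t for t in terminals if not received_update(t)]
    if not targets:
        return  # every update arrived on time; aggregate as usual

    # Step 2: probe the current network delay of each lagging terminal
    # and keep the largest value.
    max_delay = max(ping_delay(t) for t in targets)

    # Step 3: derive a delay step from the maximum delay and the unit
    # timing interval step (rounding up to whole steps is an assumption).
    delay_step = math.ceil(max_delay / UNIT_STEP) * UNIT_STEP

    # Step 4: wait until the target system time, then re-check receipt.
    target_time = now + delay_step
    time.sleep(max(0.0, target_time - time.time()))

    still_missing = [t for t in targets if not received_update(t)]
    if still_missing:
        # Step 5: suspend local training until every lagging terminal has
        # delivered its encrypted data, then resume (claim 1, last step).
        pause_training()
        while any(not received_update(t) for t in still_missing):
            time.sleep(UNIT_STEP)
        resume_training()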
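
Claim 3 defines its symbols, but the compensation formula itself is not reproduced in the text above. One plausible form consistent with those definitions, stated here as an assumption rather than as the patent's actual formula, corrects the local model parameters by the learning-factor-weighted gap between the two aggregation gradients:

ω'(n,t) = ω(n,t) − λ · (g_global(t) − g_target(n,t))

where ω'(n,t) is the compensated model parameter, ω(n,t) is the local model parameter, λ is the learning factor, g_global(t) stands for the local current global aggregation gradient, and g_target(n,t) stands for the target data aggregation gradient of the late-arriving encrypted data.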
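
Claim 4 has the aggregation side distribute its locally stored public key together with the compensated model parameters, so that each data uploading terminal can encrypt its next update before uploading. The sketch below is a minimal illustration of that exchange, assuming RSA-OAEP from the Python cryptography package and a deliberately tiny payload; the claims do not name a specific asymmetric scheme, and real model updates would normally require hybrid encryption because RSA-OAEP only accepts short messages.

import json

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Aggregation side: keep a key pair; the public key is what claim 4 sends
# to each data uploading terminal along with the compensated model parameters.
# RSA-OAEP is an assumption made for this illustration only.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Terminal side: encrypt a (small) serialized update with the received key.
terminal_key = serialization.load_pem_public_key(public_pem)
update = json.dumps({"grad": [0.01, -0.02, 0.005]}).encode()  # toy payload
ciphertext = terminal_key.encrypt(update, oaep)

# Aggregation side: decrypt the uploaded encrypted data before adding it
# to the training set (claim 2).
plaintext = json.loads(private_key.decrypt(ciphertext, oaep))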
CN202010858570.XA 2020-08-24 2020-08-24 Federal learning training method and device for high-delay network environment robustness Active CN112001502B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010858570.XA CN112001502B (en) 2020-08-24 2020-08-24 Federal learning training method and device for high-delay network environment robustness
PCT/CN2020/118938 WO2021155671A1 (en) 2020-08-24 2020-09-29 High-latency network environment robust federated learning training method and apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010858570.XA CN112001502B (en) 2020-08-24 2020-08-24 Federal learning training method and device for high-delay network environment robustness

Publications (2)

Publication Number Publication Date
CN112001502A true CN112001502A (en) 2020-11-27
CN112001502B CN112001502B (en) 2022-06-21

Family

ID=73471709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010858570.XA Active CN112001502B (en) 2020-08-24 2020-08-24 Federal learning training method and device for high-delay network environment robustness

Country Status (2)

Country Link
CN (1) CN112001502B (en)
WO (1) WO2021155671A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516253A (en) * 2021-07-02 2021-10-19 深圳市洞见智慧科技有限公司 Data encryption optimization method and device in federated learning
CN114584581A (en) * 2022-01-29 2022-06-03 华东师范大学 Federal learning system and federal learning training method for smart city Internet of things and letter fusion
CN114818011A (en) * 2022-06-27 2022-07-29 国网智能电网研究院有限公司 Federal learning method and system suitable for carbon credit evaluation and electronic equipment

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114070775B (en) * 2021-10-15 2023-07-07 上海智能网联汽车技术中心有限公司 Block chain network slicing security intelligent optimization method for 5G intelligent networking system
CN114491623A (en) * 2021-12-30 2022-05-13 北京邮电大学 Asynchronous federal learning method and system based on block chain
CN114650227B (en) * 2022-01-27 2023-08-18 北京邮电大学 Network topology construction method and system in hierarchical federation learning scene
CN115277689B (en) * 2022-04-29 2023-09-22 国网天津市电力公司 Cloud edge network communication optimization method and system based on distributed federal learning
CN115174412B (en) * 2022-08-22 2024-04-12 深圳市人工智能与机器人研究院 Dynamic bandwidth allocation method for heterogeneous federal learning system and related equipment
CN116506307B (en) * 2023-06-21 2023-09-12 大有期货有限公司 Network delay condition analysis system of full link

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160232445A1 (en) * 2015-02-06 2016-08-11 Google Inc. Distributed training of reinforcement learning systems
US20180262402A1 (en) * 2016-04-15 2018-09-13 Nec Laboratories America, Inc. Communication efficient sparse-reduce in distributed machine learning
CN109165725A (en) * 2018-08-10 2019-01-08 深圳前海微众银行股份有限公司 Neural network federation modeling method, equipment and storage medium based on transfer learning
CN110263921A (en) * 2019-06-28 2019-09-20 深圳前海微众银行股份有限公司 A kind of training method and device of federation's learning model
US10438695B1 (en) * 2015-09-30 2019-10-08 EMC IP Holding Company LLC Semi-automated clustered case resolution system
US20200090045A1 (en) * 2017-06-05 2020-03-19 D5Ai Llc Asynchronous agents with learning coaches and structurally modifying deep neural networks without performance degradation
CN111382706A (en) * 2020-03-10 2020-07-07 深圳前海微众银行股份有限公司 Prediction method and device based on federal learning, storage medium and remote sensing equipment
CN111401621A (en) * 2020-03-10 2020-07-10 深圳前海微众银行股份有限公司 Prediction method, device, equipment and storage medium based on federal learning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263908B (en) * 2019-06-20 2024-04-02 深圳前海微众银行股份有限公司 Federal learning model training method, apparatus, system and storage medium
CN110443375B (en) * 2019-08-16 2021-06-11 深圳前海微众银行股份有限公司 Method and device for federated learning
CN111091200B (en) * 2019-12-20 2021-03-19 深圳前海微众银行股份有限公司 Updating method and system of training model, intelligent device, server and storage medium
CN111401552B (en) * 2020-03-11 2023-04-07 浙江大学 Federal learning method and system based on batch size adjustment and gradient compression rate adjustment

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160232445A1 (en) * 2015-02-06 2016-08-11 Google Inc. Distributed training of reinforcement learning systems
US10438695B1 (en) * 2015-09-30 2019-10-08 EMC IP Holding Company LLC Semi-automated clustered case resolution system
US20180262402A1 (en) * 2016-04-15 2018-09-13 Nec Laboratories America, Inc. Communication efficient sparse-reduce in distributed machine learning
US20200090045A1 (en) * 2017-06-05 2020-03-19 D5Ai Llc Asynchronous agents with learning coaches and structurally modifying deep neural networks without performance degradation
CN109165725A (en) * 2018-08-10 2019-01-08 深圳前海微众银行股份有限公司 Neural network federation modeling method, equipment and storage medium based on transfer learning
CN110263921A (en) * 2019-06-28 2019-09-20 深圳前海微众银行股份有限公司 A kind of training method and device of federation's learning model
CN111382706A (en) * 2020-03-10 2020-07-07 深圳前海微众银行股份有限公司 Prediction method and device based on federal learning, storage medium and remote sensing equipment
CN111401621A (en) * 2020-03-10 2020-07-10 深圳前海微众银行股份有限公司 Prediction method, device, equipment and storage medium based on federal learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DONGYU WANG et al.: "Reinforcement Learning-Based Joint Task Offloading and Migration Schemes Optimization in Mobility-Aware MEC Network", China Communications *
QIN Chao et al.: "Distributed deep networks based on the Bagging-Down SGD algorithm", Systems Engineering and Electronics *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516253A (en) * 2021-07-02 2021-10-19 深圳市洞见智慧科技有限公司 Data encryption optimization method and device in federated learning
CN113516253B (en) * 2021-07-02 2022-04-05 深圳市洞见智慧科技有限公司 Data encryption optimization method and device in federated learning
CN114584581A (en) * 2022-01-29 2022-06-03 华东师范大学 Federal learning system and federal learning training method for smart city Internet of things and letter fusion
CN114584581B (en) * 2022-01-29 2024-01-09 华东师范大学 Federal learning system and federal learning training method for intelligent city internet of things (IOT) letter fusion
CN114818011A (en) * 2022-06-27 2022-07-29 国网智能电网研究院有限公司 Federal learning method and system suitable for carbon credit evaluation and electronic equipment

Also Published As

Publication number Publication date
CN112001502B (en) 2022-06-21
WO2021155671A1 (en) 2021-08-12

Similar Documents

Publication Publication Date Title
CN112001502B (en) Federal learning training method and device for high-delay network environment robustness
EP3780553B1 (en) Blockchain-based transaction consensus processing method and apparatus, and electrical device
US10630463B2 (en) Meta block chain
WO2018076760A1 (en) Block chain-based transaction processing method, system, electronic device, and storage medium
CN107171810B (en) Verification method and device of block chain
WO2022162498A1 (en) Method and system for federated learning
CN108989045B (en) Apparatus and system for preventing global tampering
CN110597489B (en) Random number generation method, equipment and medium
CN111556120A (en) Data processing method and device based on block chain, storage medium and equipment
CN113609508A (en) Block chain-based federal learning method, device, equipment and storage medium
CN116739660A (en) Lottery drawing method and system based on block chain
CN109861828A (en) A kind of node access and node authentication method based on edge calculations
CN113225297B (en) Data hybrid encryption method, device and equipment
CN111523150A (en) Block chain-based document editing method, device and system
CN110113334A (en) Contract processing method, equipment and storage medium based on block chain
CN111033491A (en) Storing shared blockchain data based on error correction coding
US9350545B1 (en) Recovery mechanism for fault-tolerant split-server passcode verification of one-time authentication tokens
US11509469B2 (en) Methods and systems for password recovery based on user location
US10785025B1 (en) Synchronization of key management services with cloud services
CN110585727B (en) Resource acquisition method and device
CN111865595A (en) Block chain consensus method and device
CN110784318B (en) Group key updating method, device, electronic equipment, storage medium and communication system
CN116992480A (en) Method for providing publicly verifiable outsourcing computing service
KR20210100865A (en) Method and system for building fast synchronizable decentralized distributed database
CN112417052B (en) Data synchronization method, device, equipment and storage medium in block chain network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant