CN113837397B - Model training method and device based on federal learning and related equipment - Google Patents

Model training method and device based on federal learning and related equipment

Info

Publication number
CN113837397B
CN113837397B CN202111136508.0A
Authority
CN
China
Prior art keywords
model
weight
target
trained
local
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111136508.0A
Other languages
Chinese (zh)
Other versions
CN113837397A (en
Inventor
黄晨宇
王健宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202111136508.0A priority Critical patent/CN113837397B/en
Publication of CN113837397A publication Critical patent/CN113837397A/en
Application granted granted Critical
Publication of CN113837397B publication Critical patent/CN113837397B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The application relates to data processing technology, and provides a model training method, apparatus, computer device, and storage medium based on federal learning, comprising the following steps: sending an initial weight to a target participating node and calculating a first duration; receiving a first local model weight, processing it to obtain a second weight, updating the model to be trained, and judging whether the model converges; when it does not converge, calculating a second duration for transmitting the first local model weight; determining target network information according to the first duration and the second duration; updating the quantization precision and model update frequency; receiving a second local model weight, processing it to obtain a third weight, updating the model to be trained, and judging whether the model converges; and when it converges, determining that model training is complete. The method can improve the training efficiency of the model and promote the rapid development of smart cities.

Description

Model training method and device based on federal learning and related equipment
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a model training method, apparatus, computer device, and medium based on federal learning.
Background
Federal learning does not require the data owners to share their original data, so the original data of all data owners can be fully utilized for model training while security is guaranteed, effectively addressing the data-island problem of the artificial intelligence era. In existing federal learning, the central node must collect the computation results of every participating node in each iteration in order to compute the post-iteration weight. For example, in horizontal federal learning, the central node computes the final gradient of the model to be trained from the local model gradients of the individual participating nodes. However, this requires the network conditions of the participating nodes to be similar: if one node has poor network bandwidth or its network fluctuates, the central node and all other participating nodes must wait for that node's updated gradient, resulting in inefficient training of the model to be trained.
In the process of implementing the present application, the inventors found the following technical problems in the prior art: when accounting for the network conditions of each participant, existing approaches either reduce the amount of data transmitted or let slower nodes transmit less. The first approach cannot fundamentally solve the problem that other nodes must wait for slower nodes; the second, while addressing consistently poor networks at certain nodes, cannot cope with network fluctuation, that is, a node may have a poor network in some time periods and a good one in others.
Therefore, it is necessary to provide a model training method based on federal learning, which can improve the efficiency of model training.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a model training method based on federal learning, a model training apparatus based on federal learning, a computer device, and a medium, which can improve the efficiency of model training.
An embodiment of the present invention provides a model training method based on federal learning, configured to train a model to be trained, applied to a center node, where the model training method based on federal learning includes:
sending an initial weight to a target participating node, and calculating a first duration for the target participating node to receive the initial weight;
receiving a first local model weight sent by the target participating node, and calling a preset model to process the first local model weight to obtain a second weight corresponding to the model to be trained;
updating the model to be trained according to the second weight, and judging whether the updated model to be trained converges or not;
when the updated model to be trained is not converged, calculating a second time length for the target participating node to send the weight of the first local model;
determining target network information of the target participating node according to the first time length and the second time length;
updating the quantization precision and the model updating frequency of the target participation node according to the target network information, and transmitting the quantization precision and the model updating frequency to the target participation node;
receiving a second local model weight obtained by the target participating node according to the quantization precision and the model updating frequency, and calling the preset model to process the second local model weight to obtain a third weight corresponding to the model to be trained;
updating the model to be trained according to the third weight, and judging whether the updated model to be trained converges or not;
and when the updated model to be trained converges, determining that the training of the model to be trained is completed.
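The central-node steps above can be sketched in Python as follows (a minimal illustration; the function names `send`, `receive`, `aggregate`, `converged`, and `adjust_node_params` are assumptions, not part of the claims):

```python
import time

def central_node_round(nodes, model_weights, aggregate, converged,
                       send, receive, adjust_node_params):
    """One adaptive round: time the downlink per node, aggregate the uplink
    weights, and retune each node's quantization precision / update frequency
    when the model has not yet converged."""
    # 1. Send current weights and measure the first duration per node.
    t_down = {}
    for n in nodes:
        start = time.monotonic()
        send(n, model_weights)          # assumed to block until the node ACKs
        t_down[n] = time.monotonic() - start
    # 2. Receive local weights and measure the second duration per node.
    t_up, local = {}, {}
    for n in nodes:
        start = time.monotonic()
        local[n] = receive(n)
        t_up[n] = time.monotonic() - start
    # 3. Aggregate into the new global weights.
    model_weights = aggregate(list(local.values()))
    # 4. If not converged, retune each node from its two measured durations.
    if not converged(model_weights):
        for n in nodes:
            adjust_node_params(n, t_down[n], t_up[n])
    return model_weights
```

The per-node timing dictionaries are what later feed the network-interval computation of the method.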
Further, in the foregoing model training method based on federal learning according to the embodiment of the present application, the calculating the first time period for the target participating node to receive the initial weight includes:
acquiring a starting time point for starting to send the initial weight;
acquiring an ending time point which is output by the target participating node and receives the initial weight;
and calculating the difference between the ending time point and the starting time point to obtain a first time length.
Further, in the foregoing model training method based on federal learning provided in the embodiment of the present application, the invoking the preset model to process the first local model weight, and obtaining the second weight corresponding to the model to be trained includes:
acquiring the number of the target participation nodes and the first local model weight corresponding to each target participation node;
summing a plurality of first local model weights to obtain a first local model weight sum;
calculating the ratio of the first local model weight to the number to obtain an initial local model weight;
and calling a preset encryption key to decrypt and process the initial local model weight to obtain a second weight corresponding to the model to be trained.
Further, in the foregoing model training method based on federal learning provided in the embodiment of the present application, the determining, according to the first duration and the second duration, the target network information of the target participating node includes:
acquiring the number of the target participation nodes and the first duration and the second duration corresponding to each target participation node;
respectively calculating a first time length average value and a first time length standard deviation corresponding to the first time length, and determining a first network interval according to the first time length average value and the first time length standard deviation;
respectively calculating a second time length average value and a second time length standard deviation corresponding to the second time length, and determining a second network interval according to the second time length average value and the second time length standard deviation;
and determining target network information of each target participating node according to the relation between the first time length and the first network interval and the relation between the second time length and the second network interval.
Further, in the foregoing model training method based on federal learning provided in the embodiment of the present application, the updating the quantization precision and the model updating frequency of the target participating node according to the target network information includes:
acquiring a first mapping relation between preset network information and quantization precision, and traversing the first mapping relation according to the target network information to obtain target quantization precision;
and acquiring a second mapping relation between preset network information and model updating frequency, and traversing the second mapping relation according to the target network information to obtain the target model updating frequency.
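A hypothetical sketch of the two mapping relations as lookup tables (the concrete network-information keys and the precision/frequency values are illustrative assumptions, not values from the application):

```python
# First mapping relation: network information -> quantization precision p.
PRECISION_MAP = {"good": 16, "average": 8, "poor": 4}
# Second mapping relation: network information -> model update frequency k.
UPDATE_FREQ_MAP = {"good": 1, "average": 2, "poor": 4}

def lookup_node_params(network_info):
    """Traverse both mappings for a node's target network information."""
    return PRECISION_MAP[network_info], UPDATE_FREQ_MAP[network_info]
```

A node classified as "poor" would thus quantize more coarsely and upload less often than a "good" node.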
The second aspect of the embodiment of the application also provides a model training method based on federal learning, which is used for training a model to be trained and is applied to a target participating node, and the method comprises the following steps:
acquiring preset initial model parameters corresponding to the model to be trained, and adjusting the model to be trained according to the preset initial model parameters to obtain a local training model, wherein the preset initial model parameters comprise initial quantization precision, initial model updating frequency and learning rate;
when an initial weight value sent by a central node is received, training data corresponding to the local training model is obtained, and the local training model is trained according to the initial weight value and the training data, so that an updated local training model is obtained;
quantizing the local model weight of the updated local training model according to the initial quantization precision to obtain an intermediate local model weight;
invoking a preset encryption key to encrypt the intermediate local model weight to obtain a first local model weight;
and sending the first local model weight to the central node according to the initial model updating frequency.
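The participant-side quantize-then-encrypt upload can be sketched as follows, assuming a quantization of the form ⌊p·x⌋ mod q (consistent with the quantization described later in this application); `encrypt` stands in for the homomorphic encryption with the preset key:

```python
import math

def participant_upload(local_weights, p, q, encrypt):
    """Quantize each updated local model weight to a positive integer with
    precision parameter p and modulus q, then encrypt each value; mirrors
    the quantize-then-encrypt order of the steps above."""
    quantized = [math.floor(p * w) % q for w in local_weights]
    return [encrypt(v) for v in quantized]
```

With the identity in place of `encrypt`, `participant_upload([0.5, 0.25], 16, 97, lambda v: v)` yields the intermediate local model weights directly.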
Further, in the foregoing model training method based on federal learning provided in the embodiment of the present application, the quantizing the local model weight of the updated local training model according to the initial quantization precision to obtain the intermediate local model weight includes:
acquiring an initial quantization model;
updating the initial quantization model according to the quantization precision to obtain a target quantization model;
and inputting the local model weight into the target quantization model to obtain an intermediate local model weight.
The third aspect of the embodiment of the application also provides a model training device based on federal learning, which is used for training a model to be trained and applied to a central node, and the model training device based on federal learning comprises:
the initial weight sending module is used for sending an initial weight to a target participating node and calculating a first duration for the target participating node to receive the initial weight;
the second weight acquisition module is used for receiving the first local model weight sent by the target participating node, and calling a preset model to process the first local model weight to obtain a second weight corresponding to the model to be trained;
the model convergence judging module is used for updating the model to be trained according to the second weight and judging whether the updated model to be trained converges or not;
the time calculation module is used for calculating a second time length for the target participating node to send the weight of the first local model when the updated model to be trained is not converged;
the network information determining module is used for determining target network information of the target participating node according to the first time length and the second time length;
the data updating module is used for updating the quantization precision and the model updating frequency of the target participation node according to the target network information and sending the quantization precision and the model updating frequency to the target participation node;
the third weight acquisition module is used for receiving a second local model weight obtained by the target participating node according to the quantization precision and the model updating frequency, and calling the preset model to process the second local model weight to obtain a third weight corresponding to the model to be trained;
the model convergence judging module is also used for updating the model to be trained according to the third weight and judging whether the updated model to be trained converges or not;
and the training completion determining module is used for determining that the training of the model to be trained is completed when the updated model to be trained converges as a judging result.
A fourth aspect of the embodiments of the present application further provides a computer device, the computer device including a processor configured to implement the federal learning-based model training method according to any one of the above when executing a computer program stored in a memory.
The fifth aspect of the embodiments of the present application further provides a computer readable storage medium, where a computer program is stored, where the computer program is executed by a processor to implement the model training method based on federal learning according to any one of the above.
According to the model training method based on federal learning, the model training apparatus based on federal learning, the computer device, and the computer-readable storage medium provided herein, the central node calculates the first duration for the target participating node to receive the initial weight and the second duration for the target participating node to send the first local model weight, determines the target network information of the target participating node according to the first duration and the second duration, and updates the quantization precision and model update frequency of the target participating node according to the target network information. The method can be applied to functional modules of smart cities such as smart government affairs and smart transportation, for example to promote their rapid development through federal-learning-based model training.
Drawings
Fig. 1 is a flowchart of a model training method based on federal learning according to an embodiment of the present application.
Fig. 2 is a block diagram of a model training device based on federal learning according to a second embodiment of the present application.
Fig. 3 is a schematic structural diagram of a computer device according to a third embodiment of the present application.
The following detailed description will further illustrate the application in conjunction with the above-described figures.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It should be noted that, in the case of no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
The model training method based on federal learning provided by the embodiment of the invention is executed by computer equipment, and correspondingly, the model training device based on federal learning is operated in the computer equipment.
Fig. 1 is a flowchart of a federally learning-based model training method according to a first embodiment of the present application. As shown in fig. 1, the model training method based on federal learning is applied to a central node, and the model training method based on federal learning may include the following steps, where the order of the steps in the flowchart may be changed according to different requirements, and some may be omitted:
s11, sending an initial weight to a target participation node, and calculating a first duration that the initial weight is received by the target participation node.
Federal learning is a completely new distributed artificial intelligence network architecture that includes a central node and a number of target participating nodes. Original data do not need to be shared between the central node and the target participation nodes and among a plurality of target participation nodes, so that data security and data privacy are ensured. And the central node transmits a model to be trained to the target participating nodes, each target participating node carries out training based on the local training data owned by each target participating node, the weight obtained by training is transmitted back to the central node, the central node judges whether the model to be trained is converged, and when the model to be trained is not converged, iterative training is carried out until the global model is converged.
In at least one embodiment of the present application, the model to be trained includes, but is not limited to, an identification model, a classification model, a detection model, a prediction model, etc., and in particular, an artificial neural network model, a support vector machine, a convolutional neural network model, etc. may be used in the above embodiment, which is not limited herein. The target participation nodes are nodes which have respective local training data and participate in model collaborative training, and the number of the target participation nodes is more than or equal to two.
The central node sends the model to be trained to the target participating node; the target participating node obtains preset initial model parameters corresponding to the model to be trained, and adjusts the model to be trained according to the preset initial model parameters to obtain a local training model, wherein the preset initial model parameters include the initial quantization precision, the initial model update frequency, and the learning rate. The initial quantization precision, initial model update frequency, and learning rate are values preset by system staff; the initial quantization precision identifies the quantization size of a floating point number, and the initial model update frequency identifies the iteration interval of the model corresponding to a target participating node. For example, when the initial model update frequency is 1, the model corresponding to the target participating node participates in every iteration.
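A hypothetical illustration of the initial model update frequency as an iteration interval (a node with frequency k takes part in every k-th iteration; k = 1 means every iteration):

```python
def participates(iteration, update_frequency):
    """True when the node takes part in this iteration; an update frequency
    k means the node uploads its local weights every k-th iteration."""
    return iteration % update_frequency == 0

# A node with update frequency 2 joins iterations 0, 2, 4, 6 of the first 8.
rounds_joined = [t for t in range(8) if participates(t, 2)]
```

This gating is what later lets the central node make a slow node upload less often by doubling its k.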
In at least one embodiment of the present application, the initial weight is the weight of the first iteration, preset by system staff; because the network states of the target participating nodes differ, the first duration for each target participating node to receive the initial weight also differs. The first duration may be determined by means of an ACK packet.
Optionally, the calculating the first time length for the target participating node to receive the initial weight includes:
acquiring a starting time point for starting to send the initial weight;
acquiring an ending time point which is output by the target participating node and receives the initial weight;
and calculating the difference between the ending time point and the starting time point to obtain a first time length.
The central node records the starting time point a at which it begins to send the initial weight; upon receiving the initial weight, the target participating node replies with an ACK packet to the central node; the central node records the time point b at which the ACK packet is received, and the first duration is t1_i = b - a.
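A minimal sketch of this ACK-based timing (the socket plumbing is omitted; `send_fn` is assumed to return only once the node's ACK packet has arrived):

```python
import time

def timed_send(send_fn, payload):
    """Record start point a, send the payload, wait for the ACK, record end
    point b, and return the first duration t1 = b - a."""
    a = time.monotonic()            # starting time point a
    send_fn(payload)                # assumed to block until the ACK arrives
    b = time.monotonic()            # ending time point b
    return b - a
```

A monotonic clock is used so the measured duration is immune to wall-clock adjustments.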
S12, receiving a first local model weight sent by the target participating node, and calling a preset model to process the first local model weight to obtain a second weight corresponding to the model to be trained.
In at least one embodiment of the present application, the target participating node trains the local training model according to the initial weight and the local training data to obtain the updated local training model, and obtains the local model weight of the updated local training model. The target participating node quantizes the local model weight to obtain an intermediate local model weight, and then calls a preset encryption key to encrypt the intermediate local model weight to obtain the first local model weight. The preset encryption key may be preset by system staff and sent to each participating node by the central node. The application may perform the encryption homomorphically, with [·] denoting homomorphic encryption: if m is the plaintext, [m] is the ciphertext homomorphically encrypted with the public key pk. In one embodiment, the homomorphic encryption is additively homomorphic, i.e., [m1 + m2] = [m1] + [m2] and [c·m] = c·[m], where c is a positive integer constant. Since homomorphic encryption operates over the positive integer domain, all data must be quantized to positive integers: specifically, for a floating point number x, the quantized value is Q(x) = ⌊p·x⌋ mod q, where p and q are positive integers and ⌊·⌋ is the round-down function. The quantization size, and hence the quantization precision, can be changed by adjusting the size of p. Here Q(·) is also referred to as the initial quantization model and, once the value of p is known, as the target quantization model. By invoking the target quantization model, each participating node can quantize its local model weight to obtain the intermediate local model weight.
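A sketch of the quantization model, under the assumption that the quantized value of a floating point x is ⌊p·x⌋ mod q (a reconstruction consistent with the description, not a verbatim claim); the modulus maps negative weights into the positive integer domain required by the homomorphic scheme:

```python
import math

def make_quantizer(p, q):
    """Build the 'target quantization model' for a known precision p and
    modulus q: x -> floor(p * x) mod q."""
    def quantize(x):
        return math.floor(p * x) % q
    return quantize

fine = make_quantizer(256, 2 ** 31 - 1)    # higher p: finer resolution
coarse = make_quantizer(16, 2 ** 31 - 1)   # halving p halves the resolution
```

Halving p (as the adjustment rules later do for slow nodes) directly coarsens the quantization and shrinks the integers to be encrypted and transmitted.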
Optionally, the calling the preset model to process the weight of the first local model, and obtaining the second weight corresponding to the model to be trained includes:
acquiring the number of the target participation nodes and the first local model weight corresponding to each target participation node;
summing a plurality of first local model weights to obtain a first local model weight sum;
calculating the ratio of the first local model weight to the number to obtain an initial local model weight;
and calling a preset encryption key to decrypt and process the initial local model weight to obtain a second weight corresponding to the model to be trained.
The number of the target participation nodes is multiple, each target participation node comprises a corresponding first local model weight, and a second weight corresponding to the model to be trained can be obtained by calling the preset model to process the multiple first local model weights. The preset encryption key is a key preset by system personnel.
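An illustrative sketch of the aggregation step, with `decrypt` as a placeholder for decryption with the preset key; note that in this sketch the homomorphic ciphertext sum is decrypted before the division by the node count, since an additively homomorphic scheme supports summation but not plain division of ciphertexts:

```python
def aggregate_weights(first_local_weights, decrypt, node_count):
    """Sum the first local model weights (a homomorphic addition; plain
    integers stand in for ciphertexts here), decrypt the sum, and divide by
    the number of target participating nodes to obtain the second weight."""
    total = sum(first_local_weights)
    return decrypt(total) / node_count
```

With three nodes reporting 2, 4, and 6 and an identity `decrypt`, the second weight is 4.0.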
S13, updating the model to be trained according to the second weight, judging whether the updated model to be trained converges, and executing step S14 when the judgment result is that the updated model to be trained does not converge.
In at least one embodiment of the present application, the initial model to be trained includes the initial weight, and updating the model to be trained according to the second weight includes:
acquiring a target position of the initial weight in the model to be trained;
and replacing the initial weight at the target position with the second weight to obtain the updated model to be trained.
It can be understood that when the judgment result is that the updated model to be trained converges, the training of the model to be trained is completed.
S14, calculating a second time length for the target participating node to send the first local model weight.
In at least one embodiment of the present application, when the judgment result is that the updated model to be trained has not converged, the weight of the model to be trained must be updated with the local model weights obtained by training at the target participating nodes, completing the iterative training until the model to be trained converges. Because network conditions differ, if one or more of the target participating nodes has a poor network, the central node and the other target participating nodes must wait for that node's updated local model weight, degrading the training efficiency of the model to be trained. Here, the network condition is the combined condition of data transmission speed and network delay. The present application adjusts each node's quantization precision and model update frequency according to the network conditions of the target participating nodes, improving both the accuracy and the efficiency of model training.
Optionally, the calculating the second duration for the target participating node to send the first local model weight includes:
acquiring a starting time point when the target participating node starts to send the first local model weight;
acquiring an ending time point of receiving the first local model weight;
and calculating the difference between the ending time point and the starting time point to obtain a second duration.
S15, determining target network information of the target participating node according to the first time length and the second time length.
In at least one embodiment of the present application, the target network information may be a combination of data transmission speed and network delay, and when the number of the target participating nodes is plural, the number of the target network information is plural.
Optionally, the determining the target network information of the target participating node according to the first duration and the second duration includes:
respectively calculating a first time length average value and a first time length standard deviation corresponding to the first time length, and determining a first network interval according to the first time length average value and the first time length standard deviation;
respectively calculating a second time length average value and a second time length standard deviation corresponding to the second time length, and determining a second network interval according to the second time length average value and the second time length standard deviation;
and determining target network information of each target participating node according to the relation between the first time length and the first network interval and the relation between the second time length and the second network interval.
Here the first duration of the i-th target participating node is denoted t1_i and its second duration t2_i; the first duration average value is denoted mean(t1) and the second duration average value mean(t2); the first duration standard deviation is denoted σ1 and the second duration standard deviation σ2. The first network interval may be the sum of the first duration average value and the first duration standard deviation, mean(t1) + σ1; the second network interval may be the sum of the second duration average value and the second duration standard deviation, mean(t2) + σ2.
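The two network intervals can be computed as follows (a sketch assuming population statistics over the per-node durations):

```python
import statistics

def network_interval(durations):
    """Return mean + standard deviation of the per-node durations, i.e. the
    threshold against which each node's own duration is compared."""
    return statistics.fmean(durations) + statistics.pstdev(durations)
```

Applied once to all first durations and once to all second durations, this yields the first and second network intervals.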
S16, updating the quantization precision and the model updating frequency of the target participation node according to the target network information, and sending the quantization precision and the model updating frequency to the target participation node.
In at least one embodiment of the present application, a first mapping relationship exists between network information and quantization accuracy, a second mapping relationship exists between network information and model update frequency, and a target quantization accuracy and a target model update frequency can be obtained by querying the first mapping relationship and the second mapping relationship.
Optionally, the updating the quantization precision and the model updating frequency of the target participating node according to the target network information includes:
acquiring a first mapping relation between preset network information and quantization precision, and traversing the first mapping relation according to the target network information to obtain target quantization precision;
and acquiring a second mapping relation between preset network information and model updating frequency, and traversing the second mapping relation according to the target network information to obtain the target model updating frequency.
Illustratively, when the first time length is greater than the first network interval and the second time length is greater than the second network interval, then p_i = p_i / 2 and k_i = k_i * 2; when exactly one of the two time lengths is greater than its corresponding network interval, then p_i = p_i / 2; and when the first time length is smaller than the first network interval and the second time length is smaller than the second network interval, then p_i = p_i * 2, where p_i denotes the quantization precision and k_i the model update frequency of the ith target participating node.
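The three-branch adjustment rule can be sketched as a small helper; a minimal illustration assuming p_i and k_i are integers (the function and variable names are made up for this sketch):

```python
def adjust_node(t1_i, t2_i, interval1, interval2, p_i, k_i):
    slow1 = t1_i > interval1   # first time length exceeds the first interval
    slow2 = t2_i > interval2   # second time length exceeds the second interval
    if slow1 and slow2:
        p_i, k_i = p_i // 2, k_i * 2   # halve precision, double update interval
    elif slow1 or slow2:
        p_i = p_i // 2                 # exactly one exceeded: halve precision only
    elif t1_i < interval1 and t2_i < interval2:
        p_i = p_i * 2                  # fast on both links: double precision
    return p_i, k_i

# A node slow on both links gets coarser quantization and rarer updates:
p, k = adjust_node(3.0, 2.0, 2.1, 1.5, 16, 1)  # → (8, 2)
```

A slow node thus sends fewer bits less often, so the central node and the faster nodes wait less for it.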
S17, receiving a second local model weight obtained by the target participation node according to the quantization precision and the model updating frequency, and calling the preset model to process the second local model weight to obtain a third weight corresponding to the model to be trained.
In at least one embodiment of the present application, before receiving the second local model weights obtained by the target participating nodes according to the quantization precision and the model update frequency, the central node sends the second weight to each of the target participating nodes, and each of the target participating nodes processes the second weight according to the quantization precision to obtain a second local model weight. When the target participating node obtains the second local model weight according to the quantization precision, the method further comprises: the target participating node trains the local training model according to the second weight and the local training data to obtain an updated local training model, and obtains the local model weight of the updated local training model. Before transmitting the local model weight to the central node, the target participating node quantizes and encrypts the local model weight to obtain the second local model weight. Specifically, for a local model weight x, the quantized value is ⌊2^p · x⌋ mod q, wherein p and q are positive integers and ⌊·⌋ is the round-down (floor) function; the quantization precision can be changed by adjusting the size of p, which changes the quantization size. The target participating node also processes the obtained second local model weight according to the model update frequency, namely, the target participating node participates in iterations of the model to be trained according to the model update frequency: when the model update frequency is 1, the target participating node participates in every iteration of the model to be trained; when the model update frequency is 2, the target participating node participates in an iteration of the model to be trained only after the remaining target participating nodes have participated in one iteration, and so on, which will not be described further here.
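A minimal sketch of a quantization step of this form, assuming the quantized value is ⌊2^p · x⌋ mod q, where doubling p doubles the fixed-point resolution (the concrete p and q values below are illustrative only):

```python
import math

def quantize(x, p, q):
    # Map a non-negative float to a positive integer: floor(2**p * x) mod q.
    return math.floor((2 ** p) * x) % q

coarse = quantize(0.3141, 8, 2 ** 31)   # p = 8  → step size 1/256
fine = quantize(0.3141, 16, 2 ** 31)    # p = 16 → step size 1/65536
```

Halving p (as in the rule p_i = p_i / 2) coarsens the representation and shrinks the transmitted integers, trading accuracy for bandwidth.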
And S18, updating the model to be trained according to the third weight, and judging whether the updated model to be trained converges or not.
In at least one embodiment of the present application, the model to be trained before updating includes the second weight, and updating the model to be trained according to the third weight includes:
acquiring a target position of the second weight in the model to be trained;
and replacing the second weight value at the target position with the third weight value to obtain the updated model to be trained.
And S19, when the updated model to be trained converges, determining that the training of the model to be trained is completed.
In at least one embodiment of the present application, when the determination result is that the updated model to be trained converges, determining that training of the model to be trained is completed; and when the updated model to be trained is not converged, re-acquiring the network condition of each target participation node, and determining the quantization precision and the model updating frequency of the target participation node according to the network condition, so as to obtain a fourth weight until the model to be trained is converged.
According to the model training method based on federal learning provided by the application, the central node calculates a first time length for the target participating node to receive the initial weight and a second time length for the target participating node to transmit the first local model weight, determines target network information of the target participating node according to the first time length and the second time length, and updates the quantization precision and the model update frequency of the target participating node according to the target network information. Because each target participating node's quantization precision and model update frequency are adjusted according to its own network condition, federated learning with adaptive network optimization is realized: high network utilization and model learning efficiency are achieved without reducing training precision, and model training efficiency is improved. The method can be applied to various functional modules of smart cities, such as smart government affairs and smart transportation, for example, promoting the rapid development of smart government affairs through federated-learning-based model training modules.
Fig. 2 is a block diagram of a model training device based on federal learning according to a second embodiment of the present application.
In some embodiments, the federal learning-based model training apparatus 20 may include a plurality of functional modules consisting of computer program segments. The computer program of each program segment in the federal learning-based model training apparatus 20 may be stored in a memory of a computer device and executed by at least one processor to perform the functions of federal learning-based model training (see fig. 1 for details).
In this embodiment, the model training device 20 based on federal learning may be divided into a plurality of functional modules according to the functions performed by the model training device. The functional module may include: an initial weight sending module 201, a second weight obtaining module 202, a model convergence judging module 203, a time calculating module 204, a network information determining module 205, a data updating module 206, a third weight obtaining module 207 and a training completion determining module 208. A module as referred to in this application refers to a series of computer program segments, stored in a memory, capable of being executed by at least one processor and of performing a fixed function. In the present embodiment, the functions of the respective modules will be described in detail in the following embodiments.
The initial weight sending module 201 is configured to send an initial weight to the target participating node, and calculate a first duration for the target participating node to receive the initial weight.
Federal learning is a completely new distributed artificial intelligence network architecture that includes a central node and a number of target participating nodes. Original data do not need to be shared between the central node and the target participation nodes and among a plurality of target participation nodes, so that data security and data privacy are ensured. And the central node transmits a model to be trained to the target participating nodes, each target participating node carries out training based on the local training data owned by each target participating node, the weight obtained by training is transmitted back to the central node, the central node judges whether the model to be trained is converged, and when the model to be trained is not converged, iterative training is carried out until the global model is converged.
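The iterate-until-convergence loop described above can be outlined with a toy example; the `Node` class and its "training" are hypothetical stand-ins for illustration, not the application's actual models:

```python
class Node:
    """Toy participant whose 'training' nudges weights toward its data mean."""
    def __init__(self, data):
        self.data = data

    def train(self, weights):
        target = sum(self.data) / len(self.data)
        return [w + 0.5 * (target - w) for w in weights]

def federated_round(central_weights, nodes):
    # Broadcast current weights, collect local updates, average element-wise.
    local = [node.train(central_weights) for node in nodes]
    return [sum(ws) / len(nodes) for ws in zip(*local)]

nodes = [Node([1.0]), Node([3.0])]
weights = [0.0]
for _ in range(20):           # iterate until (approximate) convergence
    weights = federated_round(weights, nodes)
```

Only the federated averaging skeleton is shown; the privacy-preserving details (quantization, homomorphic encryption, ack timing) described elsewhere in the application are omitted here.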
In at least one embodiment of the present application, the model to be trained includes, but is not limited to, an identification model, a classification model, a detection model, a prediction model, etc., and in particular, an artificial neural network model, a support vector machine, a convolutional neural network model, etc. may be used in the above embodiment, which is not limited herein. The target participation nodes are nodes which have respective local training data and participate in model collaborative training, and the number of the target participation nodes is more than or equal to two.
The central node sends the model to be trained to the target participating node; the target participating node obtains the initial quantization precision, the initial model update frequency and the learning rate corresponding to the model to be trained, and adjusts the model to be trained according to the initial quantization precision, the initial model update frequency and the learning rate to obtain a local training model. The initial quantization precision, the initial model update frequency and the learning rate are values preset by system personnel: the initial quantization precision identifies the quantization size of a floating point number, and the initial model update frequency identifies the iteration interval of the model; for example, an initial model update frequency of 1 indicates participation in every iteration.
In at least one embodiment of the present application, the initial weight is a weight of a first iteration preset by a system staff, and the first duration of receiving the initial weight by each target participating node is also different due to different network states of each target participating node. The first duration may be determined by means of an ack packet.
Optionally, the calculating the first time length for the target participating node to receive the initial weight includes:
acquiring a starting time point for starting to send the initial weight;
acquiring an ending time point which is output by the target participating node and receives the initial weight;
and calculating the difference between the ending time point and the starting time point to obtain a first time length.
The central node records a starting time point a when it begins to send the initial weight; the target participating node replies with an ack packet to the central node upon receiving the initial weight, and the central node records the time b at which the ack packet is received. The first duration is then t1_i = b − a.
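The ack-based measurement can be sketched on a single machine, with the network round trip simulated by a short sleep (the names and delay are illustrative):

```python
import time

def measure_first_duration(send_weight, wait_for_ack):
    a = time.monotonic()   # starting time point: sending of the initial weight
    send_weight()
    wait_for_ack()         # blocks until the node's ack packet is received
    b = time.monotonic()   # ending time point: ack received
    return b - a           # first duration t1_i = b - a

t1_i = measure_first_duration(lambda: None, lambda: time.sleep(0.05))
```

`time.monotonic` is used rather than wall-clock time so the difference is unaffected by system clock adjustments.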
The second weight obtaining module 202 is configured to receive the first local model weight sent by the target participating node, and call a preset model to process the first local model weight, so as to obtain a second weight corresponding to the model to be trained.
In at least one embodiment of the present application, the target participating node trains the local training model according to the initial weight and the local training data, obtains the updated local training model, and obtains the local model weight of the updated local training model. The target participating node performs quantization processing on the local model weight to obtain an intermediate local model weight, and then calls a preset encryption key to perform encryption processing on the intermediate local model weight to obtain the first local model weight. The preset encryption key can be preset by system personnel and is sent to each participating node by the central node. The application can adopt homomorphic encryption for the encryption processing; for example, [·] may denote homomorphic encryption, so that if m is a plaintext, [m] is the ciphertext obtained by homomorphically encrypting m with the public key pk. In one embodiment, the homomorphic encryption is additively homomorphic, i.e., [m1 + m2] = [m1] + [m2] and [cm] = c[m], where c is a positive integer constant. Since homomorphic encryption must operate in the positive integer domain, all data are required to be quantized to positive integers; specifically, for a floating point number x, the quantized value is ⌊2^p · x⌋ mod q, wherein p and q are positive integers and ⌊·⌋ is the round-down (floor) function. The quantization precision can be changed by adjusting the size of p, which changes the quantization size. The quantization formula with unspecified p is also referred to herein as an initial quantization model, and with a known p value as a target quantization model; each participating node can perform quantization processing on the local model weight by calling the target quantization model to obtain an intermediate local model weight.
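The additive property [m1 + m2] = [m1] + [m2] can be demonstrated with a toy Paillier cryptosystem; the tiny primes below are for illustration only and are utterly insecure, and the application does not mandate Paillier specifically — it is just one well-known additively homomorphic scheme:

```python
import math
import random

# Toy Paillier keys from small primes (insecure; for illustration only).
p_prime, q_prime = 17, 19
n = p_prime * q_prime
n2 = n * n
g = n + 1                                  # standard generator choice
lam = math.lcm(p_prime - 1, q_prime - 1)   # private key lambda
mu = pow(lam, -1, n)                       # lambda^-1 mod n (valid for g = n + 1)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:             # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Multiplying ciphertexts corresponds to adding plaintexts: [12 + 30] = [12]*[30].
ciphertext_sum = (encrypt(12) * encrypt(30)) % n2
plaintext_sum = decrypt(ciphertext_sum)    # 42
```

This is also why the weights must first be quantized to positive integers: the homomorphic operations above are defined only over a positive integer domain modulo n.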
Optionally, the calling the preset model to process the weight of the first local model, and obtaining the second weight corresponding to the model to be trained includes:
acquiring the number of the target participation nodes and the first local model weight corresponding to each target participation node;
summing a plurality of first local model weights to obtain a first local model weight sum;
calculating the ratio of the first local model weight to the number to obtain an initial local model weight;
and calling a preset encryption key to decrypt and process the initial local model weight to obtain a second weight corresponding to the model to be trained.
The number of the target participation nodes is multiple, each target participation node comprises a corresponding first local model weight, and a second weight corresponding to the model to be trained can be obtained by calling the preset model to process the multiple first local model weights. The preset encryption key is a key preset by system personnel.
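The aggregation steps just listed — sum the node weights, take the ratio of the sum to the node count, then decrypt — can be sketched as follows; the identity "decryption" is a stand-in for the preset key, which this sketch does not model:

```python
def aggregate(first_local_weights, decrypt):
    n = len(first_local_weights)                            # number of target nodes
    totals = [sum(ws) for ws in zip(*first_local_weights)]  # element-wise sum
    initial = [t / n for t in totals]                       # ratio of sum to count
    return [decrypt(v) for v in initial]                    # second weight

weights = [[2.0, 4.0], [4.0, 8.0]]                      # one weight vector per node
second_weight = aggregate(weights, decrypt=lambda v: v)  # → [3.0, 6.0]
```

With an additively homomorphic scheme, the element-wise sum can be computed directly on ciphertexts before the single decryption at the end.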
And the model convergence judging module 203 is configured to update the model to be trained according to the second weight, and judge whether the updated model to be trained converges.
In at least one embodiment of the present application, the initial model to be trained includes the initial weight, and updating the model to be trained according to the second weight includes:
acquiring a target position of the initial weight in the model to be trained;
and replacing the initial weight at the target position with the second weight to obtain the updated model to be trained.
It can be understood that when the judgment result is that the updated model to be trained converges, the training of the model to be trained is completed.
And the time calculation module 204 is configured to calculate a second duration for the target participating node to send the weight of the first local model when the updated model to be trained is not converged.
In at least one embodiment of the present application, when the determination result is that the updated model to be trained has not converged, the weight of the model to be trained needs to be updated with the local model weights obtained by training at the target participating nodes, so as to complete the iterative training of the model to be trained until it converges. When the local model weights are used to update the weight of the model to be trained, the network conditions of the target participating nodes differ; if the network condition of one or more of the target participating nodes is poor, the central node and the other target participating nodes must wait for that node's updated local model weight, so that the training efficiency of the model to be trained is poor. The network condition is the combined condition of data transmission speed and network delay. The present application therefore adjusts each node's quantization precision and model update frequency according to the network condition of the target participating node, thereby improving the accuracy and efficiency of model training.
Optionally, the calculating the second duration for the target participating node to send the first local model weight includes:
acquiring a starting time point when the target participating node starts to send the first local model weight;
acquiring an ending time point of receiving the first local model weight;
and calculating the difference between the ending time point and the starting time point to obtain a second duration.
The network information determining module 205 is configured to determine target network information of the target participating node according to the first duration and the second duration.
In at least one embodiment of the present application, the target network information may be a combination of data transmission speed and network delay, and when the number of the target participating nodes is plural, the number of the target network information is plural.
Optionally, the determining the target network information of the target participating node according to the first duration and the second duration includes:
respectively calculating a first time length average value and a first time length standard deviation corresponding to the first time length, and determining a first network interval according to the first time length average value and the first time length standard deviation;
respectively calculating a second time length average value and a second time length standard deviation corresponding to the second time length, and determining a second network interval according to the second time length average value and the second time length standard deviation;
And determining target network information of each target participating node according to the relation between the first time length and the first network interval and the relation between the second time length and the second network interval.
Wherein the first time length of the ith target participating node is denoted as t1_i and the second time length of the ith target participating node as t2_i; the first time length average value is denoted as μ1, the second time length average value as μ2, the first time length standard deviation as σ1, and the second time length standard deviation as σ2. The first network interval may be the sum of the first time length average value and the first time length standard deviation, denoted μ1 + σ1; the second network interval may be the sum of the second time length average value and the second time length standard deviation, denoted μ2 + σ2.
And the data updating module 206 is configured to update the quantization precision and the model updating frequency of the target participating node according to the target network information, and send the quantization precision and the model updating frequency to the target participating node.
In at least one embodiment of the present application, a first mapping relationship exists between network information and quantization accuracy, a second mapping relationship exists between network information and model update frequency, and a target quantization accuracy and a target model update frequency can be obtained by querying the first mapping relationship and the second mapping relationship.
Optionally, the updating the quantization precision and the model updating frequency of the target participating node according to the target network information includes:
acquiring a first mapping relation between preset network information and quantization precision, and traversing the first mapping relation according to the target network information to obtain target quantization precision;
and acquiring a second mapping relation between preset network information and model updating frequency, and traversing the second mapping relation according to the target network information to obtain the target model updating frequency.
Illustratively, when the first time length is greater than the first network interval and the second time length is greater than the second network interval, then p_i = p_i / 2 and k_i = k_i * 2; when exactly one of the two time lengths is greater than its corresponding network interval, then p_i = p_i / 2; and when the first time length is smaller than the first network interval and the second time length is smaller than the second network interval, then p_i = p_i * 2, where p_i denotes the quantization precision and k_i the model update frequency of the ith target participating node.
And a third weight obtaining module 207, configured to receive a second local model weight obtained by the target participating node according to the quantization precision and the model update frequency, and call the preset model to process the second local model weight, so as to obtain a third weight corresponding to the model to be trained.
In at least one embodiment of the present application, before receiving the second local model weight obtained by the target participating node according to the quantization precision and the model update frequency, the central node sends the second weight to each of the target participating nodes, and each target participating node processes the second weight according to the quantization precision and the model update frequency to obtain the second local model weight. When the target participating node obtains the second local model weight according to the quantization precision, the method further comprises: the target participating node trains the local training model according to the second weight and the local training data to obtain an updated local training model, and obtains the local model weight of the updated local training model. Before transmitting the local model weight to the central node, the target participating node quantizes and encrypts the local model weight to obtain the second local model weight. Specifically, for a local model weight x, the quantized value is ⌊2^p · x⌋ mod q, wherein p and q are positive integers and ⌊·⌋ is the round-down (floor) function; the quantization precision can be changed by adjusting the size of p. The target participating node also processes the second local model weight according to the model update frequency, namely, the target participating node participates in iterations of the model to be trained according to the model update frequency: when the model update frequency is 1, the target participating node participates in every iteration of the model to be trained; when the model update frequency is 2, the target participating node participates in an iteration of the model to be trained only after the remaining target participating nodes have participated in one iteration, and so on, which will not be described further here.
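The participation schedule implied by the model update frequency can be sketched as a simple modular check (k_i and the zero-based round indexing are illustrative assumptions):

```python
def participates(round_index, k_i):
    # A node with model update frequency k_i joins every k_i-th iteration.
    return round_index % k_i == 0

# k_i = 1: every iteration; k_i = 2: every other iteration.
rounds_for_k2 = [r for r in range(6) if participates(r, 2)]  # → [0, 2, 4]
```

Doubling k_i (as in the rule k_i = k_i * 2) halves how often a slow node must upload its weights.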
And the model convergence judging module 203 is configured to update the model to be trained according to the third weight, and judge whether the updated model to be trained converges.
In at least one embodiment of the present application, the model to be trained before updating includes the second weight, and updating the model to be trained according to the third weight includes:
acquiring a target position of the second weight in the model to be trained;
And replacing the second weight value at the target position with the third weight value to obtain the updated model to be trained.
And the training completion determining module 208 is configured to determine that the training of the model to be trained is completed when the updated model to be trained converges as a result of the determination.
In at least one embodiment of the present application, when the determination result is that the updated model to be trained converges, determining that training of the model to be trained is completed; and when the updated model to be trained is not converged, re-acquiring the network condition of each target participation node, and determining the quantization precision and the model updating frequency of the target participation node according to the network condition, so as to obtain a fourth weight until the model to be trained is converged.
Referring to fig. 3, a schematic structural diagram of a computer device according to a third embodiment of the present application is shown. In the preferred embodiment of the present application, the computer device 3 includes a memory 31, at least one processor 32, at least one communication bus 33, and a transceiver 34.
It will be appreciated by those skilled in the art that the configuration of the computer device shown in fig. 3 is not limiting of the embodiments of the present application, and that either a bus-type configuration or a star-type configuration may be used, and that the computer device 3 may include more or less other hardware or software than that shown, or a different arrangement of components.
In some embodiments, the computer device 3 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application specific integrated circuit, a programmable gate array, a digital processor, an embedded device, and the like. The computer device 3 may also include a client device, which includes, but is not limited to, any electronic product that can interact with a client by way of a keyboard, mouse, remote control, touch pad, or voice control device, such as a personal computer, tablet, smart phone, digital camera, etc.
It should be noted that the computer device 3 is only used as an example, and other electronic products that may be present in the present application or may be present in the future are also included in the scope of the present application and are incorporated herein by reference.
In some embodiments, the memory 31 has stored therein a computer program that, when executed by the at least one processor 32, performs all or part of the steps in the federal learning-based model training method as described. The memory 31 includes read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable rewritable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc memory, magnetic tape memory, or any other medium that can be used to carry or store data in a computer-readable form.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
The blockchain referred to in the application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, encryption algorithm and the like. The Blockchain (Blockchain), which is essentially a decentralised database, is a string of data blocks that are generated by cryptographic means in association, each data block containing a batch of information of network transactions for verifying the validity of the information (anti-counterfeiting) and generating the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
In some embodiments, the at least one processor 32 is the control unit of the computer device 3; it connects the various components of the entire computer device 3 using various interfaces and lines, and performs the various functions and data processing of the computer device 3 by running or executing programs or modules stored in the memory 31 and invoking data stored in the memory 31. For example, the at least one processor 32, when executing the computer program stored in the memory, implements all or part of the steps of the federal learning-based model training method described in embodiments of the present application, or implements all or part of the functionality of the federal learning-based model training apparatus. The at least one processor 32 may be composed of integrated circuits, such as a single packaged integrated circuit, or of multiple integrated circuits packaged with the same or different functionality, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like.
In some embodiments, the at least one communication bus 33 is arranged to enable connected communication between the memory 31 and the at least one processor 32 or the like.
Although not shown, the computer device 3 may further comprise a power source (such as a battery) for powering the various components, preferably the power source is logically connected to the at least one processor 32 via a power management means, whereby the functions of managing charging, discharging, and power consumption are performed by the power management means. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The computer device 3 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described in detail herein.
The integrated units implemented in the form of software functional modules described above may be stored in a computer readable storage medium. The software functional modules described above are stored in a storage medium and include instructions for causing a computer device (which may be a personal computer, a computer device, or a network device, etc.) or processor (processor) to perform portions of the methods described in various embodiments of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from its spirit or essential characteristics. The embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description; all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, the term "comprising" does not exclude other elements, and the singular does not exclude the plural. Several of the units or devices recited in the specification may be embodied by one and the same item of software or hardware. The terms first, second, etc. are used to denote names and do not imply any particular order.
Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solution of the present application. Although the present application has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that the technical solution of the present application may be modified or equivalently substituted without departing from the spirit and scope of the technical solution of the present application.

Claims (9)

1. A federated learning-based model training method for training a model to be trained, applied to a central node, characterized in that the method comprises the following steps:
sending an initial weight to a target participating node;
calculating a first duration for the target participating node to receive the initial weight, including: acquiring a start time point at which sending of the initial weight begins; acquiring an end time point, output by the target participating node, at which the initial weight is received; and calculating the difference between the end time point and the start time point to obtain the first duration;
receiving a first local model weight sent by the target participating node, and invoking a preset model to process the first local model weight to obtain a second weight corresponding to the model to be trained;
updating the model to be trained according to the second weight, and determining whether the updated model to be trained converges;
when the updated model to be trained does not converge, calculating a second duration for the target participating node to send the first local model weight;
determining target network information of the target participating node according to the first duration and the second duration, wherein the target network information reflects the combined situation of data transmission speed and network delay;
updating the quantization precision and model update frequency of the target participating node according to the target network information, and sending the quantization precision and the model update frequency to the target participating node;
receiving a second local model weight obtained by the target participating node according to the quantization precision and the model update frequency, and invoking the preset model to process the second local model weight to obtain a third weight corresponding to the model to be trained;
updating the model to be trained according to the third weight, and determining whether the updated model to be trained converges; and
when the updated model to be trained converges, determining that training of the model to be trained is completed.
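The first duration in claim 1 is simply the gap between the start time point of the send and the end time point reported back by the receiving node. A minimal Python sketch of that measurement, where `fake_send` is an illustrative stand-in for the real network call (all names here are assumptions, not from the patent):

```python
import time

def timed_send(send_fn, payload):
    """Claim 1's first duration: the difference between the end time point
    returned by the receiving node and the start time point of the send."""
    start_time_point = time.monotonic()
    end_time_point = send_fn(payload)   # node returns once it has received
    return end_time_point - start_time_point

# Illustrative stand-in for a network send that takes roughly 50 ms.
def fake_send(payload):
    time.sleep(0.05)
    return time.monotonic()

first_duration = timed_send(fake_send, [0.1, 0.2])
```

The second duration of the upload direction would be measured symmetrically, with the participating node recording the start time point and the central node the end time point.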
2. The federated learning-based model training method according to claim 1, wherein invoking a preset model to process the first local model weight to obtain a second weight corresponding to the model to be trained comprises:
acquiring the number of target participating nodes and the first local model weight corresponding to each target participating node;
summing the plurality of first local model weights to obtain a first local model weight sum;
calculating the ratio of the first local model weight sum to the number to obtain an initial local model weight; and
invoking a preset encryption key to decrypt the initial local model weight to obtain the second weight corresponding to the model to be trained.
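Claim 2 is a sum-then-average aggregation followed by decryption with the preset key. A minimal sketch under assumptions: the patent does not name its encryption scheme, so a shared additive mask (which averaging preserves) stands in for it, and `aggregate_local_weights` is an illustrative name:

```python
import numpy as np

def aggregate_local_weights(local_weights, decrypt):
    """Average the first local model weights from all target participating
    nodes, then decrypt the average with the preset key (claim 2)."""
    count = len(local_weights)                  # number of target nodes
    weight_sum = np.sum(local_weights, axis=0)  # first local model weight sum
    initial_weight = weight_sum / count         # ratio of the sum to the count
    return decrypt(initial_weight)              # second weight for the model

# Toy "encryption": a shared mask added by every node; the average of the
# masked weights minus the mask equals the average of the plain weights.
MASK = 10.0
encrypted = [np.array([1.0, 2.0]) + MASK, np.array([3.0, 4.0]) + MASK]
second_weight = aggregate_local_weights(encrypted, lambda w: w - MASK)
# second_weight is the element-wise average [2.0, 3.0]
```

Decrypting after averaging only works for schemes that commute with addition (masking, additively homomorphic encryption); a generic cipher would have to be removed per node before summing.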
3. The federated learning-based model training method according to claim 1, wherein determining the target network information of the target participating node according to the first duration and the second duration comprises:
calculating a first duration mean and a first duration standard deviation corresponding to the first durations, and determining a first network interval according to the first duration mean and the first duration standard deviation;
calculating a second duration mean and a second duration standard deviation corresponding to the second durations, and determining a second network interval according to the second duration mean and the second duration standard deviation; and
determining the target network information of each target participating node according to the relationship between the first duration and the first network interval and the relationship between the second duration and the second network interval.
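One natural reading of claim 3 is an interval of one standard deviation around the mean, with a node's latest duration classified by where it falls relative to that interval. A sketch under that assumption (the interval width and the "fast/normal/slow" labels are illustrative choices, not from the patent):

```python
import statistics

def network_interval(durations):
    """Build a [mean - std, mean + std] interval from observed durations."""
    mean = statistics.mean(durations)
    std = statistics.pstdev(durations)
    return (mean - std, mean + std)

def classify(duration, interval):
    """Relate a duration to the interval: below it suggests a fast link,
    inside it a normal link, above it a slow link."""
    low, high = interval
    if duration < low:
        return "fast"
    if duration > high:
        return "slow"
    return "normal"

def target_network_info(first_duration, first_interval,
                        second_duration, second_interval):
    """Combine the download-side (first) and upload-side (second)
    classifications into the node's target network information (claim 3)."""
    return (classify(first_duration, first_interval),
            classify(second_duration, second_interval))
```

For example, against durations `[1.0, 2.0, 3.0]` the interval is roughly `(1.18, 2.82)`, so `0.5` classifies as fast and `3.0` as slow.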
4. The federated learning-based model training method according to claim 1, wherein updating the quantization precision and model update frequency of the target participating node according to the target network information comprises:
acquiring a first mapping relationship between preset network information and quantization precision, and traversing the first mapping relationship according to the target network information to obtain a target quantization precision; and
acquiring a second mapping relationship between preset network information and model update frequency, and traversing the second mapping relationship according to the target network information to obtain a target model update frequency.
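The two mapping relationships of claim 4 amount to two lookup tables keyed by network condition. A minimal sketch in which the table contents (bit widths, round counts) are hypothetical values chosen for illustration:

```python
# Hypothetical preset mappings for claim 4: slower links get coarser
# quantization and less frequent uploads; the values are illustrative.
PRECISION_MAP = {"fast": 16, "normal": 8, "slow": 4}   # quantization bits
FREQUENCY_MAP = {"fast": 1, "normal": 2, "slow": 4}    # local rounds per upload

def update_node_parameters(target_network_info):
    """Traverse both preset mappings with the node's network information to
    obtain its target quantization precision and model update frequency."""
    quantization_precision = PRECISION_MAP[target_network_info]
    model_update_frequency = FREQUENCY_MAP[target_network_info]
    return quantization_precision, model_update_frequency
```

The central node would then send the pair back to the participating node, which applies it from the next round onward.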
5. A federated learning-based model training method for training a model to be trained, applied to a target participating node, the method comprising:
acquiring preset initial model parameters corresponding to the model to be trained, and adjusting the model to be trained according to the preset initial model parameters to obtain a local training model, wherein the preset initial model parameters include an initial quantization precision, an initial model update frequency, and a learning rate;
when an initial weight sent by a central node is received, acquiring training data corresponding to the local training model, and training the local training model according to the initial weight and the training data to obtain an updated local training model;
quantizing the local model weight of the updated local training model according to the initial quantization precision to obtain an intermediate local model weight;
invoking a preset encryption key to encrypt the intermediate local model weight to obtain a first local model weight; and
sending the first local model weight to the central node according to the initial model update frequency.
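The participant side of claim 5 is a train-quantize-encrypt pipeline. A sketch under stated assumptions: the squared-error objective, the toy quantizer and mask cipher, and all names are illustrative, since the patent leaves the local model and encryption scheme unspecified:

```python
import numpy as np

def participant_update(initial_weight, training_data, learning_rate,
                       quantize, encrypt):
    """Claim 5's participant steps: gradient-style local updates from the
    initial weight, then quantize and encrypt the local model weight."""
    w = np.asarray(initial_weight, dtype=np.float64)
    for x, y in training_data:                  # train the local model
        gradient = 2 * (w @ x - y) * x          # d/dw of (w . x - y)^2
        w = w - learning_rate * gradient
    intermediate = quantize(w)                  # intermediate local model weight
    return encrypt(intermediate)                # first local model weight

data = [(np.array([1.0, 0.0]), 1.0)]
first_local_weight = participant_update(
    np.zeros(2), data, learning_rate=0.25,
    quantize=lambda w: np.round(w, 2),          # toy 2-decimal quantization
    encrypt=lambda w: w + 10.0)                 # toy additive mask
```

In a real deployment the upload would only fire every `initial model update frequency` rounds, matching the last step of the claim.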
6. The federated learning-based model training method according to claim 5, wherein quantizing the local model weight of the updated local training model according to the initial quantization precision comprises:
acquiring an initial quantization model;
updating the initial quantization model according to the quantization precision to obtain a target quantization model; and
inputting the local model weight into the target quantization model to obtain the intermediate local model weight.
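Claim 6 parameterizes a quantization model by the precision before applying it to the weights. A sketch assuming uniform (min-max) quantization, which is one common choice; the patent does not fix the initial quantization model, so this concrete form is an assumption:

```python
import numpy as np

def make_quantizer(bits):
    """Update an initial (uniform) quantization model with the given
    precision, yielding the target quantization model (claim 6)."""
    levels = 2 ** bits - 1
    def quantize(weights):
        w = np.asarray(weights, dtype=np.float64)
        lo, hi = w.min(), w.max()
        scale = (hi - lo) / levels if hi > lo else 1.0
        # Snap each weight onto the nearest of 2**bits uniform levels.
        return np.round((w - lo) / scale) * scale + lo
    return quantize

quantize = make_quantizer(bits=8)
intermediate_weight = quantize([0.0, 0.251, 1.0])
```

Lowering `bits` (as claim 4 would for a slow link) coarsens the grid and so shrinks the payload each upload has to carry.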
7. A federated learning-based model training apparatus for training a model to be trained, applied to a central node, characterized in that the federated learning-based model training apparatus comprises:
an initial weight sending module, configured to send an initial weight to a target participating node and calculate a first duration for the target participating node to receive the initial weight, including: acquiring a start time point at which sending of the initial weight begins; acquiring an end time point, output by the target participating node, at which the initial weight is received; and calculating the difference between the end time point and the start time point to obtain the first duration;
a second weight acquisition module, configured to receive a first local model weight sent by the target participating node, and invoke a preset model to process the first local model weight to obtain a second weight corresponding to the model to be trained;
a model convergence judging module, configured to update the model to be trained according to the second weight and determine whether the updated model to be trained converges;
a time calculation module, configured to calculate a second duration for the target participating node to send the first local model weight when the updated model to be trained does not converge;
a network information determining module, configured to determine target network information of the target participating node according to the first duration and the second duration, wherein the target network information reflects the combined situation of data transmission speed and network delay;
a data updating module, configured to update the quantization precision and model update frequency of the target participating node according to the target network information, and send the quantization precision and the model update frequency to the target participating node;
a third weight acquisition module, configured to receive a second local model weight obtained by the target participating node according to the quantization precision and the model update frequency, and invoke the preset model to process the second local model weight to obtain a third weight corresponding to the model to be trained;
the model convergence judging module being further configured to update the model to be trained according to the third weight and determine whether the updated model to be trained converges; and
a training completion determining module, configured to determine that training of the model to be trained is completed when the updated model to be trained converges.
8. A computer device comprising a processor, wherein the processor, when executing a computer program stored in a memory, implements the federated learning-based model training method according to any one of claims 1 to 6.
9. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the federated learning-based model training method according to any one of claims 1 to 6.
CN202111136508.0A 2021-09-27 2021-09-27 Model training method and device based on federal learning and related equipment Active CN113837397B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111136508.0A CN113837397B (en) 2021-09-27 2021-09-27 Model training method and device based on federal learning and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111136508.0A CN113837397B (en) 2021-09-27 2021-09-27 Model training method and device based on federal learning and related equipment

Publications (2)

Publication Number Publication Date
CN113837397A CN113837397A (en) 2021-12-24
CN113837397B true CN113837397B (en) 2024-02-02

Family

ID=78970823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111136508.0A Active CN113837397B (en) 2021-09-27 2021-09-27 Model training method and device based on federal learning and related equipment

Country Status (1)

Country Link
CN (1) CN113837397B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118101501B (en) * 2024-04-23 2024-07-05 山东大学 Communication method and system for industrial Internet of things heterogeneous federal learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263908A (en) * 2019-06-20 2019-09-20 深圳前海微众银行股份有限公司 Federal learning model training method, equipment, system and storage medium
CN112617855A (en) * 2020-12-31 2021-04-09 平安科技(深圳)有限公司 Electrocardiogram analysis method and device based on federal learning and related equipment
CN112784995A (en) * 2020-12-31 2021-05-11 杭州趣链科技有限公司 Federal learning method, device, equipment and storage medium
WO2021120676A1 (en) * 2020-06-30 2021-06-24 平安科技(深圳)有限公司 Model training method for federated learning network, and related device


Also Published As

Publication number Publication date
CN113837397A (en) 2021-12-24

Similar Documents

Publication Publication Date Title
CN113489681B (en) Block link point data consistency consensus method, device, equipment and storage medium
CN110795477A (en) Data training method, device and system
CN110417558A (en) Verification method and device, the storage medium and electronic device of signature
CN114827198B (en) Multi-layer center asynchronous federal learning method applied to Internet of vehicles
CN113505882B (en) Data processing method based on federal neural network model, related equipment and medium
CN116745780A (en) Method and system for decentralised federal learning
Uddin et al. An efficient selective miner consensus protocol in blockchain oriented IoT smart monitoring
WO2023093235A1 (en) Communication network architecture generation method and apparatus, electronic device, and medium
Li et al. FEEL: Federated end-to-end learning with non-IID data for vehicular ad hoc networks
CN113837397B (en) Model training method and device based on federal learning and related equipment
Bany Taha et al. TD‐PSO: task distribution approach based on particle swarm optimization for vehicular ad hoc network
CN112966878A (en) Loan overdue prediction and learning method and device
CN114579957A (en) Credible sandbox-based federated learning model training method and device and electronic equipment
CN110599384B (en) Organization relation transferring method, device, equipment and storage medium
Serhani et al. Dynamic Data Sample Selection and Scheduling in Edge Federated Learning
CN108833133A (en) Network configuration management method, apparatus and storage medium based on system for cloud computing
CN118095803A (en) Logistics resource integration and scheduling platform and method based on big data
CN117979291A (en) Block chain-based Internet of things sensing network safety device, method, equipment and medium
CN116957110B (en) Trusted federation learning method and system based on federation chain
CN113676494B (en) Centralized data processing method and device
CN114707663A (en) Distributed machine learning method and device, electronic equipment and storage medium
CN113723509B (en) Follow-up monitoring method and device based on federal reinforcement learning and related equipment
CN111222057A (en) Information processing method and device and computer readable storage medium
CN118368053B (en) Method and system for collaborative security calculation under chain upper chain based on sliced block chain
CN106650271A (en) Data encryption processing-based personal medical information management system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40062774

Country of ref document: HK

GR01 Patent grant