CN112801301B - Asynchronous computing method, device, apparatus, storage medium, and program product - Google Patents

Asynchronous computing method, device, apparatus, storage medium, and program product

Info

Publication number: CN112801301B
Application number: CN202110134143.1A
Authority: CN (China)
Prior art keywords: model, preset, training, federal, global model
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN112801301A
Inventors: 张天豫, 范力欣, 吴锦和
Current Assignee: WeBank Co Ltd
Original Assignee: WeBank Co Ltd

Application CN202110134143.1A filed by WeBank Co Ltd
Publication of CN112801301A
Application granted; publication of CN112801301B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time or of input/output operations; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3438 Monitoring of user actions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an asynchronous computing method, device, apparatus, storage medium, and program product. The method comprises the following steps: storing model information of a current global model into a preset blockchain; if the federal model does not meet the preset federal model requirement, determining the optimal global model stored in the current preset blockchain, obtaining a target federal model that meets the preset federal model requirement through a third-party device, and obtaining the operation behavior of the user to be predicted through the target federal model. The computation results of all participants are thereby kept on-chain by means of the preset blockchain, which solves the technical problem in the prior art that, under asynchronous computation in federated learning, the federal models obtained by some participants fall below standard and the predictions of user operation behaviors become inaccurate. By exploiting the traceability of the preset blockchain, the accuracy of federated learning results under asynchronous computation is improved, and with it the accuracy of the prediction of the operation behavior of the user to be predicted.

Description

Asynchronous computing method, device, apparatus, storage medium, and program product
Technical Field
The present invention relates to the field of federal learning technology, and in particular, to an asynchronous computing method, apparatus, device, storage medium, and program product.
Background
Currently, many scenarios involve predicting user operation behaviors. For example, a service platform typically predicts a user's consumption behavior from the user's product browsing data, historical order data, and the like, in order to recommend or push commodities to the user; similarly, a music platform predicts a user's song-search behavior from the user's listening history in order to recommend or push songs. In such scenarios, the platform usually relies on a machine learning model to predict user operation behaviors.
At present, to protect user data privacy, user operation behaviors are mostly predicted with a federated learning model. However, the federated computation steps among the participants are designed under the assumption of synchronous computation, whereas in practice federated computation among the participants is mostly asynchronous: some participants have completed computation and uploaded their results while others have not yet uploaded theirs. As a result, the computation results of some participants in the federated learning process are inaccurate, the federal models they obtain fall below standard, and the prediction results become inaccurate.
Disclosure of Invention
The invention provides an asynchronous calculation method, device, apparatus, storage medium, and program product, aiming to solve the technical problem in the prior art that, due to asynchronous computation in federated learning, the federal models obtained by some participants fall below standard, so that prediction results are inaccurate.
To achieve the above object, the present invention provides an asynchronous calculation method applied to a participant in federal learning, the asynchronous calculation method including:
Performing iterative training on a local model, and sending the training result of the iterative training to a third-party device associated with the participant, so that the third-party device obtains a current global model according to the training result, stores model information of the current global model into a preset blockchain, and feeds back the global model in encrypted form;
Performing iterative training according to the current global model to obtain a federal model;
And if the federation model does not meet the preset federation model requirement, determining an optimal global model stored in the current preset blockchain, and sending an update request to the third party equipment so as to obtain a target federation model meeting the preset federation model requirement through the third party equipment.
Preferably, the step of determining the optimal global model stored in the current preset blockchain and sending an update request to the third party device to obtain, by the third party device, a target federation model that meets the preset federation model requirement includes:
Determining an optimal global model stored in a current preset blockchain, and sending an update request to the third party device so that the third party device performs global model update according to model information of the optimal global model to obtain a target global model, and feeding back the target global model;
And performing iterative training according to the target global model until a target federal model meeting the requirements of a preset federal model is obtained.
Preferably, the step of determining the optimal global model stored in the current preset blockchain and sending an update request to the third party device, so that the third party device performs global model update according to model information of the optimal global model includes:
reading global model information stored in each preset blockchain node of the current preset blockchain to determine an optimal global model stored in the current preset blockchain;
Determining a model sequence number of the optimal global model, and sending an update request carrying the model sequence number to the third party device so that the third party device obtains model information of the optimal global model from a preset blockchain according to the model sequence number, and carries out global model update according to the model information of the optimal global model.
Preferably, the step of determining that the federal model does not meet the preset federal model requirement includes:
determining a performance index of the federal model;
If the performance index of the federal model does not meet the preset performance index requirement, judging that the federal model does not meet the preset federal model requirement; or
Acquiring the number of participants participating in the federation model based on a preset blockchain;
If the number of the participants does not meet the preset number of participants, judging that the federal model does not meet the preset federal model requirement; or
Determining a degree of model performance improvement of the federal model and the local model;
And if the performance improvement degree of the model is smaller than the preset performance improvement degree, judging that the federal model does not meet the preset federal model requirement.
Preferably, the step of performing iterative training according to the current global model to obtain a target federal model further includes:
Taking the current global model as the local model, returning to the step of performing iterative training on the local model and sending the training result of the iterative training to the third-party device associated with the participant, so that the third-party device obtains a current global model according to the training result, stores model information of the current global model into each preset blockchain node of the preset blockchain, and feeds back the global model;
and continuing to perform the step of iterative training on the local model with the current global model as the local model until a preset number of training iteration rounds is reached, so as to obtain the target federal model.
In other words, the participant keeps sending its local training results to the third-party device associated with it, which computes a global model from the local training results, records the global model into the preset blockchain, and feeds the global model back.
Preferably, before the step of performing iterative training on the local model, the method further includes:
sending a participation request to the third party device to join a preset blockchain through the third party device;
And reading the global model information stored in each preset block chain node of the current preset block chain, and selecting the optimal global model as a local model.
Preferably, the step of sending the training result of the iterative training to a third party device associated with the participant, so that the third party device obtains a current global model according to the training result, and storing model information of the current global model into a preset blockchain includes:
Sending the training results of the iterative training to the third-party device associated with the participant, wherein the third-party device packages the encrypted and signed training result into a target block, adds the target block to a preset blockchain, obtains a current global model from the training results, stored in the preset blockchain, of the multiple participants currently participating in federated learning, and stores model information of the current global model into the preset blockchain.
In addition, to achieve the above object, the present invention also provides an asynchronous computing device including:
The first training module is used for carrying out iterative training on the local model, sending a training result of the iterative training to third party equipment associated with the participator, enabling the third party equipment to obtain a current global model according to the training result, storing model information of the current global model into a preset blockchain, and encrypting and feeding back the global model;
The second training module is used for carrying out iterative training according to the current global model so as to obtain a federal model;
And the third training module is used for determining an optimal global model stored in the current preset blockchain if the federation model does not meet the preset federation model requirement, sending an update request to the third party equipment, obtaining a target federation model meeting the preset federation model requirement through the third party equipment, and obtaining the operation behavior of the user to be predicted through the target federation model.
In addition, to achieve the above object, the present invention also provides an asynchronous computing device including a processor, a memory, and an asynchronous computing program stored in the memory, which when executed by the processor, implements the steps of the asynchronous computing method as described above.
In addition, to achieve the above object, the present invention also provides a computer storage medium having stored thereon an asynchronous calculation program which, when executed by a processor, implements the steps of the asynchronous calculation method as described above.
Furthermore, to achieve the above object, the present invention provides a computer program product comprising a computer program which, when run by a processor, implements the steps of the asynchronous calculation method as described above.
Compared with the prior art, the invention provides an asynchronous calculation method in which a local model is iteratively trained and the training result of the iterative training is sent to a third-party device associated with the participant, so that the third-party device obtains a current global model according to the training result, stores model information of the current global model into a preset blockchain, and feeds back the global model; iterative training is then performed according to the current global model to obtain a federal model; and if the federal model does not meet the preset federal model requirement, the optimal global model stored in the current preset blockchain is determined and an update request is sent to the third-party device so as to obtain, through the third-party device, a target federal model meeting the requirement. The invention thus keeps the computation results of all participants on-chain using the preset blockchain, which facilitates asynchronous computation among the participants: when the federal model a participant obtains falls below standard, the participant can fetch the optimal global model from the preset blockchain and continue training until a federal model meeting the standard is obtained. This solves the prior-art problem that, under asynchronous computation in federated learning, the federal models obtained by some participants fall below standard; by exploiting the traceability of the preset blockchain, the accuracy of federated learning results under asynchronous computation is improved, and with it the accuracy of the prediction of the operation behavior of the user to be predicted.
Drawings
FIG. 1 is a schematic hardware architecture of an asynchronous computing device according to embodiments of the invention;
FIG. 2 is a flow chart of a first embodiment of an asynchronous computing method of the present invention;
FIG. 3 is a flow chart of a second embodiment of the asynchronous calculation method of the present invention;
FIG. 4 is a flow chart of a third embodiment of an asynchronous calculation method according to the present invention;
FIG. 5 is a functional block diagram of an embodiment of an asynchronous computing device according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The asynchronous computing device involved in the embodiments of the invention is a device capable of network connection, such as a server or a cloud platform.
Referring to FIG. 1, FIG. 1 is a schematic hardware architecture of an asynchronous computing device according to various embodiments of the invention. In an embodiment of the invention, the asynchronous computing device may include a processor 1001 (e.g., a Central Processing Unit, CPU), a communication bus 1002, an input port 1003, an output port 1004, and a memory 1005. The communication bus 1002 enables communication between these components; the input port 1003 is used for data input; the output port 1004 is used for data output; and the memory 1005 may be a high-speed RAM memory or a stable (non-volatile) memory such as a disk memory, and may optionally be a storage device independent of the processor 1001. Those skilled in the art will appreciate that the hardware configuration shown in fig. 1 does not limit the invention and may include more or fewer components than shown, combine certain components, or arrange the components differently.
With continued reference to FIG. 1, the memory 1005 of FIG. 1, as a readable storage medium, may include an operating system, a network communication module, an application program module, and an asynchronous computing program. In fig. 1, the network communication module is mainly used to connect to a server and perform data communication with it, and the processor 1001 may call the asynchronous computing program stored in the memory 1005 and perform the following operations:
Performing iterative training on a local model, and sending the training result of the iterative training to a third-party device associated with the participant, so that the third-party device obtains a current global model according to the training result, stores model information of the current global model into a preset blockchain, and feeds back the global model in encrypted form;
Performing iterative training according to the current global model to obtain a federal model;
if the federation model does not meet the preset federation model requirement, determining an optimal global model stored in a current preset blockchain, and sending an update request to the third party device so as to obtain a target federation model meeting the preset federation model requirement through the third party device, and obtaining the operation behavior of the user to be predicted through the target federation model.
Further, the processor 1001 may be further configured to call an asynchronous calculation program stored in the memory 1005, and perform the following steps:
Determining an optimal global model stored in a current preset blockchain, and sending an update request to the third party device so that the third party device performs global model update according to model information of the optimal global model to obtain a target global model, and feeding back the target global model;
And performing iterative training according to the target global model until a target federal model meeting the requirements of a preset federal model is obtained.
Further, the processor 1001 may be further configured to call an asynchronous calculation program stored in the memory 1005, and perform the following steps:
reading global model information stored in each preset blockchain node of the current preset blockchain to determine an optimal global model stored in the current preset blockchain;
Determining a model sequence number of the optimal global model, and sending an update request carrying the model sequence number to the third party device so that the third party device obtains model information of the optimal global model from a preset blockchain according to the model sequence number, and carries out global model update according to the model information of the optimal global model.
Further, the processor 1001 may be further configured to call an asynchronous calculation program stored in the memory 1005, and perform the following steps:
determining a performance index of the federal model;
If the performance index of the federal model does not meet the preset performance index requirement, judging that the federal model does not meet the preset federal model requirement; or
Acquiring the number of participants participating in the federation model based on a preset blockchain;
If the number of the participants does not meet the preset number of participants, judging that the federal model does not meet the preset federal model requirement; or
Determining a degree of model performance improvement of the federal model and the local model;
And if the performance improvement degree of the model is smaller than the preset performance improvement degree, judging that the federal model does not meet the preset federal model requirement.
Further, the processor 1001 may be further configured to call an asynchronous calculation program stored in the memory 1005, and perform the following steps:
Taking the current global model as the local model, returning to the step of performing iterative training on the local model and sending the training result of the iterative training to the third-party device associated with the participant, so that the third-party device obtains a current global model according to the training result, stores model information of the current global model into each preset blockchain node of the preset blockchain, and feeds back the global model;
and continuing to perform the step of iterative training on the local model with the current global model as the local model until a preset number of training iteration rounds is reached, so as to obtain the target federal model.
In other words, the participant keeps sending its local training results to the third-party device associated with it, which computes a global model from the local training results, records the global model into the preset blockchain, and feeds the global model back.
Further, the processor 1001 may be further configured to call an asynchronous calculation program stored in the memory 1005, and perform the following steps:
sending a participation request to the third party device to join a preset blockchain through the third party device;
And reading the global model information stored in each preset block chain node of the current preset block chain, and selecting the optimal global model as a local model.
Further, the processor 1001 may be further configured to call an asynchronous calculation program stored in the memory 1005, and perform the following steps:
Sending the training results of the iterative training to the third-party device associated with the participant, wherein the third-party device packages the encrypted and signed training result into a target block, adds the target block to a preset blockchain, obtains a current global model from the training results, stored in the preset blockchain, of the multiple participants currently participating in federated learning, and stores model information of the current global model into the preset blockchain.
Based on the hardware structure shown in fig. 1, a first embodiment of the present invention provides an asynchronous calculation method.
Referring to fig. 2, fig. 2 is a flowchart of a first embodiment of an asynchronous computing method according to the present invention.
This embodiment of the invention provides an asynchronous calculation method. It should be noted that although a logical sequence is shown in the flowchart, in some cases the steps shown or described may be performed in a different order. Specifically, the asynchronous calculation method of this embodiment includes:
step S10: performing iterative training on a local model, sending a training result of the iterative training to third party equipment associated with the participant, so that the third party equipment obtains a current global model according to the training result, storing model information of the current global model into a preset blockchain, and feeding back the global model in an encryption mode;
It should be noted that, in the actual federated learning process, federated computation among the participants is mostly asynchronous: when some participants have completed computation and uploaded their results, others have not yet uploaded theirs, so the computation results of some participants in the federated learning process are inaccurate.
Specifically, the asynchronous calculation method is applied to a participant in federated learning. It is easy to understand that federated learning may involve multiple participants, and the third-party device associated with a participant may be the coordinator or another intermediate device in federated learning. The preset blockchain provides a trust mechanism for the participants: by means of the preset blockchain, in particular the authorization and identity-management mechanisms of a consortium chain, mutually untrusting users can be brought together as participants and third-party device under a secure, trusted collaboration mechanism. Preferably, this embodiment takes a data-providing participant a as an example. After participant a performs iterative training on the local model based on its local data, it sends the training result of the iterative training to the third-party device associated with it. After receiving the training result, the third-party device stores it in the preset blockchain, reads the training results already uploaded by the other participants from the preset blockchain, and aggregates them with participant a's training result to obtain the current global model.
After the current global model is computed, the third-party device stores model information of the current global model into the preset blockchain so that the participants can later trace the computation results. In addition, to ensure data security, the multiple participants in federated learning may operate inside a secure federated network, and the current global model aggregated from their training results is stored in the preset blockchain in encrypted form. The model information of the current global model includes information about the global model itself, information about the participants whose results were aggregated into it, such as the participants' device numbers and the training-result data they contributed, the sequence number of the current global model, and so on. The third-party device then feeds the global model back to participant a for further training. Keeping this information on-chain also reduces the computation burden on terminal devices: a participant that later joins federated learning can obtain the information directly from the preset blockchain, which speeds up its training.
In addition, for ease of understanding, this embodiment provides a specific implementation in which the training result of the iterative training is sent to the third-party device associated with the participant, so that the third-party device obtains a current global model according to the training result and stores model information of the current global model into a preset blockchain, as follows:
Sending the training results of the iterative training to the third-party device associated with the participant, wherein the third-party device packages the encrypted and signed training result into a target block, adds the target block to a preset blockchain, obtains a current global model from the training results, stored in the preset blockchain, of the multiple participants currently participating in federated learning, and stores model information of the current global model into the preset blockchain.
Specifically, the training result includes data from model training such as gradients, model parameters, and model accuracy, which this embodiment does not limit. Preferably, this embodiment takes trained model parameters as an example: based on the data it holds, participant a trains locally with a gradient-descent-based algorithm, searching for the model parameters that minimize a loss function, and sends the model parameters corresponding to the minimal loss to the third-party device associated with the participants. The third-party device collects the model parameters from each participant and stores them, in the form of transactions, in the preset blockchain nodes corresponding to the respective participants.
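For ease of understanding, the following minimal Python sketch illustrates this local training step under stated assumptions: a plain linear model with a mean-squared-error loss stands in for whatever model the participant actually trains, and the learning rate, epoch count, and synthetic data are illustrative only, not part of the disclosure.

```python
import numpy as np

def local_training_step(weights, X, y, lr=0.1, epochs=50):
    # Gradient-descent search for the model parameters that minimize
    # the loss function (here: mean squared error of a linear model).
    for _ in range(epochs):
        grad = X.T @ (X @ weights - y) / len(y)  # gradient of the MSE loss
        weights = weights - lr * grad            # descend toward the minimum
    loss = float(np.mean((X @ weights - y) ** 2))
    return weights, loss

# Party a trains on the data it holds, then sends the parameters that
# minimize the loss to the third-party device (transport not shown).
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0])
weights, loss = local_training_step(np.zeros(4), X, y)
```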
When the third-party device receives verification-passed messages fed back by the preset blockchain nodes of all participants in the current preset blockchain, it creates a head block for participant b and associates participant b's preset blockchain node with the other preset blockchain nodes. The head block of participant b contains participant b's identification information together with other information, such as the timestamp at which participant b joined the federated learning, so that problems arising later in federated learning can be traced back.
It is easy to understand that the third-party device packages participant a's encrypted and signed training result into a target block and, after the training result passes whole-network verification, adds the target block to the preset blockchain. It then reads the training results already uploaded by the other participants from the current preset blockchain and performs an aggregation computation over participant a's result and the other recorded results to obtain the current global model.
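A minimal sketch of this on-chain bookkeeping and aggregation is given below, assuming a Python list as the chain and simple SHA-256 block hashing; signature verification, whole-network consensus, and encryption are elided, and the FedAvg-style parameter averaging is an illustrative choice, since the text speaks only of an "aggregation calculation".

```python
import hashlib
import json
import time

def add_training_result_block(chain, party_id, params, signature):
    # Package the (already encrypted and signed) training result into a
    # target block and append it to the preset blockchain.
    block = {
        "party": party_id,
        "params": list(params),
        "sig": signature,
        "ts": time.time(),
        "prev": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

def aggregate_global_model(chain):
    # Average every parameter vector recorded on-chain into the
    # current global model (FedAvg-style mean).
    uploads = [b["params"] for b in chain if "params" in b]
    return [sum(col) / len(uploads) for col in zip(*uploads)]

chain = []
add_training_result_block(chain, "party_a", [0.9, -1.8], "sig_a")
add_training_result_block(chain, "party_b", [1.1, -2.2], "sig_b")
print(aggregate_global_model(chain))  # -> [1.0, -2.0]
```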
Step S20: performing iterative training according to the current global model to obtain a federal model;
In this step, after the global model in federated learning is obtained, iterative training is performed locally. For example, the loss function of the global model is computed and the model parameters of the global model are updated by gradient descent until the loss function converges or a preset number of training iteration rounds is reached; the model parameters at that point are taken as the final federal model.
In addition, in an embodiment, the step of performing iterative training according to the current global model to obtain a target federal model further includes:
Taking the current global model as the local model, returning to the step of performing iterative training on the local model and sending the training result of the iterative training to the third-party device associated with the participant, so that the third-party device obtains a current global model according to the training result, stores model information of the current global model into each preset blockchain node of the preset blockchain, and feeds back the global model;
and continuing to perform the step of iterative training on the local model with the current global model as the local model until a preset number of training iteration rounds is reached, so as to obtain the target federal model.
In other words, the participant keeps sending its local training results to the third-party device associated with it, which computes a global model from the local training results, records the global model into the preset blockchain, and feeds the global model back.
In this step, after the current global model is obtained, it is used as the local model and the training step is repeated until the preset number of training iteration rounds is reached, so that a more accurate global model, and hence a more accurate target federal model, is obtained.
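The shape of this training loop can be summarized in a few lines of Python; `train_fn` and `exchange_fn` below are hypothetical callables standing in for the local-training step and the upload/feedback exchange with the third-party device described above.

```python
def federated_rounds(local_model, num_rounds, train_fn, exchange_fn):
    # Each round: iterative local training, upload of the training
    # result, and feedback of the aggregated global model, which then
    # becomes the next round's local model.
    for _ in range(num_rounds):
        training_result = train_fn(local_model)
        local_model = exchange_fn(training_result)
    return local_model  # taken as the target federal model
```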
Step S30: if the federation model does not meet the preset federation model requirement, determining an optimal global model stored in a current preset blockchain, and sending an update request to the third party device so as to obtain a target federation model meeting the preset federation model requirement through the third party device, and obtaining the operation behavior of the user to be predicted through the target federation model.
Specifically, the preset federal model requirement may be set by each participant based on its own federation needs, for example the number of training iterations, the accuracy of the federal model, and so on, which this embodiment does not limit.
As will be readily appreciated, in the actual federated learning process, federated computation among the participants is mostly asynchronous: when some participants have completed computation and uploaded their results, others have not yet uploaded theirs. A participant that has finished computing may therefore miss the data of participants that have not uploaded yet, which can make the final federal model inaccurate.
It should be understood that, in this embodiment, the target federal model meeting the preset federal model requirement is obtained from the optimal global model stored in the preset blockchain nodes. Compared with the prior art, where the results of participants that finished early miss the uploads of slower participants and are therefore inaccurate, this embodiment uses the traceability of the preset blockchain to improve the accuracy of federated learning results under asynchronous computation, and hence the accuracy of the prediction of the operation behavior of the user to be predicted when the target federal model is used.
In addition, it should be noted that, in this embodiment, the step of determining that the federal model does not meet the preset federal model requirement may proceed under any one of the following judging conditions:
judging condition one:
determining a performance index of the federal model;
If the performance index of the federal model does not meet the preset performance index requirement, judging that the federal model does not meet the preset federal model requirement; or
Judging condition II:
acquiring the number of participants participating in the federation model based on a preset blockchain;
If the number of the participants does not meet the preset number of participants, judging that the federal model does not meet the preset federal model requirement; or
And (3) judging a condition III:
Determining a degree of model performance improvement of the federal model and the local model;
And if the performance improvement degree of the model is smaller than the preset performance improvement degree, judging that the federal model does not meet the preset federal model requirement.
In particular, the performance index of the federal model includes indices that characterize how good the model is, such as accuracy, sensitivity, recall, Gini coefficient, and gain, which this embodiment does not limit. The degree of model performance improvement between the federal model and the local model refers to the difference in a performance index between the two, for example the difference in sensitivity. It is easy to understand that when this difference is too small, the federal model is essentially the same as the local model, so participating in federated training has not achieved its goal.
Specifically, the number of participants participating in the federal model refers to the number of participants whose results were federally aggregated into the current global model that the third-party device fed back and that the participant then iteratively trained to obtain the federal model. It is easy to understand that when too few participants took part in the aggregation, the data features are fewer, so the resulting federal model performs poorly.
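As a sketch, the three judging conditions can be combined as below; the choice of accuracy as the performance index and all threshold values are assumptions for illustration, since the text leaves both to each participant's preset requirements.

```python
def meets_federal_requirement(fed_metrics, local_metrics, n_participants,
                              min_accuracy=0.90, min_participants=3,
                              min_improvement=0.01):
    # Condition 1: the performance index falls short of the preset
    # performance index requirement.
    if fed_metrics["accuracy"] < min_accuracy:
        return False
    # Condition 2: too few participants were aggregated into the model.
    if n_participants < min_participants:
        return False
    # Condition 3: the improvement over the purely local model is
    # smaller than the preset degree of performance improvement.
    if fed_metrics["accuracy"] - local_metrics["accuracy"] < min_improvement:
        return False
    return True
```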
It should be understood that the foregoing is merely illustrative and does not limit the technical solution of the present invention in any way; those skilled in the art may configure these as needed in practical applications, and they are not enumerated here.
According to this embodiment, the local model is iteratively trained and the training result of the iterative training is sent to the third-party device associated with the participant, so that the third-party device obtains a current global model according to the training result, stores model information of the current global model into a preset blockchain, and feeds back the global model; iterative training is then performed according to the current global model to obtain a federal model; and if the federal model does not meet the preset federal model requirement, the optimal global model stored in the current preset blockchain is determined and an update request is sent to the third-party device so as to obtain, through the third-party device, a target federal model meeting the requirement. The invention thus keeps the computation results of all participants on-chain using the preset blockchain, which facilitates asynchronous computation among the participants: when the federal model a participant obtains falls below standard, the participant can fetch the optimal global model from the preset blockchain and continue training until a federal model meeting the standard is obtained. This solves the prior-art problem that, under asynchronous computation in federated learning, the federal models obtained by some participants fall below standard; by exploiting the traceability of the preset blockchain, the accuracy of federated learning results under asynchronous computation is improved, and with it the accuracy of the prediction of the operation behavior of the user to be predicted.
Further, based on the first embodiment of the asynchronous calculation method of the present invention, a second embodiment of the asynchronous calculation method of the present invention is presented.
Referring to fig. 3, fig. 3 is a flowchart illustrating a second embodiment of an asynchronous calculation method according to the present invention;
The second embodiment of the asynchronous calculation method differs from the first embodiment in that the step of determining the optimal global model stored in the current preset blockchain and sending an update request to the third-party device, so as to obtain through the third-party device a target federal model meeting the preset federal model requirement, includes:
step S301: determining an optimal global model stored in a current preset blockchain, and sending an update request to the third party device so that the third party device performs global model update according to model information of the optimal global model to obtain a target global model, and feeding back the target global model;
step S302: and performing iterative training according to the target global model until a target federal model meeting the requirements of a preset federal model is obtained.
Since the training data and the federated training process of each participant are recorded in the preset blockchain, if the currently obtained federal model does not meet the preset federal model requirement, the participant can download the preset blockchain locally and select the optimal global model from it, so that the third-party device updates the global model according to the model information of the optimal global model and a target global model with a better effect is obtained.
In addition, for ease of understanding, this embodiment provides a specific implementation in which the optimal global model stored in the current preset blockchain is determined and an update request is sent to the third-party device, so that the third-party device updates the global model according to the model information of the optimal global model, as follows:
reading global model information stored in each preset blockchain node of the current preset blockchain to determine an optimal global model stored in the current preset blockchain;
Determining a model sequence number of the optimal global model, and sending an update request carrying the model sequence number to the third party device so that the third party device obtains model information of the optimal global model from a preset blockchain according to the model sequence number, and carries out global model update according to the model information of the optimal global model.
Specifically, the global model information may include performance indices of each global model: for example, after obtaining a global model, the third-party device computes its model accuracy and other performance indices and stores them on the preset blockchain together with the model. The global model information may also include the number of participants whose results were federally aggregated into each global model. A participant downloads the preset blockchain locally and reads the model information of each stored global model in order to select the optimal one; for example, it obtains the number of participants aggregated into each global model and takes the model information of the global model with the most participants as that of the optimal global model for the update, so as to obtain the target global model.
Further, after the optimal global model is selected, an update request carrying the model sequence number of the optimal global model is sent to the third-party device, so that the third-party device obtains the model information of the optimal global model from the preset blockchain according to the sequence number and updates the global model accordingly.
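The selection-and-update flow reads, in sketch form, as below; the on-chain record fields and the `update_global_model` call on the third-party device are hypothetical names, and picking the model with the most aggregated participants is only one of the selection criteria this embodiment mentions.

```python
def select_optimal_model_seq(chain_records):
    # Read the model information of every global model on the chain and
    # pick the one aggregated from the most participants; a stored
    # performance index could equally serve as the criterion.
    best = max(chain_records, key=lambda rec: rec["num_participants"])
    return best["model_seq"]

def request_update(third_party, chain_records):
    # The update request carries only the model sequence number; the
    # third-party device looks the full model information up on-chain.
    seq = select_optimal_model_seq(chain_records)
    return third_party.update_global_model(seq)  # hypothetical API
```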
It should be understood that the foregoing is merely illustrative and does not limit the technical solution of the present invention in any way; those skilled in the art may configure these as needed in practical applications, and they are not enumerated here.
In this embodiment, the optimal global model stored in the current preset blockchain is determined and an update request is sent to the third-party device, so that the third-party device updates the global model according to the model information of the optimal global model, obtains a target global model, and feeds it back; iterative training is then performed according to the target global model until a target federal model meeting the preset federal model requirement is obtained. By exploiting the traceability of the preset blockchain, this solves the prior-art problem that, due to asynchronous computation in federated learning, the results obtained by some participants miss part of the other participants' data and the federal model falls below standard.
Further, based on the first embodiment of the asynchronous calculation method of the present invention, a third embodiment of the asynchronous calculation method of the present invention is presented.
Referring to fig. 4, fig. 4 is a flowchart illustrating a third embodiment of an asynchronous calculation method according to the present invention;
step S101: the third embodiment of the asynchronous computing method is different from the first embodiment of the asynchronous computing method in that before the step of performing iterative training on the local model, the method further includes:
Step S102: sending a participation request to the third party device to join a preset blockchain through the third party device;
step S103: and reading the global model information stored in each preset block chain node of the current preset block chain, and selecting the optimal global model as a local model.
In this embodiment, since the model-training information of each participant is recorded in the preset blockchain, a participant that wants to join the federated learning of other participants can do so in a way that speeds up model training. For example, when participant b needs to participate in federated learning, it sends a participation request carrying its identification information to the third-party device. The third-party device broadcasts participant b's identification information across the whole network of the preset blockchain; the preset blockchain nodes of the other participants verify the identification information upon receipt and feed the verification-passed result back to the third-party device. When the third-party device has received verification-passed messages from the preset blockchain nodes of all participants in the preset blockchain, it creates a head block for participant b and associates participant b's preset blockchain node with the other preset blockchain nodes. The head block contains participant b's identification information, the timestamp at which participant b joined the federated learning, and so on.
After joining the preset blockchain through the third-party device, participant b downloads the preset blockchain locally and reads the model information of each global model stored on it in order to select the optimal global model; for example, it obtains the number of participants federally aggregated into each global model, takes the global model with the most participants as the optimal one, and trains it as its local model, thereby shortening participant b's local-model training time.
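A compact sketch of this join flow follows; the `third_party` methods are hypothetical stand-ins for the message flow described above (broadcast, whole-network verification, head-block creation), which the text fixes only at the level of behavior, not API.

```python
def join_federation(third_party, chain_records, party_id):
    # Participation request: broadcast the identity for whole-network
    # verification, then create the head block (identity + timestamp of
    # joining the federated learning) once every node has verified.
    third_party.broadcast_identity(party_id)
    if not third_party.all_nodes_verified(party_id):
        raise PermissionError("identity verification failed")
    third_party.create_head_block(party_id)
    # Start from the best global model already recorded on the chain,
    # shortening the new participant's local-model training time.
    best = max(chain_records, key=lambda r: r.get("num_participants", 0))
    return best["model_info"]
```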
In this embodiment, a participation request is sent to the third-party device so as to join the preset blockchain through it, and the global model information stored in each preset blockchain node of the current preset blockchain is read so that the optimal global model can be selected as the local model. This speeds up the federated-learning training of newly joined participants and improves the efficiency of federated learning.
In addition, the embodiment also provides an asynchronous computing device. Referring to fig. 5, fig. 5 is a schematic diagram of functional modules of a first embodiment of an asynchronous computing device according to the present invention.
In this embodiment, the asynchronous computing device is a virtual device stored in the memory 1005 of the asynchronous computing device shown in fig. 1, and implements all functions of the asynchronous computing program: performing iterative training on a local model and sending the training result of the iterative training to a third-party device associated with a participant, so that the third-party device obtains a current global model according to the training result, stores model information of the current global model into a preset blockchain, and feeds back the global model; performing iterative training according to the current global model to obtain a federal model; and, if the federal model does not meet the preset federal model requirement, determining the optimal global model stored in the current preset blockchain and sending an update request to the third-party device so as to obtain, through the third-party device, a target federal model meeting the preset federal model requirement.
Specifically, referring to fig. 5, the asynchronous computing device includes:
The first training module 10 is configured to perform iterative training on a local model and send the training result of the iterative training to a third-party device associated with the participant, so that the third-party device obtains a current global model according to the training result, stores model information of the current global model into a preset blockchain, and feeds back the global model in encrypted form;
a second training module 20, configured to perform iterative training according to the current global model, so as to obtain a federal model;
and the third training module 30 is configured to determine an optimal global model stored in a current preset blockchain if the federation model does not meet the preset federation model requirement, and send an update request to the third party device, so as to obtain, by using the third party device, a target federation model that meets the preset federation model requirement, and obtain, by using the target federation model, an operation behavior of a user to be predicted.
The asynchronous computing device provided by this embodiment iteratively trains the local model and sends the training result of the iterative training to the third-party device associated with the participant, so that the third-party device obtains a current global model according to the training result, stores model information of the current global model into a preset blockchain, and feeds back the global model; performs iterative training according to the current global model to obtain a federal model; and, if the federal model does not meet the preset federal model requirement, determines the optimal global model stored in the current preset blockchain and sends an update request to the third-party device so as to obtain, through the third-party device, a target federal model meeting the requirement. The computation results of all participants are thus kept on-chain using the preset blockchain, which facilitates asynchronous computation among the participants: when the obtained federal model falls below standard, the optimal global model can be fetched from the preset blockchain to continue training until a federal model meeting the standard is obtained. This solves the prior-art problem that, under asynchronous computation in federated learning, the federal models obtained by some participants fall below standard; by exploiting the traceability of the preset blockchain, the accuracy of federated learning results under asynchronous computation is improved, and with it the accuracy of the prediction of the operation behavior of the user to be predicted.
In addition, the embodiment of the present invention further provides a computer storage medium, where an asynchronous calculation program is stored, and when the asynchronous calculation program is executed by a processor, the steps of the asynchronous calculation method described above are implemented, which is not described herein again.
In addition, the embodiment of the present invention further provides a computer program product, which includes a computer program, and the computer program when executed by a processor implements the steps of the asynchronous calculation method as described above, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is preferred. Based on this understanding, the technical solution of the present invention, or the part of it contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) and comprising several instructions for causing a terminal device to perform the method of the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, and all equivalent structures or modifications in the structures or processes described in the specification and drawings, or the direct or indirect application of the present invention to other related technical fields, are included in the scope of the present invention.

Claims (11)

1. An asynchronous computing method, applied to a participant in federated learning, the asynchronous computing method comprising:
performing iterative training on a local model, and sending a training result of the iterative training to a third-party device associated with the participant, so that the third-party device obtains a current global model according to the training result, stores model information of the current global model in a preset blockchain, and feeds back the global model in encrypted form;
performing iterative training according to the current global model to obtain a federal model; and
if the federal model does not meet a preset federal model requirement, determining an optimal global model stored in the current preset blockchain, and sending an update request to the third-party device, so as to obtain, through the third-party device, a target federal model meeting the preset federal model requirement, and obtaining the operation behavior of a user to be predicted through the target federal model.
2. The asynchronous computing method according to claim 1, wherein the step of determining an optimal global model stored in the current preset blockchain and sending an update request to the third-party device so as to obtain, through the third-party device, a target federal model meeting the preset federal model requirement comprises:
determining the optimal global model stored in the current preset blockchain, and sending an update request to the third-party device, so that the third-party device performs a global model update according to model information of the optimal global model to obtain a target global model, and feeds back the target global model; and
performing iterative training according to the target global model until a target federal model meeting the preset federal model requirement is obtained.
3. The asynchronous computing method according to claim 2, wherein the step of determining the optimal global model stored in the current preset blockchain and sending an update request to the third-party device so that the third-party device performs a global model update according to model information of the optimal global model comprises:
reading the global model information stored in each preset blockchain node of the current preset blockchain to determine the optimal global model stored in the current preset blockchain; and
determining a model sequence number of the optimal global model, and sending an update request carrying the model sequence number to the third-party device, so that the third-party device obtains the model information of the optimal global model from the preset blockchain according to the model sequence number and performs the global model update according to that model information.
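A brief sketch of this lookup-and-update step: scan the model information held at each simulated blockchain node, select the best global model, and build an update request carrying only its model sequence number, leaving the third-party device to resolve that number against the chain. ModelRecord, optimal_record, and build_update_request are names invented for the illustration.

```python
# Hedged illustration of the claim-3 lookup; all structures are assumptions.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    seq: int          # model sequence number
    accuracy: float   # recorded performance of the global model

def optimal_record(nodes):
    """Read the global-model info stored on every node and keep the best."""
    records = [rec for node in nodes for rec in node]
    return max(records, key=lambda r: r.accuracy)

def build_update_request(record):
    # The request carries only the sequence number; the third-party device
    # resolves it to full model info by reading the preset blockchain itself.
    return {"type": "update", "model_seq": record.seq}

nodes = [
    [ModelRecord(1, 0.81), ModelRecord(2, 0.84)],
    [ModelRecord(3, 0.88)],
]
best = optimal_record(nodes)
print(build_update_request(best))  # {'type': 'update', 'model_seq': 3}
```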
4. The asynchronous computing method according to claim 1, wherein the step of determining that the federal model does not meet the preset federal model requirement comprises:
determining a performance index of the federal model; and
if the performance index of the federal model does not meet a preset performance index requirement, judging that the federal model does not meet the preset federal model requirement; or
acquiring, based on the preset blockchain, the number of participants participating in the federal model; and
if the number of participants does not meet a preset number of participants, judging that the federal model does not meet the preset federal model requirement; or
determining a degree of model performance improvement of the federal model relative to the local model; and
if the degree of model performance improvement is smaller than a preset degree of performance improvement, judging that the federal model does not meet the preset federal model requirement.
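As a hedged illustration of these three alternative standards, the sketch below treats any one failed check as grounds for judging the federal model below requirement; the thresholds and dictionary fields are assumed for the example and do not come from the patent.

```python
# Any single alternative in claim 4 suffices to mark the model below standard.
def below_standard(federal, local,
                   min_accuracy=0.9,       # assumed preset performance-index requirement
                   min_participants=3,     # assumed preset number of participants
                   min_improvement=0.01):  # assumed preset performance-improvement degree
    if federal["accuracy"] < min_accuracy:
        return True                        # performance index not met
    if federal["participants"] < min_participants:
        return True                        # too few participants recorded on-chain
    if federal["accuracy"] - local["accuracy"] < min_improvement:
        return True                        # too little gain over the local model
    return False

federal = {"accuracy": 0.92, "participants": 2}
local = {"accuracy": 0.90}
print(below_standard(federal, local))      # True: only two participants
```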
5. The asynchronous computing method according to claim 1, wherein the step of performing iterative training according to the current global model to obtain a federal model further comprises:
taking the current global model as the local model, and returning to the step of performing iterative training on the local model and sending the training result of the iterative training to the third-party device associated with the participant, so that the third-party device obtains a current global model according to the training result, stores model information of the current global model in each preset blockchain node of the preset blockchain, and feeds back the global model; and
continuing to perform the step of taking the current global model as the local model and iteratively training it until a preset number of training iteration rounds is reached, so as to obtain a target federal model.
6. The asynchronous computing method according to claim 1, wherein before the step of performing iterative training on the local model, the method further comprises:
sending a participation request to the third-party device so as to join the preset blockchain through the third-party device; and
reading the global model information stored in each preset blockchain node of the current preset blockchain, and selecting the optimal global model as the local model.
7. The asynchronous computing method according to any one of claims 1 to 6, wherein the step of sending a training result of the iterative training to the third-party device associated with the participant, so that the third-party device obtains a current global model according to the training result and stores model information of the current global model in the preset blockchain, comprises:
sending the training result of the iterative training to the third-party device associated with the participant, wherein the third-party device encrypts and signs the training result, packages it into a target block, adds the target block to the preset blockchain, obtains a current global model from the training results, stored in the preset blockchain, of the plurality of participants currently participating in federated learning, and stores model information of the current global model in the preset blockchain.
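The on-chain bookkeeping described in this claim can be sketched as follows, with a simulated third-party device that signs a training result, seals it into a block whose hash chains to the previous block, and appends it to a list standing in for the preset blockchain. The HMAC signature from the Python standard library is purely a stand-in for the encrypted signature the claim describes, and every name here is hypothetical.

```python
# Illustrative block sealing; real deployments would use proper asymmetric
# signatures and a real ledger, not HMAC over an in-memory list.
import hashlib, hmac, json, time

SECRET = b"demo-key"  # assumed shared key, for illustration only

def seal_block(chain, training_result):
    payload = json.dumps(training_result, sort_keys=True).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"payload": payload.decode(), "signature": signature,
             "prev_hash": prev_hash, "timestamp": time.time()}
    # The block hash covers payload, signature, and the previous hash,
    # which is what makes the stored training results traceable.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return block

chain = []
seal_block(chain, {"participant": "A", "weights": [0.1, 0.2]})
seal_block(chain, {"participant": "B", "weights": [0.3, 0.4]})
print(len(chain), chain[1]["prev_hash"] == chain[0]["hash"])  # 2 True
```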
8. An asynchronous computing device, the asynchronous computing device comprising:
a first training module, configured to perform iterative training on a local model, and send a training result of the iterative training to a third-party device associated with a participant in federated learning, so that the third-party device obtains a current global model according to the training result, stores model information of the current global model in a preset blockchain, and feeds back the global model in encrypted form;
a second training module, configured to perform iterative training according to the current global model to obtain a federal model; and
a third training module, configured to: if the federal model does not meet a preset federal model requirement, determine an optimal global model stored in the current preset blockchain, send an update request to the third-party device so as to obtain, through the third-party device, a target federal model meeting the preset federal model requirement, and obtain the operation behavior of a user to be predicted through the target federal model.
9. An asynchronous computing device, comprising a processor, a memory, and an asynchronous computing program stored in the memory, wherein the asynchronous computing program, when executed by the processor, implements the steps of the asynchronous computing method according to any one of claims 1 to 7.
10. A computer storage medium, wherein an asynchronous computing program is stored on the computer storage medium, and the asynchronous computing program, when executed by a processor, implements the steps of the asynchronous computing method according to any one of claims 1 to 7.
11. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the steps of the asynchronous computing method according to any one of claims 1 to 7.
CN202110134143.1A 2021-01-29 2021-01-29 Asynchronous computing method, device, apparatus, storage medium, and program product Active CN112801301B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110134143.1A CN112801301B (en) 2021-01-29 2021-01-29 Asynchronous computing method, device, apparatus, storage medium, and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110134143.1A CN112801301B (en) 2021-01-29 2021-01-29 Asynchronous computing method, device, apparatus, storage medium, and program product

Publications (2)

Publication Number Publication Date
CN112801301A CN112801301A (en) 2021-05-14
CN112801301B true CN112801301B (en) 2024-08-09

Family

ID=75813275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110134143.1A Active CN112801301B (en) 2021-01-29 2021-01-29 Asynchronous computing method, device, apparatus, storage medium, and program product

Country Status (1)

Country Link
CN (1) CN112801301B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469371B (en) * 2021-07-01 2023-05-02 建信金融科技有限责任公司 Federal learning method and apparatus
CN114548426B (en) * 2022-02-17 2023-11-24 北京百度网讯科技有限公司 Asynchronous federal learning method, business service prediction method, device and system
CN114996317B (en) * 2022-07-05 2024-02-23 中国电信股份有限公司 Asynchronous optimization method and device based on longitudinal federal learning and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111552986A (en) * 2020-07-10 2020-08-18 鹏城实验室 Block chain-based federal modeling method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10270599B2 (en) * 2017-04-27 2019-04-23 Factom, Inc. Data reproducibility using blockchains

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111552986A (en) * 2020-07-10 2020-08-18 鹏城实验室 Block chain-based federal modeling method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN112801301A (en) 2021-05-14

Similar Documents

Publication Publication Date Title
CN112801301B (en) Asynchronous computing method, device, apparatus, storage medium, and program product
US10360257B2 (en) System and method for image annotation
CN110378488B (en) Client-side change federal training method, device, training terminal and storage medium
CN105431844B (en) Third party for search system searches for application
CN104067563B (en) Data distribution platform
CN111325417B (en) Method and device for realizing privacy protection and realizing multi-party collaborative updating of business prediction model
US20120166518A1 (en) Providing state service for online application users
CN109087116A (en) Accumulated point exchanging method, integral transaction system and computer readable storage medium
CN109345334A (en) Method and device for changing order information, electronic equipment and readable storage medium
CN111125420B (en) Object recommendation method and device based on artificial intelligence and electronic equipment
CN108241539B (en) Interactive big data query method and device based on distributed system, storage medium and terminal equipment
CN105300398B (en) The methods, devices and systems of gain location information
CN112307331B (en) Intelligent recruitment information pushing method, system and terminal equipment for college graduates based on blockchain
CN110738479A (en) Order management method and system based on multi-person ordering
CN113268336A (en) Service acquisition method, device, equipment and readable medium
CN113807415B (en) Federal feature selection method, federal feature selection device, federal feature selection computer device, and federal feature selection storage medium
US20200265514A1 (en) Recording medium recording communication program and communication apparatus
CN113051413B (en) Multimedia information processing method and device, electronic equipment and storage medium
KR102248487B1 (en) Review Contents Providing Method and Apparatus Thereof
CN111008873B (en) User determination method, device, electronic equipment and storage medium
CN105450513B (en) File the method and cloud storage service device of Email attachment
CN112560939B (en) Model verification method and device and computer equipment
CN110958565A (en) Method and device for calculating signal distance, computer equipment and storage medium
CN110414260B (en) Data access method, device, system and storage medium
CN110032499B (en) Page user loss analysis method and device, server and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant