CN112732297B - Method and device for updating federal learning model, electronic equipment and storage medium - Google Patents

Method and device for updating federal learning model, electronic equipment and storage medium

Info

Publication number
CN112732297B
Authority
CN
China
Prior art keywords
gradient
learning model
formula
matrix
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011640520.0A
Other languages
Chinese (zh)
Other versions
CN112732297A (en)
Inventor
朱星华
王健宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202011640520.0A priority Critical patent/CN112732297B/en
Priority to PCT/CN2021/083180 priority patent/WO2022141839A1/en
Publication of CN112732297A publication Critical patent/CN112732297A/en
Application granted granted Critical
Publication of CN112732297B publication Critical patent/CN112732297B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 - Arrangements for software engineering
    • G06F 8/60 - Software deployment
    • G06F 8/65 - Updates
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer And Data Communications (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to intelligent decision technology and discloses a method for updating a federated learning model, which comprises the following steps: a server side receives a data removal request sent by a target client participating in federated learning; after removing the data according to the request, the server side sends a data removal message to the target client and the other clients participating in federated learning; the target client calculates gradient parameters for the removed data according to a preset gradient formula, and the other clients calculate Hessian matrices according to a preset matrix formula; the parameters of the federated learning model are then updated according to the gradient parameters, the Hessian matrices and a preset model parameter formula. The invention also provides a device for updating the federated learning model, an electronic device and a computer-readable storage medium. The method can solve the problem of low updating efficiency of the federated learning model.

Description

Method and device for updating federal learning model, electronic equipment and storage medium
Technical Field
The invention relates to the field of intelligent decision making, in particular to a method and a device for updating a federated learning model, electronic equipment and a computer-readable storage medium.
Background
Nowadays, people pay increasing attention to the security of data and information. To guarantee information security during big-data exchange and to protect terminal data and personal data privacy, a method has emerged for performing efficient machine learning among multiple participants or computing nodes, namely federated learning.
In the prior art, if a participant wants to quit the federation after completing federated model training, or wants to remove the influence of part of its private data on the federated model, the remaining data must be used to retrain the federated model in order to update its parameters. However, this approach is very costly in computing power and communication, and requires the cooperation of all participants, which is time-consuming and labor-intensive. Therefore, in the prior art, updating the parameters of the federated learning model after data is removed is inefficient.
Disclosure of Invention
The invention provides an updating method and device of a federated learning model and a computer readable storage medium, and mainly aims to solve the problem of low updating efficiency of the federated learning model.
In order to achieve the above object, the present invention provides an updating method of a federal learning model applied to a server, including:
receiving a data removal request sent by a target client participating in federal learning;
after deleting the removed data from the data used for the federated learning model, sending a data removal message to all clients participating in federated learning, so that the target client calculates gradient parameters of the removed data with respect to the federated learning model according to a gradient formula, and each client calculates, according to a matrix formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in that client after the removed data is removed;
receiving the gradient parameters sent by the target client and the Hessian matrix sent by each client;
calculating an update parameter according to the gradient parameters, the Hessian matrices and a preset model parameter formula;
and updating the federated learning model by using the update parameter.
Optionally, the preset model parameter formula includes:
w^(-m) = w + ( ∑_{i=1}^{k} H_i )^(-1) Δ
where w^(-m) is the updated parameter, w is the parameter vector of the federated learning model, H_i is the Hessian matrix of the i-th client, Δ is the gradient parameter, m is the number of removed data, and k is the number of all clients participating in federated learning.
Optionally, after updating the federated learning model by using the update parameter, the method further includes:
and sending the updated federal learning model to all clients participating in federal learning.
Optionally, before calculating the update parameter according to the gradient parameters, the Hessian matrices and a preset model parameter formula, the method further includes:
judging whether the gradient parameters or the Hessian matrices are encrypted data;
and if the gradient parameters or the Hessian matrices are encrypted data, decrypting the encrypted data according to a preset decryption formula to obtain the decrypted gradient parameters and Hessian matrices.
Optionally, the decrypting the encrypted data according to a preset decryption formula to obtain the decrypted gradient parameters and Hessian matrices includes:
the preset decryption formula is as follows:
m = L(c^λ mod n²) · μ mod n
L(x) = (x - 1) / n
where m is a decrypted gradient parameter or Hessian matrix, c is an encrypted gradient parameter or Hessian matrix, mod is the modulo operator, n = p × q, where p and q are large prime numbers such that the greatest common divisor of pq and (p-1)(q-1) is 1, λ is the Carmichael function, and μ is a preset parameter.
Optionally, after receiving the data removal request sent by the target client participating in federated learning, the method further includes:
and opening listening ports according to the number of clients.
Optionally, the receiving the gradient parameters sent by the target client and the Hessian matrix sent by each client includes:
acquiring the listening ports;
and receiving, through the listening ports and by using a preset request-response protocol, the gradient parameters sent by the target client and the Hessian matrix sent by each client.
In order to solve the above problem, the present invention further provides an apparatus for updating a federated learning model, applied to a server side, where the apparatus includes:
the data request module is used for receiving a data removal request sent by a target client participating in federal learning;
a message sending module, configured to send a data removal message to all clients participating in federated learning after the removed data is deleted from the data used for the federated learning model, so that the target client calculates gradient parameters of the removed data with respect to the federated learning model according to a gradient formula, and each client calculates, according to a matrix formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in that client after the removed data is removed;
the parameter receiving module is used for receiving the gradient parameters sent by the target client and the Hessian matrix sent by each client;
the update parameter calculation module is used for calculating an update parameter according to the gradient parameters, the Hessian matrices and a preset model parameter formula;
and the model updating module is used for updating the federal learning model by using the updating parameters.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one instruction; and
and the processor executes the instructions stored in the memory to realize the updating method of the federal learning model.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, where at least one instruction is stored, and the at least one instruction is executed by a processor in an electronic device to implement the above update method of the federal learning model.
The gradient parameters sent by the target client and the Hessian matrices sent by each client are received; an update parameter is calculated according to the gradient parameters, the Hessian matrices and a preset model parameter formula; and the federated learning model is updated by using the update parameter. After data used for federated learning is deleted, the federated learning model does not need to be retrained, the usability and accuracy of the federated learning model are maintained, and the updating efficiency of the federated learning model is improved. Meanwhile, the gradient parameters and the Hessian matrices are calculated concurrently by the clients, so that the federated learning model can be updated rapidly, the computing pressure on the server side is reduced, and the overall operation efficiency is improved. Therefore, the method, the apparatus, the electronic device and the computer-readable storage medium for updating the federated learning model provided by the invention can solve the problem of low updating efficiency of the federated learning model.
Drawings
Fig. 1 is a schematic flow chart of an updating method of a federal learning model according to a first method embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram illustrating a method for updating a federated learning model in accordance with a second embodiment of the present invention;
fig. 3 is a block diagram of an updating apparatus of the federal learning model according to an embodiment of the present invention;
fig. 4 is a schematic internal structural diagram of an electronic device implementing an update method of a federal learning model according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides an updating method of a federated learning model. The execution subject of the update method of the federal learning model includes, but is not limited to, at least one of electronic devices, such as a server, a terminal, and the like, which can be configured to execute the method provided by the embodiments of the present application. In other words, the update method of the federal learning model may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Referring to fig. 1, a flow chart of a method for updating a federal learning model according to a first method embodiment of the present invention is schematically shown. In this embodiment, the method for updating the federal learning model is applied to a server, and includes:
and S11, receiving a data removal request sent by the target client participating in the federal learning.
The server is a server participating in federal learning. One server may have multiple clients.
In the embodiment of the invention, the server side opens listening ports according to the number of clients, so as to perform data transmission with each client.
For example, if the number of the clients is K, the server opens K listening ports.
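As an illustration of this step, the sketch below opens one listening socket per participating client; the patent does not specify the transport or port scheme, so plain TCP sockets and the base_port parameter are assumptions made here for illustration only.

```python
import socket

def open_listening_ports(base_port: int, num_clients: int):
    """Open one listening socket per participating client (K clients -> K ports)."""
    sockets = []
    for i in range(num_clients):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # allow quick rebinding
        s.bind(("0.0.0.0", base_port + i))
        s.listen(1)
        sockets.append(s)
    return sockets
```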
And S12, after the removed data is deleted from the data used for the federated learning model, sending a data removal message to all clients participating in federated learning, so that the target client calculates gradient parameters of the removed data with respect to the federated learning model according to a gradient formula, and each client calculates, according to a matrix formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in that client after the removed data is removed.
In the embodiment of the invention, if a certain client has data to be removed, that client calculates the gradient parameters; meanwhile, each client participating in federated learning calculates a Hessian matrix, so that a plurality of Hessian matrices are obtained. That is, the other clients, including the target client, perform the calculation to obtain the Hessian matrices.
Specifically, please refer to the description in the second embodiment of the method of the present invention for the description of the client-side computing.
And S13, receiving the gradient parameters sent by the target client and the Hessian matrix sent by each client.
In this embodiment of the present invention, the receiving the gradient parameters sent by the target client and the Hessian matrix sent by each client includes:
acquiring the listening ports;
and receiving, through the listening ports and by using a preset request-response protocol, the gradient parameters sent by the target client and the Hessian matrix sent by each client.
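The request-response protocol is not specified in the patent; the sketch below assumes a simple exchange in which each client opens one connection to its listening socket and sends a single JSON object, with hypothetical field names "gradient" (present only for the target client) and "hessian".

```python
import json

def receive_parameters(listening_sockets):
    """Collect the gradient parameter from the target client and a Hessian matrix from every client."""
    gradient = None
    hessians = []
    for s in listening_sockets:
        conn, _addr = s.accept()
        chunks = []
        while True:                      # read until the client closes the connection
            chunk = conn.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
        conn.close()
        payload = json.loads(b"".join(chunks).decode("utf-8"))
        if "gradient" in payload:        # only the target client reports a gradient
            gradient = payload["gradient"]
        hessians.append(payload["hessian"])
    return gradient, hessians
```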
And S14, calculating an update parameter according to the gradient parameters, the Hessian matrices and a preset model parameter formula.
Preferably, the preset model parameter formula includes:
w^(-m) = w + ( ∑_{i=1}^{k} H_i )^(-1) Δ
where w^(-m) is the updated parameter, w is the parameter vector of the federated learning model, H_i is the Hessian matrix of the i-th client, Δ is the gradient parameter, m is the number of removed data, and k is the number of all clients participating in federated learning.
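A minimal numpy sketch of this update step, assuming the reconstructed form of the model parameter formula given above; the plain (unweighted) sum of the client Hessian matrices and the sign convention for Δ are assumptions, not statements of the patented method.

```python
import numpy as np

def compute_updated_parameters(w, hessians, delta):
    """w: (d,) current parameters; hessians: list of (d, d) client Hessian matrices;
    delta: (d,) gradient parameter of the removed data."""
    h_total = np.sum(hessians, axis=0)        # aggregate the k client Hessian matrices
    step = np.linalg.solve(h_total, delta)    # (sum_i H_i)^(-1) @ delta without an explicit inverse
    return w + step                           # updated parameter vector w^(-m)
```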
In an embodiment of the present invention, before calculating the update parameter according to the gradient parameters, the Hessian matrices and the preset model parameter formula, the method further includes:
judging whether the gradient parameters or the Hessian matrices are encrypted data;
and if the gradient parameters or the Hessian matrices are encrypted data, decrypting the encrypted data according to a preset decryption formula to obtain the decrypted gradient parameters and Hessian matrices.
Specifically, the decrypting the encrypted data according to a preset decryption formula to obtain the decrypted gradient parameters and Hessian matrices includes:
the preset decryption formula is as follows:
m = L(c^λ mod n²) · μ mod n
L(x) = (x - 1) / n
where m is a decrypted gradient parameter or Hessian matrix, c is an encrypted gradient parameter or Hessian matrix, mod is the modulo operator, n = p × q, where p and q are large prime numbers such that the greatest common divisor of pq and (p-1)(q-1) is 1, λ is the Carmichael function, and μ is a preset parameter.
In detail, the encrypted gradient parameter or Hessian matrix is decrypted by using the public key (n, g), so that the decrypted gradient parameter or Hessian matrix is obtained.
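The decryption formula above has the form of the Paillier cryptosystem, which is assumed in the sketch below; matrix-valued ciphertexts would be decrypted element-wise.

```python
def paillier_decrypt(c: int, lam: int, mu: int, n: int) -> int:
    """Decrypt a single ciphertext c with (lam, mu) under modulus n:
    m = L(c^lam mod n^2) * mu mod n."""
    def L(x: int) -> int:
        return (x - 1) // n               # the L function of the Paillier cryptosystem
    return (L(pow(c, lam, n * n)) * mu) % n
```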
In detail, after the federated learning model is updated by using the update parameter, the method further includes:
and sending the updated federal learning model to all clients participating in federal learning.
The gradient parameters sent by the target client and the Hessian matrices sent by each client are received; an update parameter is calculated according to the gradient parameters, the Hessian matrices and a preset model parameter formula; and the federated learning model is updated by using the update parameter. After data used for federated learning is deleted, the federated learning model does not need to be retrained, the usability and accuracy of the federated learning model are maintained, and the updating efficiency of the federated learning model is improved. Meanwhile, the gradient parameters and the Hessian matrices are calculated concurrently by the clients, so that the federated learning model can be updated rapidly, the computing pressure on the server side is reduced, and the overall operation efficiency is improved. Therefore, the method for updating the federated learning model provided by the invention can solve the problem of low updating efficiency of the federated learning model.
Fig. 2 is a flow chart illustrating a method for updating the federal learning model according to a second embodiment of the present invention. In this embodiment, the method for updating the federal learning model is applied to a client, and includes:
S21, when removal data to be removed from the federated learning model exists in the client, obtaining the federated learning model.
In an embodiment of the present invention, the federated learning model may be a federated linear model or a federated logistic regression model.
S22, calculating gradient parameters of the removed data with respect to the federated learning model according to a gradient formula, and calculating, according to a matrix formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in the client after the removed data is removed.
Further, the gradient formula includes a first gradient sub-formula and a second gradient sub-formula, the matrix formula includes a first matrix sub-formula and a second matrix sub-formula, and after obtaining the federal learning model, the method further includes:
judging the type of the federal learning model;
if the federated learning model belongs to a federated linear model, calculating gradient parameters of the removed data with respect to the federated learning model according to the first gradient sub-formula, and calculating, according to the first matrix sub-formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in the client after the removed data is removed;
if the federated learning model belongs to a federated logistic regression model, calculating gradient parameters of the removed data with respect to the federated learning model according to the second gradient sub-formula, and calculating, according to the second matrix sub-formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in the client after the removed data is removed.
Specifically, the first gradient sub-formula includes:
[first gradient sub-formula given as an image in the original]
where Δ is the gradient parameter, m is the number of removed data, λ is a regularization factor, w is the parameter vector of the federated learning model, b is a preset loss perturbation factor, and x_j and y_j are the input data of the model.
Specifically, the second gradient sub-formula includes:
[second gradient sub-formula given as an image in the original]
where Δ is the gradient parameter, m is the number of removed data, w is the parameter vector of the federated learning model, and x_j and y_j are the input data of the model.
Specifically, the first matrix sub-formula includes:
[first matrix sub-formula given as an image in the original]
where the formula involves the Hessian matrix of client i, the loss function, the full data contained in client i and the removed data of client i; x_j and y_j are the input data of the model.
In this embodiment, when the federated learning model is a federated linear model, the loss function of the federated learning model includes:
[loss function of the federated linear model given as an image in the original]
where λ is a regularization factor, w is the parameter vector of the federated learning model, b is a preset loss perturbation factor, (x, y) is the input data of the federated learning model, and n is the number of clients participating in federated learning.
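Since the first gradient and matrix sub-formulas are only available as images in the published text, the sketch below shows the analogous quantities for a standard L2-regularized squared loss; the perturbation factor b is omitted and the exact scaling factors of the patented formulas may differ.

```python
import numpy as np

def linear_removed_gradient(w, X_removed, y_removed, lam):
    """Gradient contributed by the m removed samples under a squared loss with L2 regularization."""
    m = X_removed.shape[0]
    residual = X_removed @ w - y_removed            # (m,)
    return X_removed.T @ residual + m * lam * w     # assumed form of the gradient parameter

def linear_hessian_after_removal(X_kept, lam):
    """Hessian of the loss over the client's remaining (full minus removed) data."""
    n_kept, d = X_kept.shape
    return X_kept.T @ X_kept + n_kept * lam * np.eye(d)   # assumed form of the client Hessian
```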
Specifically, the second matrix sub-formula includes:
[second matrix sub-formula given as an image in the original]
where the formula involves the Hessian matrix, the loss function and the transpose operation.
In this embodiment, when the federated learning model is a federated logistic regression model, the loss function of the federated learning model includes:
[loss function of the federated logistic regression model given as an image in the original]
where x and y are the input data of the model.
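Likewise, the second gradient and matrix sub-formulas are only available as images; the sketch below assumes a standard binary logistic loss with labels in {-1, +1} and omits regularization, so it illustrates the shape of the computation rather than the exact patented formulas.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_removed_gradient(w, X_removed, y_removed):
    """Gradient contributed by the m removed samples."""
    margins = y_removed * (X_removed @ w)            # (m,)
    coeffs = -y_removed * _sigmoid(-margins)         # derivative of log(1 + exp(-y * w.x)) per sample
    return X_removed.T @ coeffs                      # assumed form of the gradient parameter

def logistic_hessian_after_removal(w, X_kept):
    """Hessian of the logistic loss over the client's remaining data."""
    p = _sigmoid(X_kept @ w)                         # predicted probabilities
    weights = p * (1.0 - p)
    return (X_kept * weights[:, None]).T @ X_kept    # assumed form of the client Hessian
```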
S23, sending the gradient parameters and the Hessian matrix to the server side, so that the server side updates the parameters of the federated learning model according to the gradient parameters and the Hessian matrix.
In the embodiment of the invention, when the client and the server are in a connected state, the gradient parameters and the Hessian matrix are sent to the server side, so that the server side adjusts the parameters of the federated learning model according to the gradient parameters and the Hessian matrix.
Specifically, in this embodiment of the present invention, before sending the gradient parameters and the Hessian matrix to the server side, the method further includes:
and carrying out encryption calculation on the gradient parameters and the Hessian matrix.
Further, the performing encryption calculation on the gradient parameters and the Hessian matrix includes:
randomly selecting two large prime numbers p and q meeting a preset condition, namely that the greatest common divisor of pq and (p-1)(q-1) is 1;
calculating n = p × q, and satisfying λ(n) = lcm(p-1, q-1), where lcm is the least common multiple and λ is the Carmichael function;
randomly selecting a positive integer g less than n², and calculating μ = ( L(g^λ mod n²) )^(-1) mod n;
obtaining a public key (n, g) and a private key (λ, μ) according to n, g, λ and μ;
and encrypting the gradient parameters and the Hessian matrix by using the private key (λ, μ) to obtain the encrypted gradient parameters and the encrypted Hessian matrix.
A prime number refers to a natural number greater than 1 having no factors other than 1 and itself, and a large prime number refers to a relatively large natural number satisfying the definition of a prime number.
Further, in the embodiment of the present invention, the client transmits the public key to the server side, and encrypts the gradient parameters and the Hessian matrix by using the private key (λ, μ) to obtain the encrypted gradient parameters and the encrypted Hessian matrix.
The embodiment of the invention utilizes the private key to encrypt the gradient parameters and the Hessian matrix, thereby improving the security of data transmission.
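A sketch of the described key generation together with textbook Paillier encryption; the encryption formula itself is not written out in the text and is assumed here, and whereas the text states that the private key (λ, μ) is used for encryption, the textbook scheme shown encrypts with the public key (n, g). The choice g = n + 1 is a common simplification of the random selection of g, and matrix-valued plaintexts would be encoded and encrypted element-wise.

```python
import math
import random

def paillier_keygen(p: int, q: int):
    """Derive the public key (n, g) and private key (lam, mu) from primes p and q."""
    assert math.gcd(p * q, (p - 1) * (q - 1)) == 1   # preset condition on p and q
    n = p * q
    lam = math.lcm(p - 1, q - 1)                     # lambda(n) = lcm(p - 1, q - 1)
    g = n + 1                                        # simplified choice of g < n^2
    L = lambda x: (x - 1) // n
    mu = pow(L(pow(g, lam, n * n)), -1, n)           # mu = (L(g^lam mod n^2))^(-1) mod n
    return (n, g), (lam, mu)

def paillier_encrypt(m: int, n: int, g: int) -> int:
    """Textbook Paillier encryption c = g^m * r^n mod n^2 (assumed, not quoted from the text)."""
    r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)
```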
According to the embodiment of the invention, the gradient parameters and the Hessian matrices are calculated in parallel at the clients, which reduces the computing pressure on the server side and speeds up the obtaining of the gradient parameters and the Hessian matrices; the calculated gradient parameters and Hessian matrices are then transmitted to the server side, so that the server side can rapidly update the model. Therefore, the method for updating the federated learning model provided by the invention can solve the problem of low updating efficiency of the federated learning model.
Fig. 3 is a functional block diagram of an updating apparatus of the federal learning model according to an embodiment of the present invention.
The updating apparatus of the federated learning model according to the present invention may be divided into a first updating apparatus 100 of the federated learning model and a second updating apparatus 200 of the federated learning model. The first updating apparatus 100 may be installed in a server side, and the second updating apparatus 200 may be installed in a client.
According to the implemented functions, the first updating apparatus 100 of the federated learning model may include a data request module 101, a message sending module 102, a parameter receiving module 103, an update parameter calculation module 104 and a model updating module 105; and the second updating apparatus 200 of the federated learning model may include a model obtaining module 201, a gradient and matrix calculation module 202 and a parameter sending module 203.
The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In this embodiment, the functions of the modules in the first updating apparatus 100 and the second updating apparatus 200 of the federated learning model are as follows:
the data request module 101 is configured to receive a data removal request sent by a target client participating in federal learning;
the message sending module 102 is configured to send a data removal message to all clients participating in federated learning after the removed data is deleted from the data used for the federated learning model, so that the target client calculates gradient parameters of the removed data with respect to the federated learning model according to a gradient formula, and each client calculates, according to a matrix formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in that client after the removed data is removed;
the parameter receiving module 103 is configured to receive the gradient parameters sent by the target client and the Hessian matrix sent by each client;
the update parameter calculation module 104 is configured to calculate an update parameter according to the gradient parameters, the Hessian matrices and a preset model parameter formula;
the model updating module 105 is configured to update the federal learning model by using the update parameters.
the model obtaining module 201 is configured to obtain the federated learning model when removal data to be removed from the federated learning model exists in the client;
the gradient and matrix calculation module 202 is configured to calculate gradient parameters of the removed data with respect to the federated learning model according to a gradient formula, and to calculate, according to a matrix formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in the client after the removed data is removed;
the parameter sending module 203 is configured to send the gradient parameters and the Hessian matrix to the server side, so that the server side updates the parameters of the federated learning model according to the gradient parameters and the Hessian matrix.
In detail, the specific implementation of each module of the first updating apparatus 100 of the federated learning model is as follows:
the data request module 101 is configured to receive a data removal request sent by a target client participating in federal learning.
The server is a server participating in federal learning. One server may have multiple clients.
In the embodiment of the invention, the server side opens listening ports according to the number of clients, so as to perform data transmission with each client.
For example, if the number of the clients is K, the server opens K listening ports.
The message sending module 102 is configured to send a data removal message to all clients participating in federated learning after the removed data is deleted from the data used for the federated learning model, so that the target client calculates gradient parameters of the removed data with respect to the federated learning model according to a gradient formula, and each client calculates, according to a matrix formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in that client after the removed data is removed.
In the embodiment of the invention, if a certain client has data to be removed, that client calculates the gradient parameters; meanwhile, each client participating in federated learning calculates a Hessian matrix, so that a plurality of Hessian matrices are obtained. That is, the other clients, including the target client, perform the calculation to obtain the Hessian matrices.
Specifically, please refer to the description of the second embodiment of the method of the present invention for the description of the client-side computing.
The parameter receiving module 103 is configured to receive the gradient parameters sent by the target client and the Hessian matrix sent by each client.
In this embodiment of the present invention, the receiving the gradient parameters sent by the target client and the Hessian matrix sent by each client includes:
acquiring the listening ports;
and receiving, through the listening ports and by using a preset request-response protocol, the gradient parameters sent by the target client and the Hessian matrix sent by each client.
The update parameter calculation module 104 is configured to calculate an update parameter according to the gradient parameters, the Hessian matrices and a preset model parameter formula.
Preferably, the preset model parameter formula includes:
w^(-m) = w + ( ∑_{i=1}^{k} H_i )^(-1) Δ
where w^(-m) is the updated parameter, w is the parameter vector of the federated learning model, H_i is the Hessian matrix of the i-th client, Δ is the gradient parameter, m is the number of removed data, and k is the number of all clients participating in federated learning.
In the embodiment of the present invention, before calculating the update parameter according to the gradient parameters, the Hessian matrices and the preset model parameter formula, the apparatus further includes a decryption module, where the decryption module is configured to judge whether the gradient parameters or the Hessian matrices are encrypted data; and if the gradient parameters or the Hessian matrices are encrypted data, to decrypt the encrypted data according to a preset decryption formula to obtain the decrypted gradient parameters and Hessian matrices.
Specifically, the decrypting the encrypted data according to a preset decryption formula to obtain the decrypted gradient parameters and Hessian matrices includes:
the preset decryption formula is as follows:
m = L(c^λ mod n²) · μ mod n
L(x) = (x - 1) / n
where m is a decrypted gradient parameter or Hessian matrix, c is an encrypted gradient parameter or Hessian matrix, mod is the modulo operator, n = p × q, where p and q are large prime numbers such that the greatest common divisor of pq and (p-1)(q-1) is 1, λ is the Carmichael function, and μ is a preset parameter.
In detail, the encrypted gradient parameter or Hessian matrix is decrypted by using the public key (n, g), so that the decrypted gradient parameter or Hessian matrix is obtained.
In detail, after the federated learning model is updated by using the update parameter, the model updating module 105 is further configured to send the updated federated learning model to all clients participating in federated learning.
The gradient parameters sent by the target client and the Hessian matrices sent by each client are received; an update parameter is calculated according to the gradient parameters, the Hessian matrices and a preset model parameter formula; and the federated learning model is updated by using the update parameter. After data used for federated learning is deleted, the federated learning model does not need to be retrained, the usability and accuracy of the federated learning model are maintained, and the updating efficiency of the federated learning model is improved. Meanwhile, the gradient parameters and the Hessian matrices are calculated concurrently by the clients, so that the federated learning model can be updated rapidly, the computing pressure on the server side is reduced, and the overall operation efficiency is improved. Therefore, the apparatus for updating the federated learning model provided by the invention can solve the problem of low updating efficiency of the federated learning model.
In detail, the specific implementation of each module of the second updating apparatus 200 of the federated learning model is as follows:
the model obtaining module 201 is configured to obtain the federated learning model when the client has removal data for removing the federated learning model.
In an embodiment of the present invention, the federated learning model may be a federated linear model or a federated logistic regression model.
The gradient and matrix calculation module 202 is configured to calculate gradient parameters of the removed data with respect to the federated learning model according to a gradient formula, and to calculate, according to a matrix formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in the client after the removed data is removed.
Further, the gradient formula includes a first gradient sub-formula and a second gradient sub-formula, and the matrix formula includes a first matrix sub-formula and a second matrix sub-formula. After the federated learning model is obtained, the gradient and matrix calculation module 202 is further configured to determine the type of the federated learning model; if the federated learning model belongs to a federated linear model, to calculate gradient parameters of the removed data with respect to the federated learning model according to the first gradient sub-formula, and to calculate, according to the first matrix sub-formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in the client after the removed data is removed; and if the federated learning model belongs to a federated logistic regression model, to calculate gradient parameters of the removed data with respect to the federated learning model according to the second gradient sub-formula, and to calculate, according to the second matrix sub-formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in the client after the removed data is removed.
Specifically, the first gradient sub-formula includes:
[first gradient sub-formula given as an image in the original]
where Δ is the gradient parameter, m is the number of removed data, λ is a regularization factor, w is the parameter vector of the federated learning model, b is a preset loss perturbation factor, and x_j and y_j are the input data of the model.
Specifically, the second gradient sub-formula includes:
[second gradient sub-formula given as an image in the original]
where Δ is the gradient parameter, m is the number of removed data, w is the parameter vector of the federated learning model, and x_j and y_j are the input data of the model.
Specifically, the first matrix sub-formula includes:
[first matrix sub-formula given as an image in the original]
where the formula involves the Hessian matrix of client i, the loss function, the full data contained in client i and the removed data of client i; x_j and y_j are the input data of the model.
In this embodiment, when the federated learning model is a federated linear model, the loss function of the federated learning model includes:
[loss function of the federated linear model given as an image in the original]
where λ is a regularization factor, w is the parameter vector of the federated learning model, b is a preset loss perturbation factor, (x, y) is the input data of the federated learning model, and n is the number of clients participating in federated learning.
Specifically, the second matrix sub-formula includes:
[second matrix sub-formula given as an image in the original]
where the formula involves the Hessian matrix, the loss function and the transpose operation.
In this embodiment, when the federated learning model is a federated logistic regression model, the loss function of the federated learning model includes:
[loss function of the federated logistic regression model given as an image in the original]
where x and y are the input data of the model.
The parameter sending module 203 is configured to send the gradient parameters and the Hessian matrix to the server side, so that the server side updates the parameters of the federated learning model according to the gradient parameters and the Hessian matrix.
In the embodiment of the invention, when the client and the server are in a connected state, the gradient parameters and the Hessian matrix are sent to the server side, so that the server side adjusts the parameters of the federated learning model according to the gradient parameters and the Hessian matrix.
Specifically, in this embodiment of the present invention, before the gradient parameters and the Hessian matrix are sent to the server side, the apparatus further includes an encryption module 204, where the encryption module 204 is configured to:
carry out encryption calculation on the gradient parameters and the Hessian matrix.
Further, the performing encryption calculation on the gradient parameters and the Hessian matrix includes:
randomly selecting two large prime numbers p and q meeting a preset condition, namely that the greatest common divisor of pq and (p-1)(q-1) is 1;
calculating n = p × q, and satisfying λ(n) = lcm(p-1, q-1), where lcm is the least common multiple and λ is the Carmichael function;
randomly selecting a positive integer g less than n², and calculating μ = ( L(g^λ mod n²) )^(-1) mod n;
obtaining a public key (n, g) and a private key (λ, μ) according to n, g, λ and μ;
and encrypting the gradient parameters and the Hessian matrix by using the private key (λ, μ) to obtain the encrypted gradient parameters and the encrypted Hessian matrix.
A prime number refers to a natural number greater than 1 having no factors other than 1 and itself, and a large prime number refers to a relatively large natural number satisfying the definition of a prime number.
Further, in the embodiment of the present invention, the client transmits the public key to the server side, and encrypts the gradient parameters and the Hessian matrix by using the private key (λ, μ) to obtain the encrypted gradient parameters and the encrypted Hessian matrix.
The embodiment of the invention utilizes the private key to encrypt the gradient parameters and the Hessian matrix, thereby improving the security of data transmission.
According to the embodiment of the invention, the gradient parameters and the Hessian matrices are calculated in parallel at the clients, which reduces the computing pressure on the server side and speeds up the obtaining of the gradient parameters and the Hessian matrices; the calculated gradient parameters and Hessian matrices are then transmitted to the server side, so that the server side can rapidly update the model. Therefore, the apparatus for updating the federated learning model provided by the invention can solve the problem of low updating efficiency of the federated learning model.
Fig. 4 is a schematic structural diagram of an electronic device that implements an update method of a federal learning model according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as an update program 12 of a federal learning model, stored in the memory 11 and operable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as codes of the update program 12 of the federal learning model, but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the whole electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (e.g., update programs of the federal learning model, etc.) stored in the memory 11 and calling data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 4 only shows an electronic device with components, and it will be understood by those skilled in the art that the structure shown in fig. 4 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device 1 and for displaying a visualized user interface.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The update program 12 of the federal learning model stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions, which when executed in the processor 10, can realize:
in detail, when the electronic device is a server-side electronic device, the method for updating the federal learning model includes:
receiving a data removal request sent by a target client participating in federal learning;
after deleting the removed data from the data used for the federated learning model, sending a data removal message to all clients participating in federated learning, so that the target client calculates gradient parameters of the removed data with respect to the federated learning model according to a gradient formula, and each client calculates, according to a matrix formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in that client after the removed data is removed;
receiving the gradient parameters sent by the target client and the Hessian matrix sent by each client;
calculating an update parameter according to the gradient parameters, the Hessian matrices and a preset model parameter formula;
and updating the federated learning model by using the update parameter.
Further, when the electronic device is a client electronic device, the method for updating the federal learning model includes:
when removal data to be removed from the federated learning model exists in the client, acquiring the federated learning model;
calculating gradient parameters of the removed data with respect to the federated learning model according to a gradient formula, and calculating, according to a matrix formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in the client after the removed data is removed;
and sending the gradient parameters and the Hessian matrix to the server side, so that the server side updates the parameters of the federated learning model according to the gradient parameters and the Hessian matrix.
Specifically, the specific implementation method of the processor 10 for the instruction may refer to the description of the relevant steps in the embodiments corresponding to fig. 1 to fig. 4, which is not repeated herein.
Further, the integrated modules/units of the electronic device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. The computer readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
in detail, when the electronic device is a server-side electronic device, the method for updating the federal learning model includes:
receiving a data removal request sent by a target client participating in federal learning;
after deleting the removed data from the data used for the federated learning model, sending a data removal message to all clients participating in federated learning, so that the target client calculates gradient parameters of the removed data with respect to the federated learning model according to a gradient formula, and each client calculates, according to a matrix formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in that client after the removed data is removed;
receiving the gradient parameters sent by the target client and the Hessian matrix sent by each client;
calculating an update parameter according to the gradient parameters, the Hessian matrices and a preset model parameter formula;
and updating the federated learning model by using the update parameter.
Further, when the electronic device is a client-side electronic device, the method for updating the federal learning model includes:
when removal data to be removed from the federated learning model exists in the client, acquiring the federated learning model;
calculating gradient parameters of the removed data with respect to the federated learning model according to a gradient formula, and calculating, according to a matrix formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in the client after the removed data is removed;
and sending the gradient parameters and the Hessian matrix to the server side, so that the server side updates the parameters of the federated learning model according to the gradient parameters and the Hessian matrix.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms second, etc. are used to denote names, but not to denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (7)

1. The method for updating the federated learning model is characterized in that the method is applied to a server side and comprises the following steps:
receiving a data removal request sent by a target client participating in federal learning;
after the removed data is deleted from the data used for the federated learning model, sending a data removal message to all clients participating in federated learning, so that the target client calculates gradient parameters of the removed data with respect to the federated learning model according to a gradient formula, and each client calculates, according to a matrix formula, a Hessian matrix of the loss function of the federated learning model over the full data contained in that client after the removed data is removed, wherein, when the federated learning model is a federated linear model, the gradient formula is a first gradient sub-formula and the matrix formula is a first matrix sub-formula, and when the federated learning model is a federated logistic regression model, the gradient formula is a second gradient sub-formula and the matrix formula is a second matrix sub-formula;
the first gradient sub-formula comprises:
Figure FDA0003811653040000011
the second gradient sub-formula comprises:
Figure FDA0003811653040000012
Figure FDA0003811653040000013
wherein Δ is the gradient parameter, m is the number of removed data, w is the parameter vector of the federated learning model, x_j and y_j are the input data of the model, λ is a regularization factor, b is a preset loss perturbation factor,
Figure FDA0003811653040000014
is a loss function;
the first matrix sub-formula comprises:
Figure FDA0003811653040000015
the second matrix sub-formula comprises:
Figure FDA0003811653040000016
Figure FDA0003811653040000021
wherein,
Figure FDA0003811653040000022
is the Hessian matrix of the i-th client,
Figure FDA0003811653040000023
is the loss function,
Figure FDA0003811653040000024
is the full data contained in client i,
Figure FDA0003811653040000025
is the removed data of client i, x_j and y_j are the input data of the model,
Figure FDA0003811653040000026
is the transpose,
Figure FDA0003811653040000027
is a loss function;
the preset model parameter formula comprises:
Figure FDA0003811653040000028
wherein w^(-m) is the updated parameter, w is the parameter vector of the federated learning model,
Figure FDA0003811653040000029
is the Hessian matrix of the i-th client, Δ is the gradient parameter, m is the number of removed data, and k is the number of all clients participating in the federated learning;
receiving the gradient parameter sent by the target client and the Hessian matrix sent by each client;
judging whether the gradient parameter or the Hessian matrix is encrypted data;
if the gradient parameter or the Hessian matrix is encrypted data, decrypting the encrypted data according to a preset decryption formula to obtain the decrypted gradient parameter and Hessian matrix;
calculating an update parameter according to the gradient parameter, the Hessian matrix and the preset model parameter formula;
and updating the federated learning model by using the update parameter.
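The sub-formulas themselves appear above only as unreproduced image references ("Figure FDA…"). Purely as a non-authoritative aid to reading, and under the assumption that the objective is the usual L2-regularized, loss-perturbed one suggested by the variable list in claim 1, the quantities would typically be written in LaTeX as follows; these are standard textbook forms, not the patent's exact formulas:

% Assumed gradient parameter over the m removed points (first/second gradient sub-formulas);
% for the logistic case \sigma denotes the sigmoid function.
\Delta = \lambda m w + \sum_{j=1}^{m} \nabla_w \ell(w; x_j, y_j), \qquad
\nabla_w \ell = (w^{\top} x_j - y_j)\, x_j \ \text{(linear)}, \qquad
\nabla_w \ell = (\sigma(w^{\top} x_j) - y_j)\, x_j \ \text{(logistic)}

% Assumed per-client Hessians over the retained data D_i \setminus D_i^{rm}
% (first/second matrix sub-formulas).
H_i = \sum_{j \in D_i \setminus D_i^{rm}} x_j x_j^{\top} + \lambda I \ \text{(linear)}, \qquad
H_i = \sum_{j \in D_i \setminus D_i^{rm}} \sigma(w^{\top} x_j)\bigl(1 - \sigma(w^{\top} x_j)\bigr)\, x_j x_j^{\top} + \lambda I \ \text{(logistic)}

% Assumed model parameter formula combining them on the server side.
w^{(-m)} = w + \Bigl(\sum_{i=1}^{k} H_i\Bigr)^{-1} \Delta

The loss perturbation factor b would enter the underlying training objective (for example as an added b^{\top} w term) rather than these expressions directly; that placement is likewise an assumption.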
2. The method for updating a federated learning model as defined in claim 1, wherein after updating the federated learning model with the update parameter, the method further comprises:
sending the updated federated learning model to all clients participating in the federated learning.
3. The method for updating a federated learning model as defined in claim 1, wherein after receiving a data removal request sent by a target client participating in federated learning, the method further comprises:
opening listening ports according to the number of clients.
4. The method for updating the federated learning model as claimed in claim 1, wherein the receiving the gradient parameter sent by the target client and the Hessian matrix sent by each client comprises:
acquiring a listening port;
receiving, through the listening port and by using a preset request-response protocol, the gradient parameter sent by the target client and the Hessian matrix sent by each client.
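Read together, claims 1 to 4 describe a concrete server-side flow: open listening ports, receive the gradient parameter and the per-client Hessians over a request-response protocol, decrypt them if needed, and apply the model parameter formula. The following Python sketch is a hedged illustration of that flow only; the transport, the message layout, the decrypt hook and the additive Newton-style update are assumptions, not details fixed by the claims.

import pickle
import socket

import numpy as np

def _recv_all(conn, bufsize=4096):
    # Read until the client closes its side of the connection.
    chunks = []
    while True:
        chunk = conn.recv(bufsize)
        if not chunk:
            break
        chunks.append(chunk)
    return b"".join(chunks)

def serve_model_update(listen_port, num_clients, w, decrypt=None):
    """Listen for the target client's gradient parameter and every client's
    Hessian, then return the updated parameter vector (hypothetical flow)."""
    delta, hessians = None, []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", listen_port))          # listening port, one connection per client
        srv.listen(num_clients)
        for _ in range(num_clients):
            conn, _addr = srv.accept()
            with conn:
                payload = pickle.loads(_recv_all(conn))
            if payload.get("encrypted") and decrypt is not None:
                payload = decrypt(payload)   # stands in for the preset decryption formula
            if "gradient" in payload:        # only the target client sends this
                delta = np.asarray(payload["gradient"])
            if "hessian" in payload:         # every participating client sends this
                hessians.append(np.asarray(payload["hessian"]))
    # Assumed model parameter formula: w_new = w + (sum of client Hessians)^(-1) * delta
    H = np.sum(hessians, axis=0)
    return w + np.linalg.solve(H, delta)

Note that in this sketch the server never sees raw training data: only the removed-data gradient and the per-client Hessians cross the wire, which is the point of the claimed scheme.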
5. An apparatus for updating a federated learning model, wherein the apparatus is applied to a server side and comprises:
a data request module, configured to receive a data removal request sent by a target client participating in federated learning;
a message sending module, configured to send a data removal message to all clients participating in the federated learning after the removed data are removed from the data used by the federated learning model, so that the target client calculates a gradient parameter of the removed data with respect to the federated learning model according to a gradient formula, and each client calculates a Hessian matrix of the loss function of the federated learning model over the full data contained in that client after the removed data are removed, according to a matrix formula, wherein, when the federated learning model is a federated linear model, the gradient formula is a first gradient sub-formula and the matrix formula is a first matrix sub-formula, and when the federated learning model is a federated logistic regression model, the gradient formula is a second gradient sub-formula and the matrix formula is a second matrix sub-formula;
the first gradient sub-formula comprises:
Figure FDA0003811653040000031
the second gradient sub-formula comprises:
Figure FDA0003811653040000032
Figure FDA0003811653040000033
wherein Δ is the gradient parameter, m is the number of removed data, w is the parameter vector of the federated learning model, x_j and y_j are the input data of the model, λ is the regularization factor, b is the preset loss perturbation factor,
Figure FDA0003811653040000034
is a loss function;
the first matrix sub-formula comprises:
Figure FDA0003811653040000035
the second matrix sub-formula comprises:
Figure FDA0003811653040000036
Figure FDA0003811653040000037
wherein,
Figure FDA0003811653040000038
is the Hessian matrix of the i-th client,
Figure FDA0003811653040000039
is the loss function,
Figure FDA00038116530400000310
is the full data contained in client i,
Figure FDA0003811653040000041
is the removed data of client i, x_j and y_j are the input data of the model,
Figure FDA0003811653040000042
is the transpose,
Figure FDA0003811653040000043
is a loss function;
the preset model parameter formula comprises:
Figure FDA0003811653040000044
wherein w^(-m) is the updated parameter, w is the parameter vector of the federated learning model,
Figure FDA0003811653040000045
is the Hessian matrix of the i-th client, Δ is the gradient parameter, m is the number of removed data, and k is the number of all clients participating in the federated learning;
a parameter receiving module, configured to receive the gradient parameter sent by the target client and the Hessian matrix sent by each client;
a decryption module, configured to judge whether the gradient parameter or the Hessian matrix is encrypted data, and if so, decrypt the encrypted data according to a preset decryption formula to obtain the decrypted gradient parameter and Hessian matrix;
an update parameter calculation module, configured to calculate an update parameter according to the gradient parameter, the Hessian matrix and the preset model parameter formula;
and a model updating module, configured to update the federated learning model by using the update parameter.
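As a complement to the server-side module list above, a hedged Python sketch of what the client-side counterparts might compute for the federated linear (ridge-style) case; the exact sub-formulas live in the unreproduced figure images, so the forms below are standard ones and are labeled as assumptions in the comments.

import numpy as np

def removed_data_gradient(w, X_rm, y_rm, lam):
    # Assumed first gradient sub-formula: per-example squared-loss gradients of the
    # removed points plus their share of the L2 regularization term (target client side).
    residuals = X_rm @ w - y_rm
    return X_rm.T @ residuals + lam * len(y_rm) * w

def local_hessian_after_removal(X_full, removed_idx, lam):
    # Assumed first matrix sub-formula: sum of x_j x_j^T over the retained points
    # plus a lambda * I term, i.e. the Hessian of the local regularized loss (every client).
    keep = np.setdiff1d(np.arange(X_full.shape[0]), removed_idx)
    X_keep = X_full[keep]
    return X_keep.T @ X_keep + lam * np.eye(X_full.shape[1])

# Hypothetical usage: the target client sends delta, every client sends H_i to the server.
# delta = removed_data_gradient(w, X_rm, y_rm, lam=0.1)
# H_i = local_hessian_after_removal(X_full, removed_idx, lam=0.1)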
6. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method for updating a federated learning model as claimed in any one of claims 1 to 4.
7. A computer-readable storage medium storing a computer program which, when executed by a processor, implements a method for updating a federated learning model as defined in any one of claims 1 to 4.
CN202011640520.0A 2020-12-31 2020-12-31 Method and device for updating federal learning model, electronic equipment and storage medium Active CN112732297B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011640520.0A CN112732297B (en) 2020-12-31 2020-12-31 Method and device for updating federal learning model, electronic equipment and storage medium
PCT/CN2021/083180 WO2022141839A1 (en) 2020-12-31 2021-03-26 Method and apparatus for updating federated learning model, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011640520.0A CN112732297B (en) 2020-12-31 2020-12-31 Method and device for updating federal learning model, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112732297A CN112732297A (en) 2021-04-30
CN112732297B true CN112732297B (en) 2022-09-27

Family

ID=75609096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011640520.0A Active CN112732297B (en) 2020-12-31 2020-12-31 Method and device for updating federal learning model, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112732297B (en)
WO (1) WO2022141839A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110851786B (en) * 2019-11-14 2023-06-06 深圳前海微众银行股份有限公司 Inter-enterprise data interaction method, device, equipment and storage medium based on longitudinal federal learning
CN113887743B (en) * 2021-09-29 2022-07-22 浙江大学 Platform for forgetting and verifying data in federated learning
CN116545734A (en) * 2022-07-28 2023-08-04 上海光之树科技有限公司 Matrix decomposition method based on security aggregation and key exchange
CN115329985B (en) * 2022-09-07 2023-10-27 北京邮电大学 Unmanned cluster intelligent model training method and device and electronic equipment
CN117094410B (en) * 2023-07-10 2024-02-13 西安电子科技大学 Model repairing method for poisoning damage federal learning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200050443A1 (en) * 2018-08-10 2020-02-13 Nvidia Corporation Optimization and update system for deep learning models
CN109189825B (en) * 2018-08-10 2022-03-15 深圳前海微众银行股份有限公司 Federated learning modeling method, server and medium for horizontal data segmentation
CN110378488B (en) * 2019-07-22 2024-04-26 深圳前海微众银行股份有限公司 Client-side change federal training method, device, training terminal and storage medium
CN110610242B (en) * 2019-09-02 2023-11-14 深圳前海微众银行股份有限公司 Method and device for setting weights of participants in federal learning
CN111553483B (en) * 2020-04-30 2024-03-29 同盾控股有限公司 Federal learning method, device and system based on gradient compression
CN111814985B (en) * 2020-06-30 2023-08-29 平安科技(深圳)有限公司 Model training method under federal learning network and related equipment thereof

Also Published As

Publication number Publication date
CN112732297A (en) 2021-04-30
WO2022141839A1 (en) 2022-07-07

Similar Documents

Publication Publication Date Title
CN112732297B (en) Method and device for updating federal learning model, electronic equipment and storage medium
CN114389889B (en) File full life cycle management method and device based on block chain technology
CN111695097A (en) Login checking method and device and computer readable storage medium
CN111612458A (en) Method and device for processing block chain data and readable storage medium
CN114124502B (en) Message transmission method, device, equipment and medium
CN112394974A (en) Code change comment generation method and device, electronic equipment and storage medium
CN113127915A (en) Data encryption desensitization method and device, electronic equipment and storage medium
CN112217642A (en) Data encryption sharing method and device, electronic equipment and computer storage medium
CN111563268B (en) Data encryption method and device based on matrix operation and storage medium
CN115374150A (en) Character string data query method and device, electronic equipment and storage medium
CN113420049A (en) Data circulation method and device, electronic equipment and storage medium
CN113112252A (en) Resource transfer method and device based on block chain, electronic equipment and storage medium
CN114417374A (en) Intelligent contract business card method, device, equipment and storage medium based on block chain
CN114553532A (en) Data secure transmission method and device, electronic equipment and storage medium
CN113162763A (en) Data encryption and storage method and device, electronic equipment and storage medium
CN113221154A (en) Service password obtaining method and device, electronic equipment and storage medium
CN114760073B (en) Block chain-based warehouse commodity distribution method and device, electronic equipment and medium
CN111260532A (en) Private image encryption method and device, electronic equipment and computer readable storage medium
CN112988888B (en) Key management method, device, electronic equipment and storage medium
CN111683070B (en) Data transmission method and device based on identity encryption and storage medium
CN114091041A (en) Data transmission method, device, equipment and medium based on embedded equipment
CN112182598A (en) Public sample ID identification method, device, server and readable storage medium
CN114785860B (en) Encryption and decryption-based data response method, device, equipment and medium
CN112446765A (en) Product recommendation method and device, electronic equipment and computer-readable storage medium
CN116684077B (en) Carbon emission monitoring method and device based on carbon platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant