WO2020029585A1 - Neural network federated modeling method and device involving transfer learning, and storage medium - Google Patents

Neural network federated modeling method and device involving transfer learning, and storage medium

Info

Publication number
WO2020029585A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
neural network
value
encrypted
loss value
Prior art date
Application number
PCT/CN2019/078522
Other languages
English (en)
Chinese (zh)
Inventor
刘洋
杨强
成柯葳
范涛
陈天健
Original Assignee
深圳前海微众银行股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海微众银行股份有限公司
Publication of WO2020029585A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Definitions

  • The present invention relates to the technical field of machine learning, and in particular to a neural network federated modeling method, device, and storage medium based on transfer learning.
  • Machine learning can be applied to various fields, such as data mining, computer vision, natural language processing, biometric recognition, medical diagnosis, credit card fraud detection, securities market analysis, and DNA sequencing.
  • Machine learning comprises a learning part and an execution part: the learning part uses sample data to modify the system's knowledge base so as to improve the efficiency with which the execution part completes its task, while the execution part completes the task according to the knowledge base and feeds the information obtained back to the learning part.
  • The main purpose of the present invention is to provide a neural network federated modeling method, device, and storage medium based on transfer learning, which aims to improve both the privacy and the utilization of all parties' sample data.
  • To achieve the above object, the present invention provides a neural network federated modeling method based on transfer learning, which includes the following steps:
  • the first terminal inputs a feature vector of the first sample data into a first neural network to obtain a first neural network vector, determines a first gradient value and a first loss value according to the first neural network vector, and encrypts the first gradient value and the first loss value;
  • the second terminal inputs the second sample data into the second neural network to obtain a second neural network vector, determines a second gradient value and a second loss value according to the second neural network vector, encrypts the second gradient value and the second loss value, and transmits them to the first terminal, where the feature dimensions of the first neural network vector and the second neural network vector are the same;
  • If the model to be trained converges, the model parameters at convergence are used to establish the model to be trained.
  • After the third terminal receives the encrypted third loss value sent by the first terminal, it obtains the encrypted historical loss value previously sent by the first terminal, decrypts the encrypted third loss value, historical loss value, and third gradient value according to the pre-stored private key, and returns the decrypted third loss value, historical loss value, and third gradient value to the first terminal.
  • The step of determining whether the model to be trained converges according to the decrypted third loss value and historical loss value returned by the third terminal includes: calculating the difference between the third loss value and the historical loss value; if the difference is less than or equal to a preset threshold, determining that the model to be trained converges; otherwise, determining that the model to be trained does not converge.
  • After the step of combining the encrypted first gradient value and first loss value with the received encrypted second gradient value and second loss value sent by the second terminal to obtain the encrypted third loss value and third gradient value, the method further includes:
  • the second terminal combines the encrypted second gradient value with the received encrypted first gradient value sent by the first terminal to obtain an encrypted fourth gradient value, and sends the encrypted fourth gradient value to the third terminal;
  • After the step of determining whether the model to be trained converges, the method further includes:
  • if the model to be trained does not converge, a gradient update instruction is sent to the third terminal, and the third terminal decrypts the encrypted third gradient value and fourth gradient value according to the gradient update instruction, returns the decrypted third gradient value to the first terminal, and returns the decrypted fourth gradient value to the second terminal;
  • the first terminal updates the local gradient of the first neural network according to the third gradient value decrypted by the third terminal, and after the update is completed, returns to the step in which the first terminal inputs the feature vector of the first sample data into the first neural network to obtain the first neural network vector, determines the first gradient value and the first loss value according to the first neural network vector, and encrypts the first gradient value and the first loss value;
  • the second terminal updates the local gradient of the second neural network according to the fourth gradient value decrypted by the third terminal, and after the update is completed, returns to the step in which the second terminal combines the encrypted second gradient value with the received encrypted first gradient value sent by the first terminal to obtain the encrypted fourth gradient value and sends it to the third terminal.
  • The third terminal generates a pair of public and private keys and transmits the public key to the first terminal and the second terminal, which respectively store the public key in their preset storage areas.
  • The third terminal generates a new pair of public and private keys at a preset interval and transmits the generated public key to the first terminal and the second terminal, which respectively update the public key stored in their preset storage areas according to the received public key.
  • The step of encrypting the first gradient value and the first loss value includes: the first terminal obtains the public key from a preset storage area and performs homomorphic encryption on the first gradient value and the first loss value using the public key.
  • In addition, the neural network federated modeling method based on transfer learning further includes the initial-weight configuration steps described in the second embodiment below.
  • To achieve the above object, the present invention also provides a neural network federated modeling device based on transfer learning, which includes a memory, a processor, and a transfer-learning-based neural network federated modeling program stored on the memory and runnable on the processor, where the program, when executed by the processor, implements the steps of the neural network federated modeling method based on transfer learning described above.
  • In addition, the present invention also provides a storage medium storing a neural network federated modeling program based on transfer learning, where the program, when executed, implements the steps of the neural network federated modeling method based on transfer learning described above.
  • The present invention provides a neural network federated modeling method, device, and storage medium based on transfer learning.
  • The feature vectors of the two parties' sample data are input into two neural networks, and the two parties correspondingly obtain two neural network vectors with the same feature dimension and compute their respective gradient values and loss values from these vectors. One party encrypts its gradient value and loss value and combines them with the encrypted gradient value and loss value sent by the other party to obtain an encrypted total loss value and total gradient value, transmits the encrypted total loss value to a third party, and finally determines whether the model to be trained converges based on the decrypted total loss value and historical loss value returned by the third party. If the model converges, the model parameters at convergence are used to establish the model. Because the data both parties need to transmit is encrypted, and joint training can be performed in encrypted form, the privacy of each party's sample data is effectively improved; at the same time, jointly using the parties' multi-layer neural networks for machine learning makes effective use of each party's sample data and improves its utilization.
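  • To make this flow concrete, the following is a minimal sketch of one training round of the two-party protocol with a coordinating third terminal, assuming an additively homomorphic scheme (Paillier, via the open-source `phe` package). The toy `forward` function, weights, and targets are illustrative assumptions, not the patent's actual interfaces.

```python
# Minimal sketch of one federated training round, under the assumptions above.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()  # held by the third terminal

def forward(weights, features, target):
    """Stand-in for a local neural network pass; returns (loss, gradient)."""
    prediction = sum(w * x for w, x in zip(weights, features))
    loss = (prediction - target) ** 2
    grad = [2 * (prediction - target) * x for x in features]
    return loss, grad

# First terminal: local pass on its own sample data, then encrypt.
loss_a, grad_a = forward([0.1, 0.2], [1.0, 3.0], target=1.0)
enc_loss_a = public_key.encrypt(loss_a)
enc_grad_a = [public_key.encrypt(g) for g in grad_a]

# Second terminal does the same on its own sample data.
loss_b, grad_b = forward([0.3, 0.4], [2.0, 1.0], target=1.0)
enc_loss_b = public_key.encrypt(loss_b)
enc_grad_b = [public_key.encrypt(g) for g in grad_b]

# First terminal combines ciphertexts without decrypting (homomorphic addition)
# to obtain the encrypted third (total) loss and gradient values.
enc_loss_total = enc_loss_a + enc_loss_b
enc_grad_total = [ga + gb for ga, gb in zip(enc_grad_a, enc_grad_b)]

# Third terminal decrypts and returns plaintexts for the convergence check.
loss_total = private_key.decrypt(enc_loss_total)
grad_total = [private_key.decrypt(g) for g in enc_grad_total]
print(loss_total, grad_total)
```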
  • FIG. 1 is a schematic diagram of the device structure of a hardware operating environment according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a first embodiment of the neural network federated modeling method based on transfer learning according to the present invention.
  • FIG. 3 is a schematic flowchart of a second embodiment of the neural network federated modeling method based on transfer learning according to the present invention.
  • FIG. 1 is a schematic diagram of a device structure of a hardware operating environment according to an embodiment of the present invention.
  • The neural network federated modeling device based on transfer learning in the embodiment of the present invention may be a PC, or a mobile terminal device with a display function such as a smartphone, a tablet computer, or a portable computer.
  • The neural network federated modeling device based on transfer learning may include a processor 1001 such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.
  • the communication bus 1002 is used to implement connection and communication between these components.
  • The user interface 1003 may include a display screen and an input unit such as a keyboard; optionally, the user interface 1003 may further include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as disk storage.
  • the memory 1005 may optionally be a storage device independent of the foregoing processor 1001.
  • The structure of the neural network federated modeling device based on transfer learning shown in FIG. 1 does not constitute a limitation on the device; it may include more or fewer components than illustrated, combine some components, or use a different arrangement of components.
  • The memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a neural network federated modeling program based on transfer learning.
  • The network interface 1004 is mainly used to connect to the background server and perform data communication with it;
  • the user interface 1003 is mainly used to connect to the client (user) and perform data communication with the client;
  • the processor 1001 may be used to call the neural network federated modeling program based on transfer learning stored in the memory 1005 and execute the following steps:
  • the first terminal inputs a feature vector of the first sample data into a first neural network to obtain a first neural network vector, determines a first gradient value and a first loss value according to the first neural network vector, and encrypts the first gradient value and the first loss value;
  • the second terminal inputs the second sample data into the second neural network to obtain a second neural network vector, determines a second gradient value and a second loss value according to the second neural network vector, encrypts the second gradient value and the second loss value, and transmits them to the first terminal, where the feature dimensions of the first neural network vector and the second neural network vector are the same;
  • if the model to be trained converges, the model parameters at convergence are used to establish the model to be trained.
  • After the third terminal receives the encrypted third loss value sent by the first terminal, it obtains the encrypted historical loss value previously sent by the first terminal, decrypts the encrypted third loss value, historical loss value, and third gradient value according to the pre-stored private key, and returns the decrypted third loss value, historical loss value, and third gradient value to the first terminal.
  • The processor 1001 may further be used to call the neural network federated modeling program based on transfer learning stored in the memory 1005 and perform the following step:
  • if the difference is less than or equal to a preset threshold, it is determined that the model to be trained is in a convergence state; otherwise, it is determined that the model to be trained is not in a convergence state.
  • The processor 1001 may further be used to call the neural network federated modeling program based on transfer learning stored in the memory 1005 and perform the following steps:
  • the second terminal combines the encrypted second gradient value with the received encrypted first gradient value sent by the first terminal to obtain an encrypted fourth gradient value, and sends the encrypted fourth gradient value to the third terminal;
  • the method further includes:
  • if the model to be trained does not converge, a gradient update instruction is sent to the third terminal, and the third terminal decrypts the encrypted third gradient value and fourth gradient value according to the gradient update instruction, returns the decrypted third gradient value to the first terminal, and returns the decrypted fourth gradient value to the second terminal;
  • the first terminal updates the local gradient of the first neural network according to the third gradient value decrypted by the third terminal, and after the update is completed, returns to the step in which the first terminal inputs the feature vector of the first sample data into the first neural network to obtain the first neural network vector, determines the first gradient value and the first loss value according to the first neural network vector, and encrypts the first gradient value and the first loss value;
  • the second terminal updates the local gradient of the second neural network according to the fourth gradient value decrypted by the third terminal, and after the update is completed, returns to the step in which the second terminal combines the encrypted second gradient value with the received encrypted first gradient value sent by the first terminal to obtain the encrypted fourth gradient value and sends it to the third terminal.
  • The third terminal generates a pair of public and private keys and transmits the public key to the first terminal and the second terminal, which respectively store the public key in their preset storage areas.
  • The third terminal generates a new pair of public and private keys at a preset interval and transmits the generated public key to the first terminal and the second terminal, which respectively update the public key stored in their preset storage areas according to the received public key.
  • The processor 1001 may further be used to call the neural network federated modeling program based on transfer learning stored in the memory 1005 and perform the following step:
  • the first terminal obtains the public key from a preset storage area and performs homomorphic encryption on the first gradient value and the first loss value using the public key.
  • The processor 1001 may further be used to call the neural network federated modeling program based on transfer learning stored in the memory 1005 and perform further steps as described below.
  • The specific embodiments of the neural network federated modeling device based on transfer learning of the present invention are basically the same as the embodiments of the neural network federated modeling method based on transfer learning described below and will not be repeated here.
  • Referring to FIG. 2, FIG. 2 is a schematic flowchart of a first embodiment of the neural network federated modeling method based on transfer learning according to the present invention.
  • Step S101: a first terminal inputs a feature vector of first sample data into a first neural network to obtain a first neural network vector, determines a first gradient value and a first loss value according to the first neural network vector, and encrypts the first gradient value and the first loss value;
  • It should be noted that the present invention can combine multiple parties' sample data to train the model to be trained. The following uses two parties' sample data as an example: one party's sample data is the first sample data, which is stored on the first terminal where the first neural network is deployed; the other party's sample data is the second sample data, which is stored on the second terminal where the second neural network is deployed; and the first terminal is connected to the second terminal.
  • To keep the sample data of both parties private, a third terminal is introduced. The third terminal stores the pair of public and private keys required for encryption, and both the first terminal and the second terminal are connected to the third terminal and can transmit data to it.
  • The labeling of the two parties' sample data covers the following cases: the first sample data is labeled and the second sample data is not; the first sample data is not labeled and the second sample data is; both the first sample data and the second sample data are labeled; or neither is labeled. This embodiment does not specifically limit the labeling of the sample data.
  • It should be noted that the network parameters of the first neural network and the second neural network can be set by those skilled in the art based on the actual situation, which is not specifically limited in this embodiment. The network parameters include, but are not limited to: the number of network nodes in each layer, the number of hidden layers, the initial weight of each synapse, the learning rate, dynamic parameters, the allowable error, the number of iterations, and the activation function. A hypothetical configuration is sketched below.
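  • For illustration only, one plausible parameter set in Python; every name and value here is an assumption, not something fixed by the method:

```python
# Hypothetical hyperparameters for one party's local network; names and values
# are illustrative assumptions, not mandated by the patent.
network_params = {
    "nodes_per_layer": [64, 32, 16],      # number of network nodes in each layer
    "num_hidden_layers": 3,
    "initial_weight_range": (-0.5, 0.5),  # see the initial-weight embodiment below
    "learning_rate": 0.01,
    "momentum": 0.9,                      # one reading of "dynamic parameters"
    "allowable_error": 1e-4,              # threshold on the loss difference
    "max_iterations": 1000,
    "activation": "sigmoid",
}
```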
  • Specifically, the first terminal inputs the first sample data into the first neural network, and when the last layer of the first neural network is reached, the feature expression of the first sample data, that is, the first neural network vector, is obtained. The first gradient value and the first loss value are then determined according to the first neural network vector: the first gradient value is the gradient of the gradient function of the model to be trained for the first common feature vector, and the first loss value is the loss of the loss function of the model to be trained for the first common feature vector. The first gradient value and the first loss value are then encrypted, as sketched below.
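  • A minimal NumPy sketch of this local step, assuming a one-hidden-layer network and a squared-error loss (both assumptions; the patent does not fix an architecture or loss function):

```python
import numpy as np

rng = np.random.default_rng(0)
X_a = rng.normal(size=(8, 5))   # first sample data: 8 rows, 5 features
y_a = rng.normal(size=(8, 1))   # labels held by the first terminal
W1 = rng.normal(size=(5, 4))    # hidden-layer weights
W2 = rng.normal(size=(4, 1))    # output-layer weights

u_a = np.tanh(X_a @ W1)         # last-layer feature expression: the "first neural network vector"
pred = u_a @ W2
loss_a = float(np.mean((pred - y_a) ** 2))    # first loss value
grad_a = 2 * u_a.T @ (pred - y_a) / len(y_a)  # first gradient value (w.r.t. W2)
```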
  • Specifically, the third terminal generates a pair of public and private keys and transmits the public key to the first terminal and the second terminal, which store the public key in their respective preset storage areas. During encryption, the first terminal obtains the public key from its preset storage area, performs homomorphic encryption on the first gradient value and the first loss value using the public key, and sends the encrypted first gradient value and first loss value to the second terminal.
  • It should be noted that the encryption method is homomorphic encryption: processing homomorphically encrypted data produces an output which, when decrypted, equals the output of applying the same processing to the unencrypted original data. Computation can therefore be carried out directly on ciphertexts without affecting the result.
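  • A quick check of this additive property with the open-source `phe` Paillier library (chosen here as one possible homomorphic scheme; the patent does not name one):

```python
from phe import paillier

pk, sk = paillier.generate_paillier_keypair()
a, b = 0.75, -0.25
# Adding ciphertexts, then decrypting, matches adding the plaintexts.
assert abs(sk.decrypt(pk.encrypt(a) + pk.encrypt(b)) - (a + b)) < 1e-9
# Ciphertexts can also be scaled by plaintext constants.
assert abs(sk.decrypt(pk.encrypt(a) * 3) - 3 * a) < 1e-9
```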
  • Step S102: combine the encrypted first gradient value and first loss value with the received encrypted second gradient value and second loss value sent by the second terminal to obtain the encrypted third loss value and third gradient value;
  • Specifically, the second terminal inputs the second sample data into the second neural network for iteration, and when the last layer of the second neural network is reached, the feature expression of the second sample data, that is, the second neural network vector, is obtained. The second gradient value and the second loss value are determined according to the second neural network vector: the second gradient value is the gradient of the gradient function of the model to be trained for the second common feature vector, and the second loss value is the loss of the loss function of the model to be trained for the second common feature vector. The second gradient value and the second loss value are then encrypted and sent to the first terminal; that is, the second terminal obtains the public key from its preset storage area, performs homomorphic encryption on the second gradient value and the second loss value, and sends the encrypted second gradient value and second loss value to the first terminal. It should be noted that the feature dimensions of the first neural network vector and the second neural network vector are the same.
  • Then, the first terminal combines the encrypted first gradient value and first loss value with the encrypted second gradient value and second loss value sent by the second terminal to obtain the encrypted third loss value and third gradient value; that is, the first terminal receives the encrypted second gradient value and second loss value sent by the second terminal, combines the encrypted first gradient value and second gradient value to obtain the encrypted third gradient value, and combines the encrypted first loss value and second loss value to obtain the encrypted third loss value.
  • It should be noted that the first terminal and the second terminal obtain a public key from the third terminal at a preset interval to update the public key stored locally in their preset storage areas.
  • Specifically, a timer is set in the third terminal. When the timer starts, it begins timing; when it reaches a preset time, the third terminal generates a new pair of public and private keys, sends the public key to the first terminal and the second terminal, and the timer restarts, after which the first terminal and the second terminal update the public key stored in their preset storage areas. The preset time can be set by a person skilled in the art based on the actual situation, which is not specifically limited in this embodiment.
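  • A sketch of this rotation loop on the third terminal, assuming Paillier keys and a hypothetical `send_public_key` transport function (neither is specified by the patent):

```python
import threading
from phe import paillier

def start_key_rotation(interval_seconds, send_public_key):
    """Regenerate the key pair every `interval_seconds` and push the public
    key to the first and second terminals; the private key never leaves here."""
    keys = {}

    def tick():
        public_key, private_key = paillier.generate_paillier_keypair()
        keys["private"] = private_key       # retained only on the third terminal
        send_public_key(public_key)         # both parties replace their stored copy
        threading.Timer(interval_seconds, tick).start()  # restart the timer

    tick()
    return keys
```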
  • Step S103: Send the encrypted third loss value and third gradient value to the third terminal, and determine whether the model to be trained converges according to the decrypted third loss value and historical loss value returned by the third terminal;
  • Specifically, the first terminal sends the encrypted third loss value and third gradient value to the third terminal. The third terminal receives them, obtains the encrypted historical loss value previously sent by the first terminal, decrypts the encrypted third loss value, historical loss value, and third gradient value according to the pre-stored private key, and returns the decrypted third loss value, historical loss value, and third gradient value to the first terminal. The first terminal then determines whether the model to be trained converges according to the decrypted third loss value and historical loss value returned by the third terminal.
  • Specifically, the first terminal receives the decrypted third loss value and historical loss value returned by the third terminal, calculates the difference between them, and determines whether the difference is less than or equal to a preset threshold. If the difference is less than or equal to the preset threshold, it is determined that the model to be trained converges; otherwise, it is determined that the model does not converge. The preset threshold may be set by a person skilled in the art based on actual conditions, and this embodiment does not specifically limit it.
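  • The convergence test reduces to a one-line comparison; the default threshold value below is an illustrative assumption:

```python
def has_converged(third_loss, historical_loss, threshold=1e-4):
    """Convergence test from step S103: compare this round's decrypted total
    loss against the previous round's. The default threshold is an assumption."""
    return abs(third_loss - historical_loss) <= threshold
```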
  • Step S104: If the model to be trained converges, establish the model to be trained based on the model parameters at convergence.
  • Specifically, if the model to be trained converges, the model parameters at convergence are used to establish the model to be trained.
  • It should be noted that the operation of determining whether the model converges can also be performed by the third terminal. Specifically, the third terminal receives the encrypted third loss value sent by the first terminal, obtains the encrypted historical loss value previously sent by the first terminal, decrypts the encrypted third loss value and historical loss value according to the pre-stored private key, and determines whether the model to be trained converges based on the decrypted third loss value and historical loss value. Deploying the convergence determination on the third terminal can reduce the resource occupation of the first terminal or the second terminal and improve the resource utilization of the third terminal.
  • Further, after step S102, the method further includes:
  • Step a: the second terminal combines the encrypted second gradient value with the encrypted first gradient value sent by the first terminal to obtain an encrypted fourth gradient value, and sends the encrypted fourth gradient value to the third terminal;
  • Specifically, while the first terminal determines its gradient value and loss value, the second terminal combines the encrypted second gradient value with the encrypted first gradient value sent by the first terminal to obtain the encrypted fourth gradient value and sends it to the third terminal; that is, the second terminal receives the encrypted first gradient value sent by the first terminal, combines it with the encrypted second gradient value to obtain the encrypted fourth gradient value, and sends the encrypted fourth gradient value to the third terminal.
  • Further, after step S103, the method further includes:
  • Step b: if the model to be trained does not converge, send a gradient update instruction to the third terminal; the third terminal decrypts the encrypted third gradient value and fourth gradient value according to the gradient update instruction, returns the decrypted third gradient value to the first terminal, and returns the decrypted fourth gradient value to the second terminal;
  • Specifically, if the model to be trained does not converge, the local gradients of the first neural network and the second neural network need to be updated; that is, the first terminal sends a gradient update instruction to the third terminal, and the third terminal decrypts the encrypted third gradient value and fourth gradient value, returns the decrypted third gradient value to the first terminal, and returns the decrypted fourth gradient value to the second terminal.
  • The first terminal updates the local gradient of the first neural network according to the third gradient value decrypted by the third terminal, and after the update is completed, returns to step S101; that is, the first terminal inputs the feature vector of the first sample data into the first neural network to obtain the first neural network vector, determines the first gradient value and first loss value according to the first neural network vector, and encrypts the first gradient value and first loss value.
  • The second terminal updates the local gradient of the second neural network according to the fourth gradient value decrypted by the third terminal, and after the update is completed, returns to step a; that is, the second terminal combines the encrypted second gradient value with the received encrypted first gradient value sent by the first terminal to obtain the encrypted fourth gradient value, and sends the encrypted fourth gradient value to the third terminal.
  • It should be noted that, during training, the first terminal encrypts the weight parameter value WA of the first neural network and transmits it to the second terminal, and the second terminal encrypts the weight parameter value WB of the second neural network and transmits it to the first terminal. The first terminal trains the first neural network according to the encrypted weight parameter values WA and WB until convergence, and the second terminal trains the second neural network according to the encrypted weight parameter values WA and WB until convergence. When both the first neural network and the second neural network converge, the model to be trained is established according to the weight parameter values WA and WB in the convergence state.
  • In this embodiment, the feature vectors of the two parties' sample data are input into two neural networks, and the two parties correspondingly obtain two neural network vectors with the same feature dimension and compute their respective gradient values and loss values from these vectors. One party encrypts its gradient value and loss value and combines them with the encrypted gradient value and loss value sent by the other party to obtain the encrypted total loss value and total gradient value, and transmits the encrypted total loss value to a third party. Finally, based on the decrypted total loss value and historical loss value returned by the third party, it is determined whether the model to be trained converges; if it converges, the model parameters at convergence are used to establish the model. Because the data the two parties need to transmit is encrypted, and joint training can be performed in encrypted form, the privacy of each party's sample data is effectively improved; at the same time, jointly using the parties' multi-layer neural networks for machine learning makes effective use of each party's sample data and improves its utilization.
  • Further, based on the first embodiment, a second embodiment of the neural network federated modeling method based on transfer learning of the present invention is proposed. The difference from the foregoing embodiment is that the method further includes:
  • Step S105: when a configuration instruction for the initial weights is detected, count the number of synapses in the first neural network, and call a preset random number generator to generate a set of random numbers corresponding to the number of synapses;
  • It should be noted that the initial weight of each synapse in the model to be trained needs to be configured before training. Specifically, the first terminal counts the number of synapses in the first neural network and calls a preset random number generator to generate a set of random numbers corresponding to that number, while the second terminal counts the number of synapses in the second neural network and calls the preset random number generator to generate another set of random numbers corresponding to that number.
  • The value range of the random numbers can be set by a person skilled in the art based on the actual situation, which is not specifically limited in this embodiment; preferably, the value range is -0.5 to +0.5.
  • Step S106: Configure the initial weight of each synapse in the first neural network according to the generated set of random numbers.
  • Specifically, the first terminal configures the initial weight of each synapse in the first neural network according to the generated set of random numbers; that is, following the order in which the random numbers were generated, a random number is selected from the set in turn as the initial weight and assigned to a synapse in the first neural network. Likewise, the second terminal configures the initial weight of each synapse in the second neural network according to the other generated set of random numbers; that is, following the order of size of that set, a random number is selected from it in turn as the initial weight and assigned to a synapse in the second neural network, so that each synapse is configured with an initial weight exactly once.
  • In this embodiment, a random number generator is used to assign random initial weights to each synapse of the first neural network and the second neural network in the model to be trained. This prevents the synapses' initial weights from being identical, which would keep every synapse's weight equal throughout training, and thus effectively improves the accuracy of the trained model. A sketch follows.
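  • A minimal sketch of steps S105 and S106 for one party, using NumPy's uniform generator over the preferred -0.5 to +0.5 range; the synapse count below is an assumed stand-in for the counting step:

```python
import numpy as np

def init_synapse_weights(num_synapses, rng=None):
    """One random number per synapse, drawn from the preferred -0.5..+0.5 range."""
    if rng is None:
        rng = np.random.default_rng()
    return rng.uniform(-0.5, 0.5, size=num_synapses)

weights_a = init_synapse_weights(20)  # e.g., 20 synapses counted in the first network
```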
  • In addition, an embodiment of the present invention further provides a storage medium storing a neural network federated modeling program based on transfer learning, which, when executed, performs the following steps:
  • the first terminal inputs a feature vector of the first sample data into a first neural network to obtain a first neural network vector, determines a first gradient value and a first loss value according to the first neural network vector, and encrypts the first gradient value and the first loss value;
  • the second terminal inputs the second sample data into the second neural network to obtain a second neural network vector, determines a second gradient value and a second loss value according to the second neural network vector, encrypts the second gradient value and the second loss value, and transmits them to the first terminal, where the feature dimensions of the first neural network vector and the second neural network vector are the same;
  • if the model to be trained converges, the model parameters at convergence are used to establish the model to be trained.
  • After the third terminal receives the encrypted third loss value sent by the first terminal, it obtains the encrypted historical loss value previously sent by the first terminal, decrypts the encrypted third loss value, historical loss value, and third gradient value according to the pre-stored private key, and returns the decrypted third loss value, historical loss value, and third gradient value to the first terminal.
  • If the difference is less than or equal to a preset threshold, it is determined that the model to be trained converges; otherwise, it is determined that the model to be trained does not converge.
  • The second terminal combines the encrypted second gradient value with the received encrypted first gradient value sent by the first terminal to obtain an encrypted fourth gradient value, and sends the encrypted fourth gradient value to the third terminal;
  • the method further includes:
  • if the model to be trained does not converge, a gradient update instruction is sent to the third terminal, and the third terminal decrypts the encrypted third gradient value and fourth gradient value according to the gradient update instruction, returns the decrypted third gradient value to the first terminal, and returns the decrypted fourth gradient value to the second terminal;
  • the first terminal updates the local gradient of the first neural network according to the third gradient value decrypted by the third terminal, and after the update is completed, returns to the step in which the first terminal inputs the feature vector of the first sample data into the first neural network to obtain the first neural network vector, determines the first gradient value and the first loss value according to the first neural network vector, and encrypts the first gradient value and the first loss value;
  • the second terminal updates the local gradient of the second neural network according to the fourth gradient value decrypted by the third terminal, and after the update is completed, returns to the step in which the second terminal combines the encrypted second gradient value with the received encrypted first gradient value sent by the first terminal to obtain the encrypted fourth gradient value and sends it to the third terminal.
  • The third terminal generates a pair of public and private keys and transmits the public key to the first terminal and the second terminal, which respectively store the public key in their preset storage areas.
  • The third terminal generates a new pair of public and private keys at a preset interval and transmits the generated public key to the first terminal and the second terminal, which respectively update the public key stored in their preset storage areas according to the received public key.
  • The first terminal obtains the public key from a preset storage area and performs homomorphic encryption on the first gradient value and the first loss value using the public key.
  • The specific embodiments of the storage medium of the present invention are basically the same as the above embodiments of the neural network federated modeling method based on transfer learning and will not be repeated here.
  • Through the description of the above embodiments, those skilled in the art can clearly understand that the methods in the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disc), including a number of instructions to enable a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the methods described in the embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Machine Translation (AREA)

Abstract

Disclosed are a neural network federated modeling method and device involving transfer learning, and a storage medium. The method comprises: a first terminal inputs a feature vector of first sample data into a first neural network to obtain a first neural network vector, determines a first gradient value and a first loss value according to the first neural network vector, and encrypts the first gradient value and the first loss value (S101); the encrypted first gradient value and first loss value are combined with a received encrypted second gradient value and second loss value sent by a second terminal to obtain an encrypted third loss value and third gradient value (S102); the encrypted third loss value and third gradient value are sent to a third terminal, and whether a model to be trained converges is determined according to the decrypted third loss value and a historical loss value returned by the third terminal (S103); and if the model converges, the model parameters at convergence are used to establish the model (S104).
PCT/CN2019/078522 2018-08-10 2019-03-18 Neural network federated modeling method and device involving transfer learning, and storage medium WO2020029585A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810913188.7 2018-08-10
CN201810913188.7A CN109165725B (zh) 2018-08-10 2018-08-10 基于迁移学习的神经网络联邦建模方法、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2020029585A1 (fr) 2020-02-13

Family

ID=64895593

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/078522 WO2020029585A1 (fr) 2018-08-10 2019-03-18 Neural network federated modeling method and device involving transfer learning, and storage medium

Country Status (2)

Country Link
CN (1) CN109165725B (fr)
WO (1) WO2020029585A1 (fr)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368314A (zh) * 2020-02-28 2020-07-03 深圳前海微众银行股份有限公司 基于交叉特征的建模、预测方法、装置、设备及存储介质
CN111428887A (zh) * 2020-03-19 2020-07-17 腾讯云计算(北京)有限责任公司 一种基于多个计算节点的模型训练控制方法、装置及系统
CN111553745A (zh) * 2020-05-08 2020-08-18 深圳前海微众银行股份有限公司 基于联邦的模型更新方法、装置、设备及计算机存储介质
CN111724000A (zh) * 2020-06-29 2020-09-29 南方电网科学研究院有限责任公司 一种用户电费回收风险预测方法、装置及系统
CN111783038A (zh) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 基于智能学习的风险评估方法、装置、设备、系统和介质
CN111882054A (zh) * 2020-05-27 2020-11-03 杭州中奥科技有限公司 对双方加密关系网络数据交叉训练的方法及相关设备
CN111898769A (zh) * 2020-08-17 2020-11-06 中国银行股份有限公司 基于横向联邦学习的建立用户行为周期模型的方法及系统
CN111915004A (zh) * 2020-06-17 2020-11-10 北京迈格威科技有限公司 神经网络的训练方法、装置、存储介质及电子设备
CN112085159A (zh) * 2020-07-24 2020-12-15 西安电子科技大学 一种用户标签数据预测系统、方法、装置及电子设备
CN112231308A (zh) * 2020-10-14 2021-01-15 深圳前海微众银行股份有限公司 横向联邦建模样本数据的去重方法、装置、设备及介质
CN112232518A (zh) * 2020-10-15 2021-01-15 成都数融科技有限公司 一种轻量级分布式联邦学习系统及方法
CN112232519A (zh) * 2020-10-15 2021-01-15 成都数融科技有限公司 一种基于联邦学习的联合建模方法
CN112396189A (zh) * 2020-11-27 2021-02-23 中国银联股份有限公司 一种多方构建联邦学习模型的方法及装置
CN112417478A (zh) * 2020-11-24 2021-02-26 深圳前海微众银行股份有限公司 数据处理方法、装置、设备及存储介质
CN112508907A (zh) * 2020-12-02 2021-03-16 平安科技(深圳)有限公司 一种基于联邦学习的ct图像检测方法及相关装置
CN112633146A (zh) * 2020-12-21 2021-04-09 杭州趣链科技有限公司 多姿态人脸性别检测训练优化方法、装置及相关设备
CN112862507A (zh) * 2021-03-15 2021-05-28 深圳前海微众银行股份有限公司 网约车司乘纠纷的制止方法、装置、设备、介质以及产品
CN112860800A (zh) * 2021-02-22 2021-05-28 深圳市星网储区块链有限公司 基于区块链和联邦学习的可信网络应用方法和装置
CN113051586A (zh) * 2021-03-10 2021-06-29 北京沃东天骏信息技术有限公司 联邦建模系统及方法、联邦模型预测方法、介质、设备
CN113269232A (zh) * 2021-04-25 2021-08-17 北京沃东天骏信息技术有限公司 模型训练方法、向量化召回方法、相关设备及存储介质
CN113362160A (zh) * 2021-06-08 2021-09-07 南京信息工程大学 一种用于信用卡反欺诈的联邦学习方法和装置
CN113409134A (zh) * 2021-06-30 2021-09-17 中国工商银行股份有限公司 基于联邦学习的企业融资授信方法及装置
CN113449872A (zh) * 2020-03-25 2021-09-28 百度在线网络技术(北京)有限公司 基于联邦学习的参数处理方法、装置和系统
CN113536770A (zh) * 2021-09-09 2021-10-22 平安科技(深圳)有限公司 基于人工智能的文本解析方法、装置、设备及存储介质
CN113537512A (zh) * 2021-07-15 2021-10-22 青岛海尔工业智能研究院有限公司 基于联邦学习的模型训练方法、装置、系统、设备和介质
CN113806759A (zh) * 2020-12-28 2021-12-17 京东科技控股股份有限公司 联邦学习模型的训练方法、装置、电子设备和存储介质
CN113923225A (zh) * 2020-11-16 2022-01-11 京东科技控股股份有限公司 基于分布式架构的联邦学习平台、方法、设备和存储介质
CN114429223A (zh) * 2022-01-26 2022-05-03 上海富数科技有限公司 异构模型建立方法及装置
CN114595835A (zh) * 2022-05-07 2022-06-07 腾讯科技(深圳)有限公司 基于联邦学习的模型训练方法及装置、设备、存储介质
WO2023124219A1 (fr) * 2021-12-30 2023-07-06 新智我来网络科技有限公司 Procédé de mise à jour itérative de modèle d'apprentissage conjoint, appareil, système et support de stockage
CN116633704A (zh) * 2023-07-25 2023-08-22 北京数牍科技有限公司 图计算方法和装置
CN117278540A (zh) * 2023-11-23 2023-12-22 中国人民解放军国防科技大学 自适应边缘联邦学习客户端调度方法、装置及电子设备
WO2024007189A1 (fr) * 2022-07-06 2024-01-11 Nokia Shanghai Bell Co., Ltd. Apprentissage de forme d'onde évolutif et rapide dans un système de communication multi-utilisateur

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109165725B (zh) * 2018-08-10 2022-03-29 深圳前海微众银行股份有限公司 基于迁移学习的神经网络联邦建模方法、设备及存储介质
CN110414631B (zh) * 2019-01-29 2022-02-01 腾讯科技(深圳)有限公司 基于医学图像的病灶检测方法、模型训练的方法及装置
CN109871702A (zh) * 2019-02-18 2019-06-11 深圳前海微众银行股份有限公司 联邦模型训练方法、系统、设备及计算机可读存储介质
CN109902742B (zh) * 2019-02-28 2021-07-16 深圳前海微众银行股份有限公司 基于加密迁移学习的样本补全方法、终端、系统及介质
CN109886417B (zh) * 2019-03-01 2024-05-03 深圳前海微众银行股份有限公司 基于联邦学习的模型参数训练方法、装置、设备及介质
CN111800538B (zh) * 2019-04-09 2022-01-25 Oppo广东移动通信有限公司 信息处理方法、装置、存储介质及终端
CN110175283B (zh) * 2019-05-10 2021-04-13 深圳前海微众银行股份有限公司 一种推荐模型的生成方法及装置
CN110263908B (zh) * 2019-06-20 2024-04-02 深圳前海微众银行股份有限公司 联邦学习模型训练方法、设备、系统及存储介质
CN112149706B (zh) * 2019-06-28 2024-03-15 北京百度网讯科技有限公司 模型训练方法、装置、设备和介质
CN110399742B (zh) * 2019-07-29 2020-12-18 深圳前海微众银行股份有限公司 一种联邦迁移学习模型的训练、预测方法及装置
CN110443416A (zh) * 2019-07-30 2019-11-12 卓尔智联(武汉)研究院有限公司 基于共享数据的联邦建模装置、方法及可读存储介质
CN112308233A (zh) * 2019-08-02 2021-02-02 伊姆西Ip控股有限责任公司 用于处理数据的方法、设备和计算机程序产品
CN110610140B (zh) * 2019-08-23 2024-01-19 平安科技(深圳)有限公司 人脸识别模型的训练方法、装置、设备及可读存储介质
CN111222646B (zh) * 2019-12-11 2021-07-30 深圳逻辑汇科技有限公司 联邦学习机制的设计方法、装置和存储介质
CN111144576A (zh) * 2019-12-13 2020-05-12 支付宝(杭州)信息技术有限公司 模型训练方法、装置和电子设备
CN111125735B (zh) * 2019-12-20 2021-11-02 支付宝(杭州)信息技术有限公司 一种基于隐私数据进行模型训练的方法及系统
CN111126609B (zh) * 2019-12-20 2021-04-23 深圳前海微众银行股份有限公司 基于联邦学习的知识迁移方法、装置、设备及介质
CN111178524A (zh) * 2019-12-24 2020-05-19 中国平安人寿保险股份有限公司 基于联邦学习的数据处理方法、装置、设备及介质
CN111210003B (zh) * 2019-12-30 2021-03-19 深圳前海微众银行股份有限公司 纵向联邦学习系统优化方法、装置、设备及可读存储介质
CN111428881B (zh) * 2020-03-20 2021-12-07 深圳前海微众银行股份有限公司 识别模型的训练方法、装置、设备及可读存储介质
CN111428265A (zh) * 2020-03-20 2020-07-17 深圳前海微众银行股份有限公司 基于联邦学习的语句质检方法、装置、设备及存储介质
CN113554476B (zh) * 2020-04-23 2024-04-19 京东科技控股股份有限公司 信用度预测模型的训练方法、系统、电子设备及存储介质
CN111737921B (zh) * 2020-06-24 2024-04-26 深圳前海微众银行股份有限公司 基于循环神经网络的数据处理方法、设备及介质
CN112001502B (zh) * 2020-08-24 2022-06-21 平安科技(深圳)有限公司 高延时网络环境鲁棒的联邦学习训练方法及装置
CN114257386B (zh) * 2020-09-10 2023-03-21 华为技术有限公司 检测模型的训练方法、系统、设备及存储介质
CN112016632B (zh) * 2020-09-25 2024-04-26 北京百度网讯科技有限公司 模型联合训练方法、装置、设备和存储介质
CN112149171B (zh) * 2020-10-27 2021-07-09 腾讯科技(深圳)有限公司 联邦神经网络模型的训练方法、装置、设备及存储介质
CN112348199B (zh) * 2020-10-30 2022-08-30 河海大学 一种基于联邦学习与多任务学习的模型训练方法
CN113011598B (zh) * 2021-03-17 2023-06-02 深圳技术大学 一种基于区块链的金融数据信息联邦迁移学习方法及装置
CN112733967B (zh) * 2021-03-30 2021-06-29 腾讯科技(深圳)有限公司 联邦学习的模型训练方法、装置、设备及存储介质
CN113902137B (zh) * 2021-12-06 2022-04-08 腾讯科技(深圳)有限公司 流式模型训练方法、装置、计算机设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760932A (zh) * 2016-02-17 2016-07-13 北京物思创想科技有限公司 数据交换方法、数据交换装置及计算装置
CN107704930A (zh) * 2017-09-25 2018-02-16 阿里巴巴集团控股有限公司 基于共享数据的建模方法、装置、系统及电子设备
US20180096248A1 (en) * 2016-09-30 2018-04-05 Safran Identity & Security Methods for secure learning of parameters of a convolution neural network, and for secure input data classification
CN108259158A (zh) * 2018-01-11 2018-07-06 西安电子科技大学 一种云计算环境下高效和隐私保护的单层感知机学习方法
CN109165725A (zh) * 2018-08-10 2019-01-08 深圳前海微众银行股份有限公司 基于迁移学习的神经网络联邦建模方法、设备及存储介质
CN109255444A (zh) * 2018-08-10 2019-01-22 深圳前海微众银行股份有限公司 基于迁移学习的联邦建模方法、设备及可读存储介质
CN109325584A (zh) * 2018-08-10 2019-02-12 深圳前海微众银行股份有限公司 基于神经网络的联邦建模方法、设备及可读存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9032473B2 (en) * 2010-03-02 2015-05-12 Interdigital Patent Holdings, Inc. Migration of credentials and/or domains between trusted hardware subscription modules
US8761160B2 (en) * 2010-06-25 2014-06-24 Acme Packet, Inc. Service path routing between session border controllers
US20180089587A1 (en) * 2016-09-26 2018-03-29 Google Inc. Systems and Methods for Communication Efficient Distributed Mean Estimation
CN107610709B (zh) * 2017-08-01 2021-03-19 百度在线网络技术(北京)有限公司 一种训练声纹识别模型的方法及系统
CN108229646A (zh) * 2017-08-08 2018-06-29 北京市商汤科技开发有限公司 神经网络模型压缩方法、装置、存储介质和电子设备
CN108182427B (zh) * 2018-01-30 2021-12-14 电子科技大学 一种基于深度学习模型和迁移学习的人脸识别方法
CN108197670B (zh) * 2018-01-31 2021-06-15 国信优易数据股份有限公司 伪标签生成模型训练方法、装置及伪标签生成方法及装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760932A (zh) * 2016-02-17 2016-07-13 北京物思创想科技有限公司 数据交换方法、数据交换装置及计算装置
US20180096248A1 (en) * 2016-09-30 2018-04-05 Safran Identity & Security Methods for secure learning of parameters of a convolution neural network, and for secure input data classification
CN107704930A (zh) * 2017-09-25 2018-02-16 阿里巴巴集团控股有限公司 基于共享数据的建模方法、装置、系统及电子设备
CN108259158A (zh) * 2018-01-11 2018-07-06 西安电子科技大学 一种云计算环境下高效和隐私保护的单层感知机学习方法
CN109165725A (zh) * 2018-08-10 2019-01-08 深圳前海微众银行股份有限公司 基于迁移学习的神经网络联邦建模方法、设备及存储介质
CN109255444A (zh) * 2018-08-10 2019-01-22 深圳前海微众银行股份有限公司 基于迁移学习的联邦建模方法、设备及可读存储介质
CN109325584A (zh) * 2018-08-10 2019-02-12 深圳前海微众银行股份有限公司 基于神经网络的联邦建模方法、设备及可读存储介质

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368314A (zh) * 2020-02-28 2020-07-03 深圳前海微众银行股份有限公司 基于交叉特征的建模、预测方法、装置、设备及存储介质
CN111428887A (zh) * 2020-03-19 2020-07-17 腾讯云计算(北京)有限责任公司 一种基于多个计算节点的模型训练控制方法、装置及系统
CN111428887B (zh) * 2020-03-19 2023-05-12 腾讯云计算(北京)有限责任公司 一种基于多个计算节点的模型训练控制方法、装置及系统
CN113449872B (zh) * 2020-03-25 2023-08-08 百度在线网络技术(北京)有限公司 基于联邦学习的参数处理方法、装置和系统
CN113449872A (zh) * 2020-03-25 2021-09-28 百度在线网络技术(北京)有限公司 基于联邦学习的参数处理方法、装置和系统
CN111553745A (zh) * 2020-05-08 2020-08-18 深圳前海微众银行股份有限公司 基于联邦的模型更新方法、装置、设备及计算机存储介质
CN111882054B (zh) * 2020-05-27 2024-04-12 杭州中奥科技有限公司 对双方加密关系网络数据交叉训练的方法及相关设备
CN111882054A (zh) * 2020-05-27 2020-11-03 杭州中奥科技有限公司 对双方加密关系网络数据交叉训练的方法及相关设备
CN111915004A (zh) * 2020-06-17 2020-11-10 北京迈格威科技有限公司 神经网络的训练方法、装置、存储介质及电子设备
CN111724000B (zh) * 2020-06-29 2024-02-09 南方电网科学研究院有限责任公司 一种用户电费回收风险预测方法、装置及系统
CN111724000A (zh) * 2020-06-29 2020-09-29 南方电网科学研究院有限责任公司 一种用户电费回收风险预测方法、装置及系统
CN111783038B (zh) * 2020-06-30 2024-04-12 北京百度网讯科技有限公司 基于智能学习的风险评估方法、装置、设备、系统和介质
CN111783038A (zh) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 基于智能学习的风险评估方法、装置、设备、系统和介质
CN112085159B (zh) * 2020-07-24 2023-08-15 西安电子科技大学 一种用户标签数据预测系统、方法、装置及电子设备
CN112085159A (zh) * 2020-07-24 2020-12-15 西安电子科技大学 一种用户标签数据预测系统、方法、装置及电子设备
CN111898769A (zh) * 2020-08-17 2020-11-06 中国银行股份有限公司 基于横向联邦学习的建立用户行为周期模型的方法及系统
CN112231308B (zh) * 2020-10-14 2024-05-03 深圳前海微众银行股份有限公司 横向联邦建模样本数据的去重方法、装置、设备及介质
CN112231308A (zh) * 2020-10-14 2021-01-15 深圳前海微众银行股份有限公司 横向联邦建模样本数据的去重方法、装置、设备及介质
CN112232518A (zh) * 2020-10-15 2021-01-15 成都数融科技有限公司 一种轻量级分布式联邦学习系统及方法
CN112232518B (zh) * 2020-10-15 2024-01-09 成都数融科技有限公司 一种轻量级分布式联邦学习系统及方法
CN112232519B (zh) * 2020-10-15 2024-01-09 成都数融科技有限公司 一种基于联邦学习的联合建模方法
CN112232519A (zh) * 2020-10-15 2021-01-15 成都数融科技有限公司 一种基于联邦学习的联合建模方法
CN113923225A (zh) * 2020-11-16 2022-01-11 京东科技控股股份有限公司 基于分布式架构的联邦学习平台、方法、设备和存储介质
CN112417478A (zh) * 2020-11-24 2021-02-26 深圳前海微众银行股份有限公司 数据处理方法、装置、设备及存储介质
CN112396189A (zh) * 2020-11-27 2021-02-23 中国银联股份有限公司 一种多方构建联邦学习模型的方法及装置
CN112396189B (zh) * 2020-11-27 2023-09-01 中国银联股份有限公司 一种多方构建联邦学习模型的方法及装置
CN112508907B (zh) * 2020-12-02 2024-05-14 平安科技(深圳)有限公司 一种基于联邦学习的ct图像检测方法及相关装置
CN112508907A (zh) * 2020-12-02 2021-03-16 平安科技(深圳)有限公司 一种基于联邦学习的ct图像检测方法及相关装置
CN112633146A (zh) * 2020-12-21 2021-04-09 杭州趣链科技有限公司 多姿态人脸性别检测训练优化方法、装置及相关设备
CN112633146B (zh) * 2020-12-21 2024-03-26 杭州趣链科技有限公司 多姿态人脸性别检测训练优化方法、装置及相关设备
CN113806759A (zh) * 2020-12-28 2021-12-17 京东科技控股股份有限公司 联邦学习模型的训练方法、装置、电子设备和存储介质
CN112860800A (zh) * 2021-02-22 2021-05-28 深圳市星网储区块链有限公司 基于区块链和联邦学习的可信网络应用方法和装置
CN113051586A (zh) * 2021-03-10 2021-06-29 北京沃东天骏信息技术有限公司 联邦建模系统及方法、联邦模型预测方法、介质、设备
CN113051586B (zh) * 2021-03-10 2024-05-24 北京沃东天骏信息技术有限公司 联邦建模系统及方法、联邦模型预测方法、介质、设备
CN112862507A (zh) * 2021-03-15 2021-05-28 深圳前海微众银行股份有限公司 网约车司乘纠纷的制止方法、装置、设备、介质以及产品
CN113269232B (zh) * 2021-04-25 2023-12-08 北京沃东天骏信息技术有限公司 模型训练方法、向量化召回方法、相关设备及存储介质
CN113269232A (zh) * 2021-04-25 2021-08-17 北京沃东天骏信息技术有限公司 模型训练方法、向量化召回方法、相关设备及存储介质
CN113362160B (zh) * 2021-06-08 2023-08-22 南京信息工程大学 一种用于信用卡反欺诈的联邦学习方法和装置
CN113362160A (zh) * 2021-06-08 2021-09-07 南京信息工程大学 一种用于信用卡反欺诈的联邦学习方法和装置
CN113409134A (zh) * 2021-06-30 2021-09-17 中国工商银行股份有限公司 基于联邦学习的企业融资授信方法及装置
CN113537512B (zh) * 2021-07-15 2024-03-15 卡奥斯工业智能研究院(青岛)有限公司 基于联邦学习的模型训练方法、装置、系统、设备和介质
CN113537512A (zh) * 2021-07-15 2021-10-22 青岛海尔工业智能研究院有限公司 基于联邦学习的模型训练方法、装置、系统、设备和介质
CN113536770B (zh) * 2021-09-09 2021-11-30 平安科技(深圳)有限公司 基于人工智能的文本解析方法、装置、设备及存储介质
CN113536770A (zh) * 2021-09-09 2021-10-22 平安科技(深圳)有限公司 基于人工智能的文本解析方法、装置、设备及存储介质
WO2023124219A1 (fr) * 2021-12-30 2023-07-06 新智我来网络科技有限公司 Procédé de mise à jour itérative de modèle d'apprentissage conjoint, appareil, système et support de stockage
CN114429223B (zh) * 2022-01-26 2023-11-07 上海富数科技有限公司 异构模型建立方法及装置
CN114429223A (zh) * 2022-01-26 2022-05-03 上海富数科技有限公司 异构模型建立方法及装置
CN114595835B (zh) * 2022-05-07 2022-07-22 腾讯科技(深圳)有限公司 基于联邦学习的模型训练方法及装置、设备、存储介质
CN114595835A (zh) * 2022-05-07 2022-06-07 腾讯科技(深圳)有限公司 基于联邦学习的模型训练方法及装置、设备、存储介质
WO2024007189A1 (fr) * 2022-07-06 2024-01-11 Nokia Shanghai Bell Co., Ltd. Apprentissage de forme d'onde évolutif et rapide dans un système de communication multi-utilisateur
CN116633704A (zh) * 2023-07-25 2023-08-22 北京数牍科技有限公司 图计算方法和装置
CN116633704B (zh) * 2023-07-25 2023-10-31 北京数牍科技有限公司 图计算方法和装置
CN117278540A (zh) * 2023-11-23 2023-12-22 中国人民解放军国防科技大学 自适应边缘联邦学习客户端调度方法、装置及电子设备
CN117278540B (zh) * 2023-11-23 2024-02-13 中国人民解放军国防科技大学 自适应边缘联邦学习客户端调度方法、装置及电子设备

Also Published As

Publication number Publication date
CN109165725B (zh) 2022-03-29
CN109165725A (zh) 2019-01-08

Similar Documents

Publication Publication Date Title
WO2020029585A1 (fr) Procédé et dispositif de modélisation de fédération de réseau neuronal faisant intervenir un apprentissage par transfert et support d'informations
WO2021056760A1 (fr) Dispositif, appareil et procédé de chiffrement de données d'apprentissage fédéré et support de stockage lisible
WO2021092973A1 (fr) Procédé et dispositif de traitement d'informations sensibles, et support de stockage pouvant être lu
WO2021095998A1 (fr) Procédé et système informatiques sécurisés
WO2020147383A1 (fr) Procédé, dispositif et système d'examen et d'approbation de processus utilisant un système de chaîne de blocs, et support de stockage non volatil
WO2019132272A1 (fr) Identifiant en tant que service basé sur une chaîne de blocs
WO2016137304A1 (fr) Sécurité de bout en bout sur la base de zone de confiance
WO2018151390A1 (fr) Dispositif de l'internet des objets
WO2015093734A1 (fr) Système et procédé d'authentification utilisant un code qr
WO2021003975A1 (fr) Procédé de test d'interface de passerelle, dispositif terminal, support de stockage et appareil
WO2019024126A1 (fr) Procédé de gestion d'informations de connaissance basé sur une chaîne de blocs, et terminal et serveur
WO2017119548A1 (fr) Procédé d'authentification d'utilisateur à sécurité renforcée
WO2013005989A2 (fr) Procédé et appareil de gestion de clé de groupe pour dispositif mobile
WO2018072261A1 (fr) Procédé et dispositif de chiffrement d'informations, procédé et dispositif de déchiffrement d'informations, et terminal
WO2019182377A1 (fr) Procédé, dispositif électronique et support d'enregistrement lisible par ordinateur permettant de générer des informations d'adresse utilisées pour une transaction de cryptomonnaie à base de chaîne de blocs
WO2023120906A1 (fr) Procédé permettant de recevoir un micrologiciel et procédé permettant de transmettre un micrologiciel
WO2019139420A1 (fr) Dispositif électronique, serveur et procédé de commande associé
WO2020062658A1 (fr) Procédé et appareil de génération de contrat, dispositif et support de stockage
WO2020062661A1 (fr) Procédé, dispositif et appareil de vérification de cohérence de données de contrat et support d'enregistrement
WO2016137291A1 (fr) Système de serveur pg utilisant un code de sécurité basé sur l'horodatage, et procédé de commande associé
WO2019132270A1 (fr) Procédé de communication sécurisé dans un environnement nfv et système associé
WO2021027134A1 (fr) Procédé, appareil et dispositif de stockage de données et support d'enregistrement informatique
WO2018053904A1 (fr) Procédé et terminal de traitement d'informations
WO2020206899A1 (fr) Procédé, appareil et dispositif de vérification d'identité basée sur un horodatage, et support d'informations
WO2017111483A1 (fr) Dispositif d'authentification basée sur des données biométriques, serveur de commande et serveur d'application relié à celui-ci, et procédé de commande associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19848010

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19848010

Country of ref document: EP

Kind code of ref document: A1