WO2023092792A1 - Federated learning modeling optimization method, electronic device, storage medium and program product - Google Patents

Federated learning modeling optimization method, electronic device, storage medium and program product

Info

Publication number: WO2023092792A1
Application number: PCT/CN2021/141224
Authority: WIPO (PCT)
Prior art keywords: network model, federated, local, model, noise
Other languages: English (en), French (fr)
Inventors: 范力欣, 古瀚林, 杨强
Original assignee: 深圳前海微众银行股份有限公司
Application filed by 深圳前海微众银行股份有限公司
Publication of WO2023092792A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Definitions

  • This application relates to the field of artificial intelligence technology in financial technology (Fintech), and in particular to a federated learning modeling optimization method, an electronic device, a storage medium and a program product.
  • At present, federated learning modeling is usually realized through privacy protection technologies such as homomorphic encryption or secure multi-party computation. However, the computational overhead of homomorphic encryption is very large, which reduces the efficiency of modeling, while secure multi-party computation involves complex cryptographic operations whose communication and computation overheads are also very large, affecting the efficiency of federated learning modeling. Existing federated learning modeling methods are therefore inefficient.
  • The main purpose of this application is to provide a federated learning modeling optimization method, electronic device, storage medium and program product, aiming to solve the technical problem in the prior art that federated learning modeling is inefficient due to the need for privacy protection.
  • To achieve the above purpose, this application provides a federated learning modeling optimization method, applied to a federation participant, and the federated learning modeling optimization method includes: obtaining first noise data from a local noise data set, and mapping the first noise data to initial particle network models according to a local generation network model; obtaining local sample data, and iteratively training and updating each of the initial particle network models according to the local sample data to obtain target particle network models; obtaining second noise data from the local noise data set, and performing federated-learning-based iterative training and updating on the local generation network model according to each of the target particle network models and the second noise data to obtain a federated generation network model; and obtaining a locally selected noise sample from the local noise data set, and converting the locally selected noise sample into a federated prediction network model according to the federated generation network model.
  • This application also provides a federated learning modeling optimization method, applied to the federation coordinator, and the federated learning modeling optimization method further includes:
  • receiving the local generation network models sent by the federation participants; aggregating the local generation network models to obtain an aggregated generation network model; and sending the aggregated generation network model to each of the federation participants, so that each federation participant iteratively updates its local generation network model according to the aggregated generation network model to obtain a federated generation network model, and converts a locally selected noise sample into a federated prediction network model according to the federated generation network model.
  • the present application also provides a federated learning modeling optimization device, which is applied to federation participants, and the federated learning modeling optimization device includes:
  • the first model generation module is used to obtain the first noise data from the local noise data set, and map the first noise data to each initial particle network model according to the locally generated network model;
  • a local iterative training and updating module configured to obtain local sample data, and perform iterative training and updating on each of the initial particle network models according to the local sample data to obtain each target particle network model;
  • a federated iterative training and updating module, configured to obtain second noise data from the local noise data set, and perform federated-learning-based iterative training and updating on the local generation network model according to each of the target particle network models and the second noise data, to obtain the federated generation network model;
  • the second model generation module is configured to obtain locally selected noise samples from the local noise data set, and convert the locally selected noise samples into a federated prediction network model according to the federated generation network model.
  • the present application also provides a federated learning modeling optimization device, which is applied to a federated coordinator, and the federated learning modeling optimization device includes:
  • the receiving module is used to receive the locally generated network model sent by each federation participant;
  • an aggregation module, configured to aggregate each of the local generation network models to obtain an aggregated generation network model;
  • a sending module, configured to send the aggregated generation network model to each of the federation participants, so that each federation participant iteratively updates its local generation network model according to the aggregated generation network model to obtain a federated generation network model, and converts a locally selected noise sample into a federated prediction network model according to the federated generation network model.
  • The present application also provides an electronic device, which includes: a memory, a processor, and a program of the federated learning modeling optimization method stored on the memory and operable on the processor; when the program of the federated learning modeling optimization method is executed by the processor, the steps of the above federated learning modeling optimization method are realized.
  • The present application also provides a computer-readable storage medium, which stores a program for realizing the federated learning modeling optimization method; when the program of the federated learning modeling optimization method is executed by a processor, the steps of the above federated learning modeling optimization method are implemented.
  • The present application also provides a computer program product, including a computer program; when the computer program is executed by a processor, the steps of the above federated learning modeling optimization method are implemented.
  • This application provides a federated learning modeling optimization method, electronic device, storage medium and program product. Compared with the prior-art technique of performing federated learning modeling through privacy protection technologies such as homomorphic encryption or secure multi-party computation, this application first obtains first noise data from a local noise data set and maps the first noise data to initial particle network models according to a local generation network model; then obtains local sample data and iteratively trains and updates each initial particle network model according to the local sample data to obtain target particle network models; then obtains second noise data from the local noise data set and, according to the target particle network models and the second noise data, performs federated-learning-based iterative training and updating on the local generation network model to obtain a federated generation network model. Since the local generation network is a mapping from noise data to particle network models, the federated generation network model is a global mapping from global noise data to global particle network models, which achieves the purpose of building a federated generation network model based on federated learning. A locally selected noise sample is then obtained from the local noise data set and converted into a federated prediction network model according to the federated generation network model, so that a federated prediction network model is built indirectly through federated learning with the generation network model as the medium. Because the generation network model involves neither the specific parameters of the particle network models nor the local sample data, the data privacy of the federation participants is protected, and the generation network model can participate in computation and communication directly in plain text, which greatly reduces the computation and communication overhead of federated learning.
  • Fig. 1 is a schematic flow chart of the first embodiment of the federated learning modeling optimization method of the present application
  • FIG. 2 is a schematic flow diagram of the second embodiment of the federated learning modeling optimization method of the present application
  • FIG. 3 is a schematic diagram of the device structure of the hardware operating environment involved in the federated learning modeling optimization method in the embodiment of the present application.
  • the embodiment of the present application provides a federated learning modeling optimization method, which is applied to federation participants.
  • the federated learning modeling optimization method includes:
  • Step S10 obtaining the first noise data from the local noise data set, and mapping the first noise data to each initial particle network model according to the locally generated network model;
  • In this embodiment, it should be noted that the federated learning modeling optimization method is applied to horizontal federated learning, and the federation participant is a participant of horizontal federated learning. The local noise data set includes at least one local noise sample, and the first noise data includes a preset number of first noise samples, where the first noise samples are the local noise samples in the first noise data, each first noise sample conforms to a preset data distribution, and the preset data distribution may be a Gaussian distribution.
  • Specifically, first noise samples conforming to the Gaussian distribution are obtained from the local noise data set, each first noise sample is input into the local generation network model, and each first noise sample is mapped to corresponding particle network model parameters to obtain each initial particle network model. The first noise sample may be an image, a sound or a specific matrix, and the local generation network model is a machine learning model, maintained locally by a federation participant, for generating particle network model parameters. The particle network model may be a classification network model or a logistic regression model.
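As an illustration of step S10, the following is a minimal PyTorch-style sketch of a local generation network model mapping Gaussian noise samples to particle classifier parameters. All dimensions, layer sizes and names (NOISE_DIM, noise_to_particle, the single-linear-layer particle classifier, etc.) are illustrative assumptions, not the patent's concrete design.

```python
import torch
import torch.nn as nn

NOISE_DIM = 32             # dimension of each Gaussian noise sample (assumed)
N_PARTICLES = 8            # number of particle models per round (assumed)
IN_DIM, N_CLASSES = 20, 2  # particle classifier input/output sizes (assumed)

# One particle classifier is a single linear layer here, so the generator
# output can be reshaped into its (weight, bias) parameters.
PARTICLE_PARAMS = IN_DIM * N_CLASSES + N_CLASSES

# Local generation network model: noise sample -> particle model parameters.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, PARTICLE_PARAMS),
)

def noise_to_particle(z: torch.Tensor):
    """Map one noise sample to the parameters of one particle classifier."""
    flat = generator(z)
    weight = flat[: IN_DIM * N_CLASSES].view(N_CLASSES, IN_DIM)
    bias = flat[IN_DIM * N_CLASSES:]
    return weight, bias

# Step S10: draw a preset number of Gaussian first noise samples and map
# each of them to an initial particle network model.
first_noise = torch.randn(N_PARTICLES, NOISE_DIM)
initial_particles = [noise_to_particle(z) for z in first_noise]
```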
  • Step S20 obtaining local sample data, performing iterative training and updating on each of the initial particle network models according to the local sample data, to obtain each target particle network model;
  • the local sample data is local privacy data of federation participant devices, and the local sample data includes local training samples and local sample labels corresponding to the local training samples.
  • Specifically, the local training samples are input into each initial particle network model, model prediction is performed on the local training samples, and the model prediction labels output by each initial particle network model are obtained; the model prediction loss corresponding to each initial particle network model is calculated according to the distance between each model prediction label and the local sample label; and each corresponding initial particle network model is updated according to each model prediction loss to obtain each target particle network model.
  • Wherein the local sample data includes local training samples and local sample labels, the initial particle network models include initial particle classification network models, and the target particle network models include target particle classification network models, the step of iteratively training and updating each of the initial particle network models to obtain each target particle network model includes:
  • Step S21 classifying the local training samples according to each of the initial particle classification network models to obtain classification prediction labels;
  • Step S22 calculating a classification loss according to the classification prediction labels and the local sample labels;
  • Step S23 updating each of the initial particle classification network models according to the classification loss to obtain each of the target particle classification network models.
  • In this embodiment, specifically, the local training samples are input into each initial particle classification network model and classified to obtain the classification prediction labels output by each initial particle classification network model; the classification loss corresponding to each classification prediction label is calculated according to the distance between that classification prediction label and the local sample label; the model gradient of the corresponding initial particle classification network model is calculated according to each classification loss; and the corresponding initial particle classification network model is updated according to each model gradient to obtain each target particle classification network model.
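Continuing the sketch above, the following illustrates steps S21 to S23: each initial particle classifier classifies the local training samples, a classification loss is computed, and each particle is updated along its model gradient. The learning rate, step count and mock data are assumptions.

```python
import torch
import torch.nn.functional as F

def local_update(particles, x_local, y_local, lr=0.1, steps=10):
    """Steps S21-S23: classify the local training samples with each particle
    classifier, compute the classification loss, and update each particle
    along its model gradient."""
    targets = []
    for weight, bias in particles:
        w = weight.detach().clone().requires_grad_(True)
        b = bias.detach().clone().requires_grad_(True)
        for _ in range(steps):
            logits = x_local @ w.t() + b                  # classification prediction
            loss = F.cross_entropy(logits, y_local)       # classification loss
            g_w, g_b = torch.autograd.grad(loss, (w, b))  # model gradient
            with torch.no_grad():
                w -= lr * g_w
                b -= lr * g_b
        targets.append((w.detach(), b.detach()))
    return targets  # target particle classification network models

# Mock local private data: local training samples and local sample labels.
x_local = torch.randn(100, IN_DIM)
y_local = torch.randint(0, N_CLASSES, (100,))
target_particles = local_update(initial_particles, x_local, y_local)
```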
  • Step S30 acquiring second noise data from the local noise data set, and performing federated-learning-based iterative training and updating on the local generation network model according to each of the target particle network models and the second noise data, to obtain a federated generation network model;
  • In this embodiment, it should be noted that the second noise data includes at least one second noise sample, each second noise sample conforms to a preset data distribution, the preset data distribution may be a Gaussian distribution, and the second noise samples are the local noise samples in the second noise data.
  • the target particle network model may be a target particle classification network model.
  • Specifically, second noise samples conforming to the Gaussian distribution are obtained from the local noise data set, each second noise sample is used as a training sample, each target particle classification network model is used as a target label, and the local generation network model is updated through federated-learning-based iterative training to obtain the federated generation network model.
  • Wherein the second noise data includes at least one second noise sample and the target particle network models include target particle classification network models, the step of performing federated-learning-based iterative training and updating on the local generation network model to obtain the federated generation network model includes:
  • Step S31 mapping each of the second noise samples to a training particle classification network model according to the local generation network model;
  • Step S32 performing federated-learning-based iterative training and updating on the local generation network model according to the similarity loss calculated from each of the training particle classification network models and each of the target particle classification network models, to obtain the federated generation network model.
  • Specifically, each second noise sample is input into the local generation network model and mapped to training particle network parameters to obtain each training particle classification network model, where a training particle classification network model is a classification network model having the training particle network parameters. The similarity loss is calculated based on the similarity between the model parameter distribution corresponding to the training particle classification network models and the model parameter distribution corresponding to the target particle classification network models; the local generation network model is updated according to the similarity loss; the updated local generation network model is sent to the federation coordinator, so that the federation coordinator aggregates the local generation network models sent by the federation participants to obtain an aggregated generation network model; the aggregated generation network model sent by the federation coordinator is then received and used as the new local generation network model, and execution returns to the step of obtaining the first noise data from the local noise data set, until the local generation network model and each initial particle classification network model meet the preset iterative-training end condition, whereupon the local generation network model is used as the federated generation network model.
  • Wherein the initial particle network models include initial particle classification network models, the step of performing federated-learning-based iterative training and updating on the local generation network model according to the similarity loss calculated from each of the training particle classification network models and each of the target particle classification network models, to obtain the federated generation network model, includes:
  • Step S321 calculating the similarity loss according to the similarity between the model parameter distribution of each of the training particle classification network models and the model parameter distribution of each of the target particle classification network models;
  • the similarity loss includes a KL divergence loss.
  • Specifically, the KL divergence loss is calculated, where the purpose of calculating the KL divergence loss is to fit the model parameter distribution of each of the training particle classification network models to the model parameter distribution of each of the target particle classification network models, so that the two distributions become consistent.
  • Step S322 judging whether the local generation network model and each of the initial particle classification network models meet the preset iterative-update end condition;
  • Step S323 if satisfied, using the local generation network model as the federated generation network model;
  • Step S324 if not satisfied, updating the local generation network model according to the similarity loss;
  • Step S325 sending the updated local generation network model to the federation coordinator, so that the federation coordinator aggregates the local generation network models sent by each of the federation participants to obtain an aggregated generation network model;
  • Step S326 receiving the aggregated generation network model sent by the federation coordinator, using the aggregated generation network model as the new local generation network model, and returning to the execution step of obtaining the first noise data from the local noise data set, until both the local generation network model and each of the initial particle classification network models satisfy the preset iterative-update end condition.
  • Specifically, it is judged whether the local generation network model and each initial particle classification network model in the federated learning modeling process meet the preset iterative-update end condition. If they do, it is proved that the local generation network model meets the federated learning modeling requirements and that the particle classification network models output by the local generation network model also meet those requirements, so the local generation network model is used as the federated generation network model, where the federated learning modeling requirement may be a model accuracy requirement. If they do not, it is proved that the local generation network model or the particle classification network models output by it do not yet meet the federated learning modeling requirements, and the updated local generation network model is then sent to the federation coordinator, so that the federation coordinator aggregates the local generation network models sent by the federation participants.
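The participant-side training loop of steps S31 to S326 might look as follows, continuing the sketch above. Fitting diagonal Gaussians to the flattened particle parameters is only one plausible reading of "model parameter distribution"; the patent does not fix the estimator, and the exchange with the coordinator is left as a comment.

```python
def flatten(particles):
    """Stack each particle's parameters into one row per particle."""
    return torch.stack([torch.cat([w.reshape(-1), b]) for w, b in particles])

def gaussian_kl(p, q, eps=1e-6):
    """KL divergence between diagonal Gaussians fitted to two sets of
    flattened particle parameters (rows = particles)."""
    mu_p, var_p = p.mean(0), p.var(0) + eps
    mu_q, var_q = q.mean(0), q.var(0) + eps
    return 0.5 * (torch.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1).sum()

opt = torch.optim.SGD(generator.parameters(), lr=0.01)
target_flat = flatten(target_particles)  # fixed target labels

for federated_round in range(5):  # number of rounds is an assumption
    # Step S31: map second noise samples to training particle classifiers.
    second_noise = torch.randn(N_PARTICLES, NOISE_DIM)
    training_flat = generator(second_noise)
    # Step S321: similarity (KL divergence) loss between the two
    # model parameter distributions.
    sim_loss = gaussian_kl(training_flat, target_flat)
    # Step S324: update the local generation network model.
    opt.zero_grad()
    sim_loss.backward()
    opt.step()
    # Steps S325-S326: send the updated generator to the federation
    # coordinator and adopt the aggregated generator it returns (see the
    # coordinator-side sketch in the second embodiment); a real system
    # would use RPC here, so it is omitted.
```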
  • the step of judging whether the locally generated network model and each of the initial particle classification network models satisfy the preset iterative update end condition includes:
  • Step A10 obtaining the classification losses corresponding to each of the initial particle classification network models, and judging whether each of the classification losses and the similarity loss have all converged;
  • Step A20 if each of the classification losses and the similarity loss have converged, determining that both the local generation network model and each of the initial particle classification network models meet the preset iterative-update end condition;
  • Step A30 if the classification losses and the similarity loss have not all converged, determining that the local generation network model and each of the initial particle classification network models do not all meet the preset iterative-update end condition.
  • It should be noted that the classification loss is calculated from the local sample data when each initial particle network model is iteratively trained and updated to obtain each target particle network model, and the classification losses correspond one-to-one to the initial particle network models.
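A minimal sketch of the judgment in steps A10 to A30; the patent only requires that all classification losses and the similarity loss converge, so the concrete window-based criterion below is an assumption.

```python
def all_converged(loss_histories, tol=1e-4, window=5):
    """Steps A10-A30: the iterative-update end condition is met only when
    every classification loss and the similarity loss have converged."""
    for history in loss_histories:
        if len(history) < window:
            return False
        recent = history[-window:]
        if max(recent) - min(recent) > tol:  # still changing: not converged
            return False
    return True
```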
  • Step S40 obtaining locally selected noise samples from the local noise data set, and converting the locally selected noise samples into a federated predictive network model according to the federated generation network model.
  • It should be noted that each federation participant maintains its own specific locally selected noise sample, and different federation participants can select different locally selected noise samples, so as to ensure that the specific model parameters of each party's federated prediction network model are not obtained by the other parties.
  • By inputting the locally selected noise sample into the federated generation network model, a federated prediction network model is obtained, where the federated prediction network model may be a federated classification network model or a federated logistic regression model.
  • In addition, since the local generation network model does not directly process original sample data such as image data or audio data, the number of parameters of the local generation network model can usually be set to be much smaller than the number of parameters of a particle network model.
  • Moreover, the data exchanged between the federation participants and the federation coordinator are the model parameters of the local generation network model, which reduces the communication and computation overhead between the federation participants and the federation coordinator, thereby also improving the efficiency of federated learning modeling.
  • the federated learning modeling optimization method further includes:
  • Step S50 receiving the public sample data issued by the federation coordinator;
  • Step S60 performing model prediction on the public sample data according to the federated prediction network model, to obtain test prediction results;
  • Step S70 sending the test prediction results to the federation coordinator, so that the federation coordinator predicts the distances between the model parameters of the federated prediction network models of the federation participants according to the test prediction results sent by each federation participant, to obtain each model parameter distance.
  • the public sample data includes at least one public sample.
  • Specifically, the public samples issued by the federation coordinator are received; each public sample is input into the federated prediction network model and model prediction is performed on it to obtain a test prediction result, where the test prediction result includes at least one output of the federated prediction network model for a public sample; the test prediction results are then sent to the federation coordinator, so that the federation coordinator predicts the distances between the model parameters of the federated prediction network models of the federation participants according to the test prediction results sent by the federation participants, to obtain each model parameter distance.
  • the model parameter distance can be used for privacy protection evaluation.
  • Additionally, the test prediction result may be a test prediction result vector, where the test prediction result vector includes at least one test output value of the federated prediction network model for a public sample.
  • Wherein the federation coordinator predicts the distances between the model parameters of the federated prediction network models of the federation participants according to the distances between the test prediction results sent by the federation participants, and in step S70 performs privacy protection evaluation according to each model parameter distance; the specific implementation is described in the second embodiment below.
  • the embodiment of this application provides a privacy protection evaluation method on the basis of constructing a federated generation network model.
  • The federation coordinator distributes public sample data to the federation participants to collect the test prediction results of each participant, then predicts the model parameter distances between different federation participants according to the distances between their test prediction results, and conducts privacy protection evaluation based on the model parameter distances, which can prevent the privacy leakage that arises when the model parameters of different federation participants' federated prediction network models are too similar.
  • In the whole privacy protection evaluation process, the data exchanged between the federation participants and the federation coordinator are the test prediction results, whose size is related only to the number of participants and the number of particle network models; there is no need to transfer the model parameters themselves for privacy protection evaluation, which saves communication and computation overhead in the privacy protection evaluation process and improves the efficiency of federated learning privacy protection evaluation.
  • In addition, the similarity loss in the embodiment of the present application is the KL divergence loss, which fits the model parameter distribution of each of the training particle classification network models and the model parameter distribution of each of the target particle classification network models as a whole, thereby improving the stability of the local generation network model and making it less susceptible to external attacks.
  • In an implementable manner, the initial particle network model may be an image classification particle network model, the local sample data may be training image samples, the target particle network model is the image classification particle network model updated after iterative training, that is, the target image classification particle network model, the federated generation network model is used to generate an image classification model, and the federated prediction network model may be a federated image classification model.
  • That is, the embodiment of the present application can use the generation network model as a medium to build a federated image classification model indirectly based on federated learning. Since the generation network model involves neither the specific parameters of the image classification particle network models nor the local training image samples, the image data privacy of the federation participants is protected. The generation network model can also participate in computation and communication directly in plain text, which greatly reduces the computation and communication overhead in the process of building a federated image classification model based on federated learning, and therefore improves the efficiency of building a federated image classification model based on federated learning.
  • Further, privacy protection technology also includes differential privacy. When performing federated learning based on differential privacy, noise must be added to the samples to achieve privacy protection, and the noise affects the availability of the data and the accuracy of the model, reducing the accuracy of federated learning modeling. In the embodiment of the present application, noise is not added directly to the samples; instead, the mapping from noise of a specific distribution to the prediction network model is learned through federated learning to obtain the federated generation network model, and the federated prediction network model can then be generated from the federated generation network model. Since no noise is introduced into the training samples, the accuracy of the federated prediction network model output by the federated generation network model can be guaranteed, so compared with privacy protection technologies such as differential privacy, the accuracy of federated learning modeling is improved.
  • The embodiment of this application provides a federated learning modeling optimization method. Compared with the prior-art technique of performing federated learning modeling through privacy protection technologies such as homomorphic encryption or secure multi-party computation, the embodiment first obtains first noise data from the local noise data set and maps the first noise data to initial particle network models according to the local generation network model; then obtains local sample data and iteratively trains and updates each initial particle network model according to the local sample data to obtain target particle network models; then obtains second noise data from the local noise data set and performs federated-learning-based iterative training and updating on the local generation network model according to the target particle network models and the second noise data to obtain the federated generation network model. Since the local generation network is a mapping from noise data to particle network models, the federated generation network model is a global mapping from global noise data to global particle network models, so the purpose of building a federated generation network model based on federated learning is achieved. A locally selected noise sample is then obtained from the local noise data set and converted into a federated prediction network model according to the federated generation network model, that is, according to the global mapping from global noise data to global particle network models, which realizes the purpose of building a federated prediction network model indirectly based on federated learning with the generation network model as the medium. Since the generation network model involves neither the specific parameters of the particle network models nor the local sample data, the data privacy of the federation participants can be protected, and the generation network model can participate in computation and communication directly in plain text, greatly reducing the computation and communication overhead of federated learning. This overcomes the technical defects of the prior art, in which the computational overhead of homomorphic encryption is very large and secure multi-party computation involves complex cryptographic operations with equally large communication and computation overheads that affect the efficiency of federated learning modeling, and therefore solves the technical problem that federated learning modeling is inefficient due to the need for privacy protection.
  • the federated learning modeling optimization method is applied to the federated coordinator, and the federated learning modeling optimization method includes:
  • Step B10 receiving the local generation network models sent by each federation participant;
  • Step B20 aggregating each of the local generation network models to obtain an aggregated generation network model;
  • Step B30 sending the aggregated generation network model to each of the federation participants, so that each federation participant iteratively updates its local generation network model according to the aggregated generation network model to obtain a federated generation network model, and converts a locally selected noise sample into a federated prediction network model according to the federated generation network model.
  • Specifically, the local generation network model sent by each federation participant is received, where the federation participant obtains first noise data from its local noise data set and maps the first noise data to initial particle network models according to the local generation network model; the federation participant then obtains local sample data and iteratively trains and updates each initial particle network model according to the local sample data to obtain target particle network models; it then obtains second noise data from the local noise data set, updates the local generation network model according to each target particle network model and the second noise data, and sends the updated local generation network model to the federation coordinator.
  • Each of the local generation network models is aggregated according to preset aggregation rules to obtain the aggregated generation network model, where the preset aggregation rule may be a weighted average or a weighted sum; the aggregated generation network model is then sent to each of the federation participants, so that each federation participant uses the aggregated generation network model as its new local generation network model and continues the iterative training until the federated generation network model is obtained.
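A coordinator-side sketch of the aggregation in steps B10 to B30, assuming each participant uploads its generator's state dict together with an aggregation weight (for example its sample count); weighted averaging is one of the aggregation rules the text names, and all names below are illustrative.

```python
def aggregate_generators(state_dicts, weights):
    """Steps B10-B30: weighted average of the participants' local
    generation network models."""
    total = sum(weights)
    aggregated = {}
    for key in state_dicts[0]:
        aggregated[key] = sum(sd[key] * (w / total)
                              for sd, w in zip(state_dicts, weights))
    return aggregated  # aggregated generation network model

# Usage (hypothetical): broadcast back, then each participant adopts it.
# agg = aggregate_generators([sd_a, sd_b], weights=[600, 400])
# generator.load_state_dict(agg)
```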
  • the federated learning modeling optimization method also includes:
  • Step B40 obtaining public sample data
  • Step B50 sending the public sample data to each of the federated participants, so that the federated participants can perform model prediction on the public sample data according to their respective federated prediction network models, and obtain test prediction results;
  • Step B60 receiving the test prediction results sent by each of the federation participants, predicting the distances between the model parameters of the federated prediction network models of the federation participants according to the test prediction results, and obtaining each model parameter distance.
  • Specifically, the public samples are obtained and sent to each federation participant, so that each federation participant performs model prediction on each public sample according to its federated prediction network model to obtain a test prediction result, where the test prediction result includes at least one output of a federated prediction network model for the public samples. The test prediction results sent by the federation participants are then received, and the distances between the model parameters of the federation participants' federated prediction network models are predicted according to the distances between the test prediction results, to obtain each model parameter distance. It is then judged whether each model parameter distance is greater than a preset parameter distance threshold; if each model parameter distance is greater than the preset parameter distance threshold, the privacy protection evaluation result is determined to be a pass, and if any model parameter distance is not greater than the preset parameter distance threshold, the privacy protection evaluation result is determined to be a failure.
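A sketch of the privacy protection evaluation in steps B40 to B60. Using the distance between test prediction vectors directly as the predicted model parameter distance is an assumption; the text only says the parameter distances are predicted from the test prediction results.

```python
import itertools
import torch

def privacy_evaluation(test_predictions, distance_threshold):
    """Steps B40-B60: test_predictions maps each participant to the
    prediction vector its federated prediction network model produced on
    the shared public samples; too-similar models fail the evaluation."""
    for (_, va), (_, vb) in itertools.combinations(test_predictions.items(), 2):
        param_dist = torch.norm(va - vb).item()  # predicted model parameter distance
        if param_dist <= distance_threshold:     # models too similar: possible leakage
            return False  # privacy protection evaluation fails
    return True           # privacy protection evaluation passes

# Usage (hypothetical):
# preds = {"party_a": torch.rand(10), "party_b": torch.rand(10)}
# ok = privacy_evaluation(preds, distance_threshold=0.5)
```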
  • the embodiment of this application provides a privacy protection evaluation method on the basis of constructing a federated generation network model.
  • The federation coordinator distributes public sample data to the federation participants to collect the test prediction results of each participant, then predicts the model parameter distances between different federation participants according to the distances between their test prediction results, and conducts privacy protection evaluation based on the model parameter distances, which can prevent the privacy leakage that arises when the model parameters of different federation participants' federated prediction network models are too similar.
  • The embodiment of this application provides a federated learning modeling optimization method. Compared with the prior-art technique of performing federated learning modeling through privacy protection technologies such as homomorphic encryption or secure multi-party computation, the embodiment first receives the local generation network models sent by the federation participants, aggregates the local generation network models to obtain an aggregated generation network model, and sends the aggregated generation network model to each federation participant, so that each federation participant iteratively updates its local generation network model according to the aggregated generation network model to obtain a federated generation network model and converts a locally selected noise sample into a federated prediction network model according to the federated generation network model. This realizes the purpose of building a federated prediction network model indirectly based on federated learning with the generation network model as the medium. Since the generation network model involves neither the specific parameters of the particle network models nor the local sample data, the data privacy of the federation participants can be protected, and the generation network model can participate in computation and communication directly in plain text, greatly reducing the computation and communication overhead of federated learning. This overcomes the technical defects of the prior art, in which the computational overhead of homomorphic encryption is very large and secure multi-party computation involves complex cryptographic operations with equally large communication and computation overheads, and therefore solves the technical problem that federated learning modeling is inefficient due to the need for privacy protection.
  • the embodiment of the present application also provides a federated learning modeling optimization device, which is applied to federation participants, and the federated learning modeling optimization device includes:
  • the first model generation module is used to obtain the first noise data from the local noise data set, and map the first noise data to each initial particle network model according to the locally generated network model;
  • a local iterative training and updating module configured to obtain local sample data, and perform iterative training and updating on each of the initial particle network models according to the local sample data to obtain each target particle network model;
  • a federated iterative training and updating module, configured to obtain second noise data from the local noise data set, and perform federated-learning-based iterative training and updating on the local generation network model according to each of the target particle network models and the second noise data, to obtain the federated generation network model;
  • the second model generation module is configured to obtain locally selected noise samples from the local noise data set, and convert the locally selected noise samples into a federated prediction network model according to the federated generation network model.
  • Optionally, the second noise data includes at least one second noise sample, the target particle network models include target particle classification network models, and the federated iterative training and updating module is further configured to: use each second noise sample as a training sample and each target particle classification network model as a target label, and update the local generation network model through federated-learning-based iterative training to obtain the federated generation network model.
  • Optionally, the initial particle network models include initial particle classification network models, and the federated iterative training and updating module is further configured to continue the iterative training until the local generation network model and each of the initial particle classification network models all satisfy the preset iterative-update end condition.
  • the federated iterative training update module is also used for:
  • Optionally, the local sample data includes local training samples and local sample labels, the initial particle network models include initial particle classification network models, the target particle network models include target particle classification network models, and the local iterative training and updating module is further configured to: classify the local training samples according to each of the initial particle classification network models to obtain classification prediction labels; calculate a classification loss according to the classification prediction labels and the local sample labels; and update each of the initial particle classification network models according to the classification loss to obtain each of the target particle classification network models.
  • the federated learning modeling optimization device is also used for:
  • The federated learning modeling optimization device provided by the embodiment of the present invention adopts the federated learning modeling optimization method of the first embodiment above and solves the technical problem that federated learning modeling is inefficient due to the need for privacy protection. Compared with the prior art, the beneficial effects of this device are the same as those of the federated learning modeling optimization method provided by the above embodiment, and its other technical features are the same as those disclosed in the method of the above embodiment, which will not be repeated here.
  • the embodiment of the present application also provides a federated learning modeling optimization device, which is applied to a federated coordinator, and the federated learning modeling optimization device includes:
  • the receiving module is used to receive the locally generated network model sent by each federation participant;
  • an aggregation module, configured to aggregate each of the local generation network models to obtain an aggregated generation network model;
  • a sending module, configured to send the aggregated generation network model to each of the federation participants, so that each federation participant iteratively updates its local generation network model according to the aggregated generation network model to obtain a federated generation network model, and converts a locally selected noise sample into a federated prediction network model according to the federated generation network model.
  • the federated learning modeling optimization device is also used for:
  • an obtaining module, configured to obtain public sample data;
  • a testing module, configured to send the public sample data to each of the federation participants, so that the federation participants perform model prediction on the public sample data according to their respective federated prediction network models to obtain test prediction results;
  • a model parameter distance prediction module, configured to receive the test prediction results sent by each of the federation participants, and predict the distances between the model parameters of the federated prediction network models of the federation participants according to the test prediction results, to obtain each model parameter distance.
  • The federated learning modeling optimization device provided by the embodiment of the present invention adopts the federated learning modeling optimization method in the above embodiment and solves the technical problem that federated learning modeling is inefficient due to the need for privacy protection. Compared with the prior art, the beneficial effects of this device are the same as those of the federated learning modeling optimization method provided by the above embodiment, and its other technical features are the same as those disclosed in the method of the above embodiment, which will not be repeated here.
  • An embodiment of the present invention provides an electronic device.
  • The electronic device includes: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the federated learning modeling optimization method of the first embodiment above.
  • Referring to FIG. 3, FIG. 3 shows a schematic structural diagram of an electronic device suitable for implementing an embodiment of the present disclosure.
  • Electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (such as car navigation terminals), and fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 3 is only an example, and should not limit the functions and scope of use of the embodiments of the present disclosure.
  • The electronic device may include a processing device (such as a central processing unit or a graphics processing unit), which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) or a program loaded from a storage device into a random access memory (RAM). The RAM also stores various programs and data necessary for the operation of the electronic device. The processing device, the ROM and the RAM are connected to each other through a bus, and an input/output (I/O) interface is also connected to the bus.
  • The following devices can be connected to the I/O interface: input devices including, for example, a touch screen, a touchpad, a keyboard, a mouse, an image sensor, a microphone, an accelerometer and a gyroscope; output devices including, for example, a liquid crystal display (LCD), a speaker and a vibrator; storage devices including, for example, a magnetic tape and a hard disk; and a communication device.
  • The communication device may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. Although the figure shows an electronic device with various devices, it should be understood that it is not required to implement or have all of the devices shown; more or fewer devices may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, where the computer program includes program codes for executing the methods shown in the flowcharts.
  • the computer program may be downloaded and installed from a network via communication means, or installed from a storage means, or installed from a ROM.
  • When the computer program is executed by the processing device, the above functions defined in the methods of the embodiments of the present disclosure are performed.
  • the electronic device provided by the present invention adopts the federated learning modeling optimization method in the above-mentioned embodiment 1 or embodiment 2, and solves the technical problem of low federated learning modeling efficiency due to the need for privacy protection.
  • The beneficial effects of the electronic device provided by the embodiment of the present invention are the same as those of the federated learning modeling optimization method provided by the above embodiments, and the other technical features of the electronic device are the same as those disclosed in the methods of the above embodiments, which will not be repeated here.
  • This embodiment provides a computer-readable storage medium having computer-readable program instructions stored thereon, and the computer-readable program instructions are used to execute the federated learning modeling optimization method in the first embodiment above.
  • The computer-readable storage medium provided by the embodiments of the present invention may be, for example, a USB flash drive, but is not limited thereto; it may be any electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • The computer-readable storage medium may be any tangible medium containing or storing a program, and the program may be used by or in combination with an instruction execution system, apparatus or device.
  • Program code embodied on a computer readable storage medium may be transmitted by any appropriate medium, including but not limited to: wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable storage medium may be included in the electronic device, or may exist independently without being incorporated into the electronic device.
  • The above computer-readable storage medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: obtains first noise data from a local noise data set, and maps the first noise data to initial particle network models according to a local generation network model; obtains local sample data, and iteratively trains and updates each initial particle network model according to the local sample data to obtain target particle network models; obtains second noise data from the local noise data set, and performs federated-learning-based iterative training and updating on the local generation network model according to the target particle network models and the second noise data to obtain a federated generation network model; and obtains a locally selected noise sample from the local noise data set, and converts the locally selected noise sample into a federated prediction network model according to the federated generation network model.
  • Computer program code for carrying out the operations of the present disclosure can be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • Each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The modules involved in the embodiments described in the present disclosure may be implemented by software or by hardware, and the name of a module does not, under certain circumstances, constitute a limitation on the module itself.
  • the computer-readable storage medium provided by the present invention stores computer-readable program instructions for executing the above-mentioned federated learning modeling optimization method, which solves the technical problem of low federated learning modeling efficiency due to the need for privacy protection.
  • the beneficial effect of the computer-readable storage medium provided by the embodiment of the present invention is the same as the beneficial effect of the federated learning modeling optimization method provided by the first or second embodiment above, and will not be repeated here.
  • The present application also provides a computer program product, including a computer program; when the computer program is executed by a processor, the steps of the above federated learning modeling optimization method are implemented.
  • the computer program product provided by this application solves the technical problem of low efficiency of federated learning modeling due to the need for privacy protection.
  • the beneficial effect of the computer program product provided by the embodiment of the present invention is the same as the beneficial effect of the federated learning modeling optimization method provided by the first or second embodiment above, and will not be repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

This application discloses a federated learning modeling optimization method, electronic device, storage medium and program product, applied to a federation participant. The federated learning modeling optimization method includes: obtaining first noise data, and mapping the first noise data to initial particle network models according to a local generation network model; obtaining local sample data, and iteratively training and updating each initial particle network model according to the local sample data to obtain target particle network models; obtaining second noise data, and performing federated-learning-based iterative training and updating on the local generation network model according to the target particle network models and the second noise data to obtain a federated generation network model; and obtaining a locally selected noise sample, and converting the locally selected noise sample into a federated prediction network model according to the federated generation network model.

Description

Federated learning modeling optimization method, electronic device, storage medium and program product
This application claims priority to Chinese patent application No. 202111436781.5, filed on November 29, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence technology in financial technology (Fintech), and in particular to a federated learning modeling optimization method, electronic device, storage medium and program product.
Background
With the continuous development of financial technology, especially Internet finance, more and more technologies (such as distributed computing and artificial intelligence) are being applied in the financial field, but the financial industry also places higher requirements on these technologies, such as higher requirements for the distribution of the industry's corresponding to-do items.
With the continuous development of computer software, artificial intelligence and big-data cloud services, technicians have proposed the concept of federated learning to solve the problem of "data islands". At present, to protect data privacy in federated learning, federated learning modeling is usually realized through privacy protection technologies such as homomorphic encryption or secure multi-party computation. However, the computational overhead of homomorphic encryption is very large, which reduces the efficiency of modeling, while secure multi-party computation involves complex cryptographic operations whose communication and computation overheads are also very large, affecting the efficiency of federated learning modeling. Existing federated learning modeling methods are therefore inefficient.
Technical Problem
The main purpose of this application is to provide a federated learning modeling optimization method, electronic device, storage medium and program product, aiming to solve the technical problem in the prior art that federated learning modeling is inefficient due to the need for privacy protection.
Technical Solution
To achieve the above purpose, this application provides a federated learning modeling optimization method applied to a federation participant, the method comprising:
obtaining first noise data from a local noise dataset, and mapping the first noise data into initial particle network models according to a local generative network model;
obtaining local sample data, and iteratively training and updating each of the initial particle network models on the local sample data to obtain target particle network models;
obtaining second noise data from the local noise dataset, and performing federated-learning-based iterative training and updating of the local generative network model according to the target particle network models and the second noise data, to obtain a federated generative network model;
obtaining a locally selected noise sample from the local noise dataset, and converting the locally selected noise sample into a federated prediction network model according to the federated generative network model.
To achieve the above purpose, this application further provides a federated learning modeling optimization method applied to a federation coordinator, the method comprising:
receiving the local generative network models sent by the federation participants;
aggregating the local generative network models to obtain an aggregated generative network model;
sending the aggregated generative network model to each federation participant, so that each participant iteratively updates its own local generative network model according to the aggregated generative network model to obtain a federated generative network model, and converts its locally selected noise sample into a federated prediction network model according to the federated generative network model.
This application further provides a federated learning modeling optimization apparatus applied to a federation participant, the apparatus comprising:
a first model generation module, configured to obtain first noise data from a local noise dataset and map the first noise data into initial particle network models according to a local generative network model;
a local iterative training module, configured to obtain local sample data and iteratively train and update each initial particle network model on the local sample data to obtain target particle network models;
a federated iterative training module, configured to obtain second noise data from the local noise dataset and perform federated-learning-based iterative training and updating of the local generative network model according to the target particle network models and the second noise data, to obtain a federated generative network model;
a second model generation module, configured to obtain a locally selected noise sample from the local noise dataset and convert it into a federated prediction network model according to the federated generative network model.
This application further provides a federated learning modeling optimization apparatus applied to a federation coordinator, the apparatus comprising:
a receiving module, configured to receive the local generative network models sent by the federation participants;
an aggregation module, configured to aggregate the local generative network models to obtain an aggregated generative network model;
a sending module, configured to send the aggregated generative network model to each federation participant, so that each participant iteratively updates its own local generative network model according to the aggregated generative network model to obtain a federated generative network model and converts its locally selected noise sample into a federated prediction network model according to the federated generative network model.
This application further provides an electronic device comprising a memory, a processor, and a program of the federated learning modeling optimization method stored on the memory and runnable on the processor; when executed by the processor, the program can implement the steps of the federated learning modeling optimization method described above.
This application further provides a computer-readable storage medium storing a program that implements the federated learning modeling optimization method; when executed by a processor, the program implements the steps of the method described above.
This application further provides a computer program product comprising a computer program; when executed by a processor, the computer program implements the steps of the method described above.
Beneficial Effects
This application provides a federated learning modeling optimization method, an electronic device, a storage medium, and a program product. Compared with prior-art approaches that build federated models on privacy-preserving techniques such as homomorphic encryption or secure multi-party computation, this application first obtains first noise data from a local noise dataset and maps it into initial particle network models according to a local generative network model; it then obtains local sample data and iteratively trains and updates each initial particle network model on it to obtain target particle network models; it then obtains second noise data from the local noise dataset and, based on the target particle network models and the second noise data, performs federated-learning-based iterative training and updating of the local generative network model to obtain a federated generative network model. Because the local generative network is a mapping from noise data to particle network models, the federated generative network model is a global mapping from global noise data to global particle network models, which achieves the goal of building a federated generative network model through federated learning. A locally selected noise sample is then obtained from the local noise dataset and converted into a federated prediction network model according to the federated generative network model; that is, the global noise-to-particle-model mapping turns the locally selected noise sample into a federated prediction network model, achieving the goal of indirectly building the federated prediction network model through federated learning, with the generative network model as the medium. Since the generative network model involves neither the concrete parameters of the particle network models nor the local sample data, the data privacy of the federation participants is protected; at the same time, the generative network model can participate directly in computation and communication in plaintext, which greatly reduces the computation and communication overhead of the federated learning process. This overcomes the technical defects of the prior art, where homomorphic encryption is computationally very expensive and secure multi-party computation involves complex cryptographic operations with similarly large communication and computation costs that impair modeling efficiency, and thereby solves the technical problem of low federated learning modeling efficiency caused by the need for privacy protection.
Brief Description of the Drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with this application and, together with the specification, serve to explain the principles of this application.
To describe the technical solutions of the embodiments of this application or of the prior art more clearly, the drawings required for the description of the embodiments or the prior art are introduced briefly below. Obviously, a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a first embodiment of the federated learning modeling optimization method of this application;
FIG. 2 is a schematic flowchart of a second embodiment of the federated learning modeling optimization method of this application;
FIG. 3 is a schematic structural diagram of the hardware operating environment involved in the federated learning modeling optimization method in the embodiments of this application.
The realization of the purpose, the functional features, and the advantages of this application are further described below with reference to the accompanying drawings in conjunction with the embodiments.
Description of the Embodiments
To make the above purposes, features, and advantages of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Embodiment One
An embodiment of this application provides a federated learning modeling optimization method applied to a federation participant. In the first embodiment of the method, referring to FIG. 1, the method includes:
Step S10: obtaining first noise data from a local noise dataset, and mapping the first noise data into initial particle network models according to a local generative network model;
In this embodiment, it should be noted that the method applies to horizontal federated learning and the federation participant is a participant in horizontal federated learning. The local noise dataset includes at least one local noise sample, and the first noise data includes a preset number of first noise samples, where a first noise sample is a local noise sample within the first noise data. Each first noise sample follows a preset data distribution, which may be a Gaussian distribution.
Specifically, first noise samples following a Gaussian distribution are obtained from the local noise dataset and fed individually into the local generative network model, which maps each first noise sample to a corresponding set of particle network model parameters, yielding the initial particle network models. A first noise sample may be an image, a sound, a specific matrix, or the like; the local generative network model is a machine learning model maintained locally by the federation participant for generating particle network model parameters; and a particle network model may be a classification network model, a logistic regression model, or the like.
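The noise-to-parameters mapping can be pictured as a small hypernetwork. The following is a minimal PyTorch sketch of this step, not the patent's implementation: the noise dimensionality, the hidden layer size, and the tiny classifier shape are all assumed for illustration.

```python
import math
import torch
import torch.nn as nn

NOISE_DIM = 64                                      # assumed noise-sample dimension
PARTICLE_SHAPES = [(16, 4), (16,), (3, 16), (3,)]   # assumed shapes of a tiny classifier

class LocalGenerator(nn.Module):
    """Maps each Gaussian noise sample to the parameters of one particle model."""
    def __init__(self, noise_dim, shapes):
        super().__init__()
        self.shapes = shapes
        total = sum(math.prod(s) for s in shapes)
        self.net = nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(),
                                 nn.Linear(128, total))

    def forward(self, z):
        flat = self.net(z)                          # (n_particles, total_params)
        params, offset = [], 0
        for shape in self.shapes:                   # split back into per-layer tensors
            n = math.prod(shape)
            params.append(flat[:, offset:offset + n].reshape(-1, *shape))
            offset += n
        return params                               # list of (n_particles, *shape) tensors

gen = LocalGenerator(NOISE_DIM, PARTICLE_SHAPES)
z1 = torch.randn(8, NOISE_DIM)                      # first noise data: 8 Gaussian samples
initial_particles = gen(z1)                         # parameters of 8 initial particle models
```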
Step S20: obtaining local sample data, and iteratively training and updating each of the initial particle network models on the local sample data to obtain target particle network models;
In this embodiment, it should be noted that the local sample data is the local private data of the federation participant's device and includes local training samples and their corresponding local sample labels.
Specifically, the local training samples are fed into each initial particle network model, model prediction is performed on them, and each initial particle network model outputs model prediction labels; a model prediction loss is computed for each initial particle network model from the distance between its prediction labels and the local sample labels; each initial particle network model is then updated according to its own prediction loss, yielding the target particle network models.
The local sample data includes local training samples and local sample labels, the initial particle network model includes an initial particle classification network model, and the target particle network model includes a target particle classification network model.
The step of iteratively training and updating each initial particle network model on the local sample data to obtain the target particle network models includes:
Step S21: classifying the local training samples with each initial particle classification network model to obtain classification prediction labels;
Step S22: computing a classification loss from the classification prediction labels and the local sample labels;
Step S23: updating each initial particle classification network model according to the classification loss to obtain the target particle classification network models.
In this embodiment, specifically, the local training samples are fed into each initial particle classification network model, which classifies them and outputs classification prediction labels; a classification loss is computed for each set of prediction labels from its distance to the local sample labels; a model gradient is then computed for the corresponding initial particle classification network model from each classification loss, and each model is updated with its gradient, yielding the target particle classification network models.
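For intuition, a minimal sketch of this local supervised update follows. It assumes the generated parameters have already been loaded into an ordinary classifier module (the weight-loading plumbing is omitted) and uses cross-entropy as the distance between prediction labels and local sample labels; the epoch count and learning rate are illustrative.

```python
import torch
import torch.nn.functional as F

def train_particle(model, x_train, y_train, epochs=5, lr=1e-2):
    """Iteratively update one initial particle classifier on the local private data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss = None
    for _ in range(epochs):
        logits = model(x_train)                   # classification prediction labels
        loss = F.cross_entropy(logits, y_train)   # distance to the local sample labels
        opt.zero_grad()
        loss.backward()
        opt.step()                                # gradient update of this particle
    return model, loss.item()

# target particle models: one locally trained copy per initial particle
# target_particles = [train_particle(m, x_local, y_local)[0] for m in particle_modules]
```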
Step S30: obtaining second noise data from the local noise dataset, and performing federated-learning-based iterative training and updating of the local generative network model according to the target particle network models and the second noise data, to obtain a federated generative network model;
In this embodiment, it should be noted that the second noise data includes at least one second noise sample; each second noise sample follows a preset data distribution, which may be a Gaussian distribution; a second noise sample is a local noise sample within the second noise data; and the target particle network model may be a target particle classification network model.
Specifically, second noise samples following a Gaussian distribution are obtained from the local noise dataset; with the second noise samples as training samples and the target particle classification network models as target labels, the local generative network model undergoes federated-learning-based iterative training and updating, yielding the federated generative network model.
The second noise data includes at least one second noise sample, and the target particle network model includes a target particle classification network model.
The step of performing federated-learning-based iterative training and updating of the local generative network model according to the target particle network models and the second noise data to obtain the federated generative network model includes:
Step S31: mapping each second noise sample into a training particle classification network model according to the local generative network model;
Step S32: performing federated-learning-based iterative training and updating of the local generative network model according to a similarity loss computed between the training particle classification network models and the target particle classification network models, to obtain the federated generative network model.
In this embodiment, specifically, each second noise sample is fed into the local generative network model and mapped to corresponding training particle network parameters, yielding the training particle classification network models, where a training particle classification network model is a classification network model carrying those training particle network parameters. A similarity loss is computed from the similarity between the model parameter distribution of the training particle classification network models and that of the target particle classification network models, and the local generative network model is updated according to this loss. The updated local generative network model is then sent to the federation coordinator, which aggregates the local generative network models sent by all federation participants into an aggregated generative network model; the participant then receives the aggregated generative network model from the coordinator, takes it as its new local generative network model, and returns to the step of obtaining first noise data from the local noise dataset, until the local generative network model and all initial particle classification network models satisfy a preset end-of-iteration condition, at which point the local generative network model is taken as the federated generative network model. The preset end-of-iteration condition may be that the loss functions converge, or that the model reaches a preset maximum number of iterations.
The initial particle network model includes an initial particle classification network model, and the step of performing federated-learning-based iterative training and updating of the local generative network model according to the similarity loss computed between the training particle classification network models and the target particle classification network models to obtain the federated generative network model includes:
Step S321: computing a similarity loss from the similarity between the model parameter distribution of the training particle classification network models and that of the target particle classification network models;
In this embodiment, it should be noted that the similarity loss includes a KL divergence loss.
Specifically, a KL divergence loss is computed from the similarity between the model parameter distribution of the training particle classification network models and that of the target particle classification network models. The purpose of computing the KL divergence loss is to fit the two parameter distributions to each other, so that the parameter distribution of the training particle classification network models matches that of the target particle classification network models.
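One plausible way to instantiate this loss, under assumptions the text does not fix: fit a diagonal Gaussian over the particle dimension of each flattened parameter set and compute the closed-form KL divergence between the two Gaussians. The input format matches the generator sketch above (lists of tensors with a leading particle dimension; at least two particles per side are needed for the variance).

```python
import torch

def gaussian_kl_loss(train_layers, target_layers, eps=1e-6):
    """KL(p_train || p_target) between diagonal Gaussians fitted to the flattened
    parameters of the training and target particle classification models."""
    p = torch.cat([w.flatten(1) for w in train_layers], dim=1)    # (k, total_params)
    q = torch.cat([w.flatten(1) for w in target_layers], dim=1)   # (k, total_params)
    mu_p, var_p = p.mean(0), p.var(0) + eps
    mu_q, var_q = q.mean(0), q.var(0) + eps
    # closed-form KL divergence between two diagonal Gaussians, summed over dimensions
    kl = 0.5 * (torch.log(var_q / var_p)
                + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)
    return kl.sum()
```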
Step S322: judging whether the local generative network model and all initial particle classification network models satisfy a preset end-of-iteration condition;
Step S323: if so, taking the local generative network model as the federated generative network model;
Step S324: if not, updating the local generative network model according to the similarity loss;
Step S325: sending the updated local generative network model to the federation coordinator, so that the coordinator aggregates the local generative network models sent by the federation participants into an aggregated generative network model;
Step S326: receiving the aggregated generative network model from the coordinator, taking it as the new local generative network model, and returning to the step of obtaining first noise data from the local noise dataset, until the local generative network model and all initial particle classification network models satisfy the preset end-of-iteration condition.
In this embodiment, specifically, it is judged whether the local generative network model and all initial particle classification network models in the federated learning modeling process satisfy the preset end-of-iteration condition. If they all do, this proves that the local generative network model meets the federated modeling requirements and that the particle classification network models it outputs do as well, so the local generative network model is taken as the federated generative network model; the federated modeling requirement may be a model accuracy requirement. If they do not all satisfy the condition, this proves that either the local generative network model or the particle classification network models it outputs fail the modeling requirements; the updated local generative network model is therefore sent to the federation coordinator, which aggregates the local generative network models sent by all participants into an aggregated generative network model. The participant receives the aggregated generative network model, takes it as its new local generative network model, and returns to the step of obtaining first noise data from the local noise dataset for the next round of iteration, until the local generative network model and all initial particle classification network models satisfy the preset end-of-iteration condition.
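Putting steps S321 to S326 together, one participant-side round might look like the sketch below. It reuses the generator and loss sketches above; `train_particles_locally`, `send_to_coordinator`, and `receive_aggregate` are hypothetical helpers standing in for the local training loop and for whatever transport layer a deployment would use.

```python
import torch

def participant_round(gen, gen_opt, x_local, y_local, noise_dim=64, k=8):
    """One federated round on the participant side (a sketch, not the patent's code)."""
    # first noise data -> initial particle models, trained locally into fixed targets
    z1 = torch.randn(k, noise_dim)
    with torch.no_grad():
        initial = [p.clone() for p in gen(z1)]
    target, cls_losses = train_particles_locally(initial, x_local, y_local)  # hypothetical
    # second noise data -> training particle models fitted to the targets
    z2 = torch.randn(k, noise_dim)
    sim_loss = gaussian_kl_loss(gen(z2), [t.detach() for t in target])
    gen_opt.zero_grad()
    sim_loss.backward()                           # only the generator is updated here
    gen_opt.step()
    # plaintext exchange of generator parameters with the coordinator
    send_to_coordinator(gen.state_dict())         # hypothetical transport stub
    gen.load_state_dict(receive_aggregate())      # hypothetical transport stub
    return cls_losses, sim_loss.item()
```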
The step of judging whether the local generative network model and all initial particle classification network models satisfy the preset end-of-iteration condition includes:
Step A10: obtaining the classification loss corresponding to each initial particle classification network, and judging whether all classification losses and the similarity loss have converged;
Step A20: if all classification losses and the similarity loss have converged, determining that the local generative network model and all initial particle classification network models satisfy the preset end-of-iteration condition;
Step A30: if the classification losses and the similarity loss have not all converged, determining that the local generative network model and the initial particle classification network models do not all satisfy the preset end-of-iteration condition.
In this embodiment, it should be noted that the classification loss is the classification loss computed when each initial particle network model is iteratively trained and updated on the local sample data to obtain the target particle network models; classification losses and initial particle network models correspond one-to-one.
Specifically, the classification loss corresponding to each initial particle classification network is obtained, and it is judged whether all classification losses and the similarity loss have converged. If they have, this proves that both the local generative network model and the particle classification network models it outputs meet the model accuracy requirement, so it is determined that the local generative network model and all initial particle classification network models satisfy the preset end-of-iteration condition; if they have not all converged, this proves that the local generative network model or the particle classification network models it outputs fail the accuracy requirement, so it is determined that the end-of-iteration condition is not yet satisfied.
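A simple instantiation of this convergence test, assuming each loss is tracked as a history of per-round values (the tolerance and window are illustrative):

```python
def has_converged(history, tol=1e-4, window=5):
    """A loss is treated as converged once its last few values stop moving by more than tol."""
    if len(history) <= window:
        return False
    recent = history[-(window + 1):]
    return max(abs(a - b) for a, b in zip(recent, recent[1:])) < tol

# end condition: the similarity loss and every particle's classification loss converge
# done = has_converged(sim_history) and all(has_converged(h) for h in cls_histories)
```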
Step S40: obtaining a locally selected noise sample from the local noise dataset, and converting the locally selected noise sample into a federated prediction network model according to the federated generative network model.
In this embodiment, it should be noted that each federation participant maintains a specific locally selected noise sample, and different participants can select different noise samples, which ensures that the concrete model parameters of one participant's federated prediction network model cannot be obtained by the other participants.
Specifically, the locally selected noise sample chosen from the local noise dataset is obtained and fed into the federated generative network model, which maps it to corresponding particle network parameters, yielding the federated prediction network model; the federated prediction network model may be a federated classification network model or a federated logistic regression model.
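Continuing the illustrative code above, this final conversion is a single forward pass of the federated generator on a noise sample that never leaves the participant; the names reuse the earlier sketches.

```python
import torch

# each participant keeps its own selected noise sample private, so the concrete
# parameters of its federated prediction network model stay unknown to the others
z_selected = torch.randn(1, NOISE_DIM)
with torch.no_grad():
    fed_params = gen(z_selected)   # gen now holds the federated generative network model
# loading `fed_params` into a classifier skeleton yields the federated prediction model
```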
In addition, it should be noted that since the local generative network model does not directly process raw sample data such as images or audio, the number of its parameters can usually be set to be far smaller than the number of parameters of a particle network model. Because the data exchanged between participant and coordinator consists of the local generative network model's parameters, the communication and computation overhead between them is reduced, which likewise improves the efficiency of federated learning modeling.
After the step of converting the locally selected noise sample into the federated prediction network model according to the federated generative network model, the federated learning modeling optimization method further includes:
Step S50: receiving public sample data issued by the federation coordinator;
Step S60: performing model prediction on the public sample data with the federated prediction network model to obtain a test prediction result;
Step S70: sending the test prediction result to the federation coordinator, so that the coordinator predicts, from the test prediction results sent by the federation participants, the distances between the model parameters of the participants' federated prediction network models, obtaining the model parameter distances.
In this embodiment, it should be noted that the public sample data includes at least one public sample.
Specifically, the public samples issued by the coordinator are received and fed individually into the federated prediction network model, which performs model prediction on each of them to obtain the test prediction result, where the test prediction result includes at least one output of the federated prediction network model for a public sample. The test prediction result is then sent to the coordinator, which predicts the pairwise distances between the model parameters of the participants' federated prediction network models from the distances between the test prediction results they sent, obtaining the model parameter distances, which can be used for privacy-protection evaluation.
In one implementation, the test prediction result may be a test prediction result vector, which includes at least one test output value of the federated prediction network model for a public sample.
The coordinator's step of predicting the distances between the model parameters of the participants' federated prediction network models from the distances between the participants' test prediction results includes:
computing the distances between the test prediction result vectors sent by the participants to obtain target result vector distances; and determining, from the mapping between result vector distances and model parameter distances, the model parameter distance corresponding to each target result vector distance.
After Step S70, a concrete implementation of the privacy-protection evaluation based on the model parameter distances is as follows:
It is judged whether all model parameter distances exceed a preset parameter distance threshold. If they all do, the privacy-protection evaluation result is a pass; if not, the result is a fail, and the federated generative network model can further be retrained via federated learning. Thus, building on the construction of the federated generative network model, this embodiment also provides a privacy-protection evaluation method: the coordinator issues public sample data to the participants, collects each participant's test prediction result, predicts the model parameter distances between participants from the distances between their test prediction results, and then evaluates privacy protection based on these distances. This prevents privacy leaks that could arise when the model parameters of different participants' federated prediction network models are too similar.
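A sketch of this evaluation on the coordinator side, under assumptions the text leaves open: Euclidean distance between result vectors, and a caller-supplied `dist_to_param` mapping and `threshold` (both hypothetical).

```python
import itertools
import torch

def privacy_evaluation(test_results, dist_to_param, threshold):
    """test_results: {participant_id: prediction vector on the public samples}.
    Passes only if every predicted pairwise parameter distance exceeds the threshold."""
    passed = True
    for (a, va), (b, vb) in itertools.combinations(test_results.items(), 2):
        d_out = torch.norm(va - vb).item()     # distance between result vectors
        d_param = dist_to_param(d_out)         # mapped to a model parameter distance
        if d_param <= threshold:
            print(f"participants {a} and {b}: parameters too similar, evaluation fails")
            passed = False
    return passed
```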
In addition, it should be noted that during the privacy-protection evaluation, the data exchanged between participants and coordinator consists of test prediction results whose size depends only on the number of participants and the number of particle network models; there is no need to transmit the model parameters themselves, which saves communication and computation overhead during the evaluation and improves the efficiency of federated learning privacy-protection evaluation.
It should also be noted that the similarity loss in this embodiment is a KL divergence loss, which fits the parameter distribution of the training particle classification network models to that of the target particle classification network models as a whole, improving the stability of the local generative network model and making it less susceptible to external attacks.
As an example, the initial particle network model may be an image classification particle network model, the local sample data may be training image samples, and the target particle network model is then the image classification particle network model after iterative training and updating, i.e., a target image classification particle network model; the federated generative network model is used to generate image classification models, and the federated prediction network model may be a federated image classification model. This embodiment thus achieves the goal of indirectly building a federated image classification model through federated learning, with the generative network model as the medium. Since the generative network model involves neither the concrete parameters of the image classification particle network models nor the local training image samples, the image data privacy of the federation participants is protected; at the same time, the generative network model can participate directly in computation and communication in plaintext, greatly reducing the computation and communication overhead of building the federated image classification model and thereby improving its efficiency.
In addition, it should be noted that privacy-preserving techniques also include differential privacy. When federated learning is performed with differential privacy, noise must be added to the samples to protect privacy, and this noise affects data usability and model accuracy, lowering the accuracy of federated learning modeling. In the embodiments of this application, noise is not added directly to the samples; instead, the mapping from noise of a specific distribution to prediction network models is learned in a federated manner, yielding the federated generative network model, from which the federated prediction network model can be generated. As long as the accuracy of the local generative network model and of the initial particle network models it outputs is guaranteed, the accuracy of the federated prediction network model output by the federated generative network model is guaranteed as well. Compared with privacy-preserving techniques such as differential privacy, the accuracy of federated learning modeling is therefore improved.
This embodiment of the application provides a federated learning modeling optimization method. Compared with prior-art approaches based on privacy-preserving techniques such as homomorphic encryption or secure multi-party computation, this embodiment first obtains first noise data from a local noise dataset and maps it into initial particle network models according to the local generative network model; it then obtains local sample data and iteratively trains and updates each initial particle network model on it to obtain target particle network models; it then obtains second noise data from the local noise dataset and, based on the target particle network models and the second noise data, performs federated-learning-based iterative training and updating of the local generative network model to obtain the federated generative network model. Because the local generative network is a mapping from noise data to particle network models, the federated generative network model is a global mapping from global noise data to global particle network models, achieving the goal of building a federated generative network model through federated learning. A locally selected noise sample obtained from the local noise dataset can then be converted into a federated prediction network model according to the federated generative network model, achieving the goal of indirectly building the federated prediction network model through federated learning with the generative network model as the medium. Since the generative network model involves neither the concrete parameters of the particle network models nor the local sample data, participant data privacy is protected, and since the generative network model can participate directly in computation and communication in plaintext, the computation and communication overhead of federated learning is greatly reduced. This overcomes the prior art's defects, where homomorphic encryption is computationally very expensive and secure multi-party computation involves complex cryptographic operations with similarly large costs that impair modeling efficiency, and thereby solves the technical problem of low federated learning modeling efficiency caused by the need for privacy protection.
Embodiment Two
Further, referring to FIG. 2 and building on the first embodiment of this application, in another embodiment, content identical or similar to Embodiment One above can be understood with reference to the description above and is not repeated. On this basis, the federated learning modeling optimization method is applied to the federation coordinator and includes:
Step B10: receiving the local generative network models sent by the federation participants;
Step B20: aggregating the local generative network models to obtain an aggregated generative network model;
Step B30: sending the aggregated generative network model to each federation participant, so that each participant iteratively updates its own local generative network model according to the aggregated generative network model to obtain a federated generative network model, and converts its locally selected noise sample into a federated prediction network model according to the federated generative network model.
In this embodiment, specifically, the local generative network models sent by the federation participants are received, where each participant obtains first noise data from its local noise dataset, maps it into initial particle network models according to its local generative network model, obtains local sample data, iteratively trains and updates each initial particle network model on it to obtain target particle network models, obtains second noise data from the local noise dataset, updates the local generative network model according to the target particle network models and the second noise data, and sends the updated local generative network model to the federation coordinator. The coordinator aggregates the local generative network models according to a preset aggregation rule, which may be weighted averaging or weighted summation, to obtain the aggregated generative network model, and sends it to each participant, so that each participant takes it as its new local generative network model and returns to the step of obtaining first noise data from the local noise dataset for the next round of iteration, until the local generative network model and all initial particle classification network models satisfy the preset end-of-iteration condition, at which point the local generative network model is taken as the federated generative network model and the locally selected noise sample is converted into the federated prediction network model accordingly. For the steps performed by the participant, refer to steps S10 to S40 above, which are not repeated here.
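The aggregation itself can be as simple as a weighted average of the plaintext generator parameters. The sketch below assumes `state_dict`-style inputs and equal weights by default; weighted summation would simply drop the normalisation.

```python
def aggregate_generators(state_dicts, weights=None):
    """Weighted average of the participants' local generative network models."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    return {key: sum(w * sd[key] for w, sd in zip(weights, state_dicts))
            for key in state_dicts[0]}

# aggregated = aggregate_generators([sd_from_participant_1, sd_from_participant_2])
```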
After the step of sending the aggregated generative network model to the federation participants so that they iteratively update their local generative network models to obtain the federated generative network model and convert their locally selected noise samples into federated prediction network models, the federated learning modeling optimization method further includes:
Step B40: obtaining public sample data;
Step B50: sending the public sample data to each federation participant, so that each participant performs model prediction on it with its own federated prediction network model to obtain a test prediction result;
Step B60: receiving the test prediction results sent by the participants, and predicting, from the test prediction results, the distances between the model parameters of the participants' federated prediction network models, obtaining the model parameter distances.
In this embodiment, specifically, public samples are obtained and sent to each federation participant, so that each participant performs model prediction on them with its federated prediction network model to obtain a test prediction result, where the test prediction result includes at least one output of the federated prediction network model for a public sample. The coordinator then receives the participants' test prediction results, predicts the distances between the model parameters of the participants' federated prediction network models from the distances between those results to obtain the model parameter distances, and judges whether all model parameter distances exceed a preset parameter distance threshold: if they all do, the privacy-protection evaluation result is a pass; otherwise it is a fail. Thus, building on the construction of the federated generative network model, this embodiment also provides a privacy-protection evaluation method: the coordinator issues public sample data to the participants, collects each participant's test prediction result, predicts the model parameter distances between participants from the distances between their test prediction results, and evaluates privacy protection based on these distances, preventing privacy leaks that could arise when different participants' federated prediction network models have overly similar parameters.
This embodiment of the application provides a federated learning modeling optimization method. Compared with prior-art approaches based on privacy-preserving techniques such as homomorphic encryption or secure multi-party computation, this embodiment first receives the local generative network models sent by the federation participants, aggregates them into an aggregated generative network model, and sends it to each participant, so that each participant iteratively updates its own local generative network model accordingly to obtain a federated generative network model and converts its locally selected noise sample into a federated prediction network model, achieving the goal of indirectly building the federated prediction network model through federated learning with the generative network model as the medium. Since the generative network model involves neither the concrete parameters of the particle network models nor the local sample data, participant data privacy is protected, and since the generative network model can participate directly in computation and communication in plaintext, the computation and communication overhead of federated learning is greatly reduced. This overcomes the prior art's defects, where homomorphic encryption is computationally very expensive and secure multi-party computation involves complex cryptographic operations with similarly large costs that impair modeling efficiency, and thereby solves the technical problem of low federated learning modeling efficiency caused by the need for privacy protection.
Embodiment Three
An embodiment of this application further provides a federated learning modeling optimization apparatus applied to a federation participant, the apparatus comprising:
a first model generation module, configured to obtain first noise data from a local noise dataset and map the first noise data into initial particle network models according to a local generative network model;
a local iterative training module, configured to obtain local sample data and iteratively train and update each initial particle network model on the local sample data to obtain target particle network models;
a federated iterative training module, configured to obtain second noise data from the local noise dataset and perform federated-learning-based iterative training and updating of the local generative network model according to the target particle network models and the second noise data, to obtain a federated generative network model;
a second model generation module, configured to obtain a locally selected noise sample from the local noise dataset and convert it into a federated prediction network model according to the federated generative network model.
Optionally, the second noise data includes at least one second noise sample, the target particle network model includes a target particle classification network model, and the federated iterative training module is further configured to:
map each second noise sample into a training particle classification network model according to the local generative network model;
perform federated-learning-based iterative training and updating of the local generative network model according to the similarity loss computed between the training particle classification network models and the target particle classification network models, to obtain the federated generative network model.
Optionally, the initial particle network model includes an initial particle classification network model, and the federated iterative training module is further configured to:
compute a similarity loss from the similarity between the model parameter distribution of the training particle classification network models and that of the target particle classification network models;
judge whether the local generative network model and all initial particle classification network models satisfy a preset end-of-iteration condition;
if so, take the local generative network model as the federated generative network model;
if not, update the local generative network model according to the similarity loss;
send the updated local generative network model to the federation coordinator, so that the coordinator aggregates the local generative network models sent by the federation participants into an aggregated generative network model;
receive the aggregated generative network model from the coordinator, take it as the new local generative network model, and return to the step of obtaining first noise data from the local noise dataset, until the local generative network model and all initial particle classification network models satisfy the preset end-of-iteration condition.
Optionally, the federated iterative training module is further configured to:
obtain the classification loss corresponding to each initial particle classification network, and judge whether all classification losses and the similarity loss have converged;
if all classification losses and the similarity loss have converged, determine that the local generative network model and all initial particle classification network models satisfy the preset end-of-iteration condition;
if the classification losses and the similarity loss have not all converged, determine that the local generative network model and the initial particle classification network models do not all satisfy the preset end-of-iteration condition.
Optionally, the local sample data includes local training samples and local sample labels, the initial particle network model includes an initial particle classification network model, the target particle network model includes a target particle classification network model, and the local iterative training module is further configured to:
classify the local training samples with each initial particle classification network model to obtain classification prediction labels;
compute a classification loss from the classification prediction labels and the local sample labels;
update each initial particle classification network model according to the classification loss to obtain the target particle classification network models.
Optionally, the federated learning modeling optimization apparatus is further configured to:
receive public sample data issued by the federation coordinator;
perform model prediction on the public sample data with the federated prediction network model to obtain a test prediction result;
send the test prediction result to the federation coordinator, so that the coordinator predicts, from the test prediction results sent by the federation participants, the distances between the model parameters of the participants' federated prediction network models, obtaining the model parameter distances.
The federated learning modeling optimization apparatus provided by the present invention adopts the federated learning modeling optimization method of Embodiment One above and solves the technical problem of low federated learning modeling efficiency caused by the need for privacy protection. Compared with the prior art, the beneficial effects of this apparatus are the same as those of the method provided by the embodiment above, and its other technical features are the same as those disclosed by the method of the embodiment above, which are not repeated here.
Embodiment Four
An embodiment of this application further provides a federated learning modeling optimization apparatus applied to a federation coordinator, the apparatus comprising:
a receiving module, configured to receive the local generative network models sent by the federation participants;
an aggregation module, configured to aggregate the local generative network models to obtain an aggregated generative network model;
a sending module, configured to send the aggregated generative network model to each federation participant, so that each participant iteratively updates its own local generative network model according to the aggregated generative network model to obtain a federated generative network model and converts its locally selected noise sample into a federated prediction network model according to the federated generative network model.
Optionally, the federated learning modeling optimization apparatus further comprises:
an acquisition module, configured to obtain public sample data;
a testing module, configured to send the public sample data to each federation participant, so that each participant performs model prediction on it with its own federated prediction network model to obtain a test prediction result;
a model parameter distance prediction module, configured to receive the test prediction results sent by the participants and predict, from these results, the distances between the model parameters of the participants' federated prediction network models, obtaining the model parameter distances.
The federated learning modeling optimization apparatus provided by the present invention adopts the federated learning modeling optimization method of Embodiment Two above and solves the technical problem of low federated learning modeling efficiency caused by the need for privacy protection. Compared with the prior art, the beneficial effects of this apparatus are the same as those of the method provided by the embodiment above, and its other technical features are the same as those disclosed by the method of the embodiment above, which are not repeated here.
Embodiment Five
An embodiment of the present invention provides an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the federated learning modeling optimization method of Embodiment One or Embodiment Two above.
Referring now to FIG. 3, a schematic structural diagram of an electronic device suitable for implementing the embodiments of the present disclosure is shown. The electronic device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 3 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 3, the electronic device may include a processing apparatus (e.g., a central processing unit, a graphics processing unit, etc.), which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) or a program loaded from a storage apparatus into a random access memory (RAM). The RAM also stores various programs and data required for the operation of the electronic device. The processing apparatus, the ROM, and the RAM are connected to one another via a bus, and an input/output (I/O) interface is also connected to the bus.
Generally, the following systems may be connected to the I/O interface: input apparatuses including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, and gyroscope; output apparatuses including, for example, a liquid crystal display (LCD), speaker, and vibrator; storage apparatuses including, for example, a magnetic tape and hard disk; and a communication apparatus. The communication apparatus may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. Although the figure shows an electronic device with various systems, it should be understood that it is not required to implement or possess all of the systems shown; more or fewer systems may alternatively be implemented or possessed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via the communication apparatus, installed from the storage apparatus, or installed from the ROM. When executed by the processing apparatus, the computer program performs the above functions defined in the methods of the embodiments of the present disclosure.
The electronic device provided by the present invention adopts the federated learning modeling optimization method of Embodiment One or Embodiment Two above and solves the technical problem of low federated learning modeling efficiency caused by the need for privacy protection. Compared with the prior art, the beneficial effects of this electronic device are the same as those of the method provided by Embodiment One above, and its other technical features are the same as those disclosed by the method of the embodiments above, which are not repeated here.
It should be understood that the parts of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, specific features, structures, materials, or characteristics may be combined in a suitable manner in any one or more embodiments or examples.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions that can readily occur to a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Embodiment Six
This embodiment provides a computer-readable storage medium having computer-readable program instructions stored thereon, the computer-readable program instructions being used to perform the federated learning modeling optimization method of Embodiment One above.
The computer-readable storage medium provided by this embodiment of the present invention may be, for example, a USB flash drive, but is not limited to electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this embodiment, the computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in connection with an instruction execution system, apparatus, or device. The program code contained on the computer-readable storage medium may be transmitted by any appropriate medium, including but not limited to electric wires, optical cables, RF (radio frequency), and the like, or any suitable combination of the above.
The above computer-readable storage medium may be contained in the electronic device, or it may exist separately without being assembled into the electronic device.
The above computer-readable storage medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain first noise data from a local noise dataset and map the first noise data into initial particle network models according to a local generative network model; obtain local sample data and iteratively train and update each initial particle network model on the local sample data to obtain target particle network models; obtain second noise data from the local noise dataset and perform federated-learning-based iterative training and updating of the local generative network model according to the target particle network models and the second noise data, obtaining a federated generative network model; and obtain a locally selected noise sample from the local noise dataset and convert it into a federated prediction network model according to the federated generative network model.
Or alternatively: receive the local generative network models sent by the federation participants; aggregate the local generative network models to obtain an aggregated generative network model; and send the aggregated generative network model to each participant, so that each participant iteratively updates its own local generative network model accordingly to obtain a federated generative network model and converts its locally selected noise sample into a federated prediction network model according to the federated generative network model.
The computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar. The program code may execute entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the internet using an internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a part of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules involved in the embodiments described in the present disclosure may be implemented in software or in hardware; in some cases, the name of a module does not constitute a limitation on the unit itself.
The computer-readable storage medium provided by the present invention stores computer-readable program instructions for performing the above federated learning modeling optimization method, solving the technical problem of low federated learning modeling efficiency caused by the need for privacy protection. Compared with the prior art, the beneficial effects of this computer-readable storage medium are the same as those of the method provided by Embodiment One or Embodiment Two above and are not repeated here.
Embodiment Seven
This application further provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the federated learning modeling optimization method described above.
The computer program product provided by this application solves the technical problem of low federated learning modeling efficiency caused by the need for privacy protection. Compared with the prior art, its beneficial effects are the same as those of the method provided by Embodiment One or Embodiment Two above and are not repeated here.
The above are only preferred embodiments of this application and do not thereby limit its patent scope. Any equivalent structural or process transformation made using the contents of the specification and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (20)

  1. A federated learning modeling optimization method, applied to a federation participant, the federated learning modeling optimization method comprising:
    obtaining first noise data from a local noise dataset, and mapping the first noise data into initial particle network models according to a local generative network model;
    obtaining local sample data, and iteratively training and updating each of the initial particle network models on the local sample data to obtain target particle network models;
    obtaining second noise data from the local noise dataset, and performing federated-learning-based iterative training and updating of the local generative network model according to the target particle network models and the second noise data, to obtain a federated generative network model;
    obtaining a locally selected noise sample from the local noise dataset, and converting the locally selected noise sample into a federated prediction network model according to the federated generative network model.
  2. The federated learning modeling optimization method of claim 1, wherein the second noise data includes at least one second noise sample, and the target particle network model includes a target particle classification network model,
    the step of performing federated-learning-based iterative training and updating of the local generative network model according to the target particle network models and the second noise data to obtain the federated generative network model comprising:
    mapping each second noise sample into a training particle classification network model according to the local generative network model;
    performing federated-learning-based iterative training and updating of the local generative network model according to a similarity loss computed between the training particle classification network models and the target particle classification network models, to obtain the federated generative network model.
  3. The federated learning modeling optimization method of claim 2, wherein the initial particle network model includes an initial particle classification network model,
    the step of performing federated-learning-based iterative training and updating of the local generative network model according to the similarity loss computed between the training particle classification network models and the target particle classification network models to obtain the federated generative network model comprising:
    computing a similarity loss from the similarity between the model parameter distribution of the training particle classification network models and that of the target particle classification network models;
    judging whether the local generative network model and all the initial particle classification network models satisfy a preset end-of-iteration condition;
    if so, taking the local generative network model as the federated generative network model;
    if not, updating the local generative network model according to the similarity loss;
    sending the updated local generative network model to a federation coordinator, so that the federation coordinator aggregates the local generative network models sent by the federation participants into an aggregated generative network model;
    receiving the aggregated generative network model from the federation coordinator, taking it as the new local generative network model, and returning to the step of obtaining first noise data from the local noise dataset, until the local generative network model and all the initial particle classification network models satisfy the preset end-of-iteration condition.
  4. The federated learning modeling optimization method of claim 3, wherein the step of judging whether the local generative network model and all the initial particle classification network models satisfy the preset end-of-iteration condition comprises:
    obtaining the classification loss corresponding to each initial particle classification network, and judging whether all the classification losses and the similarity loss have converged;
    if all the classification losses and the similarity loss have converged, determining that the local generative network model and all the initial particle classification network models satisfy the preset end-of-iteration condition;
    if the classification losses and the similarity loss have not all converged, determining that the local generative network model and the initial particle classification network models do not all satisfy the preset end-of-iteration condition.
  5. The federated learning modeling optimization method of claim 1, wherein the local sample data includes local training samples and local sample labels, the initial particle network model includes an initial particle classification network model, and the target particle network model includes a target particle classification network model,
    the step of iteratively training and updating each of the initial particle network models on the local sample data to obtain the target particle network models comprising:
    classifying the local training samples with each initial particle classification network model to obtain classification prediction labels;
    computing a classification loss from the classification prediction labels and the local sample labels;
    updating each initial particle classification network model according to the classification loss to obtain the target particle classification network models.
  6. The federated learning modeling optimization method of claim 1, wherein after the step of converting the locally selected noise sample into the federated prediction network model according to the federated generative network model, the federated learning modeling optimization method further comprises:
    receiving public sample data issued by a federation coordinator;
    performing model prediction on the public sample data with the federated prediction network model to obtain a test prediction result;
    sending the test prediction result to the federation coordinator, so that the federation coordinator predicts, from the test prediction results sent by the federation participants, the distances between the model parameters of the participants' federated prediction network models, obtaining the model parameter distances.
  7. The federated learning modeling optimization method of claim 1, wherein the local noise dataset includes at least one local noise sample, the first noise data includes a preset number of first noise samples, a first noise sample is a local noise sample within the first noise data, each first noise sample follows a preset data distribution, and the preset data distribution may be a Gaussian distribution.
  8. The federated learning modeling optimization method of claim 7, wherein the step of obtaining first noise data from the local noise dataset and mapping the first noise data into initial particle network models according to the local generative network model comprises:
    obtaining first noise samples following a Gaussian distribution from the local noise dataset;
    feeding each first noise sample into the local generative network model;
    mapping each first noise sample to corresponding particle network model parameters, obtaining the initial particle network models.
  9. The federated learning modeling optimization method of claim 8, wherein the first noise sample is an image, a sound, a specific matrix, or the like, the local generative network model is a machine learning model maintained locally by the federation participant for generating particle network model parameters, and the particle network model is a classification network model or a logistic regression model.
  10. The federated learning modeling optimization method of claim 1, wherein the local sample data is local private data of the federation participant's device, and the local sample data includes local training samples and the local sample labels corresponding to the local training samples.
  11. The federated learning modeling optimization method of claim 10, wherein the step of obtaining local sample data and iteratively training and updating each of the initial particle network models on the local sample data to obtain the target particle network models comprises:
    feeding the local training samples into each initial particle network model and performing model prediction on them, obtaining the model prediction labels output by each initial particle network model;
    computing a model prediction loss for each initial particle network model from the distance between its model prediction labels and the local sample labels;
    updating each initial particle network model according to its own model prediction loss, obtaining the target particle network models.
  12. The federated learning modeling optimization method of claim 2, wherein the similarity loss includes a KL divergence loss.
  13. A federated learning modeling optimization method, applied to a federation coordinator, the federated learning modeling optimization method comprising:
    receiving the local generative network models sent by the federation participants;
    aggregating the local generative network models to obtain an aggregated generative network model;
    sending the aggregated generative network model to each federation participant, so that each federation participant iteratively updates its own local generative network model according to the aggregated generative network model to obtain a federated generative network model, and converts a locally selected noise sample into a federated prediction network model according to the federated generative network model.
  14. The federated learning modeling optimization method of claim 13, wherein after the step of sending the aggregated generative network model to each federation participant so that each federation participant iteratively updates its own local generative network model according to the aggregated generative network model to obtain the federated generative network model and converts the locally selected noise sample into the federated prediction network model according to the federated generative network model, the federated learning modeling optimization method further comprises:
    obtaining public sample data;
    sending the public sample data to each federation participant, so that each federation participant performs model prediction on the public sample data with its own federated prediction network model to obtain a test prediction result;
    receiving the test prediction results sent by the federation participants, and predicting, from the test prediction results, the distances between the model parameters of the participants' federated prediction network models, obtaining the model parameter distances.
  15. The federated learning modeling optimization method of claim 13, wherein the federation participant is configured to obtain first noise data from a local noise dataset and map the first noise data into initial particle network models according to the local generative network model.
  16. The federated learning modeling optimization method of claim 13, wherein the step of aggregating the local generative network models to obtain the aggregated generative network model comprises:
    obtaining second noise data from the local noise dataset, updating the local generative network model according to the target particle network models and the second noise data, and sending the updated local generative network model to the federation coordinator;
    aggregating the local generative network models according to a preset aggregation rule to obtain the aggregated generative network model.
  17. The federated learning modeling optimization method of claim 14, wherein the test prediction result includes at least one output of a federated prediction network model for the public samples.
  18. An electronic device, wherein the electronic device comprises:
    at least one processor; and,
    a memory communicatively connected to the at least one processor; wherein,
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the steps of the federated learning modeling optimization method of any one of claims 1 to 17.
  19. A computer-readable storage medium, wherein a program implementing a federated learning modeling optimization method is stored on the computer-readable storage medium, and the program is executed by a processor to implement the steps of the federated learning modeling optimization method of any one of claims 1 to 17.
  20. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the steps of the federated learning modeling optimization method of any one of claims 1 to 17.
PCT/CN2021/141224 2021-11-29 2021-12-24 联邦学习建模优化方法、电子设备、存储介质及程序产品 WO2023092792A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111436781.5A CN114091617A (zh) 2021-11-29 2021-11-29 联邦学习建模优化方法、电子设备、存储介质及程序产品
CN202111436781.5 2021-11-29

Publications (1)

Publication Number Publication Date
WO2023092792A1 true WO2023092792A1 (zh) 2023-06-01

Family

ID=80305498

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/141224 WO2023092792A1 (zh) 2021-11-29 2021-12-24 联邦学习建模优化方法、电子设备、存储介质及程序产品

Country Status (2)

Country Link
CN (1) CN114091617A (zh)
WO (1) WO2023092792A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114386583A (zh) * 2022-03-24 2022-04-22 北京大学 一种用于保护标签信息的纵向联邦神经网络模型学习方法
CN114880314B (zh) * 2022-05-23 2023-03-24 北京正远达科技有限公司 应用人工智能策略的大数据清洗决策方法及ai处理系统
CN115034333A (zh) * 2022-06-29 2022-09-09 支付宝(杭州)信息技术有限公司 联邦学习方法、联邦学习装置及联邦学习系统
CN115640517A (zh) * 2022-09-05 2023-01-24 北京火山引擎科技有限公司 多方协同模型训练方法、装置、设备和介质
CN115438735A (zh) * 2022-09-09 2022-12-06 中国电信股份有限公司 基于联邦学习的质检方法、系统、可读介质及电子设备
CN116321219B (zh) * 2023-01-09 2024-04-19 北京邮电大学 自适应蜂窝基站联邦形成方法、联邦学习方法及装置
CN115994384B (zh) * 2023-03-20 2023-06-27 杭州海康威视数字技术股份有限公司 基于决策联邦的设备隐私保护方法、系统和装置
CN116796860B (zh) * 2023-08-24 2023-12-12 腾讯科技(深圳)有限公司 联邦学习方法、装置、电子设备及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210342453A1 (en) * 2020-04-29 2021-11-04 Robert Bosch Gmbh Private model utility by minimizing expected loss under noise
CN112906903A (zh) * 2021-01-11 2021-06-04 北京源堡科技有限公司 网络安全风险预测方法、装置、存储介质及计算机设备
CN113095512A (zh) * 2021-04-23 2021-07-09 深圳前海微众银行股份有限公司 联邦学习建模优化方法、设备、介质及计算机程序产品
CN113222180A (zh) * 2021-04-27 2021-08-06 深圳前海微众银行股份有限公司 联邦学习建模优化方法、设备、介质及计算机程序产品
CN113298268A (zh) * 2021-06-11 2021-08-24 浙江工业大学 一种基于对抗噪声注入的垂直联邦学习方法和装置

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116451275B (zh) * 2023-06-15 2023-08-22 北京电子科技学院 一种基于联邦学习的隐私保护方法及计算设备
CN116451275A (zh) * 2023-06-15 2023-07-18 北京电子科技学院 一种基于联邦学习的隐私保护方法及计算设备
CN116541870A (zh) * 2023-07-04 2023-08-04 北京富算科技有限公司 用于评估联邦学习模型的方法及装置
CN116541870B (zh) * 2023-07-04 2023-09-05 北京富算科技有限公司 用于评估联邦学习模型的方法及装置
CN117094381B (zh) * 2023-08-21 2024-04-12 哈尔滨工业大学 一种兼顾高效通信和个性化的多模态联邦协同方法
CN117094381A (zh) * 2023-08-21 2023-11-21 哈尔滨工业大学 一种兼顾高效通信和个性化的多模态联邦协同方法
CN116863309A (zh) * 2023-09-04 2023-10-10 中电科网络安全科技股份有限公司 一种图像识别方法、装置、系统、电子设备及存储介质
CN116863309B (zh) * 2023-09-04 2024-01-09 中电科网络安全科技股份有限公司 一种图像识别方法、装置、系统、电子设备及存储介质
CN117407781A (zh) * 2023-12-14 2024-01-16 山东能源数智云科技有限公司 基于联邦学习的设备故障诊断方法及装置
CN117407781B (zh) * 2023-12-14 2024-02-23 山东能源数智云科技有限公司 基于联邦学习的设备故障诊断方法及装置
CN117575423A (zh) * 2024-01-10 2024-02-20 湖南工商大学 基于联邦学习系统的工业产品质量检测方法及相关设备
CN117575423B (zh) * 2024-01-10 2024-04-16 湖南工商大学 基于联邦学习系统的工业产品质量检测方法及相关设备
CN117575291A (zh) * 2024-01-15 2024-02-20 湖南科技大学 基于边缘参数熵的联邦学习的数据协同管理方法
CN117575291B (zh) * 2024-01-15 2024-05-10 湖南科技大学 基于边缘参数熵的联邦学习的数据协同管理方法
CN117811845A (zh) * 2024-02-29 2024-04-02 浪潮电子信息产业股份有限公司 威胁检测及模型训练方法、装置、系统、电子设备、介质
CN117811845B (zh) * 2024-02-29 2024-05-24 浪潮电子信息产业股份有限公司 威胁检测及模型训练方法、装置、系统、电子设备、介质
CN117892805A (zh) * 2024-03-18 2024-04-16 清华大学 基于超网络和层级别协作图聚合的个性化联邦学习方法
CN117892805B (zh) * 2024-03-18 2024-05-28 清华大学 基于超网络和层级别协作图聚合的个性化联邦学习方法

Also Published As

Publication number Publication date
CN114091617A (zh) 2022-02-25

Similar Documents

Publication Publication Date Title
WO2023092792A1 (zh) 联邦学习建模优化方法、电子设备、存储介质及程序产品
CN112149171B (zh) 联邦神经网络模型的训练方法、装置、设备及存储介质
WO2020207174A1 (zh) 用于生成量化神经网络的方法和装置
WO2023284387A1 (zh) 基于联邦学习的模型训练方法、装置、系统、设备和介质
CN113627085A (zh) 横向联邦学习建模优化方法、设备、介质及程序产品
CN112785002A (zh) 模型构建优化方法、设备、介质及计算机程序产品
CN113505520A (zh) 用于支持异构联邦学习的方法、装置和系统
WO2023078072A1 (zh) 基于拜占庭容错的异步共识方法、装置、服务器和介质
CN113051239A (zh) 数据共享方法、应用其的模型的使用方法及相关设备
CN114528044A (zh) 一种接口调用方法、装置、设备及介质
CN116703131B (zh) 电力资源分配方法、装置、电子设备和计算机可读介质
WO2023098698A1 (zh) 物流配送网络的确定方法、装置、终端设备及存储介质
CN110069195B (zh) 图像拖拽变形方法和装置
CN115277197B (zh) 模型所有权验证方法、电子设备、介质及程序产品
WO2022228067A1 (zh) 语音处理方法、装置和电子设备
CN114595474A (zh) 联邦学习建模优化方法、电子设备、介质及程序产品
CN111709784B (zh) 用于生成用户留存时间的方法、装置、设备和介质
CN111680754B (zh) 图像分类方法、装置、电子设备及计算机可读存储介质
CN115470292B (zh) 区块链共识方法、装置、电子设备及可读存储介质
CN113778078A (zh) 定位信息生成方法、装置、电子设备和计算机可读介质
CN112036821B (zh) 基于网格图规划专线的量化方法、装置、介质和电子设备
CN116521377B (zh) 业务计算卸载方法、系统、装置、设备及介质
CN111738416B (zh) 模型同步更新方法、装置及电子设备
WO2022105554A1 (zh) 区域画像的修正方法、装置、电子设备和可读存储介质
CN115470908A (zh) 模型安全推理方法、电子设备、介质及程序产品