WO2022218231A1 - Method and apparatus for jointly updating a business model - Google Patents

Method and apparatus for jointly updating a business model

Info

Publication number
WO2022218231A1
WO2022218231A1 · PCT/CN2022/085876
Authority
WO
WIPO (PCT)
Prior art keywords
data
party
model
business
parameter
Prior art date
Application number
PCT/CN2022/085876
Other languages
English (en)
French (fr)
Inventor
郑龙飞
陈超超
王力
张本宇
Original Assignee
支付宝(杭州)信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 支付宝(杭州)信息技术有限公司
Publication of WO2022218231A1
Priority to US18/485,765 (published as US20240037252A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/604 Tools and structures for managing or administering access control systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/602 Providing cryptographic facilities or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Definitions

  • One or more embodiments of this specification relate to the field of computer technology, and in particular, to a method and apparatus for jointly updating a business model based on privacy protection.
  • Federated learning is a method for joint modeling while preserving private data. For example, when cooperative security modeling is required between enterprises, federated learning can be used to collaboratively train a data processing model on all parties' data while fully protecting enterprise data privacy, so that the model processes business data more accurately and efficiently. In a federated learning scenario, all parties can agree on a model structure (an agreed model), train it locally with private data, and aggregate the model parameters in a safe and reliable way to improve their local models. Federated learning is implemented on the basis of privacy protection, effectively breaking data silos and enabling multi-party joint modeling.
  • However, the number of network layers in federated-learning business models keeps growing, and the number of model parameters grows accordingly. A model may have more than 20 million parameters and exceed 100 MB in size. As a result, the amount of data received by the server grows sharply, which may cause communication congestion and seriously affect overall training efficiency.
  • One or more embodiments of this specification describe a method and apparatus for jointly updating a business model, so as to solve one or more problems mentioned in the background art.
  • In a first aspect, a method for jointly updating a business model is provided, used by multiple data parties to jointly train the business model based on privacy protection with the assistance of a service party, where the business model is used to process business data and obtain corresponding business processing results. The method includes: the service party provides each data party with the global model parameters and the correspondence between each data party and the N parameter groups into which the global model parameters are divided; each data party updates its local business model with the global model parameters; each data party further updates the updated local business model based on local business data to obtain a new local business model, and uploads the model parameters of its own parameter group to the service party; the service party fuses, for each parameter group, the received model parameters, thereby updating the global model parameters.
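  • To make the flow concrete, the following is a minimal numpy sketch of one such interaction cycle; the group count, the round-robin party-to-group assignment, and the plain-average fusion are illustrative assumptions, not details fixed by this specification.

```python
import numpy as np

rng = np.random.default_rng(0)

N_GROUPS = 4    # N parameter groups (assumed)
N_PARTIES = 8   # number of data parties (assumed)
DIM = 20        # total number of model parameters (toy size)

# Service party: global parameters, split into N groups, plus a
# party -> parameter-group correspondence (here: round-robin).
global_params = rng.normal(size=DIM)
group_slices = np.array_split(np.arange(DIM), N_GROUPS)
party_group = {j: j % N_GROUPS for j in range(N_PARTIES)}

def local_training(params):
    """Stand-in for further updating the local model on local business data."""
    return params + rng.normal(scale=0.01, size=params.shape)

# One interaction cycle: every party trains locally on the full
# parameters but uploads only its own parameter group.
uploads = {i: [] for i in range(N_GROUPS)}
for j in range(N_PARTIES):
    local = local_training(global_params.copy())
    i = party_group[j]
    uploads[i].append(local[group_slices[i]])

# Service party fuses each parameter group separately (plain average).
for i, received in uploads.items():
    global_params[group_slices[i]] = np.mean(received, axis=0)
```

Each party sends only DIM / N_GROUPS values per cycle instead of DIM, which is the communication saving the method aims at.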
  • In one embodiment, each data party further updating the updated local business model based on local business data to obtain a new local business model includes: after each data party updates the local business model with the global model parameters, it uses local data to detect the current stage-transition indicator; a data party whose stage-transition indicator satisfies the stop condition for full update enters the partial-update stage, and a data party in the partial-update stage updates only the model parameters in its corresponding parameter group. For example, the stage-transition indicator may be the model performance of the updated local business model, and the stop condition may be that the model performance reaches a preset value.
  • In a second aspect, a method for jointly updating a business model is provided, performed by a service party that assists multiple data parties in jointly training the business model based on privacy protection, where the business model is used to process business data and obtain corresponding business processing results. The multiple data parties include a first party, and the method includes: providing the first party with the current global model parameters and the correspondence between the first party and a first parameter group among the N parameter groups into which the global model parameters are divided, so that the first party updates its local business model with the current global model parameters, further updates the updated local business model based on local business data to obtain a new local business model, and then feeds back a first parameter set for the first parameter group; receiving the first parameter set fed back by the first party; and updating the first parameter group in the global model parameters based on the first parameter set and other parameter sets received from other data parties for the first parameter group, thereby updating the current global model parameters according to the update of the first parameter group.
  • In one embodiment, the correspondence between the first party and the first parameter group is determined as follows: dividing the multiple data parties into M groups, where a single group corresponds to at least one data party and the first party belongs to the first group among the M groups; and determining the correspondence between the M groups of data parties and the N parameter groups, where a single group of data parties corresponds to at least one parameter group and a single parameter group corresponds to at least one group of data parties, the parameter group corresponding to the first group being the first parameter group.
  • In a further embodiment, dividing the multiple data parties into M groups includes one of the following: dividing the multiple data parties into M groups with the goal that the number of business data records held by each group is consistent; or dividing the multiple data parties into M groups with the goal that the number of business data records held by a single data party is positively correlated with the number of model parameters in its corresponding parameter group.
  • In one embodiment, updating the first parameter group in the global model parameters based on the first parameter set and other parameter sets received from other data parties for the first parameter group includes: fusing the first parameter set and the other parameter sets for the first parameter group in at least one of the following ways: weighted average, minimum value, median; and updating the first parameter group in the global model parameters according to the fusion result.
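  • As a sketch of this fusion step, the helper below implements the three ways named above (weighted average, minimum value, median) with numpy; the function name, the assumed position of the first parameter group, and the example weights are illustrative, not fixed by the specification.

```python
import numpy as np

def fuse(parameter_sets, method="weighted_average", weights=None):
    """Fuse the parameter sets received for one parameter group.
    Methods follow the text: weighted average, minimum, median."""
    stacked = np.stack(parameter_sets)      # shape: (n_parties, group_size)
    if method == "weighted_average":
        if weights is None:                 # e.g. weight by sample count
            weights = np.ones(len(parameter_sets))
        w = np.asarray(weights, dtype=float)
        return (w[:, None] * stacked).sum(axis=0) / w.sum()
    if method == "minimum":
        return stacked.min(axis=0)
    if method == "median":
        return np.median(stacked, axis=0)
    raise ValueError(method)

# The service party then writes the fused result back into the
# first parameter group of the global model parameters:
global_params = np.zeros(6)
first_group = slice(0, 3)                   # assumed position of group 1
received = [np.array([1.0, 2.0, 3.0]), np.array([3.0, 2.0, 1.0])]
global_params[first_group] = fuse(received, "weighted_average", weights=[1, 3])
```

Weighting by each party's sample count would recover a FedAvg-style aggregate for this group; minimum and median are the more outlier-robust choices the text names.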
  • In one embodiment, updating the current global model parameters according to the update of the first parameter group includes: updating each of the other parameter groups according to the corresponding parameter sets fed back by their respective corresponding data parties, thereby updating the current global model parameters.
  • In a third aspect, a method for jointly updating a business model is provided, performed by a first party among multiple data parties that jointly train the business model based on privacy protection with the assistance of a service party, where the business model is used to process business data and obtain corresponding business processing results. The method includes: receiving from the service party the current global model parameters and the correspondence between the first party and a first parameter group among the N parameter groups into which the global model parameters are divided; updating the local business model with the current global model parameters; performing several rounds of updating the local model parameters based on the updated local business model's processing of local business data; and feeding back to the service party a first parameter set obtained by updating the first parameter group, so that the service party can update the first parameter group in the global model parameters based on the first parameter set and other parameter sets received from other data parties for the first parameter group, thereby updating the current global model parameters.
  • In one embodiment, further updating the updated local business model based on local business data to obtain a new local business model includes: using the local business data to detect the current stage-transition indicator of the updated local business model; entering, when the stage-transition indicator satisfies the stop condition for full update, a partial-update stage in which only the first parameter group is updated; and otherwise continuing the full-update stage in which all model parameters of the local business model are updated. For example, the stage-transition indicator may be the model performance of the updated local business model, and the stop condition may be that the model performance reaches a preset value. In a further embodiment, the method also includes detecting whether the stage-transition indicator satisfies an activation condition for full update and, when it does, re-entering the full-update stage in which all model parameters of the local business model are updated.
  • In a further aspect, a system for jointly updating a business model is provided, including a service party and multiple data parties that jointly train the business model based on privacy protection with the assistance of the service party, where the business model is used to process business data and obtain corresponding business processing results. The service party is configured to provide each data party with the global model parameters and the correspondence between each data party and the N parameter groups into which the global model parameters are divided. Each data party is configured to update its local business model with the global model parameters, further update the updated local business model based on local business data to obtain a new local business model, and upload the model parameters in its corresponding parameter group to the service party. The service party is further configured to fuse, for each parameter group, the received model parameters, thereby updating the global model parameters.
  • a device for jointly updating a business model is provided, which is set at a service party that assists multiple data parties to jointly train a business model based on privacy protection, where the business model is used to process business data and obtain corresponding business processing results.
  • the plurality of data parties includes a first party
  • The apparatus includes: a providing unit configured to provide the first party with the current global model parameters and the correspondence between the first party and a first parameter group among the N parameter groups into which the global model parameters are divided, so that the first party updates its local business model with the current global model parameters and, after further updating the updated local business model based on local business data to obtain a new local business model, feeds back a first parameter set for the first parameter group; a receiving unit configured to receive the first parameter set fed back by the first party; and an updating unit configured to update the first parameter group in the global model parameters based on the first parameter set and other parameter sets received from other data parties for the first parameter group, thereby updating the current global model parameters according to the update of the first parameter group.
  • In another aspect, a device for jointly updating a business model is provided, set at a first party among multiple data parties that jointly train the business model based on privacy protection with the assistance of a service party, where the business model is used to process business data and obtain corresponding business processing results. The apparatus includes: a receiving unit configured to receive from the service party the current global model parameters and the correspondence between the first party and a first parameter group among the N parameter groups into which the global model parameters are divided; an updating unit configured to update the local business model with the current global model parameters; a training unit configured to further update the updated local business model based on local business data to obtain a new local business model; and a feedback unit configured to feed back to the service party a first parameter set obtained by updating the first parameter group, so that the service party can update the first parameter group in the global model parameters based on the first parameter set and other parameter sets received from other data parties for the first parameter group, thereby updating the current global model parameters.
  • a computer-readable storage medium having a computer program stored thereon, which, when executed in a computer, causes the computer to perform the method of the second aspect or the third aspect.
  • In yet another aspect, a computing device is provided, including a memory and a processor, where executable code is stored in the memory, and when the processor executes the executable code, the method of the second aspect or the third aspect is implemented.
  • Through the methods and devices provided in the embodiments of this specification, in the process of jointly updating a business model based on privacy protection in multi-party collaboration, the multiple data parties serving as training members are grouped, and each data party uploads only part of the model parameters. This effectively reduces the communication volume between each data party and the service party, as well as the data processing load of the service party, thereby avoiding communication congestion and improving overall training efficiency. The method and device can be applied to any federated learning process; the effects are especially significant when there are many data parties or a large number of training samples.
  • FIG. 1 shows a schematic diagram of the implementation architecture of the joint update business model based on privacy protection in the technical concept of this specification
  • FIG. 2 shows a flow chart of a method for jointly updating a business model according to one embodiment
  • FIG. 3 shows a schematic block diagram of an apparatus for jointly updating a business model provided at a service side according to an embodiment
  • FIG. 4 shows a schematic block diagram of an apparatus for jointly updating a business model provided at a data party according to an embodiment.
  • Federated learning, also known as federated machine learning, joint learning, alliance learning, etc., is first introduced.
  • Federated Machine Learning is a machine learning framework that can effectively help multiple agencies conduct data usage and machine learning modeling while meeting user privacy protection, data security, and government regulations.
  • Suppose enterprise A and enterprise B each want to establish a task model; a single task may be classification or prediction, and these tasks were approved by their respective users when the data was obtained. However, if enterprise A lacks label data, or enterprise B lacks user feature data, or the data is insufficient and the sample size is too small to build a good model, the model at either end may fail to be established or perform poorly. The problem federated learning solves is how to build a high-quality model at each of A and B while each enterprise's own data remains unknown to the other parties, that is, to build a shared model without violating data privacy regulations. This shared model behaves like the optimal model the parties would obtain by aggregating their data together, while in each party's own domain the built model serves only local objectives.
  • Each institution of federated learning can also be called a business party, and each business party can correspond to different business data.
  • The business data here may be various data such as text, pictures, voice, animations, and videos. Generally, the business data of the various business parties is related.
  • For example, business party 1 may be a bank that provides users with savings, loan and other services and holds data such as the users' age, gender, income and expenditure, loan limit, and deposit limit; business party 2 may be a P2P platform holding the users' loan records, investment records, repayment times, and other data; and business party 3 may be a shopping website holding the users' shopping habits, payment habits, payment accounts, and other data.
  • As another example, the business parties may be hospitals, medical examination institutions, and the like. For instance, business party 1 may be hospital A, holding diagnosis and treatment data corresponding to the users' age, gender, symptoms, diagnosis results, treatment plans, and treatment outcomes; business party 2 may be medical examination institution B, holding medical examination records corresponding to the users' age, gender, symptoms, medical examination conclusions, and so on.
  • the implementation architecture of federated learning is shown in Figure 1.
  • the business party can act as the data holder, or pass the data to the data holder, and the data holder can participate in the joint training of the business model. Therefore, in FIG. 1 and the following, parties other than the serving party participating in the joint training are collectively referred to as data parties.
  • a data party can usually correspond to a business party. In an optional implementation, one data party may also correspond to multiple business parties.
  • the data party can be realized by devices, computers, servers, etc.
  • the business model can be jointly trained by two or more data parties.
  • Each data party can use the trained business model to perform local business processing on local business data.
  • The service party can assist the federated learning of the business parties, for example by assisting with nonlinear computation, aggregation of model parameters, or gradient computation. The service party shown in FIG. 1 takes the form of a party set up independently of the business parties, such as a trusted third party. In practice, the role of the service party may also be distributed among the business parties, or jointly played by the business parties, with a secure computing protocol (such as secret sharing) used between them to complete the joint auxiliary computation; this specification does not limit this.
  • the service party can initialize the global business model and distribute it to each business party.
  • Each business party can calculate the gradient of the model parameters locally according to the global business model determined by the server, and update the model parameters according to the gradient.
  • The service party aggregates the gradients of the model parameters, or the jointly updated model parameters, and feeds the result back to each business party.
  • Each business party updates the local model parameters according to the received model parameters or their gradients. In this way, the business model suitable for each business party is finally trained.
  • Federated learning can be divided into horizontal federated learning (feature alignment), vertical federated learning (sample alignment) and federated transfer learning.
  • the implementation architecture provided in this specification can be used for various federated learning architectures, especially for horizontal federated learning, that is, each business party provides some independent samples.
  • In view of this, this specification proposes a federated learning method that updates model parameters in stages and in groups. In the first stage, which can be called the full-update stage, each data party updates all model parameters locally but uploads them in groups, to speed up convergence. In the second stage, which can be called the partial-update stage, each data party updates and uploads only the model parameters of its corresponding group. The transition between the first stage and the second stage can be judged by a stage-transition indicator.
  • FIG. 2 shows a schematic flow chart of jointly training a business model according to an embodiment of the present specification.
  • the process involves service parties and multiple data parties.
  • The service party and each single data party may be any computer, device, or server with certain computing capabilities, for example arranged as the service party and data parties shown in FIG. 1.
  • Figure 2 shows a cycle of federated learning. The individual steps are described in detail below.
  • First, in step 201, the service party divides the data parties into M groups. It can be understood that, under the technical concept of this specification, the data parties upload model parameters to the service party in groups; therefore, the service party can group the data parties in advance.
  • M is an integer greater than 1.
  • the serving party may randomly divide each data party into M groups.
  • The randomness here can include at least one of the following: which group a single data party is assigned to is random; which data parties a single data party is grouped with is random; the number of members of a single group is random and not less than 1. For example, 100 data parties may be randomly divided into 10 groups, some of which include 10 data parties, some 11, some 8, and so on.
  • In another embodiment, the multiple data parties may be grouped according to the amount of business data they hold, for example with the goal of keeping the total amount of business data held by each group roughly equal.
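  • One way to realize such balanced grouping is a greedy pass: sort the parties by data volume and always place the next party into the currently lightest group. This helper is a hypothetical sketch, not an algorithm prescribed by the specification.

```python
import heapq

def group_parties(data_counts, m):
    """Greedily assign parties to m groups so that the total number of
    business-data records per group is roughly equal (hypothetical helper)."""
    # Min-heap of (current group total, group index).
    heap = [(0, g) for g in range(m)]
    heapq.heapify(heap)
    groups = [[] for _ in range(m)]
    # Place the largest parties first; each goes to the lightest group.
    order = sorted(range(len(data_counts)), key=lambda j: -data_counts[j])
    for j in order:
        total, g = heapq.heappop(heap)
        groups[g].append(j)
        heapq.heappush(heap, (total + data_counts[j], g))
    return groups

groups = group_parties([120, 80, 75, 60, 40, 25], m=3)
```

Greedy placement does not guarantee a perfectly even split, but it keeps the group totals close without the cost of exact partitioning.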
  • In practice, the model parameters of the business model can be grouped at the same time, for example into N groups. When M equals N, the M groups of data parties and the N parameter groups can be in one-to-one correspondence.
  • the model parameters of a business model can be pre-grouped.
  • In that case, the grouping of the data parties can be based on the grouping of the model parameters.
  • N can be a preset positive integer.
  • When M is less than N, a single group of data parties can correspond to multiple parameter groups; when M is greater than N, a single parameter group can correspond to multiple groups of data parties. In general, a single group of data parties can correspond to multiple parameter groups and a single parameter group can correspond to multiple groups of data parties; in other words, a single group among the M groups of data parties corresponds to at least one parameter group, and a single group among the N parameter groups corresponds to at least one group of data parties.
  • For example, the number of data-party groups can equal the number of neural network layers of the business model, so that each group of data parties corresponds to one neural network layer. The number of data-party groups may also be smaller than the number of neural network layers, in which case at least one parameter group includes multiple neural network layers.
  • In an embodiment, the N groups of model parameters correspond to N group identifiers, and the data parties in each group are assigned one of the N group identifiers. That is, the group identifiers of the model parameters are assigned to the data-party groups either randomly or according to certain rules. Each group identifier can be randomly matched to a data-party group after the data-party grouping is determined; alternatively, the model-parameter group identifiers can be randomly assigned directly to the data parties, which groups the data parties and determines their corresponding model parameters at the same time. When the model parameters are grouped by neural network layer, the group identifier of a data party can simply be the layer number of the corresponding model parameters.
  • For example, if the neural network layers are numbered 0 to N-1 (N numbers in total), randomly assigning these N numbers to the data parties simultaneously groups the data parties and establishes the correspondence between the data parties and the layers of the neural network (i.e., the respective parameter groups).
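  • The assignment just described can be sketched as follows: shuffle the data parties, then deal out the layer numbers 0..N-1 round-robin, which groups the parties and fixes the party-to-layer correspondence in one step. The shuffle-then-deal scheme is an illustrative assumption that also guarantees every layer gets at least one party whenever there are at least N parties.

```python
import random

def assign_layer_groups(party_ids, n_layers, seed=0):
    """Shuffle the parties, then deal layer numbers 0..n_layers-1
    round-robin, so every layer (parameter group) gets at least one
    party when there are >= n_layers parties (illustrative helper)."""
    rng = random.Random(seed)
    shuffled = list(party_ids)
    rng.shuffle(shuffled)
    assignment = {j: k % n_layers for k, j in enumerate(shuffled)}
    groups = {i: [j for j, g in assignment.items() if g == i]
              for i in range(n_layers)}
    return assignment, groups

assignment, groups = assign_layer_groups(range(10), n_layers=4)
```

A party's group identifier is then exactly the layer number whose parameters it will upload.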
  • In other embodiments, the multiple data parties may also be grouped according to a correspondence between the amount of business data a data party holds and the number of model parameters in a single group. For example, when the business model is a neural network and a single neural network layer corresponds to one group of model parameters, layers with more neurons can be assigned to data parties holding more business data.
  • In practice, the service party can regroup the data parties in every interaction cycle, or group them only once in the initial cycle and reuse the grouping in subsequent cycles; this specification does not limit this.
  • Next, in step 202, the service party provides each data party with the current global model parameters and the correspondence between each data party and the N parameter groups into which the global model parameters are divided.
  • In the initial cycle, the current global model parameters can be the model parameters initialized by the service party; in subsequent cycles, they can be the model parameters updated by the service party according to the parameters fed back by each data party in the previous cycle.
  • Under the technical concept of this specification, each data party feeds back to the service party only a portion of all the model parameters (herein referred to as partial model parameters).
  • The purpose of grouping the data parties in step 201 is to determine which data parties feed back which model parameters. Therefore, in step 202, each data party may be provided with the group identifier of its corresponding parameter group (e.g., the j-th group), or the parameter identifiers of the individual model parameters (e.g., wij), so that the data party can feed back the corresponding model parameters according to the identifier.
  • One data party (or the group of data parties it belongs to) may also correspond to one or more parameter groups, which is not limited herein.
  • a single data party can feed back the model parameters of its corresponding multiple parameter groups to the service party.
  • Taking the first party, i.e., any one of the multiple data parties, as an example, it has a correspondence with at least the first parameter group.
  • the first parameter group may be any one of the N groups of model parameters.
  • Next, each data party further updates, based on local business data, the local business model that was updated according to the global model parameters, to obtain a new local business model.
  • In different embodiments, a single data party can update the local business model with the full set of global model parameters, or update only the partial model parameters of its corresponding group. Specifically, in the full-update stage, a single data party can update the local business model with the full global model parameters; in the partial-update stage, it can either still use the full global model parameters or use only the partial model parameters of its corresponding parameter group within the global model parameters.
  • For example, a data party in the i-th group updates only the model parameters of the i-th neural network layer (corresponding to the i-th parameter group).
  • The full-update stage is the stage in which all model parameters are updated while training the local business model with local business data; the partial-update stage is the stage in which only part of the model parameters are updated during that training.
  • In the full-update stage, a single data party receives the full global model parameters from the service party, fully updates the model parameters of the local business model, and then uses the updated local business model to process local training samples.
  • In the partial-update stage, a single data party can update the local business model with the full global model parameters or with the partial model parameters of its corresponding parameter group, and then use the updated local business model to process local business data as training samples.
  • During training, the data party j corresponding to the i-th group of model parameters can fix the model parameters of the other groups, compute the gradient only for the i-th group of model parameters, and update only the i-th group.
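  • The "fix the other groups, update only group i" step can be sketched with a toy linear model and squared loss; the model, the loss, and the learning rate are illustrative assumptions, since the specification does not fix a concrete business model.

```python
import numpy as np

def partial_update(params, group_slice, x, y, lr=0.1):
    """One gradient step that touches only the data party's own parameter
    group: coordinates outside group_slice stay fixed, mimicking
    'fix the other groups and compute the gradient of group i only'."""
    pred = x @ params
    grad = 2 * x.T @ (pred - y) / len(y)   # full gradient of the squared loss
    new_params = params.copy()
    new_params[group_slice] -= lr * grad[group_slice]  # update group i only
    return new_params

rng = np.random.default_rng(1)
params = rng.normal(size=6)
x, y = rng.normal(size=(32, 6)), rng.normal(size=32)
updated = partial_update(params, slice(2, 4), x, y)
```

Only the coordinates of the chosen group move; in a deep-learning framework the same effect is usually obtained by freezing the other layers.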
  • When uploading, a single data party (denoted j) uploads in the current cycle only the partial model parameters wij of its corresponding group (say the i-th group), where wij denotes the i-th group of model parameters of the j-th data party.
  • For example, when the business model is an N-layer neural network and the N groups of data parties correspond to the N neural network layers respectively, the data parties assigned to the second group feed back the model parameters of the second neural network layer to the service party.
  • The moment of entering the partial-update stage can be negotiated and determined by the service party or the data parties, for example based on the training time of the full-update stage (e.g., 5 hours) or the number of training cycles (e.g., 1000 interaction cycles) determined by the service party, after which all data parties enter the partial-update stage of federated learning under the technical concept of this specification.
  • each data party can use the stage transition indicator to determine whether its current cycle is in the full update stage or the partial update stage.
  • the stage transition indicator may be an indicator used to measure the processing capability of the jointly trained business model on the local business data of a single data party. That is to say, after the jointly trained business model has a certain processing capability for the local business data of a single data party, the partial model-parameter update of the partial update stage can be performed.
  • the stage transition indicator may be represented by at least one model performance metric, such as accuracy, model loss, and the like.
  • when the stage transition indicator satisfies the stop condition for the full update, a single data party can enter the partial update stage.
  • the stop condition differs depending on the stage transition indicator.
  • the stage transition metric may be accuracy.
  • after updating the local business model with the current global model parameters provided by the server, a single data party uses the updated local business model to process the local validation set to obtain the accuracy rate.
  • the stopping condition is, for example, that the accuracy rate is greater than a predetermined accuracy threshold or the like.
  • the stage transition metric is model loss.
  • a single data party uses the updated local business model to process the local validation set in multiple batches.
  • each batch determines a model loss. For multiple consecutive batches, whether each single-batch decrease of the model loss is less than a predetermined value (such as 0.001), or whether the overall decrease is less than a predetermined value (such as 0.01), serves as the stage transition indicator. That is, the stop condition is that the model loss decreases by less than a predetermined magnitude.
  • the data party can also detect whether the loss function has stabilized over the most recent several (e.g., 10) training cycles (interaction cycles with the data party), for example whether the decrease is less than a predetermined value (e.g., 0.001), as the stage transition indicator. That is to say, the stop condition in this case may be that the model loss decreases by less than a predetermined magnitude for a predetermined number of consecutive times.
  • the data party may also use other evaluation metrics, or determine the stage transition indicator in other ways, to decide whether the full update stage has ended.
  • a single data party can then enter the partial update stage, that is, in each training cycle, during the multiple rounds of model parameter updates performed locally, only the model parameters of the group corresponding to that training cycle are updated; for example, the first party only updates the model parameters in the first parameter group.
  • after entering the partial update stage, a single data party can continue to detect the above stage transition indicator, and re-enter the full update stage to fully update the model parameters when the indicator satisfies the activation condition for the full update.
  • the activation condition here can also be called the wake-up condition of the full update phase. For example, it may be detected that the magnitude of the decrease in the model loss is greater than a preset activation value (eg, 0.1).
  • each data party uploads the model parameters corresponding to the parameter group to the service party.
  • the i-th data party assigned to the j-th group feeds back the model parameters wi,j of the j-th parameter group (e.g., the j-th layer of the neural network) to the server.
  • the aforementioned first party can at least upload the updated parameter values for the model parameters corresponding to the first parameter group to the service party.
  • the parameter values of the model parameters corresponding to the first parameter group may be recorded as the first parameter set in this specification, and the first party may feed back the updated first parameter set for the first parameter group.
  • the data uploaded to the service party may also be encrypted in a pre-agreed manner such as homomorphic encryption, secret sharing, etc., to further protect data privacy.
  • the service party fuses, for each parameter group, the model parameters fed back by the corresponding groups of data parties, so as to update the global model parameters.
  • the server can fuse each group of model parameters from 1 to N, or fuse the model parameters of each parameter group according to the order in which each group of data parties has completed the feedback of the model parameters.
  • the service provider can fuse each group of model parameters by means of weighted average, minimum value, median, etc., which is not limited here.
  • the weights in the weighted-average mode can be set to be equal or unequal. If set to be unequal, the weight corresponding to each data party can be positively correlated with the number of business data records held by that data party.
  • the fusion result of each set of model parameters can be used to update its global model parameters.
  • the above steps 201 to 205 can be regarded as one cycle of the server-assisted federated learning process.
  • the execution order of each step in step 201 to step 205 is not limited to the order given in the above embodiment.
  • step 201, step 202, and step 203 may be performed in the above order, may be performed simultaneously, or may be performed in a mixed manner.
  • the server can provide current global model parameters to each data party in step 201, and then group each data party in step 202, and provide corresponding grouping identifiers to the data party.
  • the service party determines and provides the corresponding group identifier to the data party while the data party uses the local service data to train the local service model.
  • when the grouping of the data parties is fixed throughout federated learning, the server groups the multiple data parties and determines the model parameters corresponding to each group only in the first training cycle, or pre-determines the grouping and the corresponding model parameters before training starts and provides them to the data parties.
  • in subsequent cycles, the service party no longer performs the above step 201, nor the part of step 202 that provides the correspondence between data parties and parameter groups to each data party.
  • in the process of jointly updating the business model based on privacy protection through the flow shown in Figure 2, because the multiple data parties serving as training members are grouped and each data party uploads only part of the model parameters to the server, the communication volume between each data party and the server, as well as the data processing load of the server, is effectively reduced during multi-party cooperation, which avoids communication congestion and improves overall training efficiency.
  • the training process can be divided into two stages.
  • in the full update stage, training members update all model parameters but upload them to the server in groups, which helps speed up convergence and improve the efficiency of joint training.
  • in the partial update stage, training members update model parameters in groups and upload them in groups, which helps improve model performance and thereby the processing capability of the jointly trained business model on business data.
  • the method of jointly updating the business model provided in this specification can be applied to any federated learning process; especially when there are many data parties or a large number of training samples, the above effects are more significant.
  • the above process neither sparsifies nor quantizes the model, so the model information is lossless and the impact on model convergence is small.
  • the training members are randomly grouped, which also ensures the robustness of the federated model to the training data.
  • a system for jointly updating a business model, including a service party and multiple data parties, where the multiple data parties jointly train the business model based on privacy protection with the assistance of the service party, and the business model is used to process business data to obtain corresponding business processing results.
  • the server is configured to provide global model parameters to each data party, and the correspondence between each data party and the N parameter groups into which the global model parameters are divided; each data party is configured to update the local business model using the global model parameters, and further update the updated local business model based on the local business data to obtain a new local business model, so as to upload the model parameters in its corresponding parameter group to the server; the server is also configured to fuse the received model parameters for each parameter group separately, thereby updating the global model parameters.
  • the service party and the single data party may perform corresponding operations through the apparatuses 300 and 400 for jointly updating the business model, respectively.
  • the apparatus 300 may include: a providing unit 31 configured to provide the first party with the current global model parameters, and the correspondence between the first party and the first parameter group among the N parameter groups into which the global model parameters are divided, so that the first party can update the local business model using the current global model parameters, and further update the updated local business model based on the local business data to obtain a new local business model.
  • the receiving unit 32 is configured to receive the first parameter set fed back by the first party; the updating unit 33 is configured to update the first parameter group in the global model parameters based on the first parameter set and the other parameter sets concerning the first parameter group received from other data parties, so as to update the current global model parameters according to the update of the first parameter group.
  • the receiving unit 32 may also be configured to receive various parameter sets fed back by other data parties, not just the first parameter set fed back by the first party. Due to the consistency of the interaction process between the service side and each data side, only the interaction between the first side of the data side and the service side is described, so only the parameter set involving the first side is described.
  • the apparatus 400 may include: a receiving unit 41, configured to receive from the serving party the current global model parameters, and the correspondence between the first party and the first parameter group among the N parameter groups into which the global model parameters are divided; the replacement unit 42, configured to update the local business model using the current global model parameters; the training unit 43, configured to further update the updated local business model based on the local business data to obtain a new local business model; the feedback unit 44, configured to feed back the first parameter set obtained by updating the first parameter group to the service party, so that the service party can update the first parameter group in the global model parameters based on the first parameter set and the other parameter sets concerning the first parameter group received from other data parties, and then update the current global model parameters.
  • the apparatus 300 shown in FIG. 3 and the apparatus 400 shown in FIG. 4 are the apparatus embodiments deployed at the service party and the data party, respectively, corresponding to the method embodiment shown in FIG. 2, to implement the functions of the corresponding parties. Therefore, the corresponding descriptions in the method embodiment shown in FIG. 2 are also applicable to the apparatus 300 or the apparatus 400, and details are not repeated here.
  • a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed in a computer, the computer is caused to perform the operations corresponding to the service party or the data party in the method described in conjunction with FIG. 2.
  • a computing device including a memory and a processor, where executable code is stored in the memory, and when the processor executes the executable code, the operations corresponding to the service party or the data party in the method described in conjunction with FIG. 2 are implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Automation & Control Theory (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of this specification provide a privacy-preserving method and apparatus for jointly updating a business model. In one iteration, the service party provides each data party with the global model parameters, and the correspondence between each data party and the N parameter groups into which the global model parameters are divided. Each data party updates its local business model with the global model parameters, further updates the updated local business model based on local business data, and uploads the model parameters of its corresponding parameter group in the new business model to the service party. The service party then fuses each received parameter group in turn to update the global model parameters. This process reduces the communication pressure between the data parties and the service party, avoids communication congestion, and helps improve the overall training efficiency of federated learning.

Description

Method and apparatus for jointly updating a business model
Technical field
One or more embodiments of this specification relate to the field of computer technology, and in particular to methods and apparatuses for jointly updating a business model based on privacy protection.
Background
With the development of computer technology, machine learning is applied ever more widely in a variety of business scenarios. Federated learning is a method of joint modeling that protects private data. For example, enterprises that need to cooperate on secure modeling can use federated learning so that, on the premise of fully protecting enterprise data privacy, each party's data is used to collaboratively train a data processing model, thereby processing business data more accurately and effectively. In a federated learning scenario, the parties may, for example, agree on a model structure (or an agreed model), each train locally with their private data, aggregate the model parameters using a secure and trusted method, and finally improve their local models according to the aggregated model parameters. On the basis of privacy protection, federated learning effectively breaks down data silos and enables joint multi-party modeling.
However, as task complexity and performance requirements grow, the number of network layers of business models in federated learning keeps deepening, and the number of model parameters increases correspondingly. Taking the face recognition model ResNet-50 as an example, the original model has more than 20 million parameters, and the model size exceeds 100 MB. Especially in scenarios with many training members participating in federated learning, the data received by the server grows geometrically, which may cause communication congestion and seriously affect overall training efficiency.
Summary
One or more embodiments of this specification describe a method and apparatus for jointly updating a business model, to solve one or more of the problems mentioned in the background.
According to a first aspect, a method for jointly updating a business model is provided, for multiple data parties to jointly train a business model based on privacy protection with the assistance of a service party, the business model being used to process business data to obtain corresponding business processing results. The method includes: the service party provides each data party with global model parameters, and the correspondence between each data party and the N parameter groups into which the global model parameters are divided; each data party updates its local business model with the global model parameters; each data party further updates the updated local business model based on local business data to obtain a new local business model, and uploads the model parameters of its corresponding parameter group to the service party; the service party fuses the received model parameters for each parameter group respectively, thereby updating the global model parameters.
According to an embodiment, each data party further updating the updated local business model based on local business data to obtain a new local business model includes: after updating the local business model with the global model parameters, each data party uses local business data to detect the current stage transition indicator; a data party whose stage transition indicator satisfies the stop condition for the full update enters the partial update stage, and a data party that has entered the partial update stage updates the model parameters in its corresponding parameter group.
According to an embodiment, the stage transition indicator is the model performance of the updated local business model, and the stop condition is that the model performance reaches a preset value.
According to a second aspect, a method for jointly updating a business model is provided, for a service party assisting multiple data parties in jointly training a business model based on privacy protection, the business model being used to process business data to obtain corresponding business processing results, the multiple data parties including a first party. The method includes: providing the first party with the current global model parameters, and the correspondence between the first party and a first parameter group among the N parameter groups into which the global model parameters are divided, so that the first party updates its local business model with the current global model parameters and, after further updating the updated local business model based on local business data to obtain a new local business model, feeds back a first parameter set for the first parameter group; receiving the first parameter set fed back by the first party; updating the first parameter group in the global model parameters based on the first parameter set and other parameter sets concerning the first parameter group received from other data parties, and then updating the current global model parameters according to the update of the first parameter group.
According to an embodiment, the correspondence between the first party and the first parameter group is determined as follows: the multiple data parties are divided into M groups, where a single group of data parties corresponds to at least one data party, and the first party belongs to a first group among the M groups; the correspondences between the M groups of data parties and the N parameter groups are determined, where a single group of data parties corresponds to at least one parameter group, a single parameter group corresponds to at least one group of data parties, and the parameter group corresponding to the first group is the first parameter group.
According to an embodiment, dividing the multiple data parties into M groups includes one of the following: dividing the multiple data parties into M groups with the goal that the numbers of business data records held by the groups are consistent; dividing the multiple data parties into M groups with the goal that the number of business data records held by a single data party is positively correlated with the number of model parameters included in the corresponding parameter group.
In an embodiment, updating the first parameter group in the global model parameters based on the first parameter set and other parameter sets concerning the first parameter group received from other data parties includes: fusing the first parameter set and the other parameter sets concerning the first parameter group in at least one of the following ways: weighted averaging, taking the minimum, taking the median; and updating the first parameter group in the global model parameters according to the fusion result.
In an embodiment, updating the current global model parameters according to the update of the first parameter group includes: updating each of the other parameter groups according to the corresponding parameter sets fed back by their respective data parties, thereby updating the current global model parameters.
According to a third aspect, a method for jointly updating a business model is provided, for a first party among multiple data parties that jointly train a business model based on privacy protection with the assistance of a service party, the business model being used to process business data to obtain corresponding business processing results. The method includes: receiving from the service party the current global model parameters, and the correspondence between the first party and a first parameter group among the N parameter groups into which the global model parameters are divided; updating the local business model with the current global model parameters; performing several rounds of updates on the local model parameters based on the updated local business model's processing of local business data; feeding back to the service party a first parameter set obtained by updating the first parameter group, so that the service party updates the first parameter group in the global model parameters based on the first parameter set and other parameter sets concerning the first parameter group received from other data parties, and then updates the current global model parameters.
In an embodiment, further updating the updated local business model based on local business data to obtain a new local business model includes: using local business data to detect the current stage transition indicator of the updated local business model; and entering a partial update stage of updating the first parameter group when the stage transition indicator satisfies the stop condition for the full update.
In an embodiment, when the stage transition indicator does not satisfy the stop condition, the full update stage of updating all model parameters of the local business model continues.
In an embodiment, the stage transition indicator is the model performance of the updated local business model, and the stop condition is that the model performance reaches a preset value.
In an embodiment, in the partial update stage, further updating the updated local business model based on local business data to obtain a new local business model includes: detecting whether the stage transition indicator satisfies the activation condition for the full update; and re-entering the full update stage of updating all model parameters of the local business model when the stage transition indicator satisfies the activation condition.
According to a fourth aspect, a system for jointly updating a business model is provided, including a service party and multiple data parties, the multiple data parties jointly training a business model based on privacy protection with the assistance of the service party, the business model being used to process business data to obtain corresponding business processing results, where: the service party is configured to provide each data party with global model parameters, and the correspondence between each data party and the N parameter groups into which the global model parameters are divided; each data party is configured to update its local business model with the global model parameters, and further update the updated local business model based on local business data to obtain a new local business model, so as to upload the model parameters of its corresponding parameter group to the service party; the service party is further configured to fuse the received model parameters for each parameter group respectively, thereby updating the global model parameters.
According to a fifth aspect, an apparatus for jointly updating a business model is provided, deployed at a service party assisting multiple data parties in jointly training a business model based on privacy protection, the business model being used to process business data to obtain corresponding business processing results, the multiple data parties including a first party. The apparatus includes: a providing unit configured to provide the first party with the current global model parameters, and the correspondence between the first party and a first parameter group among the N parameter groups into which the global model parameters are divided, so that the first party updates its local business model with the current global model parameters and, after further updating the updated local business model based on local business data to obtain a new local business model, feeds back a first parameter set for the first parameter group; a receiving unit configured to receive the first parameter set fed back by the first party; and an updating unit configured to update the first parameter group in the global model parameters based on the first parameter set and other parameter sets concerning the first parameter group received from other data parties, thereby updating the current global model parameters according to the update of the first parameter group.
According to a sixth aspect, an apparatus for jointly updating a business model is provided, deployed at a first party among multiple data parties that jointly train a business model based on privacy protection with the assistance of a service party, the business model being used to process business data to obtain corresponding business processing results. The apparatus includes: a receiving unit configured to receive from the service party the current global model parameters, and the correspondence between the first party and a first parameter group among the N parameter groups into which the global model parameters are divided; a replacement unit configured to update the local business model with the current global model parameters; a training unit configured to further update the updated local business model based on local business data to obtain a new local business model; and a feedback unit configured to feed back to the service party a first parameter set obtained by updating the first parameter group, so that the service party updates the first parameter group in the global model parameters based on the first parameter set and other parameter sets concerning the first parameter group received from other data parties, and then updates the current global model parameters.
According to a seventh aspect, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed in a computer, the computer is caused to perform the method of the second or third aspect.
According to an eighth aspect, a computing device is provided, including a memory and a processor, where executable code is stored in the memory, and when the processor executes the executable code, the method of the second or third aspect is implemented.
Through the methods and apparatuses provided by the embodiments of this specification, in the process of multiple parties collaboratively updating a business model based on privacy protection, because the multiple data parties serving as training members are grouped and each data party uploads only part of the model parameters, the communication volume between each data party and the service party, as well as the data processing load of the service party, can be effectively reduced, thereby avoiding communication congestion and helping improve overall training efficiency. The method and apparatus are applicable to any federated learning process, and the above effects are especially significant when there are many data parties or a large number of training samples.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation architecture for jointly updating a business model based on privacy protection under the technical concept of this specification;
Fig. 2 is a flowchart of a method for jointly updating a business model according to an embodiment;
Fig. 3 is a schematic block diagram of an apparatus for jointly updating a business model deployed at a service party according to an embodiment;
Fig. 4 is a schematic block diagram of an apparatus for jointly updating a business model deployed at a data party according to an embodiment.
Detailed description
The solutions provided by this specification are described below with reference to the drawings.
Federated learning, also called federated machine learning, joint learning, or alliance learning, is a machine learning framework that can effectively help multiple institutions use data and build machine learning models while meeting requirements of user privacy protection, data security, and government regulations.
Specifically, suppose enterprise A and enterprise B each build a task model, where a single task may be classification or prediction, and these tasks were approved by the respective users when the data was obtained. However, because the data is incomplete (for example, enterprise A lacks label data and enterprise B lacks user feature data) or insufficient (the sample size is not enough to build a good model), the models at each end may fail to be built or perform poorly. The problem federated learning solves is how to build a high-quality model at each of A's and B's ends while each enterprise's own data is not known to the other parties, that is, to build a shared model without violating data privacy regulations. This shared model behaves like the optimal model the parties would build by aggregating their data together. In this way, the built model serves only each party's own objectives in its own region.
The institutions in federated learning may also be called business parties, and each business party may hold different business data. The business data here may be various data such as characters, pictures, speech, animations, and videos. Usually, the business data of the business parties are correlated. For example, among multiple business parties involved in financial business, business party 1 may be a bank that provides savings and loan services to users and holds data such as users' age, gender, income and expenditure flows, loan amounts, and deposit amounts; business party 2 may be a P2P platform holding data such as users' lending records, investment records, and repayment punctuality; business party 3 may be a shopping website holding data such as users' shopping habits, payment habits, and payment accounts. As another example, among multiple business parties involved in medical business, each business party may be a hospital, a physical examination institution, and so on. For example, business party 1 may be hospital A, whose local business data are diagnosis and treatment records covering users' age, gender, symptoms, diagnosis results, treatment plans, treatment results, and the like; business party 2 may be physical examination institution B, holding examination record data covering users' age, gender, symptoms, examination conclusions, and so on.
The implementation architecture of federated learning is shown in Fig. 1. In practice, a business party may act as a data holder, or may transfer its data to a data holder that participates in the joint training of the business model. Therefore, in Fig. 1 and below, the parties other than the service party that participate in joint training are collectively referred to as data parties. A data party usually corresponds to one business party; in an optional implementation, a data party may correspond to multiple business parties. A data party may be implemented by a device, a computer, a server, and so on.
Under this implementation architecture, two or more data parties may jointly train a business model. Each data party may use the trained business model to perform local business processing on local business data. The service party may assist the federated learning of the business parties, for example, assisting with nonlinear computation, aggregating model parameters, or gradient computation. The service party shown in Fig. 1 takes the form of another party set up independently of the business parties, such as a trusted third party. In practice, the service party may also be distributed among the business parties, or be composed of the business parties, in which case the business parties may use secure computation protocols (such as secret sharing) to complete joint auxiliary computation. This specification does not limit this.
Referring to Fig. 1, under the federated learning implementation architecture, the service party may initialize a global business model and distribute it to the business parties. Each business party may, according to the global business model determined by the service party, locally compute the gradients of the model parameters and update the model parameters according to the gradients. The service party aggregates the gradients of the model parameters or the jointly updated model parameters and feeds them back to the business parties. Each business party updates its local model parameters according to the received model parameters or their gradients. This cycle repeats until a business model suited to each business party is trained.
Federated learning can be divided into horizontal federated learning (feature alignment), vertical federated learning (sample alignment), and federated transfer learning. The implementation architecture provided in this specification can be used for various federated learning architectures and is especially suitable for horizontal federated learning, that is, the case where each business party provides part of the independent samples.
To reduce communication volume and improve model training efficiency, this specification proposes a federated learning method that updates model parameters in stages and in groups. Under this technical concept, in the first stage of the federated learning process, the data parties fully update the model parameters and upload them in groups to speed up convergence; this stage may be called the full update stage. In the second stage, the data parties update the model parameters in groups and upload them in groups to improve model performance; this stage may be called the partial update stage. For a single data party, the transition between its first stage and second stage can be judged by a stage transition indicator.
The method of jointly training a business model under the technical concept of this specification is described in detail below.
Fig. 2 is a schematic flowchart of jointly training a business model according to an embodiment of this specification. The flow involves a service party and multiple data parties. The service party or a single data party may be any computer, device, or server with certain computing capability, such as the service party and data parties shown in Fig. 1. Fig. 2 shows one cycle of federated learning. Each step is described in detail below.
First, in step 201, the service party divides the data parties into M groups. Understandably, under the technical concept of this specification, the data parties may upload model parameters to the service party in groups, so the service party may group the data parties in advance, where M is an integer greater than 1.
According to an implementation, the service party may randomly divide the data parties into M groups. Randomness here may include at least one of the following: which group a single data party is assigned to is random, which data parties a single data party is grouped with is random, and the number of members of a single group is random and not less than 1. For example, 100 data parties may be randomly divided into 10 groups, where some groups include 10 data parties, some include 11, some include 8, and so on.
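The random grouping of step 201 can be sketched in a few lines. This is an illustrative sketch, not part of the patent text; the function name `random_grouping` and the retry-until-non-empty strategy are assumptions for demonstration:

```python
import random

def random_grouping(party_ids, m, seed=None):
    """Randomly assign each data party to one of m groups.

    Which group a party lands in, and which parties share a group,
    are both random; group sizes are random, and we re-draw until
    every group has at least one member.
    """
    rng = random.Random(seed)
    while True:
        groups = {g: [] for g in range(m)}
        for pid in party_ids:
            groups[rng.randrange(m)].append(pid)
        if all(groups[g] for g in range(m)):  # each group non-empty
            return groups

parties = list(range(100))                 # 100 data parties
groups = random_grouping(parties, 10, seed=42)
assert sum(len(v) for v in groups.values()) == 100
```

With a fixed seed the assignment is reproducible, which makes it easy to keep the same grouping across interaction cycles when the grouping is determined only once.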
According to an implementation, the multiple data parties may be grouped according to the amount of business data they hold, for example, grouping the data parties with the goal of keeping the total amount of business data held by the data parties in each group roughly equal.
In other implementations, there may be other grouping methods, which are not described here.
On the other hand, the model parameters of the business model can also be grouped. When the number of parameter groups and the number of data party groups are both N (in this case, M = N), the N groups of data parties correspond one-to-one to the N groups of model parameters. Usually, the model parameters of the business model can be grouped in advance, and the grouping of the data parties can be based on the grouping of the model parameters, where N may be a preset positive integer. When M is less than N, a single group of data parties may correspond to multiple groups of model parameters; when M is greater than N, a single group of model parameters may correspond to multiple groups of data parties. In fact, even when M = N, it is possible that a single group of data parties corresponds to multiple parameter groups while a single parameter group corresponds to multiple groups of data parties. In short, a single group among the M groups of data parties corresponds to at least one parameter group, and a single parameter group among the N parameter groups corresponds to at least one group of data parties.
When the business model is a neural network, the number of data party groups may equal the number of neural network layers of the business model, so that each group of data parties corresponds to one neural network layer. Optionally, the number of data party groups may also be smaller than the number of neural network layers, so that at least one parameter group includes multiple neural network layers.
In an embodiment, the N groups of model parameters correspond to N group identifiers respectively, and each data party in a group is assigned one of the N group identifiers. That is, the group identifiers of the model parameters are assigned to the groups of data parties randomly or according to certain rules. The group identifiers may be randomly mapped to the data party groups after the data party grouping is determined, or the model parameter group identifiers may be directly randomly assigned to the data parties, thereby grouping the data parties and determining their corresponding model parameters at the same time. When the model parameters are grouped by neural network layer, the group identifier of a data party may be the layer number of its corresponding model parameters. As an example, with neural network layer numbers 0 to N-1, N numbers in total, randomly assigning these N numbers to the data parties simultaneously groups the data parties and establishes the correspondence between the data parties and the neural network layers (each corresponding to a parameter group).
According to an embodiment, when the data party grouping is determined according to the model parameter grouping, the multiple data parties may also be grouped according to the correspondence between the amount of business data held by a data party and the number of model parameters in a single parameter group. For example, when the business model is a neural network and a single layer corresponds to a group of model parameters, a layer with more neurons is assigned data parties holding more business data.
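One way to realize this positive correlation between data volume and parameter-group size is a greedy pairing of the data-richest parties with the largest parameter groups. The sketch below is hypothetical; it assumes record counts and per-group parameter counts are known up front, and the name `volume_aware_grouping` is invented for illustration:

```python
def volume_aware_grouping(record_counts, group_sizes):
    """Assign data parties to parameter groups so that parties holding
    more business data records serve the groups with more model
    parameters (e.g. neural-network layers with more neurons).

    record_counts: {party_id: number of business data records}
    group_sizes:   {group_id: number of model parameters in the group}
    Returns {party_id: group_id}.
    """
    # Sort parties by data volume and groups by parameter count,
    # both descending, then pair them off round-robin.
    parties = sorted(record_counts, key=record_counts.get, reverse=True)
    grps = sorted(group_sizes, key=group_sizes.get, reverse=True)
    return {pid: grps[i % len(grps)] for i, pid in enumerate(parties)}

assignment = volume_aware_grouping(
    {"A": 900, "B": 500, "C": 100}, {0: 1000, 1: 10})
assert assignment["A"] == 0   # data-richest party serves the largest layer
```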
It is worth noting that, in the process of jointly updating the business model, the service party may regroup the data parties in each interaction cycle, or may group the data parties only once in the initial cycle and keep the grouping in subsequent cycles, which is not limited here.
Then, through step 202, the service party provides each data party with the current global model parameters, and the correspondence between each data party and the N parameter groups into which the global model parameters are divided. Understandably, in the initial cycle of federated learning, the current global model parameters may be the model parameters initialized by the service party; in other cycles, the current global model parameters may be the model parameters updated by the service party according to the model parameters fed back by the data parties.
Under the technical concept of this specification, each data party feeds back only a part of all model parameters (referred to here as partial model parameters) to the service party. The purpose of grouping the data parties in step 201 is to determine which data parties feed back which model parameters. Therefore, in step 202, each data party may be provided with the group identifier of its corresponding parameter group (e.g., group j), or the parameter identifiers of the individual model parameters (e.g., wij), so that the data party provides the corresponding model parameters according to the group identifier.
In an optional embodiment, a data party (or the group of data parties it belongs to) may also correspond to one or more parameter groups, which is not limited here. In this case, a single data party may feed back to the service party the model parameters of its multiple corresponding parameter groups. Taking the first party, any one of the multiple data parties, as an example, it may correspond to at least a first parameter group, where the first parameter group may be any one of the N groups of model parameters.
Next, in step 203, each data party further updates, based on local business data, the local business model that has been updated according to the global model parameters, to obtain a new local business model. A single data party may update its local business model with the full set of global model parameters, or may update only the partial model parameters of the corresponding group. For example, in the full update stage of the model parameters, a single data party may update the local business model with the full global model parameters; in the partial update stage, a single data party may update the local business model with the full global model parameters, or with the partial model parameters of its corresponding parameter group among the global model parameters. For example, the data parties of group i update only the model parameters of the i-th neural network layer (corresponding to the i-th parameter group).
For a single data party, the full update stage may be the stage of fully updating the model parameters while training the local business model with local business data, and the partial update stage may be the stage of partially updating the model parameters while training the local business model with local business data. In a possible design, in the full update stage, a single data party receives the full global model parameters from the service party and fully updates the model parameters of the local business model; it then uses the updated local business model to process local business data serving as training samples and fully updates the model parameters in several rounds of the current training cycle, that is, it computes the gradients of all model parameters and updates all model parameters based on the gradients. In the partial update stage, a single data party may update the local business model with the full parameters or with the partial model parameters of the corresponding group; it then uses the updated local business model to process local business data serving as training samples, and in several rounds of the current training cycle computes gradients only for the partial model parameters of the corresponding group and updates those model parameters. For example, data party j corresponding to the i-th group of model parameters may fix the model parameters of the other groups, compute only the gradients of the i-th group of model parameters, and update the i-th group of model parameters.
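The fix-other-groups behavior of the partial update stage can be illustrated with a toy numeric sketch. The quadratic loss and the function name `partial_update` are assumptions for demonstration, not the patent's implementation:

```python
def partial_update(params, group_id, grad_fn, lr=0.1, rounds=5):
    """Update only the parameters of `group_id`, keeping other groups fixed.

    params:  {group_id: [float, ...]} model parameters split into groups
    grad_fn: callable(params, group_id) -> gradient list for that group
    """
    for _ in range(rounds):
        grads = grad_fn(params, group_id)
        params[group_id] = [w - lr * g
                            for w, g in zip(params[group_id], grads)]
    return params

# Toy loss L = sum over all parameters of w^2, so dL/dw = 2w.
quad_grad = lambda p, g: [2.0 * w for w in p[g]]

params = {0: [1.0, -1.0], 1: [2.0]}
partial_update(params, group_id=1, grad_fn=quad_grad, lr=0.1, rounds=5)
assert params[0] == [1.0, -1.0]                    # group 0 stayed frozen
assert abs(params[1][0] - 2.0 * 0.8 ** 5) < 1e-9   # group 1 was updated
```

In a real neural network the same effect is usually achieved by freezing the layers of the other groups so that only the corresponding layer's gradients are computed and applied.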
It is worth noting that, in both the full update stage and the partial update stage, a single data party (denoted j) may upload only the partial model parameters wij of the group (e.g., group i) corresponding to the current cycle, that is, the i-th group of model parameters of the j-th data party. For example, if the business model is an N-layer neural network and the N groups of data parties correspond to the N layers respectively, the data parties assigned to group 2 may feed back the model parameters of the second neural network layer to the service party. Thus, the communication data volume can be greatly reduced throughout the federated learning process.
In some optional implementations, the transition may be negotiated and determined by the service party or the data parties, or based on parameters determined by the service party such as the training time of the full update stage (e.g., 5 hours) or the number of training cycles (e.g., 1000 interaction cycles), upon which all data parties together enter the partial update stage of federated learning under the technical concept of this specification.
In other optional implementations, each data party may use a stage transition indicator to determine whether its current cycle is in the full update stage or the partial update stage. The stage transition indicator may be an indicator measuring the processing capability of the jointly trained business model on the local business data of a single data party. That is to say, after the jointly trained business model has a certain processing capability for the local business data of a single data party, the partial model-parameter update of the partial update stage can be performed.
In optional implementations, the stage transition indicator may be represented by at least one model performance metric, such as accuracy or model loss. When the stage transition indicator satisfies the stop condition for the full update, a single data party may enter the partial update stage. The stop condition differs depending on the stage transition indicator. In one embodiment, the stage transition indicator may be accuracy: after updating the local business model with the current global model parameters provided by the service party, a single data party uses the updated local business model to process a local validation set to obtain the accuracy; the stop condition is, for example, that the accuracy is greater than a predetermined accuracy threshold. In another embodiment, the stage transition indicator is model loss: a single data party uses the updated local business model to process the local validation set in multiple batches, each batch determining a model loss; for multiple consecutive batches, whether each single-batch decrease of the model loss is less than a predetermined value (e.g., 0.001), or whether the overall decrease is less than a predetermined value (e.g., 0.01), serves as the stage transition indicator. That is, the stop condition is that the decrease of the model loss is smaller than a predetermined magnitude. In an embodiment, the data party may also detect whether the loss function has stabilized over the most recent several (e.g., 10) training cycles (interaction cycles with the data party), for example whether the decrease is less than a predetermined value (e.g., 0.001), as the stage transition indicator. That is, the stop condition in this case may be that the decrease of the model loss is smaller than a predetermined magnitude for a predetermined number of consecutive times.
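The model-loss stop condition described above can be sketched as a simple check over a window of recorded losses. The helper name is invented, and the thresholds mirror the example values in the text but are otherwise illustrative:

```python
def full_update_stopped(losses, single_drop=0.001, overall_drop=0.01):
    """Decide whether the full update stage can stop, using the
    model-loss criterion: over consecutive batches, every single-batch
    decrease is below `single_drop`, or the overall decrease is below
    `overall_drop`.
    """
    if len(losses) < 2:
        return False
    drops = [a - b for a, b in zip(losses, losses[1:])]
    return (all(d < single_drop for d in drops)
            or (losses[0] - losses[-1]) < overall_drop)

assert full_update_stopped([0.500, 0.4999, 0.4998, 0.4997])   # plateaued
assert not full_update_stopped([0.9, 0.7, 0.5, 0.3])          # still falling
```

The same window of losses could be re-checked in the partial update stage against an activation threshold (e.g. a drop greater than 0.1) to implement the wake-up condition for re-entering the full update stage.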
In further embodiments, the data party may also use other evaluation metrics, or determine the stage transition indicator in other ways, to determine whether the full update stage has ended. After the full update stage ends, a single data party may enter the partial update stage, that is, in each training cycle, during the multiple rounds of model parameter updates performed locally, only the model parameters of the group corresponding to that training cycle are updated; for example, the first party updates only the model parameters in the first parameter group.
In a possible design, after entering the partial update stage, a single data party may continue to detect the above stage transition indicator, and when it satisfies the activation condition for the full update, re-enter the full update stage to further fully update the model parameters of the business model. The activation condition here may also be called the wake-up condition of the full update stage, for example, detecting that the decrease of the model loss is greater than a preset activation value (e.g., 0.1).
Further, through step 204, each data party uploads the model parameters of its corresponding parameter group to the service party. Specifically, the i-th data party assigned to group j feeds back to the service party the model parameters wi,j of the j-th parameter group (e.g., the j-th neural network layer). Taking the aforementioned first party as an example, it may at least upload to the service party the updated parameter values of the model parameters corresponding to the first parameter group. For convenience of description, this specification may denote the parameter values of the model parameters corresponding to the first parameter group as a first parameter set; the first party may then feed back the first parameter set updated for the first parameter group. Optionally, the data uploaded by the data party to the service party may also be encrypted in a pre-agreed manner such as homomorphic encryption or secret sharing, to further protect data privacy.
In this way, further through step 205, the service party fuses, for each parameter group, the model parameters fed back by the corresponding groups of data parties, to update the global model parameters. For example, the service party may fuse the groups of model parameters in order from 1 to N, or fuse the model parameters of each parameter group in the order in which the groups of data parties complete their feedback.
The service party may fuse each group of model parameters by weighted averaging, taking the minimum, taking the median, and so on, which is not limited here. In the weighted averaging mode, the weights may be set to be equal or unequal; if unequal, the weight of each data party may be positively correlated with the number of business data records held by that data party. The fusion result of each group of model parameters may be used to update the global model parameters.
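The weighted-average fusion with weights proportional to record counts might look like the following sketch (the function name and data layout are assumptions for illustration):

```python
def fuse_group(param_sets, record_counts):
    """Fuse the parameter sets fed back for one parameter group by a
    weighted average, with each data party's weight proportional to
    the number of business data records it holds.

    param_sets:    {party_id: [float, ...]} feedback for this group
    record_counts: {party_id: record count}
    """
    total = sum(record_counts[p] for p in param_sets)
    dim = len(next(iter(param_sets.values())))
    return [sum(param_sets[p][k] * record_counts[p] for p in param_sets) / total
            for k in range(dim)]

fused = fuse_group({"A": [1.0, 0.0], "B": [0.0, 1.0]},
                   {"A": 3, "B": 1})
assert fused == [0.75, 0.25]
```

Taking the element-wise minimum or median instead would only change the inner reduction; the per-group structure of the fusion stays the same.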
The above steps 201 to 205 can be regarded as one cycle of the server-assisted federated learning process. Based on the technical concept of this specification, the execution order of the steps in steps 201 to 205 is not limited to the order given in the above embodiment. For example, steps 201, 202, and 203 may be performed in the above order, simultaneously, or in an interleaved manner. Taking interleaved execution as an example, the service party may first provide the current global model parameters to each data party through step 201, then group the data parties through step 202 and provide the corresponding group identifiers to the data parties. In an optional implementation, the service party may determine and provide the corresponding group identifiers while the data parties train their local business models with local business data.
In addition, when the grouping of the data parties is fixed throughout the federated learning process, the service party groups the multiple data parties and determines the model parameters corresponding to each group only in the first training cycle, or pre-determines the grouping and the corresponding model parameters before training starts and provides them to the data parties; in subsequent flows the service party no longer performs the above step 201, nor the part of step 202 that provides the correspondence between data parties and parameter groups to each data party.
Reviewing the above flow: in the process of jointly updating the business model based on privacy protection through the flow shown in Fig. 2, because the multiple data parties serving as training members are grouped and each data party uploads only part of the model parameters to the service party, the communication volume between each data party and the service party, as well as the data processing load of the service party, can be effectively reduced during multi-party collaboration, thereby avoiding communication congestion and helping improve overall training efficiency.
In addition, for a single data party, the training process can be divided into two stages: in the full update stage, training members update all parameters but upload model parameters in groups, which helps speed up convergence and improve the efficiency of joint training; in the partial update stage, training members update and upload model parameters in groups, which helps improve model performance and thereby the processing capability of the jointly trained business model on business data.
The method of jointly updating a business model provided in this specification is applicable to any federated learning process; especially when there are many data parties or a large number of training samples, the above effects are more significant. Moreover, the above process neither sparsifies nor quantizes the model, so the model information is lossless and the impact on model convergence is small; randomly grouping the training members also ensures the robustness of the federated model to the training data.
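To tie steps 201 to 205 together, a toy end-to-end cycle is sketched below. Everything here, including the "shrink toward zero" stand-in for local training and plain averaging as the fusion rule, is an illustrative assumption rather than the patent's implementation:

```python
def federated_cycle(global_params, party_groups, shrink=0.9):
    """One server-assisted cycle: broadcast global parameters, let each
    data party 'train' locally (here: shrink its group's parameters as
    a stand-in for gradient steps), collect only each party's own
    group, and fuse each group by plain averaging.

    global_params: {group_id: [float, ...]}
    party_groups:  {party_id: group_id}  (step-201 assignment)
    """
    uploads = {}  # group_id -> list of parameter vectors fed back
    for pid, gid in party_groups.items():
        local = [w * shrink for w in global_params[gid]]  # toy local update
        uploads.setdefault(gid, []).append(local)
    # Step 205: fuse each group's feedback and write it back.
    for gid, vecs in uploads.items():
        n = len(vecs)
        global_params[gid] = [sum(v[k] for v in vecs) / n
                              for k in range(len(vecs[0]))]
    return global_params

g = federated_cycle({0: [1.0], 1: [2.0]}, {"A": 0, "B": 0, "C": 1})
```

Note that each party uploads only the vector of its own group, so per-cycle upload traffic per party is roughly 1/N of the full parameter set, which is the communication saving the flow above describes.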
According to an embodiment of another aspect, a system for jointly updating a business model is also provided, including a service party and multiple data parties, the multiple data parties jointly training a business model based on privacy protection with the assistance of the service party, the business model being used to process business data to obtain corresponding business processing results.
In the system: the service party is configured to provide each data party with global model parameters, and the correspondence between each data party and the N parameter groups into which the global model parameters are divided; each data party is configured to update its local business model with the global model parameters, and further update the updated local business model based on local business data to obtain a new local business model, so as to upload the model parameters of its corresponding parameter group to the service party; the service party is further configured to fuse the received model parameters for each parameter group respectively, thereby updating the global model parameters.
Specifically, as shown in Fig. 3 and Fig. 4, the service party and a single data party may perform the corresponding operations through apparatus 300 and apparatus 400 for jointly updating a business model, respectively.
As shown in Fig. 3, the apparatus 300 may include: a providing unit 31 configured to provide the first party with the current global model parameters, and the correspondence between the first party and a first parameter group among the N parameter groups into which the global model parameters are divided, so that the first party updates its local business model with the current global model parameters and, after further updating the updated local business model based on local business data to obtain a new local business model, feeds back a first parameter set for the first parameter group; a receiving unit 32 configured to receive the first parameter set fed back by the first party; and an updating unit 33 configured to update the first parameter group in the global model parameters based on the first parameter set and other parameter sets concerning the first parameter group received from other data parties, thereby updating the current global model parameters according to the update of the first parameter group.
Understandably, the receiving unit 32 may in fact also be configured to receive the parameter sets fed back by the other data parties, not just the first parameter set fed back by the first party. Because the interaction processes between the service party and the individual data parties are consistent, only the interaction between the first party and the service party is described here, and therefore only the parameter set involving the first party is described.
As shown in Fig. 4, taking the first party among the multiple data parties as an example, the apparatus 400 may include: a receiving unit 41 configured to receive from the service party the current global model parameters, and the correspondence between the first party and a first parameter group among the N parameter groups into which the global model parameters are divided; a replacement unit 42 configured to update the local business model with the current global model parameters; a training unit 43 configured to further update the updated local business model based on local business data to obtain a new local business model; and a feedback unit 44 configured to feed back to the service party a first parameter set obtained by updating the first parameter group, so that the service party updates the first parameter group in the global model parameters based on the first parameter set and other parameter sets concerning the first parameter group received from other data parties, and then updates the current global model parameters.
It is worth noting that the apparatus 300 shown in Fig. 3 and the apparatus 400 shown in Fig. 4 are the apparatus embodiments deployed at the service party and at the data party, respectively, corresponding to the method embodiment shown in Fig. 2, to implement the functions of the corresponding parties. Therefore, the corresponding descriptions in the method embodiment shown in Fig. 2 also apply to the apparatus 300 or the apparatus 400, and details are not repeated here.
According to an embodiment of another aspect, a computer-readable storage medium is also provided, on which a computer program is stored; when the computer program is executed in a computer, the computer is caused to perform the operations corresponding to the service party or the data party in the method described in conjunction with Fig. 2.
According to an embodiment of yet another aspect, a computing device is also provided, including a memory and a processor, where executable code is stored in the memory, and when the processor executes the executable code, the operations corresponding to the service party or the data party in the method described in conjunction with Fig. 2 are implemented.
Those skilled in the art should be aware that, in one or more of the above examples, the functions described in the embodiments of this specification may be implemented by hardware, software, firmware, or any combination thereof. When implemented by software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium.
The above specific implementations further describe in detail the objectives, technical solutions, and beneficial effects of the technical concept of this specification. It should be understood that the above are only specific implementations of the technical concept of this specification and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, and the like made on the basis of the technical solutions of the embodiments of this specification shall be included within the scope of protection of the technical concept of this specification.

Claims (18)

  1. A method for jointly updating a business model, used for multiple data parties to jointly train a business model based on privacy protection with the assistance of a service party, the business model being used to process business data to obtain corresponding business processing results; the method comprising:
    the service party providing each data party with global model parameters, and the correspondence between each data party and N parameter groups into which the global model parameters are divided;
    each data party updating its local business model with the global model parameters;
    each data party further updating the updated local business model based on local business data to obtain a new local business model, and uploading the model parameters in its corresponding parameter group to the service party;
    the service party fusing the received model parameters for each parameter group respectively, thereby updating the global model parameters.
  2. The method according to claim 1, wherein each data party further updating the updated local business model based on local business data to obtain a new local business model comprises:
    each data party, after updating its local business model with the global model parameters, using local business data to detect a current stage transition indicator;
    a data party whose stage transition indicator satisfies a stop condition for the full update entering a partial update stage;
    a data party that has entered the partial update stage updating the model parameters in its corresponding parameter group.
  3. The method according to claim 2, wherein the stage transition indicator is model performance of the updated local business model, and the stop condition is that the model performance reaches a preset value.
  4. A method for jointly updating a business model, used for a service party assisting multiple data parties in jointly training a business model based on privacy protection, the business model being used to process business data to obtain corresponding business processing results, the multiple data parties including a first party, the method comprising:
    providing the first party with current global model parameters, and the correspondence between the first party and a first parameter group among N parameter groups into which the global model parameters are divided, so that the first party updates its local business model with the current global model parameters and, after further updating the updated local business model based on local business data to obtain a new local business model, feeds back a first parameter set for the first parameter group;
    receiving the first parameter set fed back by the first party;
    updating the first parameter group in the global model parameters based on the first parameter set and other parameter sets concerning the first parameter group received from other data parties, and then updating the current global model parameters according to the update of the first parameter group.
  5. The method according to claim 4, wherein the correspondence between the first party and the first parameter group is determined as follows:
    dividing the multiple data parties into M groups, wherein a single group of data parties corresponds to at least one data party, and the first party belongs to a first group among the M groups of data parties;
    determining correspondences between the M groups of data parties and the N parameter groups respectively, wherein a single group of data parties corresponds to at least one parameter group, a single parameter group corresponds to at least one group of data parties, and the parameter group corresponding to the first group is the first parameter group.
  6. The method according to claim 5, wherein dividing the multiple data parties into M groups comprises one of the following:
    dividing the multiple data parties into M groups with the goal that the numbers of business data records held by the groups of data parties are consistent;
    dividing the multiple data parties into M groups with the goal that the number of business data records held by a single data party is positively correlated with the number of model parameters included in the corresponding parameter group.
  7. The method according to claim 4, wherein updating the first parameter group in the global model parameters based on the first parameter set and other parameter sets concerning the first parameter group received from other data parties comprises:
    fusing the first parameter set and the other parameter sets concerning the first parameter group in at least one of the following ways: weighted averaging, taking the minimum, taking the median;
    updating the first parameter group in the global model parameters according to the fusion result.
  8. The method according to claim 4, wherein updating the current global model parameters according to the update of the first parameter group comprises:
    updating each of the other parameter groups according to corresponding parameter sets fed back by their respective data parties, thereby updating the current global model parameters.
  9. A method for jointly updating a business model, used for a first party among multiple data parties jointly training a business model based on privacy protection with the assistance of a service party, the business model being used to process business data to obtain corresponding business processing results, the method comprising:
    receiving from the service party current global model parameters, and the correspondence between the first party and a first parameter group among N parameter groups into which the global model parameters are divided;
    updating a local business model with the current global model parameters;
    further updating the updated local business model based on local business data to obtain a new local business model;
    feeding back to the service party a first parameter set obtained by updating the first parameter group, so that the service party updates the first parameter group in the global model parameters based on the first parameter set and other parameter sets concerning the first parameter group received from other data parties, and then updates the current global model parameters.
  10. The method according to claim 9, wherein further updating the updated local business model based on local business data to obtain a new local business model comprises:
    using local business data to detect a current stage transition indicator of the updated local business model;
    entering a partial update stage of updating the first parameter group when the stage transition indicator satisfies a stop condition for the full update.
  11. The method according to claim 10, wherein, when the stage transition indicator does not satisfy the stop condition, a full update stage of updating all model parameters of the local business model continues.
  12. The method according to claim 10 or 11, wherein the stage transition indicator is model performance of the updated local business model, and the stop condition is that the model performance reaches a preset value.
  13. The method according to claim 10, wherein, in the partial update stage, further updating the updated local business model based on local business data to obtain a new local business model comprises:
    detecting whether the stage transition indicator satisfies an activation condition for the full update;
    re-entering a full update stage of updating all model parameters of the local business model when the stage transition indicator satisfies the activation condition.
  14. A system for jointly updating a business model, comprising a service party and multiple data parties, the multiple data parties jointly training a business model based on privacy protection with the assistance of the service party, the business model being used to process business data to obtain corresponding business processing results; wherein:
    the service party is configured to provide each data party with global model parameters, and the correspondence between each data party and N parameter groups into which the global model parameters are divided;
    each data party is configured to update its local business model with the global model parameters, and further update the updated local business model based on local business data to obtain a new local business model, so as to upload the model parameters in its corresponding parameter group to the service party;
    the service party is further configured to fuse the received model parameters for each parameter group respectively, thereby updating the global model parameters.
  15. An apparatus for jointly updating a business model, deployed at a service party assisting multiple data parties in jointly training a business model based on privacy protection, the business model being used to process business data to obtain corresponding business processing results, the multiple data parties including a first party, the apparatus comprising:
    a providing unit, configured to provide the first party with current global model parameters, and the correspondence between the first party and a first parameter group among N parameter groups into which the global model parameters are divided, so that the first party updates its local business model with the current global model parameters and, after further updating the updated local business model based on local business data to obtain a new local business model, feeds back a first parameter set for the first parameter group;
    a receiving unit, configured to receive the first parameter set fed back by the first party;
    an updating unit, configured to update the first parameter group in the global model parameters based on the first parameter set and other parameter sets concerning the first parameter group received from other data parties, and then update the current global model parameters according to the update of the first parameter group.
  16. An apparatus for jointly updating a business model, deployed at a first party among multiple data parties jointly training a business model based on privacy protection with the assistance of a service party, the business model being used to process business data to obtain corresponding business processing results, the apparatus comprising:
    a receiving unit, configured to receive from the service party current global model parameters, and the correspondence between the first party and a first parameter group among N parameter groups into which the global model parameters are divided;
    a replacement unit, configured to update a local business model with the current global model parameters;
    a training unit, configured to further update the updated local business model based on local business data to obtain a new local business model;
    a feedback unit, configured to feed back to the service party a first parameter set obtained by updating the first parameter group, so that the service party updates the first parameter group in the global model parameters based on the first parameter set and other parameter sets concerning the first parameter group received from other data parties, and then updates the current global model parameters.
  17. A computer-readable storage medium, on which a computer program is stored, wherein, when the computer program is executed in a computer, the computer is caused to perform the method of any one of claims 4-13.
  18. A computing device, comprising a memory and a processor, wherein executable code is stored in the memory, and when the processor executes the executable code, the method of any one of claims 4-13 is implemented.
PCT/CN2022/085876 2021-04-12 2022-04-08 Method and apparatus for jointly updating a business model WO2022218231A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/485,765 US20240037252A1 (en) 2021-04-12 2023-10-12 Methods and apparatuses for jointly updating service model

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110390904.XA CN113052329B (zh) 2021-04-12 2021-04-12 联合更新业务模型的方法及装置
CN202110390904.X 2021-04-12

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/485,765 Continuation US20240037252A1 (en) 2021-04-12 2023-10-12 Methods and apparatuses for jointly updating service model

Publications (1)

Publication Number Publication Date
WO2022218231A1 true WO2022218231A1 (zh) 2022-10-20

Family

ID=76519116

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/085876 WO2022218231A1 (zh) 2021-04-12 2022-04-08 联合更新业务模型的方法及装置

Country Status (3)

Country Link
US (1) US20240037252A1 (zh)
CN (1) CN113052329B (zh)
WO (1) WO2022218231A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117093903A (zh) * 2023-10-19 2023-11-21 中国科学技术大学 一种纵向联邦学习场景中标签推理攻击方法

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052329B (zh) * 2021-04-12 2022-05-27 支付宝(杭州)信息技术有限公司 联合更新业务模型的方法及装置
CN114357526A (zh) * 2022-03-15 2022-04-15 中电云数智科技有限公司 抵御推断攻击的医疗诊断模型差分隐私联合训练方法
CN114707662A (zh) * 2022-04-15 2022-07-05 支付宝(杭州)信息技术有限公司 联邦学习方法、装置及联邦学习系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200285980A1 (en) * 2019-03-08 2020-09-10 NEC Laboratories Europe GmbH System for secure federated learning
CN112015749A (zh) * 2020-10-27 2020-12-01 支付宝(杭州)信息技术有限公司 基于隐私保护更新业务模型的方法、装置及系统
CN112288097A (zh) * 2020-10-29 2021-01-29 平安科技(深圳)有限公司 联邦学习数据处理方法、装置、计算机设备及存储介质
CN112488322A (zh) * 2020-12-15 2021-03-12 杭州电子科技大学 一种基于数据特征感知聚合的联邦学习模型训练方法
CN113052329A (zh) * 2021-04-12 2021-06-29 支付宝(杭州)信息技术有限公司 联合更新业务模型的方法及装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10970402B2 (en) * 2018-10-19 2021-04-06 International Business Machines Corporation Distributed learning preserving model security
CN111626506B (zh) * 2020-05-27 2022-08-26 华北电力大学 Federated-learning-based regional photovoltaic power probabilistic forecasting method and collaborative regulation system thereof
CN111476376B (zh) * 2020-06-24 2020-10-16 支付宝(杭州)信息技术有限公司 Alliance learning method, alliance learning apparatus, and alliance learning system


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117093903A (zh) * 2023-10-19 2023-11-21 中国科学技术大学 Label inference attack method for vertical federated learning scenarios
CN117093903B (zh) * 2023-10-19 2024-03-29 中国科学技术大学 Label inference attack method for vertical federated learning scenarios

Also Published As

Publication number Publication date
US20240037252A1 (en) 2024-02-01
CN113052329A (zh) 2021-06-29
CN113052329B (zh) 2022-05-27

Similar Documents

Publication Publication Date Title
WO2022218231A1 (zh) Method and apparatus for jointly updating a service model
US11620403B2 (en) Systems and methods for secure data aggregation and computation
WO2021204040A1 (zh) Federated learning data processing method, apparatus, device, and storage medium
US10355869B2 (en) Private blockchain transaction management and termination
Batool et al. Block-FeST: A blockchain-based federated anomaly detection framework with computation offloading using transformers
US11126659B2 (en) System and method for providing a graph protocol for forming a decentralized and distributed graph database
CN111612455A (zh) Byzantine fault-tolerant consortium-chain consensus method for protecting electricity consumption information, and system and storage medium thereof
WO2017211259A1 (zh) User credit score optimization method and apparatus
CN112799708B (zh) Method and system for jointly updating a service model
US11558420B2 (en) Detection of malicious activity within a network
CN111860865B (zh) Model building and analysis method and apparatus, electronic device, and medium
CN113377797B (zh) Method, apparatus, and system for jointly updating a model
CN113360514A (zh) Method, apparatus, and system for jointly updating a model
Qin et al. A selective model aggregation approach in federated learning for online anomaly detection
CN115034836A (zh) Model training method and related apparatus
CN115049011A (zh) Method and apparatus for determining the model contribution of training members in federated learning
CN112101577B (zh) XGBoost-based cross-sample federated learning and testing method, system, device, and medium
CN111865595A (zh) Blockchain consensus method and apparatus
US11734455B2 (en) Blockchain-based data processing method and apparatus, device, and storage medium
CN113887740A (zh) Method, apparatus, and system for jointly updating a model
Buyukates et al. Proof-of-Contribution-Based Design for Collaborative Machine Learning on Blockchain
Zhao et al. FedCom: Byzantine-Robust Federated Learning Using Data Commitment
Li et al. Analysis of duplicate packing in fruitchain
Niu et al. A Sensitivity-aware and Block-wise Pruning Method for Privacy-preserving Federated Learning
Xu et al. BASS: A Blockchain-Based Asynchronous SignSGD Architecture for Efficient and Secure Federated Learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22787457

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22787457

Country of ref document: EP

Kind code of ref document: A1