CN112613618A - Safe federal learning logistic regression algorithm - Google Patents

Safe federal learning logistic regression algorithm

Info

Publication number
CN112613618A
Authority
CN
China
Prior art keywords
guest
host
data
logistic regression
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110002749.XA
Other languages
Chinese (zh)
Inventor
Zhu Wenwei (祝文伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenpu Technology Shanghai Co ltd
Original Assignee
Shenpu Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenpu Technology Shanghai Co ltd filed Critical Shenpu Technology Shanghai Co ltd
Priority to CN202110002749.XA priority Critical patent/CN112613618A/en
Publication of CN112613618A publication Critical patent/CN112613618A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a secure federated learning logistic regression algorithm comprising the following steps: the host computes E(W_B X_B) and sends it to the guest; the guest performs three intermediate computations (their formulas appear only as equation images in the original record), computes the encrypted residual E(y − y′) and sends it to the host, then computes the gradient value L_A and updates its local model parameter W′_A; if isstop ≠ 0, the host computes the gradient value L_B, updates the model parameter E(W′_B), and repeats from step 1; if isstop = 0, the next step is entered; the guest outputs W_A, and the host selects a random vector and sends W′_B, perturbed by that vector, to the guest; the guest helps decrypt W′_B and sends it to the host; the host then outputs the model parameters W_B. The system architecture provided by the invention can be easily extended to support multi-party model training and can train a joint model over a large corpus of dispersed data owned by different parties while preserving data privacy and ensuring model accuracy.

Description

Safe federal learning logistic regression algorithm
Technical Field
The invention relates to the technical field of federal learning, in particular to a safe federal learning logistic regression algorithm.
Background
Machine learning (ML) refers to the process of using algorithms to let a computer autonomously build a reasonable model from known data and then use that model to judge new cases. It plays a very important role in applications such as web search, online advertising, product recommendation, mechanical failure prediction, insurance pricing, and financial risk management. Traditionally, machine learning models are trained on a centralized corpus of data, which may be collected by one or more data providers. Although parallel distributed algorithms have been developed to speed up the training process, the training data itself is still collected centrally and stored in one data center.
In May 2018, the European Union raised privacy protection requirements to a new level with the General Data Protection Regulation (GDPR). Beyond that, many other laws and regulations concerning private data have been published. Platforms that previously shared data freely are therefore challenged, and data collection for machine learning faces serious privacy problems. Because the data used for machine learning training is often sensitive, it may come from multiple owners with different privacy requirements. This serious privacy problem limits the amount of data actually available.
Many scholars have proposed training directly on encrypted data using secure multi-party computation, which obviously brings considerable computational overhead. To address this challenge, Google introduced the federated learning (FL) system. Yang Qiang et al. of WeBank extended the concept of federated learning to cover more scenarios, forming a comprehensive and secure federated learning framework that includes horizontal federated learning (HFL), vertical federated learning (VFL), and federated transfer learning (FTL).
Federated learning stipulates that all data remain local, so privacy is not disclosed and regulations are not violated; multiple participants join their data to build a virtual common model and share the resulting benefits. Specifically, without the data ever leaving its local site, a virtual common model is established by exchanging parameters under an encryption mechanism, without violating data privacy regulations. As a modeling method that guarantees data security, federated learning has huge application prospects in industries such as sales and finance. In these industries, data cannot be aggregated directly for machine learning model training due to many factors, including intellectual property, privacy protection, and data security. A joint model must then be trained via federated learning.
Nevertheless, federated learning still faces difficulties. First: how can enterprise B, which has no labeled data, compute the model? Second: how does the server update the new model? Third: how to ensure that no server can reverse out other parties' information after obtaining the latest model? Therefore, how to carry out model training with a machine learning algorithm while preserving secrecy is also a key point.
The logistic regression algorithm is a classic machine learning algorithm: when tackling a machine learning problem, a simple algorithm is tried first and its parameters are optimized; only if that fails are more complex algorithms such as neural networks chosen. Logistic regression has a wide range of applications and can be found in industries such as finance and the internet; whenever a binary classification problem is involved, logistic regression is essentially the first choice.
Due to the many advantages of logistic regression and its widespread use in binary classification tasks, there have been several implementations of logistic regression for vertical federated learning. The logistic regression algorithms under existing federated learning systems are built around a coordinator C: joint modeling requires three participants, namely enterprise A, enterprise B, and the coordinator C that helps fuse the models, and C plays an important role in model training.
In some recent work, the raw data of the modeling participants is encrypted with a homomorphic cryptosystem and uploaded to a central server (e.g., a cloud host), which runs a machine learning algorithm adapted to additively homomorphic encryption (AHE), i.e., trains the model on the encrypted data. This protects the original data, but computing on encrypted data consumes memory and processing time, and the data, although encrypted, is no longer stored locally, increasing the potential risk of leakage. If encrypted data is not to be transmitted to a central server, some intermediate results can instead be encrypted and transmitted with a homomorphic cryptosystem during training. This brings two significant benefits: (1) the original data is kept locally by both parties; (2) the amount of data that needs to be encrypted is minimized, greatly reducing the overall computational overhead. In this research direction, Hardy et al. proposed a federated logistic regression solution based on vertically partitioned data. A scheme proposed by Baidu in 2019 removes the third party, but in theory the Y label information can still leak. A non-interactive training mode proposed in 2020 hands both training and decryption to a third-party CSP, so its security is not high enough.
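The additively homomorphic property these schemes rely on — combining ciphertexts so that decryption yields the sum of the plaintexts — can be illustrated with a minimal Paillier sketch. The toy primes and key handling below are illustrative only; the patent does not specify a concrete cryptosystem or key size.

```python
import math
import random

def paillier_keygen(p=101, q=103):
    # Toy primes for illustration only; real systems use >= 2048-bit moduli.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because g = n + 1 is used below
    return n, (lam, mu, n)

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # c = (n+1)^m * r^n mod n^2
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    u = pow(c, lam, n * n)
    # L(u) = (u - 1) / n, then multiply by mu = lam^{-1} mod n
    return (u - 1) // n * mu % n

pub, priv = paillier_keygen()
a, b = 123, 456
# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c_sum = encrypt(pub, a) * encrypt(pub, b) % (pub * pub)
assert decrypt(priv, c_sum) == a + b   # 579
```

Multiplying two ciphertexts modulo n² adds the underlying plaintexts, which is exactly the operation the training protocol needs when aggregating encrypted scores and gradients.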
Disclosure of Invention
The invention aims to provide a secure federated learning logistic regression algorithm that greatly reduces system complexity, lowers the cost for any two parties to establish a joint model, and solves the problems described in the background art.
In order to achieve the purpose, the invention provides the following technical scheme:
a safe federated learning logistic regression algorithm comprises longitudinal federated learning and comprises the following steps:
step 1: host calculates E (W)BXB) Sending to the guest;
step 2: guest calculation
Figure BDA0002882215250000031
Computing
Figure BDA0002882215250000032
Computing
Figure BDA0002882215250000033
E (y) is calculated and sent to the host, and then the gradient value L is calculatedAUpdating local model parameter W'A
And step 3: if isstop ≠ 0, then calculate the gradient value LBUpdate model parameter E (W'B) Then, repeating step 1: if the isstop is 0, entering the step 4;
and 4, step 4: guest returns WAHost selects a random vector to which perturbed W 'will be added'BSending to the guest;
and 5: guest helps decrypt W'BAnd sending the data to host;
step 6: host returns model parameters WB
Further, the data applier (guest) and the data holder (host) hold feature matrices X_A and X_B respectively (the full expressions appear only as equation images in the original record); the guest holds the label matrix y ∈ R^{n×1}, where y_j ∈ R^{1×1}, i ∈ {A, B}, j ∈ [1, n], each record being an instance belonging to some user, and F_i denotes the feature set of the corresponding feature matrix X_i.
Further, when applying the machine learning algorithm, a mini-batch gradient descent method is used to train the adopted algorithm.
Further, in order to train a homomorphic-encryption-based logistic regression federated model, the guest's label matrix y ∈ R^{n×1} and partial feature matrix must be protected, as must the host's partial feature matrix and the model parameters initialized by guest and host (the exact expressions appear only as equation images in the original record).
Further, the guest holds both a data matrix and a label matrix; as the active party with access to the labels y, the guest naturally plays the role of the leading server in federated learning, while the host, regarded as the data holder, holds only a data matrix and plays the role of a client in federated learning, making predictions for its own customers.
Further, the mini-batch gradient descent method updates the parameters with a small subset of samples (the batch size) at a time.
Further, a batch size of 1 yields SGD and a batch size of m yields BGD; the batch size is usually set to a power of 2, typically 2, 4, 8, 16, 32, 64, 128, 256, or 512.
Compared with the prior art, the invention has the beneficial effects that:
(1) a new vertical federated learning architecture is proposed, eliminating the role of third party coordinators, which greatly reduces the complexity of the system and allows any two parties to train a federated model without the help of a trusted coordinator. In addition to two-way model training, the proposed system architecture can also be easily extended to support multi-way model training.
(2) Based on this framework, a parallel distributed logistic regression for vertical federated learning is realized, which can process a large amount of training data by running the training algorithm on machine clusters of both parties. Federated learning (FL) is a new machine learning mechanism in which a joint model can be trained over a large corpus of dispersed data owned by different parties while keeping the data private.
(3) Taylor expansion is not used to approximate the sigmoid operator, so the subsequent final result is unbiased and the accuracy of the model is preserved.
Drawings
FIG. 1 is a flow chart of the central logistic regression training model of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in FIG. 1, the secure federated learning logistic regression algorithm comprises vertical federated learning and comprises the following steps:
Step 1: the host computes E(W_B X_B) and sends it to the guest;
Step 2: the guest performs three intermediate computations (their formulas appear only as equation images in the original record), computes the encrypted residual E(y − y′) and sends it to the host, then computes the gradient value L_A and updates its local model parameter W′_A;
Step 3: if isstop ≠ 0, the host computes the gradient value L_B, updates the model parameter E(W′_B), and repeats from step 1; if isstop = 0, go to step 4;
Step 4: the guest outputs W_A; the host selects a random vector, adds it to W′_B as a perturbation, and sends the result to the guest;
Step 5: the guest helps decrypt the perturbed W′_B and sends it to the host;
Step 6: the host outputs the model parameters W_B.
Given: the data applier (guest) and the data holder (host) hold feature matrices X_A and X_B respectively (the full expressions appear only as equation images in the original record). The guest holds the label matrix y ∈ R^{n×1}, where y_j ∈ R^{1×1}, i ∈ {A, B}, j ∈ [1, n]; each record is an instance belonging to some user. F_i denotes the feature set of the corresponding feature matrix X_i; here F_1 ∩ F_2 is required to be empty. When generalizing to multiple hosts and one guest, different hosts are likely to have partially overlapping user sets.
Learn: a machine learning model M is learned without providing any party's data matrix information to the others during the learning process. The model M has a projection M_i on each side, a function that accepts that side's own features X_i as input.
Lossless and efficient: the model M must guarantee execution efficiency on the premise of guaranteeing accuracy.
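Under the vertical partition just defined, with disjoint feature sets, the linear score of the joint model decomposes losslessly across the two parties; a small plaintext sketch (illustrative data, not from the patent):

```python
# Vertically partitioned features: the guest holds the first two columns,
# the host holds the remaining columns of the same user rows.
X = [[0.2, 1.0, -0.5, 0.7],
     [1.1, -0.3, 0.4, 0.0]]
w = [0.5, -1.0, 0.25, 2.0]

X_A = [row[:2] for row in X]   # guest's columns
X_B = [row[2:] for row in X]   # host's columns
w_A, w_B = w[:2], w[2:]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Each party computes its partial score locally; only partial sums are exchanged.
partial_A = [dot(row, w_A) for row in X_A]
partial_B = [dot(row, w_B) for row in X_B]
split_scores = [a + b for a, b in zip(partial_A, partial_B)]

central_scores = [dot(row, w) for row in X]
assert all(abs(s - c) < 1e-12 for s, c in zip(split_scores, central_scores))
```

Only the partial sums need to be exchanged (encrypted, in the actual protocol); the raw feature columns never leave their owners, which is what makes the split lossless with respect to a centralized model.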
Here the guest holds a data matrix and a label matrix. Since label information is essential to supervised learning, there must be an active party with access to the labels y, and that party naturally acts as the leading server in federated learning. The host is regarded as the data holder and has only a data matrix; it plays the role of a client in federated learning, making predictions for its own customers. In the invention, enterprise A is the data applier, enterprise B is the data holder, and M is the logistic regression algorithm.
When applying machine learning algorithms, gradient descent is typically employed for training. The common gradient descent method comes in three forms: batch gradient descent (BGD), stochastic gradient descent (SGD), and mini-batch gradient descent (MBGD). The invention adopts MBGD because it trains faster while still ensuring the accuracy of the final parameters, making it a compromise between BGD and SGD.
MBGD updates the parameters with a small subset of samples (the batch size) at a time. Thus a batch size of 1 yields SGD and a batch size of m yields BGD. The batch size is typically set to a power of 2, usually 2, 4, 8, 16, 32, 64, 128, 256, or 512 (rarely greater than 512); powers of 2 are more favorable for GPU acceleration. Assuming a batch size r, r ∈ [1, m], each gradient descent update becomes (the formula appears only as an equation image in the original record; the standard mini-batch update for logistic regression is shown here)

W := W − (α/r) · Σ_{j=1}^{r} (y′_j − y_j) x_j

where α is the learning rate, y′_j is the predicted value, and x_j is the feature vector of sample j.
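The mini-batch update described above can be sketched in plain Python (plaintext only, no encryption; the data, learning rate, and batch size are illustrative):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def minibatch_gd(X, y, lr=0.5, batch_size=4, epochs=200, seed=0):
    """Mini-batch gradient descent for logistic regression.

    batch_size=1 reduces to SGD; batch_size=len(X) reduces to BGD.
    """
    rng = random.Random(seed)
    n_features = len(X[0])
    w = [0.0] * n_features
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for start in range(0, len(idx), batch_size):
            batch = idx[start:start + batch_size]
            grad = [0.0] * n_features
            for j in batch:
                # (y'_j - y_j) * x_j, accumulated over the batch
                err = sigmoid(sum(wk * xk for wk, xk in zip(w, X[j]))) - y[j]
                for k in range(n_features):
                    grad[k] += err * X[j][k]
            for k in range(n_features):
                w[k] -= lr * grad[k] / len(batch)
    return w

# Toy separable data: first column is a bias term, label follows the sign
# of the second feature.
X = [[1.0, x] for x in (-2, -1.5, -1, -0.5, 0.5, 1, 1.5, 2)]
y = [0, 0, 0, 0, 1, 1, 1, 1]
w = minibatch_gd(X, y)
preds = [1 if sigmoid(w[0] * a + w[1] * b) > 0.5 else 0 for a, b in X]
assert preds == y
```

Setting `batch_size` to 1 or to `len(X)` recovers the SGD and BGD special cases discussed above.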
In order to train a homomorphic-encryption-based logistic regression federated model, the guest's label matrix y ∈ R^{n×1} and partial feature matrix must be protected, as must the host's partial feature matrix and the model parameters initialized by guest and host (the exact expressions appear only as equation images in the original record).
The logistic regression federated training algorithm (SecureLR) is correct under the security definition. The main steps of the proposed federated logistic regression training process are shown in the flow chart of FIG. 1. In practice, when training a model on a data set, one usually iterates many times until a maximum number of iterations is reached or some convergence condition is met. Note that during model training some intermediate data is exchanged, but the raw data listed in FIG. 1 is never transferred between the two parties. The training steps involving intermediate data exchange, which could lead to potential data leakage, are as follows: first, the host sends E(W_B X_B) to the guest; the guest performs the intermediate computations (shown only as equation images in the original record) and then computes E(Δy) = E(y − y′). At this point, the guest can update its own parameter value W′_A.
The guest must encrypt the loss value before transmitting it to the host, so that the host does not learn the loss value. Using the homomorphism of the encryption system, the host computes its encrypted gradient value E(L_B) and then locally updates its parameter value E(W′_B).
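How the host can form an encrypted gradient from encrypted residuals without ever decrypting them can be sketched with a toy Paillier instance. The integer-scaled values and tiny primes below are purely illustrative; the patent does not fix a concrete cryptosystem.

```python
import math
import random

# Minimal Paillier with toy parameters, to show homomorphic gradient aggregation.
p, q = 101, 103
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(n + 1, m % n, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

# Host-side data: one integer-scaled feature column and the encrypted
# residuals E(d_j) received from the guest (d_j known only to the guest).
x_B = [3, 1, 4]
d = [2, 5, 1]
enc_d = [enc(v) for v in d]

# E(sum_j d_j * x_j): scalar multiplication is ciphertext exponentiation,
# addition is ciphertext multiplication.
enc_grad = 1
for c, x in zip(enc_d, x_B):
    enc_grad = enc_grad * pow(c, x, n2) % n2

assert dec(enc_grad) == sum(dj * xj for dj, xj in zip(d, x_B))  # 15
```

Because scalar multiplication and addition both stay inside the ciphertext space, the host obtains E(L_B) without learning the residuals, matching the paragraph above.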
Finally, the guest tracks the loss value until the maximum number of iterations is reached or some convergence condition is met, and then sends a signal isstop = 1 to the host; the host must ask the guest to help decrypt its gradient value before returning its parameter value. Otherwise isstop = 0, and guest and host each locally update their weight parameters with their respective gradient values and continue iterating. The guest decides whether to terminate based on the iteration error and the number of iterations.
The invention provides a decentralized, parallel, distributed logistic regression method for vertical federated learning, called SecureLR, in which enterprise A holds partial feature values X_A ∈ R^{1×a} and the class labels Y ∈ R^{n×1}, enterprise B holds partial feature values X_B ∈ R^{1×b}, and X_A ∩ X_B is empty.
The main reasons behind this design are twofold. First, it is inherently difficult to find an authoritative third party trusted by both parties. Second, involving a third party C in addition to enterprise A and enterprise B increases the risk of data leakage. Removing the coordinator C greatly reduces the complexity of the system and lowers the cost for any two parties to establish a joint model.
First, an architecture without a third-party coordinator is employed, which greatly simplifies system deployment. Second, the loss function is kept exact rather than approximated with a polynomial; prior schemes approximate the sigmoid operator with a Taylor expansion and then invoke the corresponding cryptographic or secure multi-party computation (MPC) primitives to bypass complex operations. Third, the goal is to process very large data sets, so the solution is designed to be parallel, distributed, and scalable. Fourth, the intermediate gradient values are preserved.
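The accuracy benefit of keeping the sigmoid exact can be seen by comparing it with a low-degree Taylor polynomial of the kind polynomial-approximation schemes substitute. The specific degree-3 polynomial here is a common choice in the literature, not taken from the patent.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_taylor(z):
    # Degree-3 Taylor expansion of sigmoid around 0: only additions and
    # multiplications, hence friendly to homomorphic encryption.
    return 0.5 + z / 4.0 - z ** 3 / 48.0

# The approximation is tight near zero but degrades as |z| grows,
# which biases gradients for confidently classified samples.
assert abs(sigmoid(0.25) - sigmoid_taylor(0.25)) < 1e-3
assert abs(sigmoid(4.0) - sigmoid_taylor(4.0)) > 0.1
```

Avoiding this substitution is why the scheme claims an unbiased final result at the cost of more careful protocol design.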
The above description is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any variations or substitutions conceivable to a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.

Claims (7)

1. A secure federated learning logistic regression algorithm, characterized by comprising vertical federated learning and comprising the following steps:
step 1: the host computes E(W_B X_B) and sends it to the guest;
step 2: the guest performs three intermediate computations and a fourth computation whose result is sent to the host (the formulas appear only as equation images in the original record), then computes the gradient value L_A and updates its local model parameter W′_A;
step 3: if isstop ≠ 0, the host computes the gradient value L_B, updates the model parameter E(W′_B), and repeats from step 1; if isstop = 0, go to step 4;
step 4: the guest outputs W_A; the host selects a random vector, adds it to W′_B as a perturbation, and sends the result to the guest;
step 5: the guest helps decrypt the perturbed W′_B and sends it to the host;
step 6: the host outputs the model parameters W_B.
2. The secure federated learning logistic regression algorithm of claim 1, wherein the data applier (guest) and the data holder (host) hold feature matrices X_A and X_B respectively (the full expressions appear only as equation images in the original record); the guest holds the label matrix y ∈ R^{n×1}, wherein y_j ∈ R^{1×1}, i ∈ {A, B}, j ∈ [1, n], each record being an instance belonging to some user, and F_i denotes the feature set of the corresponding feature matrix X_i.
3. The secure federated learning logistic regression algorithm of claim 1, wherein, in applying the machine learning algorithm, a mini-batch gradient descent method is used to train the adopted algorithm.
4. The secure federated learning logistic regression algorithm of claim 1, wherein, to train a logistic regression federated model based on homomorphic encryption, the guest's label matrix y ∈ R^{n×1} and partial feature matrix must be protected, as must the host's partial feature matrix and the model parameters initialized by guest and host (the exact expressions appear only as equation images in the original record).
5. The secure federated learning logistic regression algorithm of claim 1, wherein the guest holds both a data matrix and a label matrix; as the active party with access to the labels y, the guest naturally plays the role of the leading server in federated learning, while the host, regarded as the data holder, holds only a data matrix and plays the role of a client in federated learning, making predictions for its own customers.
6. The secure federated learning logistic regression algorithm of claim 3, wherein the mini-batch gradient descent method updates the parameters with a small subset of samples (the batch size) at a time.
7. The secure federated learning logistic regression algorithm of claim 6, wherein a batch size of 1 yields SGD and a batch size of m yields BGD, the batch size usually being set to a power of 2, typically 2, 4, 8, 16, 32, 64, 128, 256, or 512.
CN202110002749.XA 2021-01-04 2021-01-04 Safe federal learning logistic regression algorithm Pending CN112613618A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110002749.XA CN112613618A (en) 2021-01-04 2021-01-04 Safe federal learning logistic regression algorithm


Publications (1)

Publication Number Publication Date
CN112613618A true CN112613618A (en) 2021-04-06

Family

ID=75253982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110002749.XA Pending CN112613618A (en) 2021-01-04 2021-01-04 Safe federal learning logistic regression algorithm

Country Status (1)

Country Link
CN (1) CN112613618A (en)


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113434898A (en) * 2021-05-22 2021-09-24 西安电子科技大学 Non-interactive privacy protection logistic regression federal training method and system
CN113505894A (en) * 2021-06-02 2021-10-15 北京航空航天大学 Longitudinal federated learning linear regression and logistic regression model training method and device
CN113505894B (en) * 2021-06-02 2023-12-15 北京航空航天大学 Longitudinal federal learning linear regression and logistic regression model training method and device
CN113268760A (en) * 2021-07-19 2021-08-17 浙江数秦科技有限公司 Distributed data fusion platform based on block chain
CN113268760B (en) * 2021-07-19 2021-11-02 浙江数秦科技有限公司 Distributed data fusion platform based on block chain
CN113689003A (en) * 2021-08-10 2021-11-23 华东师范大学 Safe mixed federal learning framework and method for removing third party
CN113689003B (en) * 2021-08-10 2024-03-22 华东师范大学 Mixed federal learning framework and method for safely removing third party
CN114004363A (en) * 2021-10-27 2022-02-01 支付宝(杭州)信息技术有限公司 Method, device and system for jointly updating model
CN114004363B (en) * 2021-10-27 2024-05-31 支付宝(杭州)信息技术有限公司 Method, device and system for jointly updating model
CN114186263A (en) * 2021-12-17 2022-03-15 大连理工大学 Data regression method based on longitudinal federal learning and electronic device
CN114186263B (en) * 2021-12-17 2024-05-03 大连理工大学 Data regression method based on longitudinal federal learning and electronic device

Similar Documents

Publication Publication Date Title
CN112733967B (en) Model training method, device, equipment and storage medium for federal learning
CN112613618A (en) Safe federal learning logistic regression algorithm
Liu et al. Vertical Federated Learning: Concepts, Advances, and Challenges
Zhang et al. Additively homomorphical encryption based deep neural network for asymmetrically collaborative machine learning
CN114696990B (en) Multi-party computing method, system and related equipment based on fully homomorphic encryption
Kang et al. Privacy-preserving federated adversarial domain adaptation over feature groups for interpretability
CN113505882A (en) Data processing method based on federal neural network model, related equipment and medium
Baryalai et al. Towards privacy-preserving classification in neural networks
CN114547643A (en) Linear regression longitudinal federated learning method based on homomorphic encryption
CN113051586B (en) Federal modeling system and method, federal model prediction method, medium, and device
He et al. Secure logistic regression for vertical federated learning
CN112989399A (en) Data processing system and method
CN112818369A (en) Combined modeling method and device
CN114564641A (en) Personalized multi-view federal recommendation system
CN116186780A (en) Privacy protection method and system based on noise disturbance in collaborative learning scene
CN112507372B (en) Method and device for realizing privacy protection of multi-party collaborative update model
CN113051608A (en) Method for transmitting virtualized sharing model for federated learning
Wang et al. Efficient and secure pedestrian detection in intelligent vehicles based on federated learning
CN117034307A (en) Data encryption method, device, computer equipment and storage medium
Yang et al. Federated Transfer Learning
CN115423208A (en) Electronic insurance value prediction method and device based on privacy calculation
CN115130568A (en) Longitudinal federated Softmax regression method and system supporting multiple parties
CN113887740A (en) Method, device and system for jointly updating model
CN114547684A (en) Method and device for protecting multi-party joint training tree model of private data
Jain et al. Design of Advanced Privacy Preserving Model for Protecting Privacy within a Fog Computing Scenario

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210406