WO2022257730A1 - Method, apparatus and system for privacy-preserving multi-party collaborative model updating (实现隐私保护的多方协同更新模型的方法、装置及系统) - Google Patents

Method, apparatus and system for privacy-preserving multi-party collaborative model updating

Info

Publication number
WO2022257730A1
Authority
WO
WIPO (PCT)
Prior art keywords
gradient vector
participant
server
value
current
Prior art date
Application number
PCT/CN2022/094020
Other languages
English (en)
French (fr)
Inventor
吕灵娟
Original Assignee
支付宝(杭州)信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 支付宝(杭州)信息技术有限公司 filed Critical 支付宝(杭州)信息技术有限公司
Publication of WO2022257730A1
Priority to US18/535,061 (published as US20240112091A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 7/00 Computing arrangements based on specific mathematical models
    • G06N 7/01 Probabilistic graphical models, e.g. probabilistic networks

Definitions

  • One or more embodiments of this specification relate to the field of computer technology, and in particular to a method, apparatus and system for privacy-preserving multi-party collaborative model updating.
  • Federated learning (also called joint learning) has revolutionized traditional centralized machine learning, allowing participants to collaboratively build more accurate models without uploading their local data.
  • Federated learning is usually realized by sharing model parameters or gradients among the participants.
  • Because model parameters and gradients are typically high-dimensional private data, traditional federated learning is accompanied, to some extent, by problems such as high communication overhead and privacy leakage.
  • One or more embodiments of this specification describe a method, apparatus and system for privacy-preserving multi-party collaborative model updating, which can effectively reduce the communication resource consumption caused by multi-party collaborative modeling while also protecting privacy.
  • In a first aspect, a method for privacy-preserving multi-party collaborative model updating is provided, including: each participant i determines a corresponding local gradient vector according to its local sample set and the current model parameters; each participant i uses a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector; each participant i sends its determined perturbation gradient vector to the server; the server aggregates the n perturbation gradient vectors sent by the n participants and, according to the sign of each element in the current aggregation result, represents each element in binarized form to obtain a target gradient vector; each participant i receives the target gradient vector from the server and updates the current model parameters according to it for the next round of iteration; after the multiple rounds of iteration, each participant i takes the current model parameters it has obtained as the business prediction model it has collaboratively updated with the other participants.
  • In a second aspect, a method for privacy-preserving multi-party collaborative model updating is provided, including: determining a corresponding local gradient vector according to a local sample set and current model parameters; using a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector; sending the perturbation gradient vector to the server; receiving a target gradient vector from the server, where the target gradient vector is obtained by the server aggregating the n perturbation gradient vectors sent by the n participants and then binarizing each element according to its sign in the current aggregation result; updating the current model parameters according to the target gradient vector for the next round of iteration; and, after the multiple rounds of iteration, taking the obtained current model parameters as the business prediction model collaboratively updated with the other participants.
  • In a third aspect, a system for privacy-preserving multi-party collaborative model updating is provided, including: each participant i, configured to determine a corresponding local gradient vector according to its local sample set and the current model parameters; each participant i, further configured to use a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector; each participant i, further configured to send its determined perturbation gradient vector to the server; the server, configured to aggregate the n perturbation gradient vectors sent by the n participants and, according to the sign of each element in the current aggregation result, represent each element in binarized form to obtain a target gradient vector; each participant i, further configured to receive the target gradient vector from the server and update the current model parameters according to it for the next round of iteration; and each participant i, further configured to take, after the multiple rounds of iteration, the current model parameters it has obtained as the business prediction model it has collaboratively updated with the other participants.
  • In a fourth aspect, an apparatus for privacy-preserving multi-party collaborative model updating is provided, including: a determination unit, configured to determine a corresponding local gradient vector according to a local sample set and current model parameters; a processing unit, configured to use a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector; a sending unit, configured to send the perturbation gradient vector to the server; a receiving unit, configured to receive a target gradient vector from the server, where the target gradient vector is obtained by the server aggregating the n perturbation gradient vectors sent by the n participants and then binarizing each element according to its sign in the current aggregation result; and an update unit, configured to update the current model parameters according to the target gradient vector for the next round of iteration; the determination unit is further configured to take, after the multiple rounds of iteration, the obtained current model parameters as the business prediction model collaboratively updated with the other participants.
  • In a fifth aspect, a computer storage medium is provided, on which a computer program is stored; when the computer program is executed in a computer, the computer is caused to perform the method of the first or second aspect.
  • In a sixth aspect, a computing device is provided, including a memory and a processor, where executable code is stored in the memory and the processor, when executing the executable code, implements the method of the first or second aspect.
  • Each participant only sends a perturbation gradient vector to the server; because the perturbation gradient vector is obtained by perturbing the original local gradient vector with a randomization algorithm that satisfies differential privacy, the scheme can balance the utility of each participant's data with its privacy protection.
  • The server only sends each participant the binarized representation of the elements of the current aggregation result, which avoids the communication cost that traditional techniques incur by sending high-dimensional model parameters or gradients to each participant.
  • Fig. 1 is a schematic diagram of federated learning based on centralized differential privacy.
  • Fig. 2 is a schematic diagram of federated learning based on local differential privacy.
  • Fig. 3 is a schematic diagram of an implementation scenario of an embodiment provided in this specification.
  • Fig. 4 is an interaction diagram of a method for privacy-preserving multi-party collaborative model updating provided by an embodiment of this specification.
  • Fig. 5 is a schematic diagram of a system for privacy-preserving multi-party collaborative model updating provided by an embodiment of this specification.
  • Fig. 6 is a schematic diagram of an apparatus for privacy-preserving multi-party collaborative model updating provided by an embodiment of this specification.
  • FIG. 1 is a schematic diagram of federated learning based on centralized differential privacy.
  • First, each participant uploads its own model gradient Δw1, Δw2, ..., Δwn to a trusted third-party server (hereinafter, the server).
  • The server aggregates the uploaded model gradients, aggregate(Δw1+Δw2+...+Δwn), adds noise to the aggregate through a differential privacy mechanism M, M(aggregate(...)), and finally sends the noised model gradient w' to all participants so that each can update its local model.
  • FIG. 2 is a schematic diagram of federated learning based on local differential privacy.
  • Each participant first applies local differential privacy to its own model gradient through a differential privacy mechanism M, and then uploads the locally privatized gradients M(Δw1), M(Δw2), ..., M(Δwn) to the server.
  • The server aggregates the locally privatized model gradients, aggregate(M(Δw1)+M(Δw2)+...+M(Δwn)), and sends the aggregated model gradient w' to each participant, upon which each participant updates its local model.
  • This scheme likewise incurs a large communication overhead.
  • This application therefore proposes a privacy-preserving method for multi-party collaborative model construction in which the server and the participants interact twice per round.
  • In one interaction, each participant uploads to the server a perturbation gradient vector obtained by perturbing its local gradient vector, which realizes local differential privacy (LDP) processing of each participant's local gradient vector.
  • In the other interaction, the server sends each participant the binarized representation of the elements in the aggregation result of the n perturbation gradient vectors.
  • The data volumes of the perturbation gradient vectors and of the binarized element representations are far smaller than those of high-precision real model gradients, so the solution of this application can effectively reduce the communication resource consumption caused by multi-party collaborative modeling.
  • Fig. 3 is a schematic diagram of an implementation scenario of an embodiment provided in this specification.
  • The multi-party collaborative model updating scenario involves a server and n participants, where n is a positive integer.
  • Each participant can be implemented as any device, platform, server or device cluster with computing and processing capabilities.
  • The participants may be institutions holding sample sets of different sizes.
  • The model here may be a business prediction model for performing prediction tasks on business objects.
  • The business objects may be, for example, pictures, audio or text.
  • Each participant maintains the same current model parameters w[t] locally and holds its own distinct local sample set D_i.
  • The method of this application includes multiple rounds of iteration. In the t-th round, each participant i determines the corresponding local gradient vector g_i according to its local sample set D_i and the current model parameters w[t], and uses a randomization algorithm satisfying differential privacy to randomly binarize each element of g_i, obtaining the perturbation gradient vector g'_i. Each participant i then sends its perturbation gradient vector g'_i to the server.
  • The server aggregates the n perturbation gradient vectors sent by the n participants and, according to the sign of each element in the current aggregation result, represents each element in binarized form to obtain the target gradient vector G.
  • Each participant i receives the target gradient vector G from the server and updates the current model parameters w[t] according to it, obtaining w[t+1] for the next round of iteration.
  • After the multiple rounds of iteration, each participant i takes the current model parameters it has obtained as the business prediction model it has collaboratively updated with the other participants.
  • Taking the implementation scenario shown in Fig. 3 as an example, the method for privacy-preserving multi-party collaborative model updating provided in this specification is described below.
  • Fig. 4 is an interaction diagram of a method for privacy-preserving multi-party collaborative model updating provided by an embodiment of this specification. Note that the method involves multiple rounds of iteration.
  • Fig. 4 shows the interaction steps included in the t-th round of iteration (t is a positive integer). Because every participant's interaction with the server in the t-th round is similar, Fig. 4 mainly shows the interaction steps between the server and one arbitrary participant of that round (called the first participant for convenience of description); the interaction steps between the server and the other participants can be understood by reference to those of the first participant.
  • As shown in Fig. 4, the method may include the following steps. Step 402: each participant i determines a corresponding local gradient vector according to its local sample set and the current model parameters.
  • Taking any participant as an example, the samples in the local sample set it maintains may include any of the following: pictures, text, audio, etc.
  • The above current model parameters may be the model parameters of a neural network model.
  • The above current model parameters may be initialized by the server before the multiple rounds of iteration begin, with the initialized model parameters then delivered or provided to each participant.
  • Each participant can use the initialized model parameters as its current model parameters.
  • Alternatively, the participants may first agree on the model structure (e.g., which model to use, the number of layers, the number of neurons per layer, etc.) and then perform the same initialization to obtain their respective current model parameters.
  • When the t-th round is not the first round, the above current model parameters may be those updated in the (t-1)-th round of iteration.
  • For example, a prediction result can first be determined from the local sample set and the current model parameters, a prediction loss can then be determined from the prediction result and the sample labels, and finally the local gradient vector corresponding to the current model parameters can be determined from the prediction loss using backpropagation.
  • Step 404: each participant i uses a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector.
  • The random binarization in step 404 aims to randomly convert the value of each element of the local gradient vector to -1 or 1 under the requirements of differential privacy.
  • The randomization algorithm can be implemented in multiple ways. In several embodiments, for any particular element, the greater its value, the greater the probability that it is converted to 1; the smaller its value, the greater the probability that it is converted to -1.
  • The perturbation gradient vector described in the embodiments of this specification is thus only a low-precision vector (containing only -1 and 1) that reflects the overall characteristics of the local gradient vector, and the communication resources it occupies during transmission are far less than those of the high-precision local gradient vector.
  • Take any element of the local gradient vector, called the first element for simplicity.
  • The random binarization of the first element in step 404 includes converting the value of the first element to 1 with a first probability (Pr) and to -1 with a second probability (1-Pr), where the first probability is positively correlated with the value of the first element.
  • The method for determining the first probability may include: adding a noise value to the value of the first element, and determining the first probability from the value of the first element after the noise value is added, using the cumulative distribution function of the Gaussian distribution.
  • In one example, the above random binarization can be expressed as: $g'^{(t)}_{i,j} = 1$ with probability $\Phi\big(g^{(t)}_{i,j} + Z\big)$, and $-1$ otherwise, where t denotes the t-th round of iteration, i denotes participant i, j denotes the j-th vector element, $g^{(t)}_{i,j}$ and $g'^{(t)}_{i,j}$ denote the j-th elements of participant i's round-t local and perturbation gradient vectors respectively, Z denotes the noise value, and Φ(·) is the cumulative distribution function of the Gaussian distribution.
  • The noise value in the embodiments of this specification may be randomly sampled from a Gaussian distribution with expected value 0 and variance σ².
  • σ is determined at least from the product of the global sensitivity of the local gradient vector and the ratio of the two differential privacy parameters.
  • The global sensitivity can refer to a parameter related to the data distribution and complexity of the local sample set.
  • The above two differential privacy parameters are the privacy budget ε and the relaxation term δ of the (ε, δ)-differential privacy algorithm (i.e., the probability of exposing real private data).
  • In one example, σ can take the standard Gaussian-mechanism form $\sigma = \Delta\sqrt{2\ln(1.25/\delta)}/\varepsilon$, where σ denotes the standard deviation of the Gaussian distribution, Δ denotes the global sensitivity of the local gradient vector, and ε and δ denote the two privacy parameters of the (ε, δ)-differential privacy algorithm.
  • The value range of ε may be greater than or equal to 0, and the value range of δ may be [0, 1].
  • Alternatively, σ can be set according to the following constraint: the third probability, calculated with the cumulative distribution function for a function maximum boundary value determined at least from σ, is close to the fourth probability, calculated with the cumulative distribution function for a function minimum boundary value determined at least from σ.
  • The function maximum boundary value may be the difference between a first ratio, determined from the global sensitivity and σ, and a second ratio, determined from the product of the privacy budget ε and σ together with the global sensitivity.
  • The function minimum boundary value may be the difference between the negative of the first ratio and the second ratio.
  • In one example, this constraint can be written as $\Phi\big(\tfrac{\Delta}{2\sigma}-\tfrac{\varepsilon\sigma}{\Delta}\big) - e^{\varepsilon}\,\Phi\big(-\tfrac{\Delta}{2\sigma}-\tfrac{\varepsilon\sigma}{\Delta}\big) \le \delta$, where the upper boundary value is the difference between the first ratio Δ/(2σ) and the second ratio εσ/Δ, the lower boundary value is the difference between the negative of the first ratio and the second ratio, and Φ(·) is the cumulative distribution function of the Gaussian distribution.
  • Other forms of the first and second ratios can also be used to obtain other constraints; it suffices that noise values sampled from the Gaussian distribution defined by a σ satisfying the constraint meet the requirements of differential privacy.
  • Step 406: each participant i sends its determined perturbation gradient vector to the server.
  • Because the perturbation gradient vector of each participant i is obtained with a randomization algorithm that satisfies differential privacy, it both protects each participant's data privacy and preserves a degree of utility.
  • Moreover, because the perturbation gradient vector is a low-precision vector reflecting the overall characteristics of the local gradient vector, communication resources can be greatly saved.
  • Step 408: the server aggregates the n perturbation gradient vectors sent by the n participants and, according to the sign of each element in the current aggregation result, represents each element in binarized form to obtain a target gradient vector.
  • For example, the server can average, or take a weighted average of, the n perturbation gradient vectors to obtain the current aggregation result.
  • In one example, a sign function can be used to binarize each element directly according to its sign in the current aggregation result, yielding the target gradient vector. Because the target gradient vector here can be a low-precision vector (containing only -1 and 1) reflecting the overall characteristics of the current aggregation result, it usually occupies few communication resources during transmission.
  • In one example, this binarization can be expressed as $G^{(t)} = \operatorname{sign}\big(\tfrac{1}{n}\sum_{i=1}^{n} g'^{(t)}_{i}\big)$, where t denotes the t-th round of iteration, n denotes the number of participants, sign(·) denotes the sign function, and $G^{(t)}$ denotes the round-t target gradient vector.
  • In another example, the current error compensation vector may first be superimposed on the current aggregation result to obtain a superposition result, and a sign function is then used to binarize each element according to its sign in the superposition result, i.e., $G^{(t)} = \operatorname{sign}\big(\tfrac{1}{n}\sum_{i=1}^{n} g'^{(t)}_{i} + e^{(t)}\big)$.
  • The above current error compensation vector is obtained by superimposing, on the previous round's error compensation vector, the difference between the previous round's aggregation result and the binarized representation corresponding to the previous round's aggregation result (i.e., the previous round's target gradient vector).
  • $e^{(t)}$ denotes the round-t error compensation vector, which can be computed as $e^{(t)} = \lambda\, e^{(t-1)} + \big(\tfrac{1}{n}\sum_{i=1}^{n} g'^{(t-1)}_{i} - G^{(t-1)}\big)$, where t and t-1 denote the t-th and (t-1)-th rounds of iteration, n denotes the number of participants, $e^{(t-1)}$ denotes the round-(t-1) error compensation vector, $G^{(t-1)}$ denotes the binarized representation corresponding to the round-(t-1) aggregation result (i.e., the round-(t-1) target gradient vector), and λ denotes the error decay rate.
  • Step 410: each participant i receives the target gradient vector from the server and updates the current model parameters according to it for the next round of iteration.
  • For example, the updated current model parameters can be obtained by subtracting from the current model parameters the product of the target gradient vector and the learning step size.
  • steps 402 to 410 are repeated multiple times, so that multiple rounds of iterative updating of the current model parameters maintained by each participant can be realized.
  • the current model parameters used in each iteration are the updated model parameters in the previous round.
  • the termination condition of the iteration may be that the number of iterations reaches a predetermined number of rounds or the model parameters converge.
  • After the multiple rounds of iteration, each participant i takes the current model parameters it has obtained as the business prediction model it has collaboratively updated with the other participants.
  • If the samples in a participant's local sample set are pictures, the business prediction model it collaboratively updates with the other participants can be a picture recognition model.
  • If the samples are audio, the collaboratively updated business prediction model can be an audio recognition model.
  • If the samples are text, the collaboratively updated business prediction model can be a text recognition model, and so on.
  • Each participant only sends its perturbation gradient vector to the server. Since the perturbation gradient vector is obtained by perturbing the original local gradient vector with a randomization algorithm that satisfies differential privacy, this scheme can balance the utility of each participant's data with its privacy protection.
  • The server only sends each participant the binarized representation of the elements of the current aggregation result. Since the data volumes of the perturbation gradient vector and of the binarized representation are far smaller than those of high-precision real model gradients, the solution of this application can effectively reduce the communication resource consumption caused by multi-party collaborative modeling.
  • Corresponding to the above method, an embodiment of this specification further provides a system for privacy-preserving multi-party collaborative model updating; as shown in Fig. 5, the system includes a server 502 and n participants 504.
  • Each participant 504 is configured to determine the corresponding local gradient vector according to its local sample set and the current model parameters.
  • Each participant 504 is further configured to use a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector.
  • The above local gradient vector includes a first element; each participant 504 is specifically configured to: determine a first probability according to the value of the first element, the first probability being positively correlated with the value of the first element; and convert the value of the first element to 1 with the first probability and to -1 with a second probability, where the sum of the first and second probabilities is 1.
  • Each participant 504 is further specifically configured to: add a noise value to the value of the first element, and determine the first probability from the noised value using the cumulative distribution function of the Gaussian distribution.
  • The aforementioned noise value may be randomly sampled from a Gaussian distribution with expected value 0 and variance σ², where σ is determined at least from the product of the global sensitivity and the ratio of the two differential privacy parameters.
  • The two differential privacy parameters here are the privacy budget ε and the relaxation term δ.
  • Alternatively, the above noise value is randomly sampled from a Gaussian distribution with expected value 0 and variance σ², where σ satisfies the following constraint:
  • the third probability, calculated with the cumulative distribution function for a function maximum boundary value determined at least from σ, is close to the fourth probability, calculated with the cumulative distribution function for a function minimum boundary value determined at least from σ.
  • The function maximum boundary value is the difference between a first ratio, determined from the global sensitivity and σ, and a second ratio, determined from the product of the privacy budget ε and σ together with the global sensitivity.
  • The function minimum boundary value is the difference between the negative of the first ratio and the second ratio.
  • Each participant 504 is further configured to send its determined perturbation gradient vector to the server.
  • The server 502 is configured to aggregate the n perturbation gradient vectors sent by the n participants and, according to the sign of each element in the current aggregation result, represent each element in binarized form to obtain a target gradient vector.
  • The server 502 is specifically configured to: binarize each element with a sign function according to its sign in the current aggregation result.
  • The server 502 is further specifically configured to: superimpose the current error compensation vector on the current aggregation result to obtain a superposition result, and binarize each element with a sign function according to its sign in the superposition result; the current error compensation vector is obtained by superimposing, on the previous round's error compensation vector, the difference between the previous round's aggregation result and its corresponding binarized representation.
  • Each participant 504 is further configured to receive the target gradient vector from the server 502 and update the current model parameters according to it for the next round of iteration.
  • Each participant 504 is further configured to take, after multiple rounds of iteration, the current model parameters it has obtained as the business prediction model it has collaboratively updated with the other participants.
  • If the samples in the local sample set of any participant i are pictures, the business prediction model it collaboratively updates with the other participants is a picture recognition model; if the samples are audio, the collaboratively updated business prediction model is an audio recognition model; or, if the samples are text, the collaboratively updated business prediction model is a text recognition model.
  • The system for privacy-preserving multi-party collaborative model updating provided by an embodiment of this specification can effectively reduce the communication resource consumption caused by multi-party collaborative modeling while also protecting privacy.
  • Corresponding to the above method, an embodiment of this specification further provides an apparatus for privacy-preserving multi-party collaborative model updating.
  • The multiple parties here include the server and n participants.
  • The apparatus is deployed at any participant i among the n participants and executes multiple rounds of iteration. As shown in Fig. 6, the apparatus executes any t-th round of iteration through the following units: a determination unit 602, configured to determine a corresponding local gradient vector according to the local sample set and the current model parameters.
  • The processing unit 604 is configured to use a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector.
  • The sending unit 606 is configured to send the perturbation gradient vector to the server.
  • The receiving unit 608 is configured to receive a target gradient vector from the server, where the target gradient vector is obtained by the server aggregating the n perturbation gradient vectors sent by the n participants and then binarizing each element according to its sign in the current aggregation result.
  • The update unit 610 is configured to update the current model parameters according to the target gradient vector for the next round of iteration.
  • The determination unit 602 is further configured to take, after the multiple rounds of iteration, the current model parameters it has obtained as the business prediction model collaboratively updated with the other participants.
  • The apparatus for privacy-preserving multi-party collaborative model updating provided by an embodiment of this specification can effectively reduce the communication resource consumption caused by multi-party collaborative modeling while also protecting privacy.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed in a computer, the computer is instructed to execute the method described in conjunction with FIG. 4 .
  • A computing device is provided, including a memory and a processor, where executable code is stored in the memory and the processor, when executing the executable code, implements the method described with reference to Fig. 4.
  • each embodiment in this specification is described in a progressive manner, the same and similar parts of each embodiment can be referred to each other, and each embodiment focuses on the differences from other embodiments.
  • the description is relatively simple, and for relevant parts, please refer to part of the description of the method embodiment.
  • the steps of the methods or algorithms described in conjunction with the disclosure of this specification can be implemented in the form of hardware, or can be implemented in the form of a processor executing software instructions.
  • The software instructions can be composed of corresponding software modules, and the software modules can be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may also be a component of the processor.
  • the processor and storage medium can be located in the ASIC.
  • the ASIC may be located in the server.
  • the processor and the storage medium can also exist in the server as discrete components.
  • the functions described in the present invention may be implemented by hardware, software, firmware or any combination thereof.
  • the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
  • Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another.
  • A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Bioethics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Databases & Information Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method and apparatus for privacy-preserving multi-party collaborative model updating. In the collaborative updating method, each participant i determines a corresponding local gradient vector according to its local sample set and the current model parameters (S402), and uses a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector (S404). Each participant i sends its determined perturbation gradient vector to the server (S406). The server aggregates the n perturbation gradient vectors and, according to the sign of each element in the current aggregation result, represents each element in binarized form to obtain a target gradient vector (S408). Each participant i receives the target gradient vector from the server and updates the current model parameters according to it for the next round of iteration (S410). After multiple rounds of iteration, each participant i takes the current model parameters it has obtained as the business prediction model it has collaboratively updated with the other participants.

Description

Method, apparatus and system for privacy-preserving multi-party collaborative model updating
Technical Field
One or more embodiments of this specification relate to the field of computer technology, and in particular to a method, apparatus and system for privacy-preserving multi-party collaborative model updating.
Background
The emergence of federated learning (also called joint learning) has revolutionized traditional centralized machine learning, enabling participants to collaboratively build more accurate models without uploading their local data.
At present, federated learning is usually realized by sharing model parameters or gradients among the participants. However, because model parameters and gradients are typically high-dimensional private data, traditional federated learning is accompanied, to some extent, by problems such as high communication overhead and privacy leakage.
Summary
One or more embodiments of this specification describe a method, apparatus and system for privacy-preserving multi-party collaborative model updating, which can effectively reduce the communication resource consumption caused by multi-party collaborative modeling while also protecting privacy.
In a first aspect, a method for privacy-preserving multi-party collaborative model updating is provided, including: each participant i determines a corresponding local gradient vector according to its local sample set and the current model parameters; each participant i uses a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector; each participant i sends its determined perturbation gradient vector to the server; the server aggregates the n perturbation gradient vectors sent by the n participants and, according to the sign of each element in the current aggregation result, represents each element in binarized form to obtain a target gradient vector; each participant i receives the target gradient vector from the server and updates the current model parameters according to it for the next round of iteration; after the multiple rounds of iteration, each participant i takes the current model parameters it has obtained as the business prediction model it has collaboratively updated with the other participants.
In a second aspect, a method for privacy-preserving multi-party collaborative model updating is provided, including: determining a corresponding local gradient vector according to a local sample set and current model parameters; using a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector; sending the perturbation gradient vector to the server; receiving a target gradient vector from the server, where the target gradient vector is obtained by the server aggregating the n perturbation gradient vectors sent by the n participants and then binarizing each element according to its sign in the current aggregation result; updating the current model parameters according to the target gradient vector for the next round of iteration; and, after the multiple rounds of iteration, taking the obtained current model parameters as the business prediction model collaboratively updated with the other participants.
In a third aspect, a system for privacy-preserving multi-party collaborative model updating is provided, including: each participant i, configured to determine a corresponding local gradient vector according to its local sample set and the current model parameters; each participant i, further configured to use a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector; each participant i, further configured to send its determined perturbation gradient vector to the server; the server, configured to aggregate the n perturbation gradient vectors sent by the n participants and, according to the sign of each element in the current aggregation result, represent each element in binarized form to obtain a target gradient vector; each participant i, further configured to receive the target gradient vector from the server and update the current model parameters according to it for the next round of iteration; and each participant i, further configured to take, after the multiple rounds of iteration, the current model parameters it has obtained as the business prediction model it has collaboratively updated with the other participants.
In a fourth aspect, an apparatus for privacy-preserving multi-party collaborative model updating is provided, including: a determination unit, configured to determine a corresponding local gradient vector according to a local sample set and current model parameters; a processing unit, configured to use a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector; a sending unit, configured to send the perturbation gradient vector to the server; a receiving unit, configured to receive a target gradient vector from the server, where the target gradient vector is obtained by the server aggregating the n perturbation gradient vectors sent by the n participants and then binarizing each element according to its sign in the current aggregation result; and an update unit, configured to update the current model parameters according to the target gradient vector for the next round of iteration; the determination unit is further configured to take, after the multiple rounds of iteration, the obtained current model parameters as the business prediction model collaboratively updated with the other participants.
In a fifth aspect, a computer storage medium is provided, on which a computer program is stored; when the computer program is executed in a computer, the computer is caused to perform the method of the first or second aspect.
In a sixth aspect, a computing device is provided, including a memory and a processor, where executable code is stored in the memory and the processor, when executing the executable code, implements the method of the first or second aspect.
With the method, apparatus and system for privacy-preserving multi-party collaborative model updating provided by one or more embodiments of this specification, each participant only sends a perturbation gradient vector to the server; because the perturbation gradient vector is obtained by perturbing the original local gradient vector with a randomization algorithm satisfying differential privacy, the scheme can balance the utility of each participant's data with its privacy protection. In addition, the server only sends each participant the binarized representation of the elements of the current aggregation result, which avoids the communication cost that traditional techniques incur by sending high-dimensional model parameters or gradients to each participant.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of this specification more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Evidently, the drawings described below are only some embodiments of this specification, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of federated learning based on centralized differential privacy;
Fig. 2 is a schematic diagram of federated learning based on local differential privacy;
Fig. 3 is a schematic diagram of an implementation scenario of an embodiment provided in this specification;
Fig. 4 is an interaction diagram of a method for privacy-preserving multi-party collaborative model updating provided by an embodiment of this specification;
Fig. 5 is a schematic diagram of a system for privacy-preserving multi-party collaborative model updating provided by an embodiment of this specification;
Fig. 6 is a schematic diagram of an apparatus for privacy-preserving multi-party collaborative model updating provided by an embodiment of this specification.
Detailed Description
The solutions provided in this specification are described below with reference to the drawings.
As noted above, traditional federated learning is realized by sharing model parameters or gradients among the participants. The mainstream schemes fall into two categories: first, federated learning based on centralized differential privacy (Central Differential Privacy, CDP); second, federated learning based on local differential privacy (Local Differential Privacy, LDP). The two methods are described below with reference to the drawings.
Fig. 1 is a schematic diagram of federated learning based on centralized differential privacy. In Fig. 1, each participant first uploads its own model gradient Δw1, Δw2, ..., Δwn to a trusted third-party server (hereinafter, the server). The server then aggregates the uploaded model gradients, aggregate(Δw1+Δw2+...+Δwn), adds noise to the aggregate through a differential privacy mechanism M, M(aggregate(...)), and finally sends the noised model gradient w' to the participants so that each can update its local model. However, since trusted third parties are rare in practical scenarios and are vulnerable to eavesdropping attacks, the applicability of this method is poor. Moreover, because model gradients are usually high-dimensional, exchanging gradients between the server and the participants causes a large communication overhead.
Fig. 2 is a schematic diagram of federated learning based on local differential privacy. In Fig. 2, before uploading, each participant first applies local differential privacy to its own model gradient through a differential privacy mechanism M, and then uploads the locally privatized gradients M(Δw1), M(Δw2), ..., M(Δwn) to the server. Finally, the server aggregates the locally privatized model gradients, aggregate(M(Δw1)+M(Δw2)+...+M(Δwn)), and sends the aggregated model gradient w' to each participant, upon which each participant updates its local model. This scheme likewise causes a large communication overhead.
It can be seen that both kinds of federated learning cause large communication overheads. For this reason, this application proposes a privacy-preserving method for multi-party collaborative model construction in which the server and the participants interact twice per round. In one interaction, each participant uploads to the server a perturbation gradient vector obtained by perturbing its local gradient vector, realizing local differential privacy (LDP) processing of each participant's local gradient vector. In the other interaction, the server sends each participant the binarized representation of the elements in the aggregation result of the n perturbation gradient vectors. Since the data volumes of the perturbation gradient vectors and of the binarized element representations are far smaller than those of high-precision real model gradients, the solution of this application can effectively reduce the communication resource consumption caused by multi-party collaborative modeling.
Fig. 3 is a schematic diagram of an implementation scenario of an embodiment provided in this specification. In Fig. 3, the multi-party collaborative model updating scenario involves a server and n participants, where n is a positive integer. Each participant can be implemented as any device, platform, server or device cluster with computing and processing capabilities. In a specific example, the participants may be institutions holding sample sets of different sizes. It should be noted that the server and the participants collaboratively update each participant's local model while protecting data privacy. The model here may be a business prediction model for performing prediction tasks on business objects, where the business objects may be, for example, pictures, audio or text.
In Fig. 3, each participant locally maintains the same current model parameters w[t] and holds its own distinct local sample set D_i. Specifically, the method of this application includes multiple rounds of iteration. In the t-th round, each participant i determines the corresponding local gradient vector g_i according to its local sample set D_i and the current model parameters w[t], and uses a randomization algorithm satisfying differential privacy to randomly binarize each element of g_i, obtaining the perturbation gradient vector g'_i. Each participant i then sends its determined perturbation gradient vector g'_i to the server. The server aggregates the n perturbation gradient vectors sent by the n participants and, according to the sign of each element in the current aggregation result, represents each element in binarized form to obtain the target gradient vector G. Each participant i receives the target gradient vector G from the server and updates the current model parameters w[t] according to it, obtaining w[t+1] for the next round of iteration.
After multiple rounds of iteration, each participant i takes the current model parameters it has obtained as the business prediction model it has collaboratively updated with the other participants.
Taking the implementation scenario shown in Fig. 3 as an example, the method for privacy-preserving multi-party collaborative model updating provided in this specification is described below.
Fig. 4 is an interaction diagram of a method for privacy-preserving multi-party collaborative model updating provided by an embodiment of this specification. Note that the method involves multiple rounds of iteration; Fig. 4 shows the interaction steps of the t-th round (t is a positive integer). Because every participant's interaction with the server in the t-th round is similar, Fig. 4 mainly shows the interaction steps between the server and one arbitrary participant of that round (called the first participant for convenience of description); the interaction steps between the server and the other participants can be understood by reference to those of the first participant. It can be understood that by repeatedly executing the interaction steps shown, multiple rounds of iterative updating of the model maintained by each participant can be realized, and the model obtained in the last round of iterative updating is taken as the model each participant finally uses. As shown in Fig. 4, the method may include the following steps. Step 402: each participant i determines a corresponding local gradient vector according to its local sample set and the current model parameters.
Taking any participant as an example, the samples in the local sample set it maintains may include any of the following: pictures, text, audio, etc.
In addition, the above current model parameters may be the model parameters of a neural network model.
Note that when the t-th round is the first round, the current model parameters may have been initialized by the server before the multiple rounds of iteration begin: the server initializes the neural network model and then delivers or provides the initialized model parameters to each participant, so that each participant can use them as its current model parameters. Of course, in practical applications, the participants may instead first agree on the model structure (e.g., which model to use, the number of layers, the number of neurons per layer, etc.) and then perform the same initialization to obtain their respective current model parameters.
When the t-th round is not the first round, the current model parameters may be those updated in the (t-1)-th round of iteration.
The determination of the local gradient vector can follow existing techniques. For example, a prediction result can first be determined from the local sample set and the current model parameters; a prediction loss can then be determined from the prediction result and the sample labels; and finally the local gradient vector corresponding to the current model parameters can be determined from the prediction loss using backpropagation.
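As a concrete illustration of this step, the following sketch computes a local gradient vector with numpy, using logistic regression as a stand-in model; the function name and model form are illustrative assumptions, since the text covers arbitrary models (e.g., neural networks) trained with backpropagation.

```python
# Sketch of step 402: one participant's local gradient, using logistic
# regression as a stand-in model (the method itself covers arbitrary
# models trained with backpropagation).
import numpy as np

def local_gradient(w, X, y):
    """Gradient of the mean log-loss at parameters w.

    w: (d,) current model parameters shared by all participants;
    X: (m, d) local sample features; y: (m,) labels in {0, 1}.
    """
    p = 1.0 / (1.0 + np.exp(-X @ w))   # prediction result (forward pass)
    return X.T @ (p - y) / len(y)      # gradient of the loss (backward pass)
```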
Step 404: each participant i uses a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector.
The random binarization in step 404 aims to randomly convert the value of each element of the local gradient vector to -1 or 1 under the requirements of differential privacy. The randomization algorithm can be implemented in multiple ways. In several embodiments, for any particular element, the greater its value, the greater the probability that it is converted to 1; the smaller its value, the greater the probability that it is converted to -1.
In other words, the perturbation gradient vector described in the embodiments of this specification is only a low-precision vector (containing only -1 and 1) that reflects the overall characteristics of the local gradient vector, and the communication resources it occupies during transmission are far less than those of the high-precision local gradient vector.
Specifically, take any element of the local gradient vector, called the first element for simplicity. The random binarization of the first element in step 404 includes converting the value of the first element to 1 with a first probability (Pr) and to -1 with a second probability (1-Pr), where the first probability is positively correlated with the value of the first element.
In one example, the method for determining the first probability may include: adding a noise value to the value of the first element, and determining the first probability from the value of the first element after the noise value is added, using the cumulative distribution function of the Gaussian distribution.
In one example, the above random binarization process can be expressed as:

$$g'^{(t)}_{i,j} = \begin{cases} 1, & \text{with probability } \Phi\big(g^{(t)}_{i,j} + Z\big) \\ -1, & \text{otherwise,} \end{cases} \qquad (1)$$

where t denotes the t-th round of iteration, i denotes participant i (1 ≤ i ≤ n), and j denotes the j-th vector element; $g^{(t)}_{i,j}$ denotes the j-th element of participant i's round-t local gradient vector, Z denotes the noise value, $g'^{(t)}_{i,j}$ denotes the j-th element of participant i's round-t perturbation gradient vector, and Φ(·) is the cumulative distribution function of the Gaussian distribution.
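A minimal sketch of this randomized binarization, assuming numpy/scipy and a σ that has already been calibrated for (ε, δ)-differential privacy (the function name `perturb` and the vectorized formulation are our own, not prescribed by the text):

```python
# Sketch of step 404 / Equation (1): noise each coordinate with Gaussian
# noise Z ~ N(0, sigma^2) and map it to 1 with probability Phi(g_j + Z),
# otherwise to -1.
import numpy as np
from scipy.stats import norm

def perturb(g, sigma, rng=None):
    rng = rng or np.random.default_rng()
    z = rng.normal(0.0, sigma, size=g.shape)   # noise value Z per element
    pr_one = norm.cdf(g + z)                   # first probability Pr
    return np.where(rng.random(g.shape) < pr_one, 1.0, -1.0)
```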
Note that the noise value in the embodiments of this specification may be randomly sampled from a Gaussian distribution with expected value 0 and variance σ².
In one embodiment, σ is determined at least from the product of the global sensitivity of the local gradient vector and the ratio of the two differential privacy parameters. The global sensitivity can refer to a parameter related to the data distribution and complexity of the local sample set, and the two differential privacy parameters are the privacy budget ε and the relaxation term δ of the (ε, δ)-differential privacy algorithm (i.e., the probability of exposing real private data).
In one example, σ can take the standard Gaussian-mechanism form:

$$\sigma = \frac{\Delta\sqrt{2\ln(1.25/\delta)}}{\varepsilon}, \qquad (2)$$

where σ denotes the standard deviation of the Gaussian distribution, Δ denotes the global sensitivity of the local gradient vector, and ε and δ denote the two privacy parameters of the (ε, δ)-differential privacy algorithm. Specifically, the value range of ε may be greater than or equal to 0, and the value range of δ may be [0, 1].
In another embodiment, σ can be set according to the following constraint: the third probability, calculated with the cumulative distribution function for a function maximum boundary value determined at least from σ, is close to the fourth probability, calculated with the cumulative distribution function for a function minimum boundary value determined at least from σ.
The function maximum boundary value may be the difference between a first ratio, determined from the global sensitivity and σ, and a second ratio, determined from the product of the privacy budget ε and σ together with the global sensitivity. The function minimum boundary value may be the difference between the negative of the first ratio and the second ratio.
In a specific example, the above constraint can be expressed as:

$$\Phi\!\left(\frac{\Delta}{2\sigma}-\frac{\varepsilon\sigma}{\Delta}\right) - e^{\varepsilon}\,\Phi\!\left(-\frac{\Delta}{2\sigma}-\frac{\varepsilon\sigma}{\Delta}\right) \le \delta, \qquad (3)$$

where σ denotes the standard deviation of the Gaussian distribution, Δ denotes the global sensitivity of the local gradient vector, ε and δ denote the two privacy parameters of the (ε, δ)-differential privacy algorithm (i.e., the privacy budget ε and the relaxation term δ), and Φ(·) is the cumulative distribution function of the Gaussian distribution.
In Equation (3), the function upper boundary value is the difference between the first ratio Δ/(2σ) and the second ratio εσ/Δ, and the function lower boundary value is the difference between the negative of the first ratio and the second ratio.
Note that the smaller the privacy budget ε in Equation (3), the closer the probability calculated for the function maximum boundary value is to the probability calculated for the function minimum boundary value, and hence the higher the degree of privacy protection.
It should be understood that in other examples, other forms of the first ratio and the second ratio may be used, yielding other forms of the constraint; it suffices that noise values sampled from the Gaussian distribution defined by a σ satisfying the constraint meet the requirements of differential privacy.
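Because the bound in Equation (3) is monotone in σ, σ can also be calibrated numerically. The sketch below does so by bisection; it assumes the reconstructed analytic-Gaussian-style form of the bound given above, so it is illustrative rather than normative:

```python
# Sketch: calibrate sigma against the constraint of Equation (3) by
# bisection (larger sigma gives a smaller left-hand side).
import math

def gauss_cdf(x):
    """Standard Gaussian cumulative distribution function Phi."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def violates(sigma, eps, delta, sens):
    a = sens / (2.0 * sigma)   # first ratio, Delta / (2 sigma)
    b = eps * sigma / sens     # second ratio, eps * sigma / Delta
    return gauss_cdf(a - b) - math.exp(eps) * gauss_cdf(-a - b) > delta

def calibrate_sigma(eps, delta, sens, lo=1e-6, hi=1e6):
    for _ in range(100):       # bisection over sigma
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if violates(mid, eps, delta, sens) else (lo, mid)
    return hi                  # smallest sigma found to satisfy the bound
```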
Step 406: each participant i sends its determined perturbation gradient vector to the server.
Note that because each participant i's perturbation gradient vector is obtained with a randomization algorithm satisfying differential privacy, it both protects each participant's data privacy and preserves a degree of utility. Moreover, because the perturbation gradient vector is a low-precision vector reflecting the overall characteristics of the local gradient vector, communication resources can be greatly saved.
Step 408: the server aggregates the n perturbation gradient vectors sent by the n participants and, according to the sign of each element in the current aggregation result, represents each element in binarized form to obtain a target gradient vector.
For example, the server can average, or take a weighted average of, the n perturbation gradient vectors sent by the n participants to obtain the current aggregation result.
Regarding the binarization, in one example a sign function can be used to binarize each element directly according to its sign in the current aggregation result, yielding the target gradient vector. Because the target gradient vector here can be a low-precision vector (containing only -1 and 1) reflecting the overall characteristics of the current aggregation result, it usually also occupies few communication resources during transmission.
In one example, this binarization process can be expressed as:

$$G^{(t)} = \operatorname{sign}\!\left(\frac{1}{n}\sum_{i=1}^{n} g'^{(t)}_{i}\right), \qquad (4)$$

where t denotes the t-th round of iteration, n denotes the number of participants, $g^{(t)}_{i}$ denotes participant i's round-t local gradient vector, $g'^{(t)}_{i}$ denotes participant i's round-t perturbation gradient vector, sign(·) denotes the sign function, and $G^{(t)}$ denotes the round-t target gradient vector.
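A minimal sketch of this server-side step, assuming equal-weight averaging (elements equal to exactly zero are mapped to 1, a tie-breaking detail the text leaves open):

```python
# Sketch of step 408 / Equation (4): average the n perturbation gradient
# vectors and binarize the aggregate by element signs.
import numpy as np

def aggregate_sign(perturbed):
    """perturbed: list of n arrays of shape (d,) with values in {-1, 1}."""
    agg = np.mean(perturbed, axis=0)        # current aggregation result
    return np.where(agg >= 0, 1.0, -1.0)    # target gradient vector G
```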
In another example, the current error compensation vector may first be superimposed on the current aggregation result to obtain a superposition result, and a sign function is then used to binarize each element according to its sign in the superposition result.
Here, the current error compensation vector is obtained by superimposing, on the previous round's error compensation vector, the difference between the previous round's aggregation result and the binarized representation corresponding to that aggregation result (i.e., the previous round's target gradient vector).
In one example, this binarization process can be expressed as:

$$G^{(t)} = \operatorname{sign}\!\left(\frac{1}{n}\sum_{i=1}^{n} g'^{(t)}_{i} + e^{(t)}\right), \qquad (5)$$

where t, n, $g'^{(t)}_{i}$, sign(·) and $G^{(t)}$ have the same meanings as above, and $e^{(t)}$ denotes the round-t error compensation vector, which can be computed as:

$$e^{(t)} = \lambda\, e^{(t-1)} + \left(\frac{1}{n}\sum_{i=1}^{n} g'^{(t-1)}_{i} - G^{(t-1)}\right), \qquad (6)$$

where t and t-1 denote the t-th and (t-1)-th rounds of iteration, n denotes the number of participants, $e^{(t-1)}$ denotes the round-(t-1) error compensation vector, $\tfrac{1}{n}\sum_{i} g'^{(t-1)}_{i}$ denotes the round-(t-1) aggregation result, $G^{(t-1)}$ denotes the binarized representation corresponding to the round-(t-1) aggregation result (i.e., the round-(t-1) target gradient vector), and λ denotes the error decay rate.
In this example, superimposing the current error compensation vector on the current aggregation result compensates the aggregation result for past quantization error, which improves the accuracy of the binarized representation and hence the precision of the business prediction model being built.
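A sketch of this error-compensated variant of Equations (5) and (6), with the server keeping the error vector as state; the class structure and the placement of the decay rate λ follow the reconstruction above and are assumptions:

```python
# Sketch of the error-compensated server (Equations (5)-(6)): a running
# error compensation vector e is folded into the aggregate before signs
# are taken, then updated with the residual quantization error.
import numpy as np

class ErrorFeedbackServer:
    def __init__(self, dim, lam=0.9):
        self.e = np.zeros(dim)   # error compensation vector, e = 0 initially
        self.lam = lam           # error decay rate lambda

    def step(self, perturbed):
        agg = np.mean(perturbed, axis=0)              # aggregation result
        G = np.where(agg + self.e >= 0, 1.0, -1.0)    # Equation (5)
        self.e = self.lam * self.e + (agg - G)        # Equation (6)
        return G
```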
Step 410: each participant i receives the target gradient vector from the server and updates the current model parameters according to it for the next round of iteration.
For example, the updated current model parameters can be obtained by subtracting from the current model parameters the product of the target gradient vector and the learning step size.
Note that in the embodiments of this specification, steps 402-410 are executed repeatedly, thereby realizing multiple rounds of iterative updating of the current model parameters maintained by each participant; the current model parameters used in each round are the model parameters updated in the previous round. The termination condition of the iteration may be that the number of iterations reaches a predetermined number of rounds or that the model parameters converge.
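Tying the steps together, one simulated round of the protocol might look like the following sketch, reusing the illustrative helpers `local_gradient`, `perturb` and `aggregate_sign` defined above:

```python
# Sketch of one full round (steps 402-410) over all participants.
import numpy as np

def training_round(w, datasets, sigma, lr=0.1, rng=None):
    """datasets: list of (X_i, y_i) local sample sets, one per participant."""
    rng = rng or np.random.default_rng()
    perturbed = [perturb(local_gradient(w, X, y), sigma, rng)  # steps 402-406
                 for X, y in datasets]
    G = aggregate_sign(perturbed)                              # step 408
    return w - lr * G                                          # step 410
```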
After multiple rounds of iteration, each participant i takes the current model parameters it has obtained as the business prediction model it has collaboratively updated with the other participants.
Taking any participant i as an example: if the samples in its local sample set are pictures, the business prediction model it collaboratively updates with the other participants can be a picture recognition model; if the samples are audio, it can be an audio recognition model; if the samples are text, it can be a text recognition model; and so on.
In summary, in the embodiments of this specification, each participant only sends a perturbation gradient vector to the server; because the perturbation gradient vector is obtained by perturbing the original local gradient vector with a randomization algorithm satisfying differential privacy, the scheme can balance the utility of each participant's data with its privacy protection. In addition, the server only sends each participant the binarized representation of the elements of the current aggregation result; since the data volumes of the perturbation gradient vector and the binarized representation are far smaller than those of high-precision real model gradients, the solution of this application can effectively reduce the communication resource consumption caused by multi-party collaborative modeling.
Corresponding to the above method for privacy-preserving multi-party collaborative model updating, an embodiment of this specification further provides a system for privacy-preserving multi-party collaborative model updating. As shown in Fig. 5, the system includes a server 502 and n participants 504.
Each participant 504 is configured to determine the corresponding local gradient vector according to its local sample set and the current model parameters.
Each participant 504 is further configured to use a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector.
The local gradient vector includes a first element, and each participant 504 is specifically configured to: determine a first probability according to the value of the first element, the first probability being positively correlated with the value of the first element; and convert the value of the first element to 1 with the first probability and to -1 with a second probability, where the sum of the first and second probabilities is 1.
Each participant 504 is further specifically configured to: add a noise value to the value of the first element, and determine the first probability from the value of the first element after the noise value is added, using the cumulative distribution function of the Gaussian distribution.
In one example, the noise value may be randomly sampled from a Gaussian distribution with expected value 0 and variance σ², where σ is determined at least from the product of the global sensitivity and the ratio of the two differential privacy parameters; the two differential privacy parameters here are the privacy budget ε and the relaxation term δ.
In another example, the noise value is randomly sampled from a Gaussian distribution with expected value 0 and variance σ², where σ satisfies the following constraint:
the third probability, calculated with the cumulative distribution function for a function maximum boundary value determined at least from σ, is close to the fourth probability, calculated with the cumulative distribution function for a function minimum boundary value determined at least from σ.
The function maximum boundary value is the difference between a first ratio, determined from the global sensitivity and σ, and a second ratio, determined from the product of the privacy budget ε and σ together with the global sensitivity; the function minimum boundary value is the difference between the negative of the first ratio and the second ratio.
Each participant 504 is further configured to send its determined perturbation gradient vector to the server.
The server 502 is configured to aggregate the n perturbation gradient vectors sent by the n participants and, according to the sign of each element in the current aggregation result, represent each element in binarized form to obtain a target gradient vector.
In one example, the server 502 is specifically configured to: use a sign function to binarize each element according to its sign in the current aggregation result.
In another example, the server 502 is further specifically configured to: superimpose the current error compensation vector on the current aggregation result to obtain a superposition result, and use a sign function to binarize each element according to its sign in the superposition result; the current error compensation vector is obtained by superimposing, on the previous round's error compensation vector, the difference between the previous round's aggregation result and its corresponding binarized representation.
Each participant 504 is further configured to receive the target gradient vector from the server 502 and update the current model parameters according to it for the next round of iteration.
Each participant 504 is further configured to take, after multiple rounds of iteration, the current model parameters it has obtained as the business prediction model it has collaboratively updated with the other participants.
If the samples in the local sample set of any participant i are pictures, the business prediction model it collaboratively updates with the other participants is a picture recognition model; if the samples are audio, the collaboratively updated business prediction model is an audio recognition model; or, if the samples are text, the collaboratively updated business prediction model is a text recognition model.
The functions of the functional modules of the system of the above embodiment of this specification can be realized through the steps of the above method embodiment; the specific working process of the system provided by an embodiment of this specification is therefore not repeated here.
The system for privacy-preserving multi-party collaborative model updating provided by an embodiment of this specification can effectively reduce the communication resource consumption caused by multi-party collaborative modeling while also protecting privacy.
Corresponding to the above method for privacy-preserving multi-party collaborative model updating, an embodiment of this specification further provides an apparatus for privacy-preserving multi-party collaborative model updating. Here, the multiple parties include a server and n participants. The apparatus is deployed at any participant i among the n participants and is used to execute multiple rounds of iteration. As shown in Fig. 6, the apparatus executes any t-th round of iteration through the following units it includes: a determination unit 602, configured to determine a corresponding local gradient vector according to the local sample set and the current model parameters.
A processing unit 604, configured to use a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector.
A sending unit 606, configured to send the perturbation gradient vector to the server.
A receiving unit 608, configured to receive a target gradient vector from the server, where the target gradient vector is obtained by the server aggregating the n perturbation gradient vectors sent by the n participants and then binarizing each element according to its sign in the current aggregation result.
An update unit 610, configured to update the current model parameters according to the target gradient vector for the next round of iteration.
The determination unit 602 is further configured to take, after the multiple rounds of iteration, the obtained current model parameters as the business prediction model collaboratively updated with the other participants.
The functions of the functional modules of the apparatus of the above embodiment of this specification can be realized through the steps of the above method embodiment; the specific working process of the apparatus provided by an embodiment of this specification is therefore not repeated here.
The apparatus for privacy-preserving multi-party collaborative model updating provided by an embodiment of this specification can effectively reduce the communication resource consumption caused by multi-party collaborative modeling while also protecting privacy.
According to an embodiment of another aspect, a computer-readable storage medium is further provided, on which a computer program is stored; when the computer program is executed in a computer, the computer is caused to perform the method described with reference to Fig. 4.
According to an embodiment of yet another aspect, a computing device is further provided, including a memory and a processor, where executable code is stored in the memory and the processor, when executing the executable code, implements the method described with reference to Fig. 4.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, since the apparatus embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the corresponding parts of the method embodiment.
The steps of the methods or algorithms described in connection with the disclosure of this specification can be implemented in hardware or by a processor executing software instructions. The software instructions can be composed of corresponding software modules, and the software modules can be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be a component of the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a server. Of course, the processor and the storage medium may also exist in the server as discrete components.
Those skilled in the art should appreciate that, in one or more of the above examples, the functions described in the present invention can be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions can be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium accessible by a general-purpose or special-purpose computer.
Specific embodiments of this specification have been described above; other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the figures do not necessarily require the particular order shown, or a sequential order, to achieve the desired results; in certain implementations, multitasking and parallel processing are also possible or may be advantageous.
The specific implementations described above further explain the purpose, technical solutions and beneficial effects of this specification in detail. It should be understood that the above are only specific implementations of this specification and are not intended to limit its scope of protection; any modification, equivalent replacement, improvement, etc. made on the basis of the technical solutions of this specification shall be included within its scope of protection.

Claims (24)

  1. A method for privacy-preserving multi-party collaborative model updating, the multiple parties including a server and n participants, the method including multiple rounds of iteration, where any t-th round of iteration includes:
    each participant i determines a corresponding local gradient vector according to its local sample set and the current model parameters;
    each participant i uses a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector;
    each participant i sends its determined perturbation gradient vector to the server;
    the server aggregates the n perturbation gradient vectors sent by the n participants and, according to the sign of each element in the current aggregation result, represents each element in binarized form to obtain a target gradient vector;
    each participant i receives the target gradient vector from the server and updates the current model parameters according to the target gradient vector for the next round of iteration;
    after the multiple rounds of iteration, each participant i takes the current model parameters it has obtained as the business prediction model it has collaboratively updated with the other participants.
  2. The method according to claim 1, wherein the local gradient vector includes a first element, and each participant i's randomly binarizing each element of the local gradient vector using a randomization algorithm satisfying differential privacy includes:
    determining a first probability according to the value of the first element, the first probability being positively correlated with the value of the first element;
    converting the value of the first element to 1 with the first probability and to -1 with a second probability, where the sum of the first probability and the second probability is 1.
  3. The method according to claim 2, wherein determining the first probability according to the value of the first element includes:
    adding a noise value to the value of the first element;
    determining the first probability from the value of the first element after the noise value is added, using the cumulative distribution function of the Gaussian distribution.
  4. The method according to claim 3, wherein the noise value is randomly sampled from a Gaussian distribution with expected value 0 and variance σ², and σ is determined at least from the product of the global sensitivity and the ratio of the two differential privacy parameters.
  5. The method according to claim 4, wherein the two differential privacy parameters are the privacy budget ε and the relaxation term δ.
  6. The method according to claim 3, wherein the noise value is randomly sampled from a Gaussian distribution with expected value 0 and variance σ², and σ satisfies the following constraint:
    the third probability, calculated with the cumulative distribution function for a function maximum boundary value determined at least from σ, is close to the fourth probability, calculated with the cumulative distribution function for a function minimum boundary value determined at least from σ.
  7. The method according to claim 6, wherein the function maximum boundary value is the difference between a first ratio, determined from the global sensitivity and σ, and a second ratio, determined from the product of the privacy budget ε and σ together with the global sensitivity; and the function minimum boundary value is the difference between the negative of the first ratio and the second ratio.
  8. The method according to claim 1, wherein representing each element in binarized form according to the sign of each element in the current aggregation result includes:
    using a sign function to binarize each element according to its sign in the current aggregation result.
  9. The method according to claim 1, wherein representing each element in binarized form according to the sign of each element in the current aggregation result includes:
    superimposing the current error compensation vector on the current aggregation result to obtain a superposition result;
    using a sign function to binarize each element according to its sign in the superposition result;
    wherein the current error compensation vector is obtained by superimposing, on the previous round's error compensation vector, the difference between the previous round's aggregation result and the binarized representation corresponding to the previous round's aggregation result.
  10. The method according to claim 1, wherein
    the samples in the local sample set of any participant i are pictures and the business prediction model is a picture recognition model; or
    the samples in the local sample set of any participant i are audio and the business prediction model is an audio recognition model; or
    the samples in the local sample set of any participant i are text and the business prediction model is a text recognition model.
  11. A method for privacy-preserving multi-party collaborative model updating, the multiple parties including a server and n participants, the method being executed by any participant i among the n participants and including multiple rounds of iteration, where any t-th round of iteration includes:
    determining a corresponding local gradient vector according to a local sample set and current model parameters;
    using a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector;
    sending the perturbation gradient vector to the server;
    receiving a target gradient vector from the server, wherein the target gradient vector is obtained by the server aggregating the n perturbation gradient vectors sent by the n participants and then binarizing each element according to its sign in the current aggregation result;
    updating the current model parameters according to the target gradient vector for the next round of iteration;
    after the multiple rounds of iteration, taking the obtained current model parameters as the business prediction model collaboratively updated with the other participants.
  12. A system for privacy-preserving multi-party collaborative model updating, including a server and n participants;
    each participant i is configured to determine a corresponding local gradient vector according to its local sample set and the current model parameters;
    each participant i is further configured to use a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector;
    each participant i is further configured to send its determined perturbation gradient vector to the server;
    the server is configured to aggregate the n perturbation gradient vectors sent by the n participants and, according to the sign of each element in the current aggregation result, represent each element in binarized form to obtain a target gradient vector;
    each participant i is further configured to receive the target gradient vector from the server and update the current model parameters according to the target gradient vector for the next round of iteration;
    each participant i is further configured to take, after the multiple rounds of iteration, the current model parameters it has obtained as the business prediction model it has collaboratively updated with the other participants.
  13. The system according to claim 12, wherein the local gradient vector includes a first element, and each participant i is specifically configured to:
    determine a first probability according to the value of the first element, the first probability being positively correlated with the value of the first element;
    convert the value of the first element to 1 with the first probability and to -1 with a second probability, where the sum of the first probability and the second probability is 1.
  14. The system according to claim 13, wherein each participant i is further specifically configured to:
    add a noise value to the value of the first element;
    determine the first probability from the value of the first element after the noise value is added, using the cumulative distribution function of the Gaussian distribution.
  15. The system according to claim 14, wherein the noise value is randomly sampled from a Gaussian distribution with expected value 0 and variance σ², and σ is determined at least from the product of the global sensitivity and the ratio of the two differential privacy parameters.
  16. The system according to claim 15, wherein the two differential privacy parameters are the privacy budget ε and the relaxation term δ.
  17. The system according to claim 14, wherein the noise value is randomly sampled from a Gaussian distribution with expected value 0 and variance σ², and σ satisfies the following constraint:
    the third probability, calculated with the cumulative distribution function for a function maximum boundary value determined at least from σ, is close to the fourth probability, calculated with the cumulative distribution function for a function minimum boundary value determined at least from σ.
  18. The system according to claim 17, wherein the function maximum boundary value is the difference between a first ratio, determined from the global sensitivity and σ, and a second ratio, determined from the product of the privacy budget ε and σ together with the global sensitivity; and the function minimum boundary value is the difference between the negative of the first ratio and the second ratio.
  19. The system according to claim 12, wherein the server is specifically configured to:
    use a sign function to binarize each element according to its sign in the current aggregation result.
  20. The system according to claim 12, wherein the server is further specifically configured to:
    superimpose the current error compensation vector on the current aggregation result to obtain a superposition result;
    use a sign function to binarize each element according to its sign in the superposition result;
    wherein the current error compensation vector is obtained by superimposing, on the previous round's error compensation vector, the difference between the previous round's aggregation result and the binarized representation corresponding to the previous round's aggregation result.
  21. The system according to claim 12, wherein
    the samples in the local sample set of any participant i are pictures and the business prediction model is a picture recognition model; or
    the samples in the local sample set of any participant i are audio and the business prediction model is an audio recognition model; or
    the samples in the local sample set of any participant i are text and the business prediction model is a text recognition model.
  22. An apparatus for privacy-preserving multi-party collaborative model updating, the multiple parties including a server and n participants, the apparatus being deployed at any participant i among the n participants and used to execute multiple rounds of iteration, the apparatus executing any t-th round of iteration through the following units it includes:
    a determination unit, configured to determine a corresponding local gradient vector according to a local sample set and current model parameters;
    a processing unit, configured to use a randomization algorithm satisfying differential privacy to randomly binarize each element of the local gradient vector, obtaining a perturbation gradient vector;
    a sending unit, configured to send the perturbation gradient vector to the server;
    a receiving unit, configured to receive a target gradient vector from the server, wherein the target gradient vector is obtained by the server aggregating the n perturbation gradient vectors sent by the n participants and then binarizing each element according to its sign in the current aggregation result;
    an update unit, configured to update the current model parameters according to the target gradient vector for the next round of iteration;
    the determination unit being further configured to take, after the multiple rounds of iteration, the obtained current model parameters as the business prediction model collaboratively updated with the other participants.
  23. A computer-readable storage medium, on which a computer program is stored, wherein when the computer program is executed in a computer, the computer is caused to perform the method according to any one of claims 1-11.
  24. A computing device, including a memory and a processor, wherein executable code is stored in the memory, and the processor, when executing the executable code, implements the method according to any one of claims 1-11.
PCT/CN2022/094020 2021-06-11 2022-05-20 Method, apparatus and system for privacy-preserving multi-party collaborative model updating WO2022257730A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/535,061 US20240112091A1 (en) 2021-06-11 2023-12-11 Methods, apparatuses, and systems for multi-party collaborative model updating for privacy protection

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110657041.8A CN113221183B (zh) 2021-06-11 2021-06-11 Method, apparatus and system for privacy-preserving multi-party collaborative model updating
CN202110657041.8 2021-06-11

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/535,061 Continuation US20240112091A1 (en) 2021-06-11 2023-12-11 Methods, apparatuses, and systems for multi-party collaborative model updating for privacy protection

Publications (1)

Publication Number Publication Date
WO2022257730A1 (zh)

Family

ID=77081483

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/094020 WO2022257730A1 (zh) 2021-06-11 2022-05-20 实现隐私保护的多方协同更新模型的方法、装置及系统

Country Status (3)

Country Link
US (1) US20240112091A1 (zh)
CN (1) CN113221183B (zh)
WO (1) WO2022257730A1 (zh)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221183B (zh) * 2021-06-11 2022-09-16 支付宝(杭州)信息技术有限公司 Method, apparatus and system for privacy-preserving multi-party collaborative model updating
CN116415676A (zh) * 2021-12-29 2023-07-11 新智我来网络科技有限公司 Prediction method and apparatus in joint learning
CN117112186A (zh) * 2022-05-13 2023-11-24 抖音视界(北京)有限公司 Method, apparatus, device and medium for model performance evaluation
CN115081642B (zh) * 2022-07-19 2022-11-15 浙江大学 Method and system for multi-party collaborative updating of a business prediction model

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180218171A1 (en) * 2017-01-31 2018-08-02 Hewlett Packard Enterprise Development Lp Performing privacy-preserving multi-party analytics on horizontally partitioned local data
CN111611610A (zh) * 2020-04-12 2020-09-01 西安电子科技大学 Federated learning information processing method, system, storage medium, program, and terminal
CN112100642A (zh) * 2020-11-13 2020-12-18 支付宝(杭州)信息技术有限公司 Privacy-protecting model training method and apparatus in a distributed system
CN112541593A (zh) * 2020-12-06 2021-03-23 支付宝(杭州)信息技术有限公司 Method and apparatus for jointly training a business model based on privacy protection
CN112818394A (zh) * 2021-01-29 2021-05-18 西安交通大学 Adaptive asynchronous federated learning method with local privacy protection
CN113221183A (zh) * 2021-06-11 2021-08-06 支付宝(杭州)信息技术有限公司 Method, apparatus and system for privacy-preserving multi-party collaborative model updating

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11449639B2 (en) * 2019-06-14 2022-09-20 Sap Se Differential privacy to prevent machine learning model membership inference
WO2020257264A1 (en) * 2019-06-18 2020-12-24 Google Llc Scalable and differentially private distributed aggregation
US11443240B2 (en) * 2019-09-06 2022-09-13 Oracle International Corporation Privacy preserving collaborative learning with domain adaptation
CN111325417B (zh) * 2020-05-15 2020-08-25 支付宝(杭州)信息技术有限公司 Method and apparatus for privacy-preserving multi-party collaborative updating of a business prediction model
CN112232401A (zh) * 2020-10-12 2021-01-15 南京邮电大学 A data classification method based on differential privacy and stochastic gradient descent
CN112182633B (zh) * 2020-11-06 2023-03-10 支付宝(杭州)信息技术有限公司 Privacy-protecting joint model training method and apparatus
CN112541592B (zh) * 2020-12-06 2022-05-17 支付宝(杭州)信息技术有限公司 Federated learning method and apparatus based on differential privacy, and electronic device


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115859367A (zh) * 2023-02-16 2023-03-28 广州优刻谷科技有限公司 Privacy protection method and system for multi-modal federated learning
CN115860789A (zh) * 2023-03-02 2023-03-28 国网江西省电力有限公司信息通信分公司 An FRL-based CES day-ahead scheduling method
CN117056979A (zh) * 2023-10-11 2023-11-14 杭州金智塔科技有限公司 Business processing model updating method and apparatus based on user privacy data
CN117056979B (zh) * 2023-10-11 2024-03-29 杭州金智塔科技有限公司 Business processing model updating method and apparatus based on user privacy data

Also Published As

Publication number Publication date
US20240112091A1 (en) 2024-04-04
CN113221183B (zh) 2022-09-16
CN113221183A (zh) 2021-08-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22819328

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE