WO2021120888A1 - Method and system for model training based on private data - Google Patents


Info

Publication number: WO2021120888A1
Authority: WO (WIPO, PCT)
Application number: PCT/CN2020/125316
Prior art keywords: data, gradient, encryption, decryption, private data
Other languages: English (en), French (fr)
Inventors: Chen Chaochao (陈超超), Wang Li (王力), Zhou Jun (周俊)
Applicant: Alipay (Hangzhou) Information Technology Co., Ltd. (支付宝(杭州)信息技术有限公司)
Application filed by Alipay (Hangzhou) Information Technology Co., Ltd.

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/602 Providing cryptographic facilities or services
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Definitions

  • One or more embodiments of this specification relate to multi-party data cooperation, and in particular to a method and system for model training based on private data.
  • Machine learning models can be used to analyze data and discover potential data value. Since the data held by a single data owner may be incomplete, it is difficult to accurately describe the target; to obtain better model prediction results, joint training of models through data cooperation among multiple data owners has been widely used. However, the process of multi-party data cooperation involves issues such as data security and model security.
  • One aspect of the embodiments of this specification provides a method for model training based on private data. The method includes: a second terminal receives encrypted first private data from a first terminal, the first private data being determined by its corresponding features and model parameters; the second terminal performs a calculation on at least the encrypted first private data and the encrypted second private data to obtain an encrypted result, the second private data being determined by its corresponding features and model parameters; based on the encrypted result and the sample label, the second terminal obtains an encryption loss value of the model jointly trained on at least the first private data and the second private data; the encryption loss value participates, through a third party, in the calculation of a first decryption gradient and a second decryption gradient; the first decryption gradient and the second decryption gradient correspond to the first private data and the second private data respectively, and are used to update the jointly trained model; the encryption is homomorphic encryption, and the third party holds the homomorphic encryption public key and the corresponding private key.
  • the system includes: a first data receiving module for receiving encrypted first private data from a first terminal, the first private data being determined by its corresponding features and model parameters; an encryption result determination module for performing a calculation on at least the encrypted first private data and the encrypted second private data to obtain an encrypted result, the second private data being determined by its corresponding features and model parameters; an encryption loss value determination module for obtaining, based on the encrypted result and the sample label, an encryption loss value of the model jointly trained on at least the first private data and the second private data; and a model parameter update module for having the encryption loss value participate, through a third party, in the calculation of a first decryption gradient and a second decryption gradient; the first decryption gradient and the second decryption gradient correspond to the first private data and the second private data respectively, and are used to update the jointly trained model; the encryption is homomorphic encryption, and the third party holds the homomorphic encryption public key and the corresponding private key.
  • the apparatus includes a processor and a memory; the memory is used to store instructions, and the processor is used to execute the instructions to implement the operations corresponding to the method of model training based on private data described above.
  • the method includes: a first terminal receives an encryption loss value from a second terminal; the encryption loss value participates, through a third party, in the calculation of a first decryption gradient and a second decryption gradient; the first decryption gradient and the second decryption gradient correspond to the first private data and the second private data respectively, and are used to update the jointly trained model; the encryption is homomorphic encryption; the first terminal and the second terminal hold the first private data and the second private data respectively, and the first private data and the second private data correspond to the same training sample.
  • the system includes: an encryption loss value receiving module for receiving an encryption loss value from a second terminal; and a model parameter update module for having the encryption loss value participate, through a third party, in the calculation of a first decryption gradient and a second decryption gradient; the first decryption gradient and the second decryption gradient correspond to the first private data and the second private data respectively, and are used to update the jointly trained model; the encryption is homomorphic encryption; the first terminal and the second terminal hold the first private data and the second private data respectively, and the first private data and the second private data correspond to the same training sample.
  • the apparatus includes a processor and a memory; the memory is used to store instructions, and the processor is used to execute the instructions to implement the operations corresponding to the method of model training based on private data described above.
  • Fig. 1 is an exemplary application scenario diagram of a system for model training based on private data according to some embodiments of this specification;
  • Fig. 2 is an exemplary flowchart of a method for model training based on private data according to some embodiments of this specification.
  • Fig. 3 is an exemplary flowchart of a method for model training based on private data according to some other embodiments of this specification.
  • It should be understood that the terms "system", "device", "unit", and/or "module" used in this specification are a way of distinguishing different components, elements, parts, or assemblies at different levels.
  • However, these words may be replaced by other expressions that serve the same purpose.
  • Data processing and analysis, such as data mining and trend prediction, are widely used on the large amounts of data generated in industries such as economy, culture, education, medical treatment, and public management.
  • data cooperation can enable multiple data owners to obtain better data processing results.
  • more accurate model parameters can be obtained through joint training of multi-party data.
  • a joint training system for models based on private data can be applied to scenarios where all parties coordinate to train machine learning models for use by multiple parties while ensuring the security of the data of all parties.
  • multiple data parties have their own data and want to use each other's data for unified modeling (for example, linear regression models, logistic regression models, etc.), but they do not want their own data (especially private data) to be leaked.
  • For example, Internet savings institution A holds one batch of user data,
  • and government bank B holds another batch of user data.
  • A training sample set determined from the user data of both A and B can produce a better machine learning model. Both A and B are willing to participate in model training using each other's user data, but for various reasons they are unwilling to have their user data leaked, or at least unwilling to let the other party learn it.
  • the system for model training based on private data enables multiple parties, through joint training on multi-party data, to obtain a machine learning model for common use without their private data being leaked, achieving win-win cooperation.
  • a garbled circuit or secret sharing method may be used in order to prevent the leakage of private data.
  • the private data of all parties can also be homomorphically encrypted, and then the private data of all parties can participate in the calculation of the model training in the encrypted state.
  • homomorphic encryption supports only product and/or sum operations, so the corresponding calculation formulas need to be converted accordingly when it is used. In some scenarios with large feature dimensions, the homomorphic encryption scheme has high computational efficiency.
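To make this sum-and-product property concrete, below is a minimal toy implementation of the Paillier cryptosystem, a common additively homomorphic scheme. The patent text does not name a specific scheme, so Paillier, the tiny primes, and the example values are illustrative assumptions only (the key size shown is far too small to be secure). Adding plaintexts corresponds to multiplying ciphertexts, and multiplying a plaintext by a known scalar corresponds to exponentiating its ciphertext:

```python
import random
from math import gcd

# Toy Paillier keypair with tiny primes (illustration only -- not secure).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1)   # lambda = lcm(p-1, q-1) also works; phi(n) suffices when g = n+1
mu = pow(lam, -1, n)      # mu = lambda^-1 mod n (Python 3.8+)

def encrypt(m):
    """Encrypt plaintext m < n: c = g^m * r^n mod n^2, with random r coprime to n."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Decrypt: L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts.
a, b = 17, 25
assert decrypt(encrypt(a) * encrypt(b) % n2) == a + b
# Scalar product: exponentiating a ciphertext scales the plaintext.
assert decrypt(pow(encrypt(a), 3, n2)) == 3 * a
```

This is why a formula must first be rewritten in terms of sums and products of encrypted values before the parties can evaluate it in the encrypted state.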
  • a third party can also intervene to improve computing efficiency.
  • multiple data owners send encrypted private data to a third party, which collects and processes it and then distributes the results to each data owner.
  • Through the participation of a third-party server, multi-party data owners can encrypt their own data with the third party's public key, use the encrypted data to participate in the calculation, and finally send the encrypted calculation results to the third party for decryption in a secure manner.
  • The participation of a third party can reduce the number of data-encryption public keys and the number of encryption layers, avoiding the low operation efficiency brought by multiple public-key encryptions and multi-layer encryption; that is, it can improve operation efficiency.
  • Fig. 1 is an exemplary application scenario diagram of a system for model training based on private data according to some embodiments of this specification.
  • the system 100 for model training based on private data includes a first terminal 110, a second terminal 120, a third party 130, and a network 140.
  • the first terminal 110 can be understood as the first-party data owner, including the processing device 110-1 and the storage device 110-2;
  • the second terminal 120 can be understood as the second-party data owner, including the processing device 120-1 and the storage device 120-2;
  • the third party 130 is not the data owner and does not hold the training data of the model.
  • the third party acts as an intermediary, participating in the joint training of the model based on the data of the multi-party data owners.
  • the multi-party data owners encrypt their own data with the third party's public key, use the encrypted data to participate in model training, and, when appropriate, send the encrypted operation results to the third party for decryption in a secure manner, thereby obtaining the values used to update the model parameters.
  • the data held by the first-party data owner and the second-party data owner relate to user-related information in different fields.
  • the data held by both parties can include the amounts users deposit into their bank accounts each year; it can also include information such as the gender, age, income, and address of the user group involved in a certain investment and wealth-management project or a certain insurance brand.
  • the number of data owners in FIG. 1 is two. In other embodiments, third-party data owners and fourth-party data owners may also be included.
  • the first terminal 110 and the second terminal 120 may be devices with data acquisition, storage, and/or sending functions.
  • the first terminal 110 and the second terminal 120 may include, but are not limited to, a mobile device, a tablet computer, a notebook computer, a desktop computer, etc., or any combination thereof.
  • the first terminal 110 and the second terminal 120 may receive related data from each other, or may receive related data from a third party 130.
  • the first terminal 110 may receive the encryption loss value from the second terminal.
  • the first terminal 110 and the second terminal 120 may receive the public key of the third party 130 from the third party 130.
  • the first terminal 110 may also send the first encryption gradient plus the mask to the third party 130.
  • the processing devices 110-1 and 120-1 of the first terminal and the second terminal may perform data and/or instruction processing.
  • the processing devices 110-1 and 120-1 can encrypt data, and can also execute related algorithms and/or instructions.
  • the processing device 110-1 of the first terminal 110 may receive the public key from the third party 130 and use the public key to encrypt the first private data, or may use the encryption loss value to participate in the joint training of the model.
  • the processing device 120-1 of the second terminal 120 may receive the public key from the third party 130, and use the public key to encrypt the second private data, and may also calculate the encryption loss value based on related algorithm instructions.
  • the storage devices 110-2 and 120-2 of the first terminal and the second terminal can store data and/or instructions used by the corresponding processing devices 110-1 and 120-1, and the processing devices 110-1 and 120-1 can execute or use the data and/or instructions to implement the exemplary methods in this specification.
  • the storage devices 110-2 and 120-2 can be used to store the first privacy data and the second privacy data, respectively; and can also store related instructions instructing the first terminal and the second terminal to perform operations.
  • the storage devices 110-2 and 120-2 may also store data processed by the processing devices 110-1 and 120-1, respectively.
  • the storage devices 110-2 and 120-2 may also store the model parameters of the features corresponding to the first private data and the model parameters of the features corresponding to the second private data, respectively.
  • the storage device 110-2 and the storage device 120-2 may also be a single shared storage device, from which the first terminal and the second terminal can each obtain only the data they stored themselves.
  • the storage device may include mass memory, removable memory, volatile read-write memory, read-only memory (ROM), etc., or any combination thereof.
  • the third party 130 has at least the ability to process data and/or instructions.
  • the third party includes at least processing equipment with computing capabilities, such as cloud servers, terminal processing equipment, etc.
  • the third party 130 may send the public key to various data owners (for example, the first terminal 110 and the second terminal 120).
  • the third party 130 may perform a decryption operation, for example, decrypting the masked first encryption gradient received from the first terminal 110.
  • the third party 130 may also have data and/or instruction storage capabilities, that is, the third party 130 may also include a storage device.
  • the storage device may be used to store the public key and private key of the third party 130, as well as operation instructions used by the third party to execute.
  • the third party may belong to an impartial judicial institution or government department; it may also belong to a unit recognized by all parties to the data.
  • the network 140 may facilitate the exchange of information and/or data.
  • One or more components of the system 100 for model training based on private data, for example the first terminal 110 (processing device 110-1 and storage device 110-2) and the second terminal 120 (processing device 120-1 and storage device 120-2), can send information and/or data to other components in the system 100 via the network 140.
  • the processing device 120-1 of the second terminal 120 may obtain the first privacy data from the first terminal 110 via the network 140.
  • the processing device 110-1 of the first terminal 110 may obtain the first privacy data from the storage device 110-2 of the first terminal 110 through the network 140.
  • the network 140 may be any form of wired or wireless network, or any combination thereof.
  • the system in one or more embodiments of this specification may be composed of a data receiving module and several data processing modules.
  • the data receiving module includes a first data receiving module; the data processing module may include an encryption result determination module, an encryption loss value determination module, and a model parameter update module.
  • the above-mentioned modules are all executed in the computing system introduced in the application scenario, and each module includes its own instructions.
  • the instructions can be stored on a storage medium, and the instructions can be executed in a processor. Different modules can be located on the same device or on different devices. Data can be transmitted between them through a program interface, a network, etc., and data can be read from or written to a storage device.
  • the first data receiving module may be used to receive encrypted first private data from the first terminal, where the first private data is determined by the corresponding characteristics and model parameters.
  • the encryption result determination module may be used to calculate at least the encrypted data of the encrypted first privacy data and the second privacy data to obtain the encrypted result; the second privacy data is determined by the corresponding characteristics and model parameters.
  • the encryption loss value determination module may be used to obtain an encryption loss value based on at least a model jointly trained with the first private data and the second private data based on the encrypted result and the sample label.
  • the encryption loss value determination module may also be used to determine the encryption loss value based on a Taylor expansion formula and a Sigmoid function.
  • the model parameter update module can be used to have the encryption loss value participate, through the third party, in the calculation of the first decryption gradient and the second decryption gradient; the first decryption gradient and the second decryption gradient correspond to the first private data and the second private data respectively, and are used to update the jointly trained model; the encryption is homomorphic encryption; the third party holds the homomorphic encryption public key and the corresponding private key; the first private data and the second private data correspond to the same training sample.
  • the model parameter update module may also be used to determine a second encryption gradient based on the encryption loss value and the characteristics corresponding to the second privacy data.
  • the model parameter update module may also be used to determine a second mask gradient based on the second encryption gradient and the second mask, and transmit the second mask gradient to the A third party; receiving a second decryption result from a third party; the second decryption result corresponds to the second mask gradient; based on the second decryption result and the second mask, a second decryption gradient is determined, and based on The second decryption gradient updates the jointly trained model.
  • the system further includes other data receiving modules, which can be used to receive other private data from other terminals; the encryption result determination module is also used to calculate the encrypted first private data, the encrypted other private data, and the encrypted second private data to obtain an encrypted result.
  • the data receiving module includes an encryption loss value receiving module; the data processing module may include a model parameter update module.
  • the data receiving module may be used to receive the encryption loss value from the second terminal.
  • the model parameter update module can be used to have the encryption loss value participate in model training through a third party, so as to obtain the decryption gradients used to update the model parameters.
  • system and its modules in one or more implementations of this specification can be implemented in various ways.
  • the system and its modules may be implemented by hardware, software, or a combination of software and hardware.
  • the hardware part can be implemented using dedicated logic;
  • the software part can be stored in a memory and executed by an appropriate instruction execution system, such as a microprocessor or dedicated design hardware.
  • For example, processor control codes may be provided on a carrier medium such as a disk, CD, or DVD-ROM, on a programmable memory such as read-only memory (firmware), or on a data carrier such as an optical or electronic signal carrier.
  • the system and its modules of this application may be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (for example, firmware).
  • Fig. 2 is an exemplary flowchart of a method for model training based on private data according to some embodiments of the present specification.
  • the variable names and formulas in this specification are only for a better understanding of the methods described herein.
  • various non-substantial transformations can be made to the following processes, variable names, and formulas, such as changing the order of rows or columns, transforming to an equivalent form during matrix multiplication, or expressing the same calculation in another calculation form.
  • the training data of the joint training model includes m data samples, and each sample data includes n-dimensional features.
  • the n-dimensional feature data of the m samples are held by at least the first-party data owner and the second-party data owner.
  • some embodiments of this specification take two-party data owners as examples for detailed description, and use A and B to represent the first-party data owner and the second-party data owner, respectively.
  • the first-party data owner may also be referred to as the first terminal
  • the second-party data owner may also be referred to as the second terminal.
  • the first-party data owner A owns the data (Xa) corresponding to p-dimensional features of the m samples, and the model parameters (Wa) corresponding to the p-dimensional features; the second-party data owner B owns the data (Xb) corresponding to the other q-dimensional features of the m samples, and the model parameters (Wb) corresponding to the q-dimensional features.
  • model parameters can also be referred to simply as models.
  • Xa is a matrix composed of m samples, and each sample is a row vector with 1 row and p columns, that is, Xa is a matrix with m rows and p columns.
  • Wa is a parameter matrix of p features corresponding to A, and Wa is a p*1 dimensional matrix.
  • Xb is a matrix with m rows and q columns.
  • Label y is held by one of A and B; which of the two parties holds it has no substantial impact.
  • the label y is held by B, and y is an m*1-dimensional column vector.
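As a plaintext sketch of this data partition, the following uses hypothetical numbers (m = 4, p = 2, q = 3; all values are invented for illustration). Each party computes its share of the linear predictor, Ua = Xa·Wa and Ub = Xb·Wb, locally:

```python
# Hypothetical toy data: m = 4 samples split vertically between A and B.
m, p, q = 4, 2, 3
Xa = [[1.0, 2.0], [0.5, 1.5], [2.0, 0.0], [1.0, 1.0]]                       # m x p, held by A
Wa = [0.3, -0.2]                                                            # p x 1, held by A
Xb = [[0.0, 1.0, 2.0], [1.0, 0.0, 1.0], [2.0, 2.0, 0.0], [0.5, 0.5, 0.5]]  # m x q, held by B
Wb = [0.1, 0.2, -0.1]                                                       # q x 1, held by B
y = [1, 0, 1, 0]                                                            # m x 1 labels, held by B

def matvec(X, w):
    """Multiply an (m x k) matrix by a (k x 1) vector, row by row."""
    return [sum(x * wj for x, wj in zip(row, w)) for row in X]

Ua = matvec(Xa, Wa)                   # A's share of the linear predictor, m x 1
Ub = matvec(Xb, Wb)                   # B's share, m x 1
z = [a + b for a, b in zip(Ua, Ub)]   # full predictor Xa.Wa + Xb.Wb
```

In the protocol itself, A never reveals Xa or Wa; only the encrypted Ua crosses the wire, and the sum z is formed under homomorphic encryption.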
  • the third party may be an impartial judicial institution or government agency; it may also be a unit recognized by all parties to the data.
  • the third party includes processing devices with at least computing capabilities, such as servers or terminal processing devices. The third party owns its own public key and private key, and gives the public key to each terminal that owns the data.
  • [X] means encrypting X with the third party's public key.
  • If X is a matrix, it means encrypting each element of the matrix. Unless otherwise specified, encryption can refer to any asymmetric encryption method.
  • the variable names, formulas, and other expressions appearing in this specification are only for a better understanding of the methods described herein.
  • various insubstantial transformations can be made to the representation method, variable names, formulas, calculation methods, etc., without affecting their essence and corresponding technical effects, for example, but not limited to, changing the order of rows or columns, transforming to an equivalent form during matrix multiplication, or expressing the same calculation in other calculation forms.
  • Step 210 The third party sends the public keys to A and B respectively.
  • the third party will give its own public key to data owner A and data owner B for subsequent use in data encryption. For example, a third party can transmit its public key to A and B through the network.
  • Step 220 A and B respectively calculate Ua and Ub and encrypt them.
  • Both parties perform product operations of the model parameters and feature data they hold, and encrypt the results of their product operations with a third-party public key.
  • the data owner A sends the ciphertext data to the data owner B.
  • the Ua and [Ua] thus obtained are both a matrix of m rows and 1 column;
  • the encryption algorithm used is a homomorphic encryption algorithm.
  • That is, [Ua] + [Ub] = [Ua + Ub].
  • step 230 B calculates the encryption loss value and sends it to A.
  • the data owner B, who holds the encrypted data of both parties, adds the two parties' encrypted data. Since the encryption algorithm is homomorphic, the summed value is equal to the encryption of the sum of the two parties' unencrypted data.
  • Further B calculates the loss value based on the added ciphertext data.
  • the Taylor expansion can be used to approximate the Sigmoid function. Since Taylor's expansion is the addition and multiplication of polynomials and can support homomorphic encryption, the Taylor expansion can be used to calculate the approximate loss value in the encrypted state.
  • the calculated encryption loss value [d] is a matrix with m rows and 1 column.
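The sigmoid function itself contains an exponential, which cannot be expressed with only the sums and products that homomorphic encryption supports. A low-order Taylor polynomial can stand in for it; the sketch below assumes the common degree-3 expansion around zero, sigma(x) ~ 1/2 + x/4 - x^3/48, since the patent does not state which order is used:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_taylor(x):
    # Degree-3 Taylor expansion of the sigmoid around 0.
    # Only sums and products appear, so this polynomial can be
    # evaluated on homomorphically encrypted inputs.
    return 0.5 + x / 4.0 - x ** 3 / 48.0

# The polynomial tracks the true sigmoid closely near 0.
for x in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    assert abs(sigmoid(x) - sigmoid_taylor(x)) < 0.01
```

The approximation degrades for large |x|, which is why such schemes typically rely on the predictor staying in a moderate range during training.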
  • Step 240 B calculates the second encryption gradient value.
  • the data owner B substitutes the encryption loss value into the gradient descent formula, that is, performs a product operation between the encryption loss value and the data corresponding to its own characteristics, and calculates the second encryption gradient value.
  • Using the gradient calculation formula and homomorphic multiplication, party B obtains the second encryption gradient value [Gb], encrypted with the third party's public key.
  • the second encrypted gradient value [Gb] thus obtained is a matrix with q rows and 1 column.
  • Step 242 B adds the second encryption gradient to the second mask, and sends it to a third party for decryption.
  • the second mask, and the first mask mentioned later, are values set by the respective data owners; the main purpose is to prevent the third party from learning the decrypted gradient values.
  • the mask described in one or more embodiments of this specification can be understood as any numerical value that can participate in an encryption operation.
  • the second mask can be -0.001, 0.1, 3, 300, and so on. This specification does not limit the setting range of the specific values of the first mask and the second mask, as long as the above-mentioned purpose can be met.
  • B calculates [Gb]+[mask2] and sends it to a third party.
  • mask2 is the second mask, which has the same dimension as the second gradient value Gb, so Gb+mask2 is also a matrix with q rows and 1 column.
  • Step 244 B receives the decryption result returned by the third party.
  • the third party sends the decryption result with the second mask to the data owner B.
  • B receives the decryption result and removes the second mask to obtain the second gradient value of party B.
  • Gb+mask2 is the decryption result.
  • the second gradient value Gb is a matrix with q rows and 1 column.
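The masking round trip in steps 242-244 can be sketched in the plaintext domain (the arithmetic is identical under an additively homomorphic scheme; the gradient values and mask range below are invented for illustration). The third party only ever sees Gb + mask2, which reveals nothing useful about Gb when the mask is drawn from a wide range:

```python
import random

# B's true (decrypted) gradient -- the third party must never learn this.
Gb = [0.12, -0.05, 0.30]   # q = 3 components

# Step 242: B adds a random mask and sends [Gb + mask2] to the third party.
mask2 = [random.uniform(-1000, 1000) for _ in Gb]
masked = [g + mk for g, mk in zip(Gb, mask2)]

# The third party decrypts with its private key; in this plaintext sketch the
# "decryption result" is simply the masked sum it received.
decrypted = list(masked)

# Step 244: B removes the mask to recover its own gradient.
recovered = [d - mk for d, mk in zip(decrypted, mask2)]
assert all(abs(r - g) < 1e-9 for r, g in zip(recovered, Gb))
```

Because mask2 has the same dimension as Gb, the subtraction is elementwise and exact up to floating-point error.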
  • Step 246 B updates the model based on the second gradient value.
  • the data owner B, having obtained his own second gradient value, multiplies the second gradient value by the learning rate to update the model.
  • learning_rate is the step-size parameter that controls the magnitude of each update in gradient descent.
  • Step 250 A calculates the first encryption gradient value.
  • the data owner A substitutes the encryption loss value into the gradient descent formula, that is, performs a product operation between the encryption loss value and the data corresponding to its own characteristics, and calculates the first encryption gradient value.
  • Using the gradient calculation formula and the homomorphic encryption operations, party A obtains the first encryption gradient value [Ga], encrypted with the third party's public key.
  • the first encrypted gradient value [Ga] thus obtained is a matrix with p rows and 1 column.
  • Step 252 A adds the first encryption gradient value to the first mask, and sends it to a third party for decryption.
  • A adds the first encrypted gradient value to the first mask encrypted with the third-party public key, and sends it to the third party, and the third party decrypts the received encrypted data with its own private key.
  • Ga+mask1 is also a matrix with p rows and 1 column.
  • Step 254 A receives the decryption result returned by the third party.
  • the third party sends the decryption result with the first mask to the data owner A, and A receives the decryption result and removes the first mask to obtain the first gradient value of party A.
  • Ga+mask1 is the decryption result
  • the first gradient value Ga is a matrix with p rows and 1 column.
  • Step 256 A updates the model based on the first gradient value.
  • the data owner A, having obtained his own first gradient value, multiplies the first gradient value by the learning rate to update the model.
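Putting steps 220 through 256 together, here is a plaintext simulation of one gradient step of the two-party protocol. Encryption, masking, and the third party are omitted, since under homomorphic encryption the encrypted-domain arithmetic mirrors this exactly; the data, the residual form d = sigmoid_taylor(z) - y, the averaged gradient G = X^T d / m, and the learning rate are all illustrative assumptions rather than formulas quoted from the patent:

```python
# Toy vertically partitioned data (hypothetical values): m = 4, p = q = 2.
Xa = [[1.0, 2.0], [0.5, 1.5], [2.0, 0.0], [1.0, 1.0]]   # A's features, m x p
Xb = [[0.0, 1.0], [1.0, 0.0], [2.0, 2.0], [0.5, 0.5]]   # B's features, m x q
Wa, Wb = [0.3, -0.2], [0.1, 0.2]                        # each party's parameters
y = [1, 0, 1, 0]                                        # labels, held by B
learning_rate = 0.1
m = len(y)

def matvec(X, w):
    return [sum(x * wj for x, wj in zip(row, w)) for row in X]

# Steps 220-230: each party computes its share; B forms the loss term.
Ua, Ub = matvec(Xa, Wa), matvec(Xb, Wb)
z = [a + b for a, b in zip(Ua, Ub)]
sigmoid_taylor = lambda x: 0.5 + x / 4.0 - x ** 3 / 48.0
d = [sigmoid_taylor(zi) - yi for zi, yi in zip(z, y)]   # m x 1 loss values

# Steps 240/250: each party's gradient over its own features, G = X^T d / m.
def grad(X, d):
    k = len(X[0])
    return [sum(X[i][j] * d[i] for i in range(m)) / m for j in range(k)]

Ga, Gb = grad(Xa, d), grad(Xb, d)

# Steps 246/256: each party updates only its own parameters.
Wa = [w - learning_rate * g for w, g in zip(Wa, Ga)]
Wb = [w - learning_rate * g for w, g in zip(Wb, Gb)]
```

In the real protocol, d would be computed and exchanged only in encrypted form, and Ga and Gb would pass through the mask-then-decrypt round trip of steps 242-244 and 252-254 before the updates.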
  • Fig. 3 is an exemplary flowchart of a method for model training based on private data according to some other embodiments of this specification.
  • one or more steps in the method 300 may be implemented in the system 100 shown in FIG. 1.
  • one or more steps in the method 300 may be stored in the storage device 110-2/storage device 120-2 as instructions, and called and/or executed by the processing device 110-1/processing device 120-1.
  • Step 310 The second terminal receives the encrypted first privacy data from the first terminal.
  • step 310 may be performed by the first data receiving module.
  • the first terminal may be the data owner A described in part of FIG. 2, and the second terminal may be the data owner B described in part of FIG. 2.
  • the first private data is held by the first terminal, and the second terminal holds the second private data.
  • the first private data and the second private data correspond to different features (Xa and Xb) and model parameters (Wa and Wb) of the same sample.
  • the first private data may be determined by the product Ua of the first feature and the first model parameter.
  • the first private data is Wa*Xa.
  • the second privacy data can be determined by the product Ub of the second feature and the second model parameter, that is, Wb*Xb.
  • Ua, Ub; Wa, Xa; and Wb, Xb please refer to the relevant description in FIG. 2.
  • the first terminal uses a third-party public key to encrypt the first private data.
  • For a detailed description of encrypting the first private data with the third-party public key and transmitting the encrypted data to the second terminal, refer to step 220 in FIG. 2 of this specification.
  • the first private data may also be Wa and Xa, and in some embodiments, the second private data may also include Wb and Xb.
  • Encryption in one or more embodiments of this specification refers to homomorphic encryption: when a result computed in the encrypted state is decrypted, the decrypted result is the same as the result of the same computation on the unencrypted original data.
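The additive homomorphic property described above can be illustrated with a deliberately insecure mock scheme, in which encryption is multiplication by a secret scalar; it stands in for a real cryptosystem such as Paillier:

```python
class MockHE:
    """Insecure stand-in for an additively homomorphic cryptosystem
    (e.g. Paillier).  Enc(x) = x * s for a secret scalar s, so that
    Enc(a) + Enc(b) = Enc(a + b) and k * Enc(a) = Enc(k * a)."""
    def __init__(self, secret=1_000_003):
        self._s = secret            # held only by the third party
    def encrypt(self, x):           # done with the third party's public key
        return x * self._s
    def decrypt(self, c):           # done with the third party's private key
        return c / self._s

he = MockHE()
a, b = 3.5, 4.25
# The sum of ciphertexts decrypts to the sum of plaintexts:
assert he.decrypt(he.encrypt(a) + he.encrypt(b)) == a + b
# A product with a plaintext constant is also supported:
assert he.decrypt(2.0 * he.encrypt(a)) == 2.0 * a
```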
  • the third party holds the public key required for "encryption" in the embodiment and the corresponding private key.
  • the sample data held by the data owner may be user attribute information in at least one of the fields of insurance, banking, and medical care.
  • the bank owns the identity information, transaction-flow information, and credit information of the bank's customers
  • the insurance company owns its customers' identity information, historical insurance purchase information, historical claims information, health information, vehicle condition information, etc.
  • medical institutions own their patients' identity information, historical medical records, etc.
  • the user attribute information includes images, text, or voice.
  • the model owned by the data owner can make predictions based on the features of the sample data. For example, the bank can predict its annual deposit growth rate based on features such as user growth in the first and second quarters, the identities of new users, and new bank policies.
  • the model may also be used to confirm the user's identity information, and the user's identity information may include, but is not limited to, the user's credit evaluation.
  • the private data in one or more embodiments of this specification may include private data related to the entity.
  • an entity can be understood as a visualized subject, which can include, but is not limited to, users, merchants, and so on.
  • the privacy data may include image data, text data, or sound data.
  • the image data in the privacy data may be a user's face image, a merchant's logo image, a two-dimensional code image that can reflect user or merchant information, and so on.
  • the text data in the privacy data may be text data such as the user's gender, age, education background, income, etc., or text data such as the type of merchandise traded by the merchant, the time when the merchant conducts the merchandise transaction, and the price range of the merchandise.
  • the voice data of the privacy data may be related voice content including user personal information or user feedback, and corresponding user personal information or user feedback information can be obtained by parsing the voice content.
  • Step 320 The second terminal calculates at least the encrypted data of the first private data and the encrypted data of the second private data to obtain an encrypted result. In some embodiments, step 320 may be performed by the encryption result determination module.
  • the encrypted result may be understood as the result obtained by calculating the first private data and the second private data in an encrypted state.
  • the encrypted data of the first private data and the encrypted data of the second private data may be subjected to a sum operation to obtain the encrypted result.
  • the encrypted data of the first private data Ua is [Ua]
  • the encrypted data of the second private data Ub is [Ub]
  • the encrypted result obtained by the sum operation is [Ua]+[Ub], which is [Ua+Ub].
  • the specific encryption process refer to step 230 in FIG. 2.
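As a sketch of step 320, the element-wise ciphertext sum [Ua] + [Ub] decrypts to Ua + Ub. Encryption is again mocked by a secret scalar standing in for a real scheme such as Paillier, and the vectors are made-up example values:

```python
S = 1_000_003                       # third party's secret (mock)
enc = lambda v: [x * S for x in v]  # "encrypt" a column vector
dec = lambda v: [x / S for x in v]

Ua = [0.5, -1.25, 2.0]              # Wa*Xa held by the first terminal
Ub = [1.5, 0.25, -1.0]              # Wb*Xb held by the second terminal

# [Ua] + [Ub] computed element-wise on ciphertexts
enc_sum = [ca + cb for ca, cb in zip(enc(Ua), enc(Ub))]
assert dec(enc_sum) == [2.0, -1.0, 1.0]   # equals Ua + Ub
```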
  • Step 330: based on the encrypted result and the sample label, the second terminal obtains an encrypted loss value of the model jointly trained based on at least the first private data and the second private data.
  • step 330 may be performed by the encryption loss value determination module.
  • the loss value can be used to reflect the difference between the predicted value of the training model and the actual sample data.
  • the loss value may reflect, by participating in calculations, the difference between the predicted value and the true value.
  • the related calculation formulas of different training models are different, and the calculation formulas corresponding to different parameter optimization algorithms are also different when the same training model is used.
  • the calculation formula for the loss value is, for example, d = ŷ - y; however, one or more embodiments of this specification do not limit the calculation formula used to determine the loss value.
  • the second terminal may calculate the encryption loss value of the joint training model based on the encrypted result [Ua+Ub] and the sample label y, for example, [d] in FIG. 2.
  • the label y can be held by either the first terminal or the second terminal.
  • the jointly trained model may include a linear regression model; it may also include a logistic regression model.
  • when the jointly trained model includes a logistic regression model, the Sigmoid function needs to be used to calculate the loss value d. Since the homomorphic encryption algorithm only supports product operations and sum operations, the Sigmoid function can be replaced, as needed, with an approximate function that involves only product and sum operations. For example, in some embodiments, the Sigmoid function can be expanded with the Taylor formula, and the encrypted loss value can then be calculated based on the Taylor expansion of Sigmoid. For a detailed description, refer to step 230 in FIG. 2.
  • the joint training model is a linear regression model
  • in the linear regression model, a linear function can be used directly to calculate the predicted value; since a linear function already involves only the product and sum operations supported by the homomorphic encryption algorithm, no Taylor expansion is needed.
  • the second terminal can calculate the encryption loss value based on the sum z of the first privacy data and the second privacy data
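For the logistic-regression case, the first-order Taylor replacement of Sigmoid mentioned above can be checked numerically. `sigmoid_taylor1` is a hypothetical helper name, and the test point z = 0.2 is an arbitrary example:

```python
import math

def sigmoid(z):
    # Exact Sigmoid, which homomorphic encryption cannot evaluate directly.
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_taylor1(z):
    # First-order Taylor expansion around 0: sigmoid(z) ~ 1/2 + z/4,
    # which needs only sums and products and so is HE-compatible.
    return 0.5 + z / 4.0

# The approximation is close near zero:
assert abs(sigmoid(0.2) - sigmoid_taylor1(0.2)) < 1e-3

# Loss value d = y_hat - y under the approximation, mirroring the
# encrypted computation [d] = [1/2] + [z]/4 - [y]:
z, y = 0.2, 1.0
d = sigmoid_taylor1(z) - y
assert abs(d + 0.45) < 1e-9
```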
  • step 340 the encryption loss value is trained by a third party to participate in the encryption model to obtain a parameter update model.
  • step 340 may be performed by a model parameter update module.
  • the third party may be a terminal processing device or a server.
  • terminal processing equipment includes processors and storage devices, such as iPads, desktop computers, notebooks, and so on.
  • the encrypted loss value being used, through a third party, to participate in encrypted model training can be understood as performing encrypted computation with the encrypted loss value, with the third party participating, and finally obtaining through decryption the values used to update the model parameters, thereby obtaining a model with updated parameters.
  • a gradient descent method may be used to obtain a parameter update model.
  • the obtained encrypted loss value can be used to compute the encrypted gradient value that participates in model training, and the above process is repeated until the number of iterations reaches a predefined upper limit or the error computed from the encrypted loss value is less than a predefined value, at which point the trained model is obtained.
  • a gradient descent method may be used to minimize the loss value d.
  • the first encrypted gradient [Ga] of the first terminal and the second encrypted gradient [Gb] of the second terminal may be determined based on the encrypted loss value [d] and the features Xa and Xb corresponding to the first private data and the second private data.
  • the first terminal and the second terminal may respectively determine the corresponding first decryption gradient Ga and the second decryption gradient Gb based on the first encryption gradient [Ga] and the second encryption gradient [Gb], and respectively based on The first decryption gradient Ga and the second decryption gradient Gb update the model parameters, thereby obtaining a parameter updated model.
  • the specific process of determining the second encryption gradient [Gb] by the second terminal based on the encryption loss value [d] and the feature Xb corresponding to the second privacy data may refer to step 240 in FIG. 2.
  • the second terminal may obtain the corresponding second decryption gradient from the second encrypted gradient by adding a mask. Specifically, the second terminal determines the second mask gradient based on the second encrypted gradient and the mask, and transmits the second mask gradient to the third party holding the encryption private key; the third party decrypts the received second mask gradient and transmits the corresponding second decryption result to the second terminal; the second terminal removes the second mask based on the received second decryption result and the second mask, obtaining the second decryption gradient.
  • the second mask gradient can be understood as a result of an operation of the second encryption gradient and the second mask.
  • the operation may include a product operation or a sum operation; the second mask may also include one value or multiple values.
  • if the second mask mask2 is a single value and the operation is a sum operation, the corresponding mask gradient may be [Gb]+[mask2].
  • when the second mask is applied by a product operation, the second mask gradient may be [Gb]×[mask2].
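Both mask styles can be illustrated with the same scalar mock of encryption. Here the multiplicative mask is applied as a plaintext factor, a simplification of the [Gb]×[mask2] form above; Gb and the mask values are made up:

```python
S = 1_000_003
enc = lambda x: x * S
dec = lambda c: c / S

Gb, mask_sum, mask_prod = 0.75, 300.0, 4.0

# Additive mask: the third party decrypts Gb + mask2 and learns nothing
# about Gb itself; the owner subtracts mask2 afterwards.
seen_by_third_party = dec(enc(Gb) + enc(mask_sum))   # Gb + mask2, not Gb
recovered_add = seen_by_third_party - mask_sum

# Multiplicative mask: [Gb] scaled by a plaintext mask factor; the owner
# divides the mask back out after decryption.
recovered_mul = dec(enc(Gb) * mask_prod) / mask_prod
```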
  • the second terminal updates the joint training model based on the second decryption gradient Gb. For a specific description, refer to step 246 in FIG. 2.
  • after the second terminal determines the encrypted loss value, it needs to pass the encrypted loss value to the first terminal; the first terminal then participates in the joint training of the model through the third party based on the received encrypted loss value.
  • the first terminal may determine the first encrypted gradient [Ga] based on the received encrypted loss value [d] and the feature Xa corresponding to the first private data. For the specific process, refer to step 250 in FIG. 2.
  • the first terminal may also obtain the corresponding first decryption gradient from the first encrypted gradient by adding a mask. For details, see the process by which the second terminal obtains the second decryption gradient from the second encrypted gradient, as well as step 252 and step 254 in FIG. 2.
  • the first terminal updates the joint training model based on the first decryption gradient Ga.
  • some embodiments also cover three or more data owners jointly training a machine learning model with their own sample data, where the multiple data owners hold different features of the same samples. In this scenario, one of the multiple data owners needs to be selected to calculate the encrypted loss value; after the calculation is completed, the encrypted loss value is sent to the other data owners, and once all data owners have the encrypted loss value, it is used for model training through a third party. For convenience of description, some embodiments of this specification select the second-party data owner, that is, the second terminal, to calculate the encrypted loss value.
  • the second terminal may also receive other encrypted private data from other terminals to jointly train and update the model.
  • other private data is held by other terminals, and the other private data, like the first private data and the second private data, corresponds to different features of the same samples.
  • the other terminal may be one terminal or multiple terminals.
  • the other privacy data of the other terminal may be determined by the product of the feature corresponding to the other terminal and the model parameter.
  • other terminals include a third terminal and a fourth terminal, and the characteristics and model parameters corresponding to the third terminal and the fourth terminal are Xc and Wc, and Xd and Wd, respectively.
  • the third terminal privacy data Uc may be Wc*Xc
  • the fourth terminal privacy data Ud may be Wd*Xd.
  • other terminals also need to use a third-party public key to encrypt their own private data, and transmit the encryption result to the second terminal.
  • the encryption process adopts homomorphic encryption.
  • after receiving the encrypted other private data from the other terminals, the second terminal computes on the encrypted first private data, the encrypted other private data, and its own encrypted second private data to obtain the encrypted result.
  • the encrypted data of the first private data, the encrypted data of the second private data, and the encrypted data of other private data may be summed to obtain the encrypted result.
  • the other terminals are the third terminal, the fourth terminal...the nth terminal.
  • if the encrypted data of the private data of the third terminal is [Uc], that of the fourth terminal is [Ud], and that of the nth terminal is [Un], then the encrypted result obtained by the sum operation is [Ua]+[Ub]+[Uc]+[Ud]+...+[Un], which is [Ua+Ub+Uc+Ud+...+Un].
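The n-party aggregation can be sketched as follows, with encryption mocked by a secret scalar (a stand-in for a real scheme such as Paillier) and three hypothetical data owners:

```python
S = 1_000_003
enc = lambda v: [x * S for x in v]
dec = lambda v: [x / S for x in v]

# Each party's Ui vector (one entry per sample); values are made up.
parties = {
    "A": [1.0, 0.5],
    "B": [-0.5, 0.25],
    "C": [2.0, -1.0],   # a hypothetical third data owner
}
encrypted = [enc(u) for u in parties.values()]
agg = [sum(col) for col in zip(*encrypted)]   # ciphertext sum per sample
total = dec(agg)                              # equals Ua + Ub + Uc
```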
  • for the specific encryption operation process, refer to the relevant example in FIG. 2.
  • the second terminal may calculate the encryption loss value [d] of the joint training model based on the encrypted result and the sample label y, and send the encryption loss value [d] to other terminals.
  • after receiving the encrypted loss value [d], the other terminals can calculate their own encrypted gradients, determine their own decrypted gradient values by adding a mask and using the third-party decryption method, and then update their own model parameters based on their own decrypted gradient values.
  • for details, refer to the related description of the first terminal, or to step 250 through step 256 in FIG. 2; the details are not repeated here.
  • the possible beneficial effects of the embodiments of this application include, but are not limited to: (1) joint training on multi-party data improves the utilization of data and the accuracy of the prediction model; (2) the homomorphic encryption method can improve the security of joint training on multi-party data; (3) even when the feature dimension is high, the scheme retains high computational efficiency; (4) with the participation of a third-party server, only one public key, the third-party public key, is used during encrypted model training to encrypt the data held by all parties, and the same data is encrypted with only one layer throughout the computation. In multi-party encrypted training without a third party, each data party needs to encrypt its data with one party's public key, and intermediate results need a second layer of encryption with the other party's public key during the computation. A homomorphic encryption scheme involving a third party can therefore improve computational efficiency.
  • the possible beneficial effects may be any one or a combination of the above, or any other beneficial effects that may be obtained.
  • this application uses specific words to describe the embodiments of the application.
  • “one embodiment”, “an embodiment”, and/or “some embodiments” mean a certain feature, structure, or characteristic related to at least one embodiment of the present application. Therefore, it should be emphasized that “one embodiment”, “an embodiment”, or “an alternative embodiment” mentioned twice or more in different places in this specification does not necessarily refer to the same embodiment.
  • some features, structures, or characteristics in one or more embodiments of the present application can be appropriately combined.
  • the computer storage medium may contain a propagated data signal containing a computer program code, for example on a baseband or as part of a carrier wave.
  • the propagated signal may have multiple manifestations, including electromagnetic forms, optical forms, etc., or a suitable combination.
  • the computer storage medium may be any computer-readable medium other than a computer-readable storage medium, and the medium may be connected to an instruction execution system, apparatus, or device to enable communication, propagation, or transmission of the program for use.
  • the program code located on the computer storage medium can be transmitted through any suitable medium, including radio, cable, fiber optic cable, RF, or similar medium, or any combination of the above medium.
  • the computer program code required for the operation of each part of this application can be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages.
  • the program code can run entirely on the user's computer, or as an independent software package on the user's computer, or partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing equipment.
  • the remote computer can be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or connected to an external computer (for example, via the Internet), or in a cloud computing environment, or used as a service, such as software as a service (SaaS).
  • numbers describing quantities of components and attributes are used in places; it should be understood that such numbers used in the description of the embodiments are modified in some examples by "about", "approximately", or "substantially". Unless otherwise stated, "about", "approximately", or "substantially" indicates that the stated number is allowed to vary by ±20%.
  • accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations, and the approximations can change according to the required characteristics of individual embodiments. In some embodiments, a numerical parameter should take into account the specified significant digits and use a general digit-retention method. Although the numerical ranges and parameters used to confirm the breadth of ranges in some embodiments of this application are approximations, in specific embodiments such values are set as precisely as is feasible.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method and system for model training based on private data. The method includes: a second terminal receives encrypted first private data from a first terminal, the first private data being determined by its corresponding features and model parameters; the second terminal computes at least on the encrypted first private data and the encrypted data of the second private data to obtain an encrypted result, the second private data being determined by its corresponding features and model parameters; based on the encrypted result and the sample labels, the second terminal obtains an encrypted loss value of a model jointly trained based on at least the first private data and the second private data; through a third party, the encrypted loss value participates in the computation of a first decryption gradient and a second decryption gradient; the first decryption gradient and the second decryption gradient correspond to the first private data and the second private data respectively, and are used to update the jointly trained model.

Description

A method and system for model training based on private data
Technical Field
One or more embodiments of this specification relate to multi-party data cooperation, and in particular to a method and system for model training based on private data.
Background
In fields such as data analysis, data mining, and economic forecasting, machine learning models can be used to analyze and discover the potential value of data. Because the data held by a single data owner may be incomplete, it can be difficult to characterize a target accurately. To obtain better model prediction results, joint model training through the cooperation of multiple data owners is widely used. However, multi-party data cooperation involves issues such as data security and model security.
It is therefore necessary to provide a secure scheme for joint modeling based on multi-party data.
Summary of the Invention
One aspect of the embodiments of this specification provides a method for model training based on private data. The method includes: a second terminal receives encrypted first private data from a first terminal, the first private data being determined by its corresponding features and model parameters; the second terminal computes at least on the encrypted first private data and the encrypted data of the second private data to obtain an encrypted result, the second private data being determined by its corresponding features and model parameters; based on the encrypted result and the sample labels, the second terminal obtains an encrypted loss value of a model jointly trained based on at least the first private data and the second private data; through a third party, the encrypted loss value participates in the computation of a first decryption gradient and a second decryption gradient, which correspond to the first private data and the second private data respectively and are used to update the jointly trained model. The encryption is homomorphic encryption; the third party holds the public key of the homomorphic encryption and the corresponding private key; and the first private data and the second private data correspond to the same training samples.
Another aspect of the embodiments of this specification provides a system for model training based on private data. The system includes: a first data receiving module configured to receive encrypted first private data from a first terminal, the first private data being determined by its corresponding features and model parameters; an encrypted result determination module configured to compute at least on the encrypted first private data and the encrypted data of the second private data to obtain an encrypted result, the second private data being determined by its corresponding features and model parameters; an encrypted loss value determination module configured to obtain, based on the encrypted result and the sample labels, an encrypted loss value of a model jointly trained based on at least the first private data and the second private data; and a model parameter update module configured to have the encrypted loss value participate, through a third party, in the computation of a first decryption gradient and a second decryption gradient, which correspond to the first private data and the second private data respectively and are used to update the jointly trained model. The encryption is homomorphic encryption; the third party holds the public key of the homomorphic encryption and the corresponding private key; and the first private data and the second private data correspond to the same training samples.
Another aspect of the embodiments of this specification provides an apparatus for model training based on private data. The apparatus includes a processor and a memory; the memory is configured to store instructions, and the processor is configured to execute the instructions to implement the operations corresponding to the method for model training based on private data.
Another aspect of the embodiments of this specification provides a method for model training based on private data. The method includes: a first terminal receives an encrypted loss value from a second terminal; through a third party, the encrypted loss value participates in the computation of a first decryption gradient and a second decryption gradient, which correspond to the first private data and the second private data respectively and are used to update the jointly trained model. The encryption is homomorphic encryption; the first terminal and the second terminal hold the first private data and the second private data respectively, and the first private data and the second private data correspond to the same training samples.
Another aspect of the embodiments of this specification provides a system for model training based on private data. The system includes: an encrypted loss value receiving module configured to receive an encrypted loss value from a second terminal; and a model parameter update module configured to have the encrypted loss value participate, through a third party, in the computation of a first decryption gradient and a second decryption gradient, which correspond to the first private data and the second private data respectively and are used to update the jointly trained model. The encryption is homomorphic encryption; the first terminal and the second terminal hold the first private data and the second private data respectively, and the first private data and the second private data correspond to the same training samples.
Another aspect of the embodiments of this specification provides an apparatus for model training based on private data. The apparatus includes a processor and a memory; the memory is configured to store instructions, and the processor is configured to execute the instructions to implement the operations corresponding to the method for model training based on private data.
Brief Description of the Drawings
This specification is further described by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in these embodiments, the same reference numerals denote the same structures, where:
Fig. 1 is a diagram of an exemplary application scenario of a system for model training based on private data according to some embodiments of this specification;
Fig. 2 is an exemplary flowchart of a method for model training based on private data according to some embodiments of this specification; and
Fig. 3 is an exemplary flowchart of a method for model training based on private data according to some other embodiments of this specification.
Detailed Description
To explain the technical solutions of the embodiments of this application more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some examples or embodiments of this application; for those of ordinary skill in the art, this application may also be applied to other similar scenarios based on these drawings without creative effort. Unless obvious from the context or otherwise stated, the same reference numerals in the figures represent the same structures or operations.
It should be understood that "system", "apparatus", "unit", and/or "module" as used in this specification is a way of distinguishing different components, elements, parts, portions, or assemblies at different levels. However, these words may be replaced by other expressions that achieve the same purpose.
As used in this specification and the claims, unless the context clearly indicates otherwise, the words "a", "an", "one", and/or "the" do not refer specifically to the singular and may also include the plural. In general, the terms "comprise" and "include" only indicate the inclusion of explicitly identified steps and elements; these steps and elements do not constitute an exclusive list, and a method or device may also include other steps or elements.
Flowcharts are used in this specification to illustrate the operations performed by systems according to the embodiments of this specification. It should be understood that the preceding or following operations are not necessarily performed exactly in order. Instead, the steps may be processed in reverse order or simultaneously; other operations may also be added to these processes, or one or more steps may be removed from them.
Large amounts of information and data pervade industries such as economics, culture, education, medical care, and public administration, and data processing and analysis such as data analysis, data mining, and trend prediction are widely applied in more and more scenarios. Through data cooperation, multiple data owners can obtain better data processing results. For example, more accurate model parameters can be obtained through joint training on multi-party data.
In some embodiments, a system for joint model training based on private data can be applied to scenarios in which multiple parties collaboratively train a machine learning model for use by multiple parties while ensuring the data security of each party. In this scenario, multiple data parties have their own data and want to use each other's data for joint modeling (for example, a linear regression model or a logistic regression model), but do not want their respective data, especially their private data, to be leaked. For example, internet savings institution A has one batch of user data and government bank B has another batch of user data, and a training sample set determined from the user data of A and B can be used to train a better machine learning model. Both A and B are willing to participate in model training through each other's user data, but for certain reasons A and B are unwilling to have their own user data information leaked, or at least unwilling to let the other party learn their user data information.
A system for model training based on private data enables multiple parties to obtain a jointly used machine learning model through joint training on multi-party data without their private data being leaked, achieving a win-win cooperative state.
In some embodiments, joint training based on multi-party data can use garbled circuits or secret sharing to prevent the leakage of private data. However, when the feature dimension is large, the computational efficiency of garbled circuit or secret sharing schemes is not high. In some embodiments, the private data of each party can also be homomorphically encrypted, and the private data of each party can then participate in the model training computation in an encrypted state. Homomorphic encryption supports only product and/or sum operations, so the corresponding computation formulas need to be converted accordingly during use. In some scenarios with large feature dimensions, the homomorphic encryption scheme has high computational efficiency. In some embodiments, when modeling on private data with homomorphic encryption, computational efficiency can also be improved through the involvement of a third party. For example, multiple data owners each encrypt their private data and send it to the third party, which collects and processes it centrally and then distributes it to each data owner. With the participation of a third-party server, the multi-party data owners can encrypt their own data with the third party's public key, participate in the computation with the encrypted data, and finally send the encrypted computation results to the third party for decryption in a secure manner. Compared with schemes in which the parties encrypt data with their own public and private keys, the participation of a third party reduces the number of public keys used for encryption and the number of encryption layers, thereby avoiding the low computational efficiency caused by multiple public keys and multi-layer encryption, that is, it improves computational efficiency.
Fig. 1 is a diagram of an exemplary application scenario of a system for model training based on private data according to some embodiments of this specification.
In some embodiments, the system 100 for model training based on private data includes a first terminal 110, a second terminal 120, a third party 130, and a network 140. The first terminal 110 can be understood as the first-party data owner and includes a processing device 110-1 and a storage device 110-2; the second terminal 120 can be understood as the second-party data owner and includes a processing device 120-1 and a storage device 120-2. The third party 130 is not a data owner and does not hold the training data of the model; the third party participates, as an intermediary, in the joint training of the model based on the data of multiple data owners. Specifically, the multi-party data owners encrypt their own data with the third party's public key, use the encrypted data to participate in model training, and, when appropriate, send the encrypted computation results to the third party for decryption in a secure manner, thereby obtaining the values needed to update the model parameters. In some embodiments, the data held by the first-party and second-party data owners involves user-related information in different fields. For example, the data held by the two parties may include the amounts users deposit into bank accounts each year, or information such as the gender, age, income, and address of the user groups involved in an investment product or an insurance brand.
Note that, merely as an example, the number of data owners in Fig. 1 is two; in other embodiments, a third-party data owner, a fourth-party data owner, and so on may also be included.
The first terminal 110 and the second terminal 120 may be devices with data acquisition, storage, and/or transmission functions. In some embodiments, the first terminal 110 and the second terminal 120 may include, but are not limited to, mobile devices, tablet computers, laptop computers, desktop computers, etc., or any combination thereof. In some embodiments, the first terminal 110 and the second terminal 120 may receive related data from each other and may also receive related data from the third party 130. For example, the first terminal 110 may receive the encrypted loss value from the second terminal. As another example, the first terminal 110 and the second terminal 120 may receive the public key of the third party 130 from the third party 130. As another example, the first terminal 110 may also add a mask to the first encrypted gradient and send it to the third party 130.
The processing devices 110-1 and 120-1 of the first terminal and the second terminal can process data and/or instructions. The processing devices 110-1 and 120-1 can encrypt data and can also execute related algorithms and/or instructions. For example, the processing device 110-1 of the first terminal 110 may receive the public key from the third party 130 and encrypt the first private data with that public key, and may also use the encrypted loss value to participate in the joint training of the model. For example, the processing device 120-1 of the second terminal 120 may receive the public key from the third party 130 and encrypt the second private data with that public key, and may also calculate the encrypted loss value based on related algorithm instructions.
The storage devices 110-2 and 120-2 of the first terminal and the second terminal can store the data and/or instructions executed or used by the corresponding processing devices 110-1 and 120-1, which can execute or use the data and/or instructions to implement the exemplary methods in this specification. The storage devices 110-2 and 120-2 may be used to store the first private data and the second private data respectively, and may also store related instructions that direct the first terminal and the second terminal to perform operations. The storage devices 110-2 and 120-2 may also store data processed by the processing devices 110-1 and 120-1 respectively. For example, the storage devices 110-2 and 120-2 may also respectively store the model parameters of the features corresponding to the first private data and the model parameters of the features corresponding to the second private data. In some embodiments, the storage device 110-2 and the storage device 120-2 may also be a single storage device, from which the first terminal and the second terminal can each obtain only the data they stored themselves. In some embodiments, a storage device may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), etc., or any combination thereof.
The third party 130 has at least data and/or instruction processing capability. The third party includes at least a processing device with computing power, for example, a cloud server or a terminal processing device. In some embodiments, the third party 130 may send the public key to each data owner (for example, the first terminal 110 and the second terminal 120). In some embodiments, the third party 130 may perform decryption operations, for example, decrypting the masked first encrypted gradient from the first terminal 110. In some embodiments, the third party 130 may also have data and/or instruction storage capability, that is, the third party 130 may also include a storage device. The storage device may be used to store the public and private keys of the third party 130 and the operation instructions executed by the third party. In some embodiments, as a trusted party, the third party may belong to an impartial judicial institution or government department, or to an organization recognized by all data-owning parties.
The network 140 can facilitate the exchange of information and/or data. In some embodiments, one or more components of the system 100 for model training based on private data (for example, the first terminal 110 (processing device 110-1 and storage device 110-2) and the second terminal 120 (processing device 120-1 and storage device 120-2)) may send information and/or data to other components of the system 100 via the network 140. For example, the processing device 120-1 of the second terminal 120 may obtain the first private data from the first terminal 110 via the network 140. As another example, the processing device 110-1 of the first terminal 110 may obtain the first private data from the storage device 110-2 of the first terminal 110 via the network 140. In some embodiments, the network 140 may be any form of wired or wireless network, or any combination thereof.
The system in one or more embodiments of this specification may consist of a data receiving module and several data processing modules.
In some embodiments, in a system with the second terminal as the executing entity, the data receiving module includes a first data receiving module, and the data processing modules may include an encrypted result determination module, an encrypted loss value determination module, and a model parameter update module. The above modules are all executed in the computing system introduced in the application scenario; each module includes its own instructions, which can be stored on a storage medium and executed in a processor. Different modules may be located on the same device or on different devices. Data can be transferred between them through program interfaces, networks, etc., and data can be read from or written to a storage device.
The first data receiving module may be configured to receive encrypted first private data from the first terminal, the first private data being determined by its corresponding features and model parameters.
The encrypted result determination module may be configured to compute at least on the encrypted first private data and the encrypted data of the second private data to obtain an encrypted result; the second private data is determined by its corresponding features and model parameters.
The encrypted loss value determination module may be configured to obtain, based on the encrypted result and the sample labels, an encrypted loss value of a model jointly trained based on at least the first private data and the second private data. In some embodiments, when the jointly trained model includes a logistic regression model, the encrypted loss value determination module may also be configured to determine the encrypted loss value based on the Taylor expansion formula and the Sigmoid function.
The model parameter update module may be configured to have the encrypted loss value participate, through a third party, in the computation of the first decryption gradient and the second decryption gradient; the first decryption gradient and the second decryption gradient correspond to the first private data and the second private data respectively, and are used to update the jointly trained model. Here, the encryption is homomorphic encryption; the third party holds the public key of the homomorphic encryption and the corresponding private key; and the first private data and the second private data correspond to the same training samples. In some embodiments, the model parameter update module may also be configured to determine the second encrypted gradient based on the encrypted loss value and the features corresponding to the second private data. In some embodiments, the model parameter update module may also be configured to: determine a second mask gradient based on the second encrypted gradient and a second mask and transmit the second mask gradient to the third party; receive a second decryption result from the third party, the second decryption result corresponding to the second mask gradient; determine the second decryption gradient based on the second decryption result and the second mask; and update the jointly trained model based on the second decryption gradient.
In some embodiments, the system further includes another data receiving module, which may be configured to receive other private data from other terminals; the encrypted result determination module is further configured to compute on the encrypted first private data, the encrypted other private data, and the encrypted data of the second private data to obtain the encrypted result.
In some embodiments, in a system with the first terminal as the executing entity, the data receiving module includes an encrypted loss value receiving module, and the data processing module may include a model parameter update module. The data receiving module may be configured to receive the encrypted loss value from the second terminal. The model parameter update module may be configured to have the encrypted loss value participate in encrypted model training through a third party, obtaining a model with updated parameters.
It should be understood that the system and its modules in one or more embodiments of this specification may be implemented in various ways. For example, in some embodiments, the system and its modules may be implemented by hardware, software, or a combination of software and hardware. The hardware part may be implemented with dedicated logic; the software part may be stored in a memory and executed by an appropriate instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will understand that the above methods and systems may be implemented using computer-executable instructions and/or processor control code, for example provided on a carrier medium such as a disk, CD, or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules of this application may be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed, for example, by various types of processors, and also by a combination of the above hardware circuits and software (for example, firmware).
It should be noted that the above description of the processing devices and their modules is only for convenience of description and does not limit this application to the scope of the cited embodiments. It will be understood that, after understanding the principle of the system, those skilled in the art may combine the modules arbitrarily or form a subsystem connected to other modules without departing from this principle.
Fig. 2 is an exemplary flowchart of a method for model training based on private data according to some embodiments of this specification.
The variable names and formulas in this specification are only for a better understanding of the methods described herein. When applying this specification, based on common computational principles and machine learning principles, various non-substantive transformations may be made to the following processes, variable names, and formulas, such as swapping the order of rows or columns, transforming to equivalent forms in matrix multiplication, or expressing the same computation in other computational forms.
In this specification, the following conventions are used: the training data for the jointly trained model includes m data samples, and each sample includes n-dimensional features. The n-dimensional feature data of the m samples is held by at least the first-party data owner and the second-party data owner. For convenience, some embodiments of this specification take two data owners as an example for detailed description, denoting the first-party data owner and the second-party data owner by A and B respectively. The first-party data owner may also be called the first terminal, and the second-party data owner may also be called the second terminal.
In the notation of this specification, the first-party data owner A owns the data (Xa) corresponding to p-dimensional features of the m samples and the model parameters (Wa) corresponding to those p-dimensional features; the second-party data owner B owns the data (Xb) corresponding to the other q-dimensional features of the m samples and the model parameters (Wb) corresponding to those q-dimensional features. In this specification, the model parameters may also be referred to simply as the model. Xa is a matrix composed of m samples, each sample being a row vector of 1 row and p columns, that is, Xa is an m-by-p matrix. Wa is the parameter matrix of the p features corresponding to A, a p-by-1 matrix. Xb is an m-by-q matrix. Wb is the parameter matrix of the q features corresponding to B, a q-by-1 matrix, and p+q=n.
The label y is held by one of A and B; which party holds it has no substantive effect. In some embodiments of this specification, the label y is held by B, and y is an m-by-1 column vector.
In this specification, for brevity, the data column with a constant value of 1 added to the sample data in linear regression or logistic regression calculations, and the constant 1 added to the label, are not specially described, nor is a distinction made between n and n+1 in matrix calculations. This simplification has no substantive effect on the methods described in this specification.
In some embodiments, the third party may be an impartial judicial institution or government department, or an organization recognized by all data-owning parties. Specifically, the third party includes a processing device with at least computing capability, such as a server or a terminal processing device. The third party owns its own public key and private key and gives the public key to each terminal that owns data.
For any variable X, [X] denotes encrypting X with the third-party public key. When X is a matrix, this means encrypting each element of the matrix. Unless further specified, encryption may refer to any asymmetric encryption method.
The notation and variable names agreed above, and the formulas and other expressions appearing in this specification, are only for a better understanding of the methods described herein. When applying this specification, based on common computational principles, technical principles, and technical methods, various non-substantive transformations may be made to the notation, variable names, formulas, calculation methods, etc., without affecting their substance and the corresponding technical effects, for example but not limited to swapping the order of rows or columns, transforming to equivalent forms in matrix multiplication, or expressing the same computation in other forms.
Step 210: the third party sends the public key to A and B respectively.
The third party gives its own public key to data owner A and data owner B for later use in data encryption. For example, the third party may transmit its public key to A and B over the network.
Step 220: A and B compute Ua and Ub respectively, and encrypt them.
Each party computes the product of the model parameters and the feature data it holds, and encrypts its product result with the third party's public key. Data owner A sends the ciphertext data to data owner B.
In the notation of this specification, party A computes Ua = Xa × Wa, encrypts Ua with the third-party public key to obtain [Ua], and sends it to B. The resulting Ua and [Ua] are both m-by-1 matrices. Party B computes Ub = Xb × Wb and encrypts Ub with the third-party public key to obtain [Ub]; the resulting Ub and [Ub] are both m-by-1 matrices.
In one or more embodiments of this specification, the encryption algorithm used is a homomorphic encryption algorithm. A homomorphic encryption algorithm is one for which, for the encryption function f and any A and B, f(A)+f(B)=f(A+B) and f(A)×f(B)=f(A×B). In this embodiment: [Ua]+[Ub]=[Ua+Ub].
Step 230: B computes the encrypted loss value and sends it to A.
Data owner B, who holds both parties' encrypted data, adds the two parties' encrypted data together. Since the encryption algorithm is homomorphic, the summed value equals the encryption of the sum of the two parties' unencrypted data.
B then computes the loss value from the summed ciphertext data. When computing the loss value, a Taylor expansion can be used to approximate the Sigmoid function. Since a Taylor expansion consists of polynomial additions and multiplications, it is compatible with homomorphic encryption, so an approximate loss value can be computed in the encrypted state via the Taylor expansion.
In the notation of this specification, party B computes [z] = [Ua] + [Ub] = [Ua + Ub].
The Sigmoid function is approximated with its Taylor expansion:

    sigmoid(z) = 1/(1 + e^(-z)) = 1/2 + z/4 - z^3/48 + ...

Taking the first-order approximation as an example:

    ŷ = 1/2 + z/4

The loss value is then computed as:

    d = ŷ - y = 1/2 + z/4 - y

B computes the encrypted loss value:

    [d] = [1/2] + [z]/4 - [y]

where [z] = [Ua+Ub]; ŷ denotes the model's predicted value; and y denotes the label corresponding to the sample data. The resulting encrypted loss value [d] is an m-by-1 matrix.
Step 240: B computes the second encrypted gradient value.
Data owner B substitutes the encrypted loss value into the gradient descent formula, that is, performs a product operation between the encrypted loss value and the data corresponding to its own features, to compute the second encrypted gradient value.
In the notation of this specification, party B uses the gradient calculation formula:

    [Gb] = Xb^T × [d]

Through homomorphic multiplication, B obtains the second encrypted gradient value [Gb] encrypted with the third-party public key. The resulting second encrypted gradient value [Gb] is a q-by-1 matrix.
Step 242: B adds the second mask to the second encrypted gradient and sends it to the third party for decryption.
B adds to the second encrypted gradient value the second mask encrypted with the third-party public key and sends the result to the third party, which decrypts the received encrypted data with its own private key. The second mask, and the first mask mentioned later, are values set by the respective party, mainly to prevent the third party from learning the decrypted gradient values. A mask in one or more embodiments of this specification can be understood as any value that can participate in the encrypted computation; for example, the second mask may be -0.001, 0.1, 3, 300, and so on. This specification does not limit the range of the specific values of the first and second masks, as long as the above purpose is satisfied.
In the notation of this specification, B computes [Gb]+[mask2] and sends it to the third party. In this embodiment, mask2 is the second mask and has the same dimensions as the second gradient value Gb, so Gb+mask2 is also a q-by-1 matrix.
The third party obtains [Gb]+[mask2]. Since the encryption is homomorphic, [Gb]+[mask2]=[Gb+mask2], and the third party decrypts with its own private key to obtain Gb+mask2. Since the third party does not know the value of mask2, it cannot learn the value of Gb.
Step 244: B receives the decryption result returned by the third party.
The third party sends the decryption result carrying the second mask to data owner B; B receives the decryption result and removes the second mask to obtain B's second gradient value.
In the notation of this specification, in this embodiment, Gb+mask2 is the decryption result; party B receives Gb+mask2, removes mask2, and computes the second gradient value Gb = Gb + mask2 - mask2. The resulting second gradient value Gb is a q-by-1 matrix.
Step 246: B updates the model based on the second gradient value.
Data owner B obtains its own second gradient value and performs a product operation between the second gradient value and the learning rate to update the model.
在本说明书约定的表示中,B方计算更新Wb=Wb-learning_rate×Gb。在本说明书中,learning_rate表示在梯度下降法中的影响下降幅度的参数。
步骤250,A计算第一加密梯度值。
数据拥有者A将加密损失值代入梯度下降公式,即将加密损失值与自方的特征对应的数据做积运算,计算得到第一加密梯度值。
在本说明书约定的表示中,A方利用梯度计算公式计算:
Figure PCTCN2020125316-appb-000009
Figure PCTCN2020125316-appb-000010
A根据同态加密运算得到了用第三方公钥进行加密的第一加密梯度值[Ga]。由此得到的第一加密梯度值[Ga]是一个p行1列的矩阵。
步骤252,A将第一加密梯度值加上第一掩码,发给第三方解密。
A将第一加密梯度值加上用第三方公钥加密的第一掩码,并发给第三方,第三方对所接收的加密数据用己方私钥解密。
在本说明书约定的表示中,A计算[Ga]+[mask1],并发给第三方。在本实施例中,mask1为第一掩码,与第一梯度值Ga维度相同,因此Ga+mask1也是一个p行1列的矩阵。
步骤254,A接收第三方返回的解密结果。
第三方将带有第一掩码的解密结果发送给数据拥有者A,A接收到解密结果,并去除第一掩码,得到A方的第一梯度值。
在本说明书约定的表示中,在本实施例中,Ga+mask1为解密结果,A方接收到Ga+mask1,并去除mask1,计算第一梯度值Ga=Ga+mask1-mask1。由此得到第一梯度值Ga是一个p行1列的矩阵。
步骤256,A基于第一梯度值更新模型。
数据拥有者A计算得到自方的第一梯度值,并将第一梯度值与learning rate做积运算,更新模型。
在本说明书约定的表示中,A方计算更新Wa=Wa-learning_rate×Ga。
图3为根据本说明书的一些实施例所示的基于隐私数据进行模型训练的方法的示例性流程图。
在一些实施例中,方法300中的一个或以上步骤可以在图1所示的系统100中实现。例如,方法300中的一个或以上步骤可以作为指令的形式存储在存储设备110-2/存储设备120-2中,并被处理设备110-1/处理设备120-1调用和/或执行。
步骤310,第二终端接收来自第一终端的加密后的第一隐私数据。在一些实施例中,步骤310可以由第一数据接收模块执行。
在一些实施例中,第一终端可以是图2部分描述的数据拥有者A,第二终端可以是图2部分描述的数据拥有者B。
在一些实施例中,第一隐私数据由第一终端持有,第二终端持有第二隐私数据。其中,第一隐私数据和第二隐私数据对应于相同样本的不同特征(Xa和Xb)以及模型参数(Wa和Wb)。
在一些实施例中,第一隐私数据可以由第一特征与第一模型参数的乘积Ua确定,例如,第一隐私数据为Wa*Xa。对应地,第二隐私数据可以由第二特征与第二模型参数的乘积Ub确定,即为Wb*Xb。其中,对第一终端、第二终端;Ua、Ub;Wa、Xa;以及Wb、Xb的理解可参见图2中的相关说明。
在一些实施例中,第一终端采用第三方的公钥将所述第一隐私数据进行加密。关于第一隐私数据加密以及将加密后的数据传输给第二终端的具体描述可参见本说明书图2的步骤220。
在一些实施例中,第一隐私数据也可以是Wa和Xa,在一些实施例中,第二隐私数据也可以包括Wb和Xb。
本说明书一个或多个实施例中的“加密”指的是同态加密,即加密后计算的结果经过解密,得到的解密结果与未加密的原始数据计算结果相同。第三方持有实施例中“加密”所需的公钥以及对应的私钥。
在一些实施例中,数据拥有者持有的样本数据可以是保险、银行、医疗至少一个领域中的用户属性信息。例如,银行拥有该银行客户的身份信息、流水信息以及征信信息等;保险公司拥有该公司客户身份信息、历史购买保险信息、历史理赔信息、健康信息、车辆状况信息等;医疗机构拥有该机构病人身份信息、历史看病记录等。在一些实施例中,所述用户属性信息包括图像、文本或语音等。
在一些实施例中,数据拥有者拥有的模型可以根据样本数据的特征做出预测。例如,银行可以根据一二季度用户增长、增长用户身份、银行新增政策等数据的特征预测该行全年存款增长率。在一些实施例中,所述模型还可以用于确认用户的身份信息,所述用户的身份信息可以包括但不限于对用户的信用评价。
在一些实施例中,本说明书一个或多个实施例中的隐私数据(例如,第一隐私数据和第二隐私数据)可以包括与实体相关的隐私数据。在一些实施例中,实体可以理解为可视化的主体,可以包括但不限于用户、商户等。在一些实施例中,所述隐私数据可以包括图像数据、文本数据或声音数据。例如,隐私数据中的图像数据可以是用户的人脸图像、商户的logo图像、能够反映用户或商户信息的二维码图像等。例如,隐私数据中的文本数据可以是用户的性别、年龄、学历、收入等文本数据,或者是商户的交易商品类型、商户进行商品交易的时间以及所述商品的价格区间等等文本数据。例如,隐私数据的声音数据可以是包含了用户个人信息或用户反馈的相关语音内容,通过解析所述语音内容可得到对应的用户个人信息或用户反馈信息。
步骤320,第二终端至少将加密后的第一隐私数据与第二隐私数据的加密数据进行计算,得到加密后的结果。在一些实施例中,步骤320可以由加密结果确定模块执行。
在一些实施例中,加密后的结果可以理解为将第一隐私数据和第二隐私数据在加密的状态进行计算得到的结果。在一些实施例中,第一隐私数据的加密数据与第二隐私数据的加密数据之间可以采用和运算来得到加密后的结果。例如,第一隐私数据Ua的加密数据为[Ua],第二隐私数据Ub的加密数据为[Ub],那么通过和运算得到的加密后的结果为[Ua]+[Ub],即为[Ua+Ub]。具体的加密过程,可参见图2的步骤230。
步骤330,第二终端基于加密后的结果以及样本标签,得到至少基于第一隐私数据和第二隐私数据联合训练的模型的加密损失值。在一些实施例中,步骤330可以由加密损失值确定模块执行。
在一些实施例中,损失值可以用来反映训练模型预测值与样本数据真实值之间的差距。在一些实施例中,损失值可以通过参与运算的方式来反映预测值与真实值的差距。其中,不同训练模型的相关运算公式不同,相同训练模型下不同参数寻优算法对应的运算公式也不同。例如,本说明书图2给出的实施例中,损失值的计算公式为
d = ŷ - y ≈ 1/2 + (Ua+Ub)/4 - y
但本说明书一个或多个实施例并不会对确定损失值的运算公式进行限定。
在一些实施例中,第二终端可以基于加密后的结果[Ua+Ub],以及样本标签y,来计算联合训练模型的加密损失值,例如,图2中的[d]。其中,标签y可以由第一终端和第二终端中的任一方持有。
在一些实施例中,所述联合训练的模型可以包括线性回归模型;也可以包括逻辑回归模型。
在一些实施例中,当所述联合训练的模型包括逻辑回归模型时,需要运用Sigmoid函数计算损失值d。由于同态加密算法仅支持积运算以及和运算,因此,根据需要可以把Sigmoid函数用一个可以支持积运算以及和运算的近似函数进行替代,例如,在一些实施例中可以通过Taylor公式对Sigmoid函数进行展开,然后基于Sigmoid的Taylor展开公式来计算加密损失值,详细描述可参见图2中的步骤230。在其他实施例中,也可以采用其他可近似的函数来替代Sigmoid函数,或者也可以采用其他展开公式来对Sigmoid函数进行展开,只要所述的替代函数支持积运算和/或和运算,本说明书不做其他任何限制。
如果所述联合训练的模型是线性回归模型,可以使用线性函数来计算预测值
ŷ = z = Ua + Ub
在线性回归模型中,由于线性函数只包含和运算与积运算,可以直接使用同态加密算法,而不必使用Taylor展开式。具体的,以一次线性函数y=wx+b为例,结合同态加密算法,第二终端基于第一隐私数据和第二隐私数据的加和z,可计算得到加密损失值
[d] = [ŷ] + [-y] = [Ua+Ub] + [-y] = [Ua+Ub-y]
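作为编者补充的明文示意,下面的片段验证线性回归情形下损失值的计算(数据均为示例假设):

```python
# 线性回归:ŷ = z = Ua + Ub,损失值 d = ŷ - y
Ua = [0.3, 1.2]   # A 方的 Xa×Wa
Ub = [0.5, -0.2]  # B 方的 Xb×Wb
y  = [1.0, 0.0]   # 样本标签

# 同态加密下 [d] = [Ua] + [Ub] + [-y] = [Ua+Ub-y],
# 此处用明文验证等价的计算,无需 Taylor 展开
d = [ua + ub - yi for ua, ub, yi in zip(Ua, Ub, y)]
assert all(abs(a - b) < 1e-9 for a, b in zip(d, [-0.2, 1.0]))
```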
步骤340,将加密损失值通过第三方参与加密模型训练,得到参数更新的模型。在一些实施例中,步骤340可以由模型参数更新模块执行。
在一些实施例中,第三方可以是终端处理设备,也可以是服务器。其中,终端处理设备包括处理器和存储设备,例如,iPad、台式计算机、笔记本等。
在一些实施例中,将所述加密损失值通过第三方参与加密模型训练,可以理解为在第三方的参与下,利用加密损失值进行加密计算,最终通过解密的方式来获取能够进行模型参数更新的数值,进而得到参数更新的模型。
在一些实施例中,可以使用梯度下降法来获得参数更新的模型。具体的,可以基于得到的加密损失值计算求得加密梯度值参与模型训练,重复上述过程,直至迭代次数达到预定义的迭代次数上限值,或代入加密损失值后计算得到的误差小于预定义的数值,即得到训练好的模型。
在一些实施例中,可以运用梯度下降法使得损失值d最小。例如,在一些实施例中,可以基于所述加密损失值[d],以及第一隐私数据和第二隐私数据对应的特征Xa和Xb来确定第一终端的第一加密梯度[Ga]和第二终端的第二加密梯度[Gb]。在一些实施例中,第一终端和第二终端可以分别基于第一加密梯度[Ga]和第二加密梯度[Gb]来确定对应的第一解密梯度Ga和第二解密梯度Gb,并分别基于第一解密梯度Ga和第二解密梯度Gb更新模型参数,进而得到参数更新的模型。
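将上述各步骤串起来,可以写出一次迭代的明文模拟(编者补充的示意,省略加密与第三方解密环节,仅验证数学流程;矩阵维度与数值均为示例假设):

```python
# m=4 个样本;A 方特征 p=2 维,B 方特征 q=1 维
Xa = [[1.0, 0.5], [0.2, 1.0], [0.8, 0.3], [0.1, 0.9]]
Xb = [[0.4], [0.7], [0.2], [0.6]]
y  = [1, 0, 1, 0]
Wa, Wb = [0.1, 0.1], [0.1]
learning_rate = 0.1

def matvec(X, w):
    # 计算 X×w,得到 m 行 1 列的向量
    return [sum(xi * wi for xi, wi in zip(row, w)) for row in X]

# 步骤220/230:z = Ua + Ub,一阶Taylor近似下 d = 1/2 + z/4 - y
Ua, Ub = matvec(Xa, Wa), matvec(Xb, Wb)
d = [0.5 + (ua + ub) / 4.0 - yi for ua, ub, yi in zip(Ua, Ub, y)]

# 步骤240/250:G = X^T × d;步骤246/256:W = W - learning_rate × G
Ga = [sum(Xa[i][j] * d[i] for i in range(len(d))) for j in range(len(Wa))]
Gb = [sum(Xb[i][j] * d[i] for i in range(len(d))) for j in range(len(Wb))]
Wa = [w - learning_rate * g for w, g in zip(Wa, Ga)]
Wb = [w - learning_rate * g for w, g in zip(Wb, Gb)]

assert len(Ga) == 2 and len(Gb) == 1   # 梯度维度分别为 p 和 q
```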
在其他实施例中,也可以采用其他参数寻优方法来替代梯度下降法,如牛顿下降法等,本说明书一个或多个实施例对此不作任何限定。需要注意的是,在使用相应的算法时需要考虑到同态加密仅支持积运算和/或和运算,可以使用近似函数替换的方式来解决运算类型支持的问题。
在一些实施例中,第二终端基于所述加密损失值[d]以及第二隐私数据对应的特征Xb,确定第二加密梯度[Gb]的具体过程可以参考图2的步骤240。
在一些实施例中,第二终端可以采用添加掩码的方式基于第二加密梯度得到对应的第二解密梯度。具体的,第二终端基于所述第二加密梯度和第二掩码确定对应的第二掩码梯度,并将所述第二掩码梯度传输给持有加密私钥的第三方;所述第三方将接收到的第二掩码梯度进行解密,并将对应的第二解密结果传输给所述第二终端;所述第二终端基于接收到的所述第二解密结果以及所述第二掩码,去除第二掩码,得到第二解密梯度。在一些实施例中,所述第二掩码梯度可以理解为第二加密梯度与第二掩码的运算结果。在一些实施例中,所述运算可以包括积运算或和运算;所述第二掩码可以包括一个值,也可以包括多个值。例如,在一些实施例中,所述掩码mask2为一个值,所述运算为和运算,那么对应的掩码梯度可以为[Gb]+[mask2]。关于第二终端通过添加第二掩码方式获取第二解密梯度Gb的具体描述,可参见图2的步骤242和步骤244。
在一些实施例中,当通过积运算方式添加第二掩码时,所述第二掩码梯度可以为[Gb]×[mask2]。
在一些实施例中,第二终端基于所述第二解密梯度Gb更新所述联合训练的模型,具体的描述,可参见图2的步骤246。
在一些实施例中,第二终端确定加密损失值后,需要将加密损失值传递给第一终端,然后第一终端基于接收到的加密损失值通过第三方参与模型的联合训练。
在一些实施例中,第一终端可以基于接收到的加密损失值[d]以及第一隐私数据对应的特征Xa来确定第一加密梯度[Ga],具体过程可参考图2的步骤250。
在一些实施例中,第一终端也可以采用添加掩码的方式基于第一加密梯度得到对应的第一解密梯度。具体的,可以参见第二终端基于第二加密梯度得到对应的第二解密梯度过程,也可以参见图2的步骤252和步骤254。
在一些实施例中,第一终端基于所述第一解密梯度Ga更新所述联合训练的模型,具体的描述,可参见图2的步骤256。
在本说明书一个或多个实施例中,还包括三个或者更多个数据拥有者通过自己方的样本数据来联合训练机器学习模型。其中,多个数据拥有者持有相同样本的不同特征。在该场景下,需要从多个数据拥有者中任选一个,用于计算加密损失值,计算完成后再把加密损失值发送给其他数据拥有者,各方数据拥有者都具有加密损失值后,通过第三方将加密损失值参与模型的训练。为了方便说明,本说明书一些实施例选取第二方数据拥有者也就是第二终端,来计算加密损失值。
在一些实施例中,第二终端还可以接收来自其他终端的加密后的其他隐私数据来联合训练更新模型。其他隐私数据由其他终端持有,其他隐私数据与第一隐私数据,第二隐私数据对应于相同样本的不同特征。在一些实施例中,其他终端可以是一个终端,也可以是多个终端。
在一些实施例中,所述其他终端的其他隐私数据可以由其他终端对应的特征和模型参数的乘积确定。例如,其他终端包括第三终端和第四终端,第三终端和第四终端对应的特征和模型参数分别是Xc和Wc以及Xd和Wd。其中,第三终端隐私数据Uc可以是Wc*Xc,第四终端隐私数据Ud可以是Wd*Xd。
在一些实施例中,其他终端也需要利用第三方的公钥对自己的隐私数据进行加密,并把加密结果传输给第二终端。其中,所述加密过程采用的是同态加密。
在一些实施例中,接收来自其他终端的加密后的其他隐私数据后,第二终端基于加密后的第一隐私数据、加密后的其他隐私数据以及自己方的加密后的第二隐私数据,通过运算得到加密后的结果。在一些实施例中,第一隐私数据的加密数据、第二隐私数据的加密数据和其他隐私数据的加密数据之间可以采用和运算来得到加密后的结果。例如,其他终端是第三终端、第四终端…第n终端。其中,第三终端隐私数据的加密数据为[Uc],第四终端隐私数据的加密数据为[Ud],第n终端隐私数据的加密数据用[Un]表示,那么通过和运算得到的加密后的结果为[Ua]+[Ub]+[Uc]+[Ud]+…+[Un],即为[Ua+Ub+Uc+Ud+…+Un]。具体的加密运算过程,可参考图2中的相关示例。
在一些实施例中,第二终端可以基于加密后的结果,以及样本标签y,来计算联合训练模型的加密损失值[d],并把加密损失值[d]发送给其他终端。其他终端接收到加密损失值[d]后,可以计算自身的加密梯度,然后可以通过添加掩码,以及第三方解密的方式来确定自身的解密之后的梯度值,然后再基于自身的解密梯度值来更新自身的模型参数。详细描述可参考第一终端的相关描述或者参考图2中步骤250~步骤256,在此不再赘述。
应当注意的是,上述有关方法300的描述仅仅是为了示例和说明,而不限定本申请的适用范围。对于本领域技术人员来说,在本申请的指导下可以对方法300进行各种修正和改变。然而,这些修正和改变仍在本申请的范围之内。
本申请实施例可能带来的有益效果包括但不限于:(1)多方数据联合训练,提高数据的利用率,提高预测模型的准确性;(2)同态加密方式可以提高多方数据联合训练的安全性;(3)在特征维度较高时,也能具有较高的运算效率;(4)通过第三方服务器的参与,在模型加密训练的过程中,对所有方持有的数据来说,加密公钥只有一个,即第三方公钥;对同一个数据在整个运算过程中,加密的层次只有一个。在没有第三方参与的多方加密训练中,各数据方需要用其中一方的公钥对数据进行加密,在运算过程中,还需要对中间运算结果用另一方公钥进行双层加密。因此,有第三方参与的同态加密方案,可以提高运算效率。
需要说明的是,不同实施例可能产生的有益效果不同,在不同的实施例里,可能产生的有益效果可以是以上任意一种或几种的组合,也可以是其他任何可能获得的有益效果。
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本申请的限定。虽然此处并没有明确说明,本领域技术人员可能会对本申请进行各种修改、改进和修正。该类修改、改进和修正在本申请中被建议,所以该类修改、改进、修正仍属于本申请示范实施例的精神和范围。
同时,本申请使用了特定词语来描述本申请的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本申请至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一个替代性实施例”并不一定是指同一实施例。此外,本申请的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。
此外,本领域技术人员可以理解,本申请的各方面可以通过若干具有可专利性的种类或情况进行说明和描述,包括任何新的和有用的工序、机器、产品或物质的组合,或对他们的任何新的和有用的改进。相应地,本申请的各个方面可以完全由硬件执行、可以完全由软件(包括固件、常驻软件、微码等)执行、也可以由硬件和软件组合执行。以上硬件或软件均可被称为“数据块”、“模块”、“引擎”、“单元”、“组件”或“系统”。此外,本申请的各方面可能表现为位于一个或多个计算机可读介质中的计算机产品,该产品包括计算机可读程序编码。
计算机存储介质可能包含一个内含有计算机程序编码的传播数据信号,例如在基带上或作为载波的一部分。该传播信号可能有多种表现形式,包括电磁形式、光形式等,或合适的组合形式。计算机存储介质可以是除计算机可读存储介质之外的任何计算机可读介质,该介质可以通过连接至一个指令执行系统、装置或设备以实现通讯、传播或传输供使用的程序。位于计算机存储介质上的程序编码可以通过任何合适的介质进行传播,包括无线电、电缆、光纤电缆、RF、或类似介质,或任何上述介质的组合。
本申请各部分操作所需的计算机程序编码可以用任意一种或多种程序语言编写,包括面向对象编程语言如Java、Scala、Smalltalk、Eiffel、JADE、Emerald、C++、C#、VB.NET、Python等,常规程序化编程语言如C语言、VisualBasic、Fortran2003、Perl、COBOL2002、PHP、ABAP,动态编程语言如Python、Ruby和Groovy,或其他编程语言等。该程序编码可以完全在用户计算机上运行、或作为独立的软件包在用户计算机上运行、或部分在用户计算机上运行部分在远程计算机运行、或完全在远程计算机或处理设备上运行。在后一种情况下,远程计算机可以通过任何网络形式与用户计算机连接,比如局域网(LAN)或广域网(WAN),或连接至外部计算机(例如通过因特网),或在云计算环境中,或作为服务使用如软件即服务(SaaS)。
此外,除非权利要求中明确说明,本申请所述处理元素和序列的顺序、数字字母的使用、或其他名称的使用,并非用于限定本申请流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例,但应当理解的是,该类细节仅起到说明的目的,附加的权利要求并不仅限于披露的实施例,相反,权利要求旨在覆盖所有符合本申请实施例实质和范围的修正和等价组合。例如,虽然以上所描述的系统组件可以通过硬件设备实现,但是也可以只通过软件的解决方案得以实现,如在现有的处理设备或移动设备上安装所描述的系统。
同理,应当注意的是,为了简化本申请披露的表述,从而帮助对一个或多个发明实施例的理解,前文对本申请实施例的描述中,有时会将多种特征归并至一个实施例、附图或对其的描述中。但是,这种披露方法并不意味着本申请对象所需要的特征比权利要求中提及的特征多。实际上,实施例的特征要少于上述披露的单个实施例的全部特征。
一些实施例中使用了描述成分、属性数量的数字,应当理解的是,此类用于实施例描述的数字,在一些示例中使用了修饰词“大约”、“近似”或“大体上”来修饰。除非另外说明,“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地,在一些实施例中,说明书和权利要求中使用的数值参数均为近似值,该近似值根据个别实施例所需特点可以发生改变。在一些实施例中,数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本申请一些实施例中用于确认其范围广度的数值域和参数为近似值,在具体实施例中,此类数值的设定在可行范围内尽可能精确。
针对本申请引用的每个专利、专利申请、专利申请公开物和其他材料,如文章、书籍、说明书、出版物、文档等,特此将其全部内容并入本申请作为参考。与本申请内容不一致或产生冲突的申请历史文件除外,对本申请权利要求最广范围有限制的文件(当前或之后附加于本申请中的)也除外。需要说明的是,如果本申请附属材料中的描述、定义、和/或术语的使用与本申请所述内容有不一致或冲突的地方,以本申请的描述、定义和/或术语的使用为准。
最后,应当理解的是,本申请中所述实施例仅用以说明本申请实施例的原则。其他的变形也可能属于本申请的范围。因此,作为示例而非限制,本申请实施例的替代配置可视为与本申请的教导一致。相应地,本申请的实施例不仅限于本申请明确介绍和描述的实施例。

Claims (22)

  1. 一种基于隐私数据进行模型训练的方法;所述方法包括:
    第二终端接收来自第一终端的加密后的第一隐私数据;所述第一隐私数据由与其对应的特征和模型参数确定;
    第二终端至少将加密后的第一隐私数据与第二隐私数据的加密数据进行计算,得到加密后的结果;所述第二隐私数据由与其对应的特征和模型参数确定;
    第二终端基于所述加密后的结果以及样本标签,得到至少基于所述第一隐私数据和所述第二隐私数据联合训练的模型的加密损失值;
    通过第三方将所述加密损失值参与第一解密梯度和第二解密梯度的计算;所述第一解密梯度和第二解密梯度分别与所述第一隐私数据和第二隐私数据对应,所述第一解密梯度和第二解密梯度用于更新所述联合训练的模型;
    其中,所述加密为同态加密;所述第三方持有所述同态加密的公钥以及对应的私钥;所述第一隐私数据和所述第二隐私数据对应于相同的训练样本。
  2. 根据权利要求1所述的方法,所述联合训练的模型包括线性回归模型或逻辑回归模型。
  3. 根据权利要求1所述的方法,当所述联合训练的模型包括逻辑回归模型时,所述基于所述加密后的结果以及样本标签,得到至少基于所述第一隐私数据和所述第二隐私数据联合训练的模型的加密损失值包括:
    第二终端基于泰勒展开公式以及Sigmoid函数确定所述加密损失值。
  4. 根据权利要求1所述的方法,所述通过第三方将所述加密损失值参与第一解密梯度和第二解密梯度的计算包括:
    第二终端基于所述加密损失值以及所述第二隐私数据对应的特征,确定第二加密梯度。
  5. 根据权利要求4所述的方法,所述通过第三方将所述加密损失值参与第一解密梯度和第二解密梯度的计算还包括:
    第二终端基于所述第二加密梯度以及第二掩码,确定第二掩码梯度,并将所述第二掩码梯度传输给所述第三方;
    第二终端接收来自第三方的第二解密结果;所述第二解密结果对应于所述第二掩码梯度;
    第二终端基于所述第二解密结果以及第二掩码,确定第二解密梯度,并基于所述第二解密梯度更新联合训练的模型。
  6. 根据权利要求1所述的方法,所述方法还包括:接收来自其他终端的其他隐私数据;所述其他隐私数据由与其对应的特征和模型参数确定;所述至少将加密后的第一隐私数据与第二隐私数据的加密数据进行计算,得到加密后的结果包括:
    第二终端将加密后的第一隐私数据、加密后的其他隐私数据以及所述第二隐私数据的加密数据进行计算,得到加密后的结果。
  7. 根据权利要求1所述的方法,所述第一隐私数据和所述第二隐私数据包括与实体相关的图像数据、文本数据或声音数据。
  8. 一种基于隐私数据进行模型训练的系统,所述系统包括:
    第一数据接收模块,用于接收来自第一终端的加密后的第一隐私数据;所述第一隐私数据由与其对应的特征和模型参数确定;
    加密结果确定模块,用于至少将加密后的第一隐私数据与第二隐私数据的加密数据进行计算,得到加密后的结果;所述第二隐私数据由与其对应的特征和模型参数确定;
    加密损失值确定模块,用于基于所述加密后的结果以及样本标签,得到至少基于所述第一隐私数据和所述第二隐私数据联合训练的模型的加密损失值;
    模型参数更新模块,用于通过第三方将所述加密损失值参与第一解密梯度和第二解密梯度的计算;所述第一解密梯度和第二解密梯度分别与所述第一隐私数据和第二隐私数据对应,所述第一解密梯度和第二解密梯度用于更新所述联合训练的模型;
    其中,所述加密为同态加密;所述第三方持有所述同态加密的公钥以及对应的私钥;所述第一隐私数据和所述第二隐私数据对应于相同的训练样本。
  9. 根据权利要求8所述的系统,所述联合训练的模型包括线性回归模型或逻辑回归模型。
  10. 根据权利要求8所述的系统,当所述联合训练的模型包括逻辑回归模型时,所述加密损失值确定模块还用于:
    基于泰勒展开公式以及Sigmoid函数确定所述加密损失值。
  11. 根据权利要求8所述的系统,所述模型参数更新模块还用于:
    基于所述加密损失值以及所述第二隐私数据对应的特征,确定第二加密梯度。
  12. 根据权利要求11所述的系统,所述模型参数更新模块还用于:
    基于所述第二加密梯度以及第二掩码,确定第二掩码梯度,并将所述第二掩码梯度传输给所述第三方;
    接收来自第三方的第二解密结果;所述第二解密结果对应于所述第二掩码梯度;
    基于所述第二解密结果以及第二掩码,确定第二解密梯度,并基于所述第二解密梯度更新联合训练的模型。
  13. 根据权利要求8所述的系统,所述系统还包括:其他数据接收模块,用于接收来自其他终端的其他隐私数据;所述其他隐私数据由与其对应的特征和模型参数确定;
    所述加密结果确定模块还用于:将加密后的第一隐私数据、加密后的其他隐私数据以及所述第二隐私数据的加密数据进行计算,得到加密后的结果。
  14. 根据权利要求8所述的系统,所述第一隐私数据和所述第二隐私数据包括与实体相关的图像数据、文本数据或声音数据。
  15. 一种基于隐私数据进行模型训练的装置,所述装置包括处理器以及存储器;所述存储器用于存储指令,所述处理器用于执行所述指令,以实现如权利要求1至7中任一项所述基于隐私数据进行模型训练的方法对应的操作。
  16. 一种基于隐私数据进行模型训练的方法,所述方法包括:
    第一终端接收来自第二终端的加密损失值;
    所述加密损失值通过第三方参与第一解密梯度和第二解密梯度的计算;所述第一解密梯度和第二解密梯度分别与所述第一隐私数据和第二隐私数据对应,所述第一解密梯度和第二解密梯度用于更新所述联合训练的模型;
    其中,所述加密为同态加密;所述第一终端和所述第二终端分别持有第一隐私数据和第二隐私数据,所述第一隐私数据和所述第二隐私数据对应于相同的训练样本。
  17. 根据权利要求16所述的方法,所述加密损失值通过第三方参与第一解密梯度和第二解密梯度的计算包括:
    第一终端基于所述加密损失值以及所述第一隐私数据对应的特征,确定第一加密梯度。
  18. 根据权利要求17所述的方法,所述加密损失值通过第三方参与第一解密梯度和第二解密梯度的计算还包括:
    第一终端基于所述第一加密梯度以及第一掩码,确定第一掩码梯度,并将所述第一掩码梯度传输给所述第三方;
    第一终端接收来自第三方的第一解密结果;所述第一解密结果对应于所述第一掩码梯度;
    第一终端基于所述第一解密结果以及第一掩码,确定第一解密梯度,并基于所述第一解密梯度更新联合训练的模型。
  19. 一种基于隐私数据进行模型训练的系统,所述系统包括:
    加密损失值接收模块,用于接收来自第二终端的加密损失值;
    模型参数更新模块,用于所述加密损失值通过第三方参与第一解密梯度和第二解密梯度的计算;所述第一解密梯度和第二解密梯度分别与所述第一隐私数据和第二隐私数据对应,所述第一解密梯度和第二解密梯度用于更新所述联合训练的模型;
    其中,所述加密为同态加密;所述第一终端和所述第二终端分别持有第一隐私数据和第二隐私数据,所述第一隐私数据和所述第二隐私数据对应于相同的训练样本。
  20. 根据权利要求19所述的系统,所述模型参数更新模块还用于:
    基于所述加密损失值以及所述第一隐私数据对应的特征,确定第一加密梯度。
  21. 根据权利要求20所述的系统,所述模型参数更新模块还用于:
    基于所述第一加密梯度以及第一掩码,确定第一掩码梯度,并将所述第一掩码梯度传输给所述第三方;
    接收来自第三方的第一解密结果;所述第一解密结果对应于所述第一掩码梯度;
    基于所述第一解密结果以及第一掩码,确定第一解密梯度,并基于所述第一解密梯度更新联合训练的模型。
  22. 一种基于隐私数据进行模型训练的装置,所述装置包括处理器以及存储器;所述存储器用于存储指令,所述处理器用于执行所述指令,以实现如权利要求16至18中任一项所述基于隐私数据进行模型训练的方法对应的操作。
PCT/CN2020/125316 2019-12-20 2020-10-30 一种基于隐私数据进行模型训练的方法及系统 WO2021120888A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911329590.1A CN111125735B (zh) 2019-12-20 2019-12-20 一种基于隐私数据进行模型训练的方法及系统
CN201911329590.1 2019-12-20

Publications (1)

Publication Number Publication Date
WO2021120888A1 true WO2021120888A1 (zh) 2021-06-24



Also Published As

Publication number Publication date
CN111125735A (zh) 2020-05-08
CN111125735B (zh) 2021-11-02

