CN112016698A - Factorization machine model construction method and device and readable storage medium - Google Patents


Info

Publication number
CN112016698A
CN112016698A
Authority
CN
China
Prior art keywords
party
parameter
model
sharing
secret sharing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010893538.5A
Other languages
Chinese (zh)
Inventor
高大山
鞠策
杨强
谭奔
郑文琛
杨柳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202010893538.5A priority Critical patent/CN112016698A/en
Publication of CN112016698A publication Critical patent/CN112016698A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a factorization machine model construction method, a device, and a readable storage medium. The factorization machine model construction method comprises: acquiring initialization model parameters and first sparse data corresponding to a preset initialization model, and performing secret sharing with a second device based on the initialization model parameters to obtain first-party secret sharing initial model parameters, the second device correspondingly determining second-party secret sharing initial model parameters; performing federated interaction with the second device based on a first non-zero part in the first sparse data and the first-party secret sharing initial model parameters, in combination with a second non-zero part in second sparse data acquired by the second device and the second-party secret sharing initial model parameters, to calculate a secret sharing model error; and updating the preset initialization model based on the secret sharing model error to obtain a longitudinal federated factorization machine model. The method and the device solve the technical problem of low computational efficiency in federated learning based on sparse matrices.

Description

Factorization machine model construction method and device and readable storage medium
Technical Field
The application relates to the field of artificial intelligence of financial technology (Fintech), in particular to a factorization machine model construction method, equipment and a readable storage medium.
Background
With the continuous development of financial technologies, especially internet technology and finance, more and more technologies (such as distributed computing, blockchain, artificial intelligence, and the like) are applied to the financial field, but the financial industry also imposes higher requirements on these technologies.
With the continuous development of computer software and artificial intelligence, federated learning is applied in more and more fields. At present, the training data of federated learning are usually dense matrices, and the dense matrices are usually encrypted by a homomorphic encryption method, so that federated learning can be realized without revealing data privacy. When the training data are sparse matrices, however, this approach also encrypts and computes over the large number of zero entries, which leads to low computational efficiency.
Disclosure of Invention
The application mainly aims to provide a factorization machine model construction method, a device, and a readable storage medium, and aims to solve the technical problem in the prior art of low computational efficiency in federated learning based on sparse matrices.
In order to achieve the above object, the present application provides a factorization machine model construction method applied to a factorization machine model construction device, the factorization machine model construction method including:
acquiring initialization model parameters and first sparse data corresponding to a preset initialization model, and performing secret sharing with a second device based on the initialization model parameters to obtain first-party secret sharing initial model parameters, for the second device to determine second-party secret sharing initial model parameters;
performing federated interaction with the second device based on a first non-zero part in the first sparse data and the first-party secret sharing initial model parameters, so as to combine a second non-zero part in second sparse data acquired by the second device with the second-party secret sharing initial model parameters, and calculating a secret sharing model error;
and updating the preset initialization model based on the secret sharing model error to obtain a longitudinal federated factorization machine model.
The application also provides a personalized recommendation method, which is applied to personalized recommendation equipment and comprises the following steps:
acquiring sparse data of a first-party user to be recommended, and performing secret sharing with the second device to obtain secret sharing model parameters;
performing longitudinal federated prediction interaction with the second device based on a first non-zero part in the sparse data of the first-party user to be recommended and the secret sharing model parameters, so as to score the items to be recommended corresponding to the sparse data of the first-party user to be recommended and obtain a first secret sharing scoring result;
performing aggregation interaction with the second device based on the first secret sharing scoring result to combine a second secret sharing scoring result determined by the second device to calculate a target scoring result;
and generating a target recommendation list corresponding to the item to be recommended based on the target scoring result.
The present application further provides a factorization machine model construction device, which is a virtual device applied to the factorization machine model construction equipment, the factorization machine model construction device comprising:
the secret sharing module is used for acquiring initialization model parameters and first sparse data corresponding to a preset initialization model, and performing secret sharing with the second device based on the initialization model parameters to obtain first-party secret sharing initial model parameters, for the second device to determine the second-party secret sharing initial model parameters;
the error calculation module is used for performing federated interaction with the second device based on a first non-zero part in the first sparse data and the first-party secret sharing initial model parameters, so as to jointly calculate a secret sharing model error with a second non-zero part in second sparse data obtained by the second device and the second-party secret sharing initial model parameters;
and the generating module is used for updating the preset initialization model based on the secret sharing model error to obtain a longitudinal federated factorization machine model.
The present application further provides a personalized recommendation device, which is a virtual device applied to the personalized recommendation equipment, the personalized recommendation device comprising:
the secret sharing module is used for acquiring sparse data of the first-party user to be recommended and performing secret sharing with the second device to obtain secret sharing model parameters;
the scoring module is used for performing longitudinal federated prediction interaction with the second device based on a first non-zero part in the sparse data of the first-party user to be recommended and the secret sharing model parameters, so as to score the items to be recommended corresponding to the sparse data of the first-party user to be recommended and obtain a first secret sharing scoring result;
the aggregation module is used for performing aggregation interaction with the second device based on the first secret sharing scoring result, so as to combine a second secret sharing scoring result determined by the second device to calculate a target scoring result;
and the generating module is used for generating a target recommendation list corresponding to the item to be recommended based on the target scoring result.
The present application further provides a factorization machine model construction device, which is an entity device, and includes: a memory, a processor, and a program of the factorization machine model construction method stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the factorization machine model construction method as described above.
The present application further provides a personalized recommendation device, which is an entity device, and includes: a memory, a processor, and a program of the personalized recommendation method stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the personalized recommendation method as described above.
The present application also provides a readable storage medium having stored thereon a program for implementing a factorization machine model construction method, which when executed by a processor implements the steps of the factorization machine model construction method as described above.
The present application also provides a readable storage medium, on which a program for implementing a personalized recommendation method is stored, and when executed by a processor, the program for implementing the personalized recommendation method implements the steps of the personalized recommendation method as described above.
Compared with the prior art, which performs federated learning based on homomorphic encryption, the present application acquires initialization model parameters and first sparse data corresponding to a preset initialization model and performs secret sharing with the second device, whereby the first device obtains first-party secret sharing initial model parameters and the second device obtains second-party secret sharing initial model parameters. Federated interaction is then performed with the second device based on a first non-zero part in the first sparse data and the first-party secret sharing initial model parameters, in combination with a second non-zero part in the second sparse data acquired by the second device and the second-party secret sharing initial model parameters, to calculate the secret sharing model error. During the federated interaction, only the first non-zero part of the first sparse data and the second non-zero part of the second sparse data are used for calculation, so computation over the zero parts of the first sparse data and the second sparse data is avoided, which greatly reduces the amount and complexity of calculation in the federated interaction process. The preset initialization model is then updated based on the secret sharing model error to obtain the longitudinal federated factorization machine model, thereby overcoming the technical problem of low computational efficiency in federated learning based on sparse matrices.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, for those of ordinary skill in the art, other drawings can also be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flowchart of a first embodiment of the factorization machine model construction method of the present application;
FIG. 2 is a schematic flowchart of a second embodiment of the factorization machine model construction method of the present application;
FIG. 3 is a schematic flowchart of a third embodiment of the personalized recommendation method of the present application;
FIG. 4 is a schematic structural diagram of the hardware operating environment related to the factorization machine model construction method in the embodiments of the present application;
FIG. 5 is a schematic structural diagram of the hardware operating environment related to the personalized recommendation method in the embodiments of the present application.
The objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the factorization machine model construction method of the present application, referring to fig. 1, the factorization machine model construction method is applied to a first device, and the factorization machine model construction method includes:
step S10, acquiring initialization model parameters and first sparse data corresponding to a preset initialization model, and performing secret sharing with second equipment based on the initialization model parameters to acquire first party secret sharing initial model parameters for the second equipment to determine second party secret sharing initial model parameters;
in this embodiment, it should be noted that the first device and the second device are both participants of longitudinal federal learning, the first device possesses first sparse data with sample labels, and the first sparse data can be represented by a first sparse matrix and the sample labels, for example, assuming that the first sparse data is (X)A,Y),XAFor the first sparse matrix, Y is the sample label, and additionally, the second device has second sparse data without sample label, which can be represented by a second sparse matrix, e.g., assuming X is the second sparse matrixB
Additionally, in this embodiment, the factorization machine model is a machine learning model constructed based on longitudinal federated learning, and the model parameters of the factorization machine model are jointly held by the first device and the second device, where the model parameters can be represented by matrices. After the factorization machine model is initialized, its model parameters include the initialization model parameters belonging to the first device and the second-party initialization model parameters belonging to the second device. The initialization model parameters include a first-party first-type model parameter vector, a first-party second-type model parameter matrix, and a first-party transpose matrix corresponding to the first-party second-type model parameter matrix, and the second-party initialization model parameters include a second-party first-type model parameter vector, a second-party second-type model parameter matrix, and a second-party transpose matrix corresponding to the second-party second-type model parameter matrix. For example, if the first-party first-type model parameter vector is w_A, the second-party first-type model parameter vector is w_B, the first-party second-type model parameter matrix is V_A, and the second-party second-type model parameter matrix is V_B, then the model parameters of the factorization machine model are the first-type model parameter w = [w_A, w_B] and the second-type model parameter V = [V_A, V_B].
Additionally, it should be noted that secretly sharing a piece of data is the process of splitting the data into two shares of sub-data, which are held respectively by the two parties of the secret sharing. For example, if the two parties of the secret sharing are A and B and the data X is secretly shared, then A holds the first share [[X]]_A of the data X, B holds the second share [[X]]_B of the data X, and X = [[X]]_A + [[X]]_B.
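As a concrete illustration, additive secret sharing of this kind can be sketched in a few lines of Python (the field size and function names below are illustrative assumptions, not taken from the application):

```python
import random

FIELD = 2**61 - 1  # shares are taken modulo a large prime, so one share alone reveals nothing

def share(x):
    """Split x into two additive shares: x = ([[x]]_A + [[x]]_B) mod FIELD."""
    share_a = random.randrange(FIELD)       # party A's share, uniformly random
    share_b = (x - share_a) % FIELD         # party B's share completes the sum
    return share_a, share_b

def reconstruct(share_a, share_b):
    """Recover x when both parties combine their shares."""
    return (share_a + share_b) % FIELD

x = 123456
xa, xb = share(x)
assert reconstruct(xa, xb) == x
```

Only when both shares are combined is the original value recovered; a single share is uniformly distributed and therefore carries no information about X.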
Additionally, it should be noted that the model expression of the factorization machine model is as follows:
z(x) = <w, x> + Σ_{i<j} <V_i, V_j> x_i x_j
wherein x is the data matrix corresponding to the model input data, the model input data comprising the first sparse data (X_A, Y) and the second sparse data X_B, wherein Y is the sample label, X_A is the first sparse matrix with d_A feature dimensions, and X_B is the second sparse matrix with d_B feature dimensions; the first-type model parameter is w, where w is a d-dimensional vector; the second-type model parameter is V, where V is a d × d_X dimensional matrix; and w = [w_A, w_B], that is, w is composed of the first-party first-type model parameter vector w_A and the second-party first-type model parameter vector w_B, where w_A is a d_A-dimensional vector and w_B is a d_B-dimensional vector; additionally, V = [V_A, V_B], where V is composed of the first-party second-type model parameter matrix V_A and the second-party second-type model parameter matrix V_B, where V_A is a d_A × d_X dimensional matrix and V_B is a d_B × d_X dimensional matrix; <w, x> is the inner product of w and x; V_i is the column vector of the i-th column of V; V_j is the column vector of the j-th column of V; x_i is the column vector of the i-th column of x; and x_j is the column vector of the j-th column of x.
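For reference, the model expression above can be evaluated directly in plaintext (no federation or secret sharing); the sketch below stores one factor vector per feature, so `V[i]` plays the role of V_i in the formula. All sizes and values are illustrative:

```python
def fm_predict(w, V, x):
    """Factorization machine output z(x) = <w, x> + sum_{i<j} <V_i, V_j> x_i x_j."""
    d = len(x)
    linear = sum(w[i] * x[i] for i in range(d))
    pairwise = 0.0
    for i in range(d):
        for j in range(i + 1, d):
            # inner product of the factor vectors of features i and j
            dot_ij = sum(vi * vj for vi, vj in zip(V[i], V[j]))
            pairwise += dot_ij * x[i] * x[j]
    return linear + pairwise

# three features, two latent factors each
w = [1.0, 2.0, 3.0]
V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
assert fm_predict(w, V, [1.0, 0.0, 1.0]) == 5.0   # linear 1 + 3, plus <V_0, V_2> * 1 * 1
```

The double loop over i < j makes the pairwise cost quadratic in the number of active features, which is why restricting computation to non-zero entries matters for sparse data.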
Acquiring the initialization model parameters and the first sparse data corresponding to the preset initialization model, and performing secret sharing with the second device based on the initialization model parameters to obtain the first-party secret sharing initial model parameters, for the second device to determine the second-party secret sharing initial model parameters. Specifically, the factorization machine model is initialized to obtain the preset initialization model, and the initialization model parameters corresponding to the preset initialization model are acquired, wherein the second device acquires the second-party initialization model parameters and splits them into a first share of the second-party initialization model parameters and a second share of the second-party initialization model parameters. The first sparse data are then acquired, and the initialization model parameters are split into a first share of the initialization model parameters and a second share of the initialization model parameters, wherein the first share of the initialization model parameters comprises a first share of the first-party first-type model parameter vector, a first share of the first-party second-type model parameter matrix, and a first share of the first-party transpose matrix, and the second share of the initialization model parameters comprises a second share of the first-party first-type model parameter vector, a second share of the first-party second-type model parameter matrix, and a second share of the first-party transpose matrix. The second share of the initialization model parameters is then sent to the second device, for the second device to use the second share of the initialization model parameters and the first share of the second-party initialization model parameters as the second-party secret sharing initial model parameters. The second share of the second-party initialization model parameters sent by the second device is then received, and the first share of the initialization model parameters and the second share of the second-party initialization model parameters are used as the first-party secret sharing initial model parameters, wherein the first share of the second-party initialization model parameters comprises a first share of the second-party first-type model parameter vector, a first share of the second-party second-type model parameter matrix, and a first share of the second-party transpose matrix, and the second share of the second-party initialization model parameters comprises a second share of the second-party first-type model parameter vector, a second share of the second-party second-type model parameter matrix, and a second share of the second-party transpose matrix. For example, assume that the initialization model parameters are G_A and the second-party initialization model parameters are G_B; then, after secret sharing, the first device holds the first share [[G_A]]_A of the initialization model parameters and the second share [[G_B]]_A of the second-party initialization model parameters, the second device holds the second share [[G_A]]_B of the initialization model parameters and the first share [[G_B]]_B of the second-party initialization model parameters, and G_A = [[G_A]]_A + [[G_A]]_B, G_B = [[G_B]]_A + [[G_B]]_B.
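Continuing the G_A/G_B example, the share exchange leaves each device with one share of every parameter block. A scalar sketch (values, names, and the field size are illustrative assumptions):

```python
import random

FIELD = 2**61 - 1

def split(value):
    """Split a value into two uniformly random additive shares."""
    first = random.randrange(FIELD)
    return first, (value - first) % FIELD

G_A, G_B = 42, 7                    # scalar stand-ins for the two parameter blocks

GA_1, GA_2 = split(G_A)             # split by the first device
GB_1, GB_2 = split(G_B)             # split by the second device

# After exchanging one share each, matching the notation in the text:
first_device  = {"[[G_A]]_A": GA_1, "[[G_B]]_A": GB_2}   # G_A's 1st share, G_B's 2nd share
second_device = {"[[G_A]]_B": GA_2, "[[G_B]]_B": GB_1}   # G_A's 2nd share, G_B's 1st share

assert (first_device["[[G_A]]_A"] + second_device["[[G_A]]_B"]) % FIELD == G_A
assert (first_device["[[G_B]]_A"] + second_device["[[G_B]]_B"]) % FIELD == G_B
```

Neither device can reconstruct the other party's parameters on its own, yet together the shares determine G_A and G_B exactly.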
Step S20, performing federated interaction with the second device based on the first non-zero part in the first sparse data and the first-party secret sharing initial model parameter, so as to combine the second non-zero part in the second sparse data acquired by the second device and the second-party secret sharing initial model parameter, and calculate a secret sharing model error;
in this embodiment, it should be noted that the first non-zero part is each column vector in the first sparse matrix, and the second non-zero part is each column vector in the second sparse matrix.
Performing federated interaction with the second device based on a first non-zero part in the first sparse data and the first-party secret sharing initial model parameters, so as to combine a second non-zero part in the second sparse data acquired by the second device with the second-party secret sharing initial model parameters, and calculating a secret sharing model error. Specifically, federated interaction is performed with the second device based on the first non-zero part in the first sparse data and the first-party secret sharing initial model parameters, in combination with the second non-zero part in the second sparse data determined by the second device and the second-party secret sharing initial model parameters, to calculate each secret sharing error parameter item; the secret sharing model error is then calculated based on each secret sharing error parameter item, a secret sharing sample label, and a preset secret sharing model error calculation formula, wherein the secret sharing sample label is obtained by secretly sharing the sample label with the second device; the secret sharing sample label is the first share of the sample label, and the second share of the sample label is the second-party secret sharing sample label held by the second device.
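The computational saving claimed for the non-zero parts can be seen in a plain (non-federated) inner product: with a sparse row stored as an index-to-value map, the linear term touches only the non-zero entries. The storage format below is an illustrative assumption:

```python
# Sparse row kept as {feature_index: value}; absent indices are zero.
sparse_x = {2: 1.0, 7: 3.0}            # 2 non-zero entries out of 10 features
w = [0.5] * 10                          # dense weight vector

# Two multiplications instead of ten: zero entries are never visited.
linear = sum(w[i] * v for i, v in sparse_x.items())
assert linear == 2.0                    # 0.5*1.0 + 0.5*3.0
```

The same skip-the-zeros idea is what the federated protocol applies to the encrypted and secret-shared computations, where every avoided multiplication is far more expensive than in plaintext.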
Wherein the step of performing federated interaction with the second device based on the first non-zero part in the first sparse data and the first-party secret sharing initial model parameters to combine the second non-zero part in the second sparse data acquired by the second device and the second-party secret sharing initial model parameters comprises the steps of:
step S21, performing federated interaction with the second device based on a preset secret sharing multiplication triple, the first non-zero part, and the first party secret sharing initial model parameter, so as to combine the second non-zero part and the second party secret sharing initial model parameter, and calculate a sparse matrix security inner product and a secret sharing intermediate parameter;
in this embodiment, it should be noted that the secret sharing error parameter items are parameter items used for calculating secret sharing errors, and each of the secret sharing parameter items includes a coefficient matrix security internal product and a secret sharing intermediate parameter.
Federated interaction is performed with the second device based on a preset secret sharing multiplication triple, the first non-zero part, and the first-party secret sharing initial model parameters, in combination with the second non-zero part and the second-party secret sharing initial model parameters, to calculate the sparse matrix security inner product and the secret sharing intermediate parameter. Specifically, federated interaction is performed with the second device based on the first non-zero part and the first-party secret sharing initial model parameters, in combination with the second non-zero part and the second-party secret sharing initial model parameters, to calculate the sparse matrix security inner product; and federated interaction is performed with the second device based on the preset secret sharing multiplication triple, the first non-zero part, and the first-party secret sharing initial model parameters, in combination with a preset second-party secret sharing multiplication triple corresponding to the preset secret sharing multiplication triple, the second non-zero part, and the second-party secret sharing initial model parameters, to calculate the secret sharing intermediate parameter.
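The application does not spell out the construction of the secret sharing multiplication triple; a standard instantiation is a Beaver triple (a, b, c) with c = a·b, which lets two parties multiply secret-shared values using only local arithmetic plus the opening of two masked differences. The two-party sketch below uses illustrative names and field size:

```python
import random

FIELD = 2**61 - 1

def split(v):
    """Additive two-party sharing of v."""
    s = random.randrange(FIELD)
    return s, (v - s) % FIELD

# Dealer generates a multiplication triple c = a * b and shares each element.
a, b = random.randrange(FIELD), random.randrange(FIELD)
c = (a * b) % FIELD
a1, a2 = split(a); b1, b2 = split(b); c1, c2 = split(c)

# Secret inputs x and y, held as additive shares by the two parties.
x, y = 6, 7
x1, x2 = split(x); y1, y2 = split(y)

# Each party publishes its share of d = x - a and e = y - b; d and e are opened.
d = (x1 - a1 + x2 - a2) % FIELD
e = (y1 - b1 + y2 - b2) % FIELD

# Local share computation; the public d*e term is added by party 1 only.
z1 = (c1 + d * b1 + e * a1 + d * e) % FIELD
z2 = (c2 + d * b2 + e * a2) % FIELD

assert (z1 + z2) % FIELD == (x * y) % FIELD   # shares reconstruct to x*y = 42
```

Opening d and e leaks nothing about x and y because a and b are uniformly random masks, which is what makes triples attractive for the federated inner-product and intermediate-parameter computations described here.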
Wherein the first-party secret sharing initial model parameters comprise a first-type shared model parameter and a second-type shared model parameter, the second-party secret sharing initial model parameters comprise a second-party first-type shared model parameter and a second-party second-type shared model parameter, and the sparse matrix security inner product comprises a first-type sparse matrix security inner product and a second-type sparse matrix security inner product,
the step of calculating sparse matrix security inner-volume and secret sharing intermediate parameters based on a preset secret sharing multiplication triple, the first non-zero part and the first party secret sharing initial model parameters, and performing federated interaction with the second device to combine the second non-zero part and the second party secret sharing initial model parameters comprises:
step S211, performing federated interaction with the second device based on the first-type shared model parameter and the first non-zero part, so as to combine the second-party first-type shared model parameter and the second non-zero part, and calculate the first-type sparse matrix security inner product;
in this embodiment, it should be noted that the first type shared model parameter is a first type model parameter shared by a secret held by a first device, where the first type shared model parameter includes a fifth shared parameter and a sixth shared parameter, where the fifth shared parameter is a first share of a first party first type model parameter vector, the sixth shared parameter is a second share of a second party first type model parameter vector, the second party first type shared model parameter is a first type model parameter shared by a secret held by a second device, where the second party first type shared parameter includes a seventh shared parameter and an eighth shared parameter, where the seventh shared parameter is the second share of the first party first type model parameter vector, and the eighth shared parameter is the first share of the second party first type model parameter vector, the first-type sparse matrix security inner product comprises a third non-zero feature item cross inner product and a fourth non-zero feature item cross inner product, wherein the third non-zero feature item cross inner product is a cross feature item inner product of a first party first-type model parameter vector shared by secrets in the first device and the first non-zero part, that is, the third non-zero feature item cross inner product is an accumulated value of a product of the first party first-type model parameter vector shared by secrets in the first device and each column vector in the first non-zero part, wherein the third non-zero feature item cross inner product can be represented by a vector, the fourth non-zero feature item cross inner product is a cross feature item inner product of a second party first-type model parameter vector shared by secrets in the first device and the second non-zero part, and the fourth non-zero feature item cross inner product is each column direction in the second party first-type parameter vector and the second 
non-zero part shared by secrets in the first device An accumulated value of a product of quantities, wherein the fourth non-zero feature term cross inner product is representable by a vector.
Performing federated interaction with the second device based on the first-type shared model parameter and the first non-zero part, so as to combine the second-party first-type shared model parameter and the second non-zero part and calculate the first-type sparse matrix security inner product. Specifically, the second device generates a third public key and a third private key corresponding to the third public key, performs homomorphic encryption on the seventh shared parameter based on the third public key to obtain an encrypted seventh shared parameter, and sends the third public key and the encrypted seventh shared parameter to the first device. The first device receives the third public key and the encrypted seventh shared parameter, performs homomorphic encryption on the fifth shared parameter based on the third public key to obtain an encrypted fifth shared parameter, and calculates the sum of the encrypted fifth shared parameter and the encrypted seventh shared parameter, obtaining an encrypted first-party first-type model parameter vector. The first device then calculates the product of the encrypted first-party first-type model parameter vector and each first non-zero column vector in the first non-zero part of the first sparse matrix, obtaining a first vector product corresponding to each first non-zero column vector, where each row of the first sparse matrix corresponds to one sample dimension and each column of the first sparse matrix corresponds to one feature dimension, and accumulates the first vector products, obtaining a third encrypted inner product. Further, the first device constructs a uniformly distributed third non-zero feature item cross inner product consistent with the feature dimension of the third encrypted inner product, performs homomorphic encryption on the third non-zero feature item cross inner product based on the third public key to obtain an encrypted third non-zero feature item cross inner product, calculates the difference between the third encrypted inner product and the encrypted third non-zero feature item cross inner product, obtaining an encrypted second-party third non-zero feature item cross inner product, and sends the encrypted second-party third non-zero feature item cross inner product to the second device. The second device decrypts the encrypted second-party third non-zero feature item cross inner product based on the third private key to obtain the second-party third non-zero feature item cross inner product, which is the cross feature item inner product of the first-party first-type model parameter vector secretly shared in the second device and the first non-zero part; that is, the second-party third non-zero feature item cross inner product is the accumulated value of the products of the first-party first-type model parameter vector secretly shared in the second device and each column vector in the first non-zero part.
Similarly, the first device generates a fourth public key and a fourth private key corresponding to the fourth public key, performs homomorphic encryption on the sixth shared parameter based on the fourth public key to obtain an encrypted sixth shared parameter, and sends the fourth public key and the encrypted sixth shared parameter to the second device. The second device receives the fourth public key and the encrypted sixth shared parameter, performs homomorphic encryption on the eighth shared parameter based on the fourth public key to obtain an encrypted eighth shared parameter, and calculates the sum of the encrypted sixth shared parameter and the encrypted eighth shared parameter, obtaining an encrypted second-party first-type model parameter vector. The second device then calculates the product of the encrypted second-party first-type model parameter vector and each second non-zero column vector in the second non-zero part of the second sparse matrix, obtaining a second vector product corresponding to each second non-zero column vector, where each row of the second sparse matrix corresponds to one sample dimension and each column of the second sparse matrix corresponds to one feature dimension, and accumulates the second vector products, obtaining a fourth encrypted inner product. Further, the second device constructs a second-party fourth non-zero feature item cross inner product consistent with the feature dimension of the fourth encrypted inner product, where the second-party fourth non-zero feature item cross inner product is the cross feature item inner product of the second-party first-type model parameter vector secretly shared in the second device and the second non-zero part, that is, the accumulated value of the products of the second-party first-type model parameter vector secretly shared in the second device and each column vector in the second non-zero part. The second device performs homomorphic encryption on the second-party fourth non-zero feature item cross inner product based on the fourth public key to obtain an encrypted second-party fourth non-zero feature item cross inner product, calculates the difference between the fourth encrypted inner product and the encrypted second-party fourth non-zero feature item cross inner product, obtaining an encrypted fourth non-zero feature item cross inner product, and sends the encrypted fourth non-zero feature item cross inner product to the first device. The first device receives and decrypts the encrypted fourth non-zero feature item cross inner product based on the fourth private key, obtaining the fourth non-zero feature item cross inner product. The third non-zero feature item cross inner product and the fourth non-zero feature item cross inner product can be calculated at the same time or at different times.
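The patent does not fix a concrete homomorphic scheme, so as an illustrative assumption the encrypt, combine, mask, and decrypt exchange above is sketched below with a toy Paillier cryptosystem (additively homomorphic), reduced to a scalar parameter and a single non-zero feature value; all names, values, and key sizes are hypothetical.

```python
import random

def is_prime(n, rounds=20):
    """Miller-Rabin primality test (probabilistic)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def gen_prime(bits):
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(p):
            return p

class Paillier:
    """Toy Paillier cryptosystem (g = n + 1 variant); additively homomorphic.
    Illustration only: real deployments need much larger keys."""
    def __init__(self, bits=64):
        p = gen_prime(bits)
        q = gen_prime(bits)
        while q == p:
            q = gen_prime(bits)
        self.n, self.n2 = p * q, (p * q) ** 2
        self.phi = (p - 1) * (q - 1)            # private key
        self.mu = pow(self.phi, -1, self.n)
    def enc(self, m):
        r = random.randrange(1, self.n)
        return pow(1 + self.n, m % self.n, self.n2) * pow(r, self.n, self.n2) % self.n2
    def dec(self, c):
        return (pow(c, self.phi, self.n2) - 1) // self.n * self.mu % self.n
    def add(self, c1, c2):                      # Enc(m1) (+) Enc(m2) -> Enc(m1 + m2)
        return c1 * c2 % self.n2

# demo of the exchange: shares of w are combined under encryption, then the
# result is split again with a uniformly drawn mask (Q1 stays at the first
# device, Q3 goes back to the second device)
he = Paillier(bits=64)                     # key pair held by the second device
w_share_B, w_share_A = 17, 25              # [[w]]_B and [[w]]_A, so w = 42
x = 9                                      # a non-zero feature value at the first device
enc_w = he.add(he.enc(w_share_B), he.enc(w_share_A))   # Enc(w) from both shares
enc_wx = pow(enc_w, x, he.n2)                          # Enc(w * x), plaintext scaling
Q1 = random.randrange(1, 1000)                         # first device's random share
enc_Q3 = he.add(enc_wx, he.enc(-Q1))                   # Enc(w*x - Q1)
Q3 = he.dec(enc_Q3)                                    # decrypted by the key holder
assert (Q1 + Q3) % he.n == (w_share_A + w_share_B) * x
```

Masking the encrypted result with a uniformly drawn Q1 before decryption is what keeps the true product split between the two devices, mirroring the third and fourth cross inner product constructions.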
Additionally, the expressions of the third non-zero feature term cross inner product and the fourth non-zero feature term cross inner product are as follows:

Q1 = [[ Σ_{j=1}^{dA} wA · xA^j ]]_A

Q2 = [[ Σ_{j=1}^{dB} wB · xB^j ]]_A

wherein Q1 is the third non-zero feature term cross inner product, Q2 is the fourth non-zero feature term cross inner product, wA is the first-party first-type model parameter vector, xA^j is the j-th first non-zero column vector, XA is the first sparse matrix and dA is its feature dimension, wB is the second-party first-type model parameter vector, xB^j is the j-th second non-zero column vector, XB is the second sparse matrix and dB is its feature dimension, and [[·]]_A indicates that the content in the brackets is secret-shared data belonging to the first device. The expressions of the second-party third non-zero feature term cross inner product and the second-party fourth non-zero feature term cross inner product are as follows:

Q3 = [[ Σ_{j=1}^{dA} wA · xA^j ]]_B

Q4 = [[ Σ_{j=1}^{dB} wB · xB^j ]]_B

wherein Q3 is the second-party third non-zero feature term cross inner product, Q4 is the second-party fourth non-zero feature term cross inner product, and [[·]]_B indicates that the content in the brackets is secret-shared data belonging to the second device.
Additionally, the association relationship between the third non-zero feature term cross inner product and the second-party third non-zero feature term cross inner product, and the association relationship between the fourth non-zero feature term cross inner product and the second-party fourth non-zero feature term cross inner product, are as follows:

Q1 + Q3 = Σ_{j=1}^{dA} wA · xA^j

Q2 + Q4 = Σ_{j=1}^{dB} wB · xB^j
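Ignoring encryption, the association relationship can be checked with plain integers. Here the product of the parameter vector with the j-th non-zero column is read as the j-th weight times that column (one consistent reading of the prose), so the accumulated products equal the matrix-vector product of the sparse matrix with the parameter vector; the concrete values are hypothetical.

```python
import random

# first party's data: sparse matrix X_A (rows = samples, columns = features)
X_A = [[3, 0, 1],
       [0, 0, 4]]
w_A = [2, 5, 7]                     # first-party first-type model parameter vector

# additively secret-share w_A between the two devices
w_A_shareA = [random.randrange(-100, 100) for _ in w_A]
w_A_shareB = [w - s for w, s in zip(w_A, w_A_shareA)]

def linear_term(X, w):
    # accumulate w_j * (j-th column) over the non-zero columns only
    out = [0] * len(X)
    for j in range(len(w)):
        col = [row[j] for row in X]
        if any(col):                # zero feature columns contribute nothing
            for i, v in enumerate(col):
                out[i] += w[j] * v
    return out

Q1 = linear_term(X_A, w_A_shareA)   # computable from the first device's share
Q3 = linear_term(X_A, w_A_shareB)   # computable from the second device's share
assert [a + b for a, b in zip(Q1, Q3)] == linear_term(X_A, w_A)
```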
Step S212, performing federated interaction with the second device based on the second-type shared model parameters and the first non-zero part, so as to combine the second-party second-type shared model parameters and the second non-zero part and calculate the second-type sparse matrix security inner product;
In this embodiment, it should be noted that the second-type shared model parameter is the second-type model parameter secretly shared and held by the first device, where the second-type shared model parameter includes a first shared parameter and a third shared parameter: the first shared parameter is the second share of the second-party second-type model parameter matrix, and the third shared parameter is the first share of the first-party second-type model parameter matrix. The second-party second-type shared model parameter is the second-type model parameter secretly shared and held by the second device, and includes a second shared parameter and a fourth shared parameter: the second shared parameter is the first share of the second-party second-type model parameter matrix, and the fourth shared parameter is the second share of the first-party second-type model parameter matrix. The second-type sparse matrix security inner product includes a first non-zero feature item cross inner product and a second non-zero feature item cross inner product. The first non-zero feature item cross inner product is the cross feature item inner product of the second-party second-type model parameter matrix secretly shared in the first device and the second non-zero part, that is, the accumulated value of the products of each column vector in the second-party second-type model parameter matrix secretly shared in the first device and each column vector in the second non-zero part. The second non-zero feature item cross inner product is the cross feature item inner product of the first-party second-type model parameter matrix secretly shared in the first device and the first non-zero part, that is, the accumulated value of the products of each column vector in the first-party second-type model parameter matrix secretly shared in the first device and each column vector in the first non-zero part.
Similarly, federated interaction is performed with the second device based on the second-type shared model parameters and the first non-zero part, so as to combine the second-party second-type shared model parameters and the second non-zero part and calculate the second-type sparse matrix security inner product. Specifically, based on the first shared parameter, federated interaction is performed with the second device to combine the second shared parameter and the second non-zero part, the first non-zero feature item cross inner product is calculated, and the second device is assisted in calculating a second-party first non-zero feature item cross inner product corresponding to the first non-zero feature item cross inner product, where the second-party first non-zero feature item cross inner product is the cross feature item inner product of the second-party second-type model parameter matrix secretly shared in the second device and the second non-zero part, that is, the accumulated value of the products of each column vector in the second-party second-type model parameter matrix secretly shared in the second device and each column vector in the second non-zero part. Likewise, based on the third shared parameter and the first non-zero part, federated interaction is performed with the second device to combine the fourth shared parameter, the second non-zero feature item cross inner product is calculated, which is the accumulated value of the products of each column vector in the first-party second-type model parameter matrix secretly shared in the first device and each column vector in the first non-zero part, and the second device is assisted in calculating a second-party second non-zero feature item cross inner product, which is the accumulated value of the products of each column vector in the first-party second-type model parameter matrix secretly shared in the second device and each column vector in the first non-zero part. The first non-zero feature item cross inner product and the second non-zero feature item cross inner product can be calculated at the same time or at different times.
Additionally, the expressions of the first non-zero feature term cross inner product and the second non-zero feature term cross inner product are as follows:

R1 = [[ Σ_i Σ_j vB^i · xB^j ]]_A

R2 = [[ Σ_i Σ_j vA^i · xA^j ]]_A

wherein R1 is the first non-zero feature term cross inner product, R2 is the second non-zero feature term cross inner product, VA is the first-party second-type model parameter matrix and vA^i is one of its column vectors, xA^j is the j-th first non-zero column vector, XA is the first sparse matrix, VB is the second-party second-type model parameter matrix and vB^i is one of its column vectors, xB^j is the j-th second non-zero column vector, XB is the second sparse matrix, and [[·]]_A indicates that the content in the brackets is secret-shared data belonging to the first device. The expressions of the second-party first non-zero feature term cross inner product and the second-party second non-zero feature term cross inner product are as follows:

R3 = [[ Σ_i Σ_j vB^i · xB^j ]]_B

R4 = [[ Σ_i Σ_j vA^i · xA^j ]]_B

wherein R3 is the second-party first non-zero feature term cross inner product, R4 is the second-party second non-zero feature term cross inner product, and [[·]]_B indicates that the content in the brackets is secret-shared data belonging to the second device.
Additionally, the association relationship between the first non-zero feature term cross inner product and the second-party first non-zero feature term cross inner product, and the association relationship between the second non-zero feature term cross inner product and the second-party second non-zero feature term cross inner product, are as follows:

R1 + R3 = Σ_i Σ_j vB^i · xB^j

R2 + R4 = Σ_i Σ_j vA^i · xA^j
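Under the same plain-integer reading, and assuming the second-type parameters form a matrix with one row per feature (the latent layout is an assumption, since the patent does not spell it out), the accumulated per-column products reduce to additively shared matrix products, which can be checked directly:

```python
import random

X_B = [[1, 0, 2],
       [0, 3, 0]]                   # second sparse matrix (2 samples, 3 features)
V_B = [[1, 4],
       [2, 5],
       [3, 6]]                      # second-party second-type parameters (3 features, 2 latent dims)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# additively share V_B between the devices, element by element
V_shareA = [[random.randrange(-50, 50) for _ in row] for row in V_B]
V_shareB = [[v - s for v, s in zip(r1, r2)] for r1, r2 in zip(V_B, V_shareA)]

R1 = matmul(X_B, V_shareA)          # share computable at the first device
R3 = matmul(X_B, V_shareB)          # second-party counterpart
full = matmul(X_B, V_B)
assert [[a + b for a, b in zip(r1, r3)] for r1, r3 in zip(R1, R3)] == full
```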
step S213, performing federated interaction with the second device based on the second-type shared model parameter, the first non-zero part, and the preset secret-sharing multiplication triple, so as to combine the second-party second-type shared model parameter and the second non-zero part to calculate the secret-sharing intermediate parameter.
In this embodiment, it should be noted that the preset secret-sharing multiplication triple is the secret-sharing multiplication triple held by the first device, and the second device holds a second-party secret-sharing multiplication triple corresponding to the preset secret-sharing multiplication triple, where the sum of the preset secret-sharing multiplication triple and the second-party secret-sharing multiplication triple is the multiplication triple. The multiplication triple is an array composed of three parameters having a product relationship; for example, if the multiplication triple is (a, b, c), then the parameters satisfy the product relationship c = a * b.
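For example, such preset triples can be produced by an offline dealer phase (the dealer role is an assumption; the patent only states that the triples are preset) and handed to the two devices as additive shares:

```python
import random

MOD = 2 ** 31                  # illustrative modulus for the shares

def additive_share(v):
    """Split v into two additive shares modulo MOD."""
    s = random.randrange(MOD)
    return s, (v - s) % MOD

def deal_multiplication_triple():
    """Sample (a, b, c) with c = a * b and share each value between two parties."""
    a = random.randrange(MOD)
    b = random.randrange(MOD)
    c = a * b % MOD
    aA, aB = additive_share(a)
    bA, bB = additive_share(b)
    cA, cB = additive_share(c)
    return (aA, bA, cA), (aB, bB, cB)

first_share, second_share = deal_multiplication_triple()
aA, bA, cA = first_share       # preset secret-sharing multiplication triple
aB, bB, cB = second_share      # second-party secret-sharing multiplication triple
# the recombined triple satisfies the product relationship c = a * b
assert (cA + cB) % MOD == (aA + aB) % MOD * ((bA + bB) % MOD) % MOD
```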
Additionally, it should be noted that the second type of shared model parameters include the first shared parameter, a first shared transpose parameter corresponding to the first shared parameter, a third shared parameter, and a second shared transpose parameter corresponding to the third shared parameter, where the first shared transpose parameter is the second-party transpose matrix shared by secrets held by the first device, the second shared transpose parameter is the first-party transpose matrix shared by secrets held by the first device, the second-party second type of shared model parameters include the second shared parameter, a third shared transpose parameter corresponding to the second shared parameter, a fourth shared parameter, and a fourth shared transpose parameter corresponding to the fourth shared parameter, where the third shared transpose parameter is the second-party transpose matrix shared by secrets held by the second device, the fourth shared transpose parameter is the first party transpose matrix shared by secrets held by the second device.
Additionally, it should be noted that the secret sharing intermediate parameters include a first-party first secret sharing intermediate parameter and a first-party second secret sharing intermediate parameter, where the first-party first secret sharing intermediate parameter is the share of the first intermediate parameter held by the first device, and the first-party second secret sharing intermediate parameter is the share of the second intermediate parameter held by the first device. The first intermediate parameter is a secret sharing non-zero feature item cross inner product formed by the second-party second-type model parameter matrix, the second-party transposed matrix, the second non-zero part in the second sparse matrix, and the non-zero part in the transposed matrix corresponding to the second sparse matrix; that is, each value in the first intermediate parameter is a secret sharing product formed jointly by four column vectors: a column vector of the second-party second-type model parameter matrix, a column vector of the second-party transposed matrix, a column vector of the second non-zero part in the second sparse matrix, and a column vector of the non-zero part in the transposed matrix corresponding to the second sparse matrix. Similarly, the second intermediate parameter is a secret sharing non-zero feature item cross inner product formed by the first-party second-type model parameter matrix, the first-party transposed matrix, the first non-zero part in the first sparse matrix, and the non-zero part in the transposed matrix corresponding to the first sparse matrix; that is, each value in the second intermediate parameter is a secret sharing product formed jointly by a column vector of the first-party second-type model parameter matrix, a column vector of the first-party transposed matrix, a column vector of the first non-zero part in the first sparse matrix, and a column vector of the non-zero part in the transposed matrix corresponding to the first sparse matrix.
Federated interaction is performed with the second device based on the second-type shared model parameters, the first non-zero part, and the preset secret-sharing multiplication triple, so as to combine the second-party second-type shared model parameters and the second non-zero part and compute the secret sharing intermediate parameters. Specifically, based on the preset secret-sharing multiplication triple, the first shared parameter, and the first shared transpose parameter, federated interaction is performed with the second device to combine the second-party secret-sharing multiplication triple, the second shared parameter, and the third shared transpose parameter, compute a first-party first transpose matrix inner product, and assist the second device in computing a second-party first transpose matrix inner product. Then, based on the first-party first transpose matrix inner product, federated interaction is performed with the second device to combine the second-party first transpose matrix inner product and the second non-zero part, calculate the first-party first secret sharing intermediate parameter, and assist the second device in calculating a second-party first secret sharing intermediate parameter. In the same way, based on the preset secret-sharing multiplication triple, the third shared parameter, and the second shared transpose parameter, federated interaction is performed with the second device to combine the second-party secret-sharing multiplication triple, the fourth shared parameter, and the fourth shared transpose parameter, compute a first-party second transpose matrix inner product, and assist the second device in computing a second-party second transpose matrix inner product. Then, based on the first-party second transpose matrix inner product and the first non-zero part, federated interaction is performed with the second device to combine the second-party second transpose matrix inner product, calculate the first-party second secret sharing intermediate parameter, and assist the second device in calculating a second-party second secret sharing intermediate parameter, wherein the expressions of the first-party first secret sharing intermediate parameter and the first-party second secret sharing intermediate parameter are as follows:
T1 = [[ XB · VB · VB^T · XB^T ]]_A

T2 = [[ XA · VA · VA^T · XA^T ]]_A

wherein T1 is the first-party first secret sharing intermediate parameter, T2 is the first-party second secret sharing intermediate parameter, VB is the second-party second-type model parameter matrix, vB^j is a column vector of the second-party second-type model parameter matrix, XB is the second sparse matrix, xB^j is a column vector of the second non-zero part in the second sparse matrix, VA is the first-party second-type model parameter matrix, vA^j is a column vector of the first-party second-type model parameter matrix, XA is the first sparse matrix, and xA^j is a column vector of the first non-zero part in the first sparse matrix.
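Because the sparse matrix enters the expressions above linearly on both sides, additive shares of the transpose matrix inner product VB·VB^T pass through the sandwich XB·(·)·XB^T share by share. The following plain-integer sketch (encryption omitted; XB assumed available where it is applied, which the actual protocol arranges through the homomorphic exchange) checks that the two intermediate parameter shares recombine:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

X_B = [[1, 0, 2],
       [0, 3, 0]]                   # second sparse matrix
V_B = [[1, 4], [2, 5], [3, 6]]      # second-party second-type parameter matrix

M = matmul(V_B, transpose(V_B))     # V_B * V_B^T, assumed already secret-shared
M_shareA = [[random.randrange(-50, 50) for _ in row] for row in M]
M_shareB = [[v - s for v, s in zip(r1, r2)] for r1, r2 in zip(M, M_shareA)]

def sandwich(share):                # X_B * share * X_B^T, linear in the share
    return matmul(matmul(X_B, share), transpose(X_B))

T1_A, T1_B = sandwich(M_shareA), sandwich(M_shareB)
expected = sandwich(M)              # X_B * V_B * V_B^T * X_B^T
assert [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(T1_A, T1_B)] == expected
```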
Wherein the second-type shared model parameters include a second-type secret sharing parameter matrix and a secret sharing transposed parameter matrix corresponding to the second-type secret sharing parameter matrix, and the second-party second-type shared model parameters include a second-party second-type secret sharing parameter matrix and a second-party secret sharing transposed parameter matrix corresponding to the second-party second-type secret sharing parameter matrix,
the step of performing federated interaction with the second device based on the second-type shared model parameters, the first non-zero part, and the preset secret-sharing multiplication triple, so as to combine the second-party second-type shared model parameters and the second non-zero part and calculate the secret sharing intermediate parameters, comprises:
step a10, based on the preset secret sharing multiplication triple, through performing federated interaction with the second device, calculating a secret sharing product between the second-type secret sharing parameter matrix and the secret sharing transposed parameter matrix, and obtaining the secret sharing matrix inner product, so that the second device calculates a secret sharing product between the second-party second-type secret sharing parameter matrix and the second-party secret sharing transposed parameter matrix, and obtains the second-party secret sharing matrix inner product;
in this embodiment, it should be noted that the secret sharing matrix inner product includes a first party first transpose matrix inner product and a first party second transpose matrix inner product, the second party secret sharing matrix inner product includes a second party first transpose matrix inner product and a second party second transpose matrix inner product, the second type secret sharing parameter matrix includes a first sharing parameter matrix and a third sharing parameter matrix, where the first sharing parameter matrix is a matrix representation of a first sharing parameter, the third sharing parameter matrix is a matrix representation of a third sharing parameter, and the secret sharing transpose parameter matrix includes a first sharing transpose parameter matrix and a second sharing transpose parameter matrix, where the first sharing transpose parameter matrix is a matrix representation of the first sharing transpose parameter, and the second sharing transpose parameter matrix is a matrix representation of the second sharing transpose parameter, the second party second type secret sharing parameter matrix includes a second sharing parameter matrix and a fourth sharing parameter matrix, where the second sharing parameter matrix is a matrix representation of a second sharing parameter, the fourth sharing parameter matrix is a matrix representation of a fourth sharing parameter, and the second party secret sharing transposed parameter matrix includes a third sharing transposed parameter matrix and a fourth sharing transposed parameter matrix, where the third sharing transposed parameter matrix is a matrix representation of the third sharing transposed parameter, and the fourth sharing transposed parameter matrix is a matrix representation of the fourth sharing transposed parameter.
Based on the preset secret-sharing multiplication triple, a secret sharing product between the second-type secret sharing parameter matrix and the secret sharing transposed parameter matrix is calculated through federated interaction with the second device, obtaining the secret sharing matrix inner product, so that the second device calculates a secret sharing product between the second-party second-type secret sharing parameter matrix and the second-party secret sharing transposed parameter matrix and obtains the second-party secret sharing matrix inner product. Specifically, the first device blinds the first shared parameter matrix and the first shared transposed parameter matrix respectively based on the preset secret-sharing multiplication triple, obtaining a first-party first shared blinding parameter matrix corresponding to the first shared parameter matrix and a first-party second shared blinding parameter matrix corresponding to the first shared transposed parameter matrix. Similarly, the second device blinds the second shared parameter matrix and the third shared transposed parameter matrix respectively based on the second-party secret-sharing multiplication triple, obtaining a second-party first shared blinding parameter matrix corresponding to the second shared parameter matrix and a second-party second shared blinding parameter matrix corresponding to the third shared transposed parameter matrix, and sends the second-party first shared blinding parameter matrix and the second-party second shared blinding parameter matrix to the first device. The first device calculates the sum of the first-party first shared blinding parameter matrix and the second-party first shared blinding parameter matrix to obtain a first blinding parameter matrix, calculates the sum of the first-party second shared blinding parameter matrix and the second-party second shared blinding parameter matrix to obtain a second blinding parameter matrix, and sends the first-party first shared blinding parameter matrix and the first-party second shared blinding parameter matrix to the second device. The second device likewise calculates the sum of the first-party first shared blinding parameter matrix and the second-party first shared blinding parameter matrix to obtain the first blinding parameter matrix, and the sum of the first-party second shared blinding parameter matrix and the second-party second shared blinding parameter matrix to obtain the second blinding parameter matrix. The first device then calculates, based on a preset first-party transpose matrix inner product calculation formula, a first-party first transpose matrix inner product jointly corresponding to the preset secret-sharing multiplication triple, the first blinding parameter matrix, and the second blinding parameter matrix; similarly, the second device calculates, based on a preset second-party transpose matrix inner product calculation formula, a second-party first transpose matrix inner product jointly corresponding to the second-party secret-sharing multiplication triple, the first blinding parameter matrix, and the second blinding parameter matrix.
Further, the first device blinds the third shared parameter matrix and the second shared transposed parameter matrix respectively based on the preset secret-sharing multiplication triple, obtaining a first-party third shared blinding parameter matrix corresponding to the third shared parameter matrix and a first-party fourth shared blinding parameter matrix corresponding to the second shared transposed parameter matrix. Similarly, the second device blinds the fourth shared parameter matrix and the fourth shared transposed parameter matrix respectively based on the second-party secret-sharing multiplication triple, obtaining a second-party third shared blinding parameter matrix corresponding to the fourth shared parameter matrix and a second-party fourth shared blinding parameter matrix corresponding to the fourth shared transposed parameter matrix, and sends the second-party third shared blinding parameter matrix and the second-party fourth shared blinding parameter matrix to the first device. The first device calculates the sum of the first-party third shared blinding parameter matrix and the second-party third shared blinding parameter matrix to obtain a third blinding parameter matrix, calculates the sum of the first-party fourth shared blinding parameter matrix and the second-party fourth shared blinding parameter matrix to obtain a fourth blinding parameter matrix, and sends the first-party third shared blinding parameter matrix and the first-party fourth shared blinding parameter matrix to the second device, which likewise obtains the third and fourth blinding parameter matrices from the corresponding sums. The first device then calculates, based on the preset first-party transpose matrix inner product calculation formula, a first-party second transpose matrix inner product jointly corresponding to the preset secret-sharing multiplication triple, the third blinding parameter matrix, and the fourth blinding parameter matrix; similarly, the second device calculates, based on the preset second-party transpose matrix inner product calculation formula, a second-party second transpose matrix inner product jointly corresponding to the second-party secret-sharing multiplication triple, the third blinding parameter matrix, and the fourth blinding parameter matrix. The preset first-party transpose matrix inner product calculation formula and the preset second-party transpose matrix inner product calculation formula are as follows:
[[x*y]]_A = f*[[a]]_A + e*[[b]]_A + [[c]]_A

[[x*y]]_B = e*f + f*[[a]]_B + e*[[b]]_B + [[c]]_B
wherein [[x*y]]_A is the first-party first transpose matrix inner product, [[x*y]]_B is the second-party first transpose matrix inner product, e is the first blinding parameter matrix, f is the second blinding parameter matrix, and the multiplication triple is (a, b, c) with c = a*b, where the preset secret-sharing multiplication triple is ([[a]]_A, [[b]]_A, [[c]]_A) and the second-party secret-sharing multiplication triple is ([[a]]_B, [[b]]_B, [[c]]_B). For example, in one implementation, the calculation of the first-party first transpose matrix inner product and the second-party first transpose matrix inner product is as follows:
First, assume that the first device possesses the secret-sharing multiplication triple ([[a]]_A, [[b]]_A, [[c]]_A) and the second device possesses the second-party secret-sharing multiplication triple ([[a]]_B, [[b]]_B, [[c]]_B), where [[a]]_A + [[a]]_B = a, [[b]]_A + [[b]]_B = b, [[c]]_A + [[c]]_B = c, and c = a*b. The first shared parameter matrix is [[x]]_A, the first shared transposed parameter matrix is [[y]]_A, the second shared parameter matrix in the second device is [[x]]_B, and the third shared transposed parameter matrix is [[y]]_B, where [[x]]_A + [[x]]_B = x and [[y]]_A + [[y]]_B = y. The first device is to calculate the first-party first transpose matrix inner product [[x*y]]_A, and the second device is to calculate the second-party first transpose matrix inner product [[x*y]]_B, such that [[x*y]]_A + [[x*y]]_B = x*y. Specifically, the calculation flow is as follows:
First, the first device calculates [[e]]_A = [[x]]_A - [[a]]_A and [[f]]_A = [[y]]_A - [[b]]_A, and the second device calculates [[e]]_B = [[x]]_B - [[a]]_B and [[f]]_B = [[y]]_B - [[b]]_B. The first device then sends [[e]]_A and [[f]]_A to the second device, and the second device sends [[e]]_B and [[f]]_B to the first device, so that both devices obtain e = x - a and f = y - b. The first device then calculates [[x*y]]_A = f*[[a]]_A + e*[[b]]_A + [[c]]_A, and the second device calculates [[x*y]]_B = e*f + f*[[a]]_B + e*[[b]]_B + [[c]]_B. Summing [[x*y]]_A + [[x*y]]_B and substituting e = x - a and f = y - b into the expression yields [[x*y]]_A + [[x*y]]_B = x*y.
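The worked flow above runs end to end as written; the sketch below uses scalars modulo an illustrative modulus (matrices work identically, entry by entry), and the two local evaluations are exactly the preset first-party and second-party calculation formulas:

```python
import random

M = 2 ** 31                                 # illustrative working modulus

def share(v):
    """Split v into two additive shares modulo M."""
    s = random.randrange(M)
    return s, (v - s) % M

# offline: multiplication triple (a, b, c) with c = a*b, shared between the parties
a, b = random.randrange(M), random.randrange(M)
aA, aB = share(a)
bA, bB = share(b)
cA, cB = share(a * b % M)

# online: secret-shared inputs x and y
x, y = 123456, 789
xA, xB = share(x)
yA, yB = share(y)

# step 1: each device blinds its shares and publishes the differences
eA, fA = (xA - aA) % M, (yA - bA) % M       # first device
eB, fB = (xB - aB) % M, (yB - bB) % M       # second device
e, f = (eA + eB) % M, (fA + fB) % M         # both now know e = x - a, f = y - b

# step 2: local evaluation of the preset calculation formulas
xyA = (f * aA + e * bA + cA) % M            # [[x*y]]_A at the first device
xyB = (e * f + f * aB + e * bB + cB) % M    # [[x*y]]_B at the second device

assert (xyA + xyB) % M == x * y % M
```

Neither device ever sees the other's shares of x or y; only the blinded values e and f are exchanged.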
Step a20, performing federated interaction with the second device based on the secret sharing matrix inner product and the first non-zero part to combine the second party secret sharing matrix inner product and the second non-zero part to calculate the secret sharing intermediate parameter.
In this embodiment, it should be noted that the secret sharing intermediate parameters include a first party first secret sharing intermediate parameter matrix and a first party second secret sharing intermediate parameter matrix, where the first party first secret sharing intermediate parameter matrix is a matrix representation of the first party first secret sharing intermediate parameters, the first party second secret sharing intermediate parameter matrix is a matrix representation of the first party second secret sharing intermediate parameters, and the second device obtains the second party secret sharing intermediate parameters through federate interaction, where the second party secret sharing intermediate parameters include a second party first secret sharing intermediate parameter matrix and a second party second secret sharing intermediate parameter matrix, where the second party first secret sharing intermediate parameter matrix is a matrix representation of the second party first secret sharing intermediate parameters, the second-party second secret shared intermediate parameter matrix is a matrix representation of the second-party second secret shared intermediate parameters.
Federated interaction is performed with the second device based on the secret sharing matrix inner product and the first non-zero part, so as to combine the second-party secret sharing matrix inner product and the second non-zero part and calculate the secret sharing intermediate parameters. Specifically, the first device generates a fifth public key and a corresponding fifth private key, homomorphically encrypts the first-party first transpose matrix inner product based on the fifth public key to obtain an encrypted first-party first transpose matrix inner product, and sends the fifth public key and the encrypted first-party first transpose matrix inner product to the second device. The second device then homomorphically encrypts the second-party first transpose matrix inner product based on the fifth public key to obtain an encrypted second-party first transpose matrix inner product, and combines the two ciphertexts to obtain an encrypted first transpose matrix inner product, which is a vector. The second device then generates the transpose matrix corresponding to the second sparse matrix to obtain a second sparse transpose matrix, and calculates each first intermediate parameter product between each numerical value in the encrypted first transpose matrix inner product, each non-zero column vector of the second sparse matrix, and each non-zero column vector of the second sparse transpose matrix, to obtain a first encrypted intermediate parameter inner product, where the first encrypted intermediate parameter inner product is the accumulated value of the first intermediate parameter products, and the numerical values in the encrypted first transpose matrix inner product, the non-zero column vectors of the second sparse matrix, and the non-zero column vectors of the second sparse transpose matrix correspond one-to-one. Further, in a preset first vector space, a vector consistent with the characteristic dimension of the second sparse matrix is constructed as the second-party first transpose matrix inner product, and the second-party first transpose matrix inner product is homomorphically encrypted based on the fifth public key to obtain an encrypted second-party first transpose matrix inner product. The difference between the first encrypted intermediate parameter inner product and the encrypted second-party first transpose matrix inner product is then calculated to obtain an encrypted first-party first transpose matrix inner product, which is sent to the first device, and the first device decrypts it based on the fifth private key to obtain the first-party first transpose matrix inner product.
Conversely, the second device generates a sixth public key and a corresponding sixth private key, homomorphically encrypts the second-party second transpose matrix inner product based on the sixth public key to obtain an encrypted second-party second transpose matrix inner product, and sends it together with the sixth public key to the first device. The first device then homomorphically encrypts the first-party second transpose matrix inner product based on the sixth public key to obtain an encrypted first-party second transpose matrix inner product, and calculates the sum of the encrypted second-party second transpose matrix inner product and the encrypted first-party second transpose matrix inner product to obtain an encrypted second transpose matrix inner product, which is a vector. The first device further generates the transpose matrix corresponding to the first sparse matrix to obtain a first sparse transpose matrix, and calculates each second intermediate parameter product between each value of the encrypted second transpose matrix inner product, each non-zero column vector of the first sparse matrix, and each non-zero column vector of the first sparse transpose matrix, to obtain a second encrypted intermediate parameter inner product, where the second encrypted intermediate parameter inner product is the accumulated value of the second intermediate parameter products, and the numerical values in the encrypted second transpose matrix inner product, the non-zero column vectors of the first sparse matrix, and the non-zero column vectors of the first sparse transpose matrix correspond one-to-one. Then, in a preset second vector space, a vector consistent with the characteristic dimension of the first sparse matrix is constructed as the first-party second transpose matrix inner product, which is homomorphically encrypted based on the sixth public key to obtain an encrypted first-party second transpose matrix inner product. The difference between the second encrypted intermediate parameter inner product and the encrypted first-party second transpose matrix inner product is then calculated to obtain an encrypted second-party second transpose matrix inner product, which is sent to the second device, and the second device decrypts it based on the sixth private key to obtain the second-party second transpose matrix inner product.
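The key exchanges above rely on an additively homomorphic cryptosystem, under which two parties' ciphertexts can be combined so that the sum of their plaintext values is obtained without either party revealing its own value. The patent does not name a concrete scheme; the following is a minimal textbook-Paillier sketch (toy key sizes, hypothetical helper names) of the homomorphic addition relied on in this step.

```python
import random
from math import gcd

# Toy textbook Paillier (illustration only; real deployments need large keys).
def keygen(p=1_000_000_007, q=1_000_000_009):
    n = p * q
    lam = (p - 1) * (q - 1)
    g = n + 1                    # standard simplified generator
    mu = pow(lam, -1, n)         # valid since L(g^lam mod n^2) = lam mod n for g = n+1
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    n2 = n * n
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

def hom_add(pub, c1, c2):
    n, _ = pub
    return (c1 * c2) % (n * n)   # Enc(a) * Enc(b) mod n^2 = Enc(a + b)

# First device encrypts its transpose-matrix inner-product value; the second
# device adds its own value under encryption without seeing the first one.
pub, priv = keygen()
combined = hom_add(pub, encrypt(pub, 12), encrypt(pub, 30))
```

Decrypting `combined` with the private key yields 42, the sum of the two parties' values; subtracting a uniformly random mask before returning a ciphertext, as described above, is what turns such a sum into two additive secret shares.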
Step S22, calculating the secret sharing model error based on the sparse matrix security inner product, the secret sharing intermediate parameter, and a preset secret sharing model error calculation formula.
In this embodiment, it should be noted that the sparse matrix security inner product includes the first type sparse matrix security inner product and the second type sparse matrix security inner product, where the first type sparse matrix security inner product includes a third non-zero feature item cross inner product and a fourth non-zero feature item cross inner product, the second type sparse matrix security inner product includes a first non-zero feature item cross inner product and a second non-zero feature item cross inner product, and the secret sharing intermediate parameter includes a first party first secret sharing intermediate parameter and a first party second secret sharing intermediate parameter.
The secret sharing model error is calculated based on the sparse matrix security inner product, the secret sharing intermediate parameters, and a preset secret sharing model error calculation formula. Specifically, based on a model output calculation formula, the first device calculates a first-party first model output jointly corresponding to the fourth non-zero feature term cross inner product, the first non-zero feature term cross inner product, and the first-party first secret sharing intermediate parameter, and calculates a first-party second model output corresponding to the third non-zero feature term cross inner product, the second non-zero feature term cross inner product, and the first-party second secret sharing intermediate parameter. The first device then substitutes the first-party first model output, the first-party second model output, and the secret-shared sample label held by its own party into the preset secret sharing model error calculation formula to calculate the secret sharing model error, wherein the computational expression of the first-party first model output is as follows:
Figure BDA0002656267180000211
wherein [[f(X_B)]]_A is the first-party first model output,

Figure BDA0002656267180000212

is the fourth non-zero feature term cross inner product,

Figure BDA0002656267180000213

is the first non-zero feature term cross inner product, and

Figure BDA0002656267180000221

is the first-party first secret sharing intermediate parameter. The computational expression of the first-party second model output is as follows:
Figure BDA0002656267180000222
wherein [[f(X_A)]]_A is the first-party second model output,

Figure BDA0002656267180000223

is the third non-zero feature term cross inner product,

Figure BDA0002656267180000224

is the second non-zero feature term cross inner product, and

Figure BDA0002656267180000225

is the first-party second secret sharing intermediate parameter. The computational expression of the secret sharing model error is as follows:
Figure BDA0002656267180000226
wherein Y is the sample label and [[f(X_A, X_B) - Y]]_A is the secret sharing model error.
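For orientation, the quantity being secret-shared above is the output of a factorization machine. A plaintext sketch of the standard second-order FM output and the resulting model error, computed from non-zero features only (matching the sparse-matrix optimization of this embodiment; the split into per-party terms is omitted here):

```python
def fm_output(w, V, x_nonzero):
    """Standard second-order factorization machine output, using only the
    non-zero feature entries (zero entries contribute nothing to either term).

    w: {feature index: linear weight}
    V: {feature index: latent factor vector of length k}
    x_nonzero: {feature index: feature value}, zero entries omitted
    """
    linear = sum(w[i] * x for i, x in x_nonzero.items())
    k = len(next(iter(V.values())))
    pairwise = 0.0
    for f in range(k):
        s = sum(V[i][f] * x for i, x in x_nonzero.items())
        sq = sum((V[i][f] * x) ** 2 for i, x in x_nonzero.items())
        pairwise += 0.5 * (s * s - sq)   # equals sum_{i<j} <v_i, v_j> x_i x_j
    return linear + pairwise

def model_error(w, V, x_nonzero, y):
    # plaintext analogue of f(X_A, X_B) - Y above
    return fm_output(w, V, x_nonzero) - y
```

In the federated setting, the feature dictionary is partitioned between the two parties and every term of this expression is held as two additive shares, but the algebra being shared is the one shown here.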
Similarly, the second device calculates the second-party secret sharing model error by substituting the second-party first model output and the second-party second model output into the preset secret sharing model error calculation formula, wherein the calculation expression of the second-party first model output is as follows:
Figure BDA0002656267180000227
wherein [[f(X_B)]]_B is the second-party first model output,

Figure BDA0002656267180000228

is the second-party fourth non-zero feature term cross inner product,

Figure BDA0002656267180000229

is the second-party first non-zero feature term cross inner product, and

Figure BDA00026562671800002210

is the second-party first secret sharing intermediate parameter. The computational expression of the second-party second model output is as follows:
Figure BDA00026562671800002211
wherein [[f(X_A)]]_B is the second-party second model output,

Figure BDA0002656267180000231

is the second-party third non-zero feature term cross inner product,

Figure BDA0002656267180000232

is the second-party second non-zero feature term cross inner product, and

Figure BDA0002656267180000233

is the second-party second secret sharing intermediate parameter. The computational expression of the second-party secret sharing model error is as follows:
Figure BDA0002656267180000234
wherein Y is the sample label and [[f(X_A, X_B) - Y]]_B is the second-party secret sharing model error.
And step S30, updating the preset initialization model based on the secret sharing model error to obtain a longitudinal federal factorization model.
In this embodiment, it should be noted that the longitudinal federal factorization model includes a first target model parameter belonging to a first device and a second target model parameter belonging to a second device.
The preset initialization model is updated based on the secret sharing model error to obtain a longitudinal federated factorization model. Specifically, the first device updates the first-party secret sharing initial model parameter based on the secret sharing model error to obtain a first-party secret sharing initial update parameter, and the second device updates the second-party secret sharing initial model parameter based on the second-party secret sharing model error to obtain a second-party secret sharing initial update parameter. It is then judged whether the first-party secret sharing initial update parameter satisfies a preset iterative-update ending condition. If so, the first device takes the first-party secret sharing initial update parameter as the secret sharing update parameter and the second device takes the second-party secret sharing initial update parameter as the second-party secret sharing update parameter; the first device then performs decryption interaction with the second device based on the secret sharing update parameter, so as to combine the second-party secret sharing update parameter, determine the first target model parameters, and assist the second device in determining the second target model parameters, wherein the second device provides the second-party secret sharing update parameter during the decryption interaction. The preset iterative-update ending condition includes conditions such as convergence of the loss function or reaching a maximum iteration count threshold.
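The iterative-update ending condition described above (loss convergence or a maximum iteration count) can be sketched as a generic update loop; all names here are hypothetical.

```python
def train(update_step, loss_fn, params, max_iters=100, tol=1e-6):
    """Generic iterative-update skeleton with the two ending conditions named
    above: convergence of the loss function, or a maximum iteration threshold."""
    prev_loss = float("inf")
    for it in range(max_iters):
        params = update_step(params)          # e.g., a secret-shared gradient step
        loss = loss_fn(params)
        if abs(prev_loss - loss) < tol:       # loss function has converged
            break
        prev_loss = loss
    return params, it + 1

# toy usage: minimize (p - 3)^2 with the update p <- p - (p - 3)
final_p, iters = train(lambda p: p - (p - 3.0), lambda p: (p - 3.0) ** 2, 0.0)
```

In the protocol above, `update_step` would be the share-wise parameter update and `loss_fn` the (reconstructed or approximated) training loss.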
Wherein the updating the preset initialization model based on the secret sharing model error to obtain a longitudinal federal factorization model comprises:
step S31, based on the secret sharing model error, updating the secret sharing initial model parameter to obtain a secret sharing updating parameter;
in this embodiment, it should be noted that the secret sharing initial model parameters are updated based on the secret sharing model error to obtain secret sharing update parameters. Specifically, the gradient information of the secret sharing model error with respect to each secret sharing initial model parameter is calculated, and the secret sharing initial model parameters are then iteratively updated based on the gradient information until the iteratively updated parameters reach a preset iterative-update ending condition, yielding the secret sharing update parameters. In one embodiment, the gradient information includes a first type gradient, a second type gradient, a third type gradient, and a fourth type gradient, where the computational expression of the first type gradient is as follows:
Figure BDA0002656267180000241
wherein D_1 is the first type gradient, α is a hyper-parameter whose magnitude can be set as needed and is used to control the value range of the gradient, w_A is the first-party first-type model parameter vector, and [[w_A]]_A is the secret-shared first-party first-type model parameter vector held by the first device. In addition, the computational expression of the second type gradient is as follows:
Figure BDA0002656267180000242
wherein D_2 is the second type gradient, α is a hyper-parameter whose magnitude can be set as needed and is used to control the value range of the gradient, V_A is the first-party second-type model parameter matrix, and [[V_A]]_A is the secret-shared first-party second-type model parameter matrix held by the first device. Further, the computational expression of the third type gradient is as follows:
Figure BDA0002656267180000243
wherein D_3 is the third type gradient, α is a hyper-parameter whose magnitude can be set as needed and is used to control the value range of the gradient, w_B is the second-party first-type model parameter vector, and [[w_B]]_A is the secret-shared second-party first-type model parameter vector held by the first device. In addition, the computational expression of the fourth type gradient is as follows:
Figure BDA0002656267180000244
wherein D_4 is the fourth type gradient, α is a hyper-parameter whose magnitude can be set as needed and is used to control the value range of the gradient, V_B is the second-party second-type model parameter matrix, and [[V_B]]_A is the secret-shared second-party second-type model parameter matrix held by the first device.
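The four gradient types correspond to derivatives of the error with respect to the two parties' linear weights and latent-factor matrices. A plaintext sketch of the underlying FM gradients for one sample (secret sharing and the α scaling are omitted; names are illustrative):

```python
def fm_gradients(w, V, x_nonzero, err):
    """Gradients of the squared error 0.5 * (f(x) - y)^2 of a second-order FM
    with respect to w and V, where err = f(x) - y is the model error that the
    protocol above holds in secret-shared form. Only non-zero features appear."""
    k = len(next(iter(V.values())))
    # s[f] = sum_j V[j][f] * x_j, reused across all per-feature gradients
    s = [sum(V[j][f] * xj for j, xj in x_nonzero.items()) for f in range(k)]
    grad_w = {i: err * xi for i, xi in x_nonzero.items()}
    grad_V = {i: [err * (xi * s[f] - V[i][f] * xi * xi) for f in range(k)]
              for i, xi in x_nonzero.items()}
    return grad_w, grad_V

gw, gV = fm_gradients({0: 0.1, 1: 0.2}, {0: [1.0], 1: [0.5]},
                      {0: 1.0, 1: 2.0}, err=1.0)
```

In the protocol above, each party computes such gradients on its own additive shares, so no party sees the plaintext error or parameters.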
Additionally, it should be noted that the second device may calculate gradient information of the second-party secret sharing model error with respect to the second-party secret sharing initial model parameter, and iteratively update the second-party secret sharing initial model parameter based on the gradient information until a preset iterative training end condition is reached, so as to obtain the second-party secret sharing update parameter.
Further, based on the gradient information, updating a computational expression of the secret sharing initial model parameters as follows:
Figure BDA0002656267180000251
Figure BDA0002656267180000252
Figure BDA0002656267180000253
Figure BDA0002656267180000254
wherein η_1, η_2, η_3, and η_4 are all preset learning rates,

Figure BDA0002656267180000255

is the updated model parameter corresponding to the secret-shared first-party first-type model parameter vector held by the first device,

Figure BDA0002656267180000256

is the updated model parameter corresponding to the secret-shared first-party second-type model parameter matrix held by the first device,

Figure BDA0002656267180000257

is the updated model parameter corresponding to the secret-shared second-party first-type model parameter vector held by the first device, and

Figure BDA0002656267180000258

is the updated model parameter corresponding to the secret-shared second-party second-type model parameter matrix held by the first device.
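Because a gradient-descent update is linear, each device can apply it to its own additive share locally, and the reconstructed parameter still equals the plaintext update. A small numeric sketch with illustrative values:

```python
import random

def split(value):
    """Split a scalar into two additive secret shares."""
    share_a = random.uniform(-1.0, 1.0)
    return share_a, value - share_a

random.seed(7)
w_a, w_b = split(2.0)      # shares of a model parameter w = 2.0
d_a, d_b = split(0.5)      # shares of its gradient D = 0.5
eta = 0.1                  # preset learning rate

# each party updates only its own shares, with no communication
w_a_new = w_a - eta * d_a
w_b_new = w_b - eta * d_b

# the shares still sum to the plaintext update: 2.0 - 0.1 * 0.5 = 1.95
```

This linearity is what lets the iterative updates above run entirely on shares, with decryption interaction needed only when the final target parameters are assembled.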
Step S32, performing decryption interaction with the second device based on the secret sharing update parameter to obtain the first target model parameter, so that the second device can obtain the second target model parameter.
In this embodiment, decryption interaction is performed with the second device based on the secret sharing update parameter to obtain the first target model parameter, so that the second device can obtain the second target model parameter. Specifically, the first device performs decryption interaction with the second device based on the secret sharing update parameter, so as to combine the second-party secret sharing update parameter held by the second device and calculate the first target model parameter; during the decryption interaction, the second device calculates the second target model parameter based on its second-party secret sharing update parameter combined with the secret sharing update parameter held by the first device.
Wherein the secret sharing update parameters comprise a first shared first-party model update parameter and a first shared second-party model update parameter,
the step of performing decryption interaction with the second device based on the secret sharing update parameter to obtain the first target model parameter for the second device to obtain the second target model parameter includes:
step S321, sending the first shared second-party model update parameter to the second device, so that the second device calculates the second target model parameter based on the determined second shared second-party model update parameter and the first shared second-party model update parameter;
in this embodiment, it should be noted that the first shared first-party model update parameter is a first-party model update parameter held by the first secret-shared device, the first shared second-party model update parameter is a second-party model update parameter held by the first secret-shared device, and the second shared second-party model update parameter is a second-party model update parameter held by the second secret-shared device.
Sending the first shared second-party model update parameter to the second device, so that the second device calculates the second target model parameter based on the determined second shared second-party model update parameter and the first shared second-party model update parameter, and specifically, sending the first shared second-party model update parameter to the second device, so that the second device calculates the sum of the second shared second-party model update parameter and the first shared second-party model update parameter, and obtains the second target model parameter.
Step S322, receiving a second shared first-party model update parameter sent by the second device, and calculating the first target model parameter based on the second shared first-party model update parameter and the first shared first-party model update parameter.
In this embodiment, it should be noted that the second shared first-party model update parameter is a first-party model update parameter held by the secret-shared second device.
Receiving a second shared first-party model update parameter sent by the second device, and calculating the first target model parameter based on the second shared first-party model update parameter and the first shared first-party model update parameter, specifically, receiving the second shared first-party model update parameter sent by the second device, and calculating the sum of the second shared first-party model update parameter and the first shared first-party model update parameter, to obtain the first target model parameter.
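Steps S321 and S322 reduce to each device receiving the counterpart's share of its own parameters and summing element-wise. A sketch under that reading (names hypothetical):

```python
def reconstruct(own_share, received_share):
    """Combine the two additive shares of an updated parameter vector:
    target parameter = own share + share received from the other device."""
    return [a + b for a, b in zip(own_share, received_share)]

# first device: [[w_A]]_A held locally, [[w_A]]_B received from the second device
first_target = reconstruct([0.8, -0.2, 1.5], [0.2, 1.2, -0.5])
```

The second device runs the same combination on its own parameters, so each party learns only its own final model and never the other party's shares of the counterpart model.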
Compared with the existing longitudinal federated learning method, when the longitudinal federated factorization model is constructed based on longitudinal federated learning, the calculation of the non-zero feature term cross inner products is performed by combining a secret sharing mechanism with a homomorphic encryption method. This eliminates the computation over the zero parts of the sparse matrices, reduces the computational complexity of constructing a factorization model based on sparse matrices, and improves the computational efficiency of constructing the longitudinal federated factorization model. Moreover, because the longitudinal federated factorization model is built through longitudinal federated learning, the feature richness of the training samples is higher, the model performance of the longitudinal federated factorization model is better, and its personalized recommendation effect when used as a recommendation model is better.
Compared with the prior art, which performs federated learning purely with homomorphic-encryption-based methods, the factorization model construction method provided by this embodiment of the application proceeds as follows. After the initialization model parameters and the first sparse data corresponding to the preset initialization model are acquired, secret sharing is performed with the second device, whereby the first device obtains the first-party secret sharing initial model parameters and the second device obtains the second-party secret sharing initial model parameters. Federated interaction is then performed with the second device based on the first non-zero part of the first sparse data and the first-party secret sharing initial model parameters, so as to combine the second non-zero part of the second sparse data acquired by the second device with the second-party secret sharing initial model parameters and calculate the secret sharing model error. During the federated interaction, only the first non-zero part of the first sparse data and the second non-zero part of the second sparse data are used for calculation, which eliminates the computation over the zero parts of both and thus greatly reduces the calculation amount and computational complexity of the federated interaction process. The preset initialization model is then updated based on the secret sharing model error, so that the longitudinal federated factorization model can be obtained.
Further, referring to fig. 2, based on the first embodiment in the present application, in another embodiment of the present application, the second-type sparse matrix security inner product includes a first non-zero feature term cross inner product and a second non-zero feature term cross inner product,
the step of performing federated interaction with the second device based on the second-type shared model parameters and the first non-zero part to combine the second-party second-type shared model parameters and the second non-zero part to calculate a second-type sparse matrix security inner product includes:
step B10, carrying out federated interaction with the second equipment based on the second type shared model parameters to calculate a cross inner product between the second type shared model parameters and the second non-zero part, and obtaining the first non-zero feature item cross inner product;
in this embodiment, it should be noted that the second-type shared model parameters include a first shared parameter, where the first shared parameter is the share of the second-party second-type model parameter matrix held by the first device under secret sharing, and the second-party second-type shared model parameters include a second shared parameter, where the second shared parameter is the share of the second-party second-type model parameter matrix held by the second device under secret sharing. For example, assuming the second-party second-type model parameter matrix is V_B, with the construction V_B = [[V_B]]_A + [[V_B]]_B, then [[V_B]]_A is the first shared parameter and [[V_B]]_B is the second shared parameter.
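The construction V_B = [[V_B]]_A + [[V_B]]_B can be generated by drawing one share at random and setting the other to the difference, so that neither share alone reveals the matrix. A minimal sketch:

```python
import random

def share_matrix(matrix):
    """Additively secret-share a parameter matrix: matrix = share_a + share_b,
    with share_a drawn uniformly at random."""
    share_a = [[random.uniform(-1.0, 1.0) for _ in row] for row in matrix]
    share_b = [[m - a for m, a in zip(row, arow)]
               for row, arow in zip(matrix, share_a)]
    return share_a, share_b

v_b = [[0.3, -1.2], [2.0, 0.7]]
vb_a, vb_b = share_matrix(v_b)   # [[V_B]]_A for the first device, [[V_B]]_B for the second
```

(In a real deployment the shares live in a finite field or fixed-point ring rather than floats; this sketch only illustrates the additive decomposition.)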
Federated interaction is performed with the second device based on the second-type shared model parameters to calculate the cross inner product between the second-type shared model parameters and the second non-zero part and obtain the first non-zero feature item cross inner product. Specifically, the first device generates a homomorphic-encryption first public-private key pair, comprising a first public key and a first private key, homomorphically encrypts the first shared parameter based on the first public key to obtain an encrypted first shared parameter, and sends the encrypted first shared parameter and the first public key to the second device. The second device then homomorphically encrypts the second shared parameter based on the first public key to obtain an encrypted second shared parameter, and calculates the sum of the encrypted first shared parameter and the encrypted second shared parameter, thereby obtaining the encrypted second-party second-type model parameter matrix of the current iteration round. The second device further calculates the non-zero feature item cross inner product of the encrypted second-party second-type model parameter matrix and the second non-zero part to obtain a second encrypted inner product; generates, based on the first target feature dimension of the second-party second-type model parameter matrix, a uniformly distributed second-party first non-zero feature item cross inner product lying in that first target feature dimension; homomorphically encrypts the second-party first non-zero feature item cross inner product based on the first public key to obtain an encrypted second-party first non-zero feature item cross inner product; and calculates the encrypted first non-zero feature item cross inner product from the second encrypted inner product and the encrypted second-party first non-zero feature item cross inner product. The encrypted first non-zero feature item cross inner product is then sent to the first device, and the first device decrypts it based on the first private key to obtain the first non-zero feature item cross inner product.
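Stripping away the encryption, the data flow of this step is: recombine the two shares of the parameter matrix, form the inner product with the second party's non-zero features, subtract a uniformly distributed value that the second device keeps as its share, and hand the remainder to the first device. A plain-arithmetic simulation (encryption elided; names illustrative):

```python
import random

def cross_inner_product_shares(share_a, share_b, x_nonzero):
    """Return (first-party share, second-party share) of the cross inner
    product between the recombined parameter vector and x_nonzero."""
    # under encryption, this recombination happens homomorphically on the second device
    v = [a + b for a, b in zip(share_a, share_b)]
    inner = sum(vi * xi for vi, xi in zip(v, x_nonzero))
    # the uniformly distributed value generated by the second device is its share
    mask = random.uniform(-1.0, 1.0)
    # the masked remainder is what the first device decrypts as its share
    return inner - mask, mask

random.seed(1)
a_share, b_share = cross_inner_product_shares([0.5, 1.5], [0.5, -0.5], [2.0, 3.0])
# shares sum to the true inner product: (1.0 * 2.0) + (1.0 * 3.0) = 5.0
```

The homomorphic layer described above ensures the second device performs this arithmetic without ever seeing the recombined plaintext values.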
Wherein the second-type sharing model parameters comprise first sharing parameters, the second-party second-type sharing model parameters comprise second sharing parameters,
the step of performing federated interaction with the second device based on the second-type shared model parameter to calculate a cross inner product between the second-type shared model parameter and the second non-zero part, and obtaining the first non-zero feature item cross inner product includes:
step B11, generating a first public key, and encrypting the first sharing parameter based on the first public key to obtain an encrypted first sharing parameter;
in this embodiment, it should be noted that the encryption method includes homomorphic encryption.
Step B12, sending the first public key and the encrypted first shared parameter to a second device, so that the second device determines a second party first non-zero feature item cross inner product and an encrypted first non-zero feature item cross inner product based on the first public key, the encrypted first shared parameter, a second shared parameter and the second non-zero part;
in this embodiment, the first public key and the encrypted first shared parameter are sent to the second device, so that the second device determines a second-party first non-zero feature item cross inner product and an encrypted first non-zero feature item cross inner product based on the first public key, the encrypted first shared parameter, the second shared parameter, and the second non-zero part. Specifically, after receiving the encrypted first shared parameter and the first public key, the second device homomorphically encrypts the second shared parameter based on the first public key to obtain an encrypted second shared parameter, and calculates the sum of the encrypted first shared parameter and the encrypted second shared parameter, thereby obtaining the encrypted second-party second-type model parameter matrix of the current iteration round. The second device then calculates the product between each column vector of the encrypted second-party second-type model parameter matrix and each column vector of the second non-zero part to obtain the first vector products, and accumulates the first vector products to obtain a second encrypted inner product. It further generates a uniformly distributed vector consistent with the feature dimension of the second-party second-type model parameter matrix as the second-party first non-zero feature item cross inner product, homomorphically encrypts it based on the first public key to obtain an encrypted second-party first non-zero feature item cross inner product, and calculates the difference between the second encrypted inner product and the encrypted second-party first non-zero feature item cross inner product to obtain the encrypted first non-zero feature item cross inner product.
Step B13, receiving the encrypted first non-zero feature item cross inner product sent by the second device, and decrypting the encrypted first non-zero feature item cross inner product based on a first private key corresponding to the first public key to obtain the first non-zero feature item cross inner product.
And step B20, carrying out federal interaction with the second equipment based on the first non-zero part to calculate a cross inner product between the first non-zero part and the second-party second-type shared model parameter, and obtaining the second non-zero feature item cross inner product.
In this embodiment, it should be noted that the third shared parameter is the share of the first-party second-type model parameter matrix held by the first device under secret sharing, that is, the third shared parameter is the first share of the first-party second-type model parameter matrix, and the fourth shared parameter is the share of the first-party second-type model parameter matrix held by the second device under secret sharing, that is, the fourth shared parameter is the second share of the first-party second-type model parameter matrix. For example, assuming the first-party second-type model parameter matrix is V_A, with the construction V_A = [[V_A]]_A + [[V_A]]_B, then [[V_A]]_A is the third shared parameter and [[V_A]]_B is the fourth shared parameter.
Federated interaction is performed with the second device based on the first non-zero part to calculate the cross inner product between the first non-zero part and the second-party second-type shared model parameters and obtain the second non-zero feature item cross inner product. Specifically, the second device generates a second public-private key pair, comprising a second public key and a second private key, encrypts the fourth shared parameter based on the second public key to obtain an encrypted fourth shared parameter, and sends the second public key and the encrypted fourth shared parameter to the first device. The first device then encrypts the third shared parameter based on the second public key to obtain an encrypted third shared parameter, and calculates the encrypted first-party second-type model parameter matrix from the encrypted third shared parameter and the encrypted fourth shared parameter. It further calculates the non-zero feature item cross inner product of the encrypted first-party second-type model parameter matrix and the first non-zero part to obtain a first encrypted inner product; constructs, based on the second target feature dimension of the encrypted first-party second-type model parameter matrix, a uniformly distributed second non-zero feature item cross inner product lying in that second target feature dimension; encrypts the second non-zero feature item cross inner product based on the second public key to obtain an encrypted second non-zero feature item cross inner product; and obtains the encrypted second-party second non-zero feature item cross inner product from the first encrypted inner product and the encrypted second non-zero feature item cross inner product. The encrypted second-party second non-zero feature item cross inner product is then sent to the second device, and the second device decrypts it based on the second private key to obtain the second-party second non-zero feature item cross inner product.
Wherein the second-type sharing model parameters comprise third sharing parameters, the second-party second-type sharing model parameters comprise fourth sharing parameters,
the step of performing federated interaction with the second device based on the first non-zero part to calculate a cross-inner product between the first non-zero part and the second-party second-type shared model parameter, and obtaining the second non-zero feature item cross-inner product includes:
step B21, receiving a second public key sent by the second device and an encrypted fourth sharing parameter sent by the second device, where the encrypted fourth sharing parameter is the fourth sharing parameter encrypted by the second device based on the second public key;
in this embodiment, the second device encrypts the fourth sharing parameter by means of homomorphic encryption.
Step B22, calculating a second non-zero feature item cross inner product and an encrypted second party second non-zero feature item cross inner product based on the second public key, the encrypted fourth sharing parameter, the first non-zero part and the third sharing parameter;
in this embodiment, the second non-zero feature item cross inner product and the encrypted second-party second non-zero feature item cross inner product are calculated based on the second public key, the encrypted fourth sharing parameter, the first non-zero part, and the third sharing parameter. Specifically, the third sharing parameter is homomorphically encrypted with the second public key to obtain an encrypted third sharing parameter, and an encrypted first-party second-type model parameter matrix is calculated from the encrypted third sharing parameter and the encrypted fourth sharing parameter. The cross inner product of the encrypted first-party second-type model parameter matrix and the first non-zero part is then calculated, obtaining a first encrypted inner product. Further, based on the second target feature dimension of the encrypted first-party second-type model parameter matrix, a uniformly distributed second non-zero feature item cross inner product in that dimension is constructed and encrypted with the second public key, obtaining an encrypted second non-zero feature item cross inner product; the encrypted second-party second non-zero feature item cross inner product is then obtained from the first encrypted inner product and the encrypted second non-zero feature item cross inner product.
Wherein the step of calculating a second non-zero feature item cross inner product and an encrypted second party second non-zero feature item cross inner product based on the second public key, the encrypted fourth shared parameter, the first non-zero part, and the third shared parameter comprises:
step B221, based on the second public key, encrypting the third sharing parameter to obtain an encrypted third sharing parameter, and calculating an encryption model parameter corresponding to the encrypted third sharing parameter and the encrypted fourth sharing parameter;
in this embodiment, it should be noted that the encryption model parameter is the homomorphic sum of the encrypted third sharing parameter and the encrypted fourth sharing parameter, that is, the encrypted first-party second-type model parameter matrix.
Step B222, calculating a cross inner product between each column vector in the encryption model parameters and each column vector in the first non-zero part to obtain the first encryption inner product;
in this embodiment, a cross inner product between each column vector in the encryption model parameter and each column vector in the first non-zero portion is calculated to obtain the first encryption inner product, specifically, a product between each column vector in the encryption model parameter and each column vector in the first non-zero portion is calculated to obtain each second vector product, and then each second vector product is accumulated to obtain the first encryption inner product.
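The accumulation just described can be sketched in the clear, ignoring encryption; the matrices below, and the reading of "accumulating each second vector product" as producing a single scalar, are illustrative assumptions:

```python
# Accumulate the per-column vector products described in step B222.
def cross_inner_product(params, nonzero):
    """params, nonzero: lists of equal-length column vectors."""
    total = 0
    for p_col in params:                  # each column of the model parameter matrix
        for x_col in nonzero:             # each column of the first non-zero part
            # second vector product: inner product of one column pair
            total += sum(a * b for a, b in zip(p_col, x_col))
    return total

params = [[1, 2], [3, 4]]    # hypothetical columns of the encryption model parameter
nonzero = [[5, 6], [7, 8]]   # hypothetical columns of the first non-zero part
print(cross_inner_product(params, nonzero))   # -> 132
```

In the actual protocol the same accumulation is carried out on ciphertexts, using homomorphic addition in place of the plain sums above.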
Step B223, constructing the second non-zero feature item cross inner product based on the feature dimension of the encryption model parameter, and calculating a second non-zero feature item cross inner product of the encryption second party which is corresponding to the second non-zero feature item cross inner product and the first encryption inner product;
in this embodiment, the second non-zero feature item cross inner product is constructed based on the feature dimension of the encryption model parameter, and the encrypted second-party second non-zero feature item cross inner product corresponding jointly to it and the first encryption inner product is calculated. Specifically, a uniformly distributed vector consistent with the feature dimension of the encryption model parameter is constructed as the second non-zero feature item cross inner product, which is then homomorphically encrypted with the second public key to obtain an encrypted second non-zero feature item cross inner product; the encrypted second-party second non-zero feature item cross inner product is obtained as the difference between the first encryption inner product and the encrypted second non-zero feature item cross inner product.
Step B23, sending the encrypted second non-zero feature item cross inner product to the second device, so that the second device decrypts the encrypted second non-zero feature item cross inner product based on a second private key corresponding to the second public key, to obtain a second non-zero feature item cross inner product of the second party.
In this embodiment, it should be noted that the second public key and the second private key are a homomorphic encrypted public-private key pair.
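Steps B21 to B23 can be sketched with a toy Paillier cryptosystem, whose ciphertexts support exactly the homomorphic addition and scalar multiplication the steps rely on. The primes, share values, and variable names below are illustrative only (a real deployment would use keys of 2048 bits or more); the random mask plays the role of the uniformly distributed cross inner product retained by the first device as its own share:

```python
import math
import random

def paillier_keygen():
    # Tiny fixed primes for illustration only -- hopelessly insecure in practice.
    p, q = 1000003, 1000033
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                                            # standard choice g = n + 1
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)       # decryption helper
    return (n, g), (lam, mu, n)

def enc(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def dec(sk, c):
    lam, mu, n = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n

# --- second device: generate keys, encrypt its share, send to first device ---
pk, sk = paillier_keygen()
fourth_sharing = 7                        # second device's share (illustrative value)
enc_fourth = enc(pk, fourth_sharing)

# --- first device: homomorphic computation on ciphertexts ---
third_sharing = 5                         # first device's share (illustrative value)
x_nonzero = [2, 3]                        # first non-zero part (illustrative column)
n, n2 = pk[0], pk[0] ** 2
enc_param = enc(pk, third_sharing) * enc_fourth % n2     # Enc(a)*Enc(b) = Enc(a+b)
enc_inner = pow(enc_param, sum(x_nonzero), n2)           # Enc(a)**k = Enc(a*k)
mask = 11                                 # first device's uniformly drawn share
enc_masked = enc_inner * enc(pk, (-mask) % n) % n2       # Enc(inner - mask)

# --- second device: decrypt its share of the inner product ---
second_party_share = dec(sk, enc_masked)
assert (second_party_share + mask) % n == (third_sharing + fourth_sharing) * sum(x_nonzero)
```

The final assertion checks the defining property of the protocol: the two parties' shares reconstruct the true inner product, yet neither party sees it in the clear.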
The embodiment provides a method for calculating the secret-shared second-type sparse matrix security inner product based on a combination of secret sharing and homomorphic encryption. That is, federated interaction is performed with the second device based on the second-type shared model parameters to calculate a cross inner product between the second-type shared model parameters and the second non-zero part, obtaining the first non-zero feature item cross inner product; and federated interaction is performed with the second device based on the first non-zero part to calculate a cross inner product between the first non-zero part and the second-party second-type shared model parameters, obtaining the second non-zero feature item cross inner product. Only the first non-zero part of the first sparse data and the second non-zero part of the second sparse data are used for calculation throughout, so the computation over the zero parts of the first and second sparse data is eliminated. Compared with the prior art, which performs federal learning purely by homomorphic encryption, this overcomes the technical defect of low computational efficiency when a homomorphic encryption method is used for federal learning over sparse matrices, and thus improves that computational efficiency.
Further, referring to fig. 3, based on the first embodiment and the second embodiment in the present application, in another embodiment of the present application, the personalized recommendation method is applied to the first device, and the personalized recommendation method includes:
step C10, acquiring sparse data of a user to be recommended by a first party, and performing secret sharing with second equipment to acquire secret sharing model parameters;
in this embodiment, it should be noted that the second device holds second-party to-be-recommended user sparse data. The second-party to-be-recommended user sparse data is a second sparse matrix corresponding to a plurality of user data, and the first-party to-be-recommended user sparse data is a first sparse matrix corresponding to a plurality of user data; each vector of the first sparse matrix and each vector of the second sparse matrix is an encoding vector corresponding to the to-be-recommended user data of one user, and most of the encoding values in both matrices are 0. For example, the first sparse matrix may represent the click results of different users on different articles, where an encoding value of 1 represents that a user has clicked an article and an encoding value of 0 represents that the user has not; since a user usually clicks only a small portion of articles, most of the encoding values in the first sparse matrix are 0. It should further be noted that the first device and the second device are the two parties of longitudinal federal learning, and before secret sharing is performed, the first device and the second device have trained a preset personalized recommendation model based on secret sharing and longitudinal federal learning, where the preset personalized recommendation model is a trained factorization machine regression model for predicting the scores of users on the corresponding articles. The model expression of the preset personalized recommendation model is as follows:
f(x) = <w, x> + Σ_{i=1}^{n} Σ_{j=i+1}^{n} <V_i, V_j> x_i x_j

wherein x is the model input data, w and V are the model parameters, and f(x) is the model output, that is, the score output by the preset personalized recommendation model or the vector consisting of the scores output by the preset personalized recommendation model.
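As a concrete illustration of the model expression above, the following sketch evaluates a factorization machine in the clear (before any secret sharing), using the standard reformulation of the pairwise term that avoids the explicit double sum; the feature values and parameters are hypothetical:

```python
# Plain factorization machine scoring: f(x) = <w, x> + sum_{i<j} <V_i, V_j> x_i x_j.
# The pairwise term is computed with the standard O(n*k) identity:
#   sum_{i<j} <V_i,V_j> x_i x_j = 0.5 * sum_f [ (sum_i V[i][f]*x[i])^2 - sum_i (V[i][f]*x[i])^2 ]
def fm_score(x, w, V):
    linear = sum(wi * xi for wi, xi in zip(w, x))
    k = len(V[0])                     # number of latent factors per feature
    pairwise = 0.0
    for f in range(k):
        s = sum(V[i][f] * x[i] for i in range(len(x)))
        s_sq = sum((V[i][f] * x[i]) ** 2 for i in range(len(x)))
        pairwise += 0.5 * (s * s - s_sq)
    return linear + pairwise

# Hypothetical toy data: 3 features, 2 latent factors.
x = [1.0, 0.0, 1.0]                           # sparse input encoding vector
w = [0.2, -0.1, 0.4]                          # first-order weights
V = [[0.1, 0.3], [0.5, -0.2], [0.2, 0.1]]     # latent factor matrix
print(fm_score(x, w, V))                      # linear 0.6 + pairwise 0.05 = 0.65
```

In the federated setting of this application, w and V are split into secret shares across the two devices, and each term of this expression is evaluated by the secure inner products described in the following steps.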
First-party to-be-recommended user sparse data is acquired, and secret sharing is performed with the second device to obtain the secret sharing model parameters. Specifically, the first-party model parameters of the preset personalized recommendation model and the first-party sparse data are acquired, while the second device acquires the second-party model parameters of the preset personalized recommendation model and the second-party sparse data. Because the preset personalized recommendation model is constructed by longitudinal federal learning, the part of its model parameters held by the first device forms the first-party model parameters and the part held by the second device forms the second-party model parameters; the first-party to-be-recommended user sparse data is the associated data of the users collected by the first device, and the second-party to-be-recommended user sparse data is the associated data of the users collected by the second device, where the associated data of a user includes user interest and hobby data, historical article score data, and the like. In the longitudinal federal scenario, the first-party and second-party to-be-recommended user sparse data correspond to the same group of users to be recommended, and both can be represented as vectors or matrices. For example, if the first-party to-be-recommended user sparse data is the vector (1, 0, 1, 0), where the code 1 represents that the user clicked the corresponding article and the code 0 represents that the user did not, then the vector (1, 0, 1, 0) indicates that the user clicked articles A and C and did not click articles B and D. Secret sharing is then performed with the second device based on the first-party model parameters, with the second device contributing the second-party model parameters, whereby the first device obtains the secret sharing model parameters and the second device obtains the second-party secret sharing model parameters. The secret sharing model parameters comprise the first shared first-party model parameter and the first shared second-party model parameter, and the second-party secret sharing model parameters comprise the second shared first-party model parameter and the second shared second-party model parameter, where the first shared first-party model parameter is the first share of the first-party model parameters, the second shared first-party model parameter is the second share of the first-party model parameters, the first shared second-party model parameter is the first share of the second-party model parameters, and the second shared second-party model parameter is the second share of the second-party model parameters.
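The parameter splitting described above can be sketched with plain additive secret sharing; the modulus and the fixed-point encoding of parameters as integers are illustrative assumptions, not values from this application:

```python
import random

Q = 2**61 - 1   # illustrative prime modulus for the sharing field

def share(secret):
    """Split an integer into two additive shares: secret = s1 + s2 (mod Q)."""
    s1 = random.randrange(Q)
    s2 = (secret - s1) % Q
    return s1, s2

def reconstruct(s1, s2):
    return (s1 + s2) % Q

# A first-party model parameter, encoded as an integer (e.g. fixed-point).
w_A = 123456
first_share, second_share = share(w_A)   # first device keeps first_share,
                                         # second device receives second_share
assert reconstruct(first_share, second_share) == w_A
```

Each share in isolation is uniformly random and reveals nothing about the parameter; only the sum of the two shares reconstructs it, which is what allows the two devices to compute jointly without exposing their model parameters.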
Step C20, performing longitudinal federal prediction interaction with the second device based on a first non-zero part in the sparse data of the first party to-be-recommended user and the secret sharing model parameters, so as to score the to-be-recommended articles corresponding to the sparse data of the first party to-be-recommended user and obtain a first secret sharing scoring result;
in this embodiment, it should be noted that the first non-zero portion is a remaining portion of the first sparse matrix excluding the zero vector.
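Extracting the first non-zero part can be sketched as follows; the click matrix is a hypothetical example in the style used elsewhere in this embodiment:

```python
# Keep only the non-zero rows of a sparse click matrix, remembering their
# original positions so the scores can be scattered back afterwards.
def nonzero_part(matrix):
    idx = [i for i, row in enumerate(matrix) if any(v != 0 for v in row)]
    return idx, [matrix[i] for i in idx]

clicks = [
    [1, 0, 1, 0],   # user 0 clicked articles A and C
    [0, 0, 0, 0],   # user 1 clicked nothing -> excluded from computation
    [0, 1, 0, 0],   # user 2 clicked article B
]
idx, dense = nonzero_part(clicks)
print(idx, dense)   # -> [0, 2] [[1, 0, 1, 0], [0, 1, 0, 0]]
```

Because only the rows in `dense` enter the secure inner products, the computation over the zero vectors is skipped entirely, which is the source of the efficiency gain claimed by this embodiment.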
Based on the first non-zero part of the first-party to-be-recommended user sparse data and the secret sharing model parameters, longitudinal federal prediction interaction is performed with the second device to score the to-be-recommended articles corresponding to the first-party to-be-recommended user sparse data, obtaining a first secret sharing score result. Specifically, the interaction combines the second non-zero part of the second-party to-be-recommended user sparse data and the second-party secret sharing model parameters, and the first secret sharing score result is calculated through preset secret sharing multiplication and a preset homomorphic encryption algorithm.
Wherein the secret sharing model parameters comprise first party first sharing type model parameters and first party second type sharing model parameters, the second device comprises second party to-be-recommended user sparse data and second party secret sharing model parameters, the second party secret sharing model parameters comprise second party first sharing type model parameters and second party second sharing type model parameters, the first secret sharing scoring results comprise first type sharing scoring results and second type sharing scoring results,
the step of performing longitudinal federal prediction interaction with the second device based on the first non-zero part in the sparse data of the first party to-be-recommended user and the secret sharing model parameters to score the to-be-recommended articles corresponding to the sparse data of the first party to-be-recommended user and obtain a first secret sharing scoring result includes:
step C21, performing longitudinal federal prediction interaction with the second device based on the first non-zero part and the first party first sharing type model parameter, so as to combine the second party first sharing type model parameter to calculate the first type sharing scoring result;
in this embodiment, it should be noted that the first-party first sharing type model parameter is the secret-shared first-party model parameter held by the first device, the second-party first sharing type model parameter is the secret-shared first-party model parameter held by the second device, the first-type sharing score result at least includes a first shared first-party score, and each first shared first-party score corresponds to one user. The first secret sharing score calculation formula corresponding to the first shared first-party score computes [[f(X_A)]]_A from three safety inner products (formula image not reproduced), wherein [[f(X_A)]]_A is the first shared first-party score, X_A is the first-party to-be-recommended user sparse data, and w_A and V_A are first-party model parameters. The first safety inner product is calculated in the same way as the third non-zero feature item cross inner product during model training; the second safety inner product is calculated in the same way as the second non-zero feature item cross inner product during model training; the third safety inner product is calculated in the same way as the first-party second secret sharing intermediate parameter during model training. [[ ]] is a symbol representing secret sharing, and [[ ]]_A is a secret sharing symbol indicating that the secret-shared data belongs to the first device.
And performing longitudinal federal prediction interaction with the second device based on the first non-zero part and the first party first sharing type model parameter to combine the second party first sharing type model parameter to calculate the first type sharing score result, specifically, performing longitudinal federal prediction interaction with the second device based on the first non-zero part and the first party first sharing type model parameter to combine the second party first sharing type model parameter to calculate a first safety inner product, a second safety inner product and a third safety inner product respectively, and substituting the first safety inner product, the second safety inner product and the third safety inner product into the first secret sharing score calculation formula to obtain the first type sharing score result.
Additionally, it should be noted that the second device may calculate a second-party first-type sharing score result based on a second-party first secret sharing score calculation formula, where the second-party first-type sharing score result at least includes a second shared first-party score, and each second shared first-party score corresponds to one user. The second-party first secret sharing score calculation formula computes [[f(X_A)]]_B from three safety inner products (formula image not reproduced), wherein [[f(X_A)]]_B is the second shared first-party score, X_A is the first-party to-be-recommended user sparse data, and w_A and V_A are first-party model parameters. The second-party first safety inner product is calculated in the same way as the second-party third non-zero feature item cross inner product during model training; the second-party second safety inner product is calculated in the same way as the second-party second non-zero feature item cross inner product during model training; the second-party third safety inner product is calculated in the same way as the second-party second secret sharing intermediate parameter during model training. [[ ]] is a symbol representing secret sharing, and [[ ]]_B is a secret sharing symbol indicating that the secret-shared data belongs to the second device.
Step C22, performing longitudinal federal prediction interaction with the second device based on the first party second sharing type model parameter, so as to combine the second party second sharing type model parameter and a second non-zero part of the second party to-be-recommended user sparse data, and calculate the second type sharing scoring result.
In this embodiment, it should be noted that the first-party second sharing type model parameter is the secret-shared second-party model parameter held by the first device, the second-party second sharing type model parameter is the secret-shared second-party model parameter held by the second device, the second-type sharing score result at least includes a first shared second-party score, and each first shared second-party score corresponds to one user. The second secret sharing score calculation formula corresponding to the first shared second-party score computes [[f(X_B)]]_A from three safety inner products (formula image not reproduced), wherein [[f(X_B)]]_A is the first shared second-party score, X_B is the second-party to-be-predicted user sparse data, and w_B and V_B are second-party model parameters. The fourth safety inner product is calculated in the same way as the fourth non-zero feature item cross inner product; the fifth safety inner product is calculated in the same way as the first non-zero feature item cross inner product; the sixth safety inner product is calculated in the same way as the first-party first secret sharing intermediate parameter. [[ ]] is a symbol representing secret sharing, and [[ ]]_A is a secret sharing symbol indicating that the secret-shared data belongs to the first device.
And performing longitudinal federal prediction interaction with the second equipment based on the first party second shared type model parameter to combine the second party second shared type model parameter and a second non-zero part of the second party to-be-recommended user sparse data, and calculating a second type shared scoring result, specifically, performing longitudinal federal prediction interaction with the second equipment based on the first party second shared type model parameter to combine the second party second shared type model parameter and the second non-zero part of the second party to-be-recommended user sparse data, respectively calculating a fourth safety inner product, a fifth safety inner product and a sixth safety inner product, and substituting the fourth safety inner product, the fifth safety inner product and the sixth safety inner product into the second secret shared scoring calculation formula to obtain the second type shared scoring result.
Additionally, it should be noted that the second device may calculate a second-party second-type sharing score result based on a second-party second secret sharing score calculation formula, where the second-party second-type sharing score result at least includes a second shared second-party score, and each second shared second-party score corresponds to one user. The second-party second secret sharing score calculation formula computes [[f(X_B)]]_B from three safety inner products (formula image not reproduced), wherein [[f(X_B)]]_B is the second shared second-party score, X_B is the second-party to-be-predicted user sparse data, and w_B and V_B are second-party model parameters. The second-party fourth safety inner product is calculated in the same way as the second-party fourth non-zero feature item cross inner product during model training; the second-party fifth safety inner product is calculated in the same way as the second-party first non-zero feature item cross inner product during model training; the second-party sixth safety inner product is calculated in the same way as the second-party first secret sharing intermediate parameter during model training. [[ ]] is a symbol representing secret sharing, and [[ ]]_B is a secret sharing symbol indicating that the secret-shared data belongs to the second device.
Step C30, performing aggregation interaction with the second device based on the first secret sharing score result to combine a second secret sharing score result determined by the second device to calculate a target score result;
in this embodiment, based on the first secret sharing score result, aggregation interaction is performed with the second device to combine the second secret sharing score result determined by the second device and calculate a target scoring result; specifically, the first secret sharing score result and the second secret sharing score result are aggregated to obtain the target scoring result.
Wherein the first secret sharing score result at least comprises a first sharing first party score and a first sharing second party score, the second secret sharing score result at least comprises a second sharing first party score and a second sharing second party score, and the target score result at least comprises a target score,
the step of performing aggregation interaction with the second device based on the first secret sharing score result to combine a second secret sharing score result determined by the second device, and calculating a target score result includes:
step C31, receiving the second sharing first party score and the second sharing second party score sent by the second device;
step C32, calculating a first party score based on said first shared first party score and said second shared first party score;
in this embodiment, a first party score is calculated based on the first shared first party score and the second shared first party score, specifically, the sum of the first shared first party score and the second shared first party score is calculated to obtain the first party score.
Step C33, calculating a second party score based on said first shared second party score and said second shared second party score;
in this embodiment, a second party score is calculated based on the first sharing second party score and the second sharing second party score, specifically, a sum of the first sharing second party score and the second sharing second party score is calculated, and a second party score is obtained.
Step C34, aggregating the first party score and the second party score to obtain the target score;
in this embodiment, the first party score and the second party score are aggregated to obtain the target score, specifically, the first party score and the second party score are aggregated to obtain the target score based on a preset aggregation rule, where the preset aggregation rule includes summation, weighting, and averaging.
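Steps C31 to C34 can be sketched as follows; the three aggregation rules follow the preset aggregation rule described above (summation, weighting, averaging), while the share values and weights are hypothetical:

```python
def reconstruct_score(share_a, share_b):
    # Secret-shared score: f(X) = [[f(X)]]_A + [[f(X)]]_B
    return share_a + share_b

def aggregate(first_party_score, second_party_score, rule="sum", weights=(0.5, 0.5)):
    if rule == "sum":
        return first_party_score + second_party_score
    if rule == "weighted":
        wa, wb = weights
        return wa * first_party_score + wb * second_party_score
    if rule == "average":
        return (first_party_score + second_party_score) / 2
    raise ValueError(f"unknown aggregation rule: {rule}")

# Hypothetical shares exchanged during the aggregation interaction.
first_party = reconstruct_score(0.30, 0.35)    # first shared + second shared first-party score
second_party = reconstruct_score(0.10, 0.05)   # first shared + second shared second-party score
print(aggregate(first_party, second_party, "sum"))
```

Note that reconstruction only requires adding the two shares, which is the "simple mathematical operation" the embodiment relies on for its efficiency claim.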
And step C40, generating a target recommendation list corresponding to the item to be recommended based on the target scoring result.
In this embodiment, a target recommendation list corresponding to the item to be recommended is generated based on the target scoring result, specifically, based on the size of each target score in the target scoring result, each target user is ranked, and a recommendation user list of the item to be recommended is generated, where each target score is a score of a same item to be recommended by different target users, and the recommendation user list is used as the target recommendation list.
In another implementable scheme, a target recommendation list corresponding to the user to be recommended is generated based on the target scoring result, specifically, based on each target score in the target scoring result, where each target score is a score of a same target user for different items to be recommended, and further based on the size of each target score, the items to be recommended are sorted, a recommended item list of the user to be recommended is generated, and the recommended item list is used as the target recommendation list.
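Generating the target recommendation list by ranking the target scores can be sketched as follows; the item names and score values are hypothetical:

```python
def recommendation_list(scores):
    """Rank item identifiers by predicted target score, highest first."""
    return [item for item, _ in sorted(scores.items(), key=lambda kv: -kv[1])]

# Hypothetical target scores of one user for different items to be recommended.
target_scores = {"item_A": 4.2, "item_B": 3.1, "item_C": 4.8}
print(recommendation_list(target_scores))   # -> ['item_C', 'item_A', 'item_B']
```

The same routine covers both schemes described above: sorting users by their scores for one item yields the recommended user list, and sorting items by one user's scores yields the recommended item list.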
The embodiment provides a method for predicting click rate based on secret sharing and longitudinal federal learning. First-party to-be-recommended user sparse data is acquired, and secret sharing is performed with the second device to obtain secret sharing model parameters; longitudinal federal prediction interaction is then performed with the second device, based on the first non-zero part of the first-party to-be-recommended user sparse data and the secret sharing model parameters, to score the to-be-recommended articles corresponding to the first-party to-be-recommended user sparse data, obtaining a first secret sharing score result; aggregation interaction with the second device combines the second secret sharing score result determined by the second device to calculate a target scoring result; and a target recommendation list corresponding to the items to be recommended is generated from the target scoring result. When the first device and the second device interact, all data sent or received are secret-shared data, so no public-private key generated by a third party is needed for data encryption, and all data transmission takes place between the two parties participating in longitudinal federal learning. This protects data privacy while avoiding the complex computation of encrypting and decrypting the data; because secret sharing and the reconstruction corresponding to secret sharing require only simple mathematical operations, computational complexity is reduced. Moreover, when the user data is sparse, the application completes the generation of the target recommendation list based only on the non-zero part of the user data, eliminating the computation over the zero part and further reducing computational complexity, so that the calculation efficiency of the factorization machine regression model during personalized recommendation is improved.
Referring to fig. 4, fig. 4 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 4, the factorization model construction apparatus may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the factorization model construction device may further include a rectangular user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like. The rectangular user interface may comprise a Display screen (Display), an input sub-module such as a Keyboard (Keyboard), and the optional rectangular user interface may also comprise a standard wired interface, a wireless interface. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface).
It will be understood by those skilled in the art that the factorization machine model construction apparatus configuration shown in fig. 4 does not constitute a limitation of the apparatus, which may comprise more or fewer components than those shown, combine some components, or arrange the components differently.
As shown in fig. 4, the memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, and a factorization machine model construction program. The operating system is a program that manages and controls the hardware and software resources of the factorization machine model construction device, supporting the operation of the factorization machine model construction program as well as other software and/or programs. The network communication module is used to implement communication between the components within the memory 1005 and with other hardware and software in the factorization machine model construction system.
In the factorization machine model construction device shown in fig. 4, the processor 1001 is configured to execute a factorization machine model construction program stored in the memory 1005, and implement the steps of any of the factorization machine model construction methods described above.
The specific implementation of the factorization machine model construction device in the present application is substantially the same as that of each embodiment of the factorization machine model construction method, and is not described herein again.
Referring to fig. 5, fig. 5 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 5, the personalized recommendation device may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the personalized recommendation device may further include a user interface, a network interface, a camera, RF (Radio Frequency) circuits, a sensor, an audio circuit, a WiFi module, and the like. The user interface may comprise a display screen (Display) and an input sub-module such as a keyboard (Keyboard), and may optionally comprise a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
Those skilled in the art will appreciate that the personalized recommendation device architecture shown in fig. 5 does not constitute a limitation of the personalized recommendation device and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
As shown in fig. 5, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, and a personalized recommendation program. The operating system is a program for managing and controlling hardware and software resources of the personalized recommendation device and supports the running of the personalized recommendation program and other software and/or programs. The network communication module is used for realizing communication among the components in the memory 1005 and with other hardware and software in the personalized recommendation system.
In the personalized recommendation device shown in fig. 5, the processor 1001 is configured to execute a personalized recommendation program stored in the memory 1005 to implement the steps of the personalized recommendation method described in any one of the above.
The specific implementation manner of the personalized recommendation device of the application is basically the same as that of each embodiment of the personalized recommendation method, and is not described herein again.
The embodiment of the present application further provides a factorization machine model construction apparatus, applied to the factorization machine model construction device, where the factorization machine model construction apparatus includes:
the secret sharing module is used for acquiring initialization model parameters and first sparse data corresponding to a preset initialization model, and performing secret sharing with second equipment based on the initialization model parameters to acquire first party secret sharing initial model parameters so that the second equipment can determine the second party secret sharing initial model parameters;
an error calculation module, configured to perform federated interaction with the second device based on a first non-zero part in the first sparse data and the first party secret sharing initial model parameters, so as to jointly calculate a secret sharing model error with a second non-zero part in second sparse data obtained by the second device and the second party secret sharing initial model parameters;
and the generating module is used for updating the preset initialization model based on the secret sharing model error to obtain a longitudinal federal factorization model.
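The additive secret sharing performed by the secret sharing module can be sketched as follows. This is a minimal illustration, using real-valued shares for readability (a production protocol would share over a finite ring); the FM parameter names and shapes (linear weights w, latent factor matrix V) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def share_secret(value):
    """Split `value` additively into two shares: value = share_a + share_b."""
    share_a = rng.standard_normal(value.shape)
    share_b = value - share_a
    return share_a, share_b

# Hypothetical FM initialization: linear weights w and latent factor matrix V.
w = rng.standard_normal(5)
V = rng.standard_normal((5, 3))

# The first device keeps one share of each parameter; the other share goes to
# the second device. Neither share alone reveals anything about the parameter.
w_a, w_b = share_secret(w)
V_a, V_b = share_secret(V)
assert np.allclose(w_a + w_b, w) and np.allclose(V_a + V_b, V)
```

Either party can thus hold and update its share locally while the true parameter value stays hidden until reconstruction.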
Optionally, the error calculation module includes:
the first calculation sub-module is used for carrying out federated interaction with the second device based on a preset secret sharing multiplication triple, the first non-zero part and the first party secret sharing initial model parameter so as to combine the second non-zero part and the second party secret sharing initial model parameter and calculate a sparse matrix security inner product and a secret sharing intermediate parameter;
and the second calculation submodule is used for calculating the secret sharing model error based on the sparse matrix security inner product, the secret sharing intermediate parameter and a preset secret sharing model error calculation formula.
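The preset secret sharing multiplication triple referred to above is a Beaver-style triple: a pre-distributed shared (a, b, c = a·b) that lets the two parties multiply secret-shared values without revealing them. A minimal single-scalar sketch, with illustrative values and real-valued shares for readability:

```python
import numpy as np

rng = np.random.default_rng(1)

def share(v):
    """Additively split a scalar into two shares."""
    s = rng.standard_normal()
    return s, v - s

def beaver_mul(x_shares, y_shares, triple):
    """Multiply secret-shared values with a pre-distributed triple (a, b, c = a*b)."""
    (x0, x1), (y0, y1) = x_shares, y_shares
    (a0, a1), (b0, b1), (c0, c1) = triple
    # Only the masked differences e = x - a and f = y - b are ever revealed.
    e = (x0 - a0) + (x1 - a1)
    f = (y0 - b0) + (y1 - b1)
    # Each party computes its share of x*y locally; party 0 adds the public e*f.
    z0 = c0 + e * b0 + f * a0 + e * f
    z1 = c1 + e * b1 + f * a1
    return z0, z1

a, b = rng.standard_normal(), rng.standard_normal()
triple = (share(a), share(b), share(a * b))
z0, z1 = beaver_mul(share(3.0), share(4.0), triple)
assert abs((z0 + z1) - 12.0) < 1e-9
```

The same identity, applied entrywise or blockwise, underlies the secure inner products and intermediate parameters described above.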
Optionally, the first calculation sub-module comprises:
a first calculating unit, configured to perform federated interaction with the second device based on the first-type shared model parameter and the first non-zero part, so as to combine the second-party first-type shared model parameter and the second non-zero part, and calculate the first-type sparse matrix safety inner product;
a second calculating unit, configured to perform federated interaction with the second device based on the second-type shared model parameter and the first non-zero part, so as to combine the second-party second-type shared model parameter and the second non-zero part, and calculate the second-type sparse matrix security inner product;
a third calculating unit, configured to perform federated interaction with the second device based on the second-type shared model parameter, the first non-zero part, and the preset secret-sharing multiplication triple, so as to combine the second-party second-type shared model parameter and the second non-zero part to calculate the secret-sharing intermediate parameter.
Optionally, the second calculating unit includes:
the first interaction calculation subunit is configured to perform federated interaction with the second device based on the second-type shared model parameter, so as to calculate a cross inner product between the second-type shared model parameter and the second non-zero part, and obtain the first non-zero feature item cross inner product;
and the second interaction calculation subunit is used for performing federated interaction with the second device based on the first non-zero part, so as to calculate a cross inner product between the first non-zero part and the second party second type shared model parameter and obtain the second non-zero feature item cross inner product.
Optionally, the first interaction calculation subunit is further configured to:
generating a first public key, and encrypting the first sharing parameter based on the first public key to obtain an encrypted first sharing parameter;
sending the first public key and the encrypted first sharing parameter to a second device, so that the second device determines a second-party first non-zero feature item cross inner product and an encrypted first non-zero feature item cross inner product based on the first public key, the encrypted first sharing parameter, a second sharing parameter and the second non-zero part;
and receiving the encrypted first non-zero feature item cross inner product sent by the second device, and decrypting the encrypted first non-zero feature item cross inner product based on a first private key corresponding to the first public key to obtain the first non-zero feature item cross inner product.
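The encrypt, send, compute, decrypt exchange described by this subunit matches the pattern of an additively homomorphic public-key scheme. The source does not name the cryptosystem, so the sketch below uses a toy Paillier instance (tiny, insecure primes, for illustration only) to show how the second device can compute a cross inner product on ciphertexts that only the first device, holding the private key, can decrypt:

```python
import math
import random

# Toy Paillier instance: tiny, insecure primes, for illustration only.
p, q = 1009, 1013
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    """Encrypt a non-negative integer m < n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def enc_dot(enc_vec, plain_vec):
    """Inner product of an encrypted vector with a plaintext vector:
    multiplying ciphertexts adds plaintexts; exponentiation scales them."""
    acc = enc(0)
    for c, x in zip(enc_vec, plain_vec):
        acc = (acc * pow(c, x, n2)) % n2
    return acc

# The first device encrypts its shared parameters and sends them with the
# public key; the second device folds in its non-zero feature part and returns
# a ciphertext only the first device can decrypt.
params = [3, 1, 4]      # first device's shared parameters (toy values)
features = [2, 0, 5]    # second device's non-zero part (toy values)
c = enc_dot([enc(v) for v in params], features)
assert dec(c) == 3 * 2 + 1 * 0 + 4 * 5   # = 26
```

In the protocol above, the returned value would additionally be blinded into shares rather than decrypted to the full inner product; the homomorphic step itself is as shown.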
Optionally, the second interaction calculation subunit is further configured to:
receiving a second public key sent by the second device and a sent encrypted fourth sharing parameter, wherein the encrypted fourth sharing parameter is the fourth sharing parameter encrypted by the second device based on the second public key;
calculating a second non-zero feature item cross inner product and an encrypted second party second non-zero feature item cross inner product based on the second public key, the encrypted fourth sharing parameter, the first non-zero part and the third sharing parameter;
and sending the encrypted second non-zero feature item cross inner product to the second device, so that the second device can decrypt the encrypted second non-zero feature item cross inner product based on a second private key corresponding to the second public key to obtain the second party second non-zero feature item cross inner product.
Optionally, the second interaction calculation subunit is further configured to:
encrypting the third sharing parameter based on the second public key to obtain an encrypted third sharing parameter, and calculating an encryption model parameter corresponding to the encrypted third sharing parameter and the encrypted fourth sharing parameter;
calculating the cross inner product between each column vector in the encryption model parameters and each column vector in the first non-zero part to obtain the first encryption inner product;
and constructing the second non-zero feature item cross inner product based on the feature dimension of the encryption model parameters, and calculating the encrypted second party second non-zero feature item cross inner product jointly corresponding to the second non-zero feature item cross inner product and the first encryption inner product.
Optionally, the third calculating unit comprises:
a third interaction calculating subunit, configured to calculate, based on the preset secret sharing multiplication triple, a secret sharing product between the second-type secret sharing parameter matrix and the secret sharing transposed parameter matrix through federated interaction with the second device, and obtain the secret sharing matrix inner product, so that the second device calculates a secret sharing product between the second-party second-type secret sharing parameter matrix and the second-party secret sharing transposed parameter matrix, and obtains the second-party secret sharing matrix inner product;
and the fourth interaction calculating subunit is configured to perform federated interaction with the second device based on the secret sharing matrix inner product and the first non-zero part, so as to jointly calculate the secret sharing intermediate parameter by using the second party secret sharing matrix inner product and the second non-zero part.
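The matrix inner product computed by the third interaction calculating subunit can be sketched with a matrix form of the Beaver triple: a pre-shared (A, B, C = A @ B) lets the parties obtain shares of V @ V.T without ever opening V. Shapes and values below are illustrative, with real-valued shares for readability:

```python
import numpy as np

rng = np.random.default_rng(2)

def share(M):
    """Additively split a matrix into two shares."""
    R = rng.standard_normal(M.shape)
    return R, M - R

def beaver_matmul(X_shares, Y_shares, triple):
    """Secret-shared product X @ Y from a matrix Beaver triple (A, B, C = A @ B)."""
    (X0, X1), (Y0, Y1) = X_shares, Y_shares
    (A0, A1), (B0, B1), (C0, C1) = triple
    # Only the masked matrices E = X - A and F = Y - B are opened.
    E = (X0 - A0) + (X1 - A1)
    F = (Y0 - B0) + (Y1 - B1)
    Z0 = C0 + E @ B0 + A0 @ F + E @ F   # party 0's share (adds the public E @ F)
    Z1 = C1 + E @ B1 + A1 @ F           # party 1's share
    return Z0, Z1

# Second-type parameter matrix V and its transpose, with illustrative shapes.
V = rng.standard_normal((4, 2))
A, B = rng.standard_normal((4, 2)), rng.standard_normal((2, 4))
triple = (share(A), share(B), share(A @ B))
Z0, Z1 = beaver_matmul(share(V), share(V.T), triple)
assert np.allclose(Z0 + Z1, V @ V.T)   # shares of the matrix inner product
```

Each device ends up holding one share of the matrix inner product, which then feeds the joint computation of the secret sharing intermediate parameter.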
Optionally, the generating module includes:
the updating submodule is used for updating the secret sharing initial model parameters based on the secret sharing model error to obtain secret sharing updating parameters;
and the decryption submodule is used for performing decryption interaction with the second device based on the secret sharing update parameter to obtain the first target model parameter, so that the second device obtains the second target model parameter.
Optionally, the decryption sub-module includes:
the first interaction calculation unit is used for sending the first shared second-party model updating parameter to the second equipment so that the second equipment can calculate the second target model parameter based on the determined second shared second-party model updating parameter and the first shared second-party model updating parameter;
and the second interactive calculation unit is used for receiving a second shared first-party model updating parameter sent by the second equipment and calculating the first target model parameter based on the second shared first-party model updating parameter and the first shared first-party model updating parameter.
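The share exchange these two units describe reduces to each device sending the other its share of the *other* party's updated parameters, so that each side reconstructs only its own target model parameters. A toy sketch with hypothetical parameter shapes and real-valued shares:

```python
import numpy as np

rng = np.random.default_rng(3)

def split(v):
    """Additively split a vector into two shares."""
    s = rng.standard_normal(v.shape)
    return s, v - s

# Toy updated parameters for each party after the secret-shared update step.
w1_true = rng.standard_normal(4)   # first party's target model parameters
w2_true = rng.standard_normal(3)   # second party's target model parameters

# "first shared *" is held by the first device, "second shared *" by the second.
w1_s1, w1_s2 = split(w1_true)      # shares of the first-party parameters
w2_s1, w2_s2 = split(w2_true)      # shares of the second-party parameters

# The first device sends w2_s1 across; the second device sends w1_s2 back.
w1_target = w1_s1 + w1_s2          # reconstructed only at the first device
w2_target = w2_s1 + w2_s2          # reconstructed only at the second device
assert np.allclose(w1_target, w1_true) and np.allclose(w2_target, w2_true)
```

Because each device only ever learns both shares of its own parameters, neither side sees the other's target model.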
The specific implementation of the factorization machine model construction device of the present application is substantially the same as that of each embodiment of the factorization machine model construction method, and is not described herein again.
The embodiment of the present application further provides a personalized recommendation apparatus, applied to the personalized recommendation device, where the personalized recommendation apparatus includes:
the secret sharing module is used for acquiring sparse data of a user to be recommended by a first party and carrying out secret sharing with second equipment to acquire secret sharing model parameters;
the scoring module is used for performing longitudinal federal prediction interaction with the second device based on a first non-zero part in the first party to-be-recommended user sparse data and the secret sharing model parameters, so as to score the items to be recommended corresponding to the first party to-be-recommended user sparse data and obtain a first secret sharing scoring result;
the aggregation module is used for performing aggregation interaction with the second device based on the first secret sharing scoring result, so as to combine a second secret sharing scoring result determined by the second device and calculate a target scoring result;
and the generating module is used for generating a target recommendation list corresponding to the item to be recommended based on the target scoring result.
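Once the target scoring result is reconstructed, generating the target recommendation list is a plain descending sort over item scores. The item identifiers and score values below are hypothetical:

```python
# Hypothetical item identifiers and reconstructed target scores.
scores = {"item_a": 0.91, "item_b": 0.34, "item_c": 0.77}

# Rank items by score, highest first, to form the recommendation list.
recommendation_list = sorted(scores, key=scores.get, reverse=True)
assert recommendation_list == ["item_a", "item_c", "item_b"]
```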
Optionally, the aggregation module comprises:
a receiving unit, configured to receive the second sharing first party score and the second sharing second party score sent by the second device;
a first calculating unit for calculating a first party score based on the first sharing first party score and the second sharing first party score;
a second calculating unit for calculating a second party score based on the first sharing second party score and the second sharing second party score;
and the aggregation unit is used for aggregating the first party score and the second party score to obtain the target score.
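The aggregation performed by the receiving, calculating and aggregation units is share reconstruction followed by a sum. With illustrative share values:

```python
# Shares held by the first device after the exchange (illustrative values):
first_sharing_first_party = 0.2    # its own share of the first party score
first_sharing_second_party = 0.5   # its own share of the second party score
second_sharing_first_party = 0.3   # received from the second device
second_sharing_second_party = 0.1  # received from the second device

# Reconstruct each party's score from its two shares, then aggregate.
first_party_score = first_sharing_first_party + second_sharing_first_party
second_party_score = first_sharing_second_party + second_sharing_second_party
target_score = first_party_score + second_party_score
assert abs(target_score - 1.1) < 1e-9
```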
Optionally, the scoring module comprises:
a first joint calculation unit, configured to perform longitudinal federal prediction interaction with the second device based on the first non-zero part and the first party first sharing type model parameter, so as to combine the second party first sharing type model parameter to calculate the first type sharing scoring result;
and the second joint calculation unit is used for performing longitudinal federal prediction interaction with the second device based on the first party second sharing type model parameter, so as to combine the second party second sharing type model parameter and a second non-zero part of the second party to-be-recommended user sparse data and calculate a second type sharing scoring result.
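The split between first sharing type (linear) and second sharing type (latent factor) parameters mirrors the factorization machine itself: with features partitioned vertically between the parties, the FM score decomposes into two local scores plus one cross inner product of the latent projections, which is the only quantity the secure interaction must supply. A plain, non-secure numeric check of that decomposition, with hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(4)

def fm_score(w0, w, V, x):
    """Factorization machine score via Rendle's identity."""
    s = V.T @ x
    return w0 + w @ x + 0.5 * (s @ s - ((V ** 2).T @ (x ** 2)).sum())

# Vertically split features: party A holds x_a, party B holds x_b.
d_a, d_b, k = 3, 4, 2
w0 = 0.1
w_a, w_b = rng.standard_normal(d_a), rng.standard_normal(d_b)
V_a, V_b = rng.standard_normal((d_a, k)), rng.standard_normal((d_b, k))
x_a, x_b = rng.standard_normal(d_a), rng.standard_normal(d_b)

# Each party scores its own block; the only joint quantity is the cross
# inner product (V_a^T x_a) . (V_b^T x_b) -- the term the secure protocol computes.
local_a = fm_score(0.0, w_a, V_a, x_a)
local_b = fm_score(0.0, w_b, V_b, x_b)
cross = (V_a.T @ x_a) @ (V_b.T @ x_b)
joint = w0 + local_a + local_b + cross

full = fm_score(w0, np.concatenate([w_a, w_b]),
                np.vstack([V_a, V_b]), np.concatenate([x_a, x_b]))
assert np.isclose(joint, full)
```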
The specific implementation manner of the personalized recommendation device of the present application is substantially the same as that of each embodiment of the personalized recommendation method, and is not described herein again.
This embodiment provides a readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the factorization machine model construction method described in any one of the above.
The specific implementation manner of the readable storage medium of the present application is substantially the same as that of each embodiment of the factorization machine model construction method, and is not described herein again.
The embodiment of the present application provides a readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the personalized recommendation method according to any one of the above.
The specific implementation manner of the readable storage medium of the present application is substantially the same as that of each embodiment of the personalized recommendation method, and is not described herein again.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (17)

1. A factorization machine model construction method applied to a first device, the factorization machine model construction method comprising:
acquiring initialization model parameters and first sparse data corresponding to a preset initialization model, and performing secret sharing with a second device based on the initialization model parameters to obtain first-party secret sharing initial model parameters, so that the second device determines second-party secret sharing initial model parameters;
performing federated interaction with the second device based on a first non-zero part in the first sparse data and the first-party secret sharing initial model parameters, so as to jointly calculate a secret sharing model error with a second non-zero part in second sparse data obtained by the second device and the second-party secret sharing initial model parameters;
and updating the preset initialization model based on the secret sharing model error to obtain a longitudinal federal factorization model.
2. The factorization machine model construction method of claim 1, wherein the step of performing federated interaction with the second device based on a first non-zero part in the first sparse data and the first-party secret sharing initial model parameters, so as to jointly calculate a secret sharing model error with a second non-zero part in second sparse data obtained by the second device and the second-party secret sharing initial model parameters, comprises:
performing federated interaction with the second device based on a preset secret sharing multiplication triple, the first non-zero part and the first party secret sharing initial model parameter to combine the second non-zero part and the second party secret sharing initial model parameter, and calculating a sparse matrix security inner product and a secret sharing intermediate parameter;
and calculating the secret sharing model error based on the sparse matrix security inner product, the secret sharing intermediate parameter and a preset secret sharing model error calculation formula.
3. The factorizer model construction method of claim 2, wherein the first-party secret-sharing initial model parameters comprise first-type shared model parameters and second-type shared model parameters, the second-party secret-sharing initial model parameters comprise second-party first-type shared model parameters and second-party second-type shared model parameters, the sparse matrix security inner product comprises a first-type sparse matrix security inner product and a second-type sparse matrix security inner product,
the step of performing federated interaction with the second device based on a preset secret sharing multiplication triple, the first non-zero part and the first-party secret sharing initial model parameters, so as to combine the second non-zero part and the second-party secret sharing initial model parameters and calculate a sparse matrix security inner product and a secret sharing intermediate parameter, comprises:
performing federated interaction with the second device based on the first-type shared model parameters and the first non-zero part to combine the second-party first-type shared model parameters and the second non-zero part to calculate the first-type sparse matrix security inner product;
performing federated interaction with the second device based on the second-type shared model parameters and the first non-zero part to combine the second-party second-type shared model parameters and the second non-zero part to calculate the second-type sparse matrix security inner product;
and performing federated interaction with the second device based on the second-type shared model parameters, the first non-zero part and the preset secret sharing multiplication triple, so as to combine the second-party second-type shared model parameters and the second non-zero part to calculate the secret sharing intermediate parameter.
4. A factoring machine model construction method as in claim 3 wherein the second type sparse matrix safe inner product comprises a first non-zero eigenterm cross inner product and a second non-zero eigenterm cross inner product,
the step of performing federated interaction with the second device based on the second-type shared model parameters and the first non-zero part to combine the second-party second-type shared model parameters and the second non-zero part to calculate a second-type sparse matrix security inner product includes:
performing federated interaction with the second device based on the second type shared model parameter to calculate a cross inner product between the second type shared model parameter and the second non-zero part, and obtaining the first non-zero feature item cross inner product;
and carrying out federal interaction with the second equipment based on the first non-zero part to calculate a cross inner product between the first non-zero part and the second-party second-type shared model parameter, and obtaining the second non-zero feature item cross inner product.
5. A factoring machine model construction method as claimed in claim 4 wherein said second type of shared model parameters comprises first shared parameters and said second party second type of shared model parameters comprises second shared parameters,
the step of performing federated interaction with the second device based on the second-type shared model parameter to calculate a cross inner product between the second-type shared model parameter and the second non-zero part, and obtaining the first non-zero feature item cross inner product includes:
generating a first public key, and encrypting the first sharing parameter based on the first public key to obtain an encrypted first sharing parameter;
sending the first public key and the encrypted first sharing parameter to a second device, so that the second device determines a second-party first non-zero feature item cross inner product and an encrypted first non-zero feature item cross inner product based on the first public key, the encrypted first sharing parameter, a second sharing parameter and the second non-zero part;
and receiving the encrypted first non-zero feature item cross inner product sent by the second device, and decrypting the encrypted first non-zero feature item cross inner product based on a first private key corresponding to the first public key to obtain the first non-zero feature item cross inner product.
6. The factorizer model construction method of claim 4 wherein the second type of shared model parameter comprises a third shared parameter, the second party second type of shared model parameter comprises a fourth shared parameter,
the step of performing federated interaction with the second device based on the first non-zero part to calculate a cross-inner product between the first non-zero part and the second-party second-type shared model parameter, and obtaining the second non-zero feature item cross-inner product includes:
receiving a second public key sent by the second device and a sent encrypted fourth sharing parameter, wherein the encrypted fourth sharing parameter is the fourth sharing parameter encrypted by the second device based on the second public key;
calculating a second non-zero feature item cross inner product and an encrypted second party second non-zero feature item cross inner product based on the second public key, the encrypted fourth sharing parameter, the first non-zero part and the third sharing parameter;
and sending the encrypted second non-zero feature item cross inner product to the second device, so that the second device can decrypt the encrypted second non-zero feature item cross inner product based on a second private key corresponding to the second public key to obtain the second-party second non-zero feature item cross inner product.
7. The factorizer model construction method of claim 6, wherein the step of calculating a second non-zero feature cross-inner product and encrypting a second-party second non-zero feature cross-inner product based on the second public key, the encrypted fourth shared parameter, the first non-zero portion, and the third shared parameter comprises:
encrypting the third sharing parameter based on the second public key to obtain an encrypted third sharing parameter, and calculating an encryption model parameter corresponding to the encrypted third sharing parameter and the encrypted fourth sharing parameter;
calculating the cross inner product between each column vector in the encryption model parameters and each column vector in the first non-zero part to obtain the first encryption inner product;
and constructing the second non-zero feature item cross inner product based on the feature dimension of the encryption model parameters, and calculating the encrypted second-party second non-zero feature item cross inner product jointly corresponding to the second non-zero feature item cross inner product and the first encryption inner product.
8. The factorizer model construction method of claim 3, wherein the second type of shared model parameters comprise a second-type secret sharing parameter matrix and a secret sharing transposed parameter matrix corresponding to the second-type secret sharing parameter matrix, and the second party second type of shared model parameters comprise a second-party second-type secret sharing parameter matrix and a second-party secret sharing transposed parameter matrix corresponding to the second-party second-type secret sharing parameter matrix,
the step of performing federated interaction with the second device based on the second-type shared model parameters, the first non-zero part and the preset secret sharing multiplication triple, so as to combine the second-party second-type shared model parameters and the second non-zero part to calculate the secret sharing intermediate parameter, comprises:
calculating a secret sharing product between the second-type secret sharing parameter matrix and the secret sharing transposed parameter matrix through federated interaction with the second device based on the preset secret sharing multiplication triple, and obtaining the secret sharing matrix inner product, so that the second device can calculate the secret sharing product between the second-party second-type secret sharing parameter matrix and the second-party secret sharing transposed parameter matrix, and obtain the second-party secret sharing matrix inner product;
and performing federated interaction with the second device based on the secret sharing matrix inner product and the first non-zero part, so as to combine the second-party secret sharing matrix inner product and the second non-zero part to jointly calculate the secret sharing intermediate parameter.
9. The method of constructing a factorizer model of claim 1, wherein the longitudinal federated factorizer model comprises first target model parameters belonging to the first device and second target model parameters belonging to the second device,
the step of updating the preset initialization model based on the secret sharing model error to obtain a longitudinal federal factorization model comprises the following steps:
updating the secret sharing initial model parameters based on the secret sharing model error to obtain secret sharing updating parameters;
and performing decryption interaction with the second device based on the secret sharing update parameter to obtain the first target model parameter, so that the second device obtains the second target model parameter.
10. A method of factoring model construction as in claim 9 wherein the secret shared update parameters comprise a first shared first party model update parameter and a first shared second party model update parameter,
the step of performing decryption interaction with the second device based on the secret sharing update parameter to obtain the first target model parameter for the second device to obtain the second target model parameter includes:
sending the first shared second-party model update parameter to the second device, so that the second device calculates the second target model parameter based on the determined second shared second-party model update parameter and the first shared second-party model update parameter;
and receiving a second shared first-party model update parameter sent by the second device, and calculating the first target model parameter based on the second shared first-party model update parameter and the first shared first-party model update parameter.
11. A personalized recommendation method is applied to a first device, and comprises the following steps:
acquiring sparse data of a user to be recommended by a first party, and performing secret sharing with second equipment to obtain secret sharing model parameters;
performing longitudinal federal prediction interaction with the second device based on a first non-zero part in the sparse data of the first party to-be-recommended user and the secret sharing model parameters to score the to-be-recommended articles corresponding to the sparse data of the first party to-be-recommended user and obtain a first secret sharing scoring result;
performing aggregation interaction with the second device based on the first secret sharing scoring result to combine a second secret sharing scoring result determined by the second device to calculate a target scoring result;
and generating a target recommendation list corresponding to the item to be recommended based on the target scoring result.
12. The personalized recommendation method of claim 11, wherein the first secret sharing score result comprises at least a first sharing first party score and a first sharing second party score, the second secret sharing score result comprises at least a second sharing first party score and a second sharing second party score, and the goal score result comprises at least a goal score,
the step of performing aggregation interaction with the second device based on the first secret sharing score result to combine a second secret sharing score result determined by the second device, and calculating a target score result includes:
receiving the second sharing first party score and the second sharing second party score sent by the second device;
calculating a first party score based on the first sharing first party score and the second sharing first party score;
calculating a second party score based on the first sharing second party score and the second sharing second party score;
and aggregating the first party score and the second party score to obtain the target score.
13. The personalized recommendation method of claim 11, wherein the secret sharing model parameters comprise a first party first sharing type model parameter and a first party second type sharing model parameter, the second device comprises a second party to-be-recommended user sparse data and a second party secret sharing model parameter, wherein the second party secret sharing model parameter comprises a second party first sharing type model parameter and a second party second sharing type model parameter, the first secret sharing score result comprises a first type sharing score result and a second type sharing score result,
the step of performing longitudinal federal prediction interaction with the second device based on the first non-zero part in the sparse data of the first party to-be-recommended user and the secret sharing model parameters to score the to-be-recommended articles corresponding to the sparse data of the first party to-be-recommended user and obtain a first secret sharing scoring result includes:
performing longitudinal federal prediction interaction with the second device based on the first non-zero part and the first party first sharing type model parameter to combine the second party first sharing type model parameter to calculate the first type sharing scoring result;
and performing longitudinal federal prediction interaction with the second device based on the first party second sharing type model parameter, so as to combine the second party second sharing type model parameter and a second non-zero part of the second party to-be-recommended user sparse data and calculate a second type sharing scoring result.
14. A factorization machine model construction apparatus, comprising a memory, a processor, and a program stored on the memory for implementing a factorization machine model construction method, wherein:
the memory is configured to store the program for implementing the factorization machine model construction method; and
the processor is configured to execute the program for implementing the factorization machine model construction method, so as to implement the steps of the factorization machine model construction method according to any one of claims 1 to 10.
15. A readable storage medium having stored thereon a program for implementing a factorization machine model construction method, wherein the program, when executed by a processor, implements the steps of the factorization machine model construction method according to any one of claims 1 to 10.
16. A personalized recommendation device, comprising a memory, a processor, and a program stored on the memory for implementing a personalized recommendation method, wherein:
the memory is configured to store the program for implementing the personalized recommendation method; and
the processor is configured to execute the program for implementing the personalized recommendation method, so as to implement the steps of the personalized recommendation method according to any one of claims 11 to 13.
17. A readable storage medium having stored thereon a program for implementing a personalized recommendation method, wherein the program, when executed by a processor, implements the steps of the personalized recommendation method according to any one of claims 11 to 13.
CN202010893538.5A 2020-08-28 2020-08-28 Factorization machine model construction method and device and readable storage medium Pending CN112016698A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010893538.5A CN112016698A (en) 2020-08-28 2020-08-28 Factorization machine model construction method and device and readable storage medium


Publications (1)

Publication Number Publication Date
CN112016698A (en) 2020-12-01

Family

ID=73503139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010893538.5A Pending CN112016698A (en) 2020-08-28 2020-08-28 Factorization machine model construction method and device and readable storage medium

Country Status (1)

Country Link
CN (1) CN112016698A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020048369A1 (en) * 1995-02-13 2002-04-25 Intertrust Technologies Corp. Systems and methods for secure transaction management and electronic rights protection
CN105677701A (en) * 2015-12-24 2016-06-15 苏州大学 Social recommendation method based on oblivious transfer
US20170041296A1 (en) * 2015-08-05 2017-02-09 Intralinks, Inc. Systems and methods of secure data exchange
CN110288094A (en) * 2019-06-10 2019-09-27 深圳前海微众银行股份有限公司 Model parameter training method and device based on federation's study
US10607027B1 (en) * 2018-12-05 2020-03-31 Cyberark Software Ltd. Secretless secure data distribution and recovery process
CN110955907A (en) * 2019-12-13 2020-04-03 支付宝(杭州)信息技术有限公司 Model training method based on federal learning
CN111046433A (en) * 2019-12-13 2020-04-21 支付宝(杭州)信息技术有限公司 Model training method based on federal learning
CN111241567A (en) * 2020-01-16 2020-06-05 深圳前海微众银行股份有限公司 Longitudinal federal learning method, system and storage medium based on secret sharing
CN111259446A (en) * 2020-01-16 2020-06-09 深圳前海微众银行股份有限公司 Parameter processing method, equipment and storage medium based on federal transfer learning
CN111291417A (en) * 2020-05-09 2020-06-16 支付宝(杭州)信息技术有限公司 Method and device for protecting data privacy of multi-party combined training object recommendation model


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KAIZE DING ET AL., "Graph Neural Networks with High-order Feature Interactions", arXiv, 19 August 2019 (2019-08-19), pages 9-12 *
符玥, "Research on Blockchain-Based Reliable Storage and Secure Sharing Algorithms" (in Chinese), China Master's Theses Full-text Database (Information Science and Technology), vol. 2020, no. 4, 15 April 2020 (2020-04-15), pages 137-12 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113434878A (en) * 2021-06-25 2021-09-24 平安科技(深圳)有限公司 Modeling and application method, device, equipment and storage medium based on federal learning
CN113434878B (en) * 2021-06-25 2023-07-07 平安科技(深圳)有限公司 Modeling and application method, device, equipment and storage medium based on federal learning
CN113516253A (en) * 2021-07-02 2021-10-19 深圳市洞见智慧科技有限公司 Data encryption optimization method and device in federated learning
CN113516253B (en) * 2021-07-02 2022-04-05 深圳市洞见智慧科技有限公司 Data encryption optimization method and device in federated learning
CN117973488A (en) * 2024-03-29 2024-05-03 蓝象智联(杭州)科技有限公司 Large language model training and reasoning method and system with privacy protection
CN117973488B (en) * 2024-03-29 2024-06-07 蓝象智联(杭州)科技有限公司 Large language model training and reasoning method and system with privacy protection

Similar Documents

Publication Publication Date Title
Perifanis et al. Federated neural collaborative filtering
WO2022089256A1 (en) Method, apparatus and device for training federated neural network model, and computer program product and computer-readable storage medium
CN112733967B (en) Model training method, device, equipment and storage medium for federal learning
CN112016698A (en) Factorization machine model construction method and device and readable storage medium
CN112200321B (en) Inference method, system, device and medium based on knowledge federation and graph network
WO2021092977A1 (en) Vertical federated learning optimization method, appartus, device and storage medium
WO2022016964A1 (en) Vertical federated modeling optimization method and device, and readable storage medium
CN113159327A (en) Model training method and device based on federal learning system, and electronic equipment
CN114696990B (en) Multi-party computing method, system and related equipment based on fully homomorphic encryption
CN112000987A (en) Factorization machine classification model construction method and device and readable storage medium
CN112818374A (en) Joint training method, device, storage medium and program product of model
WO2022142366A1 (en) Method and apparatus for updating machine learning model
CN112000988A (en) Factorization machine regression model construction method and device and readable storage medium
Miao et al. Federated deep reinforcement learning based secure data sharing for Internet of Things
CN112926073A (en) Federal learning modeling optimization method, apparatus, medium, and computer program product
Zhang et al. Privacy-preserving deep learning based on multiparty secure computation: A survey
WO2021227959A1 (en) Data privacy protected multi-party joint training of object recommendation model
CN113051586B (en) Federal modeling system and method, federal model prediction method, medium, and device
CN111985573A (en) Factorization machine classification model construction method and device and readable storage medium
CN111767411A (en) Knowledge graph representation learning optimization method and device and readable storage medium
CN111291273A (en) Recommendation system optimization method, device, equipment and readable storage medium
US20220270299A1 (en) Enabling secure video sharing by exploiting data sparsity
CN114492850A (en) Model training method, device, medium, and program product based on federal learning
CN111859440B (en) Sample classification method of distributed privacy protection logistic regression model based on mixed protocol
US20230325718A1 (en) Method and apparatus for joint training logistic regression model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination