CN112000988A - Factorization machine regression model construction method and device and readable storage medium - Google Patents


Info

Publication number
CN112000988A
CN112000988A (application number CN202010893497.XA)
Authority
CN
China
Prior art keywords
secret sharing
sharing
parameter
party
secret
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010893497.XA
Other languages
Chinese (zh)
Inventor
高大山
鞠策
杨强
郑文琛
谭奔
杨柳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority application: CN202010893497.XA
Publication: CN112000988A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0255Targeted advertisements based on user history
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0269Targeted advertisements based on user profile or attribute
    • G06Q30/0271Personalized advertisement

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Marketing (AREA)
  • Bioethics (AREA)
  • General Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Medical Informatics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Health & Medical Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Storage Device Security (AREA)

Abstract

The application discloses a factorization machine regression model construction method, a device, and a readable storage medium. The method comprises: performing secret sharing with a second device to obtain secret sharing model parameters and secret sharing training data; performing longitudinal (vertical) federated learning modeling with the second device based on the secret sharing training data and the secret sharing model parameters, and calculating a secret sharing regression error; and determining first target regression model parameters based on the secret sharing regression error while assisting the second device in determining second target regression model parameters, so as to construct a longitudinal federated factorization machine regression model. The method and device solve the technical problem that the data privacy of the participants cannot be protected when a regression model is constructed based on longitudinal federated learning modeling.

Description

Factorization machine regression model construction method and device and readable storage medium
Technical Field
The application relates to the field of artificial intelligence of financial technology (Fintech), in particular to a method and equipment for constructing a regression model of a factorization machine and a readable storage medium.
Background
With the continuous development of financial technologies, especially internet technology and finance, more and more technologies (such as distributed computing, blockchain, and artificial intelligence) are applied in the financial field; at the same time, the financial industry places higher requirements on these technologies.
With the continuous development of computer software and artificial intelligence, federated learning is applied in ever more fields. At present, regression models are generally constructed either with a non-encrypted two-party federated learning method or with a homomorphic-encryption-based two-party longitudinal federated learning modeling method. The non-encrypted method carries a risk of data leakage and cannot protect the data privacy of the participants in longitudinal federated learning modeling. The homomorphic-encryption method requires a third party to generate a key pair and provide encryption and decryption services; this third party must be trusted, and if it is untrusted or of low credibility, the risk of data leakage remains and the data privacy of the participants likewise cannot be protected.
Disclosure of Invention
The application mainly aims to provide a method, equipment and a readable storage medium for constructing a regression model of a factorization machine, and aims to solve the technical problem that data privacy of each participant cannot be protected when the regression model is constructed based on longitudinal federated learning modeling in the prior art.
In order to achieve the above object, the present application provides a method for constructing a regression model of a factorization machine, which is applied to a device for constructing a regression model of a factorization machine, and the method for constructing a regression model of a factorization machine includes:
secret sharing is carried out with the second equipment, and secret sharing model parameters and secret sharing training data are obtained;
performing longitudinal federal learning modeling with the second device based on the secret sharing training data and the secret sharing model parameters, and calculating a secret sharing regression error;
and determining a first target regression model parameter based on the secret sharing regression error, and assisting the second equipment to determine a second target regression model parameter so as to construct a longitudinal Federal factorization machine regression model.
The application also provides a personalized recommendation method, which is applied to personalized recommendation equipment and comprises the following steps:
secret sharing is carried out with the second equipment, and secret sharing to-be-recommended user data and secret sharing model parameters are obtained;
inputting the secret sharing user data to be recommended into a preset scoring model, and scoring the to-be-recommended articles corresponding to the secret sharing user data to be recommended based on the secret sharing model parameters to obtain a first secret sharing scoring result;
performing federated interaction with the second device based on the first secret sharing scoring result to calculate a target score in combination with a second secret sharing scoring result determined by the second device;
and generating a target recommendation list corresponding to the item to be recommended based on the target score.
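As a minimal illustrative sketch (not part of the claimed method), once the target score of each candidate item has been reconstructed, generating the target recommendation list reduces to ranking the items by score. The helper and item names below are assumptions for illustration:

```python
def recommend(scores, top_k=3):
    """Rank candidate items by their reconstructed target scores.

    `scores` maps item id -> target score; names are illustrative only.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [item for item, _ in ranked[:top_k]]

# Example: item "b" scores highest, so it heads the recommendation list.
print(recommend({"a": 0.2, "b": 0.9, "c": 0.5}, top_k=2))  # ['b', 'c']
```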
The present application further provides a factorization machine regression model construction device. The device is a virtual device and is applied to factorization machine regression model construction equipment. The device includes:
the secret sharing module is used for carrying out secret sharing with the second equipment to obtain secret sharing model parameters and secret sharing training data;
the longitudinal federation module is used for carrying out longitudinal federation learning modeling with the second equipment based on the secret sharing training data and the secret sharing model parameters and calculating a secret sharing regression error;
and the determining module is used for determining a first target regression model parameter based on the secret sharing regression error and assisting the second equipment to determine a second target regression model parameter so as to construct a longitudinal Federal factorization machine regression model.
The present application further provides a personalized recommendation device. The device is a virtual device and is applied to personalized recommendation equipment. The personalized recommendation device includes:
the secret sharing module is used for carrying out secret sharing with the second equipment to obtain secret sharing to-be-recommended user data and secret sharing model parameters;
the scoring module is used for inputting the secret sharing user data to be recommended into a preset scoring model so as to score the to-be-recommended articles corresponding to the secret sharing user data to be recommended based on the secret sharing model parameters to obtain a first secret sharing scoring result;
the calculation module is used for carrying out federal interaction with the second equipment based on the first secret sharing scoring result so as to combine a second secret sharing scoring result determined by the second equipment to calculate a target score;
and the generating module is used for generating a target recommendation list corresponding to the item to be recommended based on the target score.
The present application further provides factorization machine regression model construction equipment. The equipment is an entity device and includes: a memory, a processor, and a program of the factorization machine regression model construction method that is stored on the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the factorization machine regression model construction method described above.
The present application further provides personalized recommendation equipment. The equipment is an entity device and includes: a memory, a processor, and a program of the personalized recommendation method that is stored on the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the personalized recommendation method described above.
The present application also provides a readable storage medium on which a program for implementing the factorization machine regression model construction method is stored, the program, when executed by a processor, implementing the steps of the factorization machine regression model construction method described above.
The present application also provides a readable storage medium on which a program for implementing the personalized recommendation method is stored, the program, when executed by a processor, implementing the steps of the personalized recommendation method described above.
Compared with the prior-art technique of constructing a regression model with a non-encrypted two-party federated learning method or a homomorphic-encryption-based two-party longitudinal federated learning modeling method, the method, equipment and readable storage medium of the present application first perform secret sharing with the second device to obtain secret sharing model parameters and secret sharing training data, then perform longitudinal federated learning modeling with the second device based on the secret sharing training data and the secret sharing model parameters and calculate a secret sharing regression error, after which the secret sharing model parameters are updated based on the secret sharing regression error to obtain secret sharing regression model update parameters. During interaction with the second device, all data sent or received is secret-shared data, so no public/private key generated by a third party is needed for encryption, and all data transmission takes place between the two parties participating in longitudinal federated learning modeling, which protects data privacy. Further, based on the secret sharing regression model update parameters, a decryption interaction with the second device allows the first target regression model parameters to be determined while assisting the second device in determining the second target regression model parameters, completing the construction of the longitudinal federated factorization machine regression model. This overcomes the technical defect of the prior art that the data privacy of the participants in longitudinal federated learning modeling cannot be protected when the regression model is constructed with a two-party federated learning method based on no encryption or on homomorphic encryption,
and therefore solves the technical problem that the data privacy of the participants cannot be protected when a regression model is constructed based on longitudinal federated learning modeling.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to illustrate the embodiments of the present application or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a first embodiment of a method for constructing a regression model of a factorizer of the present application;
FIG. 2 is a schematic flow chart of a second embodiment of a factorization machine regression model construction method of the present application;
FIG. 3 is a flowchart illustrating a third embodiment of a personalized recommendation method according to the present application;
FIG. 4 is a schematic structural diagram of a hardware operating environment related to a regression model construction method for a factorizer in the embodiment of the present application;
fig. 5 is a schematic device structure diagram of a hardware operating environment related to a personalized recommendation method according to an embodiment of the present application.
The objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the method for constructing a regression model of a factorization machine of the present application, referring to fig. 1, the method for constructing a regression model of a factorization machine is applied to a first device, and the method for constructing a regression model of a factorization machine includes:
step S10, carrying out secret sharing with the second device to obtain secret sharing model parameters and secret sharing training data;
in this embodiment, it should be noted that the first device and the second device are both longitudinal federal learning participants, the first device possesses first party training label data with sample labels, and the first party training label data can be represented by a first party training data matrix and the sample labels, for example, assuming that the first party training label data is (X)A,Y),XATraining a matrix of data for the first party, Y being the sample label, and additionally the second device having second party training data without sample labels, the second party training data being representable by a matrix of second party training data, e.g. assuming that the matrix of second party training data is XB
Additionally, in this embodiment, the factorization machine regression model is a machine learning model constructed based on longitudinal federated learning, and its model parameters are jointly held by the first device and the second device. The factorization machine regression model includes first type model parameters and second type model parameters; the first type model parameters include first party first type model parameters and second party first type model parameters, and the second type model parameters include first party second type model parameters and second party second type model parameters. For example, if the first type model parameter is w and the second type model parameter is V, then the first party first type model parameter is w_A, the second party first type model parameter is w_B, the first party second type model parameter is V_A, and the second party second type model parameter is V_B.
Additionally, it should be noted that secretly sharing a piece of data is the process of splitting the data into two sub-shares, held by the two parties of the secret sharing. For example, if the two parties of the secret sharing are A and B, and the data X is secretly shared, then A holds the first share [[X]]_A of the data X, B holds the second share [[X]]_B, and X = [[X]]_A + [[X]]_B.
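This additive splitting can be sketched in a few lines; the modulus and helper names below are assumptions for illustration, since the embodiment does not fix a concrete ring:

```python
import random

MODULUS = 2**61 - 1  # illustrative field size; the patent does not specify one

def share(x):
    """Split secret x into two additive shares with x = (a + b) mod MODULUS."""
    a = random.randrange(MODULUS)
    b = (x - a) % MODULUS
    return a, b  # party A keeps a = [[X]]_A, party B keeps b = [[X]]_B

def reconstruct(a, b):
    """Recombine the two shares to recover the secret."""
    return (a + b) % MODULUS

a_share, b_share = share(12345)
assert reconstruct(a_share, b_share) == 12345  # neither share alone reveals x
```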
Additionally, it should be noted that the model expression of the factorization machine regression model is as follows:
z(x) = &lt;w, x&gt; + Σ_{i&lt;j} &lt;V_i, V_j&gt; x_i x_j
where x is the data vector corresponding to the model input data, and the model input data comprises the first party training label data (X_A, Y) and the second party training data X_B, with Y the sample label, X_A having d_A feature dimensions and X_B having d_B feature dimensions. The first type model parameter w is a d-dimensional vector (d = d_A + d_B) and the second type model parameter V is a d × d_X matrix. Moreover w = [w_A, w_B], i.e. w is composed of the first party first type model parameter w_A and the second party first type model parameter w_B, where w_A is a d_A-dimensional vector and w_B is a d_B-dimensional vector. Additionally, V = [V_A; V_B], i.e. V is composed of the first party second type model parameter V_A and the second party second type model parameter V_B, where V_A is a d_A × d_X matrix and V_B is a d_B × d_X matrix. &lt;w, x&gt; is the inner product of w and x, V_i is the i-th row vector of V (the latent vector of the i-th feature), V_j is the j-th row vector of V, and x_i and x_j are the i-th and j-th components of x.
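For reference, the model expression above can be evaluated directly on plaintext data. The sketch below (a hypothetical helper, not the secret-shared protocol) treats x as a single d-dimensional vector with d = d_A + d_B, and uses the standard identity that evaluates the pairwise term in O(d·d_X) time:

```python
import numpy as np

def fm_score(x, w, V):
    """z(x) = <w, x> + sum_{i<j} <V_i, V_j> x_i x_j,
    where V_i is the latent vector (row) of feature i."""
    linear = w @ x
    # Identity: sum_{i<j} <V_i, V_j> x_i x_j
    #         = 0.5 * (||V^T x||^2 - sum_i ||V_i||^2 x_i^2)
    Vx = V.T @ x
    pairwise = 0.5 * (Vx @ Vx - np.sum((V**2).T @ (x**2)))
    return linear + pairwise

# Sanity check against the naive double loop over feature pairs.
rng = np.random.default_rng(0)
d, d_X = 6, 3
x, w, V = rng.normal(size=d), rng.normal(size=d), rng.normal(size=(d, d_X))
naive = w @ x + sum(V[i] @ V[j] * x[i] * x[j]
                    for i in range(d) for j in range(i + 1, d))
assert abs(fm_score(x, w, V) - naive) < 1e-9
```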
Secret sharing is performed with the second device to obtain the secret sharing model parameters and the secret sharing training data. Specifically, the first device obtains the initialization model corresponding to the factorization machine regression model and the first party training label data, together with the first party first type model parameters and the corresponding first party second type model parameters of the initialization model. Similarly, before performing secret sharing, the second device obtains the second party training data and the second party first type model parameters and corresponding second party second type model parameters of the initialization model. Secret sharing is then performed between the first device and the second device, in which the first device provides the first party training label data, the first party first type model parameters and the first party second type model parameters, and the second device provides the second party training data, the second party first type model parameters and the second party second type model parameters. As a result, the first device obtains the secret sharing model parameters and the secret sharing training data, and the second device obtains the second party secret sharing model parameters and second party secret sharing training data held by its own side. The secret sharing model parameters comprise a first share of the first party first type model parameters, a first share of the first party second type model parameters, a second share of the second party first type model parameters and a second share of the second party second type model parameters; the second party secret sharing model parameters comprise a second share of the first party first type model parameters, a second share of the first party second type model parameters, a first share of the second party first type model parameters and a first share of the second party second type model parameters. The secret sharing training data comprises a first share of the first party training label data and a second share of the second party training data, and the second party secret sharing training data comprises a second share of the first party training label data and a first share of the second party training data.
Wherein the secret sharing model parameters include a first sharing parameter and a second sharing parameter, the secret sharing training data includes a first sharing training data and a second sharing training data,
the step of performing secret sharing with the second device to obtain secret sharing model parameters and secret sharing training data comprises:
step S11, obtaining a first party model parameter and first party training label data, and taking a first share of the first party model parameter as the first sharing parameter;
in this embodiment, it should be noted that the first party model parameters include first party first type model parameters and first party second type model parameters, and the second party secret sharing model parameters include third sharing parameters and fourth sharing parameters.
The first party model parameters and the first party training label data are obtained, and a first share of the first party model parameters is taken as the first sharing parameter. Specifically, the first party first type model parameters, the first party second type model parameters and the first party training label data are each split into two shares, and the first share of the first party first type model parameters together with the first share of the first party second type model parameters is taken as the first sharing parameter.
Step S12, sending the second share of the first party model parameter to the second device, so that the second device determines a third sharing parameter;
in this embodiment, the second share of the first party model parameter is sent to the second device, so that the second device determines a third shared parameter, specifically, the second share of the first party first type model parameter and the second share of the first party second type model parameter are both sent to the second device, and then the second device shares the third shared parameter with the second share of the first party first type model parameter and the second share of the first party second type model parameter.
Step S13, receiving a second sharing parameter sent by the second device, where the second sharing parameter is a second share of a second-party model parameter obtained by the second device, and a first share of the second-party model parameter is a fourth sharing parameter of the second device;
in this embodiment, a second sharing parameter sent by the second device is received, where the second sharing parameter is a second share of a second-party model parameter obtained by the second device, and a first share of the second-party model parameter is a fourth sharing parameter of the second device, specifically, the second device splits the second-party first-type model parameter and the second-party second-type model parameter into two shares, respectively, further takes the first share of the second-party first-type model parameter and the first share of the second-party second-type model parameter as a fourth sharing parameter, and sends both the second share of the second-party first-type model parameter and the second share of the second-party second-type model parameter to the first device, further the first device receives the second share of the second-party first-type model parameter and the second share of the second-party second-type model parameter, and the second share of the second party first type model parameters and the second share of the second party second type model parameters are taken together as the second shared parameters.
Step S14, using the first share of the first party training label data as the first shared training data, and sending the second share of the first party training label data to the second device, so that the second device determines third shared training data;
in this embodiment, it should be noted that the secret shared training data of the second party includes third shared training data and fourth shared training data.
The first share of the first party training label data is taken as the first shared training data, and the second share of the first party training label data is sent to the second device, so that the second device can take it as the third shared training data.
Step S15, receiving second shared training data sent by a second device, where the second shared training data is a second share of second-party training data acquired by the second device, and a first share of the second-party training data is fourth shared training data of the second device.
In this embodiment, the second shared training data sent by the second device is received, where the second shared training data is the second share of the second party training data obtained by the second device, and the first share of the second party training data is the fourth shared training data of the second device. Specifically, the second device splits the second party training data into two shares, takes the first share as the fourth shared training data and sends the second share to the first device, and the first device takes the received second share of the second party training data as the second shared training data.
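Steps S11 to S15 amount to each party splitting its local values in two and exchanging the second shares. A minimal sketch with toy scalar parameters (the modulus, helper names and values are illustrative assumptions):

```python
import random

MOD = 2**61 - 1  # illustrative modulus

def split(value):
    """Split a value into (first share, second share), summing to value mod MOD."""
    first = random.randrange(MOD)
    return first, (value - first) % MOD

w_A, w_B = 7, 11  # toy first-type parameters of the first and second party

# S11/S12: the first party splits w_A, keeps the first share, sends the second.
wA_first, wA_second = split(w_A)
# S13: the second party splits w_B, keeps the first share, sends the second.
wB_first, wB_second = split(w_B)

# The first device now holds (wA_first, wB_second); the second device holds
# (wA_second, wB_first). Neither can recover a parameter without the other.
assert (wA_first + wA_second) % MOD == w_A
assert (wB_first + wB_second) % MOD == w_B
```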
Step S20, performing longitudinal federate learning modeling with the second device based on the secret sharing training data and the secret sharing model parameters, and calculating a secret sharing regression error;
in this embodiment, it should be noted that the secret sharing training data includes first sharing reference training data and second sharing training data, where the first sharing training data is a first share of first party training label data, the second sharing training data is a second share of the second party training data, and the secret sharing model parameter includes a first sharing parameter and a second sharing parameter, where the first sharing parameter includes a first share of a first party first type model parameter and then a first share of a first party second type model parameter, and the second sharing parameter includes a second share of a second party first type model parameter and a second share of the second party second type model parameter.
Additionally, when performing longitudinal federated learning modeling, the second device provides the second party secret sharing training data and the second party secret sharing model parameters. The second party secret sharing training data comprises the third shared training data and the fourth shared training data, where the third shared training data is the second share of the first party training label data and the fourth shared training data is the first share of the second party training data. The second party secret sharing model parameters include the third sharing parameter and the fourth sharing parameter, where the third sharing parameter comprises the second share of the first party first type model parameters and the second share of the first party second type model parameters, and the fourth sharing parameter comprises the first share of the second party first type model parameters and the first share of the second party second type model parameters.
Longitudinal federated learning modeling is performed with the second device based on the secret sharing training data and the secret sharing model parameters, and the secret sharing regression error is calculated. Specifically, federated interaction is performed with the second device based on the first sharing parameter, the second sharing parameter, the first shared training data and the second shared training data, with the second device providing the third sharing parameter, the fourth sharing parameter, the third shared training data and the fourth shared training data during the federated interaction; a secret sharing intermediate parameter is calculated, and the secret sharing regression error is then calculated from the secret sharing intermediate parameter through a preset secret sharing regression error calculation formula.
Wherein the secret sharing model parameters comprise a first type of sharing parameter and a second type of sharing parameter, the secret sharing training data comprises secret sharing tag data,
the step of performing longitudinal federated learning modeling with the second device based on the secret sharing training data and the secret sharing model parameters, and calculating a secret sharing regression error includes:
step S21, based on a preset secret sharing mechanism, through carrying out federal interaction with the second device, calculating a secret sharing cross feature item inner product corresponding to the second type sharing parameter and the secret sharing training data;
in this embodiment, it should be noted that the preset secret sharing mechanism includes secret sharing addition and secret sharing multiplication. The first type sharing parameter includes a first share of the first party first type model parameter and a second share of the second party first type model parameter, and the second type sharing parameter includes a first share of the first party second type model parameter and a second share of the second party second type model parameter. The second device possesses the second party first type sharing parameter and the second party second type sharing parameter, where the second party first type sharing parameter includes a second share of the first party first type model parameter and a first share of the second party first type model parameter, and the second party second type sharing parameter includes a second share of the first party second type model parameter and a first share of the second party second type model parameter. The secret shared tag data is a secret shared sample tag.
A secret sharing cross feature item inner product corresponding jointly to the second type sharing parameter and the secret sharing training data is calculated by performing federated interaction with the second device based on a preset secret sharing mechanism. Specifically, based on secret sharing multiplication, a cross inner product between each parameter element in the second type sharing parameter and each training data element in the secret sharing training data is calculated through federated interaction with the second device, where one cross inner product exists between one parameter element and one training data element, and the cross inner products are accumulated to obtain the secret sharing cross feature item inner product. Additionally, during the federated interaction the second device likewise calculates, based on secret sharing multiplication, the cross inner products between each second party parameter element in the second party second type sharing parameter and each second party training data element in the second party secret sharing training data, obtaining the second party secret sharing cross feature item inner product.
Wherein the second type of shared parameters comprises first and second shared second type model parameters, the secret sharing training data comprises first and second shared training data, the secret sharing cross feature term inner product comprises first and second cross feature term inner products, the preset secret sharing mechanism comprises secret sharing multiplication,
the step of calculating the secret sharing cross feature item inner product which is jointly corresponding to the second type sharing parameter and the secret sharing training data through carrying out federal interaction with the second equipment based on a preset secret sharing mechanism comprises the following steps:
step S211, based on the secret sharing multiplication, calculating a cross inner product between each element in the first shared second type model parameter and each element in the first shared training data by performing federal interaction with the second device, and obtaining each first element cross inner product;
in this embodiment, it should be noted that the first shared second-type model parameter may be a parameter in a matrix form, where the first shared second-type model parameter is a second share of a second-party second-type model parameter, each column of the first shared second-type model parameter is a first parameter element, and the first shared training data may be training data in a matrix form, where the first shared training data is a second share of the second-party training data, and each column of the first shared training data is a first training data element.
Based on the secret sharing multiplication, calculating cross inner products between each element in the first shared second type model parameters and each element in the first shared training data through carrying out federated interaction with the second device, and obtaining each first element cross inner product, specifically, obtaining a first secret sharing multiplication triple corresponding to the secret sharing multiplication, and further carrying out federated interaction with the second device through the secret sharing multiplication based on the first secret sharing multiplication triple, and calculating an inner product between each first parameter element and each first training data element, and obtaining each first element cross inner product, wherein when the second device carries out federated interaction with the first device, the second device calculates a second-party first element cross inner product corresponding to each first element cross inner product.
In one implementable scenario, it is assumed that the first device possesses a secret sharing multiplication triple ([[a]]_A, [[b]]_A, [[c]]_A) and the second device possesses a secret sharing multiplication triple ([[a]]_B, [[b]]_B, [[c]]_B), where [[a]]_A + [[a]]_B = a, [[b]]_A + [[b]]_B = b, [[c]]_A + [[c]]_B = c, and c = a*b. The first parameter element secretly shared by the first device is [[x]]_A and the first training data element is [[y]]_A; the parameter element corresponding to the first parameter element in the second device is [[x]]_B and the training data element corresponding to the first training data element is [[y]]_B, where [[x]]_A + [[x]]_B = x and [[y]]_A + [[y]]_B = y. The first element cross inner product computed by the first device is then the secret share [[x*y]]_A, the second party first element cross inner product computed by the second device is [[x*y]]_B, and [[x*y]]_A + [[x*y]]_B = x*y. Specifically, the calculation flow is as follows:
First, the first device calculates [[e]]_A = [[x]]_A - [[a]]_A and [[f]]_A = [[y]]_A - [[b]]_A, and the second device calculates [[e]]_B = [[x]]_B - [[a]]_B and [[f]]_B = [[y]]_B - [[b]]_B. The first device then sends [[e]]_A and [[f]]_A to the second device, and the second device sends [[e]]_B and [[f]]_B to the first device, so that both devices obtain e = x - a and f = y - b. The first device calculates [[x*y]]_A = f*[[a]]_A + e*[[b]]_A + [[c]]_A, and the second device calculates [[x*y]]_B = e*f + f*[[a]]_B + e*[[b]]_B + [[c]]_B. Substituting e = x - a and f = y - b into the sum [[x*y]]_A + [[x*y]]_B yields exactly x*y, that is, the first element cross inner product and the second party first element cross inner product are computed correctly.
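The calculation flow above is the standard Beaver-triple multiplication over additive shares. A minimal Python sketch follows; the dealer-style triple generation and the choice of modulus are illustrative assumptions, not part of the patent:

```python
import random

P = 2**61 - 1  # public modulus for the additive sharing (illustrative choice)

def share(v):
    """Additively split v into two random shares that sum to v mod P."""
    a = random.randrange(P)
    return a, (v - a) % P

def beaver_triple():
    """Dealer-style generation of a multiplication triple with c = a*b (sketch only)."""
    a, b = random.randrange(P), random.randrange(P)
    c = (a * b) % P
    return share(a), share(b), share(c)

def shared_mul(x_shares, y_shares):
    """Multiply two secret-shared values using one Beaver triple.

    Device 1 holds index 0 of each pair, device 2 holds index 1.
    Only e = x - a and f = y - b are exchanged, revealing nothing about x or y.
    """
    (aA, aB), (bA, bB), (cA, cB) = beaver_triple()
    xA, xB = x_shares
    yA, yB = y_shares
    # each device masks its shares and publishes the difference
    eA, fA = (xA - aA) % P, (yA - bA) % P
    eB, fB = (xB - aB) % P, (yB - bB) % P
    e, f = (eA + eB) % P, (fA + fB) % P  # both devices learn e and f
    # local product shares, exactly as in the calculation flow above
    zA = (f * aA + e * bA + cA) % P
    zB = (e * f + f * aB + e * bB + cB) % P
    return zA, zB

zA, zB = shared_mul(share(123), share(456))
assert (zA + zB) % P == 123 * 456  # shares reconstruct to x*y
```

The two local expressions sum to (x-a)(y-b) + (y-b)a + (x-a)b + ab = xy, which is why the reconstruction is exact.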
Step S212, based on the secret sharing multiplication, calculating cross inner products between each element in the second sharing second type model parameter and each element in the second sharing training data through carrying out federal interaction with the second device, and obtaining each second element cross inner product;
in this embodiment, it should be noted that the second shared second-type model parameter may be a parameter in a matrix form, where the second shared second-type model parameter is a first share of a first-party second-type model parameter, each column of the second shared second-type model parameter is a second parameter element, and the second shared training data may be training data in a matrix form, where the second shared training data is a first share of the first-party training label data, and each column of the second shared training data is a second training data element.
And on the basis of the secret sharing multiplication, calculating cross inner products between each element in the second shared second type model parameter and each element in the second shared training data through carrying out federal interaction with the second device to obtain each second element cross inner product, specifically, obtaining a second secret sharing multiplication triple corresponding to the secret sharing multiplication, further carrying out federal interaction with the second device through secret sharing multiplication on the basis of the second secret sharing multiplication triple, calculating an inner product between each second parameter element and each second training data element to obtain each second element cross inner product.
Step S213, respectively accumulating each first element cross inner product and each second element cross inner product to obtain a first cross feature term inner product corresponding to each first element cross inner product and a second cross feature term inner product corresponding to each second element cross inner product.
In this embodiment, each of the first element cross inner products and each of the second element cross inner products are respectively accumulated to obtain the first cross feature term inner product corresponding to each of the first element cross inner products and the second cross feature term inner product corresponding to each of the second element cross inner products, specifically, each of the first element cross inner products is accumulated to obtain the first cross feature term inner product, and each of the second element cross inner products is accumulated to obtain the second cross feature term inner product, where a calculation expression of the first cross feature term inner product is as follows:
(formula image BDA0002656592270000121; not reproduced)
wherein the first parameter element (image BDA0002656592270000122) secretly shared by the first device is a column vector in the second share of the second party second type model parameters, and the first training data element (image BDA0002656592270000123) secretly shared by the first device is a column vector in the second share of the second party training data. Additionally, the formula for calculating the second cross feature term inner product is as follows:
(formula image BDA0002656592270000131; not reproduced)
wherein the second parameter element (image BDA0002656592270000132) secretly shared by the first device is a column vector in the first share of the first party second type model parameters, and the second training data element (image BDA0002656592270000133) secretly shared by the first device is a column vector in the first share of the first party training label data.
Additionally, the calculation formula by which the second device calculates the second party first cross feature term inner product is as follows:
(formula image BDA0002656592270000134; not reproduced)
wherein the second party first parameter element (image BDA0002656592270000135) secretly shared by the second device is a column vector in the first share of the second party second type model parameters, and the second party first training data element (image BDA0002656592270000136) secretly shared by the second device is a column vector in the first share of the second party training data. Additionally, the calculation formula by which the second device calculates the second party second cross feature term inner product is as follows:
(formula image BDA0002656592270000137; not reproduced)
wherein the second party second parameter element (image BDA0002656592270000138) secretly shared by the second device is a column vector in the second share of the first party second type model parameters, and the second party second training data element (image BDA0002656592270000139) secretly shared by the second device is a column vector in the second share of the first party training label data.
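Because additive shares are linear, the accumulation in step S213 requires no interaction: each device sums its own shares of the element cross inner products locally, and the sums reconstruct to the plaintext cross feature term inner product. A toy sketch, with illustrative values and modulus not taken from the patent:

```python
import random

P = 2**61 - 1  # public modulus for the additive sharing (illustrative)

def share(v):
    """Split v into two random shares that sum to v mod P."""
    a = random.randrange(P)
    return a, (v - a) % P

# plaintext element cross inner products x_i * y_i (toy values)
plaintext_products = [12, 8, 5]
shares = [share(p) for p in plaintext_products]

# each device accumulates only its own shares -- no communication needed
S_A = sum(sA for sA, _ in shares) % P
S_B = sum(sB for _, sB in shares) % P

# the accumulated shares reconstruct to the sum of the plaintext products
assert (S_A + S_B) % P == sum(plaintext_products)
```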
Step S22, based on the preset secret sharing mechanism, through carrying out federal interaction with the second device, calculating a secret sharing intermediate parameter corresponding to the secret sharing cross feature item inner product, the secret sharing training data, the first type sharing parameter and the second type sharing parameter;
in this embodiment, based on the preset secret sharing mechanism, by performing federated interaction with the second device, a secret sharing intermediate parameter that is commonly corresponding to the secret sharing cross feature item inner product, the secret sharing training data, the first type sharing parameter, and the second type sharing parameter is calculated, specifically, based on a third secret sharing multiplication triple corresponding to the secret sharing multiplication, calculating a first intermediate parameter item based on the first type of sharing parameter and the secret sharing training data through federated interaction with the second device, and computing a second intermediate parameter term based on the second type of sharing parameter, the secret sharing cross feature term inner product, and the secret sharing training data, and further calculating the sum of the first intermediate parameter item and the second intermediate parameter item to obtain the secret sharing intermediate parameter.
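For reference, the two intermediate parameter terms mirror the standard plaintext factorization machine score: the linear term corresponds to the first intermediate parameter item and the pairwise cross term to the second. The patent's exact shared formulas appear only as images, so the sketch below shows just the assumed plaintext form, using the usual O(d*k) identity for the cross term:

```python
def fm_predict(w, V, x):
    """Plaintext factorization machine score: linear term plus pairwise cross term.

    w: length-d list of first type (linear) parameters
    V: d x k list of lists of second type (factor) parameters, one row per feature
    x: length-d feature vector
    """
    d, k = len(x), len(V[0])
    linear = sum(w[i] * x[i] for i in range(d))  # first intermediate parameter term
    # identity: sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * (||sum_i v_i x_i||^2 - sum_i ||v_i||^2 x_i^2)
    s = [sum(V[i][f] * x[i] for i in range(d)) for f in range(k)]
    sq = sum(V[i][f] ** 2 * x[i] ** 2 for i in range(d) for f in range(k))
    cross = 0.5 * (sum(sf * sf for sf in s) - sq)  # second intermediate parameter term
    return linear + cross

w = [0.5, -1.0, 2.0]
V = [[1.0, 0.0], [0.5, 0.5], [-1.0, 2.0]]
x = [1.0, 2.0, 0.5]
# the identity agrees with the explicit sum over feature pairs
explicit = sum(
    sum(V[i][f] * V[j][f] for f in range(2)) * x[i] * x[j]
    for i in range(3) for j in range(i + 1, 3)
)
linear = sum(wi * xi for wi, xi in zip(w, x))
assert abs(fm_predict(w, V, x) - (linear + explicit)) < 1e-9
```

The secret-shared protocol in steps S221-S223 evaluates exactly this kind of score, but with every multiplication replaced by a triple-assisted shared multiplication.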
Wherein the preset secret sharing mechanism comprises a secret sharing multiplication and a secret sharing addition,
the step of calculating a secret sharing intermediate parameter corresponding to the secret sharing cross feature item inner product, the secret sharing training data, the first type sharing parameter and the second type sharing parameter together by performing federal interaction with the second device based on the preset secret sharing mechanism comprises:
step S221, based on the secret sharing multiplication, through carrying out federal interaction with the second device, calculating a first intermediate parameter item corresponding to the first type sharing parameter and the secret sharing training data;
in this embodiment, it should be noted that the first intermediate parameter item includes a first shared intermediate parameter item and a second shared intermediate parameter item, the first type shared parameter includes a first shared first type parameter and a second shared first type parameter, where the first shared first type parameter is a second share of the second party first type parameter, the second shared first type parameter is a first share of the first party first type parameter, and the secret shared training data includes first shared training data and second shared training data, where the first shared training data is a second share of the second party training data, and the second shared training data is a first share of the first party training data.
Calculating a first intermediate parameter item corresponding to the first type of shared parameter and the secret sharing training data together by performing federated interaction with the second device based on the secret sharing multiplication, and specifically, calculating inner products of respective column vectors of the first type of shared first type of model parameter and the first sharing training data respectively by performing federated interaction with the second device based on the third secret sharing multiplication triple, obtaining respective first intermediate parameter inner products, and accumulating the respective first intermediate parameter inner products to obtain the first shared intermediate parameter item, and calculating inner products of respective column vectors of the second type of shared first type of model parameter and the second sharing training data respectively to obtain respective second intermediate parameter inner products, and accumulating the respective second intermediate parameter inner products to obtain the second shared intermediate parameter item, wherein, the calculation expression of the first shared intermediate parameter item is as follows:
(formula image BDA0002656592270000141; not reproduced)
wherein M1 is the first shared intermediate parameter item, d_B is the feature dimension of X_B, [[w_B]]_A is the first shared first type model parameter secretly shared by the first device, i.e. the second share of the second party first type model parameter, and [[X_B]]_A is a column vector of the first shared training data, i.e. of the second share of the second party training data X_B. Additionally, the calculation expression of the second shared intermediate parameter item is as follows:
(formula image BDA0002656592270000151; not reproduced)
wherein M2 is the second shared intermediate parameter item, d_A is the feature dimension of X_A, [[w_A]]_A is the second shared first type model parameter secretly shared by the first device, i.e. the first share of the first party first type model parameter, and [[X_A]]_A is a column vector of the second shared training data, i.e. of the first share of the first party training label data X_A.
Additionally, it should be noted that the second device calculates a second party first shared intermediate parameter item based on the first share of the second party first type model parameter and the first share of the second party training data, in a manner consistent with the calculation of the first shared intermediate parameter item, and calculates a second party second shared intermediate parameter item based on the second share of the first party first type model parameter and the second share of the first party training label data, in a manner consistent with the calculation of the second shared intermediate parameter item.
Step S222, calculating a second intermediate parameter item corresponding to the secret sharing cross feature item inner product, the secret sharing training data and the second type sharing parameter based on the secret sharing addition and the secret sharing multiplication;
in this embodiment, it should be noted that the second intermediate parameter item includes a third shared intermediate parameter item and a fourth shared intermediate parameter item.
A second intermediate parameter item corresponding jointly to the secret sharing cross feature item inner product, the secret sharing training data and the second type sharing parameter is calculated based on the secret sharing addition and the secret sharing multiplication. Specifically, a first transpose matrix corresponding to the first shared second type model parameter and a second transpose matrix corresponding to the first shared training data are obtained; then, based on the secret sharing multiplication, an inner product of the first shared second type model parameter, the first transpose matrix, the first shared training data and the second transpose matrix is calculated through federated interaction with the second device, obtaining a first inner product item, and the third shared intermediate parameter item is calculated based on the first cross feature term inner product and the first inner product item. Similarly, a third transpose matrix corresponding to the second shared second type model parameter and a fourth transpose matrix corresponding to the second shared training data are obtained; then, based on the secret sharing multiplication, an inner product of the second shared second type model parameter, the third transpose matrix, the second shared training data and the fourth transpose matrix is calculated through federated interaction with the second device, obtaining a second inner product item, and the fourth shared intermediate parameter item is calculated based on the second cross feature term inner product and the second inner product item, where the expression of the third shared intermediate parameter item is as follows:
(formula image BDA0002656592270000161; not reproduced)
wherein [[·]]_A denotes data of the partial share owned by the first device after secret sharing, V_B is the second party second type model parameter with d_x column vectors (a single column vector is shown as image BDA0002656592270000162), and X_B is the second party training data with d_B column vectors (a single column vector is shown as image BDA0002656592270000163). Additionally, the fourth shared intermediate parameter item has the following expression:
(formula image BDA0002656592270000164; not reproduced)
wherein [[·]]_A denotes data of the partial share owned by the first device after secret sharing, V_A is the first party second type model parameter with d_x column vectors (a single column vector is shown as image BDA0002656592270000165), and X_A is the first party training label data with d_A column vectors (a single column vector is shown as image BDA0002656592270000166).
Additionally, it should be noted that the second device will calculate the second-party third shared intermediate parameter item and the second-party fourth shared intermediate parameter item based on the data of the partial share owned by the second device after secret sharing, and the calculation manner is consistent with the calculation manner in the first device, and the second-party third shared intermediate parameter item and the second-party fourth shared intermediate parameter item are as follows:
(formula images BDA0002656592270000167 and BDA0002656592270000168; not reproduced)
wherein [[·]]_B denotes data of the partial share owned by the second device after secret sharing.
Step S223, calculating the secret sharing intermediate parameter based on the first intermediate parameter item and the second intermediate parameter item.
In this embodiment, the secret sharing intermediate parameter includes a first secret sharing intermediate parameter and a second secret sharing intermediate parameter.
Calculating the secret sharing intermediate parameter based on the first intermediate parameter item and the second intermediate parameter item, specifically, calculating a sum of the first sharing intermediate parameter item and the third sharing intermediate parameter item to obtain a first secret sharing intermediate parameter, and calculating a sum of the second sharing intermediate parameter item and the fourth sharing intermediate parameter item to obtain a second secret sharing intermediate parameter, where a calculation expression of the first secret sharing intermediate parameter is as follows:
(formula image BDA0002656592270000171; not reproduced)
wherein the symbol shown in image BDA0002656592270000172 denotes the first secret sharing intermediate parameter. Additionally, the calculation expression of the second secret sharing intermediate parameter is as follows:
(formula image BDA0002656592270000173; not reproduced)
wherein [[f(X_A)]] is the second secret sharing intermediate parameter. Additionally, the second device calculates the sum of the second party first shared intermediate parameter item and the second party third shared intermediate parameter item to obtain the second party first secret sharing intermediate parameter, and calculates the sum of the second party second shared intermediate parameter item and the second party fourth shared intermediate parameter item to obtain the second party second secret sharing intermediate parameter.
And step S23, substituting the secret sharing intermediate parameter and the secret sharing tag data into a preset regression error calculation formula, and calculating the secret sharing regression error.
In this embodiment, it should be noted that the secret shared tag data is a sample tag of a partial share owned by the first device after secret sharing, and the second device owns the secret shared tag data of the second party.
Substituting the secret sharing intermediate parameter and the secret sharing tag data into a preset regression error calculation formula to calculate the secret sharing regression error, specifically substituting the first secret sharing intermediate parameter, the second secret sharing intermediate parameter and the secret sharing tag data into a preset regression error calculation formula to calculate the secret sharing regression error, wherein the preset regression error calculation formula is as follows:
(formula image BDA0002656592270000174; not reproduced)
wherein Y is the sample tag. Similarly, the second device substitutes the second party first secret sharing intermediate parameter, the second party second secret sharing intermediate parameter and the second party secret sharing tag data into the preset regression error calculation formula to calculate the second party secret sharing regression error.
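If the preset formula takes the usual residual form f(X) - Y (the exact expression appears only as an image in the original), the error shares can be formed without any interaction, since subtraction of additive shares is purely local. A toy sketch with illustrative values:

```python
# toy shares: prediction f(X) = 7.0 split as 4.5 + 2.5, label Y = 5.0 split as 1.0 + 4.0
fA, fB = 4.5, 2.5
yA, yB = 1.0, 4.0

# each device subtracts its label share from its prediction share locally
eA = fA - yA  # first device's share of the regression error
eB = fB - yB  # second device's share of the regression error

# the error shares reconstruct to f(X) - Y without revealing f(X) or Y
assert eA + eB == (fA + fB) - (yA + yB)
assert eA + eB == 2.0
```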
Step S30, determining a first target regression model parameter based on the secret sharing regression error, and assisting the second device to determine a second target regression model parameter, so as to construct a longitudinal federal factorization machine regression model.
In this embodiment, based on the secret sharing regression error, a first target regression model parameter is determined, and the second device is assisted in determining a second target regression model parameter, so as to construct a longitudinal federated factorization machine regression model. Specifically, the calculation of the secret sharing regression error is performed repeatedly so as to iteratively update the secret sharing model parameters until a preset model training end condition is reached, obtaining a first secret sharing target parameter; similarly, the second device repeatedly performs the calculation of the second party secret sharing regression error so as to iteratively update the second party secret sharing model parameters until the preset model training end condition is reached, obtaining a second secret sharing target parameter. The first device then receives the second shared first party target parameter among the second secret sharing target parameters sent by the second device, obtains the first shared first party target parameter among the first secret sharing target parameters, and calculates the sum of the first shared first party target parameter and the second shared first party target parameter, obtaining the first target regression model parameter. It also sends the second shared second party target parameter among the first secret sharing target parameters to the second device, so that the second device calculates the sum of the second shared second party target parameter and the first shared second party target parameter among the second secret sharing target parameters, obtaining the second target regression model parameter. That is, the first type model parameter and the second type model parameter after model training is finished are determined, and the longitudinal federated factorization machine regression model is thereby determined.
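The final reconstruction step amounts to a single exchange of shares: each device sends the share it holds of the other party's target parameters, so each side reconstructs only its own model parameters. A toy sketch with illustrative values, not taken from the patent:

```python
# after training: device 1 holds the first shares, device 2 holds the second shares
first_party_shares = {"device1": 0.75, "device2": 0.25}   # shares of the first party parameter (toy)
second_party_shares = {"device1": 1.4, "device2": -0.4}   # shares of the second party parameter (toy)

# device 2 sends its share of the first party's parameter to device 1, and vice versa
first_target = first_party_shares["device1"] + first_party_shares["device2"]
second_target = second_party_shares["device1"] + second_party_shares["device2"]

# each device reconstructs only its own final parameter; neither sees the other's
assert first_target == 1.0
assert abs(second_target - 1.0) < 1e-9
```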
In addition, compared with existing longitudinal federated learning methods, constructing the longitudinal federated factorization machine regression model based on longitudinal federated learning requires no homomorphic encryption and decryption process, which reduces the amount of calculation during longitudinal federated learning modeling and thus improves the calculation efficiency of constructing the model. Moreover, because the model is constructed through longitudinal federated learning modeling, the feature richness of the training samples is higher, so the model performance of the longitudinal federated factorization machine regression model is better, and its personalized recommendation effect when used as a recommendation model is better.
Compared with the prior-art technical means of constructing a regression model through unencrypted two-party federated learning or through homomorphic-encryption-based two-party longitudinal federated learning modeling, the present factorization machine regression model construction method obtains the secret sharing model parameters and the secret sharing training data through secret sharing with the second device, then performs longitudinal federated learning modeling with the second device based on the secret sharing training data and the secret sharing model parameters, calculates the secret sharing regression error, and updates the secret sharing model parameters based on the secret sharing regression error to obtain the secret sharing regression model update parameters. When interacting with the second device, the data sent or received are secret sharing data, so there is no need to encrypt the data with a public-private key pair generated by a third party; all data transmission takes place between the two parties participating in the longitudinal federated learning modeling, which protects the privacy of the data. Based on the secret sharing regression model update parameters, the first target regression model parameter can be determined through decryption interaction with the second device, and the second device can be assisted in determining the second target regression model parameter, whereupon the construction of the longitudinal federated factorization machine regression model can be completed.
Further, referring to fig. 2, in another embodiment of the present application, based on the first embodiment of the present application, the step of determining a first target regression model parameter based on the secret sharing regression error and assisting the second device to determine a second target regression model parameter includes:
step S31, updating the secret sharing model parameter based on the secret sharing regression error, to obtain the secret sharing regression model update parameter;
in this embodiment, the secret sharing model parameter is updated based on the secret sharing regression error to obtain the secret sharing regression model update parameter, specifically, model gradient information corresponding to the secret sharing model parameter is calculated based on the secret sharing regression error, and then the secret sharing model parameter is updated based on the model gradient information to obtain the secret sharing regression model update parameter.
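Because the gradient-descent update rule is linear in the gradient, each device can apply it to its own shares locally, and the updated shares still reconstruct to the plaintext update. A toy sketch, with an illustrative learning rate and values:

```python
lr = 0.1  # learning rate (illustrative)

# shares of a model parameter w = 2.0 and of its gradient g = 0.5 (toy values)
wA, wB = 1.3, 0.7
gA, gB = 0.2, 0.3

# each device updates its own share locally: w <- w - lr * g
wA_new = wA - lr * gA
wB_new = wB - lr * gB

# the updated shares reconstruct to the plaintext gradient-descent step
assert abs((wA_new + wB_new) - (2.0 - lr * 0.5)) < 1e-12
```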
Wherein the secret sharing model parameters include first secret sharing model parameters and second secret sharing model parameters, the secret sharing regression model update parameters include first sharing regression model parameters and second sharing regression model parameters,
the step of updating the secret sharing model parameters based on the secret sharing regression error to obtain the secret sharing regression model update parameters comprises:
step S311, calculating first gradient information of the secret sharing regression error with respect to the first secret sharing model parameter, and calculating second gradient information of the secret sharing regression error with respect to the first shared second type model parameter;
in this embodiment, it should be noted that the first secret sharing model parameter includes a first share of a first-type model parameter of the first party and a first share of a second-type model parameter of the first party, and the first gradient information includes a first-type gradient and a second-type gradient, where the first-type gradient is a secret sharing gradient corresponding to the first share of the first-type model parameter of the first party, and the second-type gradient is a secret sharing gradient set for each column vector in the first share of the second-type model parameter of the first party.
Additionally, it should be noted that the second secret sharing model parameter includes a second share of the second-party first-type model parameter and a second share of the second-party second-type model parameter, and the second gradient information includes a third-type gradient and a fourth-type gradient, where the third-type gradient is a secret sharing gradient corresponding to the second share of the second-party first-type model parameter, and the fourth-type gradient is a secret sharing gradient set for each column vector in the second share of the second-party second-type model parameter.
Calculating a partial derivative of the secret sharing regression error with respect to the first secret sharing model parameter obtains the first gradient information. Specifically, a partial derivative of the secret sharing regression error with respect to the first share of the first party first type model parameter is calculated, obtaining the first type gradient, and a partial derivative of the secret sharing regression error with respect to each column vector in the first share of the first party second type model parameter is calculated, obtaining the second type gradient, wherein a calculation expression of the first type gradient is as follows:

T_1 = α · ∂E/∂[[w_A]]_A

wherein T_1 is the first type gradient, E is the secret sharing regression error, α is a hyper-parameter whose magnitude can be set as needed and which controls the value range of the gradient, w_A is the first party first type model parameter, and [[w_A]]_A is the first share of the first party first type model parameter. Additionally, the calculation expression of the second type gradient is as follows:

T_2 = α · ∂E/∂[[V_A]]_A^(k)

wherein T_2 is the second type gradient, α is as above, V_A is the first party second type model parameter, [[V_A]]_A is the first share of the first party second type model parameter, and [[V_A]]_A^(k) is the k-th column vector of the first share of the first party second type model parameter.

Similarly, a partial derivative of the secret sharing regression error with respect to the second share of the second party first type model parameter is calculated, obtaining the third type gradient, and a partial derivative of the secret sharing regression error with respect to each column vector in the second share of the second party second type model parameter is calculated, obtaining the fourth type gradient, wherein the calculation expression of the third type gradient is as follows:

T_3 = α · ∂E/∂[[w_B]]_A

wherein T_3 is the third type gradient, α is as above, w_B is the second party first type model parameter, and [[w_B]]_A is the second share of the second party first type model parameter. Additionally, the calculation expression of the fourth type gradient is as follows:

T_4 = α · ∂E/∂[[V_B]]_A^(k)

wherein T_4 is the fourth type gradient, α is as above, V_B is the second party second type model parameter, [[V_B]]_A is the second share of the second party second type model parameter, and [[V_B]]_A^(k) is the k-th column vector of the second share of the second party second type model parameter.
It should be noted that, correspondingly, the second device also calculates a partial derivative of the second party secret sharing regression error with respect to the first share of the second party first type model parameter, obtaining a fifth type gradient; calculates a partial derivative of the second party secret sharing regression error with respect to each column vector in the first share of the second party second type model parameter, obtaining a sixth type gradient; further calculates a partial derivative of the second party secret sharing regression error with respect to the second share of the first party first type model parameter, obtaining a seventh type gradient; and calculates a partial derivative of the second party secret sharing regression error with respect to each column vector in the second share of the first party second type model parameter, obtaining an eighth type gradient, wherein the manner of calculating the gradients in the second device coincides with the manner of calculating the gradients in the first device.
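The gradient shares above recombine linearly, which is why each device can compute a gradient share from its error share alone. The following is a minimal numeric sketch of that linearity, not the patent's protocol: the feature matrix `X` is treated as public here for illustration (in the actual scheme the products themselves would be carried out with secret sharing multiplication), and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def share(v):
    """Additively split v into two secret shares with v = v_a + v_b."""
    v_a = rng.normal(size=v.shape)
    return v_a, v - v_a

X = rng.normal(size=(4, 3))   # mini-batch feature matrix (public in this sketch)
e = rng.normal(size=4)        # plaintext regression error for the batch

e_a, e_b = share(e)           # each device holds one additive share of the error

# Each device computes its gradient share locally from its error share.
g_a = X.T @ e_a
g_b = X.T @ e_b

# Recombining the two shares yields the plaintext gradient X^T e.
assert np.allclose(g_a + g_b, X.T @ e)
```

Because the gradient of a squared error is linear in the error, no interaction is needed for this step once the error shares exist.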
Step S312, updating the first secret sharing model parameter based on the first gradient information and a preset first learning parameter until a preset federal learning end condition is met, and obtaining the first sharing regression model parameter;
in this embodiment, it should be noted that the preset federal learning end condition includes conditions such as convergence of the loss function or reaching a preset iteration threshold, and the preset first learning parameter includes a first learning rate and a second learning rate.
The first secret sharing model parameter is updated based on the first gradient information and a preset first learning parameter until a preset federal learning end condition is met, obtaining the first sharing regression model parameter. Specifically, a product of the first type gradient and the first learning rate is calculated to obtain a first gradient descent value, and a difference between the first share of the first party first type model parameter and the first gradient descent value is calculated to obtain a first update parameter; a product of the second type gradient and the second learning rate is calculated to obtain a second gradient descent value, and a difference between the first share of the first party second type model parameter and the second gradient descent value is calculated to obtain a second update parameter. It is then judged whether the first update parameter and the second update parameter satisfy the preset federal learning end condition; if so, the first update parameter and the second update parameter are jointly used as the first sharing regression model parameter; if not, the gradient information is recalculated to iteratively update the first secret sharing model parameter until the preset federal learning end condition is met, wherein the calculation expression for the first update parameter is as follows:
[[w_A]]_A' = [[w_A]]_A − η_1 · T_1

wherein η_1 is the first learning rate and [[w_A]]_A' is the first update parameter. Additionally, the calculation expression for the second update parameter is as follows:

[[V_A]]_A' = [[V_A]]_A − η_2 · T_2

wherein η_2 is the second learning rate and [[V_A]]_A' is the second update parameter.
Step S313, updating the second secret sharing model parameter based on the second gradient information and a preset second learning parameter until the preset federal learning end condition is met, and obtaining the second sharing regression model parameter.
In this embodiment, it should be noted that the preset second learning parameter includes a third learning rate and a fourth learning rate.
The second secret sharing model parameter is updated based on the second gradient information and a preset second learning parameter until the preset federal learning end condition is met, obtaining the second sharing regression model parameter. Specifically, a product of the third type gradient and the third learning rate is calculated to obtain a third gradient descent value, and a difference between the second share of the second party first type model parameter and the third gradient descent value is calculated to obtain a third update parameter; a product of the fourth type gradient and the fourth learning rate is calculated to obtain a fourth gradient descent value, and a difference between the second share of the second party second type model parameter and the fourth gradient descent value is calculated to obtain a fourth update parameter. It is then judged whether the third update parameter and the fourth update parameter satisfy the preset federal learning end condition; if so, the third update parameter and the fourth update parameter are jointly used as the second sharing regression model parameter; if not, the gradient information is recalculated to iteratively update the second secret sharing model parameter until the preset federal learning end condition is met, wherein the calculation expression for the third update parameter is as follows:
[[w_B]]_A' = [[w_B]]_A − η_3 · T_3

wherein η_3 is the third learning rate and [[w_B]]_A' is the third update parameter. Additionally, the calculation expression for the fourth update parameter is as follows:

[[V_B]]_A' = [[V_B]]_A − η_4 · T_4

wherein η_4 is the fourth learning rate and [[V_B]]_A' is the fourth update parameter.
Additionally, it should be noted that the second device calculates a fifth update parameter [[w_B]]_B' based on the fifth type gradient and a preset fifth learning rate, a sixth update parameter [[V_B]]_B' based on the sixth type gradient and a preset sixth learning rate, a seventh update parameter [[w_A]]_B' based on the seventh type gradient and a preset seventh learning rate, and an eighth update parameter [[V_A]]_B' based on the eighth type gradient and a preset eighth learning rate, wherein the second device calculates each gradient in a manner consistent with the first device.
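Because the updates in steps S312 and S313 subtract a learning-rate-scaled gradient from a share, and additive shares recombine linearly, each device can apply the step to its own share locally and the recombined parameter equals an ordinary plaintext gradient-descent update. A small numeric sketch of this property (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

w = rng.normal(size=5)     # plaintext first type model parameter
g = rng.normal(size=5)     # plaintext gradient for that parameter
eta = 0.1                  # learning rate (the first learning rate in the text)

# Additive shares of the parameter and the gradient held by the two devices.
w_a = rng.normal(size=5); w_b = w - w_a
g_a = rng.normal(size=5); g_b = g - g_a

# Each device updates its own share: share minus learning-rate times gradient share.
w_a_new = w_a - eta * g_a
w_b_new = w_b - eta * g_b

# The recombined parameter equals the ordinary plaintext gradient-descent update.
assert np.allclose(w_a_new + w_b_new, w - eta * g)
```

This is why no interaction is needed during the update step itself: only the final decryption interaction of step S32 recombines shares.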
Step S32, determining the first target regression model parameter by performing decryption interaction with the second device based on the secret shared regression model update parameter, so that the second device may determine the second target regression model parameter.
In this embodiment, the first target regression model parameter is determined through decryption interaction with the second device based on the secret sharing regression model update parameter, so that the second device determines the second target regression model parameter, specifically, a seventh update parameter and an eighth update parameter sent by the second device are received, the first target regression model parameter is calculated based on the first update parameter, the second update parameter, the seventh update parameter and the eighth update parameter, and the third update parameter and the fourth update parameter are sent to the second device, so that the second device calculates the second target regression model parameter based on the third update parameter, the fourth update parameter, the fifth update parameter and the sixth update parameter.
Wherein the secret shared regression model update parameters comprise a first share of first party model update parameters and a second share of second party model update parameters,
the step of determining the first target regression model parameters for the second device to determine the second target regression model parameters by performing a decryption interaction with the second device based on the secret shared regression model update parameters comprises:
step S321, receiving a second share of the first-party model update parameters determined by the second device based on longitudinal federal learning modeling, and sending the second share of the second-party model update parameters to the second device, so that the second device determines the second target regression model parameters based on the first share of the second-party model update parameters and the second share of the second-party model update parameters determined by the longitudinal federal learning modeling;
in this embodiment, it should be noted that the first share of the first party model update parameter includes the first update parameter and the second update parameter, the second share of the second party model update parameter includes the third update parameter and the fourth update parameter, the first share of the second party model update parameter includes the fifth update parameter and the sixth update parameter, and the second share of the first party model update parameter includes the seventh update parameter and the eighth update parameter.
Receiving a second share of the first-party model update parameter determined by the second device based on longitudinal federated learning modeling, and sending the second share of the second-party model update parameter to the second device, so that the second device determines the second target regression model parameter based on the first share of the second-party model update parameter determined through longitudinal federated learning modeling and the second share of the second-party model update parameter. Specifically, the seventh update parameter and the eighth update parameter sent by the second device are received, and the third update parameter and the fourth update parameter are sent to the second device, so that the second device calculates a sum of the third update parameter and the fifth update parameter to obtain a second party first type model update parameter, calculates a sum of the fourth update parameter and the sixth update parameter to obtain a second party second type model update parameter, and uses the second party first type model update parameter and the second party second type model update parameter together as the second target regression model parameter, wherein the calculation expression of the second party first type model update parameter is as follows:
w_B' = [[w_B]]_A' + [[w_B]]_B'

wherein w_B' is the second party first type model update parameter, [[w_B]]_A' is the third update parameter, and [[w_B]]_B' is the fifth update parameter. Additionally, the calculation expression of the second party second type model update parameter is as follows:

V_B' = [[V_B]]_A' + [[V_B]]_B'

wherein V_B' is the second party second type model update parameter, [[V_B]]_A' is the fourth update parameter, and [[V_B]]_B' is the sixth update parameter.
Step S322, aggregating the first share of the first-party model update parameters and the second share of the first-party model update parameters to obtain the first target regression model parameter.
In this embodiment, the first share of the first party model update parameter and the second share of the first party model update parameter are aggregated to obtain the first target regression model parameter, specifically, a sum of the first update parameter and the seventh update parameter is calculated to obtain a first party first type model update parameter, a sum of the second update parameter and the eighth update parameter is calculated to obtain a first party second type model update parameter, and the first party first type model update parameter and the first party second type model update parameter are used together as the first target regression model parameter, where a calculation expression of the first party first type model update parameter is as follows:
w_A' = [[w_A]]_A' + [[w_A]]_B'

wherein w_A' is the first party first type model update parameter, [[w_A]]_A' is the first update parameter, and [[w_A]]_B' is the seventh update parameter. Additionally, the calculation expression of the first party second type model update parameter is as follows:

V_A' = [[V_A]]_A' + [[V_A]]_B'

wherein V_A' is the first party second type model update parameter, [[V_A]]_A' is the second update parameter, and [[V_A]]_B' is the eighth update parameter.
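The aggregation in steps S321 and S322 is a plain recombination of additive shares: each target parameter is the sum of the two update-parameter shares held by the two devices. A toy sketch of the exchange under that reading (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# The first party's updated first type parameter, held only as additive shares:
w_A = rng.normal(size=4)
w_A_share1 = rng.normal(size=4)   # first update parameter (kept by the first device)
w_A_share2 = w_A - w_A_share1     # seventh update parameter (sent by the second device)

# Step S322: the first device aggregates the two shares it now holds.
first_target = w_A_share1 + w_A_share2

assert np.allclose(first_target, w_A)  # first target regression model parameter
```

The second device performs the mirror-image computation with the third/fourth and fifth/sixth update parameters to obtain the second target regression model parameter.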
This embodiment provides a method for updating the model parameters of a longitudinal federated factorization machine regression model based on a secret sharing regression error. First, the first device updates the secret sharing model parameter through gradient calculation based on the secret sharing regression error to obtain the secret sharing regression model update parameter of the current iteration, while the second device updates the second party secret sharing model parameter based on the second party secret sharing regression error to obtain the second party secret sharing regression model update parameter of the current iteration, until the preset federated learning end condition is reached. Then, based on the secret sharing mechanism, the first device and the second device perform decryption interaction: the first device assists the second device in determining the second target regression model parameter based on the second party secret sharing regression model update parameter, while the second device assists the first device in determining the first target regression model parameter based on the secret sharing regression model update parameter. The construction of the longitudinal federated factorization machine regression model can thus be completed, laying a foundation for overcoming the technical defect in the prior art that the data privacy of each participant in longitudinal federated learning modeling cannot be protected because the regression model is constructed by a non-encrypted two-party federated learning method or a homomorphic-encryption two-party longitudinal federated learning modeling method.
Further, referring to fig. 3, based on the first embodiment and the second embodiment in the present application, in another embodiment of the present application, the personalized recommendation method is applied to the first device, and the personalized recommendation method includes:
step A10, secret sharing is carried out with a second device, and secret sharing to-be-recommended user data and secret sharing model parameters are obtained;
in this embodiment, it should be noted that the first device and the second device are both participants of longitudinal federal learning, and before secret sharing is performed, a preset scoring model has been trained by the first device and the second device based on secret sharing and longitudinal federal learning, where the preset scoring model is a trained factorization machine regression model used for predicting the scoring of an item corresponding to a user, and a model expression of the preset scoring model is as follows:
z(x) = <w, x> + Σ_{i<j} <V_i, V_j> x_i x_j
wherein x is the model input data, w and V are the model parameters, and z (x) is the model output, i.e. the user's score for the item.
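As a plaintext reference for the scoring expression above, the pairwise term Σ_{i<j} <V_i, V_j> x_i x_j can be evaluated in O(kn) time using the identity ½ Σ_f ((Σ_i V_{if} x_i)² − Σ_i V_{if}² x_i²). A minimal sketch (the function name and data are hypothetical; this is the standard factorization machine, not the secret-shared version):

```python
import numpy as np

def fm_score(x, w, V):
    """z(x) = <w, x> + sum_{i<j} <V_i, V_j> x_i x_j.

    x: (n,) feature vector; w: (n,) first type model parameter;
    V: (n, k) second type model parameter, row i being the latent vector V_i.
    """
    linear = w @ x
    s = V.T @ x  # (k,) vector: sum_i V_i * x_i in factor space
    pairwise = 0.5 * (s @ s - ((V ** 2).T @ (x ** 2)).sum())
    return linear + pairwise

# Sanity check against the naive double loop on a small random example.
rng = np.random.default_rng(3)
x = rng.normal(size=4)
w = rng.normal(size=4)
V = rng.normal(size=(4, 2))
naive = w @ x + sum(V[i] @ V[j] * x[i] * x[j]
                    for i in range(4) for j in range(i + 1, 4))
assert np.isclose(fm_score(x, w, V), naive)
```

In the secret-shared setting of step A20, the same expression is evaluated on shares via secret sharing multiplication, so each device obtains only a share of z(x).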
Secret sharing is performed with the second device to obtain the secret sharing to-be-recommended user data and the secret sharing model parameter. Specifically, the first party scoring model parameter of the preset scoring model and the first party to-be-recommended user data are obtained, wherein the to-be-recommended data is data associated with the to-be-recommended user, such as the interests and hobbies of the to-be-recommended user and the historical scoring data of the to-be-recommended user for items. Meanwhile, the second device obtains the second party scoring model parameter of the preset scoring model and the second party to-be-recommended user data. Since the preset scoring model is built based on longitudinal federated learning, the part of the model parameters of the preset scoring model held by the first device is the first party scoring model parameter, and the part held by the second device is the second party scoring model parameter; the first party to-be-recommended user data is the data associated with the to-be-recommended user collected by the first device, and the second party to-be-recommended user data is the data associating the to-be-recommended user with items collected by the second device. Both the first party to-be-recommended user data and the second party to-be-recommended user data may be represented by vectors. For example, assuming the first party to-be-recommended user data is the vector (1, 0, 1, 0), where the code 1 represents that the user clicked the corresponding item and the code 0 represents that the user did not click the corresponding item, the vector (1, 0, 1, 0) represents that the user clicked item A and item C and did not click item B and item D. Further, secret sharing is performed with the second device based on the first party scoring model parameter and the first party to-be-recommended user data, while the second party scoring model parameter and the second party to-be-recommended user data are provided by the second device in the secret sharing. The first device thereby obtains the secret sharing model parameter and the secret sharing to-be-recommended user data, and the second device obtains the second party secret sharing model parameter and the second party secret sharing to-be-recommended user data, wherein the secret sharing model parameter includes a first sharing first party model parameter and a first sharing second party model parameter, the secret sharing to-be-recommended user data includes first sharing first party to-be-recommended user data and first sharing second party to-be-recommended user data, the second party secret sharing model parameter includes a second sharing first party model parameter and a second sharing second party model parameter, and the second party secret sharing to-be-recommended user data includes second sharing first party to-be-recommended user data and second sharing second party to-be-recommended user data. The first sharing first party model parameter is a first share of the first party scoring model parameter, the second sharing first party model parameter is a second share of the first party scoring model parameter, the first sharing second party model parameter is a first share of the second party scoring model parameter, and the second sharing second party model parameter is a second share of the second party scoring model parameter; the first sharing first party to-be-recommended user data is a first share of the first party to-be-recommended user data, the second sharing first party to-be-recommended user data is a second share of the first party to-be-recommended user data, the first sharing second party to-be-recommended user data is a first share of the second party to-be-recommended user data, and the second sharing second party to-be-recommended user data is a second share of the second party to-be-recommended user data.
Step A20, inputting the secret sharing user data to be recommended into a preset scoring model, so as to score the articles to be recommended corresponding to the secret sharing user data to be recommended based on the secret sharing model parameters, and obtain a first secret sharing scoring result;
in this embodiment, the secret sharing to-be-recommended user data is input into a preset scoring model, so as to score the to-be-recommended item corresponding to the secret sharing to-be-recommended user data based on the secret sharing model parameter, and obtain a first secret sharing scoring result, specifically, the first sharing first party to-be-recommended user data and the first sharing second party to-be-recommended user data are respectively input into the preset scoring model, so as to substitute the first sharing first party to-be-recommended user data and the first sharing first party model parameter into a model expression of the preset scoring model, calculate a first sharing first party score through secret sharing multiplication, and substitute the first sharing second party to-be-recommended user data and the first sharing second party model parameter into a model expression of the preset scoring model, calculate a first sharing second party score through secret sharing multiplication, and taking the first sharing first party score and the first sharing second party score as the first secret sharing score result, wherein the first sharing first party score and the first sharing second party score are both model output values, and similarly, the second device calculates the second sharing first party score through secret sharing multiplication based on the second sharing first party to-be-recommended user data and the second sharing first party model parameter, and calculates the second sharing second party score through secret sharing multiplication based on the second sharing second party to-be-recommended user data and the second sharing second party model parameter.
Step A30, performing federated interaction with the second device based on the first secret sharing scoring result, so as to calculate a target score in combination with a second secret sharing scoring result determined by the second device.
In this embodiment, based on the first secret sharing score result, performing federated interaction with the second device to combine a second secret sharing score result determined by the second device, and calculating a target score, specifically, based on the first secret sharing score result, performing federated interaction with the second device to aggregate the first secret sharing score result and the second secret sharing score result, and obtaining the target score, where it is to be noted that the target score is a score of an item to be recommended by a user calculated by a preset score model.
Wherein the first secret sharing scoring result includes a first sharing first party score and a first sharing second party score, and the second secret sharing scoring result includes a second sharing first party score and a second sharing second party score,
the step of calculating a target score based on the first secret sharing score result and the second secret sharing score result determined by the second device through federated interaction with the second device includes:
step a31, receiving the second sharing first party score and the second sharing second party score sent by the second device;
step a32, calculating a first party score based on said first shared first party score and said second shared first party score;
in this embodiment, a first party score is calculated based on the first shared first party score and the second shared first party score, specifically, a sum of the first shared first party score and the second shared first party score is calculated, and the first party score is obtained.
Step a33, calculating a second party score based on said first shared second party score and said second shared second party score;
in this embodiment, a second party score is calculated based on the first sharing second party score and the second sharing second party score, specifically, a sum of the first sharing second party score and the second sharing second party score is calculated, and a second party score is obtained.
Step A34, aggregating the first party score and the second party score to obtain the target score.
In this embodiment, the first party score and the second party score are aggregated to obtain the target score, specifically, the first party score and the second party score are aggregated to obtain the target score based on a preset aggregation rule, where the preset aggregation rule includes summation, weighting, and averaging.
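With summation as the preset aggregation rule, steps A31 to A34 reduce to two share recombinations followed by a sum. A toy numeric sketch (all score values hypothetical):

```python
# Score shares as exchanged in step A31 (hypothetical numbers).
first_shared_first = 1.25     # first sharing first party score
second_shared_first = 0.75    # second sharing first party score
first_shared_second = 0.40    # first sharing second party score
second_shared_second = 0.60   # second sharing second party score

first_party_score = first_shared_first + second_shared_first      # step A32
second_party_score = first_shared_second + second_shared_second   # step A33

# Step A34 with summation as the preset aggregation rule.
target_score = first_party_score + second_party_score
assert target_score == 3.0
```

Weighting or averaging would replace the final sum with a weighted sum or a mean over the two party scores.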
Step A40, generating a target recommendation list corresponding to the item to be recommended based on the target score.
In this embodiment, a target recommendation list corresponding to the to-be-recommended item is generated based on the target scores. Specifically, the target scores are repeatedly obtained to obtain the target scores of different target users for the same to-be-recommended item; the target users are then ranked based on the magnitude of each target score to generate a recommendation user list for the to-be-recommended item, and the recommendation user list is used as the target recommendation list.
in another implementable scheme, a target recommendation list corresponding to a to-be-recommended user corresponding to the secret shared to-be-recommended user data is generated based on the target scores, specifically, the target scores are repeatedly acquired, the target scores of the same target user for different to-be-recommended articles are obtained, further, based on the size of each target score, the to-be-recommended articles are sorted, a recommended article list of the to-be-recommended user is generated, and the recommended article list is used as the target recommendation list.
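Both variants of step A40 come down to sorting by target score. A minimal sketch of the recommended-item-list variant (the function name, item ids, and scores are hypothetical):

```python
def target_recommendation_list(scores):
    """scores: mapping from item id to target score for one to-be-recommended user.
    Returns item ids sorted by descending target score."""
    return [item for item, _ in
            sorted(scores.items(), key=lambda kv: kv[1], reverse=True)]

# Hypothetical target scores of one user for four to-be-recommended items.
scores = {"A": 4.2, "B": 1.5, "C": 3.8, "D": 2.9}
assert target_recommendation_list(scores) == ["A", "C", "D", "B"]
```

The recommendation-user-list variant is identical with users and items swapped: one item's scores across users are sorted instead.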
This embodiment provides a personalized recommendation method based on secret sharing and longitudinal federated learning. Secret sharing is first performed with the second device to obtain the secret sharing to-be-recommended user data and the secret sharing model parameter; the secret sharing to-be-recommended user data is then input into the preset scoring model, and the to-be-recommended item corresponding to the secret sharing to-be-recommended user data is scored based on the secret sharing model parameter to obtain the first secret sharing scoring result; federated interaction is then performed with the second device based on the first secret sharing scoring result, so as to combine it with the second secret sharing scoring result determined by the second device and calculate the target score; and the target recommendation list corresponding to the to-be-recommended item is generated based on the target score. When the first device and the second device interact during the personalized recommendation process, the data sent or received is secret-shared data, so no public-private key pair generated by a third party is needed to encrypt the data, and all data transmission takes place between the two parties participating in longitudinal federated learning, which protects the privacy of the data. At the same time, the complex encryption and decryption calculation of the data is reduced: only simple mathematical operations are needed in the secret sharing process and the decryption process corresponding to secret sharing, which reduces the computational complexity and improves the computational efficiency of the factorization machine regression model when performing personalized recommendation.
Referring to fig. 4, fig. 4 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 4, the factorization machine regression model construction device may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the factorization machine regression model building device may further include a rectangular user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. The rectangular user interface may comprise a Display screen (Display), an input sub-module such as a Keyboard (Keyboard), and the optional rectangular user interface may also comprise a standard wired interface, a wireless interface. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface).
Those skilled in the art will appreciate that the factoring machine regression model construction device configuration illustrated in FIG. 4 does not constitute a limitation of a factoring machine regression model construction device and may include more or fewer components than illustrated, or some components in combination, or a different arrangement of components.
As shown in fig. 4, a memory 1005, which is a type of computer storage medium, may include an operating system, a network communication module, and a factoring machine regression model building program. The operating system is a program that manages and controls the hardware and software resources of the factoring machine regression model building device and supports the operation of the factoring machine regression model building program as well as other software and/or programs. The network communication module is used for realizing communication among the components in the memory 1005 and communication with other hardware and software in the factorization machine regression model building system.
In the factorization machine regression model construction device shown in fig. 4, the processor 1001 is configured to execute the factorization machine regression model construction program stored in the memory 1005 to implement the steps of any of the factorization machine regression model construction methods described above.
The specific implementation of the factorization machine regression model construction device of the present application is substantially the same as that of each embodiment of the factorization machine regression model construction method described above, and is not repeated here.
Referring to fig. 5, fig. 5 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 5, the personalized recommendation device may include: a processor 1001 (e.g., a CPU), a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection and communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001 described above.
Optionally, the personalized recommendation device may further include a user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. The user interface may include a display screen (Display) and an input sub-module such as a keyboard (Keyboard), and optionally may also include a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
Those skilled in the art will appreciate that the personalized recommendation device architecture shown in fig. 5 does not constitute a limitation of the personalized recommendation device and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
As shown in fig. 5, the memory 1005, as a type of computer storage medium, may include an operating system, a network communication module, and a personalized recommendation program. The operating system is a program that manages and controls the hardware and software resources of the personalized recommendation device and supports the running of the personalized recommendation program as well as other software and/or programs. The network communication module is used for realizing communication between the components in the memory 1005 and with other hardware and software in the personalized recommendation system.
In the personalized recommendation device shown in fig. 5, the processor 1001 is configured to execute a personalized recommendation program stored in the memory 1005 to implement the steps of the personalized recommendation method described in any one of the above.
The specific implementation manner of the personalized recommendation device of the application is basically the same as that of each embodiment of the personalized recommendation method, and is not described herein again.
The embodiment of the present application further provides a factorization machine regression model construction apparatus. The apparatus is applied to the factorization machine regression model construction device, and includes:
the secret sharing module is used for performing secret sharing with the second device to obtain secret sharing model parameters and secret sharing training data;
the longitudinal federation module is used for performing longitudinal federated learning modeling with the second device based on the secret sharing training data and the secret sharing model parameters, and calculating a secret sharing regression error;
and the determining module is used for determining a first target regression model parameter based on the secret sharing regression error, and assisting the second device in determining a second target regression model parameter, so as to construct a longitudinal federated factorization machine regression model.
Optionally, the longitudinal federation module includes:
the first calculation sub-module is used for calculating, through federated interaction with the second device based on a preset secret sharing mechanism, a secret sharing cross feature item inner product jointly corresponding to the second type sharing parameter and the secret sharing training data;
a second calculation sub-module, configured to calculate, based on the preset secret sharing mechanism, a secret sharing intermediate parameter corresponding to the secret sharing cross feature item inner product, the secret sharing training data, the first type sharing parameter, and the second type sharing parameter through federated interaction with the second device;
and the third calculation submodule is used for substituting the secret sharing intermediate parameter and the secret sharing tag data into a preset regression error calculation formula to calculate the secret sharing regression error.
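For orientation, the plaintext quantity that the three sub-modules above jointly evaluate is the standard factorization machine output and its regression residual. The following is a minimal sketch without any secret sharing; all variable names and values are illustrative assumptions, not taken from the application:

```python
def fm_predict(x, w0, w, V):
    """Plaintext factorization machine output:
    y = w0 + sum_i w_i*x_i + sum_{i<j} <V_i, V_j>*x_i*x_j,
    computed with the O(n*k) reformulation of the pairwise term."""
    n, k = len(x), len(V[0])
    linear = sum(wi * xi for wi, xi in zip(w, x))
    cross = 0.0
    for f in range(k):
        s = sum(V[i][f] * x[i] for i in range(n))       # (sum_i v_if x_i)
        sq = sum((V[i][f] * x[i]) ** 2 for i in range(n))
        cross += 0.5 * (s * s - sq)
    return w0 + linear + cross

def regression_error(x, y, w0, w, V):
    """Residual used by squared-loss gradients: prediction minus label."""
    return fm_predict(x, w0, w, V) - y

# Sanity check against the naive sum over pairs i < j.
x = [1.0, 2.0, 0.5]
w0, w = 0.3, [0.5, -0.2, 0.1]
V = [[0.1, 0.2], [0.3, -0.1], [0.05, 0.4]]
naive = sum(sum(vi * vj for vi, vj in zip(V[i], V[j])) * x[i] * x[j]
            for i in range(3) for j in range(i + 1, 3))
linear = sum(wi * xi for wi, xi in zip(w, x))
assert abs(fm_predict(x, w0, w, V) - (w0 + linear + naive)) < 1e-9
```

In the secret-shared setting each term of this formula is evaluated over shares, which is what the federated interactions below decompose it into.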
Optionally, the first computation submodule includes:
a first calculating unit, configured to calculate, based on the secret sharing multiplication and through federated interaction with the second device, the cross inner products between the elements in the first shared second-type model parameters and the elements in the first shared training data, obtaining each first element cross inner product;
a second calculating unit, configured to calculate, based on the secret sharing multiplication and through federated interaction with the second device, the cross inner products between the elements in the second shared second-type model parameters and the elements in the second shared training data, obtaining each second element cross inner product;
and a third calculating unit, configured to accumulate the first element cross inner products and the second element cross inner products respectively, obtaining a first cross feature term inner product corresponding to the first element cross inner products and a second cross feature term inner product corresponding to the second element cross inner products.
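The element-wise cross inner products above rest on a secret sharing multiplication primitive. One common realization is Beaver-triple multiplication; the following single-process sketch simulates both parties and the triple dealer locally — the modulus and all names are illustrative assumptions, not part of the application:

```python
import secrets

P = 2**61 - 1  # illustrative field modulus

def share(v):
    """Additively share v between two parties: v = s0 + s1 (mod P)."""
    s0 = secrets.randbelow(P)
    return s0, (v - s0) % P

def beaver_multiply(x_sh, y_sh):
    """One secret sharing multiplication: given shares of x and y, return
    shares of x*y. The triple (a, b, c = a*b) would come from a dealer or
    an offline phase; it is generated locally here only for illustration."""
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % P)
    # d = x - a and e = y - b are safe to open because a, b are uniform.
    d = (x_sh[0] - a_sh[0] + x_sh[1] - a_sh[1]) % P
    e = (y_sh[0] - b_sh[0] + y_sh[1] - b_sh[1]) % P
    # Party 0 adds the public d*e term; both add their triple corrections.
    z0 = (c_sh[0] + d * b_sh[0] + e * a_sh[0] + d * e) % P
    z1 = (c_sh[1] + d * b_sh[1] + e * a_sh[1]) % P
    return z0, z1

x_sh, y_sh = share(7), share(9)
z_sh = beaver_multiply(x_sh, y_sh)
assert (z_sh[0] + z_sh[1]) % P == 63   # reconstructs 7 * 9
```

The first and second calculating units would invoke such a primitive once per element pair before the third unit accumulates the results.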
Optionally, the second computation submodule includes:
a fourth calculating unit, configured to calculate, based on the secret sharing multiplication, a first intermediate parameter item that corresponds to the first type sharing parameter and the secret sharing training data together through federated interaction with the second device;
a fifth calculating unit, configured to calculate, based on the secret sharing addition and the secret sharing multiplication, a second intermediate parameter item that corresponds to the secret sharing cross feature item inner product, the secret sharing training data, and the second type sharing parameter in common;
a sixth calculating unit configured to calculate the secret sharing intermediate parameter based on the first intermediate parameter item and the second intermediate parameter item.
Optionally, the secret sharing module comprises:
the acquisition submodule is used for acquiring a first party model parameter and first party training label data and taking a first share of the first party model parameter as the first sharing parameter;
a first sending submodule, configured to send a second share of the first party model parameter to the second device, so that the second device determines a third sharing parameter;
a first receiving submodule, configured to receive a second sharing parameter sent by the second device, where the second sharing parameter is a second share of a second-party model parameter obtained by the second device, and a first share of the second-party model parameter is a fourth sharing parameter of the second device;
a second sending submodule, configured to use a first share of the first-party training label data as the first shared training data, and send a second share of the first-party training label data to the second device, so that the second device determines third shared training data;
the second receiving submodule is configured to receive second shared training data sent by a second device, where the second shared training data is a second share of second-party training data obtained by the second device, and a first share of the second-party training data is fourth shared training data of the second device.
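The share exchange performed by these sub-modules can be sketched with additive secret sharing, mirroring the first/second/third/fourth sharing parameter naming used above. The sketch simulates both devices in one process; the modulus and plaintext values are illustrative assumptions:

```python
import secrets

P = 2**61 - 1  # illustrative modulus, not specified by the application

def split(value):
    """Additive sharing: value = first_share + second_share (mod P)."""
    first = secrets.randbelow(P)
    return first, (value - first) % P

first_party_param = 12345     # known only to the first device
second_party_param = 67890    # known only to the second device

# The first device keeps the first share (the "first sharing parameter")
# and sends the second share (the "third sharing parameter") away.
first_sharing_param, third_sharing_param = split(first_party_param)
# The second device keeps its first share (the "fourth sharing parameter")
# and sends the second share (the "second sharing parameter") over.
fourth_sharing_param, second_sharing_param = split(second_party_param)

# Each device now holds one share of BOTH parties' parameters, and neither
# can reconstruct the other's plaintext on its own.
assert (first_sharing_param + third_sharing_param) % P == first_party_param
assert (fourth_sharing_param + second_sharing_param) % P == second_party_param
```

The same split-and-exchange pattern applies to the training data shares handled by the second sending and receiving submodules.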
Optionally, the determining module includes:
the updating submodule is used for updating the secret sharing model parameters based on the secret sharing regression error to obtain the secret sharing regression model updating parameters;
and the decryption submodule is used for determining the first target regression model parameter through decryption interaction with the second device based on the secret shared regression model update parameter, so that the second device can determine the second target regression model parameter.
Optionally, the update sub-module includes:
a seventh calculating unit for calculating first gradient information of the secret-sharing regression error with respect to the first secret-sharing model parameter, and calculating second gradient information of the secret-sharing regression error with respect to the first shared second-type model parameter;
a first updating unit, configured to update the first secret shared model parameter based on the first gradient information and a preset first learning parameter until a preset federal learning end condition is met, and obtain the first shared regression model parameter;
and the second updating unit is used for updating the second secret sharing model parameter based on the second gradient information and a preset second learning parameter until the preset federal learning end condition is met, and obtaining the second sharing regression model parameter.
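A property worth making explicit behind these updating units is that a gradient descent step is affine, so each party can update its share of a parameter locally using its share of the gradient, and the reconstructed parameter equals the plaintext update. A small sketch with illustrative float shares (a real system would use fixed-point shares in a finite field):

```python
# Additive float shares; all values are illustrative assumptions.
learning_rate = 0.1

w_plain, grad_plain = 2.0, 0.5
w_shares = (1.3, 0.7)        # w_plain    = 1.3 + 0.7
grad_shares = (0.2, 0.3)     # grad_plain = 0.2 + 0.3

# The step w <- w - lr * g is affine, so each party updates its own
# share locally; no communication is needed for the update itself.
new_shares = tuple(ws - learning_rate * gs
                   for ws, gs in zip(w_shares, grad_shares))

# Reconstructing the updated shares matches the plaintext update.
assert abs(sum(new_shares) - (w_plain - learning_rate * grad_plain)) < 1e-9
```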
Optionally, the decryption sub-module includes:
the assisted decryption unit is used for receiving a second share of the first-party model update parameters determined by the second device based on longitudinal federated learning modeling, and sending a second share of the second-party model update parameters to the second device, so that the second device determines the second target regression model parameters based on the first share of the second-party model update parameters and the second share of the second-party model update parameters determined through longitudinal federated learning modeling;
and the aggregation unit is used for aggregating the first share of the first party model update parameters and the second share of the first party model update parameters to obtain the first target regression model parameters.
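In plaintext terms, this decryption interaction amounts to each device handing over its share of the other party's update parameters and summing the two shares of its own. A single-process sketch with illustrative values:

```python
# Hypothetical end-of-training state: each device holds one share of BOTH
# parties' model update parameters (float sketch, illustrative values).
first_dev_share_of_first_params = 1.1    # held by the first device
second_dev_share_of_first_params = 0.9   # held by the second device
first_dev_share_of_second_params = 0.4
second_dev_share_of_second_params = 2.6

# Each device sends the other its share of the OTHER party's parameters,
# so each side reconstructs only its own final model.
first_target = first_dev_share_of_first_params + second_dev_share_of_first_params
second_target = first_dev_share_of_second_params + second_dev_share_of_second_params

assert abs(first_target - 2.0) < 1e-9    # first device's final parameters
assert abs(second_target - 3.0) < 1e-9   # second device's final parameters
```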
The specific implementation of the apparatus for constructing a regression model of a factorization machine of the present application is substantially the same as that of each embodiment of the method for constructing a regression model of a factorization machine, and is not described herein again.
The embodiment of the present application further provides a personalized recommendation apparatus. The apparatus is applied to the personalized recommendation device, and includes:
the secret sharing module is used for performing secret sharing with the second device to obtain secret sharing to-be-recommended user data and secret sharing model parameters;
the scoring module is used for inputting the secret sharing to-be-recommended user data into a preset scoring model, so as to score, based on the secret sharing model parameters, the items to be recommended corresponding to the secret sharing to-be-recommended user data and obtain a first secret sharing scoring result;
the calculation module is used for performing federated interaction with the second device based on the first secret sharing scoring result, so as to calculate a target score in combination with a second secret sharing scoring result determined by the second device;
and the generating module is used for generating a target recommendation list corresponding to the item to be recommended based on the target score.
Optionally, the calculation module comprises:
a receiving unit, configured to receive the second shared first party score and the second shared second party score sent by the second device;
a first calculating unit, configured to calculate a first party score based on the first shared first party score and the second shared first party score;
a second calculating unit, configured to calculate a second party score based on the first shared second party score and the second shared second party score;
and the aggregation unit is used for aggregating the first party score and the second party score to obtain the target score.
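The aggregation these units perform reduces, in plaintext terms, to reconstructing each party's partial score from its two shares and summing the results. A sketch with illustrative values:

```python
# Hypothetical shares of the two devices' partial scores (float sketch).
first_shared_first_party = 0.3     # first device's share of party-1 score
second_shared_first_party = 0.5    # second device's share, sent over
first_shared_second_party = 0.1
second_shared_second_party = 0.6

# Reconstruct each party's partial score from its two shares...
first_party_score = first_shared_first_party + second_shared_first_party
second_party_score = first_shared_second_party + second_shared_second_party

# ...then aggregate the partial scores into the target score.
target_score = first_party_score + second_party_score
assert abs(target_score - 1.5) < 1e-9
```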
The specific implementation manner of the personalized recommendation device of the present application is substantially the same as that of each embodiment of the personalized recommendation method, and is not described herein again.
The embodiment of the present application further provides a readable storage medium. The readable storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the steps of the factorization machine regression model construction method described in any one of the above.
The specific implementation manner of the readable storage medium of the present application is substantially the same as that of each embodiment of the above factorization machine regression model construction method, and is not described herein again.
The embodiment of the present application further provides a readable storage medium. The readable storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the steps of the personalized recommendation method described in any one of the above.
The specific implementation manner of the readable storage medium of the present application is substantially the same as that of each embodiment of the personalized recommendation method, and is not described herein again.
The above description is only a preferred embodiment of the present application and is not intended to limit the scope of the present application. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, or any direct or indirect application thereof in other related technical fields, shall likewise fall within the protection scope of the present application.

Claims (14)

1. A factorization machine regression model construction method, applied to a first device, the method comprising:
performing secret sharing with a second device to obtain secret sharing model parameters and secret sharing training data;
performing longitudinal federated learning modeling with the second device based on the secret sharing training data and the secret sharing model parameters, and calculating a secret sharing regression error; and
determining a first target regression model parameter based on the secret sharing regression error, and assisting the second device in determining a second target regression model parameter, so as to construct a longitudinal federated factorization machine regression model.
2. The method of factoring machine regression model construction of claim 1, wherein the secret sharing model parameters comprise a first type of sharing parameter and a second type of sharing parameter, the secret sharing training data comprises secret sharing tag data,
the step of performing longitudinal federated learning modeling with the second device based on the secret sharing training data and the secret sharing model parameters, and calculating a secret sharing regression error includes:
calculating, through federated interaction with the second device based on a preset secret sharing mechanism, a secret sharing cross feature item inner product jointly corresponding to the second type sharing parameter and the secret sharing training data;
calculating, through federated interaction with the second device based on the preset secret sharing mechanism, a secret sharing intermediate parameter jointly corresponding to the secret sharing cross feature item inner product, the secret sharing training data, the first type sharing parameter and the second type sharing parameter; and
substituting the secret sharing intermediate parameter and the secret sharing tag data into a preset regression error calculation formula to calculate the secret sharing regression error.
3. The method of claim 2, wherein the second type of shared parameters comprises a first shared second type of model parameters and a second shared second type of model parameters, the secret-shared training data comprises a first shared training data and a second shared training data, the secret-shared cross-feature term inner product comprises a first cross-feature term inner product and a second cross-feature term inner product, the preset secret-sharing mechanism comprises a secret-sharing multiplication,
the step of calculating, through federated interaction with the second device based on a preset secret sharing mechanism, the secret sharing cross feature item inner product jointly corresponding to the second type sharing parameter and the secret sharing training data comprises:
calculating, based on the secret sharing multiplication and through federated interaction with the second device, cross inner products between each element in the first shared second type model parameters and each element in the first shared training data, and obtaining each first element cross inner product;
calculating, based on the secret sharing multiplication and through federated interaction with the second device, cross inner products between each element in the second shared second type model parameters and each element in the second shared training data, and obtaining each second element cross inner product; and
accumulating the first element cross inner products and the second element cross inner products respectively, to obtain the first cross feature term inner product corresponding to the first element cross inner products and the second cross feature term inner product corresponding to the second element cross inner products.
4. The method of claim 2, wherein the pre-defined secret sharing mechanism comprises a secret sharing multiplication and a secret sharing addition,
the step of calculating a secret sharing intermediate parameter corresponding to the secret sharing cross feature item inner product, the secret sharing training data, the first type sharing parameter and the second type sharing parameter together by performing federal interaction with the second device based on the preset secret sharing mechanism comprises:
calculating a first intermediate parameter item corresponding to the first type of shared parameter and the secret sharing training data together through federated interaction with the second device based on the secret sharing multiplication;
calculating a second intermediate parameter item corresponding to the secret sharing cross feature item inner product, the secret sharing training data and the second type sharing parameter in common based on the secret sharing addition and the secret sharing multiplication;
calculating the secret sharing intermediate parameter based on the first intermediate parameter item and the second intermediate parameter item.
5. The method of claim 1, wherein the secret-sharing model parameters include a first shared parameter and a second shared parameter, the secret-sharing training data includes a first shared training data and a second shared training data,
the step of performing secret sharing with the second device to obtain secret sharing model parameters and secret sharing training data comprises:
acquiring a first party model parameter and first party training label data, and taking a first share of the first party model parameter as the first sharing parameter;
sending a second share of the first party model parameters to the second device for the second device to determine third sharing parameters;
receiving a second sharing parameter sent by the second device, wherein the second sharing parameter is a second share of a second-party model parameter obtained by the second device, and a first share of the second-party model parameter is a fourth sharing parameter of the second device;
using a first share of the first party training label data as the first shared training data, and sending a second share of the first party training label data to the second device, so that the second device determines third shared training data;
receiving second shared training data sent by the second device, wherein the second shared training data is a second share of second-party training data acquired by the second device, and a first share of the second-party training data is fourth shared training data of the second device.
6. The method of claim 1, wherein determining a first target regression model parameter based on the secret shared regression error and assisting the second device in determining a second target regression model parameter comprises:
updating the secret sharing model parameters based on the secret sharing regression error to obtain the secret sharing regression model updating parameters;
determining the first target regression model parameter by performing a decryption interaction with the second device based on the secret shared regression model update parameter for the second device to determine the second target regression model parameter.
7. The method of claim 6, wherein the secret sharing model parameters comprise first secret sharing model parameters and second secret sharing model parameters, the secret sharing regression model update parameters comprise first sharing regression model parameters and second sharing regression model parameters,
the step of updating the secret sharing model parameters based on the secret sharing regression error to obtain the secret sharing regression model update parameters comprises:
calculating first gradient information of the secret-sharing regression error with respect to the first secret-sharing model parameter, and calculating second gradient information of the secret-sharing regression error with respect to the first shared second-type model parameter;
updating the first secret sharing model parameter based on the first gradient information and a preset first learning parameter until a preset federal learning end condition is met, and obtaining the first sharing regression model parameter;
and updating the second secret sharing model parameter based on the second gradient information and a preset second learning parameter until the preset federal learning end condition is met, and obtaining the second sharing regression model parameter.
8. The method of factoring machine regression model construction of claim 6, wherein the secret shared regression model update parameters comprise a first share of a first party model update parameter and a second share of a second party model update parameter,
the step of determining the first target regression model parameters for the second device to determine the second target regression model parameters by performing a decryption interaction with the second device based on the secret shared regression model update parameters comprises:
receiving a second share of the first-party model update parameters determined by the second device based on longitudinal federated learning modeling, and sending a second share of the second-party model update parameters to the second device, so that the second device determines the second target regression model parameters based on the first share of the second-party model update parameters and the second share of the second-party model update parameters determined through longitudinal federated learning modeling; and
and aggregating the first share of the first party model update parameters and the second share of the first party model update parameters to obtain the first target regression model parameters.
9. A personalized recommendation method, applied to a first device, the method comprising:
performing secret sharing with a second device to obtain secret sharing to-be-recommended user data and secret sharing model parameters;
inputting the secret sharing to-be-recommended user data into a preset scoring model, and scoring, based on the secret sharing model parameters, items to be recommended corresponding to the secret sharing to-be-recommended user data to obtain a first secret sharing scoring result;
performing federated interaction with the second device based on the first secret sharing scoring result, so as to calculate a target score in combination with a second secret sharing scoring result determined by the second device;
and generating a target recommendation list corresponding to the item to be recommended based on the target score.
10. The personalized recommendation method of claim 9, wherein the first secret sharing score result comprises a first shared first party score and a first shared second party score, the second secret sharing score result comprises a second shared first party score and a second shared second party score,
the step of calculating a target score based on the first secret sharing score result and the second secret sharing score result determined by the second device through federated interaction with the second device includes:
receiving the second sharing first party score and the second sharing second party score sent by the second device;
calculating a first party score based on the first shared first party score and the second shared first party score;
calculating a second party score based on the first shared second party score and the second shared second party score;
and aggregating the first party score and the second party score to obtain the target score.
11. A factorization machine regression model construction device, wherein the factorization machine regression model construction device comprises: a memory, a processor, and a program stored on the memory for implementing the factorization machine regression model construction method, wherein:
the memory is used for storing the program for implementing the factorization machine regression model construction method; and
the processor is used for executing the program for implementing the factorization machine regression model construction method, to implement the steps of the factorization machine regression model construction method according to any one of claims 1 to 8.
12. A readable storage medium, wherein a program for implementing the factorization machine regression model construction method is stored on the readable storage medium, and the program is executable by a processor to implement the steps of the factorization machine regression model construction method according to any one of claims 1 to 8.
13. A personalized recommendation device, characterized in that the personalized recommendation device comprises: a memory, a processor and a program stored on the memory for implementing the personalized recommendation method,
the memory is used for storing a program for realizing the personalized recommendation method;
the processor is used for executing the program for implementing the personalized recommendation method to implement the steps of the personalized recommendation method according to any one of claims 9 to 10.
14. A readable storage medium, wherein a program for implementing a personalized recommendation method is stored on the readable storage medium, and the program for implementing the personalized recommendation method is executed by a processor to implement the steps of the personalized recommendation method according to any one of claims 9 to 10.
CN202010893497.XA 2020-08-28 2020-08-28 Factorization machine regression model construction method and device and readable storage medium Pending CN112000988A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010893497.XA CN112000988A (en) 2020-08-28 2020-08-28 Factorization machine regression model construction method and device and readable storage medium


Publications (1)

Publication Number Publication Date
CN112000988A true CN112000988A (en) 2020-11-27

Family

ID=73465476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010893497.XA Pending CN112000988A (en) 2020-08-28 2020-08-28 Factorization machine regression model construction method and device and readable storage medium

Country Status (1)

Country Link
CN (1) CN112000988A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018174873A1 (en) * 2017-03-22 2018-09-27 Visa International Service Association Privacy-preserving machine learning
CN110709863A (en) * 2019-01-11 2020-01-17 阿里巴巴集团控股有限公司 Logistic regression modeling scheme using secret sharing
CN110288094A (en) * 2019-06-10 2019-09-27 深圳前海微众银行股份有限公司 Model parameter training method and device based on federation's study
CN111079939A (en) * 2019-11-28 2020-04-28 支付宝(杭州)信息技术有限公司 Machine learning model feature screening method and device based on data privacy protection
CN111241567A (en) * 2020-01-16 2020-06-05 深圳前海微众银行股份有限公司 Longitudinal federal learning method, system and storage medium based on secret sharing
CN111259446A (en) * 2020-01-16 2020-06-09 深圳前海微众银行股份有限公司 Parameter processing method, equipment and storage medium based on federal transfer learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DASHAN GAO ET AL.: "Privacy Threats Against Federated Matrix Factorization", arXiv, 3 July 2020, pages 1-6 *
SENCI YING: "Shared MF: A privacy-preserving recommendation system", arXiv, 18 August 2020, pages 1-3 *
TANG CHUNMING ET AL.: "A Privacy-Preserving Regression Algorithm Based on Secure Two-Party Computation", Netinfo Security, no. 10, 10 October 2018, pages 10-16 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113240184A (en) * 2021-05-21 2021-08-10 浙江大学 Building space unit cold load prediction method and system based on federal learning
CN113240184B (en) * 2021-05-21 2022-06-24 浙江大学 Building space unit cold load prediction method and system based on federal learning
CN113505894A (en) * 2021-06-02 2021-10-15 北京航空航天大学 Longitudinal federated learning linear regression and logistic regression model training method and device
CN113505894B (en) * 2021-06-02 2023-12-15 北京航空航天大学 Longitudinal federal learning linear regression and logistic regression model training method and device
CN113536667A (en) * 2021-06-22 2021-10-22 同盾科技有限公司 Federated model training method and device, readable storage medium and equipment
CN113536667B (en) * 2021-06-22 2024-03-01 同盾科技有限公司 Federated model training method and device, readable storage medium and equipment

Similar Documents

Publication Publication Date Title
Perifanis et al. Federated neural collaborative filtering
KR102337168B1 (en) Logistic Regression Modeling Method Using Secret Sharing
WO2022089256A1 (en) Method, apparatus and device for training federated neural network model, and computer program product and computer-readable storage medium
CN112000987A (en) Factorization machine classification model construction method and device and readable storage medium
CN112000988A (en) Factorization machine regression model construction method and device and readable storage medium
US20230078061A1 (en) Model training method and apparatus for federated learning, device, and storage medium
Li et al. A privacy-preserving high-order neuro-fuzzy c-means algorithm with cloud computing
CN112085159B (en) User tag data prediction system, method and device and electronic equipment
Vu Privacy-preserving Naive Bayes classification in semi-fully distributed data model
CN112016698B (en) Factorization machine model construction method, factorization machine model construction equipment and readable storage medium
Miao et al. Federated deep reinforcement learning based secure data sharing for Internet of Things
CN112818374A (en) Joint training method, device, storage medium and program product of model
CN112926073A (en) Federal learning modeling optimization method, apparatus, medium, and computer program product
CN113761350B (en) Data recommendation method, related device and data recommendation system
CN111291273A (en) Recommendation system optimization method, device, equipment and readable storage medium
Zhang et al. PPNNP: A privacy-preserving neural network prediction with separated data providers using multi-client inner-product encryption
US20220270299A1 (en) Enabling secure video sharing by exploiting data sparsity
CN111985573A (en) Factorization machine classification model construction method and device and readable storage medium
Deng et al. Non-interactive and privacy-preserving neural network learning using functional encryption
Zheng et al. PPSFL: Privacy-Preserving Split Federated Learning for heterogeneous data in edge-based Internet of Things
CN112949866A (en) Poisson regression model training method and device, electronic equipment and storage medium
Chen et al. SHOSVD: Secure outsourcing of high-order singular value decomposition
CN112598127A (en) Federal learning model training method and device, electronic equipment, medium and product
US20230325718A1 (en) Method and apparatus for joint training logistic regression model
KR20150115762A (en) Privacy protection against curious recommenders

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination