CN110837653A - Label prediction method, device and computer readable storage medium - Google Patents


Publication number
CN110837653A
CN110837653A (application CN201911083212.XA)
Authority
CN
China
Prior art keywords: demander, model, provider, parameter, parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911083212.XA
Other languages
Chinese (zh)
Other versions
CN110837653B (en)
Inventor
吴玙
马国强
张杰
范涛
魏文斌
陈天健
杨强
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd
Priority application: CN201911083212.XA
Publication of CN110837653A
Application granted
Publication of CN110837653B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 Protecting data
    • G06F21/602 Providing cryptographic facilities or services
    • G06F21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 Protecting access to data via a platform to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/02 Banking, e.g. interest calculation or account maintenance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/04 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • Finance (AREA)
  • Medical Informatics (AREA)
  • Accounting & Taxation (AREA)
  • Computing Systems (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • Evolutionary Computation (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a label prediction method comprising the following steps: a demander obtains a first parameter of its updated demander model, a first feature quantity of a demander prediction sample, and a first exposure of that sample; the demander determines a first predicted value of the demander model based on the first parameter, the first feature quantity and the first exposure; the demander obtains a second predicted value of a provider model together with a Poisson calculation rule, the second predicted value having been determined by the provider from the updated provider model parameters and features; and the demander determines the predicted label quantity of the demander prediction sample based on the first predicted value, the second predicted value and the Poisson calculation rule. The invention also discloses a label prediction device and a computer readable storage medium. By training the demander model and the provider model of a vertical federated learning model with a Poisson regression scheme, the method accurately predicts the label quantity corresponding to a demander prediction sample, solving the prior-art problem that accurate label data cannot be predicted.

Description

Label prediction method, device and computer readable storage medium
Technical Field
The present invention relates to the field of financial technology (Fintech), and in particular, to a method and an apparatus for label prediction and a computer-readable storage medium.
Background
With the development of computer technology, more and more technologies (big data, distributed computing, blockchain, artificial intelligence, etc.) are applied in the financial field, and the traditional financial industry is gradually shifting toward financial technology (Fintech); at the same time, the security and real-time requirements of the financial industry place higher demands on these technologies. For example, federated learning is widely applied in the financial field: it performs efficient machine learning jointly among multiple participants or computing nodes while guaranteeing information security during big-data exchange, protecting the privacy of terminal data and personal data, and remaining legally compliant. Vertical (longitudinal) federated learning is the variant used when the data sets of two participants overlap heavily in users but little in user features; it splits the data sets along the feature dimension and trains on the portion of the data where the users are the same but the user features are not.
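As a concrete illustration of this vertical split, the toy sketch below shows two participants whose user sets overlap but whose feature sets are disjoint, and the intersection of users on which vertical federated learning would train. All user IDs, feature names and values are invented for this example, not taken from the patent.

```python
# Toy illustration of vertically partitioned data. Party A (a provider)
# holds some features for its users; party B (a demander) holds different
# features plus the labels. All names and values are hypothetical.
party_a = {
    "u1": {"age": 34, "city_tier": 2},
    "u2": {"age": 51, "city_tier": 1},
    "u4": {"age": 29, "city_tier": 3},
}
party_b = {
    "u1": {"income": 8000, "label": 3},
    "u2": {"income": 12000, "label": 7},
    "u3": {"income": 5000, "label": 1},
}

# Vertical federated learning trains only on the overlapping users,
# combining the two disjoint feature sets without exchanging raw data.
shared_ids = sorted(party_a.keys() & party_b.keys())
print(shared_ids)  # ['u1', 'u2']
```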
In a prior-art vertical federated learning scenario with a party A, a party B and a party C, party B holds the labels with the highest commercial value, party A holds certain features that party B lacks, and party C acts as coordinator; the data for which party A's users coincide with party B's but the user features differ is extracted for federated modeling and prediction. Such modeling can only predict whether a label is correct or wrong, so accurate label data cannot be predicted.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a label prediction method, a label prediction device and a computer readable storage medium, so as to solve the technical problem that accurate label data cannot be predicted.
In order to achieve the above object, the present invention provides a label prediction method, including the steps of:
a demander obtains a first parameter of the updated demander model, a first feature quantity of a demander prediction sample, and a first exposure of the demander prediction sample;
the demander determines a first predicted value of the demander model based on the first parameter, the first feature quantity and the first exposure;
the demander obtains a second predicted value of a provider model and a Poisson calculation rule, wherein the provider obtains a second parameter of the updated provider model and a second feature quantity of a provider prediction sample, and determines the second predicted value based on the second parameter and the second feature quantity;
and the demander determines the predicted label quantity of the demander prediction sample based on the first predicted value, the second predicted value and the Poisson calculation rule.
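One plausible reading of these prediction steps in Poisson-regression terms: each party computes a partial linear predictor over its own features, and the demander combines them through the Poisson mean function (the "Poisson calculation rule"), scaled by the exposure. The sketch below assumes this reading; the function name and numbers are illustrative, not from the patent.

```python
import math

def poisson_predict(first_pred, second_pred, exposure):
    # Combine the demander's partial score (w_B . x_B) and the provider's
    # partial score (w_A . x_A) under the Poisson mean function:
    #   mu = exposure * exp(u_demander + u_provider)
    return exposure * math.exp(first_pred + second_pred)

# Illustrative values: demander's partial score 0.4, provider's 0.1,
# exposure 2.0; mu is the predicted label quantity.
mu = poisson_predict(0.4, 0.1, exposure=2.0)
```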
Optionally, before the step of obtaining the updated first parameter of the demander model, the first feature quantity of the demander prediction sample, and the first exposure quantity of the demander prediction sample, the method further includes:
the demander acquires a third parameter before the demander model is updated, a third characteristic quantity of a demander training sample and a second exposure of the demander training sample;
the demander determines a third predicted value of the demander model based on the third parameter, the third feature quantity and the second exposure;
the provider is used for acquiring a fourth parameter before the provider model is updated and a fourth feature quantity of the provider training sample, and the provider determines a fourth predicted value of the provider model based on the fourth parameter and the fourth feature quantity;
the demander determines fifth parameters of the demander model based on the third predicted values to update demander model parameters and train the demander model, and the provider determines sixth parameters of the provider model based on the fourth predicted values to update provider model parameters and train the provider model.
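Under the same Poisson-regression reading, the third and fourth predicted values of the training phase can be pictured as each party's local linear predictor over its own features, computed without any data exchange. The matrices and weights below are illustrative assumptions:

```python
import numpy as np

def partial_predictor(params, features):
    # Each party's local contribution to the linear predictor: u = X @ w.
    return features @ params

X_b = np.array([[1.0, 0.5], [0.2, 1.5]])  # demander features (third feature quantity)
X_a = np.array([[2.0], [0.3]])            # provider features (fourth feature quantity)
w_b = np.array([0.1, 0.2])                # demander params before update (third parameter)
w_a = np.array([0.4])                     # provider params before update (fourth parameter)

u3 = partial_predictor(w_b, X_b)  # third predicted value, one entry per sample
u4 = partial_predictor(w_a, X_a)  # fourth predicted value, one entry per sample
```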
Optionally, the determining, by the demander, a fifth parameter of the demander model based on the third predicted value to update demander model parameters and train the demander model, and the determining, by the provider, a sixth parameter of the provider model based on the fourth predicted value to update provider model parameters and train the provider model comprises:
the demander acquires the label quantity of the demander training sample, the public key information provided by the coordinator, and the intermediate encryption quantity of the provider model, wherein the provider acquires the fourth predicted value and the public key information provided by the coordinator, and the provider determines the intermediate encryption quantity based on the fourth predicted value and the public key information;
the demander determines the encryption residual quantity of the demander model based on the third predicted value, the label quantity and the intermediate encryption quantity;
the demander determines a fifth parameter of the demander model based on the encryption residual quantity and the public key information to update a parameter of the demander model and train the demander model, and the provider determines a sixth parameter of the provider model based on the encryption residual quantity and the public key information to update a parameter of the provider model and train the provider model.
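The patent does not name the cryptosystem behind the public/private key pair; in vertical federated learning it is typically an additively homomorphic scheme such as Paillier. The sketch below simulates the residual computation in plaintext to show the arithmetic only; in a real deployment the provider's intermediate quantity would stay encrypted, and the exponential would usually be approximated (e.g. by a Taylor expansion) so the residual remains linear in the ciphertexts.

```python
import math

def residual(label, exposure, u_demander, u_provider):
    # Poisson mean from the two partial predictors. This is a plaintext
    # simulation: u_provider stands in for the intermediate encryption
    # quantity the provider would send under homomorphic encryption.
    mu = exposure * math.exp(u_demander + u_provider)
    return label - mu  # the (encryption) residual quantity

# Illustrative values: label 3, exposure 1, partial scores 0.4 and 0.1.
r = residual(3.0, 1.0, 0.4, 0.1)
```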
Optionally, the step of the demander determining a fifth parameter of the demander model based on the encryption residual quantity and the public key information to update the demander model parameters and train the demander model, and of the provider determining a sixth parameter of the provider model based on the encryption residual quantity and the public key information to update the provider model parameters and train the provider model, comprises:
the demander determines a first encryption gradient of the demander model based on the third feature quantity, the encryption residual quantity and the public key information;
the provider determines a second encryption gradient of the provider model based on the fourth feature quantity, the encryption residual quantity and the public key information;
the demander determines fifth parameters of the demander model based on the first encryption gradient to update demander model parameters and train the demander model, and the provider determines sixth parameters of the provider model based on the second encryption gradient to update provider model parameters and train the provider model.
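For Poisson regression with an additive split of the linear predictor, each party's gradient depends only on its own features and the shared residuals, which is what makes this split computation possible. A plaintext sketch with illustrative numbers (under encryption the residuals would be ciphertexts and the matrix product would use homomorphic additions):

```python
import numpy as np

def local_gradient(features, residuals):
    # Negative log-likelihood gradient w.r.t. this party's weights:
    # g = -X^T r / n, using only local features and the shared residuals.
    return -features.T @ residuals / len(residuals)

X = np.array([[1.0, 0.5], [0.2, 1.5]])  # one party's local features
r = np.array([0.3, -0.1])               # shared residuals
g = local_gradient(X, r)
```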
Optionally, the determining, by the demander, fifth parameters of the demander model based on the first encryption gradient to update demander model parameters and train the demander model, and the determining, by the provider, sixth parameters of the provider model based on the second encryption gradient to update provider model parameters and train the provider model, comprise:
the coordinator is used for acquiring a first encryption gradient of the demander model, a second encryption gradient of the provider model and private key information corresponding to the public key information;
the coordinator is used for determining a first decryption gradient corresponding to the demander model based on the first encryption gradient and the private key information;
the coordinator is used for determining a second decryption gradient corresponding to the provider model based on the second encryption gradient and the private key information;
the demander determines fifth parameters of the demander model based on the first decryption gradient to update demander model parameters and train the demander model, and the provider determines sixth parameters of the provider model based on the second decryption gradient to update provider model parameters and train the provider model.
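Once the coordinator has decrypted a gradient with its private key, the corresponding party's update is ordinary gradient descent on its own weights. A minimal sketch, with a learning rate that is an assumed hyperparameter (the patent does not specify one):

```python
def update_params(params, decrypted_gradient, lr=0.1):
    # Each party updates only its own weights using the gradient the
    # coordinator decrypted and returned; lr is an assumption.
    return [w - lr * g for w, g in zip(params, decrypted_gradient)]

# Illustrative: old weights [0.1, 0.2], decrypted gradient [0.3, -0.1].
w_new = update_params([0.1, 0.2], [0.3, -0.1])
```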
Optionally, after the steps of determining, by the demander, fifth parameters of the demander model based on the first decryption gradient to update parameters of the demander model and training the demander model, and determining, by the provider, sixth parameters of the provider model based on the second decryption gradient to update parameters of the provider model and training the provider model, the method further includes:
the demander determines the encryption loss variation of the demander model based on the third predicted value, the intermediate encryption amount and the second exposure amount;
the coordinator is used for acquiring the encryption loss variation of the demander model and detecting whether the encryption loss variation is smaller than or equal to a first preset threshold;
the step of the demander acquiring the updated first parameter of the demander model comprises the following steps:
if the encryption loss variation is smaller than or equal to the first preset threshold, the demander updates the parameter of the demander model, acquires the fifth parameter, and takes the fifth parameter as the first parameter to train the demander model;
the step that the provider obtains the updated second parameter of the provider model comprises the following steps:
if the encryption loss variation is smaller than or equal to the first preset threshold, the provider updates the parameter of the provider model, and the provider acquires the sixth parameter, and takes the sixth parameter as a second parameter to train the provider model;
and if the encryption loss variation is larger than the first preset threshold, the demander continuously executes the step that the demander determines a fifth parameter of the demander model based on the first decryption gradient, and the provider continuously executes the step that the provider determines a sixth parameter of the provider model based on the second decryption gradient.
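The control flow of this stopping rule can be sketched as a loop that stops either when the loss change falls at or below the first preset threshold, or when a maximum round count (the second preset threshold of the round-count embodiment) is reached. `step`, `tol` and `max_rounds` are illustrative stand-ins, not names from the patent:

```python
def train(step, init_loss, tol=1e-4, max_rounds=100):
    # Iterate until the loss change is <= tol, mirroring the coordinator's
    # check against the first preset threshold; max_rounds mirrors the
    # second preset threshold on the number of training rounds.
    loss = init_loss
    for rounds in range(1, max_rounds + 1):
        new_loss = step(loss)
        if abs(loss - new_loss) <= tol:
            return rounds          # converged: parameters become final
        loss = new_loss
    return max_rounds              # round budget exhausted

# Example: a loss that halves each round converges once its change <= 1e-4.
rounds = train(lambda l: l / 2, 1.0)
```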
Optionally, before the step of acquiring, by the demander, the third parameter before updating the demander model, the third feature quantity of the demander training sample, and the second exposure quantity of the demander training sample, the method further includes:
the demander acquires the demander training samples, and the provider acquires the training sample amount of each training sample provided by the demander;
the demander determines a third characteristic quantity of the demander training sample and a second exposure quantity of the demander training sample based on the demander training sample;
the provider is used for determining the provider training sample matched with the demander training sample based on the training sample amount;
the provider is used for determining a fourth feature quantity of the provider training sample based on the provider training sample.
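A minimal sketch of this sample-alignment step, assuming the "training sample amount" exchanged here amounts to the shared sample identifiers (the patent is not explicit about its encoding); the table contents are invented:

```python
# The demander shares only sample identifiers; the provider selects the
# matching rows of its own feature table to build its training sample.
demander_ids = ["u1", "u2", "u3"]                              # demander training samples
provider_table = {"u1": [34, 2], "u2": [51, 1], "u4": [29, 3]}  # provider features

matched = {sid: provider_table[sid] for sid in demander_ids if sid in provider_table}
# matched now holds the provider training samples aligned with the demander's.
```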
Optionally, after the steps of determining, by the demander, fifth parameters of the demander model based on the first decryption gradient to update parameters of the demander model and training the demander model, and determining, by the provider, sixth parameters of the provider model based on the second decryption gradient to update parameters of the provider model and training the provider model, the method further includes:
the coordinator is used for obtaining the number of model training rounds of the demander model and detecting whether the number of training rounds is larger than or equal to a second preset threshold;
the step of the demander acquiring the updated first parameter of the demander model comprises the following steps:
if the number of model training rounds is larger than or equal to a second preset threshold, the demander updates the parameters of the demander model, acquires the fifth parameter, and takes the fifth parameter as a first parameter to train the demander model;
the step that the provider obtains the updated second parameter of the provider model comprises the following steps:
if the number of model training rounds is larger than or equal to a second preset threshold, the provider updates the parameters of the provider model, and the provider acquires the sixth parameter which is used as a second parameter to train the provider model;
and if the number of model training rounds is smaller than a second preset threshold value, the demander continuously executes the step that the demander determines a fifth parameter of the demander model based on the first decryption gradient, and the provider continuously executes the step that the provider determines a sixth parameter of the provider model based on the second decryption gradient.
In order to achieve the above object, the present invention also provides a label prediction apparatus, including: a memory, a processor and a tag prediction program stored on the memory and executable on the processor, the tag prediction program when executed by the processor implementing the steps of the tag prediction method as described above.
Furthermore, to achieve the above object, the present invention also provides a computer readable storage medium having stored thereon a tag prediction program, which when executed by a processor, implements the steps of the tag prediction method as described above.
In the present invention, a demander obtains a first parameter of the updated demander model, a first feature quantity of a demander prediction sample and a first exposure of the demander prediction sample; the demander determines a first predicted value of the demander model based on the first parameter, the first feature quantity and the first exposure; the demander obtains a second predicted value of a provider model and a Poisson calculation rule, wherein the provider obtains a second parameter of the updated provider model and a second feature quantity of a provider prediction sample and determines the second predicted value from them; and the demander determines the predicted label quantity of the demander prediction sample based on the first predicted value, the second predicted value and the Poisson calculation rule. By training the demander model and the provider model of a vertical federated learning model with a Poisson regression scheme, the predicted label quantity corresponding to a demander prediction sample can be accurately predicted, solving the prior-art problem that accurate label data cannot be predicted; and because a vertical federated learning model is built, the risk of leaking terminal data and personal data privacy is also avoided.
Drawings
FIG. 1 is a schematic structural diagram of a tag prediction apparatus of a hardware operating environment according to an embodiment of a tag prediction method of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of a tag prediction method according to the present invention;
FIG. 3 is a schematic diagram of a prediction flow of the tag prediction method of the present invention;
FIG. 4 is a schematic diagram of a modeling flow of the label prediction method of the present invention;
FIG. 5 is a schematic diagram of a modeling flow of the label prediction method of the present invention;
FIG. 6 is a schematic diagram of a modeling flow of a label prediction method of the present invention;
FIG. 7 is a schematic diagram of a modeling flow of the label prediction method of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a tag prediction apparatus of a hardware operating environment according to an embodiment of the present invention.
The label prediction device of the embodiment of the invention can be a PC, and can also be a mobile terminal device with a display function, such as a smart phone, a tablet computer, an electronic book reader, a portable computer and the like.
As shown in fig. 1, the tag prediction apparatus may include: a processor 1001 (e.g., a CPU), a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to enable communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory); alternatively, the memory 1005 may be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the tag prediction apparatus configuration shown in fig. 1 does not constitute a limitation of the tag prediction apparatus and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a tag prediction program.
In the label prediction apparatus shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be used to invoke a tag prediction program stored in the memory 1005.
In this embodiment, the tag prediction apparatus includes: a memory 1005, a processor 1001, and a tag prediction program stored in the memory 1005 and executable on the processor 1001, wherein the processor 1001, when calling the tag prediction program stored in the memory 1005, performs the following operations:
a demander obtains a first parameter of the updated demander model, a first feature quantity of a demander prediction sample, and a first exposure of the demander prediction sample;
the demander determines a first predicted value of the demander model based on the first parameter, the first feature quantity and the first exposure;
the demander obtains a second predicted value of a provider model and a Poisson calculation rule, wherein the provider obtains a second parameter of the updated provider model and a second feature quantity of a provider prediction sample, and determines the second predicted value based on the second parameter and the second feature quantity;
and the demander determines the predicted label quantity of the demander prediction sample based on the first predicted value, the second predicted value and the Poisson calculation rule.
Further, the processor 1001 may call a tag prediction program stored in the memory 1005, and also perform the following operations:
the demander acquires a third parameter before the demander model is updated, a third characteristic quantity of a demander training sample and a second exposure of the demander training sample;
the demander determines a third predicted value of the demander model based on the third parameter, the third feature quantity and the second exposure;
the provider is used for acquiring a fourth parameter before the provider model is updated and a fourth feature quantity of the provider training sample, and the provider determines a fourth predicted value of the provider model based on the fourth parameter and the fourth feature quantity;
the demander determines fifth parameters of the demander model based on the third predicted values to update demander model parameters and train the demander model, and the provider determines sixth parameters of the provider model based on the fourth predicted values to update provider model parameters and train the provider model.
Further, the processor 1001 may call a tag prediction program stored in the memory 1005, and also perform the following operations:
the demander acquires the label quantity of the demander training sample, the public key information provided by the coordinator, and the intermediate encryption quantity of the provider model, wherein the provider acquires the fourth predicted value and the public key information provided by the coordinator, and the provider determines the intermediate encryption quantity based on the fourth predicted value and the public key information;
the demander determines the encryption residual quantity of the demander model based on the third predicted value, the label quantity and the intermediate encryption quantity;
the demander determines a fifth parameter of the demander model based on the encryption residual quantity and the public key information to update a parameter of the demander model and train the demander model, and the provider determines a sixth parameter of the provider model based on the encryption residual quantity and the public key information to update a parameter of the provider model and train the provider model.
Further, the processor 1001 may call a tag prediction program stored in the memory 1005, and also perform the following operations:
the demander determines a first encryption gradient of the demander model based on the third feature quantity, the encryption residual quantity and the public key information;
the provider determines a second encryption gradient of the provider model based on the fourth feature quantity, the encryption residual quantity and the public key information;
the demander determines fifth parameters of the demander model based on the first encryption gradient to update demander model parameters and train the demander model, and the provider determines sixth parameters of the provider model based on the second encryption gradient to update provider model parameters and train the provider model.
Further, the processor 1001 may call a tag prediction program stored in the memory 1005, and also perform the following operations:
the coordinator is used for acquiring a first encryption gradient of the demander model, a second encryption gradient of the provider model and private key information corresponding to the public key information;
the coordinator is used for determining a first decryption gradient corresponding to the demander model based on the first encryption gradient and the private key information;
the coordinator is used for determining a second decryption gradient corresponding to the provider model based on the second encryption gradient and the private key information;
the demander determines fifth parameters of the demander model based on the first decryption gradient to update demander model parameters and train the demander model, and the provider determines sixth parameters of the provider model based on the second decryption gradient to update provider model parameters and train the provider model.
Further, the processor 1001 may call a tag prediction program stored in the memory 1005, and also perform the following operations:
the demander determines the encryption loss variation of the demander model based on the third predicted value, the intermediate encryption amount and the second exposure amount;
the coordinator is used for acquiring the encryption loss variation of the demander model and detecting whether the encryption loss variation is smaller than or equal to a first preset threshold;
the step of the demander acquiring the updated first parameter of the demander model comprises the following steps:
if the encryption loss variation is smaller than or equal to the first preset threshold, the demander updates the parameter of the demander model, acquires the fifth parameter, and takes the fifth parameter as the first parameter to train the demander model;
the step that the provider obtains the updated second parameter of the provider model comprises the following steps:
if the encryption loss variation is smaller than or equal to the first preset threshold, the provider updates the parameter of the provider model, and the provider acquires the sixth parameter, and takes the sixth parameter as a second parameter to train the provider model;
and if the encryption loss variation is larger than the first preset threshold, the demander continuously executes the step that the demander determines a fifth parameter of the demander model based on the first decryption gradient, and the provider continuously executes the step that the provider determines a sixth parameter of the provider model based on the second decryption gradient.
Further, the processor 1001 may call a tag prediction program stored in the memory 1005, and also perform the following operations:
the demander acquires the demander training samples, and the provider acquires the training sample amount of each training sample provided by the demander;
the demander determines a third characteristic quantity of the demander training sample and a second exposure quantity of the demander training sample based on the demander training sample;
the provider is used for determining the provider training sample matched with the demander training sample based on the training sample amount;
the provider is used for determining a fourth feature quantity of the provider training sample based on the provider training sample.
Further, the processor 1001 may call a tag prediction program stored in the memory 1005, and also perform the following operations:
the coordinator is used for obtaining the number of model training rounds of the demand side model and detecting whether the number of model training rounds is larger than or equal to a second preset threshold value;
the step of the demander acquiring the updated first parameter of the demander model comprises the following steps:
if the number of model training rounds is larger than or equal to a second preset threshold, the demander updates the parameters of the demander model, acquires the fifth parameter, and takes the fifth parameter as a first parameter to train the demander model;
the step that the provider obtains the updated second parameter of the provider model comprises the following steps:
if the number of model training rounds is larger than or equal to a second preset threshold, the provider updates the parameters of the provider model, and the provider acquires the sixth parameter which is used as a second parameter to train the provider model;
and if the number of model training rounds is smaller than a second preset threshold value, the demander continuously executes the step that the demander determines a fifth parameter of the demander model based on the first decryption gradient, and the provider continuously executes the step that the provider determines a sixth parameter of the provider model based on the second decryption gradient.
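The two stopping rules above — the encryption loss variation falling to or below a first preset threshold, or the number of training rounds reaching a second preset threshold — can be sketched as a plain training loop. This is a minimal illustration only; the function names, the toy quadratic objective and the learning rate are assumptions, not the patent's implementation:

```python
def train_until_converged(step_fn, loss_fn, theta, loss_tol=1e-6, max_rounds=100):
    """Run update rounds until the loss change is at or below loss_tol (the
    first preset threshold) or the round count reaches max_rounds (the second
    preset threshold), mirroring the coordinator's two stopping checks."""
    prev_loss = loss_fn(theta)
    rounds = 0
    for rounds in range(1, max_rounds + 1):
        theta = step_fn(theta)
        loss = loss_fn(theta)
        if abs(prev_loss - loss) <= loss_tol:  # first preset threshold reached
            break
        prev_loss = loss
    return theta, rounds

# Toy problem: minimise (theta - 3)^2 by gradient descent.
theta, rounds = train_until_converged(
    step_fn=lambda t: t - 0.1 * 2 * (t - 3),
    loss_fn=lambda t: (t - 3) ** 2,
    theta=0.0,
)
```

In the federated setting each round additionally routes the encrypted gradients through the coordinator before `step_fn` can be applied.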
The invention also provides a label prediction method, and referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of the label prediction method of the invention.
In this embodiment, the tag prediction method includes:
federated learning is a technology widely applied in the financial field: a method of machine learning performed jointly by different participants, so that efficient machine learning is carried out among multiple participants or computing nodes on the premise of ensuring information security during big data exchange, protecting terminal data and personal data privacy, and guaranteeing legal compliance. Longitudinal (vertical) federated learning is the variant used when the users of the two participants' data sets overlap heavily while the user features overlap little: the data sets are split along the longitudinal direction (i.e., the feature dimension), and the portion of data covering the same users but different user features is extracted for training.
The embodiment is applied to a three-party longitudinal federated learning scenario, referring to fig. 4. As shown in the modeling flow of fig. 4, the participants comprise a provider party A, a demander party B, and a coordinator party C: the demander B holds the labels with the most commercial value, the provider A holds some features that the demander B does not have, and the coordinator C coordinates the training. Parties A and B need to model and predict the Poisson distribution without revealing B's label information or either party's feature information.
Step S10, the demander obtains the updated first parameter of the demander model, the first characteristic quantity of the demander prediction sample and the first exposure quantity of the demander prediction sample;
referring to fig. 3, as shown in the prediction flow of fig. 3, the demander is party B in fig. 3, and the demander B comprises one party's federated learning model and model parameters, that is, the demander model and its model parameters; the first parameter is a model parameter in the demander model, namely the model parameter after the modeling process is completed and the model is updated; the first characteristic quantity is a sample feature of the demander prediction sample, that is, a user feature of the demander prediction sample — if the demander is a bank, the first characteristic quantity may be the income and expenditure behavior, credit rating and the like of a bank user; the first exposure is the acquisition unit of the label feature of the prediction sample, and may be a time span, such as 1 year or 1 month, or a geographic range, such as 10 square kilometers or 10 meters.
It can be understood that the data information contained in the demander B's prediction samples and the provider A's prediction samples cannot in fact be exchanged: first, direct data exchange between the two enterprises violates the law and easily leaks terminal data and user data; second, exchanging such sensitive data is not in either party's interest. This is where longitudinal federated learning plays its unique role, namely carrying out efficient machine learning among multiple participants or computing nodes on the premise of ensuring information security during big data exchange, protecting terminal data and personal data privacy, and guaranteeing legal compliance.
In this embodiment, after the three-party longitudinal federated learning model modeling is completed, referring to fig. 3, as shown in the prediction flow of fig. 3, the prediction flow starts, and the demander B first obtains the first parameter after the model is updated, the first feature quantity of the demander prediction sample, and the first exposure quantity of the demander prediction sample, so as to predict the subsequent samples. The first parameter can be expressed as θB, the first characteristic quantity as xB, and the first exposure amount as ei. After the first parameter is trained by parties A and B in combination with the coordinating party C, the parameters in the demander model are updated so that the result corresponding to B's prediction sample can subsequently be predicted accurately.
Step S20, the demander determines a first predicted value of the demander model based on the first parameter, the first feature quantity, and the first exposure amount;

in the present embodiment, referring to fig. 3, as shown in the prediction flow of fig. 3, after the demander B acquires the first parameter θB, the first characteristic quantity xB and the first exposure ei, the demander B calculates the first predicted value of the demander model based on these quantities. The calculation proceeds in two steps: first compute xBθB, then compute exp(xBθB); the first predicted value is given by the formula exp(xBθB)*ei.
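The two-step calculation of the first predicted value can be sketched as follows. This is an illustrative implementation with assumed toy values of θB, xB and ei; the function name is hypothetical:

```python
import numpy as np

def demander_predicted_value(theta_B, x_B, e):
    """First predicted value of the demander model, in the two steps from
    the text: the linear score x_B . theta_B, then exp(.) times the exposure."""
    score = x_B @ theta_B        # step 1: x_B * theta_B
    return np.exp(score) * e     # step 2: exp(x_B * theta_B) * e_i

theta_B = np.array([0.2, -0.1])                  # first parameter
x_B = np.array([[1.0, 2.0], [0.5, 0.5]])         # first feature quantity, two samples
e = np.array([1.0, 12.0])                        # first exposure (e.g. months observed)
p_B = demander_predicted_value(theta_B, x_B, e)
```

The exposure enters multiplicatively, so a sample observed for 12 months gets 12 times the expected count of an otherwise identical 1-month sample.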
Step S30, the demander acquires a second predicted value and a Poisson calculation rule of a provider model, wherein the provider is used for acquiring a second parameter updated by the provider model and a second characteristic quantity of a provider prediction sample, and determines the second predicted value based on the second parameter and the second characteristic quantity;
referring to fig. 3, as shown in the prediction flow of fig. 3, the provider is party A in fig. 3, and the provider A comprises one party's federated learning model and model parameters, that is, the provider model and its model parameters; the second parameter is a model parameter in the provider model, namely the model parameter after the modeling process is completed and the provider model is updated; the second characteristic quantity is a sample feature of the provider prediction sample, that is, a user feature of the provider prediction sample — if the provider A is an e-commerce platform, the second characteristic quantity may be the browsing and purchase history of an e-commerce user.
In the present embodiment, referring to fig. 3, as shown in the prediction flow of fig. 3, the provider A first obtains the second parameter θA after the model is updated and the second feature quantity xA of its prediction sample, so that the three-party longitudinal federated learning model can predict the subsequent samples. After obtaining θA and xA, the provider A calculates the second predicted value of the provider prediction sample based on them. After the provider A calculates the second predicted value, it sends it to the demander, and the demander B acquires the second predicted value calculated by the provider model together with the Poisson calculation rule preset by the demander. The calculation of the second predicted value also proceeds in two steps: first compute xAθA, then compute exp(xAθA); the second predicted value is given by the formula exp(xAθA).
It can be understood that, before the prediction process starts, the A, B party first needs to complete matching of common prediction samples through a matching mechanism, that is, A, B party knows the sample id to be predicted, and performs matching on the common prediction samples, and only A, B parties have the common prediction sample corresponding to the sample id to perform prediction of the tag feature on the prediction sample, where the prediction sample may be one or multiple.
And step S40, the demander determines the prediction label quantity of the demand side prediction sample based on the first prediction value, the second prediction value and the Poisson calculation rule.
The predicted tag quantity is the number of occurrences of a certain event within a certain time range, namely the count-valued label data that the demander model needs to predict, such as the number of times a certain user purchases funds in a month. The Poisson calculation rule may be a Poisson distribution model or formula, or another model with an equivalent Poisson-distribution effect.
In this embodiment, referring to fig. 3, as shown in the prediction flow of fig. 3, after the demander determines the first predicted value and the provider determines the second predicted value, the provider A sends its second predicted value exp(xAθA) to the demander B. After the demander B receives it, the demander model calculates the prediction label quantity of the demander prediction sample based on the first predicted value exp(xBθB)*ei, the second predicted value exp(xAθA), and the Poisson calculation rule, thereby obtaining the prediction result for the label feature of B's prediction sample. Based on the Poisson calculation rule, the predicted label quantity is calculated by the formula exp(xAθA)*exp(xBθB)*ei.
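The Poisson calculation rule above — the predicted label quantity as the product of both parties' predicted values — can be sketched as follows, with hypothetical names and toy numbers:

```python
import numpy as np

def provider_predicted_value(theta_A, x_A):
    """Second predicted value of the provider model: exp(x_A . theta_A)."""
    return np.exp(x_A @ theta_A)

def predicted_label(p_B, p_A):
    """Poisson calculation rule: the predicted label quantity is the product
    of the demander's and provider's predicted values, i.e.
    exp(x_A.theta_A) * exp(x_B.theta_B) * e_i."""
    return p_B * p_A

theta_A = np.array([0.3])
x_A = np.array([[1.0], [2.0]])            # provider features of two common samples
p_A = provider_predicted_value(theta_A, x_A)
p_B = np.array([1.0, 12.6])               # demander predicted values from step S20
y_hat = predicted_label(p_B, p_A)         # predicted counts, e.g. fund purchases
```

Note the demander only ever sees the finished value exp(xAθA), never θA or xA themselves.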
In the longitudinal federated learning scenario of this embodiment, the demander B combines the user features (the first feature quantity) and exposure (the first exposure quantity) of its own samples with the user features of the common samples provided by the provider A (the second feature quantity); based on the model containing the first parameter and the second parameter, obtained after parties A and B train jointly with party C, the label quantity of a sample can be predicted. For example, given the income and expenditure behavior and credit rating of B's bank users and the browsing and purchase history of A's e-commerce users, the number of times a certain user will purchase funds in a certain future period can be predicted through the longitudinal federated learning model.
In the label prediction method provided in this embodiment, the demander obtains the first parameter after the demander model is updated, the first feature quantity of the demander prediction sample, and the first exposure of the demander prediction sample; the demander determines a first predicted value of the demander model based on the first parameter, the first feature quantity, and the first exposure; the demander obtains a second predicted value of the provider model and the Poisson calculation rule, wherein the provider obtains the second parameter after the provider model is updated and the second feature quantity of the provider prediction sample, and determines the second predicted value based on the second parameter and the second feature quantity; and the demander determines the predicted label quantity of the demander prediction sample based on the first predicted value, the second predicted value, and the Poisson calculation rule. By combining a Poisson regression implementation scheme to train the demander model and provider model in the longitudinal federated learning model, the predicted label quantity corresponding to a demander prediction sample can be accurately predicted, which solves the problem that accurate label data cannot be predicted in the prior art; and by building a longitudinal federated learning model, the problem that terminal data and personal data privacy are easily leaked is also addressed.
Based on the first embodiment, a second embodiment of the method of the present invention is provided, where before step S10, the method further includes:
step a, the demander acquires a third parameter of the demander model, a third characteristic quantity of a demander training sample and a second exposure of the demander training sample;
referring to fig. 4, as shown in the modeling process of fig. 4, the third parameter is a model parameter in the demander model during the modeling process; unlike the first parameter, which is a model parameter of the demander model after modeling is completed, the third parameter is a model parameter of the not-yet-trained or in-training model, at which time the longitudinal federated learning model has not finished modeling; the third characteristic quantity is a sample feature in the demander B's training samples, i.e. a user feature in B's training samples — if the demander is a bank, the third characteristic quantity may be the income and expenditure behavior, credit rating and the like of a bank user; the second exposure is the acquisition unit of the label features of the training sample, and may be a time span, such as 1 year or 1 month, or a geographic range, such as 10 square kilometers or 10 meters.
In this embodiment, referring to fig. 4, as shown in the modeling flow of fig. 4, in the longitudinal federated learning modeling process, the demander B obtains the third parameter of its model, the third feature quantity of its training samples and the second exposure quantity of its training samples, to be used for the subsequent training of the three-party longitudinal federated learning model. The third parameter can be expressed as θB, the third characteristic amount as XB, and the second exposure amount as E.
Step b, the demander determines a third predicted value of the demander model based on the third parameter, the third characteristic quantity and the second exposure;

in the present embodiment, referring to fig. 4, as shown in the modeling flow of fig. 4, after the demander B obtains the third parameter θB, the third characteristic quantity XB and the second exposure E, the demander B calculates the third predicted value of the demander model based on them. The calculation proceeds in two steps: first compute XBθB, then compute exp(XBθB); the third predicted value is given by the formula exp(XBθB)*E.
Step c, the provider is used for obtaining a fourth parameter before the provider model is updated and a fourth feature quantity of the provider training sample, and the provider determines a fourth predicted value of the provider model based on the fourth parameter and the fourth feature quantity;
referring to fig. 4, as shown in the modeling flow of fig. 4, the fourth parameter is a model parameter in the provider model during the modeling flow; unlike the second parameter, which is a model parameter of the provider model after modeling is completed, the fourth parameter is a model parameter of the not-yet-trained or in-training provider model, at which time the longitudinal federated learning model has not finished modeling; the fourth feature quantity is a sample feature in the provider A's training samples, i.e. a user feature in the training samples provided by A — if the provider A is an e-commerce platform, the fourth feature quantity may be the browsing and purchase history of an e-commerce user.
In this embodiment, referring to fig. 4, as shown in the modeling flow of fig. 4, in the vertical federated learning modeling process, the provider A first obtains the fourth parameter of the provider model and the fourth feature quantity of the provider training sample for the training of the subsequent three-party vertical federated learning model, where the fourth parameter may be represented as θA and the fourth characteristic amount as XA. After obtaining θA and XA, the provider A calculates the fourth predicted value of the provider training sample based on them. The calculation proceeds in two steps: first compute XAθA, then compute exp(XAθA); the fourth predicted value is given by the formula exp(XAθA).
It can be understood that the demander B's training samples and the provider A's training samples cannot be exchanged. Before the modeling process starts, parties A and B first need to complete the matching of common training samples through a matching mechanism, that is, they complete the screening of common training samples through encrypted-ID intersection; only training samples whose encrypted IDs both A and B hold can be used to build the longitudinal federated learning model. The training samples are generally numerous, possibly millions or tens of millions. Screening and jointly training on the common samples by encrypted ID achieves efficient machine learning among multiple participants or computing nodes on the premise of ensuring information security during big data exchange, protecting terminal data and personal data privacy, and ensuring legal compliance.
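The common-sample screening step can be illustrated with a toy stand-in for the encrypted-ID intersection. The salted hash below is an assumption purely for illustration — a real deployment would use a private set intersection protocol:

```python
import hashlib

def blind(ids, salt=b"shared-secret"):
    """Toy stand-in for the encrypted-ID intersection: each party maps its
    sample IDs through a salted hash so raw IDs are never exchanged in the
    clear. Not a secure PSI protocol -- illustration only."""
    return {hashlib.sha256(salt + i.encode()).hexdigest(): i for i in ids}

ids_A = ["u1", "u2", "u5"]           # provider A's sample IDs
ids_B = ["u2", "u3", "u5"]           # demander B's sample IDs
hA, hB = blind(ids_A), blind(ids_B)
# Each side intersects the hashed IDs and recovers only its own common samples.
common = sorted(hA[h] for h in hA.keys() & hB.keys())
```

Neither side learns anything about the other's non-overlapping IDs beyond their hashed forms, which is the property the matching mechanism needs.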
And d, the demander determines a fifth parameter of the demander model based on the third predicted value so as to update parameters of the demander model and train the demander model, and the provider determines a sixth parameter of the provider model based on the fourth predicted value so as to update parameters of the provider model and train the provider model.
In this embodiment, in the process of longitudinal federated learning modeling, after the third predicted value is determined by the demander B and the fourth predicted value is determined by the provider, through the Poisson regression implementation scheme of the three-party longitudinal federated learning framework combining parties A, B and C, the demander determines the fifth parameter of the demander model based on the third predicted value exp(XBθB)*E to update the demander model parameters and train the demander model, and the provider determines the sixth parameter of the provider model based on the fourth predicted value exp(XAθA) to update the provider model parameters and train the provider model.
Further, in an embodiment, the determining, by the demander, a fifth parameter of the demander model based on the third predicted value to update demander model parameters and train the demander model, and the determining, by the provider, a sixth parameter of the provider model based on the fourth predicted value to update provider model parameters and train the provider model comprises:
step e, the demander acquires the label amount of the training sample of the demander, the public key information provided by the coordinator and the intermediate encryption amount of the provider model, wherein the provider acquires the fourth predicted value and the public key information provided by the coordinator, and the provider determines the intermediate encryption amount based on the fourth predicted value and the public key information;
referring to fig. 4, as shown in the modeling flow of fig. 4, the public key information is provided by the coordinating party C and is the encryption rule for data encryption between parties A and B; only the coordinating party C holds the private key information corresponding to the public key information. The label quantity is the label feature Y in the demander B's training samples, namely the label with the most commercial value in B's training samples — if the demander is a bank, the label quantity may be the number of times a bank user purchases funds within a certain time period.
In this embodiment, referring to fig. 4, as shown in the modeling flow of fig. 4, in the vertical federated learning modeling process, the coordinator C acquires the public key information and sends it to the demander and the provider. After the provider A receives the public key information, based on the fourth predicted value exp(XAθA) of the provider model and the public key information, it encrypts the fourth predicted value through a homomorphic encryption technique, thereby determining the intermediate encryption amount [[exp(XAθA)]] corresponding to the fourth predicted value in the provider model. After the provider determines the intermediate encryption amount, it sends the intermediate encryption amount to the demander; the demander B cannot decrypt it. The demander then acquires the intermediate encryption amount, together with the label quantity in its training samples and the public key information provided by the coordinator.
Step f, the demander determines the encryption residual quantity of the demander model based on the third predicted value, the label quantity and the intermediate encryption quantity;
in this embodiment, referring to fig. 5, as shown in the modeling flow of fig. 5, in the vertical federated learning modeling process, after the demander B obtains the third predicted value, the label quantity and the intermediate encryption amount, the demander B, based on the third predicted value exp(XBθB)*E of the demander model, the intermediate encryption amount [[exp(XAθA)]] of the provider model and the label quantity Y of the demander training sample, first determines the residual quantity d, then encrypts it with the public key information based on the homomorphic encryption technique, determining the encrypted residual quantity [[d]] of the demander model. The encryption residual amount [[d]] is calculated by the following formula:

[[d]] = exp(XBθB)*E*[[exp(XAθA)]] - Y

wherein exp(XBθB)*E is the third predicted value, [[exp(XAθA)]] is the intermediate encryption amount, and Y is the label quantity.
And g, the demander determines a fifth parameter of the demander model based on the encryption residual quantity and the public key information so as to update parameters of the demander model and train the demander model, and the provider determines a sixth parameter of the provider model based on the encryption residual quantity and the public key information so as to update parameters of the provider model and train the provider model.
In this embodiment, in the process of vertical federated learning modeling, after determining the encryption residual amount [[d]] of the demander model, through the Poisson regression implementation scheme of the three-party vertical federated learning framework of parties A, B, and C, the demander determines the fifth parameter of the demander model based on the encryption residual amount and the public key information to update the parameters of the demander model and train the demander model, and the provider determines the sixth parameter of the provider model based on the encryption residual amount and the public key information to update the parameters of the provider model and train the provider model.
Further, in an embodiment, the determining, by the demander, a fifth parameter of the demander model based on the encryption residual amount and the public key information to update parameters of the demander model and train the demander model, and the determining, by the provider, a sixth parameter of the provider model based on the encryption residual amount and the public key information to update parameters of the provider model and train the provider model includes:
step h, the demander determines a first encryption gradient of the demander model based on the third characteristic quantity, the encryption residual quantity and the public key information;
in the present embodiment, referring to fig. 6, as shown in the modeling flow of fig. 6, in the vertical federated learning modeling process, after the demander B determines the encryption residual amount [[d]], then based on the third feature quantity XB in the demander model, the encryption residual amount [[d]] and the public key information, the demander B first calculates the per-sample gradient values [[d]]·XB corresponding to the first encryption gradient, and then aggregates them into the first encryption gradient [[gB]] of the demander model.
Step i, the provider determines a second encryption gradient of the provider model based on the fourth feature quantity, the encryption residual quantity and the public key information;
in the present embodiment, referring to fig. 6, as shown in the modeling flow of fig. 6, in the vertical federated learning modeling process, after the demander B determines the encryption residual amount [[d]], it sends [[d]] to the provider A. After the provider A receives the encryption residual amount [[d]] sent by the demander B, based on the fourth feature quantity XA in the provider model, the received encryption residual amount [[d]] and the public key information, the provider A first calculates the per-sample gradient values [[d]]·XA corresponding to the second encryption gradient, and then aggregates them into the second encryption gradient [[gA]] of the provider model.
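Because the gradient expression uses only additions and plaintext multiplications, it can run unchanged under an additively homomorphic scheme. Below is a plaintext sketch with hypothetical names; the 1/n averaging is an assumption, since the text does not spell out the aggregation:

```python
import numpy as np

def encrypted_gradient(d, X):
    """Gradient aggregation g = d^T X / n: each feature column weighted by
    the residuals. With d held as ciphertexts, the identical expression runs
    under an additively homomorphic scheme, since only additions and
    plaintext multiplications occur. The 1/n averaging is an assumption."""
    return d @ X / len(d)

d = np.array([0.5, -1.0, 0.25])                       # residuals, held as [[d]]
X_B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # demander's third feature quantity
X_A = np.array([[2.0], [1.0], [0.0]])                 # provider's fourth feature quantity
g_B = encrypted_gradient(d, X_B)                      # first encryption gradient
g_A = encrypted_gradient(d, X_A)                      # second encryption gradient
```

Each party computes only the gradient of its own parameter block, which is why the demander uses XB and the provider uses XA over the same shared residual vector.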
And j, the demander determines a fifth parameter of the demander model based on the first encryption gradient so as to update the demander model parameters and train the demander model, and the provider determines a sixth parameter of the provider model based on the second encryption gradient so as to update the provider model parameters and train the provider model.
In this embodiment, in the process of longitudinal federated learning modeling, after a first encryption gradient is determined by a demander party B and a second encryption gradient is determined by a provider party a, by a poisson regression implementation scheme of a three-party longitudinal federated learning framework combining the demander party a, the demander party B and the provider party C, a fifth parameter of a demander model is determined by the demander party based on the first encryption gradient to update a demander model parameter and train a demander model, and a sixth parameter of a provider model is determined by the provider party based on the second encryption gradient to update a provider model parameter and train a provider model.
Further, in an embodiment, the determining, by the demander, fifth parameters of the demander model based on the first encryption gradient to update demander model parameters and train the demander model, and the determining, by the provider, sixth parameters of the provider model based on the second encryption gradient to update provider model parameters and train the provider model, comprises:
step k, the coordinator is used for obtaining a first encryption gradient of the demander model, a second encryption gradient of the provider model and private key information corresponding to the public key information;
the private key information is provided by the coordinating party C, and is a decryption rule for providing A, B parties of encrypted data, and only the coordinating party C has the private key information corresponding to the public key information.
In the embodiment, in the process of longitudinal federal learning modeling, after a first encryption gradient of a demand side model is determined by a demand side, the first encryption gradient is sent to a coordinator side C; after the provider determines a second encryption gradient for the provider model, the second encryption gradient is sent to coordinator party C. Thereafter, the coordinator C acquires the received first encryption gradient, second encryption gradient, and private key information held by the coordinator C itself to decrypt data received from the demander B and the provider a.
Step l, the coordinating party is configured to determine a first decryption gradient corresponding to the requiring party model based on the first encryption gradient and the private key information;
in this embodiment, referring to the modeling flow shown in fig. 7, during the longitudinal federated learning modeling, after the coordinator party C receives the first encryption gradient, the coordinator party C determines, through the decryption technique of the private key information, the first decryption gradient corresponding to the demander model.
Step m, the coordinator is configured to determine a second decryption gradient corresponding to the provider model based on the second encryption gradient and the private key information;
in this embodiment, referring to the modeling flow shown in fig. 7, during the longitudinal federated learning modeling, after the coordinator party C receives the second encryption gradient, the coordinator party C determines, through the decryption technique of the private key information, the second decryption gradient corresponding to the provider model.
And n, the demander determines a fifth parameter of the demander model based on the first decryption gradient so as to update the demander model parameter and train the demander model, and the provider determines a sixth parameter of the provider model based on the second decryption gradient so as to update the provider model parameter and train the provider model.
In this embodiment, referring to the modeling flow shown in fig. 7, in the longitudinal federated learning modeling process, after the coordinator party C determines the first decryption gradient and the second decryption gradient, the coordinator party C sends the first decryption gradient to the demander party B and the second decryption gradient to the provider party A. After the demander party B receives the first decryption gradient, the demander determines the fifth parameter θ_B of the demander model based on the first decryption gradient, so as to update the demander model parameters and train the demander model; after the provider party A receives the second decryption gradient, the provider determines the sixth parameter θ_A of the provider model based on the second decryption gradient, so as to update the provider model parameters and train the provider model.
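Once each party receives its decrypted gradient, the parameter update itself is an ordinary local gradient-descent step. A minimal plaintext sketch, in which the learning rate and all numeric values are illustrative assumptions not taken from the patent:

```python
# Plaintext sketch of step n: each party applies its decrypted gradient locally.
# The learning rate eta and the numeric values are illustrative assumptions.
def update_params(theta, grad, eta=0.1):
    """One gradient-descent step: theta <- theta - eta * grad, element-wise."""
    return [t - eta * g for t, g in zip(theta, grad)]

theta_B = [0.5, -0.2]          # demander model parameters (the fifth parameter)
grad_B = [0.3, -0.1]           # first decryption gradient from coordinator C
theta_B = update_params(theta_B, grad_B)
assert abs(theta_B[0] - 0.47) < 1e-9 and abs(theta_B[1] + 0.19) < 1e-9
```

The provider party A would run the same step on θ_A with the second decryption gradient; neither party ever sees the other's parameters or features.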
Further, in an embodiment, after the steps of determining, by the demander, the fifth parameter of the demander model based on the first decryption gradient to update the demander model parameter and train the demander model, and determining, by the provider, the sixth parameter of the provider model based on the second decryption gradient to update the provider model parameter and train the provider model, the method further includes:
step n1, the demander determines an encryption loss variation of the demander model based on the third predicted value, the intermediate encryption amount and the second exposure amount;
in this embodiment, referring to the modeling flow shown in fig. 5, in the longitudinal federated learning modeling process, based on the third predicted value X_Bθ_B determined by the demander party B, the intermediate encryption amount [[exp(X_Aθ_A)]] obtained by the demander party B, and the second exposure E of the demander party B's training sample, the demander party B may first determine the encryption loss amount [[Loss]] of the model, where the encryption loss amount [[Loss]] is calculated as follows:

[[Loss]] = Σ [[exp(X_Aθ_A)]] * exp(X_Bθ_B) * E − Y([[X_Aθ_A]] + X_Bθ_B + log(E))
after the encryption loss amount [[Loss]] of the demander model is calculated, the encryption loss variation ΔL of the demander model is determined, where the encryption loss variation is calculated as follows:

ΔL = [[Loss]] − [[Loss]]'

where [[Loss]] is the encryption loss amount calculated this time, and [[Loss]]' is the encryption loss amount calculated and saved in the previous round.
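The loss formula above can be sketched in plaintext, with the encryption elided: in the real protocol the [[·]] terms stay encrypted on demander B's side, but the arithmetic is the same. All data values below are illustrative assumptions.

```python
import math

# Plaintext sketch of the Poisson loss above; in the actual protocol the
# [[...]] terms remain encrypted on demander B's side. Data values here are
# illustrative assumptions.
def poisson_loss(xa_theta_a, xb_theta_b, exposure, y):
    """Loss = sum_i exp(xA.thetaA) * exp(xB.thetaB) * E - y*(xA.thetaA + xB.thetaB + log E)."""
    return sum(
        math.exp(a) * math.exp(b) * e - yi * (a + b + math.log(e))
        for a, b, e, yi in zip(xa_theta_a, xb_theta_b, exposure, y)
    )

# DeltaL = Loss (this round) - Loss' (previous round)
prev_loss = poisson_loss([0.2, 0.1], [0.3, -0.1], [1.0, 2.0], [1, 0])
curr_loss = poisson_loss([0.15, 0.08], [0.25, -0.12], [1.0, 2.0], [1, 0])
delta_l = curr_loss - prev_loss
assert delta_l < 0            # loss decreased between the two illustrative rounds
```

A small |ΔL| across rounds is the convergence signal the coordinator checks in step n2.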
Step n2, the coordinator obtains the encryption loss variation of the demand side model, and detects whether the encryption loss variation is smaller than or equal to a first preset threshold;
in this embodiment, referring to the modeling flow shown in fig. 5, in the longitudinal federated learning modeling process, after the demander party B determines the encryption loss variation ΔL, the demander sends the encryption loss variation to the coordinator party C. After the coordinator party C obtains the encryption loss variation ΔL, the coordinator party C detects whether the encryption loss variation is smaller than or equal to a first preset threshold, so as to detect whether the three-party longitudinal federated learning model has converged.
Step n3, the step of the demander obtaining the updated first parameter of the demander model includes: if the encryption loss variation is smaller than or equal to the first preset threshold, the demander updates the parameter of the demander model, acquires the fifth parameter, and takes the fifth parameter as the first parameter to train the demander model;
in this embodiment, after the coordinator party C detects whether the encryption loss variation is smaller than or equal to the first preset threshold, if the encryption loss variation ΔL is detected to be smaller than or equal to the first preset threshold, the demander obtains the fifth parameter of the demander model and updates the parameters of the demander model, that is, the fifth parameter is taken as the first parameter. Updating the demander model parameters in this case indicates that the modeling of the three-party longitudinal federated learning model is complete, so that the model can subsequently predict the label amount of the prediction sample.
Step n4, the step of the provider obtaining the updated second parameter of the provider model includes: if the encryption loss variation is smaller than or equal to the first preset threshold, the provider updates the parameter of the provider model, and the provider acquires the sixth parameter, and takes the sixth parameter as a second parameter to train the provider model;
in this embodiment, after the coordinator party C detects whether the encryption loss variation is smaller than or equal to the first preset threshold, if the encryption loss variation ΔL is detected to be smaller than or equal to the first preset threshold, the provider obtains the sixth parameter of the provider model and updates the parameters of the provider model, that is, the sixth parameter is taken as the second parameter. Updating the provider model parameters in this case indicates that the modeling of the three-party longitudinal federated learning model is complete, so that the model can subsequently predict the label amount of the prediction sample.
Step n5, if the encryption loss variation is greater than the first preset threshold, the demander continues to execute the step of determining the fifth parameter of the demander model based on the first decryption gradient, and the provider continues to execute the step of determining the sixth parameter of the provider model based on the second decryption gradient.
In this embodiment, after the coordinator party C detects whether the encryption loss variation is smaller than or equal to the first preset threshold, if the encryption loss variation ΔL is detected to be greater than the first preset threshold, which indicates that the model has not converged, the demander continues to execute the step in which the demander determines the fifth parameter of the demander model based on the first decryption gradient, and the provider continues to execute the step in which the provider determines the sixth parameter of the provider model based on the second decryption gradient.
In the label prediction method provided by this embodiment, the demander obtains the third parameter before the demander model is updated, the third feature quantity of the demander training sample, and the second exposure of the demander training sample; the demander determines the third predicted value of the demander model based on the third parameter, the third feature quantity, and the second exposure; the provider obtains the fourth parameter before the provider model is updated and the fourth feature quantity of the provider training sample; the provider determines the fourth predicted value of the provider model based on the fourth parameter and the fourth feature quantity; the demander determines the fifth parameter of the demander model based on the third predicted value to update the demander model parameters and train the demander model; and the provider determines the sixth parameter of the provider model based on the fourth predicted value to update the provider model parameters and train the provider model. The parameters of the demander model and the provider model are updated through the modeling and training process of three-party longitudinal federated learning, so that the predicted label amount of the demander's prediction sample can subsequently be predicted accurately, and by building the three-party longitudinal federated learning framework on Poisson regression, the problem that terminal data and personal data privacy are easily leaked is solved.
Based on the second embodiment, a third embodiment of the method of the present invention is provided, where in this embodiment, before step a, the method further includes:
step p, the demander acquires the training sample of the demander, and the provider acquires the training sample amount of each training sample provided by the demander;
in this embodiment, referring to the modeling flow shown in fig. 4, before the modeling flow starts, the demander party B obtains the demander training samples and the training sample amount of each training sample, and sends the obtained training sample amount to the provider party A. After the provider party A receives the training sample amount sent by the demander party B, the provider party A obtains the training sample amount, where the training sample amount is the size of the training sample used in each round.
Step q, the demander determines a third characteristic quantity of the demander training sample and a second exposure quantity of the demander training sample based on the demander training sample;
in this embodiment, after the demander training sample is determined, the demander determines the third feature quantity of the demander training sample and the second exposure of the demander training sample based on the demander training sample, so that the demander can subsequently model and train the three-party longitudinal federated learning model.
Step r, the provider is used for determining the provider training sample matched with the demander training sample based on the training sample amount;
in this embodiment, after the provider party A obtains the training sample amount sent by the demander, the provider party A completes the matching of the common training samples through a matching mechanism, that is, the provider party A completes the screening of the training samples common to parties A and B based on the training sample amount and determines the provider training samples that match the demander training samples. The longitudinal federated learning model can be modeled and trained only after the common training samples have been screened out.
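The common-sample screening can be pictured as an intersection over sample identifiers. A minimal sketch follows; a production system would use a privacy-preserving set intersection protocol, and the hashing here only hints at that (it does not actually hide the identifiers from a determined party). The function name and sample IDs are assumptions for illustration.

```python
import hashlib

# Sketch of step r: provider A keeps only the samples it shares with demander B.
# Real deployments use private set intersection (PSI); hashing IDs here merely
# hints at that and is NOT a privacy guarantee.
def align_samples(provider_ids, demander_ids):
    digest = lambda s: hashlib.sha256(s.encode()).hexdigest()
    demander_digests = {digest(i) for i in demander_ids}
    return sorted(i for i in provider_ids if digest(i) in demander_digests)

common = align_samples(["u1", "u2", "u3", "u4"], ["u2", "u4", "u5"])
assert common == ["u2", "u4"]
```

Only the aligned rows enter training, which is what makes the vertical (feature-split) setting work: both parties' feature matrices then describe the same individuals in the same order.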
And s, the provider is used for determining a fourth characteristic quantity of the provider training sample based on the provider training sample.
In this embodiment, after the provider determines the provider training sample, the provider may determine a fourth feature quantity of the provider training sample based on the provider training sample, so as to be used for subsequent modeling and training of the three-party longitudinal federal learning model.
Further, in an embodiment, after the steps of determining, by the demander, the fifth parameter of the demander model based on the first decryption gradient to update the demander model parameter and train the demander model, and determining, by the provider, the sixth parameter of the provider model based on the second decryption gradient to update the provider model parameter and train the provider model, the method further includes:
step t, the coordinator is used for obtaining the number of model training rounds of the demand side model and detecting whether the number of model training rounds is larger than or equal to a second preset threshold value;
in this embodiment, in the longitudinal federated learning modeling process, the demander records and updates the number of model training rounds of the demander model in real time, and the coordinator obtains the number of model training rounds of the demander model. The number of model training rounds is the number of rounds for which the three-party longitudinal federated learning model has currently been trained, and it is incremented each time the fifth parameter of the demander model and the sixth parameter of the provider model are determined. After the coordinator obtains the number of model training rounds of the demander model, the coordinator detects whether the number of model training rounds is greater than or equal to a second preset threshold, that is, whether the maximum number of training rounds has been reached, so as to detect whether the model has converged.
Step u, the step of the demander obtaining the updated first parameter of the demander model comprises the following steps: if the number of model training rounds is larger than or equal to a second preset threshold, the demander updates the parameters of the demander model, acquires the fifth parameter, and takes the fifth parameter as a first parameter to train the demander model;
in this embodiment, after detecting whether the number of model training rounds is greater than or equal to the second preset threshold, if the number of model training rounds is detected to be greater than or equal to the second preset threshold, the demander obtains the fifth parameter of the demander model and updates the parameters of the demander model, that is, the fifth parameter is taken as the first parameter. Updating the demander model parameters in this case indicates that the modeling of the three-party longitudinal federated learning model is complete, so that the model can subsequently predict the label amount of the prediction sample.
Step v, the step that the provider obtains the updated second parameter of the provider model includes: if the number of model training rounds is larger than or equal to a second preset threshold, the provider updates the parameters of the provider model, and the provider acquires the sixth parameter which is used as a second parameter to train the provider model;
in this embodiment, after detecting whether the number of model training rounds is greater than or equal to the second preset threshold, if the number of model training rounds is detected to be greater than or equal to the second preset threshold, the provider obtains the sixth parameter of the provider model and updates the parameters of the provider model, that is, the sixth parameter is taken as the second parameter. Updating the provider model parameters in this case indicates that the modeling of the three-party longitudinal federated learning model is complete, so that the model can subsequently predict the label amount of the prediction sample.
And w, if the number of model training rounds is smaller than a second preset threshold value, the demander continuously executes the step that the demander determines a fifth parameter of the demander model based on the first decryption gradient, and the provider continuously executes the step that the provider determines a sixth parameter of the provider model based on the second decryption gradient.
In this embodiment, after detecting whether the number of model training rounds is greater than or equal to the second preset threshold, if it is detected that the number of model training rounds is less than the second preset threshold, which indicates that the model training is not completed, the demander continues to execute the step in which the demander determines the fifth parameter of the demander model based on the first decryption gradient, and the provider continues to execute the step in which the provider determines the sixth parameter of the provider model based on the second decryption gradient.
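The two stopping criteria described above (the loss variation falling to or below the first preset threshold, or the round count reaching the second preset threshold) combine into a single training loop. In this sketch, `train_round` is a hypothetical stand-in for one full three-party gradient exchange, and the thresholds are illustrative assumptions:

```python
# Sketch of the stopping logic combining both criteria: converge when the loss
# change falls to/below the first threshold, or stop when the round count
# reaches the second threshold. train_round() stands in for one full
# three-party gradient exchange (an assumption for illustration).
def run_training(train_round, loss_threshold=1e-4, max_rounds=100):
    prev_loss, rounds = None, 0
    while rounds < max_rounds:
        loss = train_round()
        rounds += 1
        if prev_loss is not None and abs(loss - prev_loss) <= loss_threshold:
            break                     # encryption loss variation small enough
        prev_loss = loss
    return rounds

losses = iter([5.0, 3.0, 2.5, 2.5])
assert run_training(lambda: next(losses)) == 4   # stops when the loss plateaus
```

Either branch ends with both parties promoting their latest parameters (the fifth parameter becomes the first, the sixth becomes the second), after which the models are ready for prediction.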
In the label prediction method provided by this embodiment, the demander obtains the demander training samples, the provider obtains the training sample amount of each training sample provided by the demander, the demander determines the third feature quantity of the demander training sample and the second exposure of the demander training sample based on the demander training sample, the provider determines the provider training samples that match the demander training samples based on the training sample amount, and the provider determines the fourth feature quantity of the provider training sample based on the provider training sample. By accurately obtaining the training sample amount, the screening of the training samples common to parties A and B is completed and the provider training samples matching the demander training samples are determined, which facilitates the subsequent training of the three-party longitudinal federated learning model.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where a label prediction program is stored on the computer-readable storage medium, and the label prediction program, when executed by a processor, implements the steps of the label prediction method according to any one of the above.
The specific embodiment of the computer-readable storage medium of the present invention is substantially the same as the embodiments of the label prediction method described above, and will not be described in detail herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A label prediction method, characterized in that the label prediction method comprises the following steps:
the method comprises the steps that a demander obtains a first parameter after a demander model is updated, a first characteristic quantity of a demander prediction sample and a first exposure of the demander prediction sample;
the demander determines a first predicted value of the demander model based on the first parameter, the first feature amount and the first exposure amount;
the demand side acquires a second predicted value and a Poisson calculation rule of a provider model, wherein the provider is used for acquiring a second parameter after the provider model is updated and a second characteristic quantity of a provider prediction sample, and the second predicted value is determined based on the second parameter and the second characteristic quantity;
and the demander determines the prediction label quantity of the demander prediction sample based on the first prediction value, the second prediction value and the Poisson calculation rule.
2. The label prediction method of claim 1, wherein before the step of the demander obtaining the updated first parameter of the demander model, the first characteristic quantity of the demander prediction sample and the first exposure of the demander prediction sample, the method further comprises:
the demander acquires a third parameter before the demander model is updated, a third characteristic quantity of a demander training sample and a second exposure of the demander training sample;
the demander determines a third predicted value of the demander model based on the third parameter, the third feature quantity and the second exposure;
the provider is used for acquiring a fourth parameter before the provider model is updated and a fourth feature quantity of the provider training sample, and the provider determines a fourth predicted value of the provider model based on the fourth parameter and the fourth feature quantity;
the demander determines fifth parameters of the demander model based on the third predicted values to update demander model parameters and train the demander model, and the provider determines sixth parameters of the provider model based on the fourth predicted values to update provider model parameters and train the provider model.
3. The label prediction method of claim 2, wherein the determining, by the demander, fifth parameters of the demander model based on the third predicted value to update demander model parameters and train the demander model, and the determining, by the provider, sixth parameters of the provider model based on the fourth predicted value to update provider model parameters and train the provider model comprises:
the requiring party acquires the label amount of the training sample of the requiring party, the public key information provided by the coordinating party and the intermediate encryption amount of the providing party model, wherein the providing party acquires the fourth predicted value and the public key information provided by the coordinating party, and the providing party determines the intermediate encryption amount based on the fourth predicted value and the public key information;
the demander determines the encryption residual quantity of the demander model based on the third predicted value, the label quantity and the intermediate encryption quantity;
the demander determines a fifth parameter of the demander model based on the encryption residual quantity and the public key information to update a parameter of the demander model and train the demander model, and the provider determines a sixth parameter of the provider model based on the encryption residual quantity and the public key information to update a parameter of the provider model and train the provider model.
4. The label prediction method of claim 3, wherein the step of the demander determining fifth parameters of the demander model based on the amount of encryption residuals and the public key information to update demander model parameters and train the demander model, and the step of the provider determining sixth parameters of the provider model based on the amount of encryption residuals and the public key information to update provider model parameters and train the provider model comprises:
the demander determines a first encryption gradient of the demander model based on the third feature quantity, the encryption residual quantity and the public key information;
the provider determines a second encryption gradient of the provider model based on the fourth feature quantity, the encryption residual quantity, and the public key information;
the demander determines fifth parameters of the demander model based on the first encryption gradient to update demander model parameters and train the demander model, and the provider determines sixth parameters of the provider model based on the second encryption gradient to update provider model parameters and train the provider model.
5. The label prediction method of claim 4, wherein the determining, by the demander, fifth parameters of the demander model based on the first encryption gradient to update demander model parameters and train the demander model, and the determining, by the provider, sixth parameters of the provider model based on the second encryption gradient to update provider model parameters and train the provider model comprises:
the coordinator is used for acquiring a first encryption gradient of the demander model, a second encryption gradient of the provider model and private key information corresponding to the public key information;
the coordinating party is used for determining a first decryption gradient corresponding to the requiring party model based on the first encryption gradient and the private key information;
the coordinator is used for determining a second decryption gradient corresponding to the provider model based on the second encryption gradient and the private key information;
the demander determines fifth parameters of the demander model based on the first decryption gradient to update demander model parameters and train the demander model, and the provider determines sixth parameters of the provider model based on the second decryption gradient to update provider model parameters and train the provider model.
6. The label prediction method of claim 5, wherein the steps of the demander determining fifth parameters of the demander model based on the first decryption gradient to update demander model parameters and train the demander model, and the provider determining sixth parameters of the provider model based on the second decryption gradient to update provider model parameters and train the provider model further comprise:
the demander determines the encryption loss variation of the demander model based on the third predicted value, the intermediate encryption amount and the second exposure amount;
the coordinator is used for acquiring the encryption loss variation of the demand side model, and detecting whether the encryption loss variation is smaller than or equal to a first preset threshold value or not;
the step of the demander acquiring the updated first parameter of the demander model comprises the following steps:
if the encryption loss variation is smaller than or equal to the first preset threshold, the demander updates the parameter of the demander model, acquires the fifth parameter, and takes the fifth parameter as the first parameter to train the demander model;
the step that the provider obtains the updated second parameter of the provider model comprises the following steps:
if the encryption loss variation is smaller than or equal to the first preset threshold, the provider updates the parameter of the provider model, and the provider acquires the sixth parameter, and takes the sixth parameter as a second parameter to train the provider model;
and if the encryption loss variation is larger than the first preset threshold, the demander continuously executes the step that the demander determines a fifth parameter of the demander model based on the first decryption gradient, and the provider continuously executes the step that the provider determines a sixth parameter of the provider model based on the second decryption gradient.
7. The label prediction method of claim 2, wherein before the step of the demander obtaining the third parameter before the updating of the demander model, the third feature quantity of the demander training sample and the second exposure quantity of the demander training sample, the method further comprises:
the demander acquires the training sample of the demander, and the provider acquires the training sample amount of each training sample provided by the demander;
the demander determines a third characteristic quantity of the demander training sample and a second exposure quantity of the demander training sample based on the demander training sample;
the provider is used for determining the provider training sample matched with the demander training sample based on the training sample amount;
the provider is used for determining a fourth feature quantity of the provider training sample based on the provider training sample.
8. The label prediction method as in any one of claims 1-7, wherein the step of the demander determining fifth parameters of the demander model based on the first decryption gradient to update demander model parameters and train the demander model, and the provider determining sixth parameters of the provider model based on the second decryption gradient to update provider model parameters and train the provider model further comprises:
the coordinator is used for obtaining the number of model training rounds of the demand side model and detecting whether the number of model training rounds is larger than or equal to a second preset threshold value;
the step of the demander acquiring the updated first parameter of the demander model comprises the following steps:
if the number of model training rounds is larger than or equal to a second preset threshold, the demander updates the parameters of the demander model, acquires the fifth parameter, and takes the fifth parameter as a first parameter to train the demander model;
the step that the provider obtains the updated second parameter of the provider model comprises the following steps:
if the number of model training rounds is larger than or equal to a second preset threshold, the provider updates the parameters of the provider model, and the provider acquires the sixth parameter which is used as a second parameter to train the provider model;
and if the number of model training rounds is smaller than a second preset threshold value, the demander continuously executes the step that the demander determines a fifth parameter of the demander model based on the first decryption gradient, and the provider continuously executes the step that the provider determines a sixth parameter of the provider model based on the second decryption gradient.
9. A label prediction apparatus, characterized in that the label prediction apparatus comprises: a memory, a processor, and a label prediction program stored on the memory and executable on the processor, wherein the label prediction program, when executed by the processor, implements the steps of the label prediction method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has a label prediction program stored thereon, wherein the label prediction program, when executed by a processor, implements the steps of the label prediction method according to any one of claims 1 to 8.
CN201911083212.XA 2019-11-07 2019-11-07 Label prediction method, apparatus and computer readable storage medium Active CN110837653B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911083212.XA CN110837653B (en) 2019-11-07 2019-11-07 Label prediction method, apparatus and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911083212.XA CN110837653B (en) 2019-11-07 2019-11-07 Label prediction method, apparatus and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110837653A true CN110837653A (en) 2020-02-25
CN110837653B CN110837653B (en) 2023-09-19

Family

ID=69576330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911083212.XA Active CN110837653B (en) 2019-11-07 2019-11-07 Label prediction method, apparatus and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110837653B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368314A (en) * 2020-02-28 2020-07-03 深圳前海微众银行股份有限公司 Modeling and predicting method, device, equipment and storage medium based on cross features
CN111753996A (en) * 2020-06-24 2020-10-09 中国建设银行股份有限公司 Optimization method, device, equipment and storage medium of scheme determination model
CN112766514A (en) * 2021-01-22 2021-05-07 支付宝(杭州)信息技术有限公司 Method, system and device for joint training of machine learning model
CN112818369A (en) * 2021-02-10 2021-05-18 中国银联股份有限公司 Combined modeling method and device
CN114187006A (en) * 2021-11-03 2022-03-15 杭州未名信科科技有限公司 Block chain supervision-based federal learning method
CN115409096A (en) * 2022-08-17 2022-11-29 北京融数联智科技有限公司 Two-party Poisson regression privacy calculation model training method and device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140272914A1 (en) * 2013-03-15 2014-09-18 William Marsh Rice University Sparse Factor Analysis for Learning Analytics and Content Analytics
CN107993088A (en) * 2017-11-20 2018-05-04 北京三快在线科技有限公司 Purchase cycle prediction method and device, and electronic device
CN109165515A (en) * 2018-08-10 2019-01-08 深圳前海微众银行股份有限公司 Model parameter acquisition method and system based on federated learning, and readable storage medium
US20190244680A1 (en) * 2018-02-07 2019-08-08 D-Wave Systems Inc. Systems and methods for generative machine learning
CN110276210A (en) * 2019-06-12 2019-09-24 深圳前海微众银行股份有限公司 Method and device for determining model parameters based on federated learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140272914A1 (en) * 2013-03-15 2014-09-18 William Marsh Rice University Sparse Factor Analysis for Learning Analytics and Content Analytics
CN107993088A (en) * 2017-11-20 2018-05-04 北京三快在线科技有限公司 Purchase cycle prediction method and device, and electronic device
US20190244680A1 (en) * 2018-02-07 2019-08-08 D-Wave Systems Inc. Systems and methods for generative machine learning
CN109165515A (en) * 2018-08-10 2019-01-08 深圳前海微众银行股份有限公司 Model parameter acquisition method and system based on federated learning, and readable storage medium
CN110276210A (en) * 2019-06-12 2019-09-24 深圳前海微众银行股份有限公司 Method and device for determining model parameters based on federated learning

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368314A (en) * 2020-02-28 2020-07-03 深圳前海微众银行股份有限公司 Modeling and predicting method, device, equipment and storage medium based on cross features
WO2021169477A1 (en) * 2020-02-28 2021-09-02 深圳前海微众银行股份有限公司 Cross feature-based model building and prediction methods, devices and apparatuses, and storage medium
CN111368314B (en) * 2020-02-28 2024-08-06 深圳前海微众银行股份有限公司 Modeling and prediction method, device, equipment and storage medium based on cross characteristics
CN111753996A (en) * 2020-06-24 2020-10-09 中国建设银行股份有限公司 Optimization method, device, equipment and storage medium of scheme determination model
CN112766514A (en) * 2021-01-22 2021-05-07 支付宝(杭州)信息技术有限公司 Method, system and device for joint training of machine learning model
CN112766514B (en) * 2021-01-22 2021-12-24 支付宝(杭州)信息技术有限公司 Method, system and device for joint training of machine learning model
CN112818369A (en) * 2021-02-10 2021-05-18 中国银联股份有限公司 Combined modeling method and device
CN112818369B (en) * 2021-02-10 2024-03-29 中国银联股份有限公司 Combined modeling method and device
CN114187006A (en) * 2021-11-03 2022-03-15 杭州未名信科科技有限公司 Block chain supervision-based federal learning method
CN115409096A (en) * 2022-08-17 2022-11-29 北京融数联智科技有限公司 Two-party Poisson regression privacy calculation model training method and device, and storage medium
CN115409096B (en) * 2022-08-17 2023-06-16 北京融数联智科技有限公司 Training method, device and storage medium for two-party Poisson regression privacy calculation model

Also Published As

Publication number Publication date
CN110837653B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
CN110837653A (en) Label prediction method, device and computer readable storage medium
CN110189192B (en) Information recommendation model generation method and device
CN109167695B (en) Federated learning-based alliance network construction method and device, and readable storage medium
CN111008709A (en) Federal learning and data risk assessment method, device and system
WO2020037918A1 (en) Risk control strategy determining method based on predictive model, and related device
JP7095140B2 (en) Multi-model training methods and equipment based on feature extraction, electronic devices and media
CN110264288A (en) Data processing method and relevant apparatus based on information discriminating technology
CN111210003B (en) Longitudinal federated learning system optimization method, device, equipment and readable storage medium
CN105262779B (en) Identity authentication method, device and system
WO2021031825A1 (en) Network fraud identification method and device, computer device, and storage medium
CN104252677A (en) Two-dimension code anti-counterfeiting technology and two-dimension code anti-counterfeiting system-based platform system
CN111931076B (en) Method and device for carrying out relationship recommendation based on authorized directed graph and computer equipment
CN112132198A (en) Data processing method, device and system and server
CN111666460A (en) User portrait generation method and device based on privacy protection and storage medium
CN111164632A (en) Information processing method and device based on block chain and block chain network
CN110516173B (en) Illegal network station identification method, illegal network station identification device, illegal network station identification equipment and illegal network station identification medium
CN107852412A (en) For phishing and the system and method for brand protection
CN109409693A (en) Business partner mode recommendation method and related device
CN111368196A (en) Model parameter updating method, device, equipment and readable storage medium
WO2020181854A1 (en) Payment anomaly detection
CN114186256A (en) Neural network model training method, device, equipment and storage medium
CN114186263A (en) Data regression method based on vertical federated learning, and electronic device
CN115577691A (en) Bidding generation method, storage medium and electronic device
CN111859360A (en) Safe multi-device joint data computing system, method and device
CN109818965B (en) Personal identity verification device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant