CN110245510A - Method and apparatus for predictive information - Google Patents

Method and apparatus for predictive information

Info

Publication number
CN110245510A
Authority
CN
China
Prior art keywords
model
current
gradient value
training sample
loss value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910533286.2A
Other languages
Chinese (zh)
Other versions
CN110245510B (en)
Inventor
刘昊骋
许韩晨玺
陈浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910533286.2A (granted as CN110245510B)
Publication of CN110245510A
Application granted
Publication of CN110245510B
Legal status: Active (granted)


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/602: Providing cryptographic facilities or services
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218: Protecting access to data via a platform, to a system of files or objects, e.g. local or distributed file system or database
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06F2221/00: Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21: Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements
    • G06F2221/2107: File encryption

Abstract

Embodiments of the present application disclose a method and apparatus for predicting information. One specific embodiment of the method includes: obtaining features of a user; inputting the features of the user separately into a pre-trained first model and a pre-trained second model to obtain a first prediction result and a second prediction result for the user, where the first model and the second model correspond to institutions of different categories and are trained on their respective training samples using vertical federated learning; and aggregating the first prediction result and the second prediction result to generate a prediction result for the user. The embodiment relates to the field of cloud computing; predicting information with a first model and a second model trained by vertical federated learning improves the accuracy of information prediction.

Description

Method and apparatus for predictive information
Technical field
The present application relates to the field of computer technology, and in particular to a method and apparatus for predicting information.
Background art
Data isolation and the data-island effect severely restrict the development of artificial intelligence. The data dimensions and sample sizes possessed by any single institution are limited, and because of legal restrictions such as data security and personal-information protection, the data of different institutions cannot be exchanged or shared. As a result, models that an institution builds only on its own data and samples perform poorly, generalize weakly, and are not reproducible.
For example, a financial institution possesses data such as a user's deposits, account transaction flows, loan amounts, and consumption records, but lacks the user's web-browsing behavior and interest tags; an Internet company possesses the user's web-browsing behavior, interest tags, and geographic-location information, but lacks the financial data held by the financial institution.
Summary of the invention
Embodiments of the present application propose a method and apparatus for predicting information.
In a first aspect, an embodiment of the present application provides a method for predicting information, comprising: obtaining features of a user; inputting the features of the user separately into a pre-trained first model and a pre-trained second model to obtain a first prediction result and a second prediction result for the user, where the first model and the second model correspond to institutions of different categories and are trained on their respective training samples using vertical federated learning; and aggregating the first prediction result and the second prediction result to generate a prediction result for the user.
In some embodiments, the first model and the second model are trained as follows: obtain a first training sample corresponding to the first model and a second training sample corresponding to the second model, where the first training sample includes first sample features and a first sample label of a first sample user, and the second training sample includes second sample features of a second sample user; based on the first training sample and the second training sample, train the first model and the second model using vertical federated learning.
In some embodiments, training the first model and the second model with vertical federated learning based on the first and second training samples comprises: obtaining the current gradient value of the first model and the current gradient value of the second model; encrypting both current gradient values with a public key to obtain the current public-key-encrypted gradient value of each model; aggregating the two encrypted gradient values into a current public-key-encrypted aggregate gradient value; decrypting the aggregate with the private key to obtain a current private-key-decrypted gradient value; and updating the first model and the second model respectively based on the decrypted gradient value.
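The encrypt-aggregate-decrypt cycle described here relies on an additively homomorphic public-key scheme, so the party holding the private key can recover the sum of the gradients without seeing either party's individual value. Below is a minimal sketch using textbook Paillier encryption; the key size is a toy and the gradient values are made up, so this is an illustration of the idea, not a secure or production implementation.

```python
import math
import random

def keygen():
    # Toy primes -- real Paillier keys use primes of 1024 bits or more.
    p, q = 9973, 9967
    n = p * q
    n2 = n * n
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    l_func = lambda u: (u - 1) // n
    mu = pow(l_func(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

SCALE = 10_000          # fixed-point scaling so float gradients become integers

pub, priv = keygen()
grad_first = 0.0321     # made-up current gradient value of the first model
grad_second = -0.0107   # made-up current gradient value of the second model

c1 = encrypt(pub, round(grad_first * SCALE))
c2 = encrypt(pub, round(grad_second * SCALE))

# Homomorphic aggregation: multiplying ciphertexts adds the plaintexts,
# so the arbiter sums the gradients without learning either party's value.
n = pub[0]
c_sum = (c1 * c2) % (n * n)

m = decrypt(priv, c_sum)
if m > n // 2:          # map back from Z_n to a signed value
    m -= n
print(m / SCALE)        # -> 0.0214
```

In the patent's setting, the private key would be held by the arbitration server rather than by either institution, so only the aggregate ever leaves ciphertext form.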
In some embodiments, the vertical federated training further comprises: obtaining the current intermediate value of the first model and the current intermediate value of the second model; encrypting both intermediate values with the public key to obtain the current public-key-encrypted intermediate value of each model; and training the first model based on the second model's encrypted intermediate value while training the second model based on the first model's encrypted intermediate value.
In some embodiments, the vertical federated training further comprises: obtaining the current loss value of the first model and encrypting it with the public key to obtain the first model's current public-key-encrypted loss value. In this case, aggregating the encrypted gradient values comprises: decrypting the first model's encrypted loss value with the private key to obtain its current decrypted loss value, and, based on that decrypted loss value, aggregating the two models' encrypted gradient values into the current public-key-encrypted aggregate gradient value.
In some embodiments, after the first model and the second model are updated based on the decrypted gradient value, the method further comprises: obtaining the current loss values of the first model and the second model; determining whether both loss values have converged; and, if they have, determining that training of the first model and the second model is complete.
In some embodiments, if the current loss values of the first model and the second model have not converged, vertical federated training of the two models continues based on the first and second training samples.
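The two convergence embodiments above amount to a simple outer loop: run a federated round, check whether every model's loss value has stopped changing, and stop or continue accordingly. A minimal sketch follows; the loss trajectory, tolerance, and helper names are made up for illustration.

```python
def train_until_converged(step, get_losses, tol=1e-6, max_rounds=10_000):
    """Run federated rounds until every model's loss value has converged."""
    prev = None
    for _ in range(max_rounds):
        step()                    # one vertical-federated training round
        losses = get_losses()     # current loss value of each model
        if prev is not None and all(abs(a - b) < tol
                                    for a, b in zip(losses, prev)):
            return True           # both loss values converged: training done
        prev = losses
    return False                  # round budget exhausted without convergence

# Toy stand-in for a real round: both models' losses decay geometrically.
state = {"loss": 1.0}
def fake_round(): state["loss"] *= 0.5
def fake_losses(): return [state["loss"], 0.9 * state["loss"]]

converged = train_until_converged(fake_round, fake_losses)
print(converged)   # -> True
```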
In some embodiments, before the first model and the second model are trained with vertical federated learning based on the first and second training samples, the first training sample and the second training sample are aligned (sample alignment).
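Sample alignment finds the users the two institutions have in common so that their vertically split features can be joined row by row. Below is a simplified sketch that compares salted hashes of user IDs; real systems use cryptographic private set intersection (for example, blind-signature-based protocols), and the salt, IDs, and helper names here are all made up.

```python
import hashlib

SALT = b"shared-secret-salt"   # assumed to be agreed upon out of band

def blind(uid: str) -> str:
    """Salted hash of a user ID, so raw IDs are never exchanged directly."""
    return hashlib.sha256(SALT + uid.encode()).hexdigest()

first_party_users = {"u1", "u2", "u3", "u5"}    # e.g. the financial institution
second_party_users = {"u2", "u3", "u4"}         # e.g. the Internet company

hashes_a = {blind(u): u for u in first_party_users}
hashes_b = {blind(u) for u in second_party_users}

# The overlapping users form the aligned training set for vertical training.
aligned = sorted(u for h, u in hashes_a.items() if h in hashes_b)
print(aligned)   # -> ['u2', 'u3']
```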
In some embodiments, before the vertical federated training, the method further comprises: obtaining a third training sample corresponding to a third model, where the first model and the third model correspond to different institutions of the same category and the third training sample includes third sample features and a third sample label of a third sample user; and training the first model and the third model using horizontal federated learning based on the first and third training samples.
In some embodiments, training the first model and the third model with horizontal federated learning comprises: obtaining the current gradient value of the first model and the current gradient value of the third model; homomorphically encrypting both gradient values to obtain each model's current homomorphically encrypted gradient value; aggregating the two encrypted gradient values into a current homomorphically encrypted aggregate gradient value; decrypting the aggregate to obtain a current decrypted gradient value; and updating the first model and the third model respectively based on the decrypted gradient value.
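In the horizontal setting the two same-category institutions share one feature space, so their locally computed gradients can be aggregated directly into a single shared update. The sketch below shows that aggregate-and-update loop for a tiny logistic-regression model; the homomorphic-encryption layer is omitted for brevity, and the data, weighting scheme, and learning rate are all made up.

```python
import math

def local_gradient(w, samples):
    """Average logistic-loss gradient over one institution's local samples."""
    g = [0.0] * len(w)
    for x, y in samples:
        z = sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))
        for j, xj in enumerate(x):
            g[j] += (p - y) * xj
    return [gj / len(samples) for gj in g]

# Made-up (features, label) pairs for two same-category institutions.
party_a = [([1.0, 0.5], 1), ([0.2, 1.0], 0)]
party_b = [([0.9, 0.1], 1), ([0.1, 0.8], 0), ([0.7, 0.3], 1)]

w = [0.0, 0.0]   # shared model, identical at both institutions
lr = 0.5
for _ in range(100):   # horizontal federated rounds
    ga = local_gradient(w, party_a)
    gb = local_gradient(w, party_b)
    total = len(party_a) + len(party_b)
    # Coordinator: sample-count-weighted aggregate of the local gradients.
    g = [(len(party_a) * a + len(party_b) * b) / total
         for a, b in zip(ga, gb)]
    w = [wi - lr * gi for wi, gi in zip(w, g)]

# Both parties end every round holding the same updated model w.
p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, [1.0, 0.5]))))
print(p > 0.5)   # -> True
```

In the patent's protocol, the two `local_gradient` results would each be homomorphically encrypted before being sent to the aggregator, as in the Paillier sketch earlier in this document.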
In some embodiments, after the first model and the third model are updated based on the decrypted gradient value, the method further comprises: obtaining the current loss values of the first model and the third model; determining whether both have converged; and, if they have, proceeding to train the first model and the second model with vertical federated learning based on the first and second training samples.
In some embodiments, if the current loss values of the first model and the third model have not converged, horizontal federated training of the two models continues based on the first and third training samples.
In a second aspect, an embodiment of the present application provides an apparatus for predicting information, comprising: an acquisition unit configured to obtain features of a user; a prediction unit configured to input the features of the user separately into a pre-trained first model and a pre-trained second model to obtain a first prediction result and a second prediction result for the user, where the first model and the second model correspond to institutions of different categories and are trained on their respective training samples using vertical federated learning; and an aggregation unit configured to aggregate the first prediction result and the second prediction result to generate a prediction result for the user.
In some embodiments of the apparatus, the first model and the second model are trained exactly as described above for the method of the first aspect, including the vertical federated training with public-key-encrypted gradient values, intermediate values, and loss values; the loss-convergence test; sample alignment; and the optional horizontal federated pre-training of the first model with a third model of the same category.
In a third aspect, an embodiment of the present application provides an electronic device comprising one or more processors and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium storing a computer program which, when executed by a processor, implements the method of any implementation of the first aspect.
The method and apparatus for predicting information provided by the embodiments of the present application first obtain the features of a user; then input those features separately into a pre-trained first model and second model to obtain the user's first and second prediction results; and finally aggregate the two prediction results to generate the user's prediction result. Predicting information with a first model and a second model trained by vertical federated learning improves the accuracy of information prediction.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for predicting information according to the present application;
Fig. 3 is a flowchart of one embodiment of the method for training the first model and the second model according to the present application;
Fig. 4 is a flowchart of one embodiment of the vertical federated learning method according to the present application;
Fig. 5 is a flowchart of one embodiment of the method for training the first model and the third model according to the present application;
Fig. 6 is a flowchart of one embodiment of the horizontal federated learning method according to the present application;
Fig. 7A is a schematic diagram of an application scenario of training models with horizontal and vertical federated learning;
Fig. 7B is a schematic diagram of an application scenario of the horizontal federated learning method;
Fig. 7C is a schematic diagram of an application scenario of the vertical federated learning method;
Fig. 8 is a structural schematic diagram of one embodiment of the apparatus for predicting information according to the present application;
Fig. 9 is a structural schematic diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention, not to limit it. It should also be noted that, for ease of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that in the absence of conflict, the features in the embodiments and the embodiments of the present application can phase Mutually combination.The application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method or apparatus for predicting information of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include a terminal device 101, a network 102, and a server 103. The network 102 serves as the medium providing a communication link between the terminal device 101 and the server 103, and may include various connection types such as wired links, wireless communication links, or fiber-optic cables.
A user may use the terminal device 101 to interact with the server 103 through the network 102, for example to receive or send messages. Various client software, such as information-prediction applications, may be installed on the terminal device 101.
The terminal device 101 may be hardware or software. When it is hardware, it may be any of various electronic devices with a display screen that support information prediction, including but not limited to smartphones, tablet computers, laptop computers, and desktop computers. When it is software, it may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules or as a single piece of software or software module; no specific limitation is imposed here.
The server 103 may be a server that provides various services, such as an information-prediction server. The information-prediction server may analyze and otherwise process data such as the acquired features of a user, generate a processing result (for example, the user's prediction result), and push the processing result to the terminal device 101.
It should be noted that the server 103 may be hardware or software. When it is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. In general, to preserve data isolation between institutions, the server 103 may be implemented as a server cluster composed of the servers of institutions of different categories together with an arbitration server; when the servers of different-category institutions exchange data, the data is first processed by the arbitration server and then exchanged. When the server 103 is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module; no specific limitation is imposed here.
It should be noted that the method for predicting information provided by the embodiments of the present application is generally performed by the server 103; accordingly, the apparatus for predicting information is generally disposed in the server 103.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative; there may be any number of each, as required by the implementation.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for predicting information according to the present application is shown. The method for predicting information includes the following steps:
Step 201: obtain the features of a user.
In this embodiment, the execution body of the method for predicting information (for example, the server 103 shown in Fig. 1) may obtain the features of a user. In general, the execution body can look up the user's features among the features of a large number of users according to the user's identifier. Because institutions of different categories have substantial user overlap but little feature overlap, the execution body may separately look up the user's respective features at each category of institution according to the user's identifier.
For example, a financial institution and an Internet company have substantial user overlap but little feature overlap. The financial institution holds the financial features of a large number of users, and its server can look up a given user's financial features among them according to the user's identifier; the financial features may include, but are not limited to, deposit data, transaction-flow data, loan data, and consumption data. Meanwhile, the Internet company holds the behavioral features of a large number of users, and its server can look up the user's behavioral features among them according to the user's identifier; the behavioral features may include, but are not limited to, website-browsing data, interest-tag data, and geographic-location data.
Step 202: input the features of the user separately into a pre-trained first model and second model to obtain the user's first prediction result and second prediction result.
In this embodiment, the execution body may input the features of the user into the pre-trained first model to obtain the user's first prediction result, and likewise input the features of the user into the pre-trained second model to obtain the user's second prediction result. The first model and the second model correspond to institutions of different categories and are trained on their respective training samples using vertical federated learning; that is, the two models run on the servers of institutions of different categories. Vertical federated learning applies when the institutions' users overlap substantially but their user features overlap little: the training samples of the different-category institutions are split vertically (i.e., along the feature dimension), and the subset of samples whose users are shared by both parties but whose features are not identical is extracted for training. The first prediction result may include the probability, predicted by the first model, that an event occurs for the user; the second prediction result may include the corresponding probability predicted by the second model.
For example, the first model can correspond to financial institution, operate on the server of financial institution.Second model can be right Internet mechanism is answered, is operated on the server of internet mechanism.The corresponding training sample of the financial institution, which can be, to be based on being somebody's turn to do What the financial feature for a large number of users that financial institution possesses obtained.The corresponding training sample of internet mechanism, which can be, to be based on being somebody's turn to do What the behavioural characteristic for a large number of users that internet mechanism possesses obtained.The clothes of the server of the financial institution and the internet mechanism Business device can obtain the first model and the second model using longitudinal federal learning method training based on corresponding training sample. Then, the financial feature of the user can be input to the first model by the server of the financial institution, to obtain the first user's First prediction result.Meanwhile the behavior of the user can be input to the second model by the server of the internet mechanism, to obtain Second prediction result of second user.
Step 203: combine the first prediction result and the second prediction result to generate the user's prediction result.
In the present embodiment, the executing subject may combine the first prediction result and the second prediction result to generate the user's prediction result. For example, it may take the average of the two results as the prediction result; alternatively, it may select either one of them as the prediction result.
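The two ways of generating the prediction result described here can be sketched as follows; the function name `combine` and the probability values are invented for illustration:

```python
def combine(p1: float, p2: float, mode: str = "mean") -> float:
    """Summarize the two parties' predicted event probabilities."""
    if mode == "mean":
        return (p1 + p2) / 2  # average the first and second prediction results
    return p1                 # or simply select one of the two results

print(combine(0.25, 0.75))          # averaging  -> 0.5
print(combine(0.25, 0.75, "pick"))  # selecting one -> 0.25
```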
For example, the Internet company's server may send the second prediction result to the financial institution's server, which then combines the first and second prediction results to generate the user's prediction result.
In the method for predicting information provided by the embodiments of the present application, the user's features are first obtained; they are then input into the pre-trained first model and second model respectively to obtain the user's first and second prediction results; finally, the two results are combined to generate the user's prediction result. Predicting information with a first model and a second model trained by the vertical federated learning method improves the accuracy of the prediction.
With continued reference to Fig. 3, it shows a flow 300 of one embodiment of a method for training the first model and the second model according to the present application. The method comprises the following steps:
Step 301: obtain a first training sample corresponding to the first model and a second training sample corresponding to the second model.
In the present embodiment, the executing subject of the method for training the first model and the second model (for example, the server 103 shown in Fig. 1) may obtain the first training sample corresponding to the first model and the second training sample corresponding to the second model. Since the two models correspond to organizations of different categories, so do the two training samples. The first training sample may include a first sample feature of a first sample user and a first sample label; the second training sample may include a second sample feature of a second sample user. The first sample user may be a user of the organization corresponding to the first training sample, and the second sample user a user of the organization corresponding to the second training sample.
For example, the first sample user may be a user of the financial institution. The first training sample may be derived by the financial institution's server from the financial features of the many users the institution possesses; the first sample feature may be a first user's financial features, and the first sample label a value corresponding to those features. The second sample user may be a user of the Internet company; the second training sample may be derived by that company's server from the behavioral features of the many users it possesses, and the second sample feature may be a second user's behavioral features.
In some optional implementations of the present embodiment, since the user overlap between organizations of different categories is large while the feature overlap is small, the executing subject may align the samples of the first training sample and the second training sample. In general, the alignment is performed on encrypted versions of the two training samples. In this way, the common users of the two parties are identified without disclosing either training sample, and the users who do not overlap are not exposed either.
For example, the financial institution's server may apply MD5 (Message-Digest Algorithm) hashing to the mobile-phone numbers of the first sample users in the first training sample and send the hashed numbers to the Internet company's server. The Internet company's server can then match them against the identifiers of the second sample users in the second training sample via IDMapping (user profiles), thereby aligning the first training sample with the second training sample.
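A minimal sketch of this alignment step, with made-up phone numbers. Note that production systems typically use private set intersection (PSI) rather than plain MD5, since digests of low-entropy identifiers such as phone numbers can be brute-forced:

```python
import hashlib

def md5_digest(phone: str) -> str:
    # MD5 stands in for the hashing step described in the text
    return hashlib.md5(phone.encode()).hexdigest()

# Hypothetical identifiers held by each party
bank_users = ["13800000001", "13800000002", "13800000003"]
web_users  = ["13800000002", "13800000003", "13800000004"]

# The bank sends only digests; the Internet party intersects them with
# digests of its own users, learning nothing about non-overlapping users
sent = {md5_digest(p): p for p in bank_users}
web_digests = {md5_digest(p) for p in web_users}
aligned = sorted(sent[d] for d in sent if d in web_digests)
print(aligned)  # ['13800000002', '13800000003']
```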
Step 302: train the first model and the second model with the vertical federated learning method, based on the first training sample and the second training sample.
In the present embodiment, the executing subject may train the first model and the second model with the vertical federated learning method based on the first and second training samples, until the two models satisfy a preset constraint condition.
For example, the financial institution's server, using the first training sample, and the Internet company's server, using the second training sample, jointly train the first model and the second model with the vertical federated learning method. In general, to keep the data confidential during training, encrypted training is performed through a third-party arbitration server: whenever the two servers exchange data during training, the data is first processed by the arbitration server and only then exchanged.
In the method for training the first and second models provided by the embodiments of the present application, the first training sample corresponding to the first model and the second training sample corresponding to the second model are first obtained; the two models are then trained with the vertical federated learning method based on those samples. Data isolation is thereby achieved, which not only meets the requirements of user-privacy protection and data security but also ensures that the training samples of the different categories of organizations remain independent while information and model parameters are exchanged in encrypted form.
With further reference to Fig. 4, it shows a flow 400 of one embodiment of the vertical federated learning method according to the present application. The vertical federated learning method comprises the following steps:
Step 401: obtain a current intermediate value of the first model and a current intermediate value of the second model.
In the present embodiment, the executing subject of the vertical federated learning method (for example, the server 103 shown in Fig. 1) may obtain the current intermediate value of the first model and that of the second model. In general, intermediate values are generated while the first and second models are being trained.
For example, the financial institution's server may obtain the current intermediate value produced while training the first model, and the Internet company's server the current intermediate value produced while training the second model.
Step 402: apply public-key encryption to the current intermediate values of the first model and the second model, obtaining a current public-key-encrypted intermediate value for each model.
In the present embodiment, the executing subject may apply public-key encryption to the current intermediate value of the first model and to that of the second model, obtaining the first model's current public-key-encrypted intermediate value and the second model's current public-key-encrypted intermediate value.
For example, to keep the data confidential during training, encrypted training is performed through a third-party arbitration server. Specifically, the arbitration server may distribute the public key to the financial institution's server and the Internet company's server, after which each server encrypts its model's current intermediate value with the public key.
In some optional implementations of the present embodiment, the executing subject may obtain the current loss value of the first model and apply public-key encryption to it, obtaining the first model's current public-key-encrypted loss value. In general, loss values are generated while the first model is being trained. For example, the financial institution's server may obtain the first model's current loss value and encrypt it with the public key.
Step 403: train the first model based on the second model's current public-key-encrypted intermediate value, and train the second model based on the first model's current public-key-encrypted intermediate value.
In the present embodiment, the executing subject may train the first model based on the second model's current public-key-encrypted intermediate value, and the second model based on the first model's. What the organization corresponding to the first model and the organization corresponding to the second model exchange are public-key-encrypted intermediate values. Exchanging intermediate values in encrypted form achieves data isolation between the organizations and thereby meets the requirements of user-privacy protection and data security.
For example, the Internet company's server may send the second model's current public-key-encrypted intermediate value to the financial institution's server, which trains the first model on it; meanwhile, the financial institution's server sends the first model's current public-key-encrypted intermediate value to the Internet company's server, which trains the second model on it.
Step 404: obtain the current gradient value of the first model and the current gradient value of the second model.
In the present embodiment, the executing subject may obtain the current gradient value of the first model and that of the second model. In general, gradient values are generated while the first and second models are being trained.
For example, the financial institution's server may obtain the current gradient value produced while training the first model, and the Internet company's server the current gradient value produced while training the second model.
Step 405: apply public-key encryption to the current gradient values of the first model and the second model, obtaining a current public-key-encrypted gradient value for each model.
In the present embodiment, the executing subject may apply public-key encryption to the current gradient value of the first model and to that of the second model, obtaining the first model's current public-key-encrypted gradient value and the second model's current public-key-encrypted gradient value.
For example, the financial institution's server encrypts the first model's current gradient value with the public key, and the Internet company's server encrypts the second model's current gradient value.
Step 406: combine the first model's current public-key-encrypted gradient value and the second model's current public-key-encrypted gradient value, obtaining a current public-key-encrypted gradient value.
In the present embodiment, the executing subject may combine the first model's current public-key-encrypted gradient value and the second model's current public-key-encrypted gradient value to obtain the current public-key-encrypted gradient value.
For example, the financial institution's server may send the first model's current public-key-encrypted gradient value to the arbitration server, while the Internet company's server sends the second model's. The arbitration server then combines the two to obtain the current public-key-encrypted gradient value.
In some optional implementations of the present embodiment, the executing subject may first decrypt the first model's current public-key-encrypted loss value with the private key, obtaining the first model's current private-key-decrypted loss value, and then, based on that decrypted loss value, combine the two models' current public-key-encrypted gradient values to obtain the current public-key-encrypted gradient value. For example, the financial institution's server may send the first model's current public-key-encrypted loss value to the arbitration server; the arbitration server first decrypts it with the private key to obtain the first model's current private-key-decrypted loss value, and then, based on it, combines the first model's and the second model's current public-key-encrypted gradient values to obtain the current public-key-encrypted gradient value.
Step 407: decrypt the current public-key-encrypted gradient value with the private key, obtaining a current private-key-decrypted gradient value.
In the present embodiment, the executing subject may decrypt the current public-key-encrypted gradient value with the private key to obtain the current private-key-decrypted gradient value.
For example, the arbitration server may decrypt the current public-key-encrypted gradient value with the private key to obtain the current private-key-decrypted gradient value.
Step 408: update the first model and the second model respectively based on the current private-key-decrypted gradient value.
In the present embodiment, the executing subject may update the first model and the second model respectively based on the current private-key-decrypted gradient value.
For example, the arbitration server may send the current private-key-decrypted gradient value to the financial institution's server and the Internet company's server; the former updates the first model based on it, and the latter the second model.
Step 409: obtain the current loss value of the first model and the current loss value of the second model.
In the present embodiment, the executing subject may obtain the current loss value of the first model and that of the second model. In general, loss values are generated while the first and second models are being trained.
For example, the financial institution's server may obtain the current loss value produced while training the first model, and the Internet company's server the current loss value produced while training the second model.
Step 410: determine whether the current loss values of the first model and the second model have converged.
In the present embodiment, the executing subject may determine whether the current loss values of the first and second models have converged. If both have converged, execution continues with step 411; if not, execution returns to step 401, that is, training of the first and second models with the vertical federated learning method continues, based on the first and second training samples, until both models' current loss values converge.
For example, the financial institution's server may determine whether the first model's current loss value has converged, and the Internet company's server whether the second model's has.
Step 411: determine that training of the first model and the second model is complete.
In the present embodiment, if the current loss values of the first and second models have converged, the executing subject may determine that training of both models is complete.
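Steps 401 through 411 can be sketched as a two-party vertically partitioned linear regression. The public-key encryption and the arbitration server are omitted here, and all data, dimensions, and learning rates are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X_a = rng.normal(size=(n, 2))  # party A's (e.g., financial) features
X_b = rng.normal(size=(n, 3))  # party B's (e.g., behavioral) features
w_true = np.array([1.0, -0.5, 2.0, 0.3, -1.2])
y = np.hstack([X_a, X_b]) @ w_true  # labels, assumed held by party A

w_a, w_b = np.zeros(2), np.zeros(3)
for _ in range(500):
    # Step 401: each party computes its intermediate value locally
    u_a, u_b = X_a @ w_a, X_b @ w_b
    # Steps 402-403: intermediate values are exchanged (in the clear
    # here; public-key encrypted in the scheme described above)
    residual = u_a + u_b - y
    # Steps 404-408: each party computes its gradient and updates its model
    w_a -= 0.1 * X_a.T @ residual / n
    w_b -= 0.1 * X_b.T @ residual / n
    # Steps 409-411: stop once the loss has converged
    if np.mean(residual ** 2) < 1e-8:
        break

# The concatenated sub-models recover the joint model
assert np.allclose(np.hstack([w_a, w_b]), w_true, atol=1e-3)
```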
In practice, data isolation also exists between different organizations of the same category. Among several organizations of the same category, some possess the features of many users while others possess the features of only a few. If each organization's model were trained only on that organization's own training samples, the resulting model of an organization possessing only a few users' features would perform poorly. To solve this problem, before the first and second models are trained with the vertical federated learning method, the first model and a third model may additionally be trained with a horizontal federated learning method.
With continued reference to Fig. 5, it shows a flow 500 of one embodiment of a method for training the first model and the third model according to the present application. The method comprises the following steps:
Step 501: obtain the first training sample corresponding to the first model and a third training sample corresponding to the third model.
In the present embodiment, the executing subject of the method for training the first model and the third model (for example, the server 103 shown in Fig. 1) may obtain the first training sample corresponding to the first model and the third training sample corresponding to the third model. Since the first and third models correspond to different organizations of the same category, so do the first and third training samples. The first training sample may include the first sample feature of the first sample user and the first sample label; the third training sample includes a third sample feature of a third sample user and a third sample label. The first sample user may be a user of the organization corresponding to the first training sample, and the third sample user a user of the organization corresponding to the third training sample.
For example, the first sample user may be a user of a large financial institution. The first training sample may be derived by the large institution's server from the financial features of the many users that institution possesses; the first sample feature may be the first user's financial features, and the first sample label a value corresponding to those features. The third sample user may be a user of a small financial institution; the third training sample may be derived by the small institution's server from the financial features of the few users it possesses; the third sample feature may be the third user's financial features, and the third sample label a value corresponding to those features.
Step 502: train the first model and the third model with the horizontal federated learning method, based on the first training sample and the third training sample.
In the present embodiment, the executing subject may train the first and third models with the horizontal federated learning method based on the first and third training samples, until the two models satisfy a preset constraint condition. Horizontal federated learning applies when the feature overlap between the users of different organizations of the same category is large but the user overlap is small: each organization's training samples are partitioned horizontally (along the user dimension), and the portion of samples for which the two parties' features are the same but the users are not identical is extracted for training.
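The contrast between the horizontal (user-dimension) cut described here and the vertical (feature-dimension) cut described earlier can be illustrated on a toy user-by-feature matrix; all shapes are invented:

```python
import numpy as np

# Hypothetical full data: 4 users (rows) x 5 features (columns)
data = np.arange(20).reshape(4, 5)

# Horizontal federated learning: parties share the feature space but
# hold different users -> samples are cut along the user dimension
bank_large, bank_small = data[:3, :], data[3:, :]

# Vertical federated learning: parties share users but hold different
# features -> samples are cut along the feature dimension
financial_feats, behavior_feats = data[:, :2], data[:, 2:]

print(bank_large.shape, bank_small.shape)           # (3, 5) (1, 5)
print(financial_feats.shape, behavior_feats.shape)  # (4, 2) (4, 3)
```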
For example, the large financial institution's server, using the first training sample, and the small financial institution's server, using the third training sample, jointly train the first model and the third model with the horizontal federated learning method. In general, to keep the data confidential during training, encrypted training is performed through a third-party arbitration server.
In the method for training the first and third models provided by the embodiments of the present application, the first training sample corresponding to the first model and the third training sample corresponding to the third model are first obtained; the two models are then trained with the horizontal federated learning method based on those samples. Data isolation is thereby achieved, which not only meets the requirements of user-privacy protection and data security but also ensures that the training samples of the different organizations of the same category remain independent while information and model parameters are exchanged in encrypted form.
With further reference to Fig. 6, it shows a flow 600 of one embodiment of the horizontal federated learning method according to the present application. The horizontal federated learning method comprises the following steps:
Step 601: obtain the current gradient value of the first model and the current gradient value of the third model.
In the present embodiment, the executing subject of the horizontal federated learning method (for example, the server 103 shown in Fig. 1) may obtain the current gradient value of the first model and that of the third model. In general, gradient values are generated while the first and third models are being trained.
For example, the large financial institution's server may obtain the current gradient value produced while training the first model, and the small financial institution's server the current gradient value produced while training the third model.
Step 602: apply homomorphic encryption to the current gradient values of the first model and the third model, obtaining a current homomorphically encrypted gradient value for each model.
In the present embodiment, the executing subject may apply homomorphic encryption to the current gradient value of the first model and to that of the third model, obtaining the first model's current homomorphically encrypted gradient value and the third model's current homomorphically encrypted gradient value. Homomorphic encryption is a cryptographic technique based on the computational-complexity theory of hard mathematical problems. Processing homomorphically encrypted data produces an output which, when decrypted, is identical to the output obtained by processing the original unencrypted data in the same way.
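This property can be demonstrated with the textbook Paillier cryptosystem, an additively homomorphic scheme of the kind commonly used for gradient aggregation in federated learning. The key size below is a toy value for illustration only, nowhere near a secure parameter:

```python
import random
from math import gcd

# Toy Paillier key generation (textbook scheme, insecure key size)
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # private decryption parameter

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts, so an
# arbiter can aggregate encrypted gradients without ever seeing them
grad_a, grad_b = 17, 25
aggregated = (encrypt(grad_a) * encrypt(grad_b)) % n2
assert decrypt(aggregated) == grad_a + grad_b  # 42
```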
For example, the large financial institution's server applies homomorphic encryption to the first model's current gradient value, and the small financial institution's server to the third model's current gradient value.
Step 603: aggregate the first model's current homomorphically encrypted gradient value and the third model's current homomorphically encrypted gradient value, obtaining a current homomorphically encrypted gradient value.
In the present embodiment, the executing subject may aggregate the first model's current homomorphically encrypted gradient value and the third model's to obtain the current homomorphically encrypted gradient value.
For example, to keep the data confidential during training, encrypted training is performed through a third-party arbitration server. Specifically, the large financial institution's server may send the first model's current homomorphically encrypted gradient value to the arbitration server, and the small financial institution's server the third model's. The arbitration server then aggregates the two to obtain the current homomorphically encrypted gradient value.
Step 604: decrypt the current homomorphically encrypted gradient value, obtaining a current decrypted gradient value.
In the present embodiment, the executing subject may decrypt the current homomorphically encrypted gradient value to obtain the current decrypted gradient value.
For example, the arbitration server may send the current homomorphically encrypted gradient value to the large financial institution's server and the small financial institution's server, each of which decrypts it to obtain the current decrypted gradient value.
Step 605: update the first model and the third model respectively based on the current decrypted gradient value.
In the present embodiment, the executing subject may update the first model and the third model respectively based on the current decrypted gradient value.
For example, the large financial institution's server updates the first model based on the current decrypted gradient value, while the small financial institution's server updates the third model.
Step 606: obtain the current loss value of the first model and the current loss value of the third model.
In the present embodiment, the executing subject may obtain the current loss value of the first model and that of the third model. In general, loss values are generated while the first and third models are being trained.
For example, the large financial institution's server may obtain the current loss value produced while training the first model, and the small financial institution's server the current loss value produced while training the third model.
Step 607: determine whether the current loss values of the first model and the third model have converged.
In the present embodiment, the executing subject may determine whether the current loss values of the first and third models have converged. If both have converged, execution continues with step 608; if not, execution returns to step 601, that is, training of the first and third models with the horizontal federated learning method continues, based on the first and third training samples, until both models' current loss values converge.
For example, the large financial institution's server may determine whether the first model's current loss value has converged, and the small financial institution's server whether the third model's has.
Step 608: train the first model and the second model with the vertical federated learning method, based on the first training sample and the second training sample.
In the present embodiment, if the current loss values of the first and third models have converged, the executing subject trains the first and second models with the vertical federated learning method based on the first and second training samples.
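The horizontal phase of steps 601 through 608 can be sketched as two parties with the same feature space but disjoint users, jointly fitting one shared model. The homomorphic encryption and the arbitration server are reduced to a plain average, and all data and rates are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([1.5, -2.0])

def make_party(n_users: int):
    # Each party's users share the same two features
    X = rng.normal(size=(n_users, 2))
    return X, X @ w_true

X_large, y_large = make_party(100)  # large institution: many users
X_small, y_small = make_party(10)   # small institution: few users

def local_gradient(w, X, y):
    # Step 601: each party's squared-error gradient on its own users
    return 2 * X.T @ (X @ w - y) / len(y)

w = np.zeros(2)  # both parties start from the same shared model
for _ in range(300):
    g_large = local_gradient(w, X_large, y_large)
    g_small = local_gradient(w, X_small, y_small)
    # Steps 602-604: gradients would be homomorphically encrypted,
    # aggregated by the arbiter, and decrypted; here, a plain average
    g = (g_large + g_small) / 2
    # Step 605: both parties update their copies with the aggregate
    w -= 0.05 * g

# Steps 606-608: loss has converged; the small party benefits from
# the large party's data without either exposing raw samples
assert np.allclose(w, w_true, atol=1e-3)
```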
With further reference to Fig. 7 A, it illustrates an applied fields of laterally federal study and longitudinal federal learning training model The schematic diagram of scape.In the application scenarios, the server of bank A is based on based on the server of the training sample of bank A and bank B The training sample of bank B obtains horizontal combination using the model of the laterally model of federal learning method training bank A and bank B Model.The model of training sample training internet C of the server of internet C based on internet C.Horizontal combination model and interconnection The model of C is netted using longitudinal federal learning method training, obtains vertical integration model
With further reference to Fig. 7 B, it illustrates the schematic diagrames of a laterally application scenarios of federal learning method.Specifically, The server of bank A based on bank A training sample training bank A model, during model training carry out parameter optimization, Costing bio disturbance, gradient updating.The server of bank A carries out gradient encryption to the current gradient value of the model of bank A, and is sent to Arbitrating server.Meanwhile the model of training sample training bank B of the server of bank B based on bank B, in model training mistake Parameter optimization, costing bio disturbance, gradient updating are carried out in journey.The server of bank B carries out the current gradient value of the model of bank B Gradient encryption, and it is sent to arbitrating server.Arbitrating server carries out gradient polymeric, and is respectively sent to the server of bank A With the server of bank B.After the server of bank A carries out gradient decryption, parameter optimization is carried out to the model of bank A, obtains silver The model of row A.Meanwhile after the server of bank B carries out gradient decryption, parameter optimization is carried out to the model of bank B, obtains bank The model of B.
With further reference to Fig. 7C, a schematic diagram of an application scenario of the vertical federated learning method is illustrated. Specifically, the server of bank B trains the model of bank B based on the aligned training samples of bank B, generating intermediate values. The server of internet company C trains the model of internet company C based on the aligned training samples of internet company C, generating intermediate values. The server of bank B and the server of internet company C exchange the intermediate values and continue model training based on the exchanged intermediate values. The server of bank B performs parameter optimization, loss computation, and gradient updating during model training. The server of bank B encrypts the current gradient value of the model of bank B, encrypts the current loss value, and sends them to the arbiter server. Meanwhile, the server of internet company C performs parameter optimization, loss computation, and gradient updating during model training. The server of internet company C encrypts the current gradient value of the model of internet company C, encrypts the current loss value, and sends them to the arbiter server. The arbiter server performs gradient summarization and gradient decryption, and sends the result to the server of bank B and the server of internet company C respectively. The server of bank B optimizes the parameters of the model of bank B to obtain the model of bank B. Meanwhile, the server of internet company C optimizes the parameters of the model of internet company C to obtain the model of internet company C.
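The intermediate-value exchange at the heart of the vertical round can be sketched as follows. The two parties hold different features of the same aligned users; each shares only its partial prediction (the "intermediate value"), never its raw features, and each computes the gradient for its own parameters. The data and hyperparameters are assumptions for the example, and the encryption of the exchange is omitted for clarity.

```python
# A minimal sketch of one vertical federated training loop, as in Fig. 7C.
# Bank B and internet company C hold different features of the same users;
# exchanging partial predictions lets both compute their local gradients.

users = [0, 1, 2]
x_b = [1.0, 2.0, 3.0]      # bank B's feature per aligned user
x_c = [2.0, 1.0, 0.0]      # internet company C's feature per aligned user
y   = [5.0, 4.0, 3.0]      # labels (held on bank B's side in this sketch)

w_b, w_c, lr = 0.0, 0.0, 0.05
for _ in range(500):
    u_b = [w_b * x for x in x_b]   # B's intermediate values, sent to C
    u_c = [w_c * x for x in x_c]   # C's intermediate values, sent to B
    # Shared residual of the joint prediction u_b + u_c against the label.
    resid = [ub + uc - t for ub, uc, t in zip(u_b, u_c, y)]
    g_b = sum(r * x for r, x in zip(resid, x_b)) / len(users)
    g_c = sum(r * x for r, x in zip(resid, x_c)) / len(users)
    w_b -= lr * g_b   # each party updates only its own parameters
    w_c -= lr * g_c
# Converges toward the joint fit y = 1.0 * x_b + 2.0 * x_c on this toy data.
```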
With further reference to Fig. 8, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for predicting information. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.
As shown in Fig. 8, the apparatus 800 for predicting information of the present embodiment may include: an acquiring unit 801, a predicting unit 802, and a summarizing unit 803. The acquiring unit 801 is configured to acquire features of a user. The predicting unit 802 is configured to input the features of the user into a pre-trained first model and a pre-trained second model respectively, to obtain a first prediction result and a second prediction result for the user, wherein the first model and the second model respectively correspond to institutions of different categories, and are trained using the vertical federated learning method based on corresponding training samples. The summarizing unit 803 is configured to summarize the first prediction result and the second prediction result to generate a prediction result for the user.
In the present embodiment, for the specific processing of the acquiring unit 801, the predicting unit 802, and the summarizing unit 803 of the apparatus 800 for predicting information, and the technical effects they bring, reference may be made to the related descriptions of step 201, step 202, and step 203 in the embodiment corresponding to Fig. 2, respectively; details are not repeated here.
In some optional implementations of the present embodiment, the first model and the second model are trained as follows: acquiring a first training sample corresponding to the first model and a second training sample corresponding to the second model, wherein the first training sample includes first sample features and a first sample label of a first sample user, and the second training sample includes second sample features of a second sample user; and training the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method.
In some optional implementations of the present embodiment, training the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method comprises: acquiring a current gradient value of the first model and a current gradient value of the second model; performing public key encryption on the current gradient value of the first model and the current gradient value of the second model to obtain a current public-key-encrypted gradient value of the first model and a current public-key-encrypted gradient value of the second model; summarizing the current public-key-encrypted gradient value of the first model and the current public-key-encrypted gradient value of the second model to obtain a current public-key-encrypted gradient value; performing private key decryption on the current public-key-encrypted gradient value to obtain a current private-key-decrypted gradient value; and updating the first model and the second model respectively based on the current private-key-decrypted gradient value.
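The encrypt-summarize-decrypt flow above can be illustrated with a toy stand-in for the cryptography. This is not a real public-key scheme: a fixed additive mask plays the role of an additively homomorphic cipher (such as a Paillier-style scheme), so the point shown is only the flow — each party encrypts its own gradient, the ciphertexts are summed, and only the aggregate is decrypted. The mask value and gradient values are assumptions for the example.

```python
# Toy illustration (NOT real cryptography) of the optional implementation:
# public-key-encrypt each current gradient value, sum the ciphertexts
# (additive homomorphism), then private-key-decrypt only the aggregate.

MASK = 1_000_003  # stand-in for the masking effect of the public key

def public_key_encrypt(gradient):
    """Mock encryption: adds a mask; a real scheme would be e.g. Paillier."""
    return gradient + MASK

def homomorphic_sum(ciphertexts):
    """Ciphertext addition corresponds to plaintext addition."""
    return sum(ciphertexts)

def private_key_decrypt(ciphertext, n_parties):
    """Mock decryption: removes one mask per contributing party."""
    return ciphertext - n_parties * MASK

g1, g2 = 0.25, -0.10   # current gradient values of the first and second model
c = homomorphic_sum([public_key_encrypt(g1), public_key_encrypt(g2)])
aggregated = private_key_decrypt(c, n_parties=2)
# aggregated recovers g1 + g2 without either plaintext gradient being shared
```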
In some optional implementations of the present embodiment, training the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method further comprises: acquiring a current intermediate value of the first model and a current intermediate value of the second model; performing public key encryption on the current intermediate value of the first model and the current intermediate value of the second model to obtain a current public-key-encrypted intermediate value of the first model and a current public-key-encrypted intermediate value of the second model; and training the first model based on the current public-key-encrypted intermediate value of the second model, and training the second model based on the current public-key-encrypted intermediate value of the first model.
In some optional implementations of the present embodiment, training the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method further comprises: acquiring a current loss value of the first model; and performing public key encryption on the current loss value of the first model to obtain a current public-key-encrypted loss value of the first model. Summarizing the current public-key-encrypted gradient value of the first model and the current public-key-encrypted gradient value of the second model to obtain the current public-key-encrypted gradient value then comprises: performing private key decryption on the current public-key-encrypted loss value of the first model to obtain a current private-key-decrypted loss value of the first model; and summarizing, based on the current private-key-decrypted loss value of the first model, the current public-key-encrypted gradient value of the first model and the current public-key-encrypted gradient value of the second model to obtain the current public-key-encrypted gradient value.
In some optional implementations of the present embodiment, after updating the first model and the second model respectively based on the current private-key-decrypted gradient value, the method further comprises: acquiring a current loss value of the first model and a current loss value of the second model; determining whether the current loss value of the first model and the current loss value of the second model converge; and if the current loss value of the first model and the current loss value of the second model converge, determining that the training of the first model and the second model is completed.
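The convergence test described above can be sketched as follows. The source does not specify a convergence criterion, so the tolerance, window size, and the rule "recent losses stop changing" are assumptions chosen for illustration.

```python
# A minimal sketch of the convergence check: training of both models is
# considered complete once each model's recent loss values have stabilised.
# The tolerance and window are assumed hyperparameters, not from the source.

def converged(loss_history, tol=1e-4, window=2):
    """True when the last `window` consecutive loss deltas are below `tol`."""
    if len(loss_history) < window + 1:
        return False
    recent = loss_history[-(window + 1):]
    return all(abs(a - b) < tol for a, b in zip(recent, recent[1:]))

losses_model_1 = [0.9, 0.5, 0.30001, 0.30000, 0.30000]
losses_model_2 = [1.2, 0.7, 0.40002, 0.40001, 0.40001]
training_done = converged(losses_model_1) and converged(losses_model_2)
```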
In some optional implementations of the present embodiment, after determining whether the current loss value of the first model and the current loss value of the second model converge, the method further comprises: if the current loss value of the first model and the current loss value of the second model do not converge, continuing to train the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method.
In some optional implementations of the present embodiment, before training the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method, the method further comprises: performing sample alignment on the first training sample and the second training sample.
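Sample alignment can be sketched as an intersection over user identifiers, ordered consistently on both sides. In practice this step is done with an encrypted protocol (e.g. private set intersection) so that neither party reveals its full user list; the plain intersection and the identifiers below are illustrative assumptions only.

```python
# A minimal sketch of sample alignment before vertical training: keep only
# users present in both parties' datasets, in one agreed order. Real systems
# replace the plain intersection with a privacy-preserving protocol.

sample_1 = {"u1": [0.3, 1.2], "u2": [0.7, 0.1], "u4": [0.5, 0.9]}  # party 1
sample_2 = {"u2": [12.0], "u3": [7.5], "u4": [3.3]}                # party 2

shared_ids = sorted(sample_1.keys() & sample_2.keys())  # common users
aligned_1 = [sample_1[u] for u in shared_ids]           # row i on both sides
aligned_2 = [sample_2[u] for u in shared_ids]           # refers to the same user
```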
In some optional implementations of the present embodiment, before training the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method, the method further comprises: acquiring a third training sample corresponding to a third model, wherein the first model and the third model respectively correspond to different institutions of the same category, and the third training sample includes third sample features and a third sample label of a third sample user; and training the first model and the third model based on the first training sample and the third training sample using the horizontal federated learning method.
In some optional implementations of the present embodiment, training the first model and the third model based on the first training sample and the third training sample using the horizontal federated learning method comprises: acquiring a current gradient value of the first model and a current gradient value of the third model; performing homomorphic encryption on the current gradient value of the first model and the current gradient value of the third model to obtain a current homomorphically encrypted gradient value of the first model and a current homomorphically encrypted gradient value of the third model; aggregating the current homomorphically encrypted gradient value of the first model and the current homomorphically encrypted gradient value of the third model to obtain a current homomorphically encrypted gradient value; decrypting the current homomorphically encrypted gradient value to obtain a current decrypted gradient value; and updating the first model and the third model respectively based on the current decrypted gradient value.
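The privacy property of the aggregation step above — the aggregator learns only the combined gradient, never an individual one — can be illustrated with pairwise masking, a standard secure-aggregation technique used here as a stand-in for the homomorphic encryption the implementation names. The gradient values and seed are assumptions for the example.

```python
# A sketch of aggregating the current gradient values of the first and third
# models, with pairwise masking standing in for homomorphic encryption: the
# two parties add and subtract the same offline-agreed mask, so the masks
# cancel in the sum and the aggregator sees only the aggregate.

import random

rng = random.Random(42)
mask = rng.uniform(-1e6, 1e6)   # shared pairwise mask, agreed offline

g_first, g_third = 0.8, 0.4     # current gradient values of the two models

masked_first = g_first + mask   # first model's party adds the mask
masked_third = g_third - mask   # third model's party subtracts it

aggregate = (masked_first + masked_third) / 2   # masks cancel at the arbiter
# aggregate is the mean gradient used to update both models
```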
In some optional implementations of the present embodiment, after updating the first model and the third model respectively based on the current decrypted gradient value, the method further comprises: acquiring a current loss value of the first model and a current loss value of the third model; determining whether the current loss value of the first model and the current loss value of the third model converge; and if the current loss value of the first model and the current loss value of the third model converge, training the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method.
In some optional implementations of the present embodiment, after determining whether the current loss value of the first model and the current loss value of the third model converge, the method further comprises: if the current loss value of the first model and the current loss value of the third model do not converge, continuing to train the first model and the third model based on the first training sample and the third training sample using the horizontal federated learning method.
Referring now to Fig. 9, a structural schematic diagram of a computer system 900 of an electronic device (e.g., the server 103 shown in Fig. 1) suitable for implementing the embodiments of the present application is illustrated. The electronic device shown in Fig. 9 is merely an example and should not impose any limitation on the function and scope of use of the embodiments of the present application.
As shown in Fig. 9, the computer system 900 includes a central processing unit (CPU) 901, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded into a random access memory (RAM) 903 from a storage portion 908. The RAM 903 also stores various programs and data required by the operations of the system 900. The CPU 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
The following components are connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, etc.; an output portion 907 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, etc.; a storage portion 908 including a hard disk, etc.; and a communication portion 909 including a network interface card such as a LAN card or a modem. The communication portion 909 performs communication processing via a network such as the Internet. A driver 910 is also connected to the I/O interface 905 as needed. A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the driver 910 as needed, so that a computer program read therefrom is installed into the storage portion 908 as needed.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 909, and/or installed from the removable medium 911. When the computer program is executed by the central processing unit (CPU) 901, the above-mentioned functions defined in the method of the present application are executed.
It should be noted that the computer-readable medium described herein may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the present application, the computer-readable storage medium may be any tangible medium containing or storing a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device. In the present application, the computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. The propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and the computer-readable medium may send, propagate, or transmit a program used by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any appropriate combination of the above.
Computer program code for executing the operations of the present application may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or electronic device. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing specified logic functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should further be noted that each box in a block diagram and/or flowchart, and a combination of boxes in a block diagram and/or flowchart, may be implemented by a dedicated hardware-based system executing specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor, which may be described, for example, as: a processor comprising an acquiring unit, a predicting unit, and a summarizing unit. The names of these units do not constitute a limitation to the units themselves in some cases; for example, the acquiring unit may also be described as "a unit for acquiring features of a user".
As another aspect, the present application further provides a computer-readable medium. The computer-readable medium may be included in the electronic device described in the above embodiments, or may exist alone without being assembled into the electronic device. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: acquire features of a user; input the features of the user into a pre-trained first model and a pre-trained second model respectively, to obtain a first prediction result and a second prediction result for the user, wherein the first model and the second model respectively correspond to institutions of different categories, and are trained using the vertical federated learning method based on corresponding training samples; and summarize the first prediction result and the second prediction result to generate a prediction result for the user.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (15)

1. A method for predicting information, comprising:
acquiring features of a user;
inputting the features of the user into a pre-trained first model and a pre-trained second model respectively, to obtain a first prediction result and a second prediction result for the user, wherein the first model and the second model respectively correspond to institutions of different categories, and are trained using a vertical federated learning method based on corresponding training samples; and
summarizing the first prediction result and the second prediction result to generate a prediction result for the user.
2. The method according to claim 1, wherein the first model and the second model are trained as follows:
acquiring a first training sample corresponding to the first model and a second training sample corresponding to the second model, wherein the first training sample includes first sample features and a first sample label of a first sample user, and the second training sample includes second sample features of a second sample user; and
training the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method.
3. The method according to claim 2, wherein the training the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method comprises:
acquiring a current gradient value of the first model and a current gradient value of the second model;
performing public key encryption on the current gradient value of the first model and the current gradient value of the second model to obtain a current public-key-encrypted gradient value of the first model and a current public-key-encrypted gradient value of the second model;
summarizing the current public-key-encrypted gradient value of the first model and the current public-key-encrypted gradient value of the second model to obtain a current public-key-encrypted gradient value;
performing private key decryption on the current public-key-encrypted gradient value to obtain a current private-key-decrypted gradient value; and
updating the first model and the second model respectively based on the current private-key-decrypted gradient value.
4. The method according to claim 3, wherein the training the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method further comprises:
acquiring a current intermediate value of the first model and a current intermediate value of the second model;
performing public key encryption on the current intermediate value of the first model and the current intermediate value of the second model to obtain a current public-key-encrypted intermediate value of the first model and a current public-key-encrypted intermediate value of the second model; and
training the first model based on the current public-key-encrypted intermediate value of the second model, and training the second model based on the current public-key-encrypted intermediate value of the first model.
5. The method according to claim 4, wherein the training the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method further comprises:
acquiring a current loss value of the first model; and
performing public key encryption on the current loss value of the first model to obtain a current public-key-encrypted loss value of the first model; and
wherein the summarizing the current public-key-encrypted gradient value of the first model and the current public-key-encrypted gradient value of the second model to obtain a current public-key-encrypted gradient value comprises:
performing private key decryption on the current public-key-encrypted loss value of the first model to obtain a current private-key-decrypted loss value of the first model; and
summarizing, based on the current private-key-decrypted loss value of the first model, the current public-key-encrypted gradient value of the first model and the current public-key-encrypted gradient value of the second model to obtain the current public-key-encrypted gradient value.
6. The method according to claim 3, wherein after the updating the first model and the second model respectively based on the current private-key-decrypted gradient value, the method further comprises:
acquiring a current loss value of the first model and a current loss value of the second model;
determining whether the current loss value of the first model and the current loss value of the second model converge; and
if the current loss value of the first model and the current loss value of the second model converge, determining that the training of the first model and the second model is completed.
7. The method according to claim 6, wherein after the determining whether the current loss value of the first model and the current loss value of the second model converge, the method further comprises:
if the current loss value of the first model and the current loss value of the second model do not converge, continuing to train the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method.
8. The method according to any one of claims 2-6, wherein before the training the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method, the method further comprises:
performing sample alignment on the first training sample and the second training sample.
9. The method according to claim 2, wherein before the training the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method, the method further comprises:
acquiring a third training sample corresponding to a third model, wherein the first model and the third model respectively correspond to different institutions of the same category, and the third training sample includes third sample features and a third sample label of a third sample user; and
training the first model and the third model based on the first training sample and the third training sample using a horizontal federated learning method.
10. The method according to claim 9, wherein the training the first model and the third model based on the first training sample and the third training sample using the horizontal federated learning method comprises:
acquiring a current gradient value of the first model and a current gradient value of the third model;
performing homomorphic encryption on the current gradient value of the first model and the current gradient value of the third model to obtain a current homomorphically encrypted gradient value of the first model and a current homomorphically encrypted gradient value of the third model;
aggregating the current homomorphically encrypted gradient value of the first model and the current homomorphically encrypted gradient value of the third model to obtain a current homomorphically encrypted gradient value;
decrypting the current homomorphically encrypted gradient value to obtain a current decrypted gradient value; and
updating the first model and the third model respectively based on the current decrypted gradient value.
11. The method according to claim 10, wherein after the updating the first model and the third model respectively based on the current decrypted gradient value, the method further comprises:
acquiring a current loss value of the first model and a current loss value of the third model;
determining whether the current loss value of the first model and the current loss value of the third model converge; and
if the current loss value of the first model and the current loss value of the third model converge, training the first model and the second model based on the first training sample and the second training sample using the vertical federated learning method.
12. The method according to claim 11, wherein after the determining whether the current loss value of the first model and the current loss value of the third model converge, the method further comprises:
if the current loss value of the first model and the current loss value of the third model do not converge, continuing to train the first model and the third model based on the first training sample and the third training sample using the horizontal federated learning method.
13. An apparatus for predicting information, comprising:
an acquiring unit, configured to acquire features of a user;
a predicting unit, configured to input the features of the user into a pre-trained first model and a pre-trained second model respectively, to obtain a first prediction result and a second prediction result for the user, wherein the first model and the second model respectively correspond to institutions of different categories, and are trained using a vertical federated learning method based on corresponding training samples; and
a summarizing unit, configured to summarize the first prediction result and the second prediction result to generate a prediction result for the user.
14. An electronic device, comprising:
one or more processors; and
a storage device on which one or more programs are stored,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1-12.
15. A computer-readable medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the method according to any one of claims 1-12 is implemented.
CN201910533286.2A 2019-06-19 2019-06-19 Method and apparatus for predicting information Active CN110245510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910533286.2A CN110245510B (en) 2019-06-19 2019-06-19 Method and apparatus for predicting information


Publications (2)

Publication Number Publication Date
CN110245510A true CN110245510A (en) 2019-09-17
CN110245510B CN110245510B (en) 2021-12-07

Family

ID=67888271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910533286.2A Active CN110245510B (en) 2019-06-19 2019-06-19 Method and apparatus for predicting information

Country Status (1)

Country Link
CN (1) CN110245510B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109165515A (en) * 2018-08-10 2019-01-08 Shenzhen Qianhai WeBank Co., Ltd. Federated learning-based model parameter acquisition method, system and readable storage medium
CN109189825A (en) * 2018-08-10 2019-01-11 Shenzhen Qianhai WeBank Co., Ltd. Federated learning model building method for horizontally partitioned data, server and medium
US20190138934A1 (en) * 2018-09-07 2019-05-09 Saurav Prakash Technologies for distributing gradient descent computation in a heterogeneous multi-access edge computing (mec) networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG QIANG: "Federated Machine Learning: Concept and Applications", ACM Transactions on Intelligent Systems and Technology (TIST) *
HUANG DONGMEI et al.: "Case-Driven Big Data: Principles, Technologies and Applications", 30 November 2018, Shanghai Jiao Tong University Press *

Cited By (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021004551A1 * 2019-09-26 2021-01-14 Shenzhen Qianhai WeBank Co., Ltd. Method, apparatus, and device for optimization of vertically federated learning system, and a readable storage medium
CN110633806B (en) * 2019-10-21 2024-04-26 Shenzhen Qianhai WeBank Co., Ltd. Longitudinal federated learning system optimization method, device, equipment and readable storage medium
CN110633806A (en) * 2019-10-21 2019-12-31 Shenzhen Qianhai WeBank Co., Ltd. Longitudinal federated learning system optimization method, device, equipment and readable storage medium
CN110782042A (en) * 2019-10-29 2020-02-11 Shenzhen Qianhai WeBank Co., Ltd. Method, device, equipment and medium for combining horizontal federation and vertical federation
WO2021083276A1 * 2019-10-29 2021-05-06 Shenzhen Qianhai WeBank Co., Ltd. Method, device, and apparatus for combining horizontal federation and vertical federation, and medium
WO2021082647A1 * 2019-10-29 2021-05-06 Huawei Technologies Co., Ltd. Federated learning system, training result aggregation method, and device
CN112749812A (en) * 2019-10-29 2021-05-04 Huawei Technologies Co., Ltd. Federated learning system, training result aggregation method and device
CN110797124B (en) * 2019-10-30 2024-04-12 Tencent Technology (Shenzhen) Co., Ltd. Model multi-terminal collaborative training method, medical risk prediction method and device
CN110797124A (en) * 2019-10-30 2020-02-14 Tencent Technology (Shenzhen) Co., Ltd. Model multi-terminal collaborative training method, medical risk prediction method and device
CN110827147A (en) * 2019-10-31 2020-02-21 Shandong Inspur Artificial Intelligence Research Institute Co., Ltd. Consortium blockchain-based federated learning incentive method and system
WO2021103909A1 * 2019-11-27 2021-06-03 Alipay (Hangzhou) Information Technology Co., Ltd. Risk prediction method and apparatus, risk prediction model training method and apparatus, and electronic device
CN111046425A (en) * 2019-12-12 2020-04-21 Alipay (Hangzhou) Information Technology Co., Ltd. Method and device for risk identification by combining multiple parties
CN111046425B (en) * 2019-12-12 2021-07-13 Alipay (Hangzhou) Information Technology Co., Ltd. Method and device for risk identification by combining multiple parties
WO2021114822A1 * 2019-12-12 2021-06-17 Alipay (Hangzhou) Information Technology Co., Ltd. Private data protection-based risk decision making method, apparatus and system, and device
CN110995737A (en) * 2019-12-13 2020-04-10 Alipay (Hangzhou) Information Technology Co., Ltd. Gradient fusion method and device for federated learning and electronic equipment
CN111178538A (en) * 2019-12-17 2020-05-19 Hangzhou Ruixin Data Technology Co., Ltd. Federated learning method and device for vertical data
CN111178538B (en) * 2019-12-17 2023-08-15 Hangzhou Ruixin Data Technology Co., Ltd. Federated learning method and device for vertical data
WO2021120855A1 * 2019-12-20 2021-06-24 Alipay (Hangzhou) Information Technology Co., Ltd. Method and system for carrying out model training on the basis of privacy data
CN111291801B (en) * 2020-01-21 2021-08-27 Shenzhen Qianhai WeBank Co., Ltd. Data processing method and device
CN111291801A (en) * 2020-01-21 2020-06-16 Shenzhen Qianhai WeBank Co., Ltd. Data processing method and device
CN111310204A (en) * 2020-02-10 2020-06-19 Beijing Baidu Netcom Science and Technology Co., Ltd. Data processing method and device
CN111310204B (en) * 2020-02-10 2022-06-14 Beijing Baidu Netcom Science and Technology Co., Ltd. Data processing method and device
WO2021159798A1 * 2020-02-12 2021-08-19 Shenzhen Qianhai WeBank Co., Ltd. Method for optimizing longitudinal federated learning system, device and readable storage medium
CN111325352A (en) * 2020-02-20 2020-06-23 Shenzhen Qianhai WeBank Co., Ltd. Model updating method, device, equipment and medium based on longitudinal federated learning
CN111325352B (en) * 2020-02-20 2021-02-19 Shenzhen Qianhai WeBank Co., Ltd. Model updating method, device, equipment and medium based on longitudinal federated learning
WO2021169477A1 * 2020-02-28 2021-09-02 Shenzhen Qianhai WeBank Co., Ltd. Cross feature-based model building and prediction methods, devices and apparatuses, and storage medium
CN111369260A (en) * 2020-03-10 2020-07-03 Alipay (Hangzhou) Information Technology Co., Ltd. Privacy-protecting risk prediction method and device
CN111081337B (en) * 2020-03-23 2020-06-26 Tencent Technology (Shenzhen) Co., Ltd. Collaborative task prediction method and computer readable storage medium
CN111081337A (en) * 2020-03-23 2020-04-28 Tencent Technology (Shenzhen) Co., Ltd. Collaborative task prediction method and computer readable storage medium
CN111428885B (en) * 2020-03-31 2021-06-04 Shenzhen Qianhai WeBank Co., Ltd. User indexing method in federated learning and federated learning device
CN111428885A (en) * 2020-03-31 2020-07-17 Shenzhen Qianhai WeBank Co., Ltd. User indexing method in federated learning and federated learning device
CN113554476A (en) * 2020-04-23 2021-10-26 JD Digital Technology Holdings Co., Ltd. Training method and system of credit prediction model, electronic device and storage medium
CN113554476B (en) * 2020-04-23 2024-04-19 JD Technology Holding Co., Ltd. Training method and system of credit prediction model, electronic equipment and storage medium
CN111553443A (en) * 2020-05-14 2020-08-18 Beijing Huayu Yuandian Information Service Co., Ltd. Training method and device for judicial document processing model, and electronic equipment
WO2021232832A1 * 2020-05-19 2021-11-25 Huawei Technologies Co., Ltd. Data processing method, training method for federated learning and related apparatus, and device
CN113688855A (en) * 2020-05-19 2021-11-23 Huawei Technologies Co., Ltd. Data processing method, federated learning training method, related device and equipment
CN113688855B (en) * 2020-05-19 2023-07-28 Huawei Technologies Co., Ltd. Data processing method, federated learning training method, related device and equipment
CN111882054A (en) * 2020-05-27 2020-11-03 Hangzhou Zhongao Technology Co., Ltd. Method and related equipment for cross training of encrypted relationship network data between two parties
CN111882054B (en) * 2020-05-27 2024-04-12 Hangzhou Zhongao Technology Co., Ltd. Method for cross training of encrypted relationship network data of two parties, and related equipment
CN111861099A (en) * 2020-06-02 2020-10-30 Guangzhishu (Beijing) Technology Co., Ltd. Model evaluation method and device for a federated learning model
CN111935156A (en) * 2020-08-12 2020-11-13 Tech Valley (Xiamen) Information Technology Co., Ltd. Data privacy protection method for federated learning
CN111935156B (en) * 2020-08-12 2022-06-14 Tech Valley (Xiamen) Information Technology Co., Ltd. Data privacy protection method for federated learning
CN112016932A (en) * 2020-09-04 2020-12-01 China UnionPay Co., Ltd. Test method, device, server and medium
CN112182399A (en) * 2020-10-16 2021-01-05 China UnionPay Co., Ltd. Multi-party secure computation method and device for federated learning
CN112199709A (en) * 2020-10-28 2021-01-08 Alipay (Hangzhou) Information Technology Co., Ltd. Multi-party based private data joint model training method and device
CN112330048A (en) * 2020-11-18 2021-02-05 China Everbright Bank Co., Ltd. Scorecard model training method and device, storage medium and electronic device
CN112396189A (en) * 2020-11-27 2021-02-23 China UnionPay Co., Ltd. Method and device for multi-party construction of a federated learning model
CN112396189B (en) * 2020-11-27 2023-09-01 China UnionPay Co., Ltd. Method and device for constructing a federated learning model by multiple parties
CN112801731A (en) * 2021-01-06 2021-05-14 Guangdong University of Technology Federated reinforcement learning method for order-taking decision support
CN112990612A (en) * 2021-05-17 2021-06-18 Hunan Sanxiang Bank Co., Ltd. Prediction system and method based on federated learning
CN113704779A (en) * 2021-07-16 2021-11-26 Hangzhou Yikang Huilian Technology Co., Ltd. Encrypted distributed machine learning training method
CN113537633A (en) * 2021-08-09 2021-10-22 China Telecom Corporation Limited Prediction method, device, equipment, medium and system based on longitudinal federated learning
CN113723688A (en) * 2021-09-01 2021-11-30 Wangyin Online (Beijing) Technology Co., Ltd. Prediction method, prediction device, computer equipment and storage medium
CN113723688B (en) * 2021-09-01 2024-04-19 Wangyin Online (Beijing) Technology Co., Ltd. Prediction method, prediction device, computer equipment and storage medium
CN114219369B (en) * 2022-01-17 2023-08-11 Beijing Dajia Internet Information Technology Co., Ltd. Prediction model training method and device, and user category prediction method and device
CN114219369A (en) * 2022-01-17 2022-03-22 Beijing Dajia Internet Information Technology Co., Ltd. Prediction model training method and device, and user category prediction method and device
CN116383865B (en) * 2022-12-30 2023-10-10 Shanghai Lingshu Zhonghe Information Technology Co., Ltd. Federated learning prediction stage privacy protection method and system
CN116383865A (en) * 2022-12-30 2023-07-04 Shanghai Lingshu Zhonghe Information Technology Co., Ltd. Federated learning prediction stage privacy protection method and system
CN116701972B (en) * 2023-08-09 2023-11-24 Tencent Technology (Shenzhen) Co., Ltd. Service data processing method, device, equipment and medium
CN116701972A (en) * 2023-08-09 2023-09-05 Tencent Technology (Shenzhen) Co., Ltd. Service data processing method, device, equipment and medium

Also Published As

Publication number Publication date
CN110245510B (en) 2021-12-07

Similar Documents

Publication Publication Date Title
CN110245510A (en) Method and apparatus for predicting information
CN110189192B (en) Information recommendation model generation method and device
WO2021179720A1 (en) Federated-learning-based user data classification method and apparatus, and device and medium
CN110399742B (en) Method and device for training and predicting federated migration learning model
US20230023520A1 (en) Training Method, Apparatus, and Device for Federated Neural Network Model, Computer Program Product, and Computer-Readable Storage Medium
EP3876125A1 (en) Model parameter training method based on federated learning, terminal, system and medium
US20220230071A1 (en) Method and device for constructing decision tree
US20230078061A1 (en) Model training method and apparatus for federated learning, device, and storage medium
CN111814985A (en) Model training method under federated learning network and related equipment thereof
CN110493007A (en) Blockchain-based information verification method, apparatus, device and storage medium
CN111081337B (en) Collaborative task prediction method and computer readable storage medium
CN110110811A (en) Method and apparatus for training a model, and method and apparatus for predicting information
CN111428887B (en) Model training control method, device and system based on multiple computing nodes
CN109255210A (en) Method, apparatus and storage medium for providing smart contracts in a blockchain network
CN111563267A (en) Method and device for processing federated feature engineering data
CN111709051A (en) Data processing method, device and system, computer storage medium and electronic equipment
WO2022156594A1 (en) Federated model training method and apparatus, electronic device, computer program product, and computer-readable storage medium
CN108182472A (en) Method and apparatus for generating information
CN110197707A (en) Blockchain-based medical record information processing method, device, medium and electronic equipment
CN107820614A (en) Privacy-enhanced personal search index
CN113051239A (en) Data sharing method, method for using a model applying the data sharing method, and related equipment
CN111259446A (en) Parameter processing method, equipment and storage medium based on federated transfer learning
CN113609781A (en) Automobile production mold optimization method, system, equipment and medium based on federated learning
CN108959642A (en) Method and apparatus for writing information
CN114881247A (en) Longitudinal federated feature derivation method, device and medium based on privacy computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant