CN111310047B - Information recommendation method, device and equipment based on FM model and storage medium - Google Patents


Info

Publication number
CN111310047B
CN111310047B
Authority
CN
China
Prior art keywords
model
terminal
parameter
model parameter
information recommendation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010110426.8A
Other languages
Chinese (zh)
Other versions
CN111310047A (en)
Inventor
周洋磊
裴勇
郑文琛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202010110426.8A priority Critical patent/CN111310047B/en
Publication of CN111310047A publication Critical patent/CN111310047A/en
Application granted granted Critical
Publication of CN111310047B publication Critical patent/CN111310047B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an information recommendation method, device, equipment and storage medium based on an FM model, relating to the field of financial technology. The method comprises the following steps: a coordinating terminal receives a first model parameter sent by a first terminal and a second model parameter sent by a second terminal, wherein the model parameters comprise linear parameters and vector parameters; the coordinating terminal obtains a third model parameter according to the first model parameter and the second model parameter and sends the third model parameter to the first terminal and the second terminal, so that the first terminal updates the first factorization machine (FM) model corresponding to the first model parameter according to the third model parameter, the second terminal updates the second FM model corresponding to the second model parameter according to the third model parameter, and each terminal performs information recommendation according to its updated FM model. The method solves the problem of insufficient user data (namely sample data) when training the FM model, thereby improving the recommendation accuracy of the trained FM model.

Description

Information recommendation method, device and equipment based on FM model and storage medium
Technical Field
The invention relates to the technical field of data processing of financial technology (Fintech), in particular to an information recommendation method, device, equipment and storage medium based on an FM model.
Background
With the development of computer technology, more and more technologies are being applied in the financial field, and the traditional financial industry is gradually shifting to financial technology (Fintech). Data processing technology is no exception; however, the financial industry's requirements for security and real-time performance place higher demands on the technology.
Personalized recommendation is widely applied across the internet: based on user data, features and the like, a user's preferences and interests are learned, and corresponding products, information and the like are recommended to the user. Existing personalized recommendation often faces problems such as insufficient data volume, difficult cold start, and sparse user features. The effectiveness of a recommendation algorithm is related to how densely the user-rating matrix is filled; with insufficient data, the model behind personalized recommendation cannot fully learn user preferences, the recommendation effect is poor, and ultimately user activity and the product's life cycle are affected.
Therefore, when the user data available for training a personalized recommendation model is insufficient, the recommendation accuracy of the resulting model is low.
Disclosure of Invention
The invention mainly aims to provide an information recommendation method, device, equipment and storage medium based on an FM model, aiming to solve the technical problem that, in conventional personalized recommendation training, the amount of user data available for the corresponding model is insufficient and the recommendation accuracy of the resulting model is therefore low.
In order to achieve the above object, the present invention provides an FM model-based information recommendation method, which includes the steps of:
the coordination terminal receives a first model parameter sent by a first terminal and receives a second model parameter sent by a second terminal, wherein the model parameters comprise linear parameters and vector parameters;
and obtaining a third model parameter according to the first model parameter and the second model parameter, and sending the third model parameter to the first terminal and the second terminal, so that the first terminal updates a first factorization machine FM model corresponding to the first model parameter according to the third model parameter, and carries out information recommendation according to the updated first FM model, and the second terminal updates a second FM model corresponding to the second model parameter according to the third model parameter, and carries out information recommendation according to the updated second FM model.
Preferably, after the step of obtaining a third model parameter according to the first model parameter and the second model parameter, and sending the third model parameter to the first terminal and the second terminal, the method further includes:
calculating a first loss value according to the third model parameter, and detecting whether the first FM model and the second FM model meet a preset training end condition or not according to the first loss value;
and if the first FM model and the second FM model are detected to meet the training ending condition, sending first prompt information to the first terminal and the second terminal so as to prompt the first terminal and the second terminal to end the model updating operation according to the first prompt information.
Preferably, the step of calculating a first loss value from the third model parameter comprises:
calculating to obtain a first calculation parameter according to the linear parameter in the third model parameter and the corresponding characteristic data;
calculating to obtain a second calculation parameter according to the vector parameter in the third model parameter and the feature data, and calculating a parameter difference value between the second calculation parameter and the first calculation parameter;
and calculating to obtain a first loss value according to the parameter difference value and the label value corresponding to the characteristic data.
Preferably, after the step of detecting whether the first FM model and the second FM model meet a preset training end condition according to the first loss value, the method further includes:
if the fact that the first FM model and the second FM model do not meet the training end condition is detected, second prompt information is sent to the first terminal and the second terminal, so that the first terminal can calculate to obtain a first gradient value according to a generated second loss value after receiving the second prompt information, the model parameter of the first FM model is updated according to the first gradient value, the second terminal can calculate to obtain a second gradient value according to a generated third loss value after receiving the second prompt information, and the model parameter of the second FM model is updated according to the second gradient value.
Preferably, if it is detected that the first FM model does not meet the training end condition, sending a second prompt message to the first terminal, so that the first terminal calculates a first gradient value according to the generated second loss value after receiving the second prompt message, and updates the model parameter of the first FM model according to the first gradient value, includes:
if the first FM model is detected not to meet the training end condition, second prompt information is sent to the first terminal, so that the first terminal can obtain a first gradient value by deriving the generated second loss value after receiving the second prompt information, a difference value between the third model parameter and the first gradient value is calculated, and the difference value is multiplied by a preset learning rate to obtain an updated model parameter of the first FM model, so that the model parameter of the first FM model is updated.
Preferably, the step of detecting whether the first FM model and the second FM model meet a preset training end condition according to the first loss value includes:
obtaining a fourth loss value obtained by calculation when the model updating operation is executed last time, and calculating a loss difference value between the fourth loss value and the first loss value;
if the loss difference is smaller than a preset threshold value, determining that the first FM model and the second FM model meet a preset training end condition;
and if the loss difference is detected to be larger than or equal to a preset threshold value, determining that the first FM model and the second FM model do not accord with the training end condition.
Preferably, the step of obtaining a third model parameter according to the first model parameter and the second model parameter includes:
calculating parameter average values of the first model parameter and the second model parameter, and determining the parameter average values as third model parameters.
Preferably, before the step in which the coordinating terminal receives the first model parameter sent by the first terminal and the second model parameter sent by the second terminal, where the model parameters include linear parameters and vector parameters, the method further includes:
the coordination terminal generates a public key and a private key and sends the public key to the first terminal and the second terminal;
the steps that the coordinating terminal receives the first model parameter sent by the first terminal and receives the second model parameter sent by the second terminal comprise:
the coordination terminal receives a first model parameter which is sent by the first terminal and encrypted by the public key, and receives a second model parameter which is sent by the second terminal and encrypted by the public key;
the step of obtaining a third model parameter according to the first model parameter and the second model parameter comprises:
decrypting the encrypted first model parameter by using the private key to obtain the decrypted first model parameter, and decrypting the encrypted second model parameter by using the private key to obtain the decrypted second model parameter;
and obtaining a third model parameter according to the decrypted first model parameter and the decrypted second model parameter.
In addition, in order to achieve the above object, the present invention provides an FM model based information recommendation apparatus, including:
the receiving module is used for receiving a first model parameter sent by a first terminal and a second model parameter sent by a second terminal, wherein the model parameters comprise linear parameters and vector parameters;
and the sending module is used for obtaining a third model parameter according to the first model parameter and the second model parameter, sending the third model parameter to the first terminal and the second terminal, so that the first terminal updates a first factorization machine FM model corresponding to the first model parameter according to the third model parameter, and carries out information recommendation according to the updated first FM model, and the second terminal updates a second FM model corresponding to the second model parameter according to the third model parameter, and carries out information recommendation according to the updated second FM model.
In addition, in order to achieve the above object, the present invention further provides an FM model-based information recommendation apparatus, which includes a memory, a processor, and an FM model-based information recommendation program stored on the memory and executable on the processor, wherein the FM model-based information recommendation program, when executed by the processor, implements the steps of the FM model-based information recommendation method corresponding to the federal learning server.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon an FM model-based information recommendation program, which when executed by a processor, implements the steps of the FM model-based information recommendation method as described above.
The invention receives, through a coordinating terminal, a first model parameter sent by a first terminal and a second model parameter sent by a second terminal, then obtains a third model parameter according to the first model parameter and the second model parameter, and sends the third model parameter to the first terminal and the second terminal, so that the first terminal and the second terminal update their corresponding FM models according to the third model parameter and perform information recommendation according to the updated FM models. In this way, the model parameter obtained by the coordinating terminal from integrating the training of the first terminal and the second terminal is returned to the first terminal and the second terminal. Without revealing their own original sample data, the first terminal and the second terminal can, through horizontal federated learning, train their own FM models with the help of sample data generated by the other terminal, which solves the problem of insufficient user data (namely sample data) when training the FM models and thereby improves the recommendation accuracy of the trained FM models.
Drawings
FIG. 1 is a schematic flowchart of a first embodiment of an information recommendation method based on an FM model according to the present invention;
FIG. 2 is a flowchart illustrating a second embodiment of the information recommendation method based on the FM model according to the present invention;
FIG. 3 is a block diagram illustrating the function of an embodiment of an FM model-based information recommendation apparatus according to the present invention;
FIG. 4 is a schematic diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 5 is a logical framework diagram for model updating in an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides an information recommendation method based on an FM model, and referring to FIG. 1, FIG. 1 is a schematic flow diagram of a first embodiment of the information recommendation method based on the FM model.
While a logical order is shown in the flow chart, in some cases, the steps shown or described may be performed in a different order than that shown.
The information recommendation method based on the FM model is applied to a server or a terminal, and the terminal may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palm computer, a Personal Digital Assistant (PDA), and the like, or a fixed terminal such as a digital TV, a desktop computer, and the like. In the embodiments of the information recommendation method based on the FM model, the execution subject is omitted for convenience of description. The information recommendation method based on the FM model comprises the following steps:
and step S10, the coordination terminal receives the first model parameters sent by the first terminal and receives the second model parameters sent by the second terminal, wherein the model parameters comprise linear parameters and vector parameters.
For convenience of distinguishing, in this embodiment, the model parameter sent by the first terminal is denoted as a first model parameter, and the model parameter sent by the second terminal is denoted as a second model parameter. In the present embodiment, the model parameters include linear parameters and vector parameters. It should be noted that the first model parameter is a model parameter of a first FM (Factorization Machine) model in the first terminal, and the second model parameter is a model parameter of a second FM model in the second terminal.
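For illustration only, the following minimal Python sketch (NumPy assumed; all names are illustrative and do not appear in the disclosure) shows how a factorization machine scores one feature vector from a linear parameter and a vector parameter, which is the role the first and second model parameters play in the first and second FM models:

```python
import numpy as np

def fm_predict(w0, w, V, x):
    """Score one sample x with a factorization machine (FM).

    w0 : float            -- global bias
    w  : (n_features,)    -- linear parameters
    V  : (n_features, k)  -- vector parameters (k latent factors per feature)
    x  : (n_features,)    -- feature data for one sample
    """
    linear_term = w0 + w @ x
    # Pairwise interactions via the standard O(n*k) identity:
    # sum_{i<j} <V_i, V_j> x_i x_j = 0.5 * sum_f [(sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2]
    interaction_term = 0.5 * np.sum((V.T @ x) ** 2 - (V ** 2).T @ (x ** 2))
    return linear_term + interaction_term
```

A sigmoid or other link function can be applied to this score when the recommendation task is a click/no-click prediction.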
Specifically, before the first terminal sends the first model parameter to the coordinating terminal, the first terminal initializes the first FM model of the first terminal to obtain the initialized first FM model, and obtains the first model parameter of the initialized first FM model. In the process of initializing the first FM model, the initialization mode may be set according to specific needs, for example, all model parameters in the first FM model may be set to the same value, or may be different values, and the value may be any one of 0 to 1, or may be other values. Further, when the sample data is input into the first FM model, an initialization mode may be carried in the sample data, and then the first terminal may initialize the model parameters of the first FM model according to the initialization mode.
It should be noted that, before the second terminal sends the second model parameter to the coordinating terminal, the processing procedure of the second terminal is consistent with that of the first terminal, and is not repeated herein. In this embodiment, the first sample data of the first terminal and the second sample data of the second terminal have the same feature (feature) and label (label) space, but most of the user identifications (IDs) in the first sample data and the second sample data do not overlap, i.e. they largely correspond to different users. For example, the gender feature exists in both the first sample data and the second sample data, with values including "male" and "female", although the value for a particular record may differ between the two data sets. Different users have different user identifications, i.e. users can be distinguished by their user identifications.
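As an illustration of this horizontally partitioned setting (the concrete column names, values, and user IDs below are hypothetical, not taken from the disclosure), the two terminals hold records with the same feature and label columns but largely disjoint user identifications:

```python
# First terminal's sample data
first_sample_data = [
    {"user_id": "u001", "gender": "male",   "age": 31, "clicked": 1},
    {"user_id": "u002", "gender": "female", "age": 24, "clicked": 0},
]

# Second terminal's sample data: same feature and label columns, different users
second_sample_data = [
    {"user_id": "u900", "gender": "female", "age": 45, "clicked": 1},
    {"user_id": "u901", "gender": "male",   "age": 19, "clicked": 0},
]
```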
Step S20, obtaining a third model parameter according to the first model parameter and the second model parameter, and sending the third model parameter to the first terminal and the second terminal, so that the first terminal updates the first factorization machine FM model corresponding to the first model parameter according to the third model parameter, and recommends information according to the updated first FM model, and the second terminal updates the second FM model corresponding to the second model parameter according to the third model parameter, and recommends information according to the updated second FM model.
And after the coordination terminal obtains the first model parameter and the second model parameter, the coordination terminal obtains a third model parameter according to the first model parameter and the second model parameter, and sends the obtained third model parameter to the first terminal and the second terminal. And after the first terminal receives the third model parameter, the first terminal updates the first FM model corresponding to the first model parameter according to the third model parameter, namely, the third model parameter is used as the model parameter of the first FM model so as to update the first FM model. And after the updated first FM model is obtained, the first terminal can recommend information according to the updated first FM model. And after the second terminal receives the third model parameter, the second terminal updates the second FM model corresponding to the second model parameter according to the third model parameter, namely, the third model parameter is used as the model parameter of the second FM model so as to update the second FM model. And after the updated second FM model is obtained, the second terminal can recommend information according to the updated second FM model.
Note that, an FM model may exist in the coordinating terminal, and this FM model is referred to as a third FM model in this embodiment, and the coordinating terminal updates the third FM model according to the obtained third model parameter.
Further, the step of obtaining a third model parameter according to the first model parameter and the second model parameter includes:
step a, calculating the parameter average value of the first model parameter and the second model parameter, and determining the parameter average value as a third model parameter.
Specifically, after obtaining the first model parameter and the second model parameter, the coordination terminal calculates a parameter average value between the first model parameter and the second model parameter, and determines the parameter average value as a third model parameter. It should be noted that the number of the first model parameters and the second model parameters may be one or more, and the number of the first model parameters and the number of the second model parameters are the same. When there are a plurality of first model parameters, in calculating the parameter average values, the parameter average values between the first model parameters and the corresponding second model parameters are calculated.
Further, a weight may be set for each of the first model parameter and the second model parameter; it should be noted that the sum of the first weight corresponding to the first model parameter and the second weight corresponding to the second model parameter equals 1. After the coordinating terminal obtains the first model parameter and the second model parameter, it calculates the product of the first model parameter and the first weight to obtain a first product, and calculates the product of the second model parameter and the second weight to obtain a second product. The coordinating terminal then calculates the sum of the first product and the second product to obtain the third model parameter. The specific sizes of the first weight and the second weight may be set as needed, and this embodiment does not specifically limit them.
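The two aggregation options described above can be sketched as follows (NumPy assumed; the function names are illustrative):

```python
import numpy as np

def aggregate_mean(first_param, second_param):
    """Third model parameter as the element-wise average of the two."""
    return (np.asarray(first_param) + np.asarray(second_param)) / 2.0

def aggregate_weighted(first_param, second_param, first_weight=0.5):
    """Weighted combination; the first and second weights sum to 1 as required above."""
    second_weight = 1.0 - first_weight
    return first_weight * np.asarray(first_param) + second_weight * np.asarray(second_param)
```

In practice each component of the model parameter set (the linear parameters and the vector parameters) would be aggregated in the same way.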
In the embodiment, the coordination terminal receives the first model parameter sent by the first terminal and the second model parameter sent by the second terminal, then the third model parameter is obtained according to the first model parameter and the second model parameter, the third model parameter is sent to the first terminal and the second terminal, so that the first terminal and the second terminal update the corresponding FM model according to the third model parameter, and information recommendation is performed according to the updated FM model, so that the model parameter obtained by integrating the training of the first terminal and the second terminal by the coordination terminal is returned to the first terminal and the second terminal, and the first terminal and the second terminal can train own FM model by means of sample data generated by other terminals under the condition that own original sample data is not leaked by the first terminal and the second terminal through horizontal federal learning, so as to solve the problem that the data volume of user data (namely, sample data) is insufficient in the process of training the FM model, therefore, the recommendation accuracy of the FM model obtained by training is improved.
Further, the information recommendation method based on the FM model further includes:
and b, coordinating the terminal to generate a public key and a private key, and sending the public key to the first terminal and the second terminal.
Further, the coordinating terminal generates a public key and a private key, transmits the generated public key to the first terminal and the second terminal, and stores the private key. In other embodiments, the coordinating terminal may also send the generated private key to the first terminal and the second terminal. In this embodiment, the coordination terminal may generate the private key and the public key by using an RSA encryption algorithm, or may generate the public key and the private key by using an algorithm such as ECC (Elliptic Curve Cryptography).
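A sketch of the key-generation step using the Python cryptography package (the package choice and the helper name are assumptions; the embodiment only requires an RSA or ECC key pair):

```python
from cryptography.hazmat.primitives.asymmetric import rsa

def generate_keypair():
    """Coordinating terminal: generate an RSA key pair. The public key is sent to the
    first terminal and the second terminal; the private key is stored locally."""
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    return private_key, private_key.public_key()
```

Serializing and transmitting the public key, as well as the hybrid encryption a production system would typically use for large parameter payloads, are omitted here for brevity.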
The step S10 includes:
and c, the coordinating terminal receives the first model parameter which is sent by the first terminal and encrypted by the public key, and receives the second model parameter which is sent by the second terminal and encrypted by the public key.
After the coordination terminal sends the public key to the first terminal and the second terminal, the coordination terminal detects whether a first model parameter sent by the first terminal and encrypted by the public key is received or not and detects whether a second model parameter sent by the second terminal and encrypted by the public key is received or not. It should be noted that, after the first terminal receives the public key, the first terminal encrypts the first model parameter by using the public key to obtain the encrypted first model parameter, and sends the encrypted first model parameter to the coordination terminal. And after the second terminal receives the public key, the second terminal encrypts the second model parameter by using the received public key to obtain the encrypted second model parameter, and sends the encrypted second model parameter to the coordination terminal.
The step of obtaining a third model parameter according to the first model parameter and the second model parameter comprises:
and d, decrypting the encrypted first model parameter by using the private key to obtain the decrypted first model parameter, and decrypting the encrypted second model parameter by using the private key to obtain the decrypted second model parameter.
And e, obtaining a third model parameter according to the decrypted first model parameter and the decrypted second model parameter.
After the coordination terminal receives the encrypted first model parameter and the encrypted second model parameter, the coordination terminal obtains a stored private key, decrypts the encrypted first model parameter by using the private key to obtain a decrypted first model parameter, and decrypts the encrypted second model parameter by using the private key to obtain a decrypted second model parameter. And after the decrypted first model parameter and the decrypted second model parameter are obtained, the coordination terminal obtains a third model parameter according to the decrypted first model parameter and the decrypted second model parameter. Further, after the coordination terminal obtains the third model parameter, the third model parameter is encrypted by using a private key of the coordination terminal to obtain the encrypted third model parameter, and the encrypted third model parameter is sent to the first terminal and the second terminal. And after the first terminal and the second terminal obtain the encrypted third model parameter, the first terminal and the second terminal decrypt the encrypted third model parameter by using the stored public key.
Specifically, referring to fig. 5, fig. 5 is a logic framework diagram of model updating according to an embodiment of the present invention. The coordinating terminal generates a private key and a public key and sends the public key to the first terminal and the second terminal. The first terminal initializes the first model parameter, encrypts the initialized first model parameter with the public key, and sends it to the coordinating terminal; the second terminal initializes the second model parameter, encrypts the initialized second model parameter with the public key, and sends it to the coordinating terminal. The coordinating terminal decrypts the encrypted first model parameter and the encrypted second model parameter with the private key, integrates the first model parameter and the second model parameter to obtain a third model parameter, and judges whether convergence has occurred, namely whether a preset training end condition is met. If so, the calculation stops, namely the operation of updating the FM models ends; if not, the coordinating terminal notifies the first terminal and the second terminal, so that each terminal calculates its loss value, calculates a gradient value from the loss value, and updates the corresponding model parameters according to the gradient value.
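Putting the pieces together, one coordinating-terminal round of the flow in fig. 5 might look like the following sketch (all callables are placeholders to be supplied by the surrounding system; this is not the disclosed implementation):

```python
def coordinator_round(enc_first_param, enc_second_param, private_key, prev_loss,
                      decrypt, aggregate, compute_loss, threshold=1e-4):
    """One aggregation round: decrypt both model parameters, integrate them into the
    third model parameter, evaluate the loss, and test the training end condition."""
    first_param = decrypt(private_key, enc_first_param)
    second_param = decrypt(private_key, enc_second_param)
    third_param = aggregate(first_param, second_param)   # e.g. aggregate_mean above
    first_loss = compute_loss(third_param)                # loss of the third FM model
    converged = prev_loss is not None and (prev_loss - first_loss) < threshold
    return third_param, first_loss, converged
```

If converged is True, the coordinating terminal would send the first prompt information to both terminals; otherwise it sends the second prompt information so that each terminal performs a local gradient update.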
In the embodiment, the data transmitted between the first terminal and the coordination terminal and between the second terminal and the coordination terminal are encrypted, so that the security of the data transmitted between the first terminal and the coordination terminal and between the second terminal and the coordination terminal is improved, and the privacy of the data in the process of training the FM model is further improved.
Further, a second embodiment of the information recommendation method based on the FM model is provided. The second embodiment of the FM model based information recommendation method is different from the first embodiment of the FM model based information recommendation method in that, referring to fig. 2, the FM model based information recommendation method further includes:
step S30, calculating a first loss value according to the third model parameter, and detecting whether the first FM model and the second FM model meet a preset training end condition according to the first loss value.
It should be noted that, because a third FM model exists in the coordination terminal, and each FM model has a corresponding loss function, the coordination terminal may calculate a first loss value through the third model parameter and the loss function, and detect whether the first FM model and the second FM model meet a preset training end condition according to the calculated first loss value. The loss function (loss function) is a function that maps the value of a random event or its related random variables to non-negative real numbers to represent the "risk" or "loss" of the random event. In this embodiment, the type of the loss function is not limited, and the user may select a desired loss function as needed. Specifically, if the first loss value is smaller than the preset loss value, the coordination terminal can determine that the first FM model and the second FM model meet the preset training end condition; if the first loss value is greater than or equal to the preset loss value, the coordination terminal can determine that the first FM model and the second FM model do not meet the training end condition. The preset loss value may be set according to specific needs, and the preset loss value is not limited in this embodiment.
Further, the step of calculating a first loss value from the third model parameter comprises:
and k, calculating to obtain a first calculation parameter according to the linear parameter in the third model parameter and the corresponding characteristic data.
And step l, calculating to obtain a second calculation parameter according to the vector parameter in the third model parameter and the characteristic data, and calculating a parameter difference value between the second calculation parameter and the first calculation parameter.
And m, calculating to obtain a first loss value according to the parameter difference value and the label value corresponding to the characteristic data.
Specifically, in the process of calculating the first loss value according to the third model parameter, the linear parameter in the third model parameter and the feature data corresponding to the linear parameter are obtained, where the feature data is the training data used to train the FM model. The linear parameter is multiplied by the corresponding feature data to obtain the first calculation parameter. The vector parameter in the third model parameter is then obtained; the square of the product of the vector parameter and the feature data is calculated to obtain a first numerical value, the product of the square of the vector parameter and the square of the feature data is calculated to obtain a second numerical value, and the second numerical value is subtracted from the first numerical value to obtain the second calculation parameter. Once the first calculation parameter and the second calculation parameter are obtained, the first calculation parameter is subtracted from the second calculation parameter to obtain the parameter difference between them; the label value corresponding to the feature data is obtained, the parameter difference is multiplied by the label value, and the multiplication result is input into the loss function to obtain the first loss value. It should be noted that the representation of the label value may be set according to specific needs: for example, for feature data represented as 020, the label value of an executed click operation is "1" and the label value of an unexecuted click operation is "0". In this embodiment, since the vector parameter is an array, the obtained parameter difference is also an array.
For the convenience of understanding, the process of calculating the parameter difference is expressed by a formula, and the specific formula is as follows:
L = (Vg × feature)² − Vg² × feature²;  L0 = L − Wg × feature.
wherein L0 represents the parameter difference, L represents the second calculation parameter, Vg represents the vector parameter, feature represents the feature data, and Wg represents the linear parameter.
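Transcribed directly into Python (NumPy assumed; the loss function is left as a placeholder because this embodiment does not fix a particular loss):

```python
import numpy as np

def parameter_difference(W_g, V_g, feature):
    """L0 = L - Wg x feature, with L the second calculation parameter."""
    W_g, V_g, feature = np.asarray(W_g), np.asarray(V_g), np.asarray(feature)
    first_calc = W_g @ feature                                    # first calculation parameter
    L = (V_g.T @ feature) ** 2 - (V_g ** 2).T @ (feature ** 2)    # second calculation parameter
    return L - first_calc                                         # parameter difference (an array)

def first_loss_value(W_g, V_g, feature, label, loss_fn):
    """Multiply the parameter difference by the label value and feed the result
    to the chosen loss function, as described above."""
    return loss_fn(parameter_difference(W_g, V_g, feature) * label)
```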
Further, the step of detecting whether the first FM model and the second FM model meet a preset training end condition according to the first loss value includes:
and f, acquiring a fourth loss value obtained by calculation when the model updating operation is executed last time, and calculating a loss difference value between the fourth loss value and the first loss value.
And g, if the loss difference is smaller than a preset threshold value, determining that the first FM model and the second FM model accord with a preset training end condition.
And h, if the loss difference is detected to be larger than or equal to a preset threshold value, determining that the first FM model and the second FM model do not accord with the training end condition.
Further, the coordination terminal obtains a fourth loss value calculated when the model updating operation is executed last time. It should be noted that, in this embodiment, the updating operation of the FM model is continuous iterative training, and the coordinating terminal calculates the loss value and stores the calculated loss value in each updating operation. And after the coordination terminal obtains the fourth loss value, the coordination terminal calculates a loss difference value between the fourth loss value and the first loss value, and detects whether the loss difference value is smaller than a preset threshold value. The preset threshold is not limited in the embodiment, and the user can set the preset threshold according to specific needs. If the loss difference is smaller than the preset threshold value, determining that the first FM model and the second FM model meet the preset training end condition; and if the loss difference is detected to be larger than or equal to the preset threshold, determining that the first FM model and the second FM model do not accord with the training end condition.
Step S40, if it is detected that the first FM model and the second FM model meet the training end condition, sending a first prompt message to the first terminal and the second terminal, so as to prompt the first terminal and the second terminal to end the model updating operation according to the first prompt message.
And if the fact that the first FM model and the second FM model accord with the training ending condition is detected, the coordination terminal automatically generates first prompt information and sends the first prompt information to the first terminal and the second terminal so as to prompt the first terminal and the second terminal to end the model updating operation according to the first prompt information. In this embodiment, the presentation form of the first prompt information is not limited, for example, the first prompt information may represent a "true" character, and may also represent a "1" character. It is understood that when the first FM model and the second FM model are detected to meet the training end condition, the model updating operation of the first FM model and the second FM model is completed. It is understood that when it is detected that the first FM model and the second FM model meet the training end condition, the FM model in the coordinating terminal also meets the training end condition.
It can be understood that, after the first FM model and the second FM model meet the training end condition, the first terminal may recommend personalized information using the first FM model obtained by the training, the second terminal may recommend personalized information using the second FM model obtained by the training, and the coordination terminal may also recommend personalized information using the third FM model.
In the embodiment, whether the first FM model and the second FM model meet the training ending condition is detected through the loss value so as to determine when to end the training of the first FM model and the second FM model, and the training times are reduced on the basis of ensuring the accuracy of recommendation of the obtained personalized information of the FM models.
Further, a third embodiment of the information recommendation method based on the FM model is provided.
The third embodiment of the FM model based information recommendation method is different from the second embodiment of the FM model based information recommendation method in that the FM model based information recommendation method further includes:
step i, if it is detected that the first FM model and the second FM model do not meet the training end condition, sending second prompt information to the first terminal and the second terminal, so that the first terminal calculates to obtain a first gradient value according to the generated second loss value after receiving the second prompt information, and updates the model parameter of the first FM model according to the first gradient value, and the second terminal calculates to obtain a second gradient value according to the generated third loss value after receiving the second prompt information, and updates the model parameter of the second FM model according to the second gradient value.
If it is detected that the first FM model and the second FM model do not meet the training end condition, that is, the first FM model and the second FM model do not converge, the coordinating terminal automatically generates the second prompt message and sends the second prompt message to the first terminal and the second terminal, where the embodiment does not limit the expression form of the second prompt message, and the second prompt message may be represented as a character such as "false" or "0". After the first terminal receives the second prompt message, the first terminal calculates a first gradient value according to a loss function corresponding to the first FM model, and updates the model parameters of the first FM model according to the first gradient value. And after the second terminal receives the second prompt message, the second terminal calculates a second gradient value according to the loss function corresponding to the second FM model, and updates the model parameter of the second FM model according to the second gradient value.
Further, if it is detected that the first FM model does not meet the training end condition, sending a second prompt message to the first terminal, so that the first terminal calculates a first gradient value according to the generated second loss value after receiving the second prompt message, and updates the model parameter of the first FM model according to the first gradient value, including:
step j, if it is detected that the first FM model does not meet the training end condition, sending second prompt information to the first terminal, so that the first terminal can derive the generated second loss value after receiving the second prompt information to obtain a first gradient value, calculating a difference value between the third model parameter and the first gradient value, and multiplying the difference value by a preset learning rate to obtain an updated model parameter of the first FM model to update the model parameter of the first FM model.
Specifically, if it is detected that the first FM model does not meet the training end condition, the coordination terminal sends the second prompt message to the first terminal. And after the first terminal receives the second prompt message, obtaining a second loss value according to a loss function corresponding to the first FM model, then obtaining a first gradient value by derivation of the second loss value, calculating a difference value between the third model parameter and the first gradient value, multiplying the difference value by a preset learning rate to correspondingly obtain a fourth model parameter corresponding to the third model parameter, and updating the first FM model according to the fourth model parameter, namely determining the fourth model parameter as the updated model parameter of the first FM model so as to update the model parameter of the first FM model. It should be noted that the learning rate may be set according to specific needs, and the present embodiment does not specifically limit the magnitude of the learning rate, for example, the learning rate may be set to decrease with the decrease of the first gradient value.
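A literal sketch of this update step (names are illustrative): the first gradient value is the derivative of the second loss value, and the updated parameter is the difference between the third model parameter and the gradient, multiplied by the preset learning rate. This follows the wording of the embodiment as written; note that a conventional gradient step would instead be parameter − learning_rate × gradient.

```python
def update_first_fm_model(third_param, first_gradient, learning_rate):
    """Update rule as stated in this embodiment:
    fourth_param = (third_param - first_gradient) * learning_rate."""
    fourth_param = (third_param - first_gradient) * learning_rate
    return fourth_param  # becomes the updated model parameter of the first FM model
```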
It should be noted that, after the second terminal receives the second prompt message, the executed operation is consistent with that of the first terminal, and is not repeated herein.
According to the embodiment, when the fact that the FM model does not meet the training end condition is detected, the model parameters of the FM model are updated according to the gradient values obtained through the loss values, and therefore the accuracy rate of the obtained FM model for recommending the personalized information is improved.
In addition, the present invention also provides an FM model based information recommendation apparatus, and referring to fig. 3, the FM model based information recommendation apparatus includes:
a receiving module 10, configured to receive a first model parameter sent by a first terminal and a second model parameter sent by a second terminal, where the model parameters include a linear parameter and a vector parameter;
a sending module 20, configured to obtain a third model parameter according to the first model parameter and the second model parameter, and send the third model parameter to the first terminal and the second terminal, so that the first terminal updates the first factorization machine FM model corresponding to the first model parameter according to the third model parameter, and performs information recommendation according to the updated first FM model, and the second terminal updates the second FM model corresponding to the second model parameter according to the third model parameter, and performs information recommendation according to the updated second FM model.
Further, the FM model-based information recommendation apparatus further includes:
the detection module is used for calculating a first loss value according to the third model parameter and detecting whether the first FM model and the second FM model meet a preset training end condition or not according to the first loss value;
the sending module 20 is further configured to send a first prompt message to the first terminal and the second terminal if it is detected that the first FM model and the second FM model meet the training end condition, so as to prompt the first terminal and the second terminal to end the model updating operation according to the first prompt message.
Further, the sending module 20 is further configured to send a second prompt message to the first terminal and the second terminal if it is detected that the first FM model and the second FM model do not meet the training end condition, so that the first terminal calculates a first gradient value according to the generated second loss value after receiving the second prompt message and updates the model parameter of the first FM model according to the first gradient value, and the second terminal calculates a second gradient value according to the generated third loss value after receiving the second prompt message and updates the model parameter of the second FM model according to the second gradient value.
Further, the sending module 20 is further configured to send a second prompt message to the first terminal if it is detected that the first FM model does not meet the training end condition, so that the first terminal obtains a first gradient value by deriving the generated second loss value after receiving the second prompt message, calculates a difference between the third model parameter and the first gradient value, and multiplies a preset learning rate by the difference to obtain an updated model parameter of the first FM model, so as to update the model parameter of the first FM model.
Further, the detection module includes:
the obtaining unit is used for obtaining a fourth loss value obtained by calculation when the model updating operation is executed last time;
a first calculation unit configured to calculate a loss difference between the fourth loss value and the first loss value;
a first determining unit, configured to determine that the first FM model and the second FM model meet a preset training end condition if it is detected that the loss difference is smaller than a preset threshold; and if the loss difference is detected to be larger than or equal to a preset threshold value, determining that the first FM model and the second FM model do not accord with the training end condition.
Further, the sending module 20 includes:
the second calculation unit is used for calculating parameter average values of the first model parameters and the second model parameters;
a second determining unit, configured to determine the parameter average as a third model parameter.
Further, the FM model-based information recommendation apparatus further includes:
the generating module is used for generating a public key and a private key;
the sending module 20 is further configured to send the public key to the first terminal and the second terminal;
the receiving module 10 is further configured to receive a first model parameter sent by the first terminal and encrypted by using the public key, and receive a second model parameter sent by the second terminal and encrypted by using the public key;
the sending module 20 is further configured to decrypt the encrypted first model parameter with the private key to obtain a decrypted first model parameter, and decrypt the encrypted second model parameter with the private key to obtain a decrypted second model parameter; and obtaining a third model parameter according to the decrypted first model parameter and the decrypted second model parameter.
The specific implementation of the information recommendation device based on the FM model of the present invention is basically the same as that of each embodiment of the information recommendation method based on the FM model, and is not described herein again.
In addition, the invention also provides information recommendation equipment based on the FM model. As shown in fig. 4, fig. 4 is a schematic structural diagram of a hardware operating environment according to an embodiment of the present invention.
It should be noted that fig. 4 is a schematic structural diagram of a hardware operating environment of an information recommendation device based on an FM model. The information recommendation device based on the FM model in the embodiment of the invention can be a terminal device such as a PC, a portable computer and the like.
As shown in fig. 4, the FM model-based information recommendation apparatus may include: a processor 1001, such as a CPU, a memory 1005, a user interface 1003, a network interface 1004, a communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the structure of the FM-model-based information recommendation device shown in fig. 4 does not constitute a limitation of the FM-model-based information recommendation device, and may include more or less components than those shown, or combine some components, or arrange different components.
As shown in fig. 4, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and an FM model-based information recommendation program. The operating system is a program for managing and controlling hardware and software resources of the information recommendation device based on the FM model, and supports the running of the information recommendation program based on the FM model and other software or programs.
In the FM-model-based information recommendation apparatus shown in fig. 4, the user interface 1003 is mainly used for connecting the first terminal and the second terminal and performing data communication with them; the network interface 1004 is mainly used for connecting to a background server and performing data communication with the background server; the processor 1001 may be configured to call the FM model based information recommendation program stored in the memory 1005 and execute the steps of the FM model based information recommendation method as described above.
The specific implementation of the information recommendation device based on the FM model of the present invention is basically the same as that of the above information recommendation method based on the FM model, and is not described herein again.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where an FM model-based information recommendation program is stored on the computer-readable storage medium, and when executed by a processor, the FM model-based information recommendation program implements the steps of the FM model-based information recommendation method described above.
The specific implementation manner of the computer-readable storage medium of the present invention is substantially the same as that of each embodiment of the information recommendation method based on the FM model, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An information recommendation method based on an FM model is characterized by comprising the following steps:
the coordination terminal receives a first model parameter sent by a first terminal and receives a second model parameter sent by a second terminal, wherein the model parameters comprise linear parameters and vector parameters;
obtaining a third model parameter according to the first model parameter and the second model parameter, and sending the third model parameter to the first terminal and the second terminal, so that the first terminal updates a first factorization machine FM model corresponding to the first model parameter according to the third model parameter and carries out information recommendation according to the updated first FM model, and the second terminal updates a second FM model corresponding to the second model parameter according to the third model parameter and carries out information recommendation according to the updated second FM model;
calculating a first loss value according to the third model parameter and a loss function corresponding to a third FM model in the coordination terminal, and detecting whether the first FM model and the second FM model meet a preset training end condition or not according to the first loss value;
and if it is detected that the first FM model and the second FM model meet the training end condition, sending first prompt information to the first terminal and the second terminal, so as to prompt the first terminal and the second terminal to end the model updating operation according to the first prompt information.
2. The FM-model-based information recommendation method of claim 1, wherein the step of calculating a first loss value according to the third model parameter and a loss function corresponding to a third FM model in the coordination terminal comprises:
calculating a first calculation parameter according to the linear parameter in the third model parameter and the corresponding feature data;
calculating a second calculation parameter according to the vector parameter in the third model parameter and the feature data, and calculating a parameter difference value between the second calculation parameter and the first calculation parameter;
and calculating the first loss value according to the parameter difference value and the label value corresponding to the feature data.
3. The FM-model-based information recommendation method according to claim 1, wherein after the step of detecting whether the first FM model and the second FM model meet the preset training end condition according to the first loss value, the method further comprises:
if it is detected that the first FM model and the second FM model do not meet the training end condition, sending second prompt information to the first terminal and the second terminal, so that the first terminal, after receiving the second prompt information, calculates a first gradient value according to a generated second loss value and updates the model parameter of the first FM model according to the first gradient value, and the second terminal, after receiving the second prompt information, calculates a second gradient value according to a generated third loss value and updates the model parameter of the second FM model according to the second gradient value.
4. The FM-model-based information recommendation method as claimed in claim 3, wherein the step of, if it is detected that the first FM model does not meet the training end condition, sending the second prompt information to the first terminal so that the first terminal, after receiving the second prompt information, calculates the first gradient value according to the generated second loss value and updates the model parameter of the first FM model according to the first gradient value comprises:
if it is detected that the first FM model does not meet the training end condition, sending the second prompt information to the first terminal, so that the first terminal, after receiving the second prompt information, obtains the first gradient value by taking the derivative of the generated second loss value, calculates a difference value between the third model parameter and the first gradient value, and multiplies the difference value by a preset learning rate to obtain an updated model parameter of the first FM model, thereby updating the model parameter of the first FM model.
5. The FM-model-based information recommendation method according to claim 1, wherein the step of detecting whether the first FM model and the second FM model meet a preset training end condition according to the first loss value comprises:
obtaining a fourth loss value calculated when the model updating operation was executed last time, and calculating a loss difference value between the fourth loss value and the first loss value;
if the loss difference value is smaller than a preset threshold value, determining that the first FM model and the second FM model meet the preset training end condition;
and if the loss difference value is larger than or equal to the preset threshold value, determining that the first FM model and the second FM model do not meet the training end condition.
6. The FM-model-based information recommendation method of claim 1, wherein the step of obtaining a third model parameter according to the first model parameter and the second model parameter comprises:
calculating a parameter average value of the first model parameter and the second model parameter, and determining the parameter average value as the third model parameter.
7. The FM-model-based information recommendation method of claim 1, wherein before the step of the coordination terminal receiving the first model parameter sent by the first terminal and receiving the second model parameter sent by the second terminal, in which the model parameters comprise linear parameters and vector parameters, the method further comprises:
the coordination terminal generates a public key and a private key and sends the public key to the first terminal and the second terminal;
the step of the coordination terminal receiving the first model parameter sent by the first terminal and receiving the second model parameter sent by the second terminal comprises:
the coordination terminal receives a first model parameter which is sent by the first terminal and encrypted by the public key, and receives a second model parameter which is sent by the second terminal and encrypted by the public key;
the step of obtaining a third model parameter according to the first model parameter and the second model parameter comprises:
decrypting the encrypted first model parameter by using the private key to obtain the decrypted first model parameter, and decrypting the encrypted second model parameter by using the private key to obtain the decrypted second model parameter;
and obtaining a third model parameter according to the decrypted first model parameter and the decrypted second model parameter.
8. An information recommendation device based on an FM model, characterized by comprising:
a receiving module, configured to receive a first model parameter sent by a first terminal and a second model parameter sent by a second terminal, wherein the model parameters comprise linear parameters and vector parameters;
a sending module, configured to obtain a third model parameter according to the first model parameter and the second model parameter and send the third model parameter to the first terminal and the second terminal, so that the first terminal updates a first factorization machine FM model corresponding to the first model parameter according to the third model parameter and carries out information recommendation according to the updated first FM model, and the second terminal updates a second FM model corresponding to the second model parameter according to the third model parameter and carries out information recommendation according to the updated second FM model;
a detection module, configured to calculate a first loss value according to the third model parameter and a loss function corresponding to a third FM model in the coordination terminal, and detect whether the first FM model and the second FM model meet a preset training end condition according to the first loss value;
the sending module is further configured to send first prompt information to the first terminal and the second terminal if it is detected that the first FM model and the second FM model meet the training end condition, so as to prompt the first terminal and the second terminal to end the model updating operation according to the first prompt information.
9. An FM-model-based information recommendation apparatus, characterized by comprising a memory, a processor, and an FM-model-based information recommendation program stored on the memory and executable on the processor, wherein the FM-model-based information recommendation program, when executed by the processor, implements the steps of the FM-model-based information recommendation method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon an FM model-based information recommendation program, which when executed by a processor implements the steps of the FM model-based information recommendation method according to any one of claims 1 to 7.
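The first loss value recited in claims 1 and 2 is computed at the coordination terminal from the aggregated linear and vector parameters together with feature data and the corresponding label values. The sketch below illustrates that computation under two assumptions the claims leave open: the third FM model takes the standard second-order factorization machine form, and the loss function is squared error; the variable names (w0, w, V, x) are illustrative only.

    import numpy as np

    def fm_predict(w0, w, V, x):
        """Second-order factorization machine output for one sample.
        w0: bias, w: linear parameters (n,), V: vector parameters (n, k), x: feature data (n,)."""
        linear_term = w0 + np.dot(w, x)
        # Pairwise interaction term in O(n*k):
        # 0.5 * sum_f [ (sum_i V[i,f]*x[i])**2 - sum_i (V[i,f]*x[i])**2 ]
        vx = V.T @ x                                   # shape (k,)
        interaction_term = 0.5 * np.sum(vx ** 2 - (V ** 2).T @ (x ** 2))
        return linear_term + interaction_term

    def first_loss_value(w0, w, V, features, labels):
        """Mean squared error of the third FM model over the coordinator-side samples."""
        preds = np.array([fm_predict(w0, w, V, x) for x in features])
        return float(np.mean((preds - labels) ** 2))

Reading the subtraction inside the interaction term as the "parameter difference value" of claim 2 is an interpretation; the claim itself does not spell out the formula.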
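Claims 1, 5 and 6 together describe one coordination round of what is commonly called federated averaging: average the two uploaded parameter sets into the third model parameter, evaluate the loss of the coordinator's third FM model, compare it with the loss of the previous round, and tell the terminals whether to stop. A minimal sketch under those assumptions, with the prompt information reduced to plain strings and each parameter set held as a name-to-value dictionary:

    def aggregate_parameters(first_params, second_params):
        """Claim 6: the third model parameter is the element-wise average of the
        first and second model parameters (linear and vector parts alike)."""
        return {name: (first_params[name] + second_params[name]) / 2.0
                for name in first_params}

    def meets_training_end_condition(first_loss, fourth_loss, threshold=1e-4):
        """Claim 5: compare the loss difference between the previous round's loss
        (the fourth loss value) and the current first loss value with a preset
        threshold; taking the absolute difference is an assumption made here."""
        return abs(fourth_loss - first_loss) < threshold

    def coordination_round(first_params, second_params, fourth_loss, compute_first_loss):
        """One model-updating round as seen from the coordination terminal;
        compute_first_loss stands in for the third FM model's loss function."""
        third_params = aggregate_parameters(first_params, second_params)
        first_loss = compute_first_loss(third_params)
        if meets_training_end_condition(first_loss, fourth_loss):
            prompt = "first prompt information"   # terminals end the model updating operation
        else:
            prompt = "second prompt information"  # terminals perform another local update
        return third_params, first_loss, prompt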
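When a terminal receives the second prompt information (claims 3 and 4), it differentiates its local loss value and moves its FM model parameters by a preset learning rate. The sketch below uses the conventional gradient-descent update (parameter minus learning rate times gradient); claim 4's literal wording orders the subtraction and multiplication differently, so this form is an assumed interpretation rather than the claim language. The gradients shown are those of the squared-error FM loss from the first sketch.

    import numpy as np

    def fm_gradients(w0, w, V, x, label):
        """Gradients of the per-sample squared-error loss of a second-order FM."""
        vx = V.T @ x                                   # shape (k,)
        pred = w0 + np.dot(w, x) + 0.5 * np.sum(vx ** 2 - (V ** 2).T @ (x ** 2))
        err = 2.0 * (pred - label)
        grad_w0 = err
        grad_w = err * x
        # d(pred)/d(V[i, f]) = x[i] * (vx[f] - V[i, f] * x[i])
        grad_V = err * (np.outer(x, vx) - V * (x ** 2)[:, None])
        return grad_w0, grad_w, grad_V

    def local_gradient_update(params, grads, learning_rate=0.01):
        """Update each model parameter with its gradient value and the preset learning rate."""
        return tuple(p - learning_rate * g for p, g in zip(params, grads))

In use, a terminal would evaluate fm_gradients on the third model parameter it just received, average the per-sample gradients over its local batch, and then apply local_gradient_update before uploading the result for the next round.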
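Claim 7 protects the uploaded parameters in transit: the coordination terminal generates a key pair, distributes the public key, the terminals encrypt their model parameters with it, and the coordination terminal decrypts them with its private key before aggregation. The claim does not name a cryptosystem; the sketch below uses the Paillier scheme from the third-party phe package purely as one plausible choice (Paillier is common in federated-learning work because ciphertexts can also be added without decryption) and encrypts each parameter value individually.

    from phe import paillier  # third-party python-paillier package (pip install phe)

    # Coordination terminal: generate the key pair and send the public key to both terminals.
    public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

    # First terminal (the second terminal behaves analogously): encrypt each parameter value.
    first_model_parameter = {"w_1": 0.42, "v_1_0": -0.13}   # illustrative values only
    encrypted_first = {k: public_key.encrypt(v) for k, v in first_model_parameter.items()}

    # Coordination terminal: decrypt with the private key, then aggregate as in claim 6.
    decrypted_first = {k: private_key.decrypt(c) for k, c in encrypted_first.items()}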
CN202010110426.8A 2020-02-20 2020-02-20 Information recommendation method, device and equipment based on FM model and storage medium Active CN111310047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010110426.8A CN111310047B (en) 2020-02-20 2020-02-20 Information recommendation method, device and equipment based on FM model and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010110426.8A CN111310047B (en) 2020-02-20 2020-02-20 Information recommendation method, device and equipment based on FM model and storage medium

Publications (2)

Publication Number Publication Date
CN111310047A CN111310047A (en) 2020-06-19
CN111310047B (en) 2021-04-23

Family

ID=71158577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010110426.8A Active CN111310047B (en) 2020-02-20 2020-02-20 Information recommendation method, device and equipment based on FM model and storage medium

Country Status (1)

Country Link
CN (1) CN111310047B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111768008B (en) * 2020-06-30 2023-06-16 平安科技(深圳)有限公司 Federal learning method, apparatus, device, and storage medium
CN113487351A (en) * 2021-07-05 2021-10-08 哈尔滨工业大学(深圳) Privacy protection advertisement click rate prediction method, device, server and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107729488A (en) * 2017-10-17 2018-02-23 北京搜狐新媒体信息技术有限公司 A kind of information recommendation method and device
CN109165515A (en) * 2018-08-10 2019-01-08 深圳前海微众银行股份有限公司 Model parameter acquisition methods, system and readable storage medium storing program for executing based on federation's study
EP3447657A1 (en) * 2016-04-20 2019-02-27 Huizhou TCL Mobile Communication Co., Ltd. Social information search system and search method thereof
CN109816491A (en) * 2019-01-18 2019-05-28 创新奇智(北京)科技有限公司 Method based on successive Regression removal factorization machine cross term

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11250347B2 (en) * 2018-06-27 2022-02-15 Microsoft Technology Licensing, Llc Personalization enhanced recommendation models
CN110516161B (en) * 2019-08-30 2021-06-01 深圳前海微众银行股份有限公司 Recommendation method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3447657A1 (en) * 2016-04-20 2019-02-27 Huizhou TCL Mobile Communication Co., Ltd. Social information search system and search method thereof
CN107729488A (en) * 2017-10-17 2018-02-23 北京搜狐新媒体信息技术有限公司 A kind of information recommendation method and device
CN109165515A (en) * 2018-08-10 2019-01-08 深圳前海微众银行股份有限公司 Model parameter acquisition methods, system and readable storage medium storing program for executing based on federation's study
CN109816491A (en) * 2019-01-18 2019-05-28 创新奇智(北京)科技有限公司 Method based on successive Regression removal factorization machine cross term

Also Published As

Publication number Publication date
CN111310047A (en) 2020-06-19

Similar Documents

Publication Publication Date Title
US20210232974A1 (en) Federated-learning based method of acquiring model parameters, system and readable storage medium
CN109284313B (en) Federal modeling method, device and readable storage medium based on semi-supervised learning
CN109165725B (en) Neural network federal modeling method, equipment and storage medium based on transfer learning
CN109255444B (en) Federal modeling method and device based on transfer learning and readable storage medium
EP2405379A1 (en) Generating a challenge response image including a recognizable image
CN111325352B (en) Model updating method, device, equipment and medium based on longitudinal federal learning
CN111310047B (en) Information recommendation method, device and equipment based on FM model and storage medium
CN110751294A (en) Model prediction method, device, equipment and medium combining multi-party characteristic data
CN111324812B (en) Federal recommendation method, device, equipment and medium based on transfer learning
CN111159570B (en) Information recommendation method and server
WO2021174877A1 (en) Processing method for smart decision-based target detection model, and related device
CN109325357B (en) RSA-based information value calculation method, device and readable storage medium
US11954536B2 (en) Data engine
CN112926073A (en) Federal learning modeling optimization method, apparatus, medium, and computer program product
CN111339412A (en) Longitudinal federal recommendation recall method, device, equipment and readable storage medium
CN111553744A (en) Federal product recommendation method, device, equipment and computer storage medium
CN111027981B (en) Method and device for multi-party joint training of risk assessment model for IoT (Internet of things) machine
CN111475628A (en) Session data processing method, device, computer equipment and storage medium
CN111343265B (en) Information pushing method, device, equipment and readable storage medium
CN111368314A (en) Modeling and predicting method, device, equipment and storage medium based on cross features
KR20210020885A (en) Improved data integrity with trusted proof of code tokens
CN111553742A (en) Federal product recommendation method, device, equipment and computer storage medium
CN111553743A (en) Federal product recommendation method, device, equipment and computer storage medium
US11165753B1 (en) Secure data communication for user interfaces
CN112905904B (en) Recommendation method, recommendation device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant