CN115409203A - Federal recommendation method and system based on model independent meta learning - Google Patents
- Publication number
- CN115409203A (application number CN202210879263.9A)
- Authority
- CN
- China
- Prior art keywords
- client
- model
- recommendation model
- recommendation
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/958—Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a federated recommendation method and system based on model-agnostic meta-learning. The method comprises the following steps: 1) the server selects a plurality of clients and sends the selected recommendation model to each client; 2) each client divides its local data into a support set and a query set; 3) each client trains and updates the received recommendation model on its local support set; 4) each client validates the updated recommendation model on its query set and computes the model's gradient; 5) the server collects the gradients computed by the clients, updates the recommendation model based on the average gradient, and sends it to each client for the next round of training; 6) steps 3)-5) are repeated until a set condition is met, and each client obtains a common recommendation model; 7) each client trains the common recommendation model with its local data to obtain its personalized recommendation model; 8) client m inputs the interaction data of the target application scenario into its personalized recommendation model to obtain a personalized recommendation result.
Description
Technical Field
The invention relates to the field of recommendation systems, and in particular to a federated recommendation method and system based on model-agnostic meta-learning.
Background
A recommendation system helps users effectively discover the most useful information or services under information overload by learning users' tastes and preferences. Existing deep-learning-based recommendation methods typically require centralized storage of all user-item interaction data to learn deep neural networks and representations of users and items, which means users' private data must be uploaded and aggregated. However, user-item interaction data is highly privacy-sensitive, and its transmission can cause privacy problems and data leakage. Under the pressure of strict data-protection regulations such as the General Data Protection Regulation (GDPR), user behavior data may not be used without the user's explicit consent. These centrally trained recommendation models may therefore no longer be applicable in the future.
Federated learning is a machine-learning technique that jointly learns intelligent models from scattered user data. Unlike existing machine-learning methods based on centralized storage of user data, in federated learning the user data is kept locally on the user's device to protect privacy to the maximum extent. Because of this property, much research combines federated learning with recommendation systems to achieve privacy-preserving recommendation. Federated recommendation trains a global recommendation model in a decentralized manner and distributes it to users' devices for personalized recommendation. For example, in the federated collaborative filtering method FCF, each user device locally computes the gradients of the user and item embeddings from the personal ratings stored on the device. The item-embedding gradients are then uploaded and aggregated on a central server to update the global item embeddings, while the user embeddings are updated locally. Finally, the user's recommendation results are computed from the dot product of the user embedding and the item embeddings.
However, two problems remain unresolved in existing federated recommendation methods. First, typical federated learning develops only one common model for all users and does not adapt the model to each user. This is an important missing feature, especially given the heterogeneous distribution of historical interaction data across users. This high heterogeneity shows in two aspects: (i) a user's interactions with items depend largely on the user's own interests and preferences, and the interests of different users vary widely, so the item types different users interact with are very diverse; (ii) the number of interacted items also differs greatly between users. Therefore one global recommendation model cannot meet the personalized recommendation needs of different users. Second, the recommendation models applied by existing federated recommendation methods are mostly traditional collaborative filtering and matrix factorization methods, and the exploration and application of more advanced self-attention-based models is lacking. In recent years, inspired by the Transformer model for machine translation, applying the self-attention mechanism to recommendation systems has become a research trend. A self-attention-based recommendation model can emphasize the truly relevant and important interactions in a sequence while down-weighting irrelevant ones, and therefore has higher flexibility and expressive power than traditional collaborative filtering and matrix factorization models.
Existing federated recommendation methods have not explored the implementation and application of the self-attention mechanism within the federated framework; as a result, although they protect user privacy, their recommendation accuracy lags considerably behind the most advanced recommendation models.
Disclosure of Invention
To overcome the shortcomings of existing federated recommendation methods, the invention provides a federated recommendation method and system based on model-agnostic meta-learning. The personalization of the recommendation model is improved by introducing model-agnostic meta-learning (MAML) into the federated-learning framework. At the same time, a self-attention-based recommendation model is implemented within the federated-learning framework, effectively improving recommendation performance.
Specifically, the invention treats the local training process of each user as a training task, each consisting of a mutually disjoint support set and query set. The recommendation model is trained on the support set, and its loss on the query set is then computed. The gradient of this loss is uploaded to the central server, where the global model is updated accordingly. The updated global model is then distributed to the user devices for a new round of training and updating. The goal of the invention is to find an initial shared model (i.e. a common initial point learned via MAML) that new users can easily adapt to their local datasets by performing one or several gradient-descent steps on their own data. Thus, while the initial model is derived in a distributed manner among all users, the final model obtained by each user is highly personalized to the user's own data.
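The round trip described above — inner adaptation on the support set, query-set gradient upload, server-side averaging — can be sketched on a toy linear-regression task (a first-order sketch; all dimensions, learning rates and client counts below are illustrative assumptions, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
theta = np.zeros(d)          # global model held by the server
alpha, beta = 0.1, 0.3       # local / global learning rates

def make_client():
    """A client whose local data follows y = X @ w for a private w."""
    w = rng.normal(size=d)
    X = rng.normal(size=(40, d))
    y = X @ w
    # disjoint support / query split, as in step 2) of the method
    return (X[:20], y[:20]), (X[20:], y[20:])

clients = [make_client() for _ in range(10)]

def grad(th, X, y):          # gradient of the mean squared error
    return 2 * X.T @ (X @ th - y) / len(y)

def loss(th, X, y):
    return np.mean((X @ th - y) ** 2)

for _ in range(50):          # communication rounds
    grads = []
    for (Xs, ys), (Xq, yq) in clients:
        theta_m = theta - alpha * grad(theta, Xs, ys)  # inner update on support
        grads.append(grad(theta_m, Xq, yq))            # query-set gradient
    theta = theta - beta * np.mean(grads, axis=0)      # server averages, updates

# a new client adapts the common initial point with one gradient step
(Xs, ys), (Xq, yq) = make_client()
adapted = theta - alpha * grad(theta, Xs, ys)
```

After meta-training, `theta` plays the role of the common initial point: a single support-set gradient step yields the adapted, personalized model.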
The technical content of the invention comprises:
A federated recommendation method based on model-agnostic meta-learning comprises the following steps:
The central server randomly selects a subset of user clients for this round of recommendation-model training and sends this round's global model parameters to these clients; the recommendation model may employ SASRec, a classic self-attention-based recommendation model.
The client participating in training divides the local data into a support set and a query set according to a set proportion;
the client trains and updates the received recommendation model based on the support set;
verifying the updated recommendation model on the query set and calculating the loss and gradient of the recommendation model;
the server collects and averages the gradients calculated by all the clients participating in training, updates the recommendation model based on the average gradient and performs the next training round until all the clients obtain a common recommendation model;
each client trains the public recommendation model by using local data to obtain an individual recommendation model of each client;
the client m inputs the interactive data in the local target application scene into the personalized recommendation model corresponding to the client m to obtain a corresponding personalized recommendation result; and the client m is the client corresponding to the user m selected by the server.
Further, the server selects a subset U_t of the set of user clients U, and then sends the model parameters θ_t of the recommendation model to each client in U_t; the local model parameters of client m are initialized as θ_m = θ_t.
Further, the method for the client m to divide the local data into the support set and the query set includes:
21) The local interaction sequence data S^m of user m is divided into k interaction sessions, i.e. S^m = {S^{m,1}, ..., S^{m,k}}, where the k-th session is S^{m,k} = (s^{m,k}_1, ..., s^{m,k}_{t_{m,k}}), t_{m,k} is the total number of time steps of session S^{m,k} (i.e. the session length), and s^{m,k}_j is the item interacted with at the j-th time step;
22) the k interaction sessions are split into two mutually disjoint parts, yielding a support set D_m^S = {(x_i, y_i)} and a query set D_m^Q = {(x'_i, y'_i)}; (x_i, y_i) is the i-th sample in the support set, x_i is the item sequence of an interaction session a excluding its last item, and y_i is the last item of session a; (x'_i, y'_i) is the i-th sample in the query set, x'_i is the item sequence of an interaction session b excluding its last item, and y'_i is the last item of session b; |D_m^S| is the total number of samples in the support set and |D_m^Q| the total number of samples in the query set.
Further, the method for training and updating the received recommendation model by using the model independent meta learning method comprises the following steps:
31) The recommendation model comprises an embedding layer and an attention layer, where the attention layer comprises a self-attention mechanism and a two-layer feed-forward neural network; the model parameters consist of the embedding-layer parameters θ_e and the attention-layer parameters θ_a, where θ_e = {M_I, P} and θ_a = {W^Q, W^K, W^V, W^{(1)}, W^{(2)}, b^{(1)}, b^{(2)}}; M_I is the item embedding matrix, P is a learnable position matrix, W^Q, W^K, W^V are the weight matrices for query, key and value in the self-attention mechanism, W^{(1)} and b^{(1)} are the weight matrix and bias vector of the first feed-forward layer, and W^{(2)} and b^{(2)} those of the second feed-forward layer;
32) For x_i in the i-th sample of the support set D_m^S, first convert it into a fixed-length sequence, then convert each item in the sequence into a one-hot vector and multiply it by the item embedding matrix M_I to obtain the input embedding matrix I corresponding to x_i; combine the learnable position matrix P with the input embedding matrix I to obtain the embedding-layer output E = I + P;
33 Input the matrix E into the attention mechanism of the attention layer to obtain an interest representation S of the user m;
34) Feed the interest representation S into the two-layer feed-forward neural network connected in series, with ReLU as the activation function, obtaining the output FFN(S) = ReLU(SW^{(1)} + b^{(1)})W^{(2)} + b^{(2)};
35) Process the FFN(S) obtained in step 34) sequentially through the layer-normalization, residual-connection and dropout units to obtain the user interest representation S';
36) Predict the preference score of user m for item i based on the obtained S': R_i = S' · m_i, where m_i is the embedding vector corresponding to item i in the sequence;
37 Rank the items based on the preference scores of user m for the items; selecting K items with the highest preference scores to obtain a recommended item list; calculating the cross entropy loss according to the recommended item list and the actual item list in the training data
38) Compute the gradient from the cross-entropy loss L_{D_m^S}(θ_m) and update the local recommendation model, obtaining the updated model parameters θ'_m.
Further, the cross-entropy loss is L_{D_m^S}(θ_m) = (1/|D_m^S|) Σ_{(x,y)∈D_m^S} l(f_{θ_m}(x), y), where l is the loss function, f_{θ_m} is the current local recommendation model of client m, and (x, y) is a training sample in the support set.
Further, the gradient method for calculating the recommendation model comprises the following steps:
41) Client m inputs the training samples of the query set into the updated recommendation model f_{θ'_m} and computes the cross-entropy loss L_{D_m^Q}(θ'_m).
A federated recommendation system based on model-agnostic meta-learning, characterized in that it comprises a plurality of clients and a server; wherein:
the client is used for dividing local data for training the recommendation model into a support set and a query set, and then training and updating the received recommendation model by adopting a model independent meta-learning method based on the local support set; verifying the updated recommendation model on a query set and calculating the gradient of the recommendation model;
the server is used for sending the selected recommendation model to each client; collecting and averaging the gradients calculated by the clients, updating the recommendation model based on the average gradient, and sending the updated recommendation model to the clients for the next round of training; stopping training when a set condition is reached, and obtaining a common recommendation model by each client;
the client trains the public recommendation model by using local data to obtain an individualized recommendation model corresponding to the client; and inputting the interactive data in the local target application scene into the corresponding personalized recommendation model to obtain a personalized recommendation result.
A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the above method when executed.
An electronic device comprising a memory and a processor, wherein the memory stores a program that performs the above described method.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention provides a personalized federated-learning recommendation variant based on model-agnostic meta-learning that finds a model initial point shared among all users, such that after each user performs one or several gradient-descent steps on his own loss function, the resulting model performs well. The method generates this common initial point across different users via MAML, and each user subsequently trains a recommendation model starting from it. Federated learning serves as the outer framework that keeps users' private data from leaking throughout the learning process; that is, the invention uses federated learning as the outer framework and MAML as the inner training method.
2. The invention explores and implements a self-attention-based recommendation model within the federated-learning framework, so that recommendation performance is greatly improved compared with methods based on traditional recommendation models.
3. Evaluation results on multiple benchmark datasets show that the method outperforms existing federated recommendation models, reaching the state of the art, and that each of the proposed components plays an important role.
Drawings
FIG. 1 is a flow chart of an implementation of the federated recommendation method based on model-independent meta-learning of the present invention.
Fig. 2 is a detailed architecture diagram of the federated recommendation method based on model-agnostic meta-learning according to the present invention.
FIG. 3 is a schematic diagram illustrating the effect of various components on the experimental results according to an embodiment of the present invention;
(a) The influence of different components on the experimental result under the ML-1m data set in one embodiment of the invention is shown;
(b) Is the influence of different components under the Beauty data set on the experimental results according to one embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It should be understood that the described embodiments are merely some embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments of the invention without creative effort fall within the protection scope of the invention.
The object of the invention is to meta-train an algorithm A_θ which, once deployed to a new user client, can quickly train a well-performing recommendation model. The algorithm A_θ is parameterized by θ, which is updated during meta-training using a set of training tasks. The training process of each user client is regarded as one training task; for example, user m training the model with his local data constitutes training task m.
As shown in fig. 1, the present invention can be divided into 5 steps in total:
S1: The central server randomly selects a subset of user clients for this round of recommendation-model training and sends this round's global model parameters of the recommendation model to the selected clients.
The method comprises the following specific construction steps:
S1.1: In each iteration, the central server selects a subset U_t of the set of user clients U to participate in the training.
S1.2: The central server sends the model parameters θ_t of the current round to each client in U_t. For client m, its local model parameters are initialized as θ_m = θ_t.
S2: the client participating in training divides the local data set into a support set and a query set according to a set proportion.
The specific construction steps of the data set division are as follows:
S2.1: For each user m, his interaction sequence is represented as S^m = (s^m_1, s^m_2, ..., s^m_{t_m}), where s^m_j denotes the item user m interacted with at time step j (i.e. the user's interaction items, e.g. movies in a movie-recommendation scene, goods in a product-recommendation scene). The invention partitions a user's interaction sequence into multiple interaction sessions according to a fixed idle-time threshold. Thus the interaction sequence of user m is divided into k interaction sessions, e.g. S^m = {S^{m,1}, ..., S^{m,k}}, and these sessions are treated as the basic units of model training. A session S^{m,a} contains the items interacted with inside it: S^{m,a} = (s^{m,a}_1, ..., s^{m,a}_{t_{m,a}}), where s^{m,a}_j denotes the item user m interacted with at the j-th time step of the session and t_{m,a} denotes the session length.
S2.2: these sessions are split into two parts that are disjoint to each other: support setAnd query setWherein (x) i ,y i ) Represents the ith sample, x, in the support set i Representing a sequence of items, y, of the interactive session except the last item i Representing the last item in the interactive session. (x' i ,y′ i ) The same meaning is applied. That is, the recommendation model is trained and predicted with the last item of the input session as the target and the remaining sequence of items as inputs.
S3: the client trains and updates the received recommendation model based on the support set.
The specific steps of the training based on the support set are as follows:
S3.1: The recommendation model adopted by the invention comprises an embedding layer and an attention layer, where the attention layer comprises a self-attention mechanism and a two-layer feed-forward neural network. The model parameters distributed from the central server to the user client in step S1.2 can be divided into the embedding-layer parameters θ_e and the attention-layer parameters θ_a, which in turn contain several parameters: θ_e = {M_I, P} and θ_a = {W^Q, W^K, W^V, W^{(1)}, W^{(2)}, b^{(1)}, b^{(2)}}. Here M_I is the item embedding matrix, P is a learnable position matrix, W^Q, W^K, W^V are the weight matrices for query, key and value in the self-attention mechanism, W^{(1)} and b^{(1)} are the weight matrix and bias vector of the first feed-forward layer, and W^{(2)} and b^{(2)} those of the second feed-forward layer.
S3.2: the input embedding is first generated according to the embedding layer parameters. For supporting setX in the ith sample i Corresponding training session dataFirstly, rotate itAs a fixed-length sequence: (s) 1 ,s 2 ,…,s l ) Then each item is converted into a one-hot coded vector and embedded into a matrix M with the item I Multiplication, obtaining the input embedding matrix:finally, combining the learnable position matrix P with the input embedding matrix to obtain the output of the embedding layer: e = I + P.
S3.3: based on the input embedding E in S3.2 and the parameters of the attention layer, the user interest representation is calculated using the self-attention mechanism:
S = SA(E) = Attention(EW^Q, EW^K, EW^V)
where the standard self-attention model has the form
Attention(Q, K, V) = softmax(QK^T / √d) V
in which Q, K, V represent the query, key and value respectively and d is their common dimension. The model used in the invention stacks two such attention layers in series.
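The scaled dot-product self-attention above can be sketched as follows (illustrative sizes; note that SASRec-style sequential models additionally apply a causality mask so each position attends only to earlier ones, which this minimal version omits):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(E, W_Q, W_K, W_V):
    Q, K, V = E @ W_Q, E @ W_K, E @ W_V
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))  # each row: a distribution over time steps
    return A @ V                       # S = SA(E)

rng = np.random.default_rng(0)
E = rng.normal(size=(5, 8))            # embedding-layer output
W_Q, W_K, W_V = (rng.normal(size=(8, 8)) for _ in range(3))
S = self_attention(E, W_Q, W_K, W_V)
```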
S3.4: after each attention layer, two layers of feedforward neural networks, connected in series, and Relu as the activation function are employed, which may impart model nonlinearity and account for the interaction between different potential dimensions:
FFN(S) = ReLU(SW^{(1)} + b^{(1)})W^{(2)} + b^{(2)}
where S is the attention output obtained in step S3.3, W^{(1)} is the weight matrix of the first feed-forward layer, b^{(1)} its bias vector, and W^{(2)}, b^{(2)} are the weight matrix and bias vector of the second feed-forward layer.
S3.5: as the stack of self-attention and feed-forward layers and the network deepens, some problems become more severe, including overfitting, gradient fading, and slower training processes. The invention introduces layer normalization, residual connection and dropout technologies respectively to solve the problems and obtain a user interest representation S'
S′=S+Dropout(FFN(LayerNorm(S)))
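Steps S3.4–S3.5 can be sketched together as follows (an inference-time view in which dropout acts as the identity; weight scales are illustrative):

```python
import numpy as np

def layer_norm(S, eps=1e-6):
    mu = S.mean(axis=-1, keepdims=True)
    var = S.var(axis=-1, keepdims=True)
    return (S - mu) / np.sqrt(var + eps)

def ffn(S, W1, b1, W2, b2):
    return np.maximum(S @ W1 + b1, 0.0) @ W2 + b2  # ReLU(S W1 + b1) W2 + b2

def block(S, W1, b1, W2, b2):
    # S' = S + Dropout(FFN(LayerNorm(S))); dropout is the identity at inference
    return S + ffn(layer_norm(S), W1, b1, W2, b2)

rng = np.random.default_rng(0)
d = 8
S = rng.normal(size=(5, d))
W1, W2 = rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1
b1, b2 = np.zeros(d), np.zeros(d)
S_prime = block(S, W1, b1, W2, b2)
```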
S3.6: to predict the next item, the present invention uses the latent factor model to calculate the user's preference score for item i:
wherein R is i Is the user's preference score for item i,is the embedded vector for item i obtained in step S3.2 and S' is the user interest representation obtained in step 3.5.
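The scoring and ranking can be sketched by taking the dot product of S' with every candidate item embedding and keeping the K best (all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_items, K = 8, 20, 5
s_prime = rng.normal(size=d)           # user interest representation S'
M_I = rng.normal(size=(n_items, d))    # item embedding matrix

scores = M_I @ s_prime                 # R_i = S' . m_i for every candidate item i
top_k = np.argsort(scores)[::-1][:K]   # indices of the K highest-scoring items
```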
S3.7: the candidate items are ranked based on the user preference scores for the items calculated at S3.6. And selecting the K items with the highest scores as a recommendation list of the model. And calculating the cross entropy loss according to the recommendation list and the actual item list in the training data. Converging all losses obtained by calculation in the support set, and obtaining the loss of the current model parameter in the support set:
where l is the loss function, where,is the current local recommendation model, (x, y) isAnd supporting a training sample in a centralized way.
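The support-set loss can be sketched as an average cross entropy of a softmax over per-item scores against the true next item (batch size and item count are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(scores, target):
    return -np.log(softmax(scores)[target])   # l(f_theta(x), y)

def support_loss(batch_scores, targets):
    # (1/|D_S|) * sum of per-sample losses over the support set
    return float(np.mean([cross_entropy(s, y)
                          for s, y in zip(batch_scores, targets)]))

rng = np.random.default_rng(0)
batch = rng.normal(size=(4, 10))   # item scores for 4 support samples, 10 items
targets = [1, 3, 5, 7]             # the true next item of each sample
loss = support_loss(batch, targets)
```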
S3.8: calculate gradients based on the losses calculated at S3.7 and update the local model:
wherein, theta m The parameters before the update of the local recommendation model are recommended,for the updated parameters of the local recommendation model,alpha is the local model learning rate for the gradient of the loss calculated in step S3.7.
S4: and verifying the updated recommendation model on the query set and calculating model loss and gradient.
The training based on the query set comprises the following specific steps:
S4.1: Using the local model θ'_m updated in S3 and following the same procedure as S3, obtain the loss of the current model on the query set: L_{D_m^Q}(θ'_m).
S4.2: Compute the gradient ∇_{θ'_m} L_{D_m^Q}(θ'_m) based on the loss calculated in S4.1.
s5: and the server collects and averages the gradients calculated by all the clients participating in training, updates the global model based on the average gradient and performs the next round of training.
The specific steps of the server for updating the global model are as follows:
S5.1: The server collects the gradients computed in step S4.2 from the local models of all user clients participating in the training.
S5.2: The server averages the collected gradients and updates the global model: θ_{t+1} = θ_t − β · (1/|U_t|) Σ_{m∈U_t} ∇_{θ'_m} L_{D_m^Q}(θ'_m), where β is the learning rate of the global model.
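The server-side aggregation can be sketched in a few lines (β and the gradient values are illustrative):

```python
import numpy as np

def server_update(theta, client_grads, beta=0.1):
    g_avg = np.mean(client_grads, axis=0)  # average over participating clients
    return theta - beta * g_avg            # theta <- theta - beta * avg gradient

theta = np.ones(4)
client_grads = [np.full(4, 1.0), np.full(4, 3.0)]  # gradients from two clients
new_theta = server_update(theta, client_grads)     # mean gradient is 2.0
```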
S5.3: and carrying out the next round of training, resampling the client and updating the global model. And when the global model reaches convergence or the iteration turns reach a set threshold value, stopping training.
With the recommendation-model training method provided by the invention, a common initial point of the model is obtained through MAML-style training; starting from this common point, different users can quickly approach their respective optimal models by performing one or several gradient-descent steps on a small number of samples from their own local data, achieving fast learning and fast adaptation. The introduction of federated learning means the whole training process involves no uploading or aggregation of users' local data, thereby protecting user privacy. Unlike a traditional federated recommendation system, which provides one common recommendation model for all users, the invention provides every user with an initial point from which his own optimal model can be approached quickly, so that the recommendation model each user finally obtains adapts maximally to his local data, enhancing personalization.
In the actual application stage, based on the common initial point described in the present invention, the user m randomly extracts a small number of samples based on the local data, and executes the gradient descent update model in one or more steps in the manner of step S3.8, so as to quickly obtain a recommendation model approaching the optimal point, thereby obtaining the personalized recommendation result.
In the embodiments of the invention, the effectiveness and feasibility of the federated recommendation system based on model-agnostic meta-learning are verified through two experiments.
First, the recommendation performance of the proposed method is compared with that of existing methods to verify its effectiveness. As shown in Table 1, experimental results on six public datasets from different domains show that the recommendation performance of the proposed method is consistently better than the other baseline models. In particular, compared with other methods based on the federated-learning architecture, the proposed method achieves a marked improvement.
Table 1 is a comparison table of effects
Next, the contribution of the two components of the invention is verified: the federated meta-learning framework (denoted FML) and the self-attention-based recommendation model (denoted SA). To verify the effectiveness of each component, multiple experiments were performed on the Beauty and ML-1m datasets, analyzing each component's contribution. For the overall framework, the invention considers the standard federated-learning framework FedAvg, and additionally a meta-learning variant of it, FedAvg (Meta): before testing, FedAvg (Meta) updates the model initialization received from the server with one step of stochastic gradient descent on the test client's support set, reflecting the "learn then fine-tune" essence of meta-learning. Both FedAvg and FedAvg (Meta) use all data of the training clients during training. For the recommendation model, the invention considers BPR, a traditional recommendation model based on matrix factorization, as the comparison. As shown in FIG. 3, the federated meta-learning framework and the self-attention model of the invention each play an important role in the final recommendation performance.
Finally, it should be noted that: the described embodiments are only some embodiments of the present application and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Claims (9)
1. A federated recommendation method based on model-agnostic meta-learning, comprising the following steps:
1) The server selects a plurality of clients and sends the selected recommendation model to each client;
2) Each client divides local data used for training the recommendation model into a support set and a query set;
3) Each client trains and updates the received recommendation model using the model-agnostic meta-learning method on its local support set;
4) Each client verifies the updated recommendation model on the query set and calculates the gradient of the recommendation model;
5) The server collects and averages the gradients calculated by the clients, updates the recommendation model based on the average gradient and sends the updated recommendation model to the clients for the next round of training;
6) Repeating the steps 3) -5) until a condition is set, and obtaining a common recommendation model by each client;
7) Each client trains the common recommendation model using local data to obtain its own personalized recommendation model;
8) The client m inputs interaction data from its local target application scene into its corresponding personalized recommendation model to obtain a personalized recommendation result, the client m being the client corresponding to a user m selected by the server.
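The server-side loop of steps 1)-6) can be sketched as follows. This is a minimal illustration, not the patented implementation: `ToyClient`, its quadratic query-set loss, and the learning rate are invented for demonstration, and in the real method each client would first adapt the received model on its support set (step 3) via model independent meta learning before computing the query-set gradient.

```python
import numpy as np

class ToyClient:
    """Stand-in client: its 'recommendation model' is one weight vector and
    its query-set loss is ||w - target||^2 (illustrative only)."""
    def __init__(self, target):
        self.target = np.asarray(target, dtype=float)

    def query_gradient(self, params):
        # Step 4): gradient of the query-set loss w.r.t. the received model.
        # A real client would first adapt on its support set (step 3).
        return {"w": 2.0 * (params["w"] - self.target)}

def server_round(params, clients, lr=0.1):
    """Steps 4)-5): gather per-client gradients, average, update the model."""
    grads = [c.query_gradient(params) for c in clients]
    avg = {k: np.mean([g[k] for g in grads], axis=0) for k in params}
    return {k: v - lr * avg[k] for k, v in params.items()}

clients = [ToyClient([1.0, 0.0]), ToyClient([0.0, 1.0])]
params = {"w": np.zeros(2)}
for _ in range(200):              # step 6): repeat until a stopping condition
    params = server_round(params, clients)
# params["w"] converges toward the clients' mean target [0.5, 0.5]
```

Averaging gradients (rather than averaging adapted models, as FedAvg does) is what lets the server update a single shared initialization that every client can then personalize in step 7).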
3. The method of claim 2, wherein the method for the client m to divide the local data into the support set and the query set is as follows:
21) The local interaction sequence data of user m is divided into k interaction sessions, wherein, for each session, every element represents the item that user m interacted with at the corresponding time step, and each session has an associated length;
22) The k interaction sessions are partitioned into two mutually disjoint parts to obtain a support set and a query set, wherein (x_i, y_i) represents the i-th sample in the support set, x_i being the item sequence of an interaction session a without its last item and y_i being the last item of session a; (x'_i, y'_i) represents the i-th sample in the query set, x'_i being the item sequence of an interaction session b without its last item and y'_i being the last item of session b; the support set and the query set each contain a corresponding total number of samples.
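The session split of steps 21)-22) can be sketched as follows. The `split_sessions` helper and its 80/20 default `support_ratio` are illustrative assumptions; the claim does not fix the split proportion.

```python
def split_sessions(sessions, support_ratio=0.8):
    """Step 22): split k interaction sessions into disjoint support and query
    sets. Each sample pairs a session's items minus its last item (x) with
    that last item (y), matching (x_i, y_i) and (x'_i, y'_i) in the claim."""
    samples = [(s[:-1], s[-1]) for s in sessions if len(s) >= 2]
    n_support = int(len(samples) * support_ratio)
    return samples[:n_support], samples[n_support:]

# Four sessions for one user, split half and half for illustration.
support, query = split_sessions([[1, 2, 3], [4, 5], [6, 7, 8], [9, 10]],
                                support_ratio=0.5)
```

Predicting each session's last item from its prefix is the standard leave-one-out target construction for sequential recommendation.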
4. The method of claim 3, wherein the method of training and updating the received recommendation model using a model independent meta learning method is:
31) The recommendation model comprises an embedding layer and an attention layer, the attention layer comprising a self-attention mechanism and a two-layer feedforward neural network; the model parameters of the recommendation model include the embedding-layer parameters θ_e and the attention-layer parameters θ_a, wherein θ_e = {M_I, P} and θ_a = {W_Q, W_K, W_V, W^(1), W^(2), b^(1), b^(2)}; M_I is the item embedding matrix, P is a learnable position matrix, W_Q, W_K and W_V are the weight matrices corresponding to the query, key and value in the self-attention mechanism, W^(1) and b^(1) are the weight matrix and bias vector of the first feedforward layer, and W^(2) and b^(2) are the weight matrix and bias vector of the second feedforward layer;
32) For x_i in the i-th sample of the support set, first convert it into a fixed-length sequence, then convert each item in the sequence into a one-hot coded vector and multiply it by the item embedding matrix M_I to obtain the input embedding matrix I corresponding to x_i; add the learnable position matrix P to the input embedding matrix I to obtain the output of the embedding layer, E = I + P;
33) Input the matrix E into the self-attention mechanism of the attention layer to obtain an interest representation S of user m;
34) Input the interest representation S into the two serially connected feedforward layers with ReLU as the activation function to obtain the output FFN(S) = ReLU(SW^(1) + b^(1))W^(2) + b^(2);
35) Process the FFN(S) obtained in step 34) through a layer normalization unit, a residual connection unit and a dropout unit in sequence to obtain the user interest representation S';
36) Predict the preference score of user m for item i according to the obtained S' and the embedding vector corresponding to item i in the sequence;
37) Rank the items based on the preference scores of user m; select the K items with the highest preference scores to obtain a recommended item list; and calculate the cross-entropy loss from the recommended item list and the actual item list in the training data.
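Steps 31)-37) can be sketched, under assumptions, as a plain-NumPy forward pass. The weights are random stand-ins for the trained parameters θ_e and θ_a; the layer normalization, residual connection and dropout of step 35) are omitted for brevity; and the dot-product scoring in the last line of `forward` is one common choice, not the exact formula of the patent (which is not reproduced in this text).

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d, seq_len = 50, 8, 5

# theta_e = {M_I, P}: item embedding matrix and learnable position matrix
# (random stand-ins here; in the method these are trained).
M_I = rng.normal(size=(n_items, d))
P = rng.normal(size=(seq_len, d))

# theta_a: self-attention and feedforward parameters.
W_Q, W_K, W_V = (rng.normal(size=(d, d)) for _ in range(3))
W1, b1 = rng.normal(size=(d, d)), np.zeros(d)
W2, b2 = rng.normal(size=(d, d)), np.zeros(d)

def forward(item_ids):
    E = M_I[item_ids] + P                       # step 32): E = I + P
    Q, K, V = E @ W_Q, E @ W_K, E @ W_V         # step 33): self-attention
    A = np.exp(Q @ K.T / np.sqrt(d))
    S = (A / A.sum(axis=1, keepdims=True)) @ V  # interest representation S
    # step 34): FFN(S) = ReLU(S W1 + b1) W2 + b2
    H = np.maximum(S @ W1 + b1, 0.0) @ W2 + b2
    # step 36): score every item (dot product is an assumed choice).
    return H[-1] @ M_I.T

scores = forward(np.array([3, 7, 1, 9, 4]))     # one fixed-length session
top_k = np.argsort(scores)[::-1][:10]           # step 37): top-K item list
```

Scoring with the last position's representation reflects the next-item prediction target of step 22), where each session's final item is the label.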
6. The method according to claim 4, wherein in step 4) the gradient of the recommendation model is calculated as follows:
41) Client m inputs the training samples in the query set into the updated recommendation model and calculates the resulting cross-entropy loss;
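The client-side gradient of steps 3)-4) can be sketched in first-order form (a common simplification of full MAML, which would also differentiate through the inner update). The toy linear model and squared-error loss below stand in for the recommendation model and the cross-entropy loss of the claims.

```python
import numpy as np

def client_query_gradient(w, support, query, inner_lr=0.1):
    """Steps 3)-4), first-order sketch: one SGD step on the support set,
    then the gradient of the query-set loss at the adapted parameters."""
    xs, ys = support
    grad_support = 2.0 * xs.T @ (xs @ w - ys) / len(ys)
    w_adapted = w - inner_lr * grad_support                 # step 3)
    xq, yq = query
    return 2.0 * xq.T @ (xq @ w_adapted - yq) / len(yq)     # step 4)

rng = np.random.default_rng(1)
support = (rng.normal(size=(10, 3)), rng.normal(size=10))
query = (rng.normal(size=(5, 3)), rng.normal(size=5))
grad = client_query_gradient(np.zeros(3), support, query)   # sent to server
```

Evaluating the gradient on held-out query data, rather than on the adaptation data itself, is what makes the averaged update in step 5) favor initializations that adapt well, not ones that merely fit each client's support set.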
7. A federal recommendation system based on model independent meta learning, characterized by comprising a plurality of clients and a server, wherein:
the clients are used for dividing the local data for training the recommendation model into a support set and a query set, then training and updating the received recommendation model using a model independent meta learning method based on the local support set, and verifying the updated recommendation model on the query set and calculating the gradient of the recommendation model;
the server is used for sending the selected recommendation model to each client, collecting and averaging the gradients calculated by the clients, updating the recommendation model based on the average gradient and sending the updated recommendation model to the clients for the next round of training, and stopping training when a set condition is reached, whereby each client obtains a common recommendation model;
each client then trains the common recommendation model using local data to obtain its corresponding personalized recommendation model, and inputs the interaction data in its local target application scene into the personalized recommendation model to obtain a personalized recommendation result.
8. A server, comprising a memory and a processor, the memory storing a computer program configured to be executed by the processor, the computer program comprising instructions for carrying out the steps of the method of any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210879263.9A CN115409203A (en) | 2022-07-25 | 2022-07-25 | Federal recommendation method and system based on model independent meta learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210879263.9A CN115409203A (en) | 2022-07-25 | 2022-07-25 | Federal recommendation method and system based on model independent meta learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115409203A true CN115409203A (en) | 2022-11-29 |
Family
ID=84157549
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210879263.9A Pending CN115409203A (en) | 2022-07-25 | 2022-07-25 | Federal recommendation method and system based on model independent meta learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115409203A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115952838A (en) * | 2023-02-03 | 2023-04-11 | 黑盒科技(广州)有限公司 | Recommendation system generation method and system based on adaptive learning |
CN115952838B (en) * | 2023-02-03 | 2024-01-30 | 黑盒科技(广州)有限公司 | Self-adaptive learning recommendation system-based generation method and system |
CN116226540A (en) * | 2023-05-09 | 2023-06-06 | 浙江大学 | End-to-end federation personalized recommendation method and system based on user interest domain |
CN116226540B (en) * | 2023-05-09 | 2023-09-26 | 浙江大学 | End-to-end federation personalized recommendation method and system based on user interest domain |
CN117494191A (en) * | 2023-10-17 | 2024-02-02 | 南昌大学 | Point-of-interest micro-service system and method for information physical security |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Geva et al. | Transformer feed-forward layers are key-value memories | |
CN111931062B (en) | Training method and related device of information recommendation model | |
CN110457589B (en) | Vehicle recommendation method, device, equipment and storage medium | |
US20210326674A1 (en) | Content recommendation method and apparatus, device, and storage medium | |
Zarzour et al. | A new collaborative filtering recommendation algorithm based on dimensionality reduction and clustering techniques | |
CN115409203A (en) | Federal recommendation method and system based on model independent meta learning | |
CN110266745B (en) | Information flow recommendation method, device, equipment and storage medium based on deep network | |
CN111489095B (en) | Risk user management method, apparatus, computer device and storage medium | |
US20230162098A1 (en) | Schema-Guided Response Generation | |
Okon et al. | An improved online book recommender system using collaborative filtering algorithm | |
WO2021135701A1 (en) | Information recommendation method and apparatus, electronic device, and storage medium | |
Wang et al. | Attention-based CNN for personalized course recommendations for MOOC learners | |
CN109313540A (en) | The two stages training of spoken dialogue system | |
Ding et al. | Product color emotional design considering color layout | |
CN114510646A (en) | Neural network collaborative filtering recommendation method based on federal learning | |
CN110502701B (en) | Friend recommendation method, system and storage medium introducing attention mechanism | |
CN113641835A (en) | Multimedia resource recommendation method and device, electronic equipment and medium | |
CN114357201B (en) | Audio-visual recommendation method and system based on information perception | |
CN115409204A (en) | Federal recommendation method based on fast Fourier transform and learnable filter | |
CN113643046B (en) | Co-emotion strategy recommendation method, device, equipment and medium suitable for virtual reality | |
Shi et al. | Cross-domain variational autoencoder for recommender systems | |
CN111949894B (en) | Collaborative filtering personalized recommendation method based on multi-space interaction | |
CN116484105B (en) | Service processing method, device, computer equipment, storage medium and program product | |
CN117035059A (en) | Efficient privacy protection recommendation system and method for communication | |
WO2023087933A1 (en) | Content recommendation method and apparatus, device, storage medium, and program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |