CN110415081B - Content-based matching recommendation method for user personalized products - Google Patents

Content-based matching recommendation method for user personalized products

Info

Publication number
CN110415081B
CN110415081B (application CN201910685469.6A)
Authority
CN
China
Prior art keywords
user
product
information
hidden
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910685469.6A
Other languages
Chinese (zh)
Other versions
CN110415081A (en)
Inventor
宋彬 (Bin Song)
马梦迪 (Mengdi Ma)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201910685469.6A priority Critical patent/CN110415081B/en
Publication of CN110415081A publication Critical patent/CN110415081A/en
Application granted granted Critical
Publication of CN110415081B publication Critical patent/CN110415081B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/02 - Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201 - Market modelling; Market analysis; Collecting market data
    • G06Q30/0202 - Market predictions or forecasting for commercial activities
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/06 - Buying, selling or leasing transactions
    • G06Q30/0601 - Electronic shopping [e-shopping]
    • G06Q30/0631 - Item recommendations

Landscapes

  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Engineering & Computer Science (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Game Theory and Decision Science (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval; Database Structures and File System Structures Therefor (AREA)

Abstract

The invention discloses a content-based matching recommendation method for user personalized products, which addresses the shortcomings of recommendation algorithms in the prior art. The invention comprises: step one, a user-based random batch sampling method; step two, a content-based user-product matching method: a network is established; ordered sets of user id, history product id list, target product id and label are obtained by the user-based random batch sampling method, and the network model is trained, tuned and evaluated on the training set, validation set and test set respectively; a specific user and that user's history records are input, the content-based user-product matching network predicts and ranks the user's scores for all unviewed movies, and a top-N recommendation result is finally output. The invention adopts a lightweight neural network, greatly reducing training time and training equipment requirements; the sampling process is simple, the model input is more random, and the prediction results generalize better.

Description

Content-based matching recommendation method for user personalized products
Technical Field
The invention relates to the field of recommendation algorithms, in particular to a content-based matching recommendation method for user personalized products.
Background
With the development of the internet era, information overload has become increasingly serious: users face more and more product choices, and competition among products grows ever fiercer. A recommendation algorithm is an algorithm that matches users with products. A good recommendation algorithm not only saves the user's time and increases user satisfaction, but also increases the acceptance rate of products and promotes growth in transaction volume.
Existing recommendation algorithms generally pre-train on the user-product interaction information matrix with a collaborative filtering algorithm to obtain product hidden factors and user hidden factors, and then train a recommendation model using the pre-trained hidden factors to produce recommendation results. However, the user-product interaction matrix changes constantly, so pre-training and retraining must be repeated at intervals; the training process of such a recommendation system is therefore complex, the recommendation results have poor real-time performance, and there is a cold-start problem, that is, no recommendation can be made for new users or new products.
Therefore, existing recommendation algorithms have several problems and are in urgent need of improvement.
Disclosure of Invention
The invention overcomes the shortcomings of recommendation algorithms in the prior art and provides a content-based matching recommendation method for user personalized products with a short training time.
The technical solution of the present invention is a content-based matching recommendation method for user personalized products, comprising the following steps: step one, a user-based random batch sampling method; step two, a content-based user-product matching method.
the random batch sampling method based on the user comprises the following steps:
step 1.1, arranging an interaction information file, a user information file and a product information file, and dividing a test set and a training set;
step 1.2, encoding the user information and the product information into numerical vectors;
step 1.3, for each user, drawing a random number of records from the user's history and repeatedly applying a leave-one-out scheme to the drawn records to determine the history records and the target product;
step 1.4, outputting the user id, the historical record product id list, the target product id and the label set which are orderly arranged;
the user product matching method based on the content comprises the following steps:
step 2.1, establishing a network;
step 2.1.1, input positions are reserved for the user id, the history product id list and the target product id, and the information input at each position passes through a search layer to obtain the encodings of the corresponding user information, history product information and target product information;
step 2.1.2, the encodings of the user information, the history product information and the target product information are mapped into hidden factor spaces of the same dimension through a u network, a p network and a q network respectively;
step 2.1.3, the user history product hidden factors are adaptively weighted according to the target product hidden factor and the user history product hidden factors to obtain the user history preference hidden factor;
step 2.1.4, the user history preference hidden factor and the user representation hidden factor are weighted to obtain the user portrait;
step 2.1.5, the matching score is predicted from the user portrait and the target product hidden factor through fully connected layers;
step 2.1.6, the matching score is corrected using coeff and the biases;
step 2.2, according to the user-based random batch sampling method for batch input, ordered sets of user id, history product id list, target product id and label are obtained, and the network model is trained, tuned and evaluated on the training set, validation set and test set respectively;
step 2.3, a specific user and that user's history records are input, the content-based user-product matching network predicts the user's scores for all unviewed movies, the scores are ranked, and the top-N recommendation result is finally output.
Preferably, in step 1.3, assuming a user has N history records, a random number x between 1 and N is generated, representing the number of records to draw from the history; x records are drawn from the user's history by random sampling without replacement, giving x history product ids; for these x product ids, the product id at each position is selected in turn as the target product id, the corresponding score as its label, and the remaining product ids as the user's history records, finally obtaining x sequences containing (user id, history product id list, target product id, label).
Preferably, the content-based user product matching method comprises the following steps:
step 2.1.1, the first three items of the (user id, history product id list, target product id, label) tuples obtained by the user-based random batch sampling method are input into the network, and the search layer yields the user information encoding, the user history product content information encodings and the target product content information encoding, assuming the user has n history records; for a new user, recommendation is made from the personal information provided: the history information is empty and the input history product feature vectors are all-0 vectors;
step 2.1.2, the input content information encodings are mapped through neural networks into the corresponding hidden factor representations; there are three groups of content mapping networks: the u network maps the user information encoding into the user representation hidden factor space, the p network maps the user history product content information encodings into the user history product hidden factor space, and the q network maps the target product content information encoding into the target product hidden factor space;
wherein each network consists of one or more fully connected layers, each layer computed as in formula (1):
y = f(Wx + b)   (1)
where y is the output of the layer, x is the input of the layer, W is a weight matrix of shape (output vector dimension, input vector dimension), b is a bias vector of shape (output vector dimension, 1), and f is an activation function; W and b are trainable variables updated by gradient descent. A single forward layer is adopted here, the hidden factor length, i.e. the length of the output y, is set to 16, and the activation function is relu;
step 2.1.3: first, the operation of formula (2) is performed on each (target product hidden factor, user history product hidden factor) pair:
y = h^T f(W(p ⊕ q) + b)   (2)
where p and q are a user history product hidden factor and the target product hidden factor respectively, and ⊕ denotes the way the two hidden factors are combined, here the splicing of p and q; f(·) is a single neural network layer whose output is a vector of length 16, and the inner product of this output with the length-16 vector h yields a scalar y representing the contribution of that history product to the target product prediction; h, W and b in formula (2) are trainable variables updated by gradient descent. This yields n contribution degrees (y_1, y_2, ..., y_n), one for each user history product hidden factor;
the contributions are normalized into the (0,1) interval using the softmax-style function of formula (3):
weight_i = exp(y_i) / (Σ_{j=1}^{n} exp(y_j))^β   (3)
(with β = 1 this is the standard softmax, whose outputs sum to 1). After the adaptive weighting network, n weighting coefficients (weight_1, weight_2, ..., weight_n) are obtained, one per user history product hidden factor; the exponent β applied to the denominator lies in the (0,1) interval and is a manually tuned hyper-parameter, set here to 0.85;
the corresponding length-16 user history product hidden factors are then weighted and summed with these coefficients to obtain the final length-16 user history preference hidden factor;
step 2.1.4: the user history preference hidden factor and the user representation hidden factor are both 16-dimensional vectors; their approximate proportions are computed using formulas (4) and (5):
a = h^T f(W p_h + b)   (4)
b = h^T f(W p_p + b)   (5)
where p_h is the input user history preference hidden factor, p_p is the user representation hidden factor, and a and b are the proportion scalars corresponding to the user history preference hidden factor and the user representation hidden factor respectively; W, b and h in the two formulas are trainable variables shared between the two computations and updated by gradient descent;
the user history preference hidden factor and the user representation hidden factor are then weighted by formula (6) to obtain the user portrait u:
u = (e^a p_h + e^b p_p) / (e^a + e^b)   (6)
if a user has no history records, the user portrait obtained here automatically reduces to the user representation hidden factor, with no extra operation;
step 2.1.5: the user portrait and the target product hidden factor are spliced and passed through a two-layer neural network; the middle layer has 16 neurons with relu activation, and the output layer has 1 neuron with no activation;
step 2.1.6: the matching score is corrected using coeff and the biases;
the predicted matching score prediction is corrected into the real score output as in formula (7):
score = coeff · prediction + b_i + b_u + b   (7)
where b_i, b_u and b are the product score bias, the user score bias and the overall score bias respectively; each user and each product has its own score bias, the overall score bias is shared, and all are trainable parameters obtained by gradient descent; coeff is computed as in formula (8):
coeff = |R+|^(−α)   (8)
where |R+| is the number of the user's history records, and α is a hyper-parameter in the (0,1) interval, set to 0.15;
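As a minimal sketch of this correction step, assuming the coeff form |R+|^(−α) inferred from the description of formula (8) (function and parameter names are illustrative, not from the patent):

```python
ALPHA = 0.15  # hyper-parameter alpha from the text

def corrected_score(prediction, n_history, b_i, b_u, b):
    """Apply formula (7): score = coeff * prediction + b_i + b_u + b.

    coeff = |R+| ** (-ALPHA) per the assumed reading of formula (8);
    an empty history is treated as coeff = 1 (an assumption).
    """
    coeff = n_history ** (-ALPHA) if n_history > 0 else 1.0
    return coeff * prediction + b_i + b_u + b
```

With more history records, coeff shrinks the raw prediction and the learned biases carry more of the final score.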
step 2.2, the user-based random batch sampling method is applied user by user to obtain ordered sets of user id, history product id list, target product id and label, with which the model is trained, tuned and evaluated; that is, a loss function is constructed to train the model.
The user's history records comprise products the user has interacted with, and the label is the user's score for the product to be predicted; training uses the mean squared error loss of formula (9):
loss = (1/N) Σ_{i=1}^{N} (output_i − y_i)²   (9)
where N is the batch size, output_i is the i-th predicted score, and y_i is the label of the corresponding product to be predicted; the loss of each batch is computed and the parameters of the whole network are updated by gradient-descent back-propagation. One epoch means that the user-based random batch sampling method has been applied to every user and the results fed to the network for training; after each epoch, the matching degree is predicted for each user on the test set data and the evaluation index is computed. When predicting product scores, the evaluation index is the root mean square error of formula (10):
RMSE = sqrt((1/N) Σ_{i=1}^{N} (output_i − y_i)²)   (10)
where N is the number of all predicted scores, output_i is each predicted score, and y_i is its label; if the RMSE of this epoch is smaller than the minimum evaluation index over the training history, the model at this point is saved;
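Formulas (9) and (10) can be written directly in Python (illustrative function names):

```python
import math

def mse_loss(outputs, labels):
    """Mean squared error of formula (9), used as the training loss."""
    n = len(outputs)
    return sum((o - y) ** 2 for o, y in zip(outputs, labels)) / n

def rmse(outputs, labels):
    """Root mean square error of formula (10), the evaluation metric."""
    return math.sqrt(mse_loss(outputs, labels))
```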
step 2.3, a specific user and that user's history records are input, the content-based user-product matching network predicts the user's scores for all unviewed movies, the scores are ranked, and the top-N recommendation result is finally output. That is, the best model after training is used to recommend for a given user: the model parameters are read; the user's personal information is processed into vector form in the same way personal information was preprocessed during training; the content information of the user's historical products is processed into vector form in the same way product content information was preprocessed during training; all products that have not interacted with the user are taken as products to be predicted, and their content information is likewise vectorized; the matching degree between the user and each product to be predicted is predicted once; all products to be predicted are ranked by matching degree, and the top-N results are recommended to the user.
Compared with the prior art, the content-based matching recommendation method for user personalized products has the following advantages: it adopts a lightweight neural network, greatly reducing training time and training equipment requirements; it predicts hidden factors from the user's personal information and the product content information, avoiding complex pre-training and retraining processes and avoiding the cold-start problem, so new users and new products can be matched; and the user-based random sampling makes the sampling process simple and the model input more random, so the prediction results generalize better.
Drawings
FIG. 1 is a schematic diagram of the relationship between the content-based user product matching network and the user-based random batch sampling method in the content-based matching recommendation method for user personalized products of the present invention;
FIG. 2 is a schematic diagram of the structure of the content-based user product matching network in the content-based matching recommendation method for user personalized products of the present invention;
FIG. 3 is a schematic diagram of the process of generating recommendation results with the trained content-based user product matching network in the content-based matching recommendation method for user personalized products of the present invention.
Detailed Description
The content-based matching recommendation method for user personalized products of the present invention is described below with reference to the accompanying drawings and specific embodiments. The user-based random batch sampling method is implemented in the following steps:
Step 1: sort out an interaction information file, a user information file and a product information file, and divide the training, validation and test sets.
The data set used should include no fewer than 100,000 user-product interaction records with corresponding user personal information and product content information. Taking the public MovieLens 100k movie recommendation data set as an example: it contains rating (1-5) data from 943 users (ids 1-943) on 1682 movies (ids 1-1682), each user having at least 20 rating records. Three files are sorted out of the data set:
and (2) ratios: each line of data includes [ user id, movie id, score ];
a user: each line of data includes [ user id, user gender, user age, user occupation ];
item: each line of data includes [ movie id, movie category, movie year ];
wherein the user occupation set is: administrator, artist, doctor, educator, engineer, entertainment, executive, healthcare, homemaker, lawyer, librarian, marketing, none, other, programmer, retired, salesman, scientist, student, technician, writer. There are 21 occupations, one per user.
The movie categories include: unknown, Action, Adventure, Animation, Children's, Comedy, Crime, Documentary, Drama, Fantasy, Film-Noir, Horror, Musical, Mystery, Romance, Sci-Fi, Thriller, War, Western, 19 classes in total; each movie may belong to multiple categories.
80% of the rating file is randomly selected as the training set, another 10% as the test set, and the remaining 10% as the validation set. The training set, after processing by the random batch sampling method, is used to train the model; the test set is used to evaluate the generalization ability of the model during training; and the hyper-parameters are adjusted and the training time controlled according to model performance on the validation set.
Step 2: the user information and the product information are encoded into a numerical vector.
This step processes the user's personal information and the movie information into numerical vectors that can participate in computer operations. For user information: the user's gender and occupation are category information and are one-hot encoded, i.e. the vector length equals the number of categories, with a 1 at the position of the category the value belongs to and 0 elsewhere. For example, gender takes the values male and female, so it is encoded as a length-2 vector: male as [0,1] and female as [1,0]. User occupation is encoded the same way into a length-21 vector. The user's age is numerical information with an ordering, and is scaled into [0,1] as (user age − minimum user age)/(maximum user age − minimum user age); this scaling keeps the gradients within a normal range when training the neural network. Finally, the encoded representations of each user's information are spliced together in the order gender, occupation, age, forming a numerical vector of length 24.
For movie information: the movie category belongs to category information and is one-hot encoded in the same way as the user gender, finally forming a numerical vector of length 19; the movie year belongs to numerical information and is scaled into [0,1] in the same way as the user age. The numerical encodings of the movie information are spliced in the order movie category, movie year, finally giving a numerical vector of length 20.
For missing information, the value of the corresponding position is set to 0.
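As a concrete illustration of the user encoding above, a minimal Python sketch (helper names are illustrative; the age bounds are assumed from the data set, not stated in the text):

```python
import numpy as np

OCCUPATIONS = ["administrator", "artist", "doctor", "educator", "engineer",
               "entertainment", "executive", "healthcare", "homemaker",
               "lawyer", "librarian", "marketing", "none", "other",
               "programmer", "retired", "salesman", "scientist", "student",
               "technician", "writer"]  # the 21 occupations
AGE_MIN, AGE_MAX = 7, 73  # assumed min/max user ages in the data set

def one_hot(value, categories):
    """Length-|categories| vector with a 1 at the position of `value`."""
    vec = np.zeros(len(categories))
    vec[categories.index(value)] = 1.0
    return vec

def encode_user(gender, occupation, age):
    """Splice gender (2) + occupation (21) + scaled age (1) into length 24."""
    g = one_hot(gender, ["F", "M"])  # female -> [1,0], male -> [0,1]
    o = one_hot(occupation, OCCUPATIONS)
    a = np.array([(age - AGE_MIN) / (AGE_MAX - AGE_MIN)])  # min-max scaling
    return np.concatenate([g, o, a])
```

Movie content vectors (19-way genre one-hot plus a scaled year) follow the same pattern.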
Step 3: for each user, draw a random number of records from the user's history and repeatedly apply leave-one-out to the drawn records to determine the history records and the target product.
Assuming a user has N history records, random batch sampling proceeds as follows:
randomly generate a random number x between 1 and N, representing the number of records to draw from the history;
draw x records from the user's history by random sampling without replacement, obtaining x history product ids;
for these x product ids, select the product id at each position in turn as the target product id, with the corresponding score as its label and the remaining product ids as the user's history records; finally, x (user id, history product id list, target product id, label) sequences are obtained.
The advantage of drawing a random number of history records is that more diverse training data can be generated, including the case with no history data, which helps the model cope with the recommendation scenarios corresponding to these situations.
Step 4: output the ordered sets of user id, history product id list, target product id and label.
The batch obtained by the final sampling comprises x identical user ids, x history product id sets (each containing x−1 history products), x different target product ids and x labels, with the target product ids and labels in corresponding positions.
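The sampling steps above can be sketched in Python as follows (function and variable names are illustrative, not from the patent):

```python
import random

def sample_user_batch(user_id, history, rng=random):
    """User-based random batch sampling with leave-one-out.

    `history` is a list of (product_id, rating) pairs. A random number x
    in [1, N] of records is drawn without replacement; each drawn record
    then serves once as the target, the other x-1 forming the input history.
    """
    x = rng.randint(1, len(history))  # how many records to draw
    drawn = rng.sample(history, x)    # sampling without replacement
    batch = []
    for i, (target_id, label) in enumerate(drawn):
        rest = [pid for j, (pid, _) in enumerate(drawn) if j != i]
        batch.append((user_id, rest, target_id, label))
    return batch
```

Each call returns x tuples for one user; an empty `rest` list (x = 1) corresponds to the all-0 history vector case described below.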
The content-based user product matching method is shown in fig. 2; the specific implementation steps are as follows:
Step 1: reserve input positions for the user id, the history product id list and the target product id; the information input at each position passes through the search layer to obtain the corresponding user information, history product information and target product information encodings.
As shown in fig. 2, the first three items of each (user id, history product id list, target product id, label) tuple obtained by the user-based random batch sampling method are input into the network, and the search layer yields the user information, the user history product content information and the user target product content information; assume the user has n history records.
It should be noted that the invention copes with cold start: when a new user, i.e. a user without history, enters the system, the invention can still make recommendations as long as the user provides personal information. In that case the history information is empty and the input history product feature vectors are all-0 vectors. The user-based random batch sampling method may also produce a batch with an empty history; the history id is then set to 0 and the corresponding looked-up product content vector is an all-0 vector. The cold-start problem is therefore already covered during training, avoiding a complex batch selection and training process and greatly reducing computational complexity and training time.
Step 2: and mapping the codes of the user information, the historical record product information and the target product information to the hidden factor space with the same dimensionality through a u network, a p network and a q network respectively.
The input content information is mapped through neural networks into the corresponding hidden factor representations. There are three groups of content mapping networks: the u network maps the user information into the user representation hidden factor space, the p network maps the user history product content information into the user history product hidden factor space, and the q network maps the target product content information into the target product hidden factor space. Note that the outputs of the u, p and q networks have the same dimension, i.e. the mapped hidden factor dimensions are equal; the length assumed in the experiments is 16. If the content information of some input history product is an all-0 vector, the output history hidden factor is an all-0 hidden factor regardless of the p network's parameters.
Each network consists of one or more fully connected layers; each layer is computed as in formula (1):
y = f(Wx + b)   (1)
where y is the output of the layer, x is the input of the layer, W is a weight matrix of shape (output vector dimension, input vector dimension), b is a bias vector of shape (output vector dimension, 1), and f is an activation function, including but not limited to relu/tanh/sigmoid. W and b are trainable variables that may be updated by gradient descent.
In the experiments this step uses a single forward layer, the hidden factor length (the length of the output layer y) is set to 16, and the activation function is relu.
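A minimal numpy sketch of one such fully connected layer, formula (1); names are illustrative:

```python
import numpy as np

def relu(x):
    """Rectified linear activation, the f used in the experiments."""
    return np.maximum(x, 0.0)

def dense(x, W, b, f=relu):
    """One fully connected layer y = f(Wx + b) of formula (1).

    W has shape (out_dim, in_dim); b has shape (out_dim,).
    """
    return f(W @ x + b)
```

Under the text's settings, each of the u, p and q mapping networks is one such layer with a 16-dimensional output, e.g. a W of shape (16, 24) for the length-24 user vector.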
Step 3: adaptively weight the user history product hidden factors according to the target product hidden factor and the user history product hidden factors to obtain the user history preference hidden factor.
The operation of formula (2) is first performed for each (target product hidden factor, user history product hidden factor) pair:
y = h^T f(W(p ⊕ q) + b)   (2)
where p and q represent a user history product hidden factor and the target product hidden factor respectively, and ⊕ denotes the way they are combined, including but not limited to splicing or element-wise multiplication; splicing of p and q is used in the experiments. f(·) is a single neural network layer whose output is a vector of length 16, and the inner product of this output with the length-16 vector h yields a scalar y representing the contribution of that history product to the target product prediction. h, W and b in formula (2) are trainable variables that can be updated by gradient descent.
Through this step we obtain n contribution degrees (y_1, y_2, ..., y_n), one for each user history product hidden factor.
The contribution degree is then normalized to within the (0,1) interval using the softmax function in equation (3) and their sum is guaranteed to be equal to 1.
weight_i = exp(y_i) / (Σ_{j=1}^n exp(y_j))^β   (3)
This adaptive weighting of the history records reflects the fact that different records contribute differently for a specific target product. For example, when the target product is a romance film, a science-fiction film in the history offers little guidance for the prediction, and the network can lower the weight of that record. After the adaptive weighting network we obtain n weighting coefficients (weight_1, weight_2, ..., weight_n), one for each user history product hidden factor. The β in the denominator lies in the (0,1) interval; it is a hyper-parameter requiring manual tuning and is set to 0.85 in the experiment. This hyper-parameter keeps the weight of each record from collapsing toward 0 for users with very many history records, avoiding the errors that near-zero floating-point values could cause in the computer.
Finally, the corresponding length-16 user history hidden factors are weighted and summed with these coefficients to obtain the final length-16 user historical preference hidden factor.
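Step 3 can be sketched as follows. The attention form of equation (2) and the smoothed-softmax denominator of equation (3) with β = 0.85 follow the description above; the reconstruction of the equations from the extracted text and the random parameter values are assumptions for illustration only.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def contribution(p, q, W, b, h):
    """Equation (2): y = h^T f(W [p; q] + b), one scalar per history item."""
    return float(h @ relu(W @ np.concatenate([p, q]) + b))

def smoothed_softmax(y, beta=0.85):
    """Equation (3): weight_i = exp(y_i) / (sum_j exp(y_j))**beta.
    With beta = 1 this is the standard softmax summing to 1; beta < 1
    keeps weights of long histories from shrinking toward 0."""
    e = np.exp(y)
    return e / e.sum() ** beta

rng = np.random.default_rng(1)
d = 16                                # hidden-factor length
q = rng.standard_normal(d)            # target product hidden factor
P = rng.standard_normal((5, d))       # 5 user-history product hidden factors
W = rng.standard_normal((d, 2 * d)) * 0.1
b = np.zeros(d)
h = rng.standard_normal(d) * 0.1

y = np.array([contribution(p, q, W, b, h) for p in P])
w = smoothed_softmax(y)               # n adaptive weights, one per history record
pref = w @ P                          # length-16 user historical preference factor
print(pref.shape)   # (16,)
```

Note that for numerical safety a production implementation would compute the softmax in log space; the direct form is kept here to mirror equation (3).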
Step 4: weight the user historical preference hidden factor and the user representation hidden factor to obtain the user portrait.
Note that the user historical preference hidden factor and the user representation hidden factor must have the same dimension; 16-dimensional vectors are used in the experiment.
First, proportional scalars for the user historical preference hidden factor and the user representation hidden factor are computed using equations (4) and (5).
a = h^T f(W·p_h + b_1)   (4)
b = h^T f(W·p_p + b_1)   (5)
where p_h is the input user historical preference hidden factor, p_p is the user representation hidden factor, and a and b are the proportional scalars corresponding to each. W, b_1 and h in the two equations are trainable variables shared between the two computations and updated by a gradient descent algorithm.
The weighted combination of the user historical preference hidden factor and the user representation hidden factor, i.e. the user portrait u, is then obtained with equation (6). If a user has no history records, the user portrait obtained here automatically reduces to the user representation hidden factor, with no extra handling required.
u = (exp(a)·p_h + exp(b)·p_p) / (exp(a) + exp(b))   (6)
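Step 4 can be sketched as follows: the two proportional scalars of equations (4)–(5) are turned into softmax weights and used to mix the two hidden factors into the user portrait. Equation (6) is not shown explicitly in the extracted text, so the softmax mixing used below is an assumption; the shared parameters W, b_1 and h are random placeholders.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def fuse(p_hist, p_user, W, b1, h):
    """Equations (4)-(6): softmax-weighted mix of the two 16-d hidden factors."""
    a = float(h @ relu(W @ p_hist + b1))   # proportional scalar for history preference
    c = float(h @ relu(W @ p_user + b1))   # proportional scalar for user representation
    wa, wc = np.exp(a), np.exp(c)
    return (wa * p_hist + wc * p_user) / (wa + wc)   # user portrait u

rng = np.random.default_rng(2)
d = 16
W = rng.standard_normal((d, d)) * 0.1   # trainable weights shared by both scalars
b1 = np.zeros(d)
h = rng.standard_normal(d) * 0.1
p_hist = rng.standard_normal(d)         # user historical preference hidden factor
p_user = rng.standard_normal(d)         # user representation hidden factor
u = fuse(p_hist, p_user, W, b1, h)
print(u.shape)   # (16,)
```

Because u is a convex combination, every component of the portrait lies between the corresponding components of the two input factors.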
Step 5: predict the matching score from the user portrait and the target product hidden factor through fully connected layers.
Here, the matching score is the degree of match between the user and the target product, obtained by some calculation on the user portrait and the target product hidden factor. The calculation may be a Euclidean distance, an inner product, or a multilayer neural network. In the experiment the two vectors are concatenated and passed through a two-layer neural network: the middle layer has 16 neurons with a relu activation, and the output layer has 1 neuron with no activation function.
Step 6: the match score is modified using coeff and bias.
The predicted matching score, prediction, must be corrected before being output as the true score. The correction is given by equation (7):
score = coeff · prediction + b_i + b_u + b   (7)
where b_i, b_u and b are the product score bias, the user score bias and the global score bias, respectively. The three biases capture how a specific product tends to be scored, how a specific user tends to score, and the scoring tendency of the whole system, so the corrected prediction is more accurate. Each user and each product has its own score bias, the global bias is shared, and all are trainable parameters obtained by gradient descent training. coeff is computed as in equation (8):
coeff = 1 / |R+|^α   (8)
where |R+| denotes the number of user history records, and α is a hyper-parameter in the (0,1) interval, set to 0.15 in the experiment. This setting counterbalances the earlier arrangement, in which the weights of a user with many history records can sum to more than one, and effectively makes the score prediction more accurate.
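The correction of equations (7)–(8) can be sketched as follows. The damping form coeff = 1 / |R+|^α is a reconstruction consistent with the stated balancing purpose (the equation image did not survive extraction), and the bias values are placeholders rather than trained parameters.

```python
def corrected_score(prediction, n_history, b_i, b_u, b, alpha=0.15):
    """Equations (7)-(8): score = coeff * prediction + b_i + b_u + b,
    with coeff = 1 / |R+|**alpha damping users with many history records."""
    coeff = 1.0 / (n_history ** alpha)
    return coeff * prediction + b_i + b_u + b

# A user with 100 history records: the raw prediction is scaled down before
# the product, user and global score biases are added.
s = corrected_score(prediction=4.2, n_history=100, b_i=0.1, b_u=-0.05, b=3.5)
print(round(s, 4))
```

With α in (0,1), coeff stays in (0,1] for any user with at least one history record, so the correction only ever shrinks the raw attention-weighted prediction.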
Step 7: construct the loss function.
The aim of the invention is to predict the degree of match between a product and a user, presented in the form of a predicted score.
The user's history records consist of products the user has interacted with, and the label is the user's score for the product to be predicted. Training uses the mean square error loss, as in equation (9).
Loss = (1/N) · Σ_{i=1}^N (output_i − y_i)²   (9)
where N is the batch size, output_i is the ith predicted score, and y_i is the label of the corresponding product to be predicted.
At this point the entire network has been described.
Step 8: train the model.
The random batch sampling method is applied to all users to obtain several batches; one pass of training over these batches is called an epoch.
For each batch, the batch's loss is computed and the parameters of the entire network are updated using the gradient descent back-propagation algorithm. The Adam optimizer was used in this experiment, with the learning rate set to 0.00024.
After each epoch of training, the matching degree is predicted for every user using the test set data and the evaluation index is computed. Note that the user history records used here are no longer obtained by random sampling; all history records in the training set are used.
When predicting product scores, the evaluation index used is the root mean square error (RMSE), computed as in equation (10).
RMSE = √((1/N) · Σ_{i=1}^N (output_i − y_i)²)   (10)
where N is the number of all predicted scores, output_i is the predicted score for each case, and y_i is the corresponding label. If the RMSE of the current epoch is smaller than the smallest evaluation index of any epoch in the training history, the model at that point is saved. A total of 1000 epochs are trained.
To prevent excessive training time from degrading the result, an early-stopping check is performed at the end of each epoch. Early stopping halts training when the model's performance on the validation set begins to deteriorate, saving time. The strategy adopted here records the evaluation index on the test set over the last four epochs; if the model's test-set index has worsened for four consecutive epochs, the loop exits and training stops.
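The RMSE of equation (10) and the four-epoch early-stopping check can be sketched as below. The window length of four follows the text; the function names and the toy RMSE history are illustrative.

```python
import math

def rmse(predictions, labels):
    """Equation (10): root mean square error over all predicted scores."""
    n = len(predictions)
    return math.sqrt(sum((o - y) ** 2 for o, y in zip(predictions, labels)) / n)

def should_stop(rmse_history, window=4):
    """Stop when the test-set RMSE has worsened (risen) for `window`
    consecutive epochs, per the early-stopping strategy in the text."""
    if len(rmse_history) < window + 1:
        return False
    last = rmse_history[-(window + 1):]
    return all(a < b for a, b in zip(last, last[1:]))

print(rmse([3.0, 4.0], [3.0, 5.0]))                      # sqrt(0.5) ≈ 0.7071
print(should_stop([1.0, 0.9, 0.92, 0.95, 0.97, 0.99]))   # True: rose 4 epochs in a row
```

Since lower RMSE is better, "worsened" here means the index increased; a strategy monitoring an accuracy-style metric would invert the comparison.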
The specific simulation process is as follows. Hardware conditions used in the experiment:
Processor: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz
Graphics card: NVIDIA GeForce GTX 1070 Ti (8 GB)
RAM: 8.00 GB
System type: x64
Software conditions used in the experiment:
Linux: 16.04
python: 3.6.7
tensorflow-gpu: 1.11.0
pandas: 0.23.4
numpy: 1.15.2
After hyper-parameter tuning on the validation set, the hyper-parameter values described above are the optimal ones; an RMSE of 0.9315 is finally obtained on the test set.
Step 9: use the model to make recommendations to a specific user.
After training is finished, the saved model is the best model of the training process. To recommend for a user: read the saved model parameters; process the user's personal information into vector form in the same way user personal information was preprocessed during training; process the content information of the user's history products into vector form in the same way product content information was preprocessed during training; take all products that have not interacted with the user as products to be predicted and process their content information likewise; predict the matching degree between the user and each product to be predicted in turn; sort all products to be predicted by matching degree; and recommend the top-N results to the user.
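The ranking in step 9 can be sketched as follows: every candidate product the user has not interacted with is scored and the top-N ids are returned. The `predict` callable stands in for the trained content-based matching network and is an assumption of the example, not part of the patent.

```python
def recommend_top_n(user_vec, candidates, predict, n=10):
    """Rank all non-interacted products by predicted matching degree
    and return the top-N product ids.

    `predict(user_vec, product_vec)` stands in for the trained
    content-based matching network (a placeholder here)."""
    scored = [(pid, predict(user_vec, vec)) for pid, vec in candidates.items()]
    scored.sort(key=lambda t: t[1], reverse=True)   # highest match first
    return [pid for pid, _ in scored[:n]]

# Toy stand-in: "matching degree" is the dot product of 2-d vectors
toy_predict = lambda u, p: sum(a * b for a, b in zip(u, p))
cands = {"m1": (1.0, 0.0), "m2": (0.0, 1.0), "m3": (0.7, 0.7)}
print(recommend_top_n((1.0, 0.2), cands, toy_predict, n=2))  # ['m1', 'm3']
```

In the full system the candidate vectors would come from the product-content preprocessing pipeline and `predict` would run the saved network, but the sort-and-slice step is the same.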

Claims (3)

1. A content-based matching recommendation method for user personalized products, characterized in that it comprises a user-based random batch sampling method and a content-based user product matching method,
the random batch sampling method based on the user comprises the following steps:
step 1.1, arranging an interaction information file, a user information file and a product information file, and dividing a test set and a training set;
step 1.2, encoding the user information and the product information into numerical vectors;
step 1.3, for each user, randomly sampling a random number of history records, and repeatedly applying a leave-one-out scheme to the sampled result to determine the history records and the target product;
step 1.4, outputting the user id, the historical record product id list, the target product id and the label set which are orderly arranged;
the user product matching method based on the content comprises the following steps:
step 2.1, establishing a network;
step 2.1.1, input positions of a user id, a historical record product id list and a target product id are reserved, and the information input at the corresponding position passes through a search layer to obtain corresponding user information, historical record product information and a code of the target product information;
step 2.1.2, mapping the codes of the user information, the historical record product information and the target product information to a user representation hidden factor space, a user historical record product hidden factor space and a target product hidden factor space with the same dimensionality through a u network, a p network and a q network respectively;
step 2.1.3, carrying out self-adaptive weighting on the hidden factors of the user historical record products according to the hidden factors of the target products and the hidden factors of the user historical record products to obtain hidden factors of the user historical preference;
step 2.1.4, weighting the user historical preference hidden factor and the user representation hidden factor to obtain a user portrait;
step 2.1.5, predicting matching scores through a full connection layer by using the user portrait and the hidden factors of the target product;
step 2.1.6, correcting the matching score by using coeff and bias;
2.2, according to a random batch sampling method of batch input based on users, obtaining an ordered user id, a historical record product id list and a set of target product ids and labels, and respectively training, adjusting and evaluating a network model on a training set, a verification set and a test set;
and 2.3, inputting a specific user and a history record of the specific user, predicting the scores of the specific user on all unviewed movies by using a user product matching network based on the content, sequencing the scores, and finally outputting a top-N recommendation result.
2. The content-based matching recommendation method for user personalized products according to claim 1, characterized in that: in the step 1.3, for a user having N history records, a random number x between 1 and N is randomly generated to represent the number of history records to sample; x records are sampled from the user's history randomly and without replacement, yielding x history product ids; for these x product ids, the product id at each position is selected in turn as the target product id, its corresponding score as the label, and the remaining history product ids as the user's history records, finally obtaining x sequences each containing the user id, a history product id list, a target product id and a label.
3. The method for matching recommendation of a content based user personalized product according to claim 1, characterized by: the content-based user product matching method comprises the following steps:
step 2.1.1, the first three items of each (user id, history product id list, target product id, label) tuple obtained by the user-based random batch sampling method are input into the network, and the search layer yields the codes of the user information, the user history product content information and the target product content information, the user having n history records; for a new user, recommendation is made from the personal information the user provides, the history information is empty, and the input history product feature vectors are all-zero vectors;
step 2.1.2, input content information codes are mapped through a neural network to obtain corresponding hidden factor representation, three groups of content mapping networks are provided, wherein a u network maps user information codes to a user representation hidden factor space, a p network maps user history record product content information codes to a user history record product hidden factor space, and a q network maps target product content information codes to a target product hidden factor space;
wherein each network is composed of one or more layers of fully-connected neural networks, and the calculation method of each layer of fully-connected neural network is as shown in formula (1);
y_0 = f(Wx + b_0)   (1)
where y_0 is the output of the layer, x is the input of the layer, W is a weight matrix of shape (output vector dimension, input vector dimension), b_0 is a bias vector of shape (output vector dimension, 1), and f is the activation function; W and b_0 are trainable variables updated by the gradient descent method; a one-layer forward neural network is adopted, the hidden-factor length, i.e. the length of the output layer y_0, is set to 16, and the activation function is set to relu;
step 2.1.3: firstly, carrying out operation in the formula (2) on each target product hidden factor and a user history record product hidden factor;
y = h^T f(W(p ⊕ q) + b_1)   (2)
where p and q are a user history product hidden factor and the target product hidden factor respectively, and ⊕ is a way of combining the two, here the concatenation of p and q; f denotes a one-layer neural network whose output is a vector of length 16; an inner product with the length-16 vector h yields a scalar y representing the contribution of that user history product to the target product prediction; h, W and b_1 in equation (2) are trainable variables updated by the gradient descent method, giving n contribution degrees (y_1, y_2, ..., y_n), one for each user history product hidden factor;
the contribution degree is normalized to be within the interval (0,1) using the softmax function in equation (3) below, and the sum of them is guaranteed to be equal to 1;
weight_i = exp(y_i) / (Σ_{j=1}^n exp(y_j))^β   (3)
after the adaptive weighting network, n weighting coefficients (weight_1, weight_2, ..., weight_n) are obtained, one for each user history product hidden factor, wherein the β in the denominator lies in the (0,1) interval;
the weighting coefficients are used to weight and sum the corresponding length-16 user history hidden factors, obtaining the final length-16 user historical preference hidden factor;
step 2.1.4: the user history preference hidden factor and the user representation hidden factor both adopt 16-dimensional vectors, and a proportional scalar of the user history preference hidden factor and the user representation hidden factor is obtained by calculation by using a formula (4) and a formula (5);
a = h^T f(W·p_h + b_1)   (4)
b = h^T f(W·p_p + b_1)   (5)
where p_h is the input user historical preference hidden factor, p_p is the user representation hidden factor, and a and b are the proportional scalars corresponding to each; W, b_1 and h in the two equations are trainable variables shared between the two computations and updated by a gradient descent algorithm;
the weighted combination of the user historical preference hidden factor and the user representation hidden factor, i.e. the user portrait u, is obtained with equation (6); if a user has no history records, the user portrait obtained here automatically reduces to the user representation hidden factor, with no extra handling;
u = (exp(a)·p_h + exp(b)·p_p) / (exp(a) + exp(b))   (6)
step 2.1.5: splicing the user portrait and the hidden factors of the target product, and passing through a two-layer neural network, wherein the number of neurons in the middle layer is 16, and the activation function adopts relu; the number of neurons in the output layer is 1, and no activation function is adopted;
step 2.1.6: modifying the match score using coeff and the bias;
the predicted matching score prediction is corrected before being output as the true score, the correction process being as in the following equation (7):
score = coeff · prediction + b_i + b_u + b_2   (7)
where b_i, b_u and b_2 are the product score bias, the user score bias and the global score bias respectively; each user and each product has its own score bias, the global bias is shared, and all are trainable parameters obtained by gradient descent training; coeff is computed as in equation (8):
coeff = 1 / |R+|^α   (8)
where |R+| denotes the number of user history records, and α is a hyper-parameter in the (0,1) interval;
step 2.2, applying the user-based random batch sampling method user by user to obtain ordered sets of user ids, history product id lists, target product ids and labels for training, tuning and evaluating the model; that is, constructing the loss function and training the model;
the historical records of the user comprise products which interact with the user, the label is the score of the user on the products to be predicted, and the training adopts the mean square error loss as shown in the formula (9);
Loss = (1/N) · Σ_{i=1}^N (output_i − y'_i)²   (9)
where N is the number of all predicted scores, output_i is the ith predicted score, and y' is the label of the corresponding product to be predicted; the loss of each batch is computed and the parameters of the whole network are updated using the gradient descent back-propagation algorithm; an epoch means that the user-based random batch sampling method is applied to every user and the results are input into the network for training; after each epoch, the matching degree is predicted for each user using the test set data and the evaluation index is computed; when predicting product scores, the evaluation index used is the root mean square error, as in equation (10);
RMSE = √((1/N) · Σ_{i=1}^N (output_i − y'_i)²)   (10)
where N is the number of all predicted scores, output_i is the ith predicted score, and y' is the label of the corresponding product to be predicted; if the RMSE of the current epoch is smaller than the smallest evaluation index among previous epochs in the training history, the model at that moment is saved;
step 2.3, inputting a specific user and the user's history records, predicting the user's scores for all unviewed movies using the content-based user product matching network, sorting the scores, and finally outputting a top-N recommendation result; that is, using the optimal trained model to recommend for a given user: the model parameters are read; the user's personal information is processed into vector form in the same way user personal information was preprocessed during training; the content information of the user's history products is processed into vector form in the same way product content information was preprocessed during training; all products that have not interacted with the user are taken as products to be predicted and their content information is processed likewise; the matching degree between the user and each product to be predicted is predicted in turn; all products to be predicted are ranked by matching degree; and the top-N results are recommended to the user.
CN201910685469.6A 2019-07-27 2019-07-27 Content-based matching recommendation method for user personalized products Active CN110415081B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910685469.6A CN110415081B (en) 2019-07-27 2019-07-27 Content-based matching recommendation method for user personalized products


Publications (2)

Publication Number Publication Date
CN110415081A CN110415081A (en) 2019-11-05
CN110415081B true CN110415081B (en) 2022-03-11

Family

ID=68363479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910685469.6A Active CN110415081B (en) 2019-07-27 2019-07-27 Content-based matching recommendation method for user personalized products

Country Status (1)

Country Link
CN (1) CN110415081B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308686B (en) * 2020-11-26 2021-05-18 江苏科源网络技术有限公司 Intelligent recommendation method
CN112784123B (en) * 2021-02-25 2023-05-16 电子科技大学 Cold start recommendation method for graph network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6119112A (en) * 1997-11-19 2000-09-12 International Business Machines Corporation Optimum cessation of training in neural networks
CN108763362A (en) * 2018-05-17 2018-11-06 浙江工业大学 Method is recommended to the partial model Weighted Fusion Top-N films of selection based on random anchor point
CN109446430A (en) * 2018-11-29 2019-03-08 西安电子科技大学 Method, apparatus, computer equipment and the readable storage medium storing program for executing of Products Show


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A Survey of Deep-Learning-Based Recommender Systems" (基于深度学习的推荐系统研究综述); Huang Liwei (黄立威); Chinese Journal of Computers (计算机学报); 2018-07-31; full text *
Xiangnan He; Zhankui He; Jingkuan Song; Zhenguang Liu; Yu-Gang J. "NAIS: Neural Attentive Item Similarity Model for Recommendation". IEEE Transactions on Knowledge and Data Engineering, 2018. *

Also Published As

Publication number Publication date
CN110415081A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN112598462B (en) Personalized recommendation method and system based on collaborative filtering and deep learning
CN110046304B (en) User recommendation method and device
Shi et al. Transductive semi-supervised deep learning using min-max features
Ren et al. Compositional coding capsule network with k-means routing for text classification
US11494644B2 (en) System, method, and computer program for recommending items using a direct neural network structure
Lee et al. Deep learning based recommender system using cross convolutional filters
CN111079409B (en) Emotion classification method utilizing context and aspect memory information
CN113343125B (en) Academic accurate recommendation-oriented heterogeneous scientific research information integration method and system
CN117198466B (en) Diet management method and system for kidney disease patients
CN110415081B (en) Content-based matching recommendation method for user personalized products
CN117236410B (en) Trusted electronic file large language model training and reasoning method and device
CN112488301A (en) Food inversion method based on multitask learning and attention mechanism
CN113408582A (en) Training method and device of feature evaluation model
Ayyadevara Neural Networks with Keras Cookbook: Over 70 recipes leveraging deep learning techniques across image, text, audio, and game bots
Leathart et al. Temporal probability calibration
CN117216381A (en) Event prediction method, event prediction device, computer device, storage medium, and program product
CN116843410A (en) Commodity recommendation method and system based on size data fusion
CN116452293A (en) Deep learning recommendation method and system integrating audience characteristics of articles
Kwon et al. Improving RNN based recommendation by embedding-weight tying
CN111414539B (en) Recommendation system neural network training method and device based on feature enhancement
CN114936723A (en) Social network user attribute prediction method and system based on data enhancement
Jaya et al. Analysis of convolution neural network for transfer learning of sentiment analysis in Indonesian tweets
Hilmiaji et al. Identifying Emotion on Indonesian Tweets using Convolutional Neural Networks
CN118228718B (en) Encoder processing method, text processing method and related equipment
Nkhata et al. Sentiment analysis of movie reviews using bert

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant