CN113420221B - Interpretable recommendation method integrating implicit article preference and explicit feature preference of user - Google Patents

Interpretable recommendation method integrating implicit article preference and explicit feature preference of user Download PDF

Info

Publication number
CN113420221B
CN113420221B CN202110747613.1A CN202110747613A CN113420221B CN 113420221 B CN113420221 B CN 113420221B CN 202110747613 A CN202110747613 A CN 202110747613A CN 113420221 B CN113420221 B CN 113420221B
Authority
CN
China
Prior art keywords
user
article
feature
expression
explicit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110747613.1A
Other languages
Chinese (zh)
Other versions
CN113420221A (en
Inventor
刘柏嵩
江学勇
钦蒋承
董倩
张云冲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo University
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN202110747613.1A priority Critical patent/CN113420221B/en
Publication of CN113420221A publication Critical patent/CN113420221A/en
Application granted granted Critical
Publication of CN113420221B publication Critical patent/CN113420221B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation

Abstract

The invention discloses an interpretable recommendation method fusing implicit article preference and explicit feature preference of a user, which can greatly improve the accuracy and interpretability of rating prediction by mining the feature preference of the user and the feature information of an article capturing time sequence information through extracting feature words in comments and mining the implicit expressions of the user and the article through the historical behaviors of the user; then, by extracting feature words in the user comments, capturing time sequence information in the comments by using a GRU (group-based Unit), and obtaining explicit expressions of the user and the article by introducing an attention mechanism and a feature attraction mechanism; and finally, combining the implicit expression and the display expression to obtain the final expression of the user and the article. The method has the advantages of high rationality of the recommendation result and capability of providing recommendation explanation while ensuring the recommendation precision.

Description

Interpretable recommendation method fusing implicit article preference and explicit feature preference of user
Technical Field
The invention relates to the technical field of computer personalized recommendation methods, in particular to an interpretable recommendation method fusing implicit article preference and explicit feature preference of a user.
Background
With the rapid development of internet technology in recent years, the problem of information overload is becoming more serious. The classified catalogue and the search engine alleviate the problem of information overload to a certain extent, but both have certain limitations. Therefore, the recommendation system has come up, helps users to complete information screening by analyzing the historical behaviors of the users, and can find out the potential interest preference of the users. The recommendation system does not need the user to provide explicit needs, but actively finds information that can satisfy the user's interests and needs from a large amount of information for the user.
Nowadays, the recommendation system is applied to all aspects of our life, a plurality of new recommendation technologies are promoted, the accuracy of a recommendation algorithm is improved, and the recommendation effect is greatly improved. Conventional recommendation systems often lack an explanation of the recommendation result, and interpretable recommendation refers to giving an explanation of recommending a commodity while recommending the commodity to a user. By giving an explanation of the recommendation, the transparency of the system and the confidence level of the user and the use experience can be increased, and the user is helped to make a selection more quickly and accurately.
In addition to using the scoring information of users in recommendation systems, attempts have been made in recent years to add user comment data to improve the accuracy and interpretability of recommendation systems. The evaluation of the user on the article reflects the display preference of the user to a certain extent, so that the explanation of the recommendation by using the user comment data is one of the research hotspots of many existing recommendation works. Although many related scholars have conducted research in recent years that could explain the field of recommendations, there are still some problems with existing work. Existing approaches almost all model with all comments of a user or an item and deal with the comments of the user and the item in the same way. On one hand, the noise in the comment data is large, and the information amount transferred by a large part of text content is extremely small. On the other hand, the comments of the users and the articles are greatly different, the comments of the users are sent by one user and can represent the characteristic preference of the users, and the comments of the articles are sent by a plurality of users, so that the evaluation dimensions are different and the subjectivity is strong, and therefore the characteristic information of the articles cannot be objectively reflected by the comments of the articles.
Although the current recommendation system also has the concept of introducing attention weight based on item characteristics and attention weight based on user characteristic preference, the attention weight based on item characteristics only describes the intensity of a certain characteristic of the item and is irrelevant to the target user; the attention weight based on the user feature preference describes the preference strength of a target user for a certain feature, and an article to be recommended may not have the feature or the feature weight is small, so that it is not reasonable to make a recommendation explanation only by using the attention weight based on the user feature preference or the article feature. For example, the attention weights of the target user u to the feature 1, the feature 2 and the feature 3 are 0.8, 0.1 and 0.1 respectively, and the attention weights of the item to be recommended v to the feature 1, the feature 2 and the feature 3 are 0.1, 0.1 and 0.8 respectively. When recommending an item v to a target user u, if feature 1 is used as the recommendation interpretation, although the preference degree of the user u for feature 1 is the highest, the intensity of feature 1 of the item v is very small, obviously unreasonable; if feature 3 is used as the recommended interpretation, although feature 3 of item v is the most intense, user u has a very small preference for feature 3, and it is obviously not reasonable. Therefore, it is urgently needed to develop a more reasonable and more accurate interpretable recommendation method integrating implicit item preference and explicit feature preference of a user, so that a recommendation system is more perfect.
Disclosure of Invention
The invention aims to provide an interpretable recommendation method fusing implicit article preference and explicit feature preference of a user. The method has the advantages of high recommendation result reasonability and capability of providing recommendation explanation while ensuring recommendation precision.
The technical scheme of the invention is as follows: an interpretable recommendation method fusing implicit article preference and explicit feature preference of a user comprises the following steps:
step A, extracting the latest p articles from the historical records of the user to obtain the implicit expression of the user;
b, extracting the latest q users from the preference records of the articles to obtain the implicit expression of the articles;
c, extracting feature preferences of the user from the comments of the user, calculating the attention weight of the user to each feature, and obtaining the explicit expression of the user;
d, extracting feature information of the article from the comment of the article, calculating the attention weight of each feature to the article, calculating the initial attraction of the feature information of the article to the user according to the explicit expression result of the user in the step C, calculating the final attraction of each feature of the article to the user according to the attention weight of each feature to the article and the initial attraction of the feature information of the article to the user, and obtaining the explicit expression of the article;
Step E, obtaining the final expression of the user according to the implicit expression of the user in the step A and the explicit expression of the user in the step C; obtaining the final expression of the article according to the implicit expression of the article in the step B and the explicit expression of the article in the step D;
and F, generating a recommendation list and giving a recommendation explanation.
Compared with the prior art, the invention has the beneficial effects that: the attractiveness of each feature of the object to the target user is calculated according to the feature preference of the target user, and the explicit feature expression of the object is determined according to the attractiveness of the feature, so that the recommendation precision and reasonability are guaranteed. Specifically, when the explicit expression of the user is obtained, the attention mechanism is introduced based on the feature words, and the attention weight of the user to each feature is calculated, so that the explicit expression of the user is obtained; when the explicit expression of the article is obtained, introducing an attraction mechanism, firstly calculating attention weight of each feature to the article, then calculating initial attraction of feature information of the article to a user according to an explicit expression result of the user, and then calculating final attraction of each feature of the article to the user according to the attention weight and the initial attraction so as to obtain the most accurate and reasonable explicit expression of the article; and finally, fusing the implicit preference and the explicit characteristic preference of the user to obtain the final expression of the user, fusing the implicit expression and the explicit characteristic information of the article to obtain the final expression of the article, and effectively guaranteeing the recommendation precision and rationality.
In the aforementioned interpretable recommendation method combining the implicit item preference and the explicit feature preference of the user, the obtaining of the user explicit expression in step C specifically includes the following steps:
step c1, extracting feature words: extracting characteristic words from the comments of the user, converting the characteristic words into word vectors, and obtaining a characteristic word sequence of the user
Figure BDA0003143491230000041
Figure BDA0003143491230000042
A word vector representing the ith feature word of user u;
step c2, capturing time sequence information: feature word sequence
Figure BDA0003143491230000043
Sending the data into a GRU network according to time to obtain a new feature word sequence
Figure BDA0003143491230000044
Figure BDA0003143491230000045
The vector representation containing the time sequence information is obtained after the ith characteristic word representing the user u passes through the GRU;
step c3, attention-drawing mechanism: and measuring the preference degree of the user to the features by using the attention weight based on the features, wherein the calculation formula of the attention weight of the user u to the ith feature is as follows:
Figure BDA0003143491230000046
wherein, W u ,b u Learnable parameters for the layer network;
step c4, obtaining the explicit expression of the user:
Figure BDA0003143491230000047
in the aforementioned interpretable recommendation method fusing the implicit item preference and the explicit feature preference of the user, the obtaining of the explicit expression of the item in the step D specifically includes the following steps:
step d1, extracting feature words: extracting characteristic words from the comments of the article, converting the characteristic words into word vectors, and obtaining a characteristic word sequence of the article
Figure BDA0003143491230000048
Figure BDA0003143491230000049
A word vector representing the ith feature word of item v;
step d2, capturing timing information: feature word sequence
Figure BDA00031434912300000410
Sending the data into a GRU network according to time to obtain a new feature word sequence
Figure BDA00031434912300000411
Figure BDA00031434912300000412
The vector representation containing time sequence information is obtained after the ith characteristic word representing the article v passes through the GRU;
step d3, attention-drawing mechanism: the expression strength of the feature to the item is measured by using the feature-based attention weight, and the calculation formula of the attention weight of the ith feature to the item v is as follows:
Figure BDA0003143491230000051
wherein, W v ,b v Learnable parameters for the layer network;
step d4, introducing an attraction mechanism: calculating the preliminary attraction of each feature of the item to the user according to the user features, wherein the preliminary attraction of the ith feature of the item v to the user u is calculated according to the formula:
Figure BDA0003143491230000052
wherein, W u,v ,b u,v Learnable parameters for the layer network;
step d5, calculating the final attraction of the ith characteristic of the item v to the user u, wherein the calculation formula is as follows:
Figure BDA0003143491230000053
step d6, obtaining an explicit representation of the item:
Figure BDA0003143491230000054
in the aforementioned interpretable recommendation method for fusing implicit item preference and explicit feature preference of user, in the step a, v is set to { v ═ v 1 ,v 2 ,…,v k V is the set of items preferred by the user, v i Representing the ith item, and setting the item vector sequence v as { v } 1 ,v 2 ,…,v p Inputting into a multilayer perceptron to obtain an implicit expression of a userTo U y
In the interpretable recommendation method combining the implicit item preference and the explicit feature preference of the user, in the step B, u is set to { u ═ 1 ,u 2 ,…,u q U is the set of users who prefer the item i Representing the ith user, and setting the user vector sequence u as { u } 1 ,u 2 ,…,u q Inputting into a multilayer perceptron to obtain an implicit expression V of an article y
In the interpretable recommendation method fusing the implicit item preference and the explicit feature preference of the user, a prediction of a score is further included between the step E and the step F, so that the final expression of the user is U ═ U x :U y ]The final expression of the article is V ═ V x :V y ]The prediction score of user i for item j is:
Figure BDA0003143491230000055
in the aforementioned interpretable recommendation method fusing the implicit item preference and the explicit feature preference of the user, the parameters of the predictive scoring model can be optimized as follows:
Figure BDA0003143491230000061
theta is the set of model parameters and lambda is the regular term coefficient.
In the aforementioned interpretable recommendation method fusing implicit item preferences and explicit feature preferences of a user, the model evaluation uses MAE and RMSE.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The invention is further illustrated by the following figures and examples, which are not to be construed as limiting the invention.
Example (b): an interpretable recommendation method fusing implicit article preference and explicit feature preference of a user is shown in a main flow model diagram of fig. 1, and comprises the following steps:
step A, extracting the latest p articles from the historical records of the user and orderingv={v 1 ,v 2 ,…,v k Is a collection of items, v i Representing the ith item, and setting the item vector sequence v as { v } 1 ,v 2 ,…,v p Inputting a multilayer sensing machine to obtain implicit article preference of a user, namely implicitly expressing U y
Figure BDA0003143491230000062
Figure BDA0003143491230000063
For general symbols, representing a vector U y Is d dimension.
Step B, extracting the latest q users from the preference records of the articles, and making u ═ { u ═ 1 ,u 2 ,…,u q Is the user set, u i Representing the ith user, and setting the user vector sequence u as { u } 1 ,u 2 ,…,u q Inputting into a multilayer perceptron to obtain audience information of the article, namely implicit expression V y ,
Figure BDA0003143491230000064
C, extracting feature preference of the user from the comments of the user, calculating attention weight of the user to each feature, and obtaining explicit expression of the user;
the step C of obtaining the user explicit expression specifically comprises the following steps:
step c1, extracting feature words: extracting characteristic words from the comments of the user, converting the characteristic words into word vectors, and obtaining a characteristic word sequence of the user
Figure BDA0003143491230000065
Wherein the content of the first and second substances,
Figure BDA0003143491230000066
a word vector representing the ith feature word of user u,
Figure BDA0003143491230000067
Step c2, capturing timing sequenceInformation: feature word sequence
Figure BDA0003143491230000071
Sending the data into a GRU network according to time to obtain a new feature word sequence
Figure BDA0003143491230000072
Wherein, the first and the second end of the pipe are connected with each other,
Figure BDA0003143491230000073
the ith characteristic word representing the user u passes through the GRU to obtain a vector representation containing time sequence information,
Figure BDA0003143491230000074
the calculation mode of the characteristic word sequence in the GRU is as follows:
Figure BDA0003143491230000075
Figure BDA0003143491230000076
Figure BDA0003143491230000077
Figure BDA0003143491230000078
wherein z is t To refresh the door, r t A reset gate, sigma (·) is a Logistic function, the output interval is (0, 1),
Figure BDA0003143491230000079
is a candidate state for the current time instant,
Figure BDA00031434912300000710
is an input for the current time of day,
Figure BDA00031434912300000711
is the external state at the previous moment, W * ,U * ,b * E { z, r, f } is the learnable network parameter.
Step c3, attention-drawing mechanism: and measuring the preference degree of the user to the features by using the attention weight based on the features, wherein the calculation formula of the attention weight of the user u to the ith feature is as follows:
Figure BDA00031434912300000712
wherein, W u ,b u Learnable parameters for the layer network;
step c4, obtaining the explicit expression of the user:
Figure BDA00031434912300000713
d, extracting feature information of the article from the comment of the article, calculating the attention weight of each feature to the article, calculating the initial attraction of the feature information of the article to the user according to the explicit expression result of the user in the step C, calculating the final attraction of each feature of the article to the user according to the attention weight of each feature to the article and the initial attraction of the feature information of the article to the user, and obtaining the explicit expression of the article;
The step D of obtaining the explicit expression of the article specifically comprises the following steps:
step d1, extracting feature words: extracting characteristic words from the comments of the article, converting the characteristic words into word vectors, and obtaining a characteristic word sequence of the article
Figure BDA00031434912300000714
Wherein the content of the first and second substances,
Figure BDA0003143491230000081
a word vector representing the ith feature word of item v,
Figure BDA0003143491230000082
step d2, capturing timing information: feature word sequence
Figure BDA0003143491230000083
Sending the data into a GRU network according to time to obtain a new feature word sequence
Figure BDA0003143491230000084
Wherein the content of the first and second substances,
Figure BDA0003143491230000085
the ith characteristic word representing the item v is represented by a vector containing time sequence information obtained after passing through a GRU,
Figure BDA0003143491230000086
the calculation mode of the feature word sequence in the GRU is consistent with the calculation mode in the step c 2;
step d3, attention-drawing mechanism: the expression strength of the feature to the item is measured by using the feature-based attention weight, and the calculation formula of the attention weight of the ith feature to the item v is as follows:
Figure BDA0003143491230000087
wherein, W v ,b v Learnable parameters for the layer network;
step d4, introducing an attraction mechanism: calculating the preliminary attraction of each feature of the item to the user according to the user features, wherein the preliminary attraction of the ith feature of the item v to the user u is calculated according to the formula:
Figure BDA0003143491230000088
wherein, W u,v ,b u,v Learnable parameters for the layer network;
step d5, calculating The ith characteristic of the article v is finally attractive to the user u, and the calculation formula is as follows:
Figure BDA0003143491230000089
step d6, obtaining an explicit representation of the item:
Figure BDA00031434912300000810
step E, obtaining the final expression U ═ U of the user according to the implicit expression of the user in the step A and the explicit expression of the user in the step C x :U y ],
Figure BDA00031434912300000811
Obtaining a final expression V ═ V of the article according to the implicit expression of the article in the step B and the explicit expression of the article in the step D x :V y ],
Figure BDA00031434912300000812
The user i's prediction score for item j is:
Figure BDA00031434912300000813
the parameters of the model can be obtained by the following optimization problem:
Figure BDA00031434912300000814
Figure BDA0003143491230000091
wherein r is ij And (4) the user i truly scores the item j, theta is a set of model parameters, and lambda is a regular term coefficient.
The evaluation of the recommendation model uses MAE and RMSE, which are calculated as follows:
Figure BDA0003143491230000092
Figure BDA0003143491230000093
wherein r is ij The user i is given a true score for item j.
Step F, scoring according to the prediction
Figure BDA0003143491230000096
And recommending the Top-K item to the user. At the same time according to
Figure BDA0003143491230000094
And
Figure BDA0003143491230000095
generating a recommended interpretation in the form of: you may be interested in [ featured word ], and the item is more relevant to [ featured word ].
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned examples, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and adaptations to those skilled in the art without departing from the principles of the present invention should also be considered as within the scope of the present invention.

Claims (7)

1. An interpretable recommendation method fusing implicit article preference and explicit feature preference of a user is characterized in that: the method comprises the following steps: step A, extracting the latest data from the user's history
Figure DEST_PATH_IMAGE001
An item, obtaining an implicit expression of a user;
step B, extracting the latest item from the preference records of the items
Figure 699378DEST_PATH_IMAGE002
The user obtains the implicit expression of the article;
c, extracting feature preferences of the user from the comments of the user, calculating the attention weight of the user to each feature, and obtaining the explicit expression of the user;
d, extracting feature information of the article from the comment of the article, calculating the attention weight of each feature to the article, calculating the initial attraction of the feature information of the article to the user according to the explicit expression result of the user in the step C, calculating the final attraction of each feature of the article to the user according to the attention weight of each feature to the article and the initial attraction of the feature information of the article to the user, and obtaining the explicit expression of the article;
step E, obtaining the final expression of the user according to the implicit expression of the user in the step A and the explicit expression of the user in the step C; obtaining the final expression of the article according to the implicit expression of the article in the step B and the explicit expression of the article in the step D;
Step F, generating a recommendation list and giving a recommendation explanation;
the step D of obtaining the explicit expression of the article specifically comprises the following steps:
step d1, extracting feature words: extracting characteristic words from the comments of the article, converting the characteristic words into word vectors, and obtaining a characteristic word sequence of the article
Figure DEST_PATH_IMAGE003
Figure 712333DEST_PATH_IMAGE004
Representing an article
Figure DEST_PATH_IMAGE005
To (1)
Figure 594839DEST_PATH_IMAGE006
A word vector of the individual feature words;
step d2, capturing time sequence information: feature word sequence
Figure 760372DEST_PATH_IMAGE003
Sending the data into GRU network according to time to obtain new characteristicsToken sequence
Figure DEST_PATH_IMAGE007
Figure 702920DEST_PATH_IMAGE008
Representing an article
Figure 203172DEST_PATH_IMAGE005
To (1) a
Figure 623789DEST_PATH_IMAGE006
Vector representation containing time sequence information is obtained after the characteristic words pass through GRU;
step d3, attention-drawing mechanism: using feature-based attention weights to measure the expressive intensity of features on an item, the first
Figure 96358DEST_PATH_IMAGE006
Characteristic pair article
Figure 524322DEST_PATH_IMAGE005
The formula for calculating the attention weight of (1) is:
Figure DEST_PATH_IMAGE009
wherein, the first and the second end of the pipe are connected with each other,
Figure 449553DEST_PATH_IMAGE010
,
Figure DEST_PATH_IMAGE011
learnable parameters for the layer network;
step d4, introducing an attraction mechanism: calculating a preliminary appeal of each feature of the item to the user based on the user features, the item
Figure 736177DEST_PATH_IMAGE005
To (1) a
Figure 610724DEST_PATH_IMAGE006
Individual characteristics to the user
Figure 895074DEST_PATH_IMAGE012
Preliminary attractive force of
The calculation formula is as follows:
Figure DEST_PATH_IMAGE013
wherein, the first and the second end of the pipe are connected with each other,
Figure 369918DEST_PATH_IMAGE014
,
Figure DEST_PATH_IMAGE015
learnable parameters for the layer network;
step d5, calculating item
Figure 709502DEST_PATH_IMAGE005
To (1) a
Figure 891084DEST_PATH_IMAGE006
Individual characteristics to the user
Figure 80757DEST_PATH_IMAGE012
And final attraction force is calculated according to the formula:
Figure 42897DEST_PATH_IMAGE016
Step d6, obtaining an explicit representation of the item:
Figure DEST_PATH_IMAGE017
2. the method of claim 1, wherein the method comprises: the step C of obtaining the user explicit expression specifically comprises the following steps:
step c1, extracting feature words: extracting characteristic words from the comments of the user, converting the characteristic words into word vectors, and obtaining a characteristic word sequence of the user
Figure 609007DEST_PATH_IMAGE018
Figure DEST_PATH_IMAGE019
Representing a user
Figure 723725DEST_PATH_IMAGE012
To (1) a
Figure 84299DEST_PATH_IMAGE006
A word vector of the individual feature words;
step c2, capturing timing information: feature word sequence
Figure 533735DEST_PATH_IMAGE018
Sending the data into a GRU network according to time to obtain a new feature word sequence
Figure 903537DEST_PATH_IMAGE020
Figure DEST_PATH_IMAGE021
Representing a user
Figure 794132DEST_PATH_IMAGE012
To (1) a
Figure 905701DEST_PATH_IMAGE006
Vector representation containing time sequence information is obtained after the characteristic words pass through GRU;
step c3, attention-drawing mechanism: using feature-based attention weights to measure user preference for features
Figure 514537DEST_PATH_IMAGE012
To the first
Figure 688029DEST_PATH_IMAGE006
The attention weight of each feature is calculated by the formula:
Figure 761028DEST_PATH_IMAGE022
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE023
,
Figure 728984DEST_PATH_IMAGE024
learnable parameters for the layer network;
step c4, obtaining the explicit expression of the user:
Figure DEST_PATH_IMAGE025
3. the interpretable recommendation method fusing user implicit item preferences with explicit feature preferences according to claim 1, wherein: in the step A, order
Figure 638165DEST_PATH_IMAGE026
For the set of items preferred by the user,
Figure DEST_PATH_IMAGE027
is shown as
Figure 412086DEST_PATH_IMAGE006
An article, vector-sequencing the article
Figure 277274DEST_PATH_IMAGE028
Inputting a multi-layer perceptron to obtain implicit expression of a user
Figure DEST_PATH_IMAGE029
4. The method of claim 1, wherein the method comprises: in the step B
Figure 727715DEST_PATH_IMAGE030
To favor the set of users of the item,
Figure DEST_PATH_IMAGE031
is shown as
Figure 311143DEST_PATH_IMAGE006
A user, a user vector sequence
Figure 623176DEST_PATH_IMAGE030
Inputting the multi-layer perceptron to obtain the implicit expression of the article
Figure 608450DEST_PATH_IMAGE032
5. The interpretable recommendation method fusing user implicit item preferences with explicit feature preferences according to claim 1, wherein: the step E and the step F also comprise the prediction of scores, and the final expression of the user is as
Figure DEST_PATH_IMAGE033
The final expression of the article is
Figure 731258DEST_PATH_IMAGE034
User of
Figure 801982DEST_PATH_IMAGE006
To the article
Figure DEST_PATH_IMAGE035
The prediction score of (a) is:
Figure 917705DEST_PATH_IMAGE036
6. the interpretable recommendation method fusing user implicit item preferences with explicit feature preferences according to claim 5, wherein: the parameters of the predictive scoring model can be optimized as follows:
Figure DEST_PATH_IMAGE037
Figure 757485DEST_PATH_IMAGE038
is a set of parameters of the model that are,
Figure DEST_PATH_IMAGE039
is a regular term coefficient.
7. The interpretable recommendation method fusing user implicit item preferences with explicit feature preferences according to claim 6, wherein: the model was evaluated using MAE and RMSE.
CN202110747613.1A 2021-07-01 2021-07-01 Interpretable recommendation method integrating implicit article preference and explicit feature preference of user Active CN113420221B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110747613.1A CN113420221B (en) 2021-07-01 2021-07-01 Interpretable recommendation method integrating implicit article preference and explicit feature preference of user

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110747613.1A CN113420221B (en) 2021-07-01 2021-07-01 Interpretable recommendation method integrating implicit article preference and explicit feature preference of user

Publications (2)

Publication Number Publication Date
CN113420221A CN113420221A (en) 2021-09-21
CN113420221B true CN113420221B (en) 2022-09-09

Family

ID=77720060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110747613.1A Active CN113420221B (en) 2021-07-01 2021-07-01 Interpretable recommendation method integrating implicit article preference and explicit feature preference of user

Country Status (1)

Country Link
CN (1) CN113420221B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850656B (en) * 2021-11-15 2022-08-23 内蒙古工业大学 Personalized clothing recommendation method and system based on attention perception and integrating multi-mode data
CN114519097B (en) * 2022-04-21 2022-07-19 宁波大学 Academic paper recommendation method for heterogeneous information network enhancement
WO2023225987A1 (en) * 2022-05-27 2023-11-30 京东方科技集团股份有限公司 Correlation degree prediction method and apparatus, and machine learning model training method and apparatus

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8977654B1 (en) * 2012-09-21 2015-03-10 Google Inc. Assigning classes to users of an online community
CN109902229B (en) * 2019-02-01 2019-12-24 中森云链(成都)科技有限责任公司 Comment-based interpretable recommendation method
CN111488524B (en) * 2020-04-08 2022-08-16 吉林大学 Attention-oriented semantic-sensitive label recommendation method

Also Published As

Publication number Publication date
CN113420221A (en) 2021-09-21

Similar Documents

Publication Publication Date Title
CN113420221B (en) Interpretable recommendation method integrating implicit article preference and explicit feature preference of user
CN111339415B (en) Click rate prediction method and device based on multi-interactive attention network
CN108959603B (en) Personalized recommendation system and method based on deep neural network
CN111222332B (en) Commodity recommendation method combining attention network and user emotion
CN104935963A (en) Video recommendation method based on timing sequence data mining
CN113158033A (en) Collaborative recommendation model construction method based on knowledge graph preference propagation
CN111242729A (en) Serialization recommendation method based on long-term and short-term interests
CN115917535A (en) Recommendation model training method, recommendation device and computer readable medium
CN115048586B (en) Multi-feature-fused news recommendation method and system
CN112288554B (en) Commodity recommendation method and device, storage medium and electronic device
CN107016122A (en) Knowledge recommendation method based on time-shift
Fakhfakh et al. Deep learning-based recommendation: Current issues and challenges
Yakhchi et al. Towards a deep attention-based sequential recommender system
Dai et al. BTR: a feature-based Bayesian task recommendation scheme for crowdsourcing system
Khan et al. Comparative analysis on Facebook post interaction using DNN, ELM and LSTM
CN113190751B (en) Recommendation method fusing keyword generation
CN112364245B (en) Top-K movie recommendation method based on heterogeneous information network embedding
CN111966888A (en) External data fused interpretable recommendation method and system based on aspect categories
CN110851694A (en) Personalized recommendation system based on user memory network and tree structure depth model
CN114817692A (en) Method, device and equipment for determining recommended object and computer storage medium
CN115712780A (en) Information pushing method and device based on cloud computing and big data
CN114358364A (en) Attention mechanism-based short video frequency click rate big data estimation method
Mao et al. TCR: Temporal-CNN for reviews based recommendation system
Li et al. Research on recommendation algorithm based on e-commerce user behavior sequence
Ma et al. In-depth Recommendation Model Based on Self-Attention Factorization.

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant