CN110659411B - Personalized recommendation method based on neural attention self-encoder - Google Patents
- Publication number: CN110659411B (application number CN201910773079.4A)
- Authority
- CN
- China
- Prior art keywords
- representation
- item
- vector
- user
- attention
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a personalized recommendation method based on a neural attention self-encoder, which comprises the following steps: encoding the user's binary ratings of items with a self-encoder to generate a scoring feature representation of each item; generating a description feature representation of the item through a word attention module; fusing the item's scoring feature representation and description feature representation with a neural gating module to obtain the item's representation vector; computing the representation vectors of the candidate items and of the user's historically accessed items through the item representation generation module, and inputting them into an attention network to generate the user's representation vector; taking the inner product of the user representation vector and the candidate item representation vector to obtain the predicted probability that the user will access the candidate item; and sorting the computed predicted probabilities of the different candidate items in descending order and taking the top-K candidate items as the user's personalized recommendation list.
Description
Technical Field
The invention relates to the technical field of computers, in particular to a personalized recommendation method based on a neural attention self-encoder.
Background
With the rapid development of the Internet, online content has grown explosively. Helping users find the information that interests them among massive amounts of online content, and thereby alleviating the information overload problem, is the main problem a personalized recommendation system must solve. The ultimate goal of a personalized recommendation system is to understand its users, and one of its core technologies is user preference modeling. Therefore, how to construct a user's preference features from the user's historical interaction records is the key to a personalized recommendation system. Most traditional user preference modeling methods characterize a user's preferences from the items the user has interacted with historically. However, those items contain only limited feature information, so the user preferences generated from item features alone are neither accurate nor varied enough to comprehensively represent the user's interests. Meanwhile, such methods do not account for the sparsity and cold-start problems of user-item interaction data, and therefore cannot accurately recommend items of interest according to the user's personalized preferences.
A previously published invention patent, "Recommendation system and recommendation method based on an attention mechanism" (publication number CN 109087130 A), maps the items in a user's history and the candidate items into item feature vectors through a feature embedding layer, learns the user's representation through an attention mechanism to obtain the user feature vector, and outputs a predicted value of the user for an item through a fusion output layer based on the item feature vectors and the user feature vector. That method generates item feature vectors through one-hot encoding and ignores the semantic information of and between items, so the resulting item feature vectors contain extremely limited feature information and cannot comprehensively and accurately model the user's personalized preferences. The present invention describes a personalized recommendation method based on a neural attention self-encoder. First, a self-encoder generates a scoring feature representation of an item from users' implicit ratings of the item (the rating takes only the values 0 and 1 and is therefore also called a binary rating; 0 denotes dislike and 1 denotes like). A word attention module then generates a description feature representation of the item from the item's description content. A neural gating module fuses the scoring feature representation and the description feature representation to generate the representation vectors of the user's historically accessed items and of the candidate items. These representation vectors are then input into an attention network to compute the user's representation vector.
After the representation vectors of the user and the candidate item are obtained, the probability that the user will access the candidate item is computed via an inner product operation, and the user's recommendation list is generated by sorting the predicted probabilities of the different candidate items in descending order.
Disclosure of Invention
In view of this, the present invention provides a personalized recommendation method based on a neural attention self-encoder, which uses a word attention module to fuse an item's description information into a description feature representation, so as to mine the item's feature information more fully.
The invention solves the technical problems by the following technical means:
A personalized recommendation method based on a neural attention self-encoder comprises the following steps:
encoding the user's binary ratings of items with a self-encoder and generating the scoring feature representation of each item;
generating the description feature representation of the item through a word attention module;
fusing the item's scoring feature representation and description feature representation with a neural gating module to obtain the item's representation vector;
computing the representation vectors of the candidate items and of the user's historically accessed items through the item representation generation module, and inputting them into an attention network to generate the user's representation vector;
taking the inner product of the user representation vector and the candidate item representation vector to obtain the predicted probability that the user will access the candidate item;
and sorting the computed predicted probabilities of the different candidate items in descending order and taking the top-K candidate items as the user's personalized recommendation list.
Further, the self-encoder generates the scoring feature representation of item i using the following formula,
wherein the weight matrices are the parameters to be learned; m is the number of users, h_1 is the dimension of the first hidden layer, and h is the dimension of the bottleneck layer; r_i is a multi-hot vector, and r_{u,i} = 1 indicates that user u likes item i.
Further, generating the description feature representation of the item through the word attention module specifically comprises:
the word attention module performs a one-hot encoding operation on each word in the item's word sequence, so that each word is represented as a one-hot vector;
converting each one-hot vector into a dense, low-dimensional real-valued vector representation through a word embedding matrix;
calculating the attention weights of the word attention module through a two-layer neural network;
performing a weighted summation of the word embeddings in the dense vector representation according to the attention weights to obtain the description feature representation of the item;
expanding the attention weights into an attention weight matrix and multiplying it with the word embedding matrix to obtain a matrix representation of the item;
and converting the matrix representation of the item through an aggregation layer to generate the description feature representation of the item.
Further, a neural gating layer adaptively fuses the item's scoring feature representation and description feature representation to obtain the item's representation vector.
Further, the neural gating layer adaptively fuses the item's scoring feature representation and description feature representation according to the following formula to obtain the item's representation vector,
wherein G is the neural gating unit, and the result is the final item representation vector obtained by fusing the item's scoring feature representation with its description feature representation; W_{g1} ∈ R^{h×h}, W_{g2} ∈ R^{h×h}, and b_g ∈ R^h are the parameters of the neural gating layer.
The beneficial effects of the invention are:
1. Most existing inventions fail to make full use of an item's description information, even though this description, as text containing rich semantics, plays a very important role in generating the item representation vector. The invention therefore proposes using a word attention module to obtain the item's description feature representation, and effectively enriches the feature information of the item representation vector by fusing the item's scoring feature representation with its description feature representation.
2. Existing techniques predict a user's preference for an item with a plain inner product operation; such a linear combination of user and item latent factors greatly limits the expressiveness of matrix factorization. To capture the user's preference for items, the invention instead encodes the user's binary ratings of items with a self-encoder to obtain the item's scoring feature representation. Using a self-encoder makes it possible to learn feature representations that contain richer information.
3. Existing techniques learn an item's feature representation through a bag-of-words model and ignore the fact that different words have different importance. Compared with learning the item's feature representation through a bag of words, the word attention module proposed by the invention can adaptively select the words that carry more information and assign them higher weights, so that the generated description feature representation better reflects the item's description information.
4. The prior art merely regularizes the item's scoring feature representation and description feature representation and cannot extract the most important parts of the two. The invention provides a neural gating module that adaptively fuses the two feature representations and can extract and combine their most important parts.
5. The prior art generates the user's representation vector by simply averaging the representation vectors of the user's historically accessed items, ignoring both that a user has different degrees of interest in different items and that the historically accessed items influence a candidate item differently. To represent the user's diverse interests, the invention uses an attention network to compute the different influences of the user's historically accessed items on the candidate item, so that the generated user representation vector expresses the user's personalized preferences more accurately and comprehensively, and the items recommended to the user are more accurate and more diverse.
Drawings
FIG. 1 is an overall flow chart of the present invention;
FIG. 2 is a diagram of a model framework of the present invention;
FIG. 3 is a schematic diagram of a project representation generation module of the present invention;
fig. 4 is a diagram of an attention network architecture of the present invention.
Detailed Description
The invention will be described in detail below with reference to the figures and specific embodiments:
As shown in FIGS. 1 to 4, a personalized recommendation method based on a neural attention self-encoder of the present invention comprises:
S1: encode the user's binary ratings of items with the self-encoder and generate the scoring feature representation of each item.
To capture the user's preference for an item, the invention proposes to encode the users' binary rating vector r_i ∈ R^m for item i using a self-encoder; that is, r_i is encoded into the scoring feature representation of item i by the following formula,
wherein the weight matrices are the parameters to be learned; m is the number of users, h_1 is the dimension of the first hidden layer, and h is the dimension of the bottleneck layer; r_i is a multi-hot vector, and r_{u,i} = 1 indicates that user u likes item i.
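The encoding step above can be sketched in code. The two-layer encoder shape, the sigmoid activations, and all dimensions below are illustrative assumptions, since the patent gives its exact formula only as an image:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode_ratings(r_i, W1, b1, W2, b2):
    """Encode a binary (multi-hot) rating vector r_i of length m into a
    bottleneck scoring-feature representation of dimension h."""
    hidden = sigmoid(W1 @ r_i + b1)  # first hidden layer, dimension h1
    z_i = sigmoid(W2 @ hidden + b2)  # bottleneck layer, dimension h
    return z_i

rng = np.random.default_rng(0)
m, h1, h = 6, 4, 3                    # number of users, hidden dim, bottleneck dim
r_i = np.array([1, 0, 1, 0, 0, 1.0])  # users 0, 2 and 5 liked item i
W1, b1 = rng.standard_normal((h1, m)), np.zeros(h1)
W2, b2 = rng.standard_normal((h, h1)), np.zeros(h)
z_i = encode_ratings(r_i, W1, b1, W2, b2)
print(z_i.shape)  # (3,)
```

In a real system the weights would be trained by reconstructing r_i from z_i through a mirrored decoder; here they are random, purely for illustration.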
S2: generate the description feature representation of the item through the word attention module.
Considering that generating the item's representation vector from its scoring feature representation alone is insufficient to fully mine the item's feature information, the invention proposes using a word attention module to fuse the item's description information into a description feature representation.
The specific steps for generating the description feature representation of the item through the word attention module are as follows:
Step 1: each word in the item's word sequence is one-hot encoded, so that every word is represented as a one-hot vector.
Step 2: to facilitate subsequent calculation, the one-hot vectors from step 1 are converted into dense, low-dimensional real-valued vector representations through a word embedding matrix E ∈ R^{h×v}, where h and v are the word-embedding dimension and the vocabulary size, respectively. After this conversion, the text description of the item is represented as:
Step 3: in practice, a user may care more about information such as the item's name or style than about the relationships between the words in the word sequence. For this reason, the invention does not use complex recurrent or convolutional neural networks, but processes the word sequence with a multi-dimensional attention mechanism to learn the item's description feature representation. Word attention aims to generate different influence weights for different words, and then to weight and sum the words according to these weights to learn the item's description feature representation. Given the dense vector representation matrix D_i of item i, the attention weights are calculated by a two-layer neural network:
wherein the weight matrices are the parameters to be learned, and Softmax ensures that all calculated weights sum to 1;
Step 4: according to the attention weights calculated in step 3, the word embeddings in D_i are weighted and summed to obtain the description feature representation of the item.
Step 5: the operation above generates only a single weight value for each word embedding e_i, which typically causes the model to focus on only one particular aspect of the item description rather than the whole. When an item's description contains many words, feature information from multiple aspects may characterize the item better. The invention therefore proposes to obtain multi-dimensional attention by using a matrix instead of the single weight value a_i, generating an attention weight vector for each word embedding. Each dimension of the attention weight vector represents one aspect of the relationships among all the word embeddings in D_i. To extract feature information of d_a aspects from the word embeddings, w_a needs to be extended into a matrix; the effect of this operation is to extract the words in the item description that carry more information. This yields the attention weight matrix:
wherein the omitted symbols are the attention weight matrix and the bias term, respectively, and Softmax operates along the second dimension of its input. Multiplying the attention weight matrix by the word embedding matrix gives the matrix representation of item i:
Step 6: to facilitate subsequent calculation, the matrix representation of the item is converted into a vector representation through the aggregation layer. When the matrix representation of item i is input into this neural network layer, the description feature representation of the item is:
wherein the omitted symbol is a parameter of the aggregation layer and a_t is the activation function; in the invention, a_t is the tanh function.
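Steps 2 to 6 can be sketched as follows. The layer shapes, the use of tanh, the softmax axis, and the flattening inside the aggregation layer are all assumptions made for illustration, since the patent's formulas appear only as images:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def word_attention(D, W1, W_a, W_t):
    """Multi-dimensional word attention over an item's description.
    D is an (h, n) matrix holding n word embeddings of dimension h;
    returns an h-dimensional description feature vector."""
    A = softmax(W_a @ np.tanh(W1 @ D), axis=1)  # (d_a, n); each aspect's weights sum to 1
    P = A @ D.T                                  # (d_a, h) multi-aspect matrix representation
    return np.tanh(W_t @ P.reshape(-1))          # aggregation layer -> description vector (h,)

rng = np.random.default_rng(1)
h, n, d_a = 4, 5, 2                      # embedding dim, words in description, aspects
D = rng.standard_normal((h, n))          # dense word embeddings of the item description
W1 = rng.standard_normal((h, h))         # first attention layer
W_a = rng.standard_normal((d_a, h))      # w_a extended to a matrix (multi-dimensional attention)
W_t = rng.standard_normal((h, d_a * h))  # aggregation-layer parameter
v = word_attention(D, W1, W_a, W_t)
print(v.shape)  # (4,)
```

Each row of A weights the n words for one aspect, so words that matter for several aspects receive high weight in several rows, which is the "select the more informative words" behavior described above.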
S3: fuse the item's scoring feature representation and description feature representation with the neural gating module to obtain the item's representation vector.
The item's scoring feature representation and description feature representation are obtained through steps S1 and S2, respectively. To better learn the user's preference for candidate items, the two representations need to be fused into the item's final representation vector. The item representation generation module of the invention is shown in FIG. 3. Unlike the prior art, which regularizes the item's scoring feature representation and description feature representation, the invention proposes to adaptively fuse the two different feature representations using a neural gating layer:
wherein G is the neural gating unit, and the result is the final item representation vector obtained by fusing the item's scoring feature representation with its description feature representation; W_{g1} ∈ R^{h×h}, W_{g2} ∈ R^{h×h}, and b_g ∈ R^h are the parameters of the neural gating layer. The neural gating module can extract and combine the most important parts of the two feature representations, so that the generated item representation vector contains more useful feature information, captures the user's personalized preferences more comprehensively, and makes the recommended results better match the user's interests.
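A minimal sketch of such a gating layer, assuming the standard sigmoid gate with a per-dimension convex combination G ⊙ z + (1 − G) ⊙ t (the patent's exact fusion formula appears only as an image):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(z_i, t_i, Wg1, Wg2, bg):
    """Adaptively blend the scoring-feature vector z_i and the
    description-feature vector t_i (both of dimension h)."""
    G = sigmoid(Wg1 @ z_i + Wg2 @ t_i + bg)  # gate values in (0, 1), one per dimension
    return G * z_i + (1.0 - G) * t_i          # per-dimension mix of the two representations

rng = np.random.default_rng(2)
h = 4
z_i, t_i = rng.standard_normal(h), rng.standard_normal(h)
Wg1, Wg2 = rng.standard_normal((h, h)), rng.standard_normal((h, h))
bg = np.zeros(h)
p_i = gated_fusion(z_i, t_i, Wg1, Wg2, bg)
print(p_i.shape)  # (4,)
```

Because each output dimension is a convex combination, the fused vector always lies between the corresponding components of z_i and t_i, which is what lets the gate "select" the more informative source per dimension.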
S4: compute the representation vectors of the candidate items and of the user's historically accessed items through the item representation generation module, and input them into the attention network to generate the user's representation vector.
After the representation vectors of the candidate items and of the user's historically accessed items are computed by the item representation generation module, a common approach is to average the representation vectors of all of the user's historically accessed items as the user's preference vector with respect to the candidate item. However, as noted above, a user's preferences for different items differ, and when deciding whether a user will access a candidate item, the user's historically accessed items should be considered to influence that candidate differently. To characterize the user's diverse interests, the invention proposes using an attention network to model the different influences of the user's historically accessed items on the candidate.
The attention network computes an influence weight for each of the user's historically accessed items and performs a weighted summation over them according to these weights to generate the user's final representation vector. The structure of the attention network is shown in FIG. 4; the specific steps are:
Step 2: after the influence weights of the user's historically accessed items on candidate item v_c are generated in step 1, the preference vector of user u with respect to v_c is obtained by computing the weighted sum of the representation vectors of user u's historically accessed items:
wherein the result is the representation vector of user u when the candidate item is v_c, and N_u is the number of items user u has historically accessed.
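The user-representation step can be sketched as below; the additive (concatenate-then-MLP) form of the attention scoring network is an assumption, as this text does not spell out the network's internal structure:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def user_representation(history, candidate, W, b, v):
    """Weight each historically accessed item by its influence on the
    candidate, then sum.  history: (N_u, h); candidate: (h,)."""
    scores = np.array([v @ np.tanh(W @ np.concatenate([p, candidate]) + b)
                       for p in history])  # one influence score per history item
    alpha = softmax(scores)                # influence weights, summing to 1
    return alpha @ history                 # weighted sum -> user vector (h,)

rng = np.random.default_rng(3)
N_u, h = 5, 4                             # history length, representation dimension
history = rng.standard_normal((N_u, h))   # representation vectors of accessed items
candidate = rng.standard_normal(h)        # candidate item representation vector
W, b, v = rng.standard_normal((h, 2 * h)), np.zeros(h), rng.standard_normal(h)
u_vec = user_representation(history, candidate, W, b, v)
print(u_vec.shape)  # (4,)
```

Note that the user vector depends on the candidate: a different candidate yields different influence weights and hence a different user representation, which is how the method expresses diverse interests.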
S5: take the inner product of the user's representation vector and the candidate item's representation vector to obtain the predicted probability that the user will access the candidate item.
After the operations of the preceding steps, the user's representation vector and the candidate item's representation vector are obtained; the probability that user u accesses candidate item v_c is then computed as the inner product of the two vectors:
wherein InrPro denotes the inner product operation, and the result is a scalar representing the probability that user u accesses candidate item v_c, with value range (0, 1).
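A sketch of the prediction step; squashing the inner product with a sigmoid is an assumption here, made because the text states the probability lies in (0, 1) while the exact mapping appears only as an image:

```python
import numpy as np

def predict_probability(u_vec, c_vec):
    """Inner product of user and candidate vectors, squashed to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(u_vec @ c_vec)))

u_vec = np.array([0.2, -0.1, 0.4])   # user representation vector
c_vec = np.array([0.5, 0.3, -0.2])   # candidate item representation vector
p = predict_probability(u_vec, c_vec)
print(round(p, 4))  # 0.4975  (the inner product is -0.01, so p is just below 0.5)
```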
S6: sort the computed predicted probabilities of the different candidate items in descending order and take the top-K candidate items as the user's personalized recommendation list.
After the operations of steps S1 to S5, the predicted probability that the user accesses each candidate item is obtained. The candidate items are sorted in descending order of their predicted probability values, and the top-K items are taken as the user's personalized item recommendation list.
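The ranking step is a plain descending sort over the predicted probabilities; a sketch with hypothetical candidate identifiers:

```python
import numpy as np

def top_k_recommendations(candidate_ids, probabilities, k):
    """Sort candidates by predicted probability, descending, keep the top k."""
    order = np.argsort(probabilities)[::-1]  # indices from largest to smallest probability
    return [candidate_ids[i] for i in order[:k]]

ids = ["a", "b", "c", "d"]                   # hypothetical candidate item ids
probs = np.array([0.2, 0.9, 0.5, 0.7])       # predicted access probabilities
print(top_k_recommendations(ids, probs, 2))  # ['b', 'd']
```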
Although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the spirit and scope of the invention as defined in the appended claims. The techniques, shapes, and configurations not described in detail in the present invention are all known techniques.
Claims (3)
1. A personalized recommendation method based on a neural attention self-encoder is characterized by comprising the following steps:
encoding the user's binary ratings of items with a self-encoder and generating the scoring feature representation of each item;
generating the description feature representation of the item through a word attention module;
fusing the item's scoring feature representation and description feature representation with a neural gating module to obtain the item's representation vector;
computing the representation vectors of the candidate items and of the user's historically accessed items through the item representation generation module, and inputting them into an attention network to generate the user's representation vector;
taking the inner product of the user representation vector and the candidate item representation vector to obtain the predicted probability that the user will access the candidate item;
sorting the computed predicted probabilities of the different candidate items in descending order and taking the top-K candidate items as the user's personalized recommendation list;
wherein the weight matrices are the parameters to be learned; m is the number of users, h_1 is the dimension of the first hidden layer, and h is the dimension of the bottleneck layer; r_i is a multi-hot vector;
generating the description feature representation of the item through the word attention module specifically comprises:
the word attention module performs a one-hot encoding operation on each word in the item's word sequence, so that each word is represented as a one-hot vector;
converting each one-hot vector into a dense, low-dimensional real-valued vector representation through a word embedding matrix;
calculating the attention weights of the word attention module through a two-layer neural network;
performing a weighted summation of the word embeddings in the dense vector representation according to the attention weights to obtain the description feature representation of the item;
expanding the attention weights into an attention weight matrix and multiplying it with the word embedding matrix to obtain a matrix representation of the item;
and converting the matrix representation of the item through an aggregation layer to generate the description feature representation of the item.
2. The personalized recommendation method based on the neural attention self-encoder according to claim 1, wherein: a neural gating layer adaptively fuses the item's scoring feature representation and description feature representation to obtain the item's representation vector.
3. The personalized recommendation method based on the neural attention self-encoder according to claim 2, wherein: the neural gating layer adaptively fuses the item's scoring feature representation and description feature representation according to the following formula to obtain the item's representation vector;
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910773079.4A CN110659411B (en) | 2019-08-21 | 2019-08-21 | Personalized recommendation method based on neural attention self-encoder |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910773079.4A CN110659411B (en) | 2019-08-21 | 2019-08-21 | Personalized recommendation method based on neural attention self-encoder |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110659411A CN110659411A (en) | 2020-01-07 |
CN110659411B true CN110659411B (en) | 2022-03-11 |
Family
ID=69037655
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910773079.4A Active CN110659411B (en) | 2019-08-21 | 2019-08-21 | Personalized recommendation method based on neural attention self-encoder |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110659411B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111460249B (en) * | 2020-02-24 | 2022-09-09 | 桂林电子科技大学 | Personalized learning resource recommendation method based on learner preference modeling |
CN111625710B (en) * | 2020-04-09 | 2021-12-24 | 北京百度网讯科技有限公司 | Processing method and device of recommended content, electronic equipment and readable storage medium |
CN111695591B (en) * | 2020-04-26 | 2024-05-10 | 平安科技(深圳)有限公司 | AI-based interview corpus classification method, AI-based interview corpus classification device, AI-based interview corpus classification computer equipment and AI-based interview corpus classification medium |
CN111814044B (en) * | 2020-06-30 | 2024-06-18 | 广州视源电子科技股份有限公司 | Recommendation method, recommendation device, terminal equipment and storage medium |
CN112951443B (en) * | 2021-04-16 | 2023-08-04 | 平安科技(深圳)有限公司 | Syndrome monitoring and early warning method, device, computer equipment and storage medium |
CN113435685A (en) * | 2021-04-28 | 2021-09-24 | 桂林电子科技大学 | Course recommendation method of hierarchical Attention deep learning model |
CN114154071B (en) * | 2021-12-09 | 2023-05-09 | 电子科技大学 | Emotion time sequence recommendation method based on attention mechanism |
CN115438259A (en) * | 2022-09-05 | 2022-12-06 | 淮阴工学院 | Cold-chain logistics vehicle source recommendation method and device based on dynamic interest and knowledge graph |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108647226A (en) * | 2018-03-26 | 2018-10-12 | 浙江大学 | A kind of mixing recommendation method based on variation autocoder |
CN109033294A (en) * | 2018-07-13 | 2018-12-18 | 东北师范大学 | A kind of mixed recommendation method incorporating content information |
CN109408702A (en) * | 2018-08-29 | 2019-03-01 | 昆明理工大学 | A kind of mixed recommendation method based on sparse edge noise reduction autocoding |
CN109635291A (en) * | 2018-12-04 | 2019-04-16 | 重庆理工大学 | A kind of recommended method of fusion score information and item contents based on coorinated training |
CN109740064A (en) * | 2019-01-18 | 2019-05-10 | 北京化工大学 | A kind of CF recommended method of fusion matrix decomposition and excavation user items information |
CN109783739A (en) * | 2019-01-23 | 2019-05-21 | 北京工业大学 | A kind of collaborative filtering recommending method based on the sparse noise reduction self-encoding encoder enhancing of stacking |
CN110059220A (en) * | 2019-04-12 | 2019-07-26 | 北京工业大学 | A kind of film recommended method based on deep learning Yu Bayesian probability matrix decomposition |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170024663A1 (en) * | 2015-07-24 | 2017-01-26 | Ebay Inc. | Category recommendation using statistical language modeling and a gradient boosting machine |
- 2019-08-21: application CN201910773079.4A filed in China; granted as patent CN110659411B, status Active.
Non-Patent Citations (3)
Title |
---|
Attentive Autoencoder Matrix Factorization for Recommender System; Dawei Zang et al.; IEEE; 2019-06-20; full text *
Research on personalized movie recommendation fusing SVR-based collaborative filtering and user profiles; Bi Runfang; China Master's Theses Full-text Database, Economics and Management Sciences; 2019-01-15 (No. 1); full text *
Fused information recommendation model based on graph convolution and neural collaborative filtering; Jiang Yuan; China Master's Theses Full-text Database, Information Science and Technology; 2019-01-15 (No. 1); full text *
Also Published As
Publication number | Publication date |
---|---|
CN110659411A (en) | 2020-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110659411B (en) | Personalized recommendation method based on neural attention self-encoder | |
WO2021203819A1 (en) | Content recommendation method and apparatus, electronic device, and storage medium | |
CN109299396B (en) | Convolutional neural network collaborative filtering recommendation method and system fusing attention model | |
CN112214685B (en) | Knowledge graph-based personalized recommendation method | |
CN111222332B (en) | Commodity recommendation method combining attention network and user emotion | |
CN111797321B (en) | Personalized knowledge recommendation method and system for different scenes | |
CN111339415B (en) | Click rate prediction method and device based on multi-interactive attention network | |
CN109522474B (en) | Recommendation method for mining deep user similarity based on interactive sequence data | |
CN109614471B (en) | Open type problem automatic generation method based on generation type countermeasure network | |
CN111581510A (en) | Shared content processing method and device, computer equipment and storage medium | |
CN112328900A (en) | Deep learning recommendation method integrating scoring matrix and comment text | |
CN109189862A (en) | Knowledge base construction method for scientific and technological information analysis |
CN111079409A (en) | Emotion classification method by using context and aspect memory information | |
CN111178986B (en) | User-commodity preference prediction method and system | |
CN114840747B (en) | News recommendation method based on contrast learning | |
CN112016002A (en) | Hybrid recommendation method integrating review-text-level attention and time factors |
CN114648031B (en) | Text aspect emotion recognition method based on bidirectional LSTM and multi-head attention mechanism | |
Liu et al. | High-quality domain expert finding method in CQA based on multi-granularity semantic analysis and interest drift | |
CN115964560A (en) | Information recommendation method and equipment based on multi-mode pre-training model | |
CN116595975A (en) | Aspect-level emotion analysis method for word information enhancement based on sentence information | |
Yuan et al. | Deep learning from a statistical perspective | |
Rauf et al. | BCE4ZSR: Bi-encoder empowered by teacher cross-encoder for zero-shot cold-start news recommendation | |
CN114780841A (en) | KPHAN-based sequence recommendation method | |
Li et al. | Research on recommendation algorithm based on e-commerce user behavior sequence | |
CN114358364A (en) | Short-video click-through-rate estimation method based on big data and an attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||