CN110659411A - Personalized recommendation method based on neural attention self-encoder - Google Patents

Personalized recommendation method based on neural attention self-encoder

Info

Publication number
CN110659411A
Authority
CN
China
Prior art keywords
representation
item
vector
user
attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910773079.4A
Other languages
Chinese (zh)
Other versions
CN110659411B (en)
Inventor
古天龙
田冰冰
李龙
常亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN201910773079.4A priority Critical patent/CN110659411B/en
Publication of CN110659411A publication Critical patent/CN110659411A/en
Application granted granted Critical
Publication of CN110659411B publication Critical patent/CN110659411B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/953: Querying, e.g. by the use of web search engines
    • G06F 16/9535: Search customisation based on user profiles and personalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a personalized recommendation method based on a neural attention self-encoder, comprising the following steps: encoding a user's binary scores for items with a self-encoder to generate a scoring feature representation of each item; generating a description feature representation of each item through a word attention module; fusing the scoring feature representation and the description feature representation of the item with a neural gating module to obtain the item's representation vector; computing representation vectors of the candidate items and of the user's historically accessed items through an item representation generation module, and feeding them into an attention network to generate the user's representation vector; taking the inner product of the user representation vector and a candidate item's representation vector to obtain the predicted probability that the user accesses that candidate item; and sorting the computed predicted probabilities of the different candidate items in descending order, taking the top-K candidate items as the user's personalized recommendation list.

Description

Personalized recommendation method based on neural attention self-encoder
Technical Field
The invention relates to the technical field of computers, in particular to a personalized recommendation method based on a neural attention self-encoder.
Background
With the rapid development of the internet, online content has grown explosively. Helping users find the information they are interested in among massive amounts of online content, and thereby alleviating the information overload problem, is the main problem a personalized recommendation system must solve. The ultimate goal of a personalized recommendation system is to understand its users, and one of its core technologies is user preference modeling. How to construct a user's preference features from the user's historical interaction records is therefore the key to a personalized recommendation system. Most traditional user preference modeling methods characterize a user's preferences from the items the user has historically interacted with. However, these items contain only limited feature information: user preferences generated from item features alone are not accurate enough and too one-sided to represent the user's interests comprehensively. At the same time, such methods do not account for the sparsity and cold-start problems of user-item interaction data, and therefore cannot accurately recommend items of interest according to the user's personalized preferences.
A previously published invention patent, "Recommendation system and recommendation method based on an attention mechanism" (publication number CN 109087130 A), maps the items in a user's history and the candidate items into item feature vectors through a feature embedding layer, learns the user's representation through an attention mechanism to obtain the user's feature vector, and outputs the user's predicted score for an item through a fusion output layer based on the item and user feature vectors. That method generates item feature vectors from one-hot encodings and ignores both the semantic information within each item and the semantics shared between items, so the resulting item feature vectors carry extremely limited feature information and cannot model the user's personalized preferences comprehensively and accurately. The present invention describes a personalized recommendation method based on a neural attention self-encoder. First, a self-encoder generates an item's scoring feature representation from the users' implicit scores for the item (the scores take only the values 0 and 1, and are also called binary scores, where 0 denotes dislike and 1 denotes like); then a word attention module generates the item's description feature representation from the item's description text. A neural gating module fuses the scoring feature representation and the description feature representation to generate representation vectors for the user's historically accessed items and for the candidate items. These representation vectors are then fed into an attention network to compute the user's representation vector.
After the representation vectors of the user and a candidate item are obtained, the probability that the user accesses the candidate item is computed by an inner product operation, and the user's recommendation list is generated by sorting the predicted probabilities of the different candidate items in descending order.
Disclosure of Invention
In view of this, the present invention provides a personalized recommendation method based on a neural attention self-encoder, which uses a word attention module to fuse an item's description information into a description feature representation, so as to mine the item's feature information more fully.
The invention solves the technical problems by the following technical means:
A personalized recommendation method based on a neural attention self-encoder comprises the following steps:
encoding a user's binary scores for items with a self-encoder to generate a scoring feature representation of each item;
generating a description feature representation of each item through a word attention module;
fusing the scoring feature representation and the description feature representation of the item with a neural gating module to obtain the item's representation vector;
computing representation vectors of the candidate items and of the user's historically accessed items through an item representation generation module, and feeding them into an attention network to generate the user's representation vector;
taking the inner product of the user representation vector and a candidate item's representation vector to obtain the predicted probability that the user accesses the candidate item;
sorting the computed predicted probabilities of the different candidate items in descending order, and taking the top-K candidate items as the user's personalized recommendation list.
Further, the self-encoder generates the scoring feature representation z_i of item i using the following formula:

z_i = σ(W_2 · σ(W_1 · r_i + b_1) + b_2)

where W_1 ∈ R^{h_1×m} and W_2 ∈ R^{h×h_1} are weight matrices; m is the number of users, h_1 is the dimension of the first hidden layer, and h is the dimension of the bottleneck layer; σ is the activation function; r_i is a multi-hot vector, and r_{u,i} = 1 indicates that user u likes item i.
Further, generating the item's description feature representation through the word attention module specifically includes:
the word attention module performs a one-hot encoding operation on each word in the item's word sequence, representing each word as a one-hot vector;
converting each one-hot vector into a low-dimensional real-valued dense vector representation through a word embedding matrix;
computing the attention weights of the word attention module through a two-layer neural network;
performing a weighted sum of the word embeddings in the dense vector representation according to the attention weights to obtain the item's description feature representation;
expanding the attention weights into an attention weight matrix, and multiplying the attention weight matrix by the word embedding matrix to obtain a matrix representation of the item;
transforming the item's matrix representation through an aggregation layer to generate the item's description feature representation.
Further, the scoring feature representation and the description feature representation of the item are adaptively fused using a neural gating layer to obtain the item's representation vector.
Further, the neural gating layer adaptively fuses the item's scoring feature representation z_i and description feature representation q_i according to the following formulas to obtain the item's representation vector:

G = σ(W_{g1} · z_i + W_{g2} · q_i + b_g)
p_i = G ⊙ z_i + (1 − G) ⊙ q_i

where G is the neural gating unit; p_i is the final item representation vector obtained after fusing the item's scoring feature representation and description feature representation; W_{g1}, W_{g2} ∈ R^{h×h} and b_g ∈ R^h are the parameters of the neural gating layer; σ is the sigmoid function and ⊙ denotes element-wise multiplication.
The invention has the beneficial effects that:
1. Most existing inventions cannot fully exploit an item's description information, yet this description, as text containing rich semantics, plays a very important role in generating the item representation vector. The invention therefore proposes to obtain the item's description feature representation with a word attention module, and effectively enriches the feature information of the item representation vector by fusing the item's scoring feature representation with its description feature representation.
2. Existing techniques predict a user's preference for an item with a plain inner product, and this linear combination of user and item latent factors greatly limits the expressiveness of matrix factorization. To obtain the user's preference for items, the invention instead encodes the user's binary scores with a self-encoder to obtain the item's scoring feature representation; the self-encoder can learn feature representations that contain richer information.
3. Existing techniques learn an item's feature representation through a bag-of-words model and ignore the fact that different words have different importance. Unlike bag-of-words learning, the word attention module proposed by the invention can adaptively select the words that carry more information and assign them higher weights, so the generated description feature representation is more representative of the item's description information.
4. The prior art regularizes the item's scoring feature representation and description feature representation, and cannot extract the most important parts of the two. The invention proposes a neural gating module that adaptively fuses the two feature representations and can extract and combine their most important parts.
5. The prior art generates the user's representation vector by simply averaging the representation vectors of the user's historically accessed items, ignoring both that the user has different degrees of interest in different items and that different historical items influence a candidate item differently. To represent the user's diverse interests, the invention uses an attention network to compute the different influences of the user's historical items on the candidate item, so the generated user representation vector expresses the user's personalized preferences more accurately and comprehensively, and the items recommended to the user are more accurate and diverse.
Drawings
FIG. 1 is an overall flow chart of the present invention;
FIG. 2 is a diagram of a model framework of the present invention;
FIG. 3 is a schematic diagram of a project representation generation module of the present invention;
FIG. 4 is a diagram of an attention network architecture of the present invention.
Detailed Description
The invention will be described in detail below with reference to the following figures and specific examples:
As shown in FIG. 1 to FIG. 4, the personalized recommendation method based on a neural attention self-encoder of the present invention includes:
S1: encode the user's binary scores for the items with the self-encoder and generate the items' scoring feature representations.
To obtain the user's preference for an item, the invention proposes to encode the user's binary score vector r_i ∈ R^m for item i with a self-encoder, i.e., r_i is encoded into the item's scoring feature representation z_i by the following formula:

z_i = σ(W_2 · σ(W_1 · r_i + b_1) + b_2)

where W_1 ∈ R^{h_1×m} and W_2 ∈ R^{h×h_1} are weight matrices, m is the number of users, h_1 is the dimension of the first hidden layer, h is the dimension of the bottleneck layer, and σ is the activation function. r_i is a multi-hot vector, and r_{u,i} = 1 indicates that user u likes item i.
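As a concrete illustration of this encoding step, the following minimal sketch maps an item's binary score vector to a bottleneck representation with a two-layer encoder. The sigmoid activation, the symbol names (z_i, W1, b1, ...), the random weights, and the toy dimensions are illustrative assumptions, not the patent's trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode_item_scores(r_i, W1, b1, W2, b2):
    """Encode the binary score vector r_i (one entry per user) of an item
    into its scoring feature representation z_i via a two-layer encoder."""
    hidden = sigmoid(W1 @ r_i + b1)   # first hidden layer, dimension h1
    z_i = sigmoid(W2 @ hidden + b2)   # bottleneck layer, dimension h
    return z_i

m, h1, h = 6, 4, 3                    # users, hidden dim, bottleneck dim
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(h1, m)), np.zeros(h1)
W2, b2 = rng.normal(size=(h, h1)), np.zeros(h)
r_i = np.array([1, 0, 1, 1, 0, 0], dtype=float)  # users 0, 2, 3 like item i
z_i = encode_item_scores(r_i, W1, b1, W2, b2)
print(z_i.shape)  # (3,)
```

In a real system the weights would be trained to reconstruct r_i from z_i, as is standard for an autoencoder.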
S2: generate the item's description feature representation through the word attention module.
Considering that generating the item's representation vector from the scoring feature representation alone is not enough to mine the item's feature information fully, the invention proposes to fuse the item's description information through a word attention module to obtain the item's description feature representation.
The specific steps for generating the item's description feature representation through the word attention module are as follows:
Step 1: unlike learning the item's feature representation through a bag of words, the word attention module proposed by the invention can adaptively select the words that carry more information and assign them higher weights, so that these words have a larger influence on the item's description feature representation. In the word attention module, the input for item i is a word sequence of length l_i obtained from the text description of i; each word in the sequence is represented as a one-hot vector, and the maximum length of the word sequence is 300.
Step 2: for ease of subsequent computation, each one-hot vector obtained in step 1 is converted into a low-dimensional real-valued dense vector through a word embedding matrix E ∈ R^{h×v}, where h and v are the word embedding dimension and the vocabulary size, respectively. After this conversion, the text description of item i is represented as

D_i = [e_1, e_2, …, e_{l_i}]

where D_i ∈ R^{h×l_i} and e_j ∈ R^h is the embedding of the j-th word.
Step 3: in practice, a user may care more about information such as an item's name or style than about the relations between the words in the word sequence. For this reason, the invention does not use complex recurrent or convolutional neural networks, but processes the word sequence with a multi-dimensional attention mechanism to learn the item's description feature representation. Word attention aims to generate different influence weights for different words and then compute a weighted sum of the words according to these weights. Given the dense vector representation matrix D_i of item i, the attention weights are computed by a two-layer neural network:

a_i = softmax(w_a^T · tanh(W_s · D_i + b_s))

where W_s, b_s, and w_a are the parameters to be learned, and the softmax ensures that all computed weights sum to 1.
Step 4: according to the attention weights computed in step 3, a weighted sum of the word embeddings in D_i yields the item's description feature representation:

q_i = Σ_j a_{i,j} · e_j

Step 5: the operation above generates only a single weight value for each word embedding e_j, which usually makes the model focus on one particular aspect of the item description while ignoring the overall feature information. When an item's description contains many words, feature information from multiple aspects may characterize the item better. The invention therefore proposes to obtain multi-dimensional attention through a matrix instead of the single weight vector a_i, generating an attention weight vector for each word embedding. Each dimension of such a weight vector represents one aspect of the relations among all the word embeddings in D_i. To extract feature information about d_a aspects from the word embeddings, w_a is extended to a matrix W_a ∈ R^{d_a×h}; the effect of this operation is to extract the words in the item description that carry more information. This yields the attention weight matrix:

A_i = softmax(W_a · tanh(W_s · D_i + b_s))

where A_i ∈ R^{d_a×l_i} and the softmax operates along the second dimension of its input. Multiplying the attention weight matrix by the word embedding matrix gives the matrix representation of item i:

M_i = A_i · D_i^T

where M_i ∈ R^{d_a×h} is the matrix representation of item i.
Step 6: for ease of subsequent computation, the item's matrix representation must be converted into a vector representation through the aggregation layer. When the matrix representation M_i of item i is fed into this neural network layer, the item's description feature representation is obtained as

q_i = α_t(W_t · vec(M_i) + b_t)

where W_t and b_t are the parameters of the aggregation layer and α_t is the activation function; in the invention, α_t is the tanh function.
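The six steps above can be sketched as a single forward pass. This is a minimal multi-dimensional word-attention sketch under the dimension conventions of the text (h embedding size, l_i sequence length, d_a attention aspects); all parameter names and the random initialization are illustrative assumptions, not the patent's model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def describe_item(D_i, W_s, b_s, W_a, W_t, b_t):
    """Multi-dimensional word attention over D_i (h x l_i word embeddings):
    attention weight matrix -> item matrix representation -> aggregation."""
    A_i = softmax(W_a @ np.tanh(W_s @ D_i + b_s[:, None]), axis=1)  # d_a x l_i
    M_i = A_i @ D_i.T                            # d_a x h matrix representation
    q_i = np.tanh(W_t @ M_i.reshape(-1) + b_t)   # aggregation layer -> R^h
    return q_i

h, l_i, d_a = 4, 5, 2
rng = np.random.default_rng(1)
D_i = rng.normal(size=(h, l_i))                  # embeddings of the item's words
W_s, b_s = rng.normal(size=(h, h)), np.zeros(h)
W_a = rng.normal(size=(d_a, h))
W_t, b_t = rng.normal(size=(h, d_a * h)), np.zeros(h)
q_i = describe_item(D_i, W_s, b_s, W_a, W_t, b_t)
print(q_i.shape)  # (4,)
```

The softmax is applied along the word axis, so each of the d_a aspect rows of the attention matrix distributes a total weight of 1 over the words in the description.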
S3: fuse the item's scoring feature representation and description feature representation with the neural gating module to obtain the item's representation vector.
The item's scoring feature representation and description feature representation are obtained through steps S1 and S2, respectively. To better learn the user's preference for candidate items, the two representations must be fused into the item's final representation vector. The item representation generation module of the present invention is shown in FIG. 3. Unlike the prior art, which regularizes the scoring feature representation and the description feature representation, the invention proposes to fuse the two different feature representations adaptively with a neural gating layer:

G = σ(W_{g1} · z_i + W_{g2} · q_i + b_g)
p_i = G ⊙ z_i + (1 − G) ⊙ q_i

where G is the neural gating unit; p_i is the final item representation vector after fusing the item's scoring feature representation and description feature representation; W_{g1}, W_{g2} ∈ R^{h×h} and b_g ∈ R^h are the parameters of the neural gating layer. The neural gating module can extract and combine the most important parts of the two feature representations, so the generated item representation vector contains more useful feature information, captures the user's personalized preferences more comprehensively, and makes the recommended results better match the user's interests.
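A minimal sketch of the neural gating described above, assuming a sigmoid gate and an element-wise convex combination of the two representations; the random parameter values are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_fuse(z_i, q_i, Wg1, Wg2, b_g):
    """Neural gate G decides, per dimension, how much of the scoring
    representation z_i vs. the description representation q_i to keep."""
    G = sigmoid(Wg1 @ z_i + Wg2 @ q_i + b_g)
    return G * z_i + (1.0 - G) * q_i   # element-wise convex combination

h = 4
rng = np.random.default_rng(2)
z_i, q_i = rng.normal(size=h), rng.normal(size=h)
Wg1, Wg2, b_g = rng.normal(size=(h, h)), rng.normal(size=(h, h)), np.zeros(h)
p_i = gate_fuse(z_i, q_i, Wg1, Wg2, b_g)
print(p_i.shape)  # (4,)
```

Because G lies in (0, 1) per dimension, each coordinate of the fused vector p_i is a convex combination of the corresponding coordinates of z_i and q_i, which is what lets the gate "select" the more informative source per dimension.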
S4: compute the representation vectors of the candidate items and of the user's historically accessed items through the item representation generation module, and feed them into the attention network to generate the user's representation vector.
After the representation vectors of the candidate items and of the user's historically accessed items have been computed by the item representation generation module, a common approach is to average the representation vectors of all of the user's historical items as the user's preference vector with respect to a candidate item. However, as noted above, a user's preferences for different items differ, and when predicting whether the user will access a candidate item, the user's historical items should be considered to influence the candidate item differently. To characterize the user's diverse interests, the invention proposes to use an attention network to model the different influences of the user's historically accessed items on the candidate items.
The attention network computes an influence weight for each of the user's historically accessed items, and the historical items are weighted and summed according to these weights to generate the user's final representation vector. The structure of the attention network is shown in FIG. 4; the specific steps are:
Step 1: given user u's historically accessed items v_1, …, v_{N_u} and a candidate item v_c, the representation vector p_j of each historical item v_j generated by the item representation generation module is first concatenated with the representation vector p_c of the candidate item v_c. To reduce computational overhead and improve computational efficiency, the invention uses a 3-layer neural network (NN) as the attention network H. The input of H is the concatenation of p_j and p_c; the hidden layers multiply the concatenated representation vector by their weights and add bias terms, and the normalized influence weight α_{j,c} is then computed through a softmax function:

α_{j,c} = softmax(H([p_j; p_c]))

Step 2: after the influence weights of the user's historically accessed items on the candidate item have been generated in step 1, user u's preference vector for candidate item v_c is obtained as the weighted sum of the representation vectors of u's historically accessed items:

u_c = Σ_{j=1}^{N_u} α_{j,c} · p_j

where u_c is the representation vector of user u when the candidate item is v_c, and N_u is the number of items user u has accessed historically.
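The two steps above can be sketched as follows. A small feed-forward network H with tanh hidden layers is assumed here (the text specifies a 3-layer network but not its activations or layer sizes, so those are illustrative), and the vectors are random stand-ins for the item representations from S3.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def user_vector(history, p_c, layers):
    """Score each historical item vector against the candidate p_c with a
    small feed-forward attention network H, then weight-and-sum the history."""
    scores = []
    for p_j in history:
        x = np.concatenate([p_j, p_c])     # [p_j; p_c] concatenation
        for W, b in layers[:-1]:
            x = np.tanh(W @ x + b)         # hidden layers
        W, b = layers[-1]
        scores.append(float(W @ x + b))    # scalar score per historical item
    alpha = softmax(np.array(scores))      # normalized influence weights
    u_c = sum(a * p_j for a, p_j in zip(alpha, history))
    return u_c, alpha

h = 4
rng = np.random.default_rng(3)
history = [rng.normal(size=h) for _ in range(3)]   # user's past item vectors
p_c = rng.normal(size=h)                           # candidate item vector
layers = [(rng.normal(size=(8, 2 * h)), np.zeros(8)),
          (rng.normal(size=(8, 8)), np.zeros(8)),
          (rng.normal(size=(1, 8)), np.zeros(1))]  # 3-layer attention net H
u_c, alpha = user_vector(history, p_c, layers)
print(u_c.shape)  # (4,)
```

Note that the weights alpha depend on the candidate item, so the same user gets a different representation vector u_c for each candidate, which is exactly how the method models candidate-dependent interests.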
S5: take the inner product of the user's representation vector and the candidate item's representation vector to obtain the predicted probability that the user accesses the candidate item.
After the operations of the above steps, the user's representation vector u_c and the candidate item's representation vector p_c are obtained. The probability that user u accesses candidate item v_c is computed from the inner product of the user's representation vector and the candidate item's representation vector:

ŷ_{u,c} = σ(InrPro(u_c, p_c)) = σ(u_c^T · p_c)

where InrPro denotes the inner product operation, σ is the sigmoid function mapping the score into the interval (0, 1), and ŷ_{u,c} is a scalar representing the probability that user u accesses candidate item v_c, with value range (0, 1).
S6: sort the computed predicted probabilities of the different candidate items in descending order, and take the top-K candidate items as the user's personalized recommendation list.
After the operations of steps S1 to S5, the predicted probability that the user accesses each candidate item is obtained. The candidate items are then sorted in descending order of their predicted probability values, and the top-K items are taken as the user's personalized item recommendation list.
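Steps S5 and S6 together amount to scoring each candidate by an inner product and keeping the K highest-ranked items. A minimal sketch, assuming (as the stated (0, 1) value range suggests) that a sigmoid maps each inner product to a probability; the item ids and vectors are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recommend_top_k(u_vecs, cand_vecs, cand_ids, K):
    """Predict access probabilities via the inner product of the user vector
    and each candidate vector, then return the K highest-ranked item ids."""
    probs = {cid: sigmoid(float(u @ p))
             for cid, u, p in zip(cand_ids, u_vecs, cand_vecs)}
    ranked = sorted(probs, key=probs.get, reverse=True)  # descending order
    return ranked[:K], probs

h = 4
rng = np.random.default_rng(4)
cand_ids = ["a", "b", "c", "d"]
u_vecs = [rng.normal(size=h) for _ in cand_ids]   # one u_c per candidate (S4)
cand_vecs = [rng.normal(size=h) for _ in cand_ids]
top, probs = recommend_top_k(u_vecs, cand_vecs, cand_ids, K=2)
print(len(top))  # 2
```

Because the user vector u_c is candidate-dependent (see S4), a separate user vector is passed for each candidate rather than a single global one.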
Although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the spirit and scope of the invention as defined in the appended claims. The techniques, shapes, and configurations not described in detail in the present invention are all known techniques.

Claims (5)

1. A personalized recommendation method based on a neural attention self-encoder is characterized by comprising the following steps:
encoding a user's binary scores for items with a self-encoder to generate a scoring feature representation of each item;
generating a description feature representation of each item through a word attention module;
fusing the scoring feature representation and the description feature representation of the item with a neural gating module to obtain the item's representation vector;
computing representation vectors of the candidate items and of the user's historically accessed items through an item representation generation module, and feeding them into an attention network to generate the user's representation vector;
taking the inner product of the user representation vector and a candidate item's representation vector to obtain the predicted probability that the user accesses the candidate item;
sorting the computed predicted probabilities of the different candidate items in descending order, and taking the top-K candidate items as the user's personalized recommendation list.
2. The personalized recommendation method based on the neural attention self-encoder according to claim 1, wherein the self-encoder generates the scoring feature representation z_i of item i according to the following formula:

z_i = σ(W_2 · σ(W_1 · r_i + b_1) + b_2)

where W_1 ∈ R^{h_1×m} and W_2 ∈ R^{h×h_1} are weight matrices; m is the number of users, h_1 is the dimension of the first hidden layer, and h is the dimension of the bottleneck layer; σ is the activation function; r_i is a multi-hot vector, and r_{u,i} = 1 indicates that user u likes item i.
3. The personalized recommendation method based on the neural attention self-encoder according to claim 1, wherein generating the item's description feature representation through the word attention module specifically includes:
the word attention module performs a one-hot encoding operation on each word in the item's word sequence, representing each word as a one-hot vector;
converting each one-hot vector into a low-dimensional real-valued dense vector representation through a word embedding matrix;
computing the attention weights of the word attention module through a two-layer neural network;
performing a weighted sum of the word embeddings in the dense vector representation according to the attention weights to obtain the item's description feature representation;
expanding the attention weights into an attention weight matrix, and multiplying the attention weight matrix by the word embedding matrix to obtain a matrix representation of the item;
transforming the item's matrix representation through an aggregation layer to generate the item's description feature representation.
4. The personalized recommendation method based on the neural attention self-encoder according to claim 3, wherein the scoring feature representation and the description feature representation of the item are adaptively fused using a neural gating layer to obtain the item's representation vector.
5. The personalized recommendation method based on the neural attention self-encoder according to claim 4, wherein: the neural gating layer adaptively fuses the scoring feature representation and the description feature representation of the item according to the following formulas to obtain the representation vector of the item:

G = σ(Wg1 · zi + Wg2 · ci + bg)

ĩ = G ⊙ zi + (1 − G) ⊙ ci

wherein G is the neural gating unit; zi and ci are the scoring feature representation and the description feature representation of the item, respectively; ĩ is the final item representation vector obtained by fusing the scoring feature representation and the description feature representation of the item; and Wg1, Wg2 and bg ∈ Rh are parameters of the neural gating layer.
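A minimal sketch of the gated fusion described in claim 5, assuming a sigmoid gate over two h-dimensional feature vectors. The symbols z_i (scoring features) and c_i (description features), and all sizes, are assumptions for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

h = 4  # bottleneck dimension (toy size)
rng = np.random.default_rng(2)
z_i = rng.normal(size=h)  # scoring feature representation (assumed symbol)
c_i = rng.normal(size=h)  # description feature representation (assumed symbol)

Wg1 = rng.normal(scale=0.1, size=(h, h))
Wg2 = rng.normal(scale=0.1, size=(h, h))
b_g = np.zeros(h)

# gate in (0, 1) per dimension, then a per-dimension convex mixture
G = sigmoid(Wg1 @ z_i + Wg2 @ c_i + b_g)
item_vec = G * z_i + (1.0 - G) * c_i
print(item_vec.shape)  # (4,)
```

Because each component of G lies strictly between 0 and 1, every component of the fused vector lies between the corresponding components of the two inputs.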
CN201910773079.4A 2019-08-21 2019-08-21 Personalized recommendation method based on neural attention self-encoder Active CN110659411B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910773079.4A CN110659411B (en) 2019-08-21 2019-08-21 Personalized recommendation method based on neural attention self-encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910773079.4A CN110659411B (en) 2019-08-21 2019-08-21 Personalized recommendation method based on neural attention self-encoder

Publications (2)

Publication Number Publication Date
CN110659411A true CN110659411A (en) 2020-01-07
CN110659411B CN110659411B (en) 2022-03-11

Family

ID=69037655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910773079.4A Active CN110659411B (en) 2019-08-21 2019-08-21 Personalized recommendation method based on neural attention self-encoder

Country Status (1)

Country Link
CN (1) CN110659411B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170024663A1 (en) * 2015-07-24 2017-01-26 Ebay Inc. Category recommendation using statistical language modeling and a gradient boosting machine
CN108647226A (en) * 2018-03-26 2018-10-12 浙江大学 A kind of mixing recommendation method based on variation autocoder
CN109033294A (en) * 2018-07-13 2018-12-18 东北师范大学 A kind of mixed recommendation method incorporating content information
CN109408702A (en) * 2018-08-29 2019-03-01 昆明理工大学 A kind of mixed recommendation method based on sparse edge noise reduction autocoding
CN109635291A (en) * 2018-12-04 2019-04-16 重庆理工大学 A kind of recommended method of fusion score information and item contents based on coorinated training
CN109740064A (en) * 2019-01-18 2019-05-10 北京化工大学 A kind of CF recommended method of fusion matrix decomposition and excavation user items information
CN109783739A (en) * 2019-01-23 2019-05-21 北京工业大学 A kind of collaborative filtering recommending method based on the sparse noise reduction self-encoding encoder enhancing of stacking
CN110059220A (en) * 2019-04-12 2019-07-26 北京工业大学 A kind of film recommended method based on deep learning Yu Bayesian probability matrix decomposition


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DAWEI ZANG et al.: "Attentive Autoencoder Matrix Factorization for Recommender System", IEEE *
毕闰芳: "Research on personalized movie recommendation fusing SVR-based collaborative filtering and user profiling", China Masters' Theses Full-text Database, Economics and Management Sciences *
江原: "A fused information recommendation model based on graph convolution and neural collaborative filtering", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111460249A (en) * 2020-02-24 2020-07-28 桂林电子科技大学 Personalized learning resource recommendation method based on learner preference modeling
CN111460249B (en) * 2020-02-24 2022-09-09 桂林电子科技大学 Personalized learning resource recommendation method based on learner preference modeling
CN111625710A (en) * 2020-04-09 2020-09-04 北京百度网讯科技有限公司 Processing method and device of recommended content, electronic equipment and readable storage medium
CN111625710B (en) * 2020-04-09 2021-12-24 北京百度网讯科技有限公司 Processing method and device of recommended content, electronic equipment and readable storage medium
WO2021217772A1 (en) * 2020-04-26 2021-11-04 平安科技(深圳)有限公司 Ai-based interview corpus classification method and apparatus, computer device and medium
CN111814044A (en) * 2020-06-30 2020-10-23 广州视源电子科技股份有限公司 Recommendation method and device, terminal equipment and storage medium
CN112951443A (en) * 2021-04-16 2021-06-11 平安科技(深圳)有限公司 Syndrome monitoring and early warning method and device, computer equipment and storage medium
CN112951443B (en) * 2021-04-16 2023-08-04 平安科技(深圳)有限公司 Syndrome monitoring and early warning method, device, computer equipment and storage medium
CN113435685A (en) * 2021-04-28 2021-09-24 桂林电子科技大学 Course recommendation method of hierarchical Attention deep learning model
CN114154071A (en) * 2021-12-09 2022-03-08 电子科技大学 Emotion time sequence recommendation method based on attention mechanism

Also Published As

Publication number Publication date
CN110659411B (en) 2022-03-11

Similar Documents

Publication Publication Date Title
CN110659411B (en) Personalized recommendation method based on neural attention self-encoder
WO2021203819A1 (en) Content recommendation method and apparatus, electronic device, and storage medium
CN109299396B (en) Convolutional neural network collaborative filtering recommendation method and system fusing attention model
CN112214685B (en) Knowledge graph-based personalized recommendation method
CN111339415B (en) Click rate prediction method and device based on multi-interactive attention network
CN109614471B Automatic open-ended question generation method based on a generative adversarial network
CN109753566A Model training method for cross-domain sentiment analysis based on convolutional neural networks
CN111209386B (en) Personalized text recommendation method based on deep learning
CN110083770B (en) Sequence recommendation method based on deeper feature level self-attention network
CN111222332A (en) Commodity recommendation method combining attention network and user emotion
CN112328900A (en) Deep learning recommendation method integrating scoring matrix and comment text
CN112579778A (en) Aspect-level emotion classification method based on multi-level feature attention
CN111325571B (en) Automatic generation method, device and system for commodity comment labels for multitask learning
CN111079409A (en) Emotion classification method by using context and aspect memory information
CN113569001A (en) Text processing method and device, computer equipment and computer readable storage medium
CN111143705A (en) Recommendation method based on graph convolution network
CN115964560B (en) Information recommendation method and equipment based on multi-mode pre-training model
Liu et al. High-quality domain expert finding method in CQA based on multi-granularity semantic analysis and interest drift
Zhang et al. SEMA: Deeply learning semantic meanings and temporal dynamics for recommendations
CN114840747A (en) News recommendation method based on comparative learning
Yuan et al. Deep learning from a statistical perspective
CN113065027A (en) Video recommendation method and device, electronic equipment and storage medium
CN114911940A (en) Text emotion recognition method and device, electronic equipment and storage medium
CN114780841A (en) KPHAN-based sequence recommendation method
CN114565436A (en) Vehicle model recommendation system, method, device and storage medium based on time sequence modeling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant