CN113159892B - Commodity recommendation method based on multi-mode commodity feature fusion - Google Patents

Commodity recommendation method based on multi-mode commodity feature fusion

Info

Publication number
CN113159892B
CN113159892B
Authority
CN
China
Prior art keywords
commodity
user
word
representation
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110444726.4A
Other languages
Chinese (zh)
Other versions
CN113159892A (en)
Inventor
蔡国永
宋亚飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN202110444726.4A priority Critical patent/CN113159892B/en
Publication of CN113159892A publication Critical patent/CN113159892A/en
Application granted granted Critical
Publication of CN113159892B publication Critical patent/CN113159892B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0631 Item recommendations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G06F40/216 Parsing using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00 Computing arrangements based on specific mathematical models
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Accounting & Taxation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention belongs to the field of commodity recommendation, and particularly relates to a commodity recommendation method based on multi-mode commodity feature fusion. The method comprises the following steps: constructing a user-commodity bipartite graph from the sequence of commodities purchased by each user, and obtaining vector representations of the user nodes and of the commodity nodes through graph convolution; extracting features from the comment texts of each commodity with a convolutional neural network to obtain a vector representation of the commodity comments; extracting features from the title and description information of each commodity with a convolutional neural network to obtain a vector representation of the commodity content; and concatenating the vector representations of the commodity node, the comments and the content to obtain the final representation of the commodity, while taking the vector representation of the user node as the final representation of the user. By exploiting the multi-modal features of commodities, the method greatly alleviates the data sparsity problem in commodity recommendation and improves recommendation accuracy.

Description

Commodity recommendation method based on multi-mode commodity feature fusion
Technical Field
The invention relates to a commodity recommendation method, and belongs to the field of commodity recommendation.
Background
Most existing commodity recommendation methods model a commodity using only its id, extracting the collaborative signal hidden in the interactions between users and commodities. Such modeling generally faces a serious data sparsity problem, which greatly restricts the performance of the recommendation system. Although some work takes review information into account to capture the commodity characteristics contained in reviews and thereby mitigate data sparsity, the title and description information of the commodity itself is rarely utilized. Review information is written by users, so the information contained in different reviews often has different informativeness owing to differences in users' expression habits and points of interest, and may even contain a great deal of noise. Unlike review information, the title and description of a commodity are typically written by the merchant; they tend to cover more characteristics of the commodity more comprehensively, and their wording is usually more professional and accurate. Therefore, when modeling a commodity for recommendation, combining the commodity title and description information with the commodity id and the commodity review information can help achieve better recommendation performance.
Disclosure of Invention
In order to solve the problems, the invention provides a commodity recommendation method based on multi-mode commodity feature fusion, which comprises the following steps:
S1: constructing a user-commodity bipartite graph from the sequence of commodities historically purchased by each user, and obtaining the vector representations of the user nodes and of the commodity nodes through graph convolution;
S2: obtaining the comment document of each commodity, and extracting the vector representation of the commodity comments through a convolutional neural network;
S3: acquiring the title and description information of each commodity, and extracting the vector representation of the commodity content through a convolutional neural network;
S4: obtaining the final representation of the user and the final representation of the commodity;
S5: calculating the similarity between the user and the commodity;
S6: optimizing the parameters of the proposed method through Bayesian personalized ranking loss.
Further, the constructing of the user-commodity bipartite graph in S1 includes:
S11: obtaining the user's historical commodity purchase sequence from implicit or explicit feedback, constructing a user-commodity bipartite graph from this sequence, and representing it with a user-commodity adjacency matrix A of size (n_u + n_p) × (n_u + n_p), where n_u and n_p are the numbers of users and commodities, R (of size n_u × n_p) is the user-commodity interaction matrix and R^T is the transpose of R, so that A = [[0, R], [R^T, 0]];
S12: to utilize the information of the nodes themselves in the user-commodity bipartite graph, an identity matrix is added to A; meanwhile, to avoid gradient vanishing or gradient explosion during training, normalization is carried out with a diagonal matrix D whose diagonal entries are the degrees of the nodes in the user-commodity bipartite graph, thereby obtaining the normalized adjacency matrix Â.
Further, the obtaining of the vector representation of the user nodes and the vector representation of the commodity nodes in S1 includes:
S13: performing neighbor propagation and aggregation on the user-commodity bipartite graph through graph convolution to obtain the vector representations of the user nodes and of the commodity nodes.
Further, the specific steps of the graph convolution in S13 are as follows:
S131: converting the unique id of each user and each commodity into a dense vector through an embedding layer, obtaining a user feature vector and a commodity feature vector, where d is the dimension of the feature vectors;
S132: building an embedding table E^(0) to represent the feature matrix of the user-commodity bipartite graph;
S133: aggregating the features of the node neighbors with t layers of graph convolution, where the propagation process is defined as E^(l+1) = σ(Â E^(l) W^(l)), W^(l) being a trainable weight matrix and σ the LeakyReLU activation function;
S134: concatenating the t feature matrices from E^(1) to E^(t) obtained through the t layers of graph convolution to form the final feature matrix E, and then splitting E into a user part E_u and a commodity part E_p, whose rows serve as the vector representations of the user nodes e_u and of the commodity nodes e_p, respectively.
Further, the extracting of the vector representation of the commodity comments in S2 includes:
S21: integrating the comments received by each commodity into one comment document of that commodity, and preprocessing the comment document by word segmentation, lemmatization, stop-word removal, and removal of words with extremely high or extremely low occurrence frequency;
S22: performing feature extraction on the commodity comment document with a text feature extractor to obtain the vector representation a_r of the commodity comments.
Further, the text feature extractor in S22 includes:
S221: representing the word sequence of the input text as [w_1, w_2, …, w_l], where l is the length of the input text;
S222: converting the word sequence representation of S221 into a word vector representation sequence [v_1, v_2, …, v_l] through a word embedding layer, where d_v is the word embedding dimension;
S223: processing the word vector representation sequence with a convolutional neural network to obtain a context word vector representation sequence [c_1, c_2, …, c_l], where the context representation c_i of the i-th word is computed as c_i = LeakyReLU(W_t × v_(i-k):(i+k) + b_t);
S224: computing a weight [α_1, α_2, …, α_l] for each vector in the context word vector representation sequence with an attention mechanism, then weighting the context word vector representations by the corresponding weights and summing them to obtain the final representation of the input text, where the weight α_i is computed from an attention scoring function whose trainable parameters are a weight matrix W_a, a bias b_a and an attention query vector q, normalized with a softmax over all words in the text.
Further, the extracting of the vector representation of the commodity content in S3 includes:
S31: acquiring the title and description information of the commodity, and preprocessing this information by word segmentation, lemmatization, stop-word removal, and removal of words with extremely high or extremely low occurrence frequency;
S32: performing feature extraction on the commodity title and description information with the same text feature extractor as in S22 to obtain the vector representation a_t of the commodity content.
Further, the obtaining of the final representation of the user and the final representation of the commodity in S4 includes:
S41: concatenating the vector representation e_p of the commodity node, the vector representation a_r of the comments and the vector representation a_t of the content to obtain the final representation p of the commodity; and taking the vector representation e_u of the user node as the final representation u of the user.
Further, the calculating of the similarity between the user and the commodity in S5 includes:
calculating the similarity between the user and the commodity as the dot product of the final user representation and the final commodity representation, ŷ = u^T p.
Further, the optimizing of the parameters of the proposed method through Bayesian personalized ranking loss in S6 includes minimizing L = -Σ_{(u,i,j)∈D} ln σ(ŷ_ui - ŷ_uj) + λ||Θ||², where D denotes the pairwise training data built from purchased and non-purchased commodities.
drawings
Fig. 1 is a flow chart of a commodity recommendation method according to the present invention.
Fig. 2 is a schematic structural diagram of a commodity recommendation method according to the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
As shown in FIG. 1, the invention provides a commodity recommendation method based on multi-mode commodity feature fusion, which comprises the following steps:
step 1: constructing a user-commodity bipartite graph according to a commodity sequence purchased by a user in history, and obtaining vector representation of user nodes and vector representation of commodity nodes through graph convolution;
Specifically, the graph convolution comprises the following steps:
Firstly, a user-commodity bipartite graph is constructed from the historical interactions between users and commodities and is represented with a user-commodity adjacency matrix A of size (n_u + n_p) × (n_u + n_p), where n_u and n_p are the numbers of users and commodities, R (of size n_u × n_p) is the user-commodity interaction matrix and R^T is the transpose of R, so that A = [[0, R], [R^T, 0]].
In order to utilize the information of the nodes themselves in the user-commodity bipartite graph, an identity matrix is added to A; meanwhile, to avoid gradient vanishing or gradient explosion during training, normalization is carried out with a diagonal matrix D whose diagonal entries are the degrees of the nodes in the user-commodity bipartite graph, thereby obtaining the normalized adjacency matrix Â.
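By way of illustration only, the graph construction and normalization can be sketched as follows; the symmetric normalization D^(-1/2)(A + I)D^(-1/2) and the SciPy sparse representation are assumptions of this sketch, not details fixed by the text above.

```python
# Sketch of Step 1 (graph construction): R is the n_u x n_p user-commodity
# interaction matrix; the block adjacency is built, self-loops are added, and
# symmetric degree normalization is applied (assumption of this sketch).
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(R):
    n_u, n_p = R.shape
    # Block adjacency of the bipartite graph: A = [[0, R], [R^T, 0]]
    A = sp.bmat([[None, R], [R.T, None]], format="csr")
    A_self = A + sp.eye(n_u + n_p, format="csr")      # add the identity matrix
    deg = np.asarray(A_self.sum(axis=1)).flatten()    # node degrees
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.power(deg, -0.5)
    d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.0            # guard isolated nodes
    D_inv_sqrt = sp.diags(d_inv_sqrt)
    return D_inv_sqrt @ A_self @ D_inv_sqrt           # normalized adjacency matrix
```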
Then, the unique id of each user and of each commodity is converted into a dense vector through an embedding layer, giving a user feature vector and a commodity feature vector, where d is the dimension of the feature vectors. We stack these vectors into an embedding table E^(0) that represents the feature matrix of the user-commodity bipartite graph.
Then, we use t layers of graph convolution to aggregate the features of the node neighbors, where the propagation process is defined as E^(l+1) = σ(Â E^(l) W^(l)), in which W^(l) is a trainable weight matrix and σ is the LeakyReLU activation function.
Through the t layers of graph convolution, the t feature matrices from E^(1) to E^(t) are obtained and concatenated to form the final feature matrix E; E is then split into a user part E_u and a commodity part E_p, whose rows serve as the vector representations of the user nodes e_u and of the commodity nodes e_p, respectively.
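A minimal PyTorch-style sketch of this encoder is given below; the embedding dimension d = 64 and t = 3 layers are illustrative choices, not values fixed by the invention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphConvEncoder(nn.Module):
    """Step 1: id embeddings + t graph-convolution layers + concatenation."""
    def __init__(self, n_users: int, n_items: int, d: int = 64, t: int = 3):
        super().__init__()
        self.embedding = nn.Embedding(n_users + n_items, d)  # E^(0): one row per node
        self.layers = nn.ModuleList([nn.Linear(d, d, bias=False) for _ in range(t)])
        self.n_users = n_users

    def forward(self, A_hat: torch.Tensor):
        # A_hat: sparse normalized adjacency of shape (n_u + n_p, n_u + n_p)
        e = self.embedding.weight
        outputs = []
        for W in self.layers:                                # t propagation layers
            e = F.leaky_relu(torch.sparse.mm(A_hat, W(e)))   # E^(l+1) = sigma(A_hat E^(l) W^(l))
            outputs.append(e)
        E = torch.cat(outputs, dim=1)                        # connect the t feature matrices
        return E[: self.n_users], E[self.n_users:]           # E_u (user rows), E_p (item rows)
```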
Step 2: the comments received by each commodity are integrated into one comment document of that commodity; the comment document is preprocessed by word segmentation, lemmatization, stop-word removal, and removal of words with extremely high or extremely low occurrence frequency, and is then processed with a text feature extractor to obtain the vector representation a_r of the commodity comments.
Specifically, the text feature extractor proceeds as follows:
First, word embedding: the word sequence of the input text is represented as [w_1, w_2, …, w_l], where l is the length of the input text, and is then converted by the word embedding layer into a word vector representation sequence [v_1, v_2, …, v_l], where d_v is the word embedding dimension.
To exploit local context information in the input text, we process the word vector representation sequence with a convolutional neural network to obtain a context word vector representation sequence [c_1, c_2, …, c_l], where the context representation c_i of the i-th word is computed as:
c_i = LeakyReLU(W_t × v_(i-k):(i+k) + b_t)
where v_(i-k):(i+k) is the concatenation of the word embeddings from the (i-k)-th word to the (i+k)-th word, and W_t and b_t are the convolution kernel and bias, respectively.
Considering that different words in the input text have different informativeness, we use an attention mechanism to compute a weight [α_1, α_2, …, α_l] for each vector in the context word vector representation sequence; the context word vector representations are then weighted by the corresponding weights and summed to obtain the final representation of the input text, where the weight α_i is produced by an attention scoring function in which W_a and b_a are a trainable weight matrix and bias, respectively, and q is an attention query vector, normalized with a softmax over all words in the text.
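The extractor can be sketched as follows; the tanh inside the attention scoring function, the concrete dimensions and the padding scheme are assumptions of this sketch rather than details fixed by the text above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextFeatureExtractor(nn.Module):
    """Steps 2-3: word embedding -> 1-D convolution (LeakyReLU) -> attention pooling."""
    def __init__(self, vocab_size: int, d_v: int = 300, d_c: int = 100, k: int = 1):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_v, padding_idx=0)
        # window from word i-k to i+k  ->  kernel size 2k+1, same output length via padding
        self.conv = nn.Conv1d(d_v, d_c, kernel_size=2 * k + 1, padding=k)
        self.attn = nn.Linear(d_c, d_c)              # W_a and b_a
        self.query = nn.Parameter(torch.randn(d_c))  # attention query vector q

    def forward(self, word_ids: torch.Tensor) -> torch.Tensor:
        v = self.word_emb(word_ids)                                      # (batch, l, d_v)
        c = F.leaky_relu(self.conv(v.transpose(1, 2))).transpose(1, 2)   # (batch, l, d_c)
        scores = torch.tanh(self.attn(c)) @ self.query                   # (batch, l)
        alpha = torch.softmax(scores, dim=1)                             # word weights
        return (alpha.unsqueeze(-1) * c).sum(dim=1)                      # weighted sum of c_i
```

The same module is applied to the comment document in Step 2 (giving a_r) and to the title-and-description text in Step 3 (giving a_t).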
Step 3: the title and description information of the commodity is acquired and preprocessed by word segmentation, lemmatization, stop-word removal, and removal of words with extremely high or extremely low occurrence frequency; the title and description information is then processed with the same text feature extractor as in Step 2 to obtain the vector representation a_t of the commodity content.
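One possible realisation of the preprocessing used in Steps 2 and 3, assuming English text and the NLTK toolkit (the frequency thresholds max_df and min_count are illustrative):

```python
# Requires the NLTK 'punkt', 'stopwords' and 'wordnet' data packages.
from collections import Counter
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

def preprocess(documents, max_df: float = 0.5, min_count: int = 5):
    """Tokenise, lemmatise, drop stop words and extremely frequent / rare words."""
    lemmatizer = WordNetLemmatizer()
    stops = set(stopwords.words("english"))
    docs = [[lemmatizer.lemmatize(w.lower()) for w in word_tokenize(doc)
             if w.isalpha() and w.lower() not in stops] for doc in documents]
    df = Counter(w for doc in docs for w in set(doc))   # document frequency
    tf = Counter(w for doc in docs for w in doc)        # corpus frequency
    keep = {w for w in tf if tf[w] >= min_count and df[w] / len(documents) <= max_df}
    return [[w for w in doc if w in keep] for doc in docs]
```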
Step 4: the vector representation e_p of the commodity node, the vector representation a_r of the comments and the vector representation a_t of the content are concatenated to obtain the final representation p of the commodity; the vector representation e_u of the user node is taken as the final representation u of the user.
Step 5: the similarity between the user and the commodity is calculated as the dot product of the final user representation and the final commodity representation: ŷ = u^T p.
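In code, Steps 4 and 5 reduce to a concatenation followed by a dot product; this sketch uses illustrative tensor names and assumes the final user and commodity representations share the same dimensionality.

```python
import torch

def final_item_representation(e_p: torch.Tensor, a_r: torch.Tensor, a_t: torch.Tensor) -> torch.Tensor:
    """Step 4: concatenate node, review and content vectors into the final item vector p."""
    return torch.cat([e_p, a_r, a_t], dim=-1)

def predict_score(u: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
    """Step 5: y_hat = u^T p, computed row-wise for a batch of (user, item) pairs."""
    return (u * p).sum(dim=-1)
```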
Step 6: the parameters of the proposed method are optimized using the Bayesian personalized ranking loss:
L = -Σ_{(u,i,j)∈D} ln σ(ŷ_ui - ŷ_uj) + λ||Θ||²
where D = {(u, i, j) | i ∈ I_u^+, j ∈ I_u^-} denotes the pairwise training data, I_u^+ denotes the set of commodities purchased by user u, I_u^- denotes the set of commodities that user u has not purchased, σ is the sigmoid function, Θ represents all trainable model parameters, and λ controls the L2 regularization strength to prevent overfitting.
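A hedged sketch of this objective, assuming dot-product scores and mean reduction over a mini-batch of sampled (u, i, j) triples:

```python
import torch
import torch.nn.functional as F

def bpr_loss(u, p_pos, p_neg, params, lam: float = 1e-4):
    """-ln sigma(y_ui - y_uj) over sampled (u, i, j) triples plus L2 regularization."""
    y_pos = (u * p_pos).sum(dim=-1)            # y_ui: score of a purchased commodity
    y_neg = (u * p_neg).sum(dim=-1)            # y_uj: score of an unpurchased commodity
    rank_loss = -F.logsigmoid(y_pos - y_neg).mean()
    l2 = sum((w ** 2).sum() for w in params)   # ||Theta||^2
    return rank_loss + lam * l2
```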
The experimental datasets are three subsets of the Amazon review dataset: CDs_and_Vinyl, Movies_and_TV and Books. The following table describes the statistics of the three datasets:
[Table: statistics of the CDs_and_Vinyl, Movies_and_TV and Books datasets]
for each data set, 70% of all its interactions were taken as training set, 10% as validation set, and 20% as test set.
Recall@K and NDCG@K were chosen as evaluation metrics; in the experiments, K = 20.
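For reference, Recall@K and NDCG@K for a single user can be computed as in the sketch below; ranked_items is the model's ranking over candidate commodities and relevant_items the held-out test commodities (both illustrative names).

```python
import numpy as np

def recall_ndcg_at_k(ranked_items, relevant_items, k: int = 20):
    """Recall@K and NDCG@K for one user, given a ranked list and the ground-truth set."""
    relevant = set(relevant_items)
    hits = [1.0 if item in relevant else 0.0 for item in ranked_items[:k]]
    recall = sum(hits) / max(len(relevant), 1)
    dcg = sum(h / np.log2(i + 2) for i, h in enumerate(hits))            # 0-based positions
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return recall, ndcg
```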
The selected comparison methods are BPRMF, NGCF and DeepCoNN. The following table shows the corresponding experimental results:
[Table: experimental results of the compared methods on the three datasets]
from experimental results, it can be seen that the method provided by the invention achieves superior performance to the comparative method on all three data sets.

Claims (7)

1. A commodity recommendation method based on multi-modal commodity feature fusion, characterized by comprising the following steps:
1.1 constructing a user-commodity bipartite graph, and obtaining the vector representations of the user nodes and of the commodity nodes through graph convolution;
1.2 obtaining the comment document of a commodity, and extracting the vector representation of the commodity comments through a convolutional neural network, wherein the vector representation of the commodity comments is extracted as follows: all comments received by each commodity are integrated into one comment document of that commodity, and the comment document is preprocessed by word segmentation, lemmatization, stop-word removal, and removal of words with extremely high or extremely low occurrence frequency; feature extraction is then performed on the commodity comment document with a text feature extractor to obtain the vector representation a_r of the commodity comments; the text feature extractor comprises the following steps: first, the word sequence of the input text is represented as [w_1, w_2, …, w_l], where l is the length of the input text; the word sequence representation is then converted by a word embedding layer into a word vector representation sequence [v_1, v_2, …, v_l], where d_v is the word embedding dimension; the word vector representation sequence is then processed with a convolutional neural network to obtain a context word vector representation sequence [c_1, c_2, …, c_l], where the context representation c_i of the i-th word is computed as c_i = LeakyReLU(W_t × v_(i-k):(i+k) + b_t); finally, a weight [α_1, α_2, …, α_l] is computed for each vector in the context word vector representation sequence with an attention mechanism, and the context word vector representations are then weighted by the corresponding weights and summed to obtain the final representation of the input text, where the weight α_i is computed from an attention scoring function whose trainable parameters are a weight matrix W_a, a bias b_a and an attention query vector q, normalized with a softmax over all words in the text;
1.3 acquiring the title and description information of the commodity, and extracting the vector representation of the commodity content through a convolutional neural network, wherein the vector representation of the commodity content is extracted as follows: the title and description information of the commodity is first acquired and preprocessed by word segmentation, lemmatization, stop-word removal, and removal of words with extremely high or extremely low occurrence frequency; feature extraction is then performed on the commodity title and description information with the text feature extractor to obtain the vector representation a_t of the commodity content;
1.4 obtaining the final representation of the user and the final representation of the commodity;
1.5 calculating the similarity between the user and the commodity;
1.6 optimizing the parameters of the proposed method through Bayesian personalized ranking loss.
2. The commodity recommendation method based on multi-modal commodity feature fusion according to claim 1, wherein the specific method for obtaining the vector representations of the user nodes and of the commodity nodes in 1.1 is as follows:
2.1 constructing a user-commodity bipartite graph from the historical records of commodities purchased by users;
2.2 performing neighbor propagation and aggregation on the user-commodity bipartite graph through graph convolution to obtain the vector representations of the user nodes and of the commodity nodes.
3. The commodity recommendation method based on multi-modal commodity feature fusion according to claim 2, wherein the specific steps for constructing the user-commodity bipartite graph in 2.1 are as follows:
3.1 constructing a user-commodity bipartite graph from the historical interaction records of users and commodities, and representing it with a user-commodity adjacency matrix A of size (n_u + n_p) × (n_u + n_p), where n_u and n_p are the numbers of users and commodities, R (of size n_u × n_p) is the user-commodity interaction matrix and R^T is the transpose of R, so that A = [[0, R], [R^T, 0]];
3.2 to utilize the information of the nodes themselves in the user-commodity bipartite graph, an identity matrix is added to A; meanwhile, to avoid gradient vanishing or gradient explosion during training, normalization is carried out with a diagonal matrix D whose diagonal entries are the degrees of the nodes in the user-commodity bipartite graph, thereby obtaining the normalized adjacency matrix Â.
4. The commodity recommendation method based on multi-modal commodity feature fusion according to claim 2, wherein the specific steps of the graph convolution in 2.2 are as follows:
4.1 converting the unique id of each user and each commodity into a dense vector through an embedding layer, obtaining a user feature vector and a commodity feature vector, where d is the dimension of the feature vectors;
4.2 building an embedding table E^(0) to represent the feature matrix of the user-commodity bipartite graph;
4.3 aggregating the features of the node neighbors with t layers of graph convolution, where the propagation process is defined as E^(l+1) = σ(Â E^(l) W^(l)), W^(l) being a trainable weight matrix and σ the LeakyReLU activation function;
4.4 concatenating the t feature matrices from E^(1) to E^(t) obtained through the t layers of graph convolution to form the final feature matrix E, and then splitting E into a user part E_u and a commodity part E_p, whose rows serve as the vector representations of the user nodes e_u and of the commodity nodes e_p, respectively.
5. The commodity recommendation method based on multi-modal commodity feature fusion according to claim 1, wherein the specific method for obtaining the final user representation and the final commodity representation in 1.4 is as follows:
5.1 concatenating the vector representation e_p of the commodity node, the vector representation a_r of the comments and the vector representation a_t of the content to obtain the final representation p of the commodity; and taking the vector representation e_u of the user node as the final representation u of the user.
6. The commodity recommendation method based on multi-modal commodity feature fusion according to claim 1, wherein the similarity between the user and the commodity is calculated as ŷ = u^T p, where u is the final representation of the user, u^T is the transpose of u, and p is the final representation of the commodity.
7. The commodity recommendation method based on multi-modal commodity feature fusion according to claim 1, wherein the parameters of the proposed method are optimized by minimizing the Bayesian personalized ranking loss:
L = -Σ_{(u,i,j)∈D} ln σ(ŷ_ui - ŷ_uj) + λ||Θ||²
where D = {(u, i, j) | i ∈ I_u^+, j ∈ I_u^-} denotes the pairwise training data, I_u^+ denotes the set of commodities purchased by user u, and I_u^- denotes the set of commodities not purchased by user u; σ is the sigmoid function; Θ represents all trainable model parameters and λ controls the L2 regularization strength to prevent overfitting.
CN202110444726.4A 2021-04-24 2021-04-24 Commodity recommendation method based on multi-mode commodity feature fusion Active CN113159892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110444726.4A CN113159892B (en) 2021-04-24 2021-04-24 Commodity recommendation method based on multi-mode commodity feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110444726.4A CN113159892B (en) 2021-04-24 2021-04-24 Commodity recommendation method based on multi-mode commodity feature fusion

Publications (2)

Publication Number Publication Date
CN113159892A CN113159892A (en) 2021-07-23
CN113159892B true CN113159892B (en) 2022-05-06

Family

ID=76870143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110444726.4A Active CN113159892B (en) 2021-04-24 2021-04-24 Commodity recommendation method based on multi-mode commodity feature fusion

Country Status (1)

Country Link
CN (1) CN113159892B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155512A (en) * 2021-12-07 2022-03-08 南京理工大学 Fatigue detection method and system based on multi-feature fusion of 3D convolutional network
CN114936901B (en) * 2022-05-21 2024-05-28 山东大学 Visual perception recommendation method and system based on cross-modal semantic reasoning and fusion
CN114943588B (en) * 2022-06-15 2024-07-02 厦门大学 Commodity recommendation method based on neural network noise data
CN117786234B (en) * 2024-02-28 2024-04-26 云南师范大学 Multimode resource recommendation method based on two-stage comparison learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106227815A * 2015-07-22 2016-12-14 Tcl集团股份有限公司 Personalized application function recommendation method and system based on multi-modal cues
CN109559209A * 2019-01-18 2019-04-02 深圳创新奇智科技有限公司 E-commerce clothing outfit recommendation method based on multi-modal information
CN110263256A (en) * 2019-06-21 2019-09-20 西安电子科技大学 Personalized recommendation method based on multi-modal heterogeneous information
EP3557499A1 (en) * 2018-04-20 2019-10-23 Facebook, Inc. Assisting users with efficient information sharing among social connections
CN111222332A (en) * 2020-01-06 2020-06-02 华南理工大学 Commodity recommendation method combining attention network and user emotion

Also Published As

Publication number Publication date
CN113159892A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
CN113159892B (en) Commodity recommendation method based on multi-mode commodity feature fusion
CN111222332B (en) Commodity recommendation method combining attention network and user emotion
CN108509573B (en) Book recommendation method and system based on matrix decomposition collaborative filtering algorithm
CN111242729A (en) Serialization recommendation method based on long-term and short-term interests
Clinchant et al. A domain adaptation regularization for denoising autoencoders
CN109460508B (en) Efficient spam comment user group detection method
CN111209386A (en) Personalized text recommendation method based on deep learning
CN111737578A (en) Recommendation method and system
CN113468227A (en) Information recommendation method, system, device and storage medium based on graph neural network
CN112100512A (en) Collaborative filtering recommendation method based on user clustering and project association analysis
CN106157156A (en) A kind of cooperation recommending system based on communities of users
CN110781401A (en) Top-n project recommendation method based on collaborative autoregressive flow
CN110727855A (en) Personalized recommendation method based on improved factorization machine
CN117252665B (en) Service recommendation method and device, electronic equipment and storage medium
CN114943034A (en) Intelligent news recommendation method and system based on fine-grained aspect characteristics
CN113988951A (en) Commodity recommendation learning model construction method based on tensor decomposition and collaborative filtering
CN107169830A (en) A kind of personalized recommendation method based on cluster PU matrix decompositions
CN113449200B (en) Article recommendation method and device and computer storage medium
CN113763031A (en) Commodity recommendation method and device, electronic equipment and storage medium
CN111930926A (en) Personalized recommendation algorithm combined with comment text mining
CN110321565B (en) Real-time text emotion analysis method, device and equipment based on deep learning
KR20210120977A Interactive customized search method driven by a restricted Boltzmann machine
CN111967973A (en) Bank client data processing method and device
CN113159891B (en) Commodity recommendation method based on fusion of multiple user representations
CN112734519B (en) Commodity recommendation method based on convolution self-encoder network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210723

Assignee: Guangxi wisdom Valley Technology Co.,Ltd.

Assignor: GUILIN University OF ELECTRONIC TECHNOLOGY

Contract record no.: X2022450000202

Denomination of invention: A Product Recommendation Method Based on Multimodal Product Feature Fusion

Granted publication date: 20220506

License type: Common License

Record date: 20221125