CN114238765A - Block chain-based position attention recommendation method - Google Patents

Info

Publication number
CN114238765A
CN114238765A (application CN202111561154.4A)
Authority
CN
China
Prior art keywords
session
user
embedding
self
item
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111561154.4A
Other languages
Chinese (zh)
Inventor
董立岩
朱广通
刘元宁
朱晓冬
李永丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University
Priority to CN202111561154.4A
Publication of CN114238765A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/953: Querying, e.g. by the use of web search engines
    • G06F 16/9535: Search customisation based on user profiles and personalisation
    • G06F 16/901: Indexing; Data structures therefor; Storage structures
    • G06F 16/9024: Graphs; Linked lists
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/602: Providing cryptographic facilities or services
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent


Abstract

The invention discloses a blockchain-based position attention recommendation method comprising the following steps: first, deploying a recommendation system on a blockchain platform; second, uploading user data; third, constructing a graph; fourth, establishing an attention network; fifth, determining position information; sixth, establishing a self-attention layer; and seventh, establishing a prediction layer. Advantages: the transition relations between adjacent items in a session can be fully trained, and the order of the items is captured alongside the item relations, reducing training loss. Using blockchain technology, behavior data are uploaded to the chain through a behavior-recording contract, and accurate recommendations are made according to user preference while the security of user data is guaranteed.

Description

Block chain-based position attention recommendation method
Technical Field
The invention relates to a position attention recommendation method, and in particular to a blockchain-based position attention recommendation method.
Background
At present, with the continuous progress of science and technology, problems such as information explosion and overload are becoming increasingly serious, and recommendation systems can address them well. However, traditional recommendation models cannot capture a user's short-term interactions, which greatly degrades prediction of the user's short-term preferences. Session-based recommendation models solve this problem effectively: a session is a user's recent activity, such as a sequence of purchases or clicks.
SKNN combines sessions with a K-nearest-neighbor algorithm, making recommendations from the K sessions most similar to the current one while taking context information into account. The FPMC model proposed by Rendle et al. combines user-item matrix factorization with Markov chains: it first constructs a personalized transition matrix based on a Markov chain, then uses matrix factorization to alleviate matrix sparsity. GRU4Rec by Hidasi et al. was the first to apply a recurrent neural network to recommendation; the model uses multiple layers of gated units to control how much information is passed to the next layer of the network. Wu et al. proposed the SR-GNN model, which represents sessions as graphs to strengthen the transition information between adjacent items, obtains a long-term session embedding through a gated graph neural network, treats the last clicked item as the short-term session embedding, and combines the two into the final session embedding. Although these models greatly improve recommendation accuracy, they consider neither the influence of item order and higher-order item correlations within a session, nor the privacy of user data. In an era when data are easy to collect, large e-commerce platforms can acquire user behavior data through various techniques, but those data are usually stored in a centralized database for every module to access, which easily leads to data leakage.
Research shows that many problems of conventional recommendation systems remain unsolved. First, the order information of items in the user's history is not incorporated; second, the higher-order correlation information between items cannot be obtained; third, the privacy problem is not addressed. To solve these problems, the GPAN recommendation system is proposed herein, which applies session-item position representations and a self-attention neural network on a blockchain platform to obtain user preferences.
Disclosure of Invention
The primary object of the invention is to solve the problem that existing recommendation models do not incorporate the item-order information in the data;
the second object of the invention is to solve the problem that existing recommendation models cannot obtain the higher-order correlation information between items in the data;
the third object of the invention is to improve recommendation accuracy while guaranteeing the privacy of user data;
the invention provides a blockchain-based position attention recommendation method to achieve these objects and solve these problems.
The invention provides a blockchain-based position attention recommendation method, which comprises the following steps:
First, deploy the recommendation system on a blockchain platform: a record smart-contract compiling module is deployed, which allocates, in each block of the blockchain, an area for recording user information when a user registers on an e-commerce web page;
Second, upload user data: the user's history records are uploaded to the system, and a behavior-recording smart contract is compiled by the behavior-record smart-contract compiling module; each block in the blockchain records the encrypted user history;
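As an illustrative sketch only (the patent discloses no contract code), the behavior-recording step can be pictured as appending encrypted history records to a hash-linked chain of blocks; the class name, field names, and use of SHA-256 below are assumptions, not part of the claimed method:

```python
import hashlib
import json

class BehaviorChain:
    """Minimal hash-chained ledger standing in for the behavior-recording
    contract: each block holds one encrypted user history record."""

    def __init__(self):
        self.blocks = []

    def add_record(self, user_id, encrypted_history):
        # Link the new block to the hash of the previous one.
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {"user": user_id, "record": encrypted_history, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.blocks.append({**body, "hash": digest})

    def verify(self):
        # Valid iff every block's hash matches its body and its prev link.
        prev = "0" * 64
        for b in self.blocks:
            body = {"user": b["user"], "record": b["record"], "prev": b["prev"]}
            if b["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != b["hash"]:
                return False
            prev = b["hash"]
        return True
```

Any tampering with a stored record invalidates the chain, which is the property the patent relies on for user-data security.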
Third, construct the graph: the construction involves the item set, a session composed of user behaviors ordered by timestamp, the item the user will click next, and the generated probability sequence; the items form the nodes of the session graph, and the session is represented by the edges linking the nodes.
Let V = {v_1, v_2, v_3, ..., v_n} denote the set of all unique items involved in the sessions, where n is the total number of items. Each session, ordered by timestamp, is denoted S = [v_{s,1}, v_{s,2}, v_{s,3}, ..., v_{s,m}], where v_{s,i} ∈ V and m is the session length; the goal is to predict the item the user will click next, namely v_{s,m+1}. Each item in S can be regarded as a node, and (v_{s,i-1}, v_{s,i}) indicates that the user interacted with node v_{s,i} after clicking v_{s,i-1}, so each session can be represented as a graph.
Interactions between nodes are divided into four types. The first is the self-connection: each node is first connected to itself to strengthen the influence of its own information in the subsequent network, denoted by 1. The second is the out-degree connection, indicating an edge leaving the current node, denoted by 2. The third is the in-degree connection, indicating an edge entering the current node, denoted by 3. The fourth is the bidirectional connection, in which the current node and another node interact in both directions, denoted by 4; no interaction is denoted by 0. The session representation space is set with node vectors v ∈ R^d, where d is the dimension, and the session representation consists of the node vectors;
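The four interaction types above can be sketched as a small routine that turns one session into a relation matrix (an illustrative Python sketch; the function name and the exact numeric encoding are assumptions):

```python
def build_session_graph(session):
    """Build the relation matrix for one session.
    Encoding assumed from the description: 0 = no interaction, 1 = self,
    2 = out-degree edge, 3 = in-degree edge, 4 = bidirectional."""
    nodes = list(dict.fromkeys(session))   # unique items, first-seen order
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    rel = [[0] * n for _ in range(n)]
    for i in range(n):
        rel[i][i] = 1                      # every node is self-connected
    for a, b in zip(session, session[1:]): # consecutive clicks a -> b
        i, j = idx[a], idx[b]
        if i == j:
            continue
        if rel[i][j] == 3:                 # reverse edge exists: bidirectional
            rel[i][j] = rel[j][i] = 4
        elif rel[i][j] == 0:
            rel[i][j] = 2                  # edge leaving i
            rel[j][i] = 3                  # edge entering j
    return nodes, rel
```

For the click sequence [1, 2, 3, 2, 1], every adjacent pair occurs in both directions, so all off-diagonal edges become bidirectional.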
Step four, establish an attention network: the attention network captures the transition information between adjacent items by first multiplying neighboring node embeddings and then applying the corresponding relation weights:
e_{ij} = σ(A_{r_{ij}}(Wh_i ⊙ Wh_j))
where σ is the LeakyReLU activation function and r_{ij} is the relation between nodes v_i and v_j, trained with four different weight matrices A_self, A_in, A_out and A_inout corresponding to the self-connection, in-degree, out-degree and bidirectional relations respectively; W is a linear transformation whose parameters are shared across all nodes. The weights are then normalized over the neighbors N(v_i) of each node to ease computation:
α_{ij} = exp(e_{ij}) / Σ_{v_k ∈ N(v_i)} exp(e_{ik})
After the neighbor weights are obtained, a weighted sum aggregates the information of every neighbor node:
h'_i = Σ_{v_j ∈ N(v_i)} α_{ij} h_j
Through the establishment of the attention network, a local session representation that attends only to each item and its neighbors is obtained;
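A minimal numpy sketch of this aggregation, assuming the attention logit is a relation-specific vector applied to the element-wise product of neighboring embeddings (the exact parameterization in the original figures is not reproduced here, so the shapes of A are an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # embedding dimension

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def aggregate(H, rel, A):
    """One attention pass over the session graph.
    H   : (n, d) node embeddings
    rel : (n, n) relation-type matrix (0 = no edge)
    A   : dict mapping relation type -> (d,) attention vector
    Returns (n, d) neighbor-aggregated embeddings."""
    n = H.shape[0]
    out = np.zeros_like(H)
    for i in range(n):
        nbrs = [j for j in range(n) if rel[i][j] != 0]
        # relation-specific attention logit per neighbor
        e = np.array([leaky_relu(A[rel[i][j]] @ (H[i] * H[j])) for j in nbrs])
        alpha = np.exp(e - e.max())
        alpha /= alpha.sum()               # softmax over i's neighbors
        out[i] = sum(a * H[j] for a, j in zip(alpha, nbrs))
    return out

A = {r: rng.normal(size=d) for r in (1, 2, 3, 4)}  # self / out / in / both
H = rng.normal(size=(3, d))
rel = [[1, 2, 0], [3, 1, 2], [0, 3, 1]]            # graph of session [1, 2, 3]
H_local = aggregate(H, rel, A)
```

When a node has only its self-connection, the softmax weight is 1 and its embedding passes through unchanged, which matches the self-connection's stated purpose of preserving each node's own information.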
Step five, determine the position information: the position information is a reversed, learnable item-position embedding. For session l a position matrix R_l = [p_1, p_2, p_3, ..., p_m] is set, where m is the length of the current session and l indexes the sessions, and the session is represented as S = [h_{s,1}, h_{s,2}, h_{s,3}, ..., h_{s,m}]. The fused position embedding of the i-th item is
z_i = σ(W_1[h_{s,i} ; p_{m-i+1}] + b)
where σ is the tanh activation function, W_1 is the position weight matrix, and b is a trainable bias term. A weight matrix is fused into the position-information embedding so that the position information receives an appropriate weight, because position information is only auxiliary and the real focus is the session representation:
β_i = W_4^T σ(W_2 z_i + W_3 H* + c)
S* = Σ_{i=1..m} β_i z_i
where W_2, W_3 and W_4 are weight matrices, c is a bias term, and H* is the average of all items in the current session:
H* = (1/m) Σ_{t=1..m} h_{s,t}
where h_{s,t} is the embedding of the t-th item in the session. Multiplying the position-adjusted weights into the corresponding session representation yields the short-term session embedding S*, which is then passed into the self-attention layer;
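Under the assumption that the fusion concatenates each item embedding with its reverse position embedding and weights the result against the session average (the original figures are not reproduced, so the exact matrix shapes are illustrative), step five can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 4, 5                        # embedding size, session length

H = rng.normal(size=(m, d))        # item embeddings from the attention layer
P = rng.normal(size=(m, d))        # learnable position embeddings P_1..P_m

W1 = rng.normal(size=(d, 2 * d)); b = rng.normal(size=d)
W2 = rng.normal(size=(d, d));     W3 = rng.normal(size=(d, d))
W4 = rng.normal(size=d);          c = rng.normal(size=d)

# Reverse position: item i is paired with P_{m-i+1}, so the most recent
# click always receives the same position embedding regardless of length.
Z = np.tanh(np.concatenate([H, P[::-1]], axis=1) @ W1.T + b)

H_star = H.mean(axis=0)                       # session average H*
beta = np.array([W4 @ np.tanh(W2 @ z + W3 @ H_star + c) for z in Z])
S_star = (beta[:, None] * Z).sum(axis=0)      # short-term session embedding S*
```

The reverse indexing `P[::-1]` is the "reverse learning" of item positions described above: distance from the end of the session, not from the start, determines the position embedding.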
Step six, establish the self-attention layer: the long-term preference is obtained by aggregating all item embeddings in the session through self-attention. With the session representation S* incorporating the position embedding already obtained, the self-attention layer aggregates it into the final session representation:
F = softmax((S*W^Q)(S*W^K)^T / √d)(S*W^V)
where W^Q, W^K and W^V are the self-attention linear transformation matrices, all applied to S*. Self-attention consists of two processes: the first computes the weight coefficients from W^Q and W^K, and the second uses those coefficients to weight and sum the W^V projection.
Finally, a new residual connection is used to make the attention network more stable: during training, ReLU activation is applied first, followed by a linear connection, and the two residual layers preserve more information from the previous layer and reduce training loss:
C' = σ(σ(F)W_5 + b_2)W_6 + b_3
C* = W_7C' + (1 - W_7)F
where σ is the ReLU activation function, W_5, W_6 and W_7 are weight matrices, and b_2 and b_3 are d-dimensional bias terms. Each linear transformation adds a Dropout layer to prevent overfitting. Finally, the long-term session embedding C* and the short-term session embedding S* are linearly combined to obtain the final session embedding:
C = W_8C* + (E - W_8)S*
where W_8 ∈ R^{d×d} and E is the d×d identity matrix, so the final session embedding contains both the short-term and the long-term embedding;
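A sketch of step six under the stated formulas, with single-head attention, a scalar gate W_7, Dropout omitted, and the last row of each matrix standing in for the aggregated session vectors (all of these simplifications are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 4, 5
S_star = rng.normal(size=(m, d))    # position-fused session embedding

WQ, WK, WV = (rng.normal(size=(d, d)) for _ in range(3))
W5, W6 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
b2, b3 = rng.normal(size=d), rng.normal(size=d)
W7 = 0.5                            # scalar residual gate (assumption)
W8 = rng.normal(size=(d, d))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

relu = lambda x: np.maximum(x, 0)

# Scaled dot-product self-attention over the session items.
Q, K, V = S_star @ WQ, S_star @ WK, S_star @ WV
F = softmax(Q @ K.T / np.sqrt(d)) @ V

# Two-layer residual connection: ReLU feed-forward, then a gated skip.
C_prime = relu(relu(F) @ W5 + b2) @ W6 + b3
C_star = W7 * C_prime + (1 - W7) * F

# Linear combination of long-term (C*) and short-term (S*) embeddings;
# the last rows stand in for the aggregated session vectors.
c_long, s_short = C_star[-1], S_star[-1]
E = np.eye(d)
C = W8 @ c_long + (E - W8) @ s_short  # final session embedding
```

The gated skip C* = W_7 C' + (1 - W_7) F is what lets the layer fall back to the raw attention output when the feed-forward transform would otherwise lose information.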
Step seven, establish the prediction layer: the final session embedding is used to predict the item the user will click next:
ŷ_i = softmax(C^T v_i)
where ŷ_i is the predicted probability that the user clicks item v_i next. Cross entropy is used as the loss function:
L = -Σ_{i=1..n} [ y_i log(ŷ_i) + (1 - y_i) log(1 - ŷ_i) ]
where y is the one-hot representation of the ground-truth item. The minimum loss is obtained with the Adam optimizer, and the highest-scoring items are finally recommended to the user.
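Step seven reduces to scoring every candidate item against the final session embedding; a minimal sketch follows (the item count, ground-truth index, and top-20 cutoff are illustrative, and the optimizer step is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_items = 4, 10
C = rng.normal(size=d)                 # final session embedding
V = rng.normal(size=(n_items, d))      # candidate item embeddings

scores = V @ C
y_hat = np.exp(scores - scores.max())
y_hat /= y_hat.sum()                   # softmax click probabilities

y = np.zeros(n_items); y[3] = 1.0      # one-hot ground truth (assumed index)
eps = 1e-12                            # numerical floor inside the logs
loss = -np.sum(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))

# Recommend the highest-scoring items (top 20 in the embodiments; here
# fewer items exist, so all are ranked).
top_items = np.argsort(-scores)[:20]
```

In training, `loss` would be minimized with Adam over all the weight matrices of steps four through seven; at serving time only the ranking of `scores` is needed.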
The invention has the beneficial effects that:
compared with the traditional recommendation model, the block chain technology is added in the block chain-based position attention recommendation method provided by the invention, the reconstructed neural network structure is combined, the session position information is fused, the conversion relation between adjacent items in the session can be fully trained, and the sequential significance can be obtained while the data item relation is obtained. After the representation fused with the data position information is obtained, the long-term preference representation of the user is obtained through a self-attention layer. The invention improves the self-attention network, and provides a new residual error network after self-attention training to make the network more stable and reduce the loss of the training process. The long-term preferences of the user are linearly linked with the short-term preferences to form end-user preferences. According to the invention, a redundant hypergraph is not required to be established, and a graph used for both long-term preference and short-term preference is obtained, so that the time complexity and the space complexity are reduced while the accuracy is ensured. The invention divides the relation between projects into self-linking, unidirectional linking and bidirectional linking, wherein the unidirectional linking is divided into a node a to a node b and a node b to a node a. The algorithm is trained by using different weight matrixes according to the relation among different projects, is more suitable for the scene of the user historical behavior in reality, and has a more closed structure with the user historical behavior and more accurate prediction result. The invention uses the block chain technology, the behavior data is uploaded to the block chain through the behavior recording contract, and accurate recommendation is carried out according to the user preference on the premise of ensuring the safety of the user data.
Drawings
FIG. 1 is a schematic diagram of the overall operation of the method of the present invention.
Detailed Description
Please refer to fig. 1:
the invention provides a block chain-based position attention recommendation method, which comprises the following steps:
the first step is to deploy the system on a block chain platform: and the deployment recording intelligent contract compiling module is used for distributing an area to record user information for each block in the block chain while a user registers as a user in the e-commerce webpage.
And secondly, uploading user data: uploading the history records of the users to a system, then compiling the behavior record intelligent contract through a behavior record intelligent contract compiling module, and recording the encrypted user history records in each block in the block chain.
Thirdly, constructing a graph: in the construction of the graph, the method comprises an item set, a session composed of user behaviors ordered according to time stamps, an item to be clicked next by a user, a generated probability sequence and the like, wherein the item set forms nodes in the network graph of the knowledge graph, and the session is shown by edges linking the nodes;
the existing item set Let V ═ V1,v2,v3,...,v|n|Denotes a set of all unique items involved in a session, n is the total number of items, and each session is denoted by a time stamp as S ═ Vs,1,Vs,2,Vs,3,...,Vs,m]In which V iss,iE.g. V, m is the session length, and the purpose is to predict the next item to be clicked by the user, namely Vs,m+1Session S ═ Vs,1,Vs,2,Vs,3,...,Vs,m]Each item in (1) canTo be seen as a node, where (V)s,i-1,Vs,i) Indicating that the user is clicking Vs,i-1After the node is connected with Vs,iInteractions occur so that each session can be represented as a graph;
the interaction between the nodes is divided into four types, the first type is self-connection, each node is self-connected firstly, the purpose is to strengthen the influence of self information in the following network, and the self-connection is represented by 1; the second type is out-degree connection, which represents that the current node is out and is represented by 2; the third type is an inbound connection, which indicates that the current node is inbound and is represented by 3; the fourth type is interconnection, which means that the current node and other nodes are mutually interacted and is represented by 4; if no interaction is represented by 0, the session representation space and the node vector V epsilon R are setdD is the dimension, and the session representation consists of node vectors;
step four, establishing an attention network: acquiring conversion information between adjacent items by using an attention network, multiplying adjacent nodes first, and then multiplying corresponding weights, wherein a specific formula is as follows:
Figure BDA0003414605670000081
where σ is the LeakyRuLU activation function,
Figure BDA0003414605670000082
rijis a node viAnd vjThe relationship between the nodes is trained by four different weight matrixes, and is Aself,Ain,Aout,AinoutRespectively corresponding to the self-connection, in-degree connection, out-degree connection and two-way connection relations; w is a linear transformation of the shared parameter applied to all nodes,
Figure BDA0003414605670000083
the specific formula is as follows:
Figure BDA0003414605670000084
then the weights are normalized to facilitate calculation,
Figure BDA0003414605670000085
is a node vjAfter the weights of the node neighbors are obtained, weighted multiplication is carried out, the purpose is to aggregate each neighbor node information, and the calculation formula is as follows:
Figure BDA0003414605670000086
through the establishment of the attention network, a local session representation only focusing on the characteristics of the local session representation and the neighbor is obtained;
fifthly, determining the position information: the position information is the reverse learning item position embedding, and a position matrix R is setl=[Ρ123,...,Ρm]M is the length of the current session, l is the number of sessions, and a session indicates that S ═ hs,1,hs,2,hs,3,...,hs,m]Then, the formula for embedding the ith item fusion position is as follows:
Figure BDA0003414605670000091
σ is the tan h activation function,
Figure BDA0003414605670000092
W1the position weight matrix is a position weight matrix, b is a bias term and is a trainable parameter, the weight matrix is fused in the embedding matrix of the position information, so that the position information can obtain a proper weight, because the position information is only auxiliary information, the real key point is conversation representation, and the specific formula is as follows:
Figure BDA0003414605670000093
Figure BDA0003414605670000094
Figure BDA0003414605670000095
W2,W2,W4is a weight matrix, c is an offset term, H*Is the average value of all the items in the current session, and the specific formula is as follows:
Figure BDA0003414605670000096
Figure BDA0003414605670000097
for average embedding of t-th article in conversation, multiplying the matrix with adjusted position information weight to corresponding conversation representation to obtain S*Embedding for short-term session, then embedding S*Afferent from the attention layer;
and a sixth step of establishing a self-attention layer: the long-term preference is obtained by aggregating all item embeddings in a conversation with self-attention, which has resulted in a conversation representation S incorporating location embedding*And aggregating the conversation expressions into a final conversation expression through a self-attention layer, wherein the specific formula is as follows:
Figure BDA0003414605670000098
WQ,WK,
Figure BDA0003414605670000099
are all self-attention linear transformation matrixes, all derived from S*Self-attention is divided into two processes, the first process being based on WQAnd WKPerforming weight coefficient calculation, and performing the second process based on the weight coefficient pair WVSumming is carried out;
and finally, a brand-new residual error connection is used to enable the attention network to be more stable, ReLU activation is firstly carried out during training, then linear connection is carried out, two layers of residual error connection are used, more information of the previous layer can be kept, and the training loss is reduced, wherein the specific formula is as follows:
C′=σ(σ(F)W5+b2)W6+b3
C*=W7C′+(1-W7)F
where σ is the ReLU activation function, W5,W6
Figure BDA0003414605670000101
b3And b2For d-dimension bias phase, each linear transformation adds a "Dropout" layer to prevent overfitting, and finally long-term embedding of the session into C*Short-term embedding with session S*And performing linear combination to obtain the final session embedding, wherein the specific formula is as follows:
C=W8C*+(E-W8)S*
Figure BDA0003414605670000102
e is a unit matrix of dxd, and finally, the final session embedding including short-term and long-term embedding is obtained;
step seven, establishing a prediction layer: and predicting the item clicked next by the user by using the final session embedding, wherein the specific formula is as follows:
Figure BDA0003414605670000103
Figure BDA0003414605670000104
using the cross entropy as a loss function for the predicted probability of the next user clicked on the item, the formula is as follows:
Figure BDA0003414605670000105
y is a representation of the real term, and we get the least loss through Adam's optimization. And finally recommending the items with higher scores to the user.
The specific embodiments are as follows:
example 1:
On the Diginetica dataset, the whole process runs as follows:
First, deploy the system on a blockchain platform and deploy the behavior-record smart-contract compiling module.
Second, feed the Diginetica data into the system; each block in the blockchain records the encrypted user history.
Third, initialize the Diginetica dataset, construct the data into a graph structure, and divide the relations between items into four types: self-links, out-degree links, in-degree links, and bidirectional links.
Fourth, input the items in each session into the attention network for training to obtain the initial session representation.
Fifth, train on the reversed session item sequence to obtain the session position representation, and fuse it with the initial session representation to obtain the user's short-term preference representation.
Sixth, input the user's short-term preference representation into the self-attention layer to obtain the user's long-term preference.
Seventh, linearly combine the user's long-term and short-term preferences to obtain the final user preference, multiply it by the item representations to obtain each item's final score, and recommend the top 20 highest-scoring items to the user.
Eighth, compare against eight baseline models, including SR-GNN, NARM and STAMP; the algorithm achieves the best results.
Example 2:
On the Nowplaying dataset, the whole process runs as follows:
First, deploy the system on a blockchain platform and deploy the behavior-record smart-contract compiling module.
Second, feed the Nowplaying data into the system; each block in the blockchain records the encrypted user history.
Third, initialize the Nowplaying dataset, construct the data into a graph structure, and divide the relations between items into four types: self-links, out-degree links, in-degree links, and bidirectional links.
Fourth, input the items in each session into the attention network for training to obtain the initial session representation.
Fifth, train on the reversed session item sequence to obtain the session position representation, and fuse it with the initial session representation to obtain the user's short-term preference representation.
Sixth, input the user's short-term preference representation into the self-attention layer to obtain the user's long-term preference.
Seventh, linearly combine the user's long-term and short-term preferences to obtain the final user preference, multiply it by the item representations to obtain each item's final score, and recommend the top 20 highest-scoring items to the user.
Eighth, compare against eight baseline models, including SR-GNN, NARM and STAMP; the algorithm achieves the best results.
Example 3:
On the Tmall dataset, the whole process runs as follows:
First, deploy the system on a blockchain platform and deploy the behavior-record smart-contract compiling module.
Second, feed the Tmall data into the system; each block in the blockchain records the encrypted user history.
Third, initialize the Tmall dataset, construct the data into a graph structure, and divide the relations between items into four types: self-links, out-degree links, in-degree links, and bidirectional links.
Fourth, input the items in each session into the attention network for training to obtain the initial session representation.
Fifth, train on the reversed session item sequence to obtain the session position representation, and fuse it with the initial session representation to obtain the user's short-term preference representation.
Sixth, input the user's short-term preference representation into the self-attention layer to obtain the user's long-term preference.
Seventh, linearly combine the user's long-term and short-term preferences to obtain the final user preference, multiply it by the item representations to obtain each item's final score, and recommend the top 20 highest-scoring items to the user.
Eighth, compare against eight baseline models, including SR-GNN, NARM and STAMP; the algorithm achieves the best results.

Claims (1)

1. A location attention recommendation method based on a block chain is characterized in that: the method comprises the following steps:
firstly, deploying a recommendation system on a block chain platform: in the block chain platform deployment recommendation system, an intelligent contract compiling module is deployed and recorded, and when a user registers as a user in an e-commerce webpage, each block in a block chain is allocated with a block of area to record user information;
secondly, uploading user data: the user's history records are uploaded to the system, a behavior-recording smart contract is then compiled by the smart-contract compiling module, and each block in the blockchain records the encrypted user history;
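The behavior-recording step above can be sketched as a hash-linked log: each block stores an encrypted user history plus the hash of the previous block, so tampering is detectable. This is a minimal illustrative sketch, not the patent's actual contract code; the class and method names (`BehaviorChain`, `record_behavior`) and the toy XOR "encryption" are assumptions standing in for a real chain and cipher.

```python
import hashlib
import json

class BehaviorChain:
    """Toy stand-in for the on-chain behavior record (illustrative only)."""

    def __init__(self):
        self.blocks = []  # each block: {"prev": hash, "data": encrypted record, "hash": ...}

    def record_behavior(self, user_id, clicked_items, key=0x5A):
        # toy "encryption": XOR each byte with a key (placeholder for a real cipher)
        payload = json.dumps({"user": user_id, "items": clicked_items}).encode()
        encrypted = bytes(b ^ key for b in payload).hex()
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"prev": prev, "data": encrypted}
        # the block hash commits to both the previous hash and the record
        block["hash"] = hashlib.sha256((prev + encrypted).encode()).hexdigest()
        self.blocks.append(block)
        return block["hash"]

    def verify(self):
        # the chain is valid only if every stored hash matches a recomputation
        prev = "0" * 64
        for blk in self.blocks:
            if blk["prev"] != prev:
                return False
            if hashlib.sha256((prev + blk["data"]).encode()).hexdigest() != blk["hash"]:
                return False
            prev = blk["hash"]
        return True

chain = BehaviorChain()
chain.record_behavior("u1", [3, 1, 4])
chain.record_behavior("u2", [2, 7])
```

Altering any stored record invalidates every later hash check, which is the tamper-evidence property the blockchain deployment relies on.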
thirdly, constructing a graph: the construction involves an item set, a session composed of user behaviors ordered by timestamp, the item the user will click next, and the generated probability sequence; the items form the nodes of the graph, and sessions are represented by the edges linking those nodes;
let V = {v_1, v_2, v_3, ..., v_n} denote the set of all unique items involved in the sessions, where n is the total number of items; each session, ordered by timestamp, is represented as S = [v_{s,1}, v_{s,2}, v_{s,3}, ..., v_{s,m}], where v_{s,i} ∈ V and m is the session length; the goal is to predict the next item the user will click, namely v_{s,m+1}; each item in the session S = [v_{s,1}, ..., v_{s,m}] can be regarded as a node, and a pair (v_{s,i-1}, v_{s,i}) indicates that the user interacted with v_{s,i} after clicking v_{s,i-1}, so that each session can be represented as a graph;
the interactions between nodes are divided into four types: the first is self-connection, where each node first connects to itself to strengthen the influence of its own information in the subsequent network, represented by 1; the second is out-degree connection, indicating an outgoing edge from the current node, represented by 2; the third is in-degree connection, indicating an incoming edge to the current node, represented by 3; the fourth is bidirectional connection, indicating that the current node and another node interact in both directions, represented by 4; no interaction is represented by 0; the session representation space is set with node vectors v ∈ R^d, where d is the dimension, and the session representation consists of node vectors;
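The four-way relation coding above can be sketched directly. The helper below (`build_relation_matrix` is an illustrative name, not from the patent) maps a click sequence to a relation matrix over its unique items, using the claim's codes: 1 = self, 2 = out-degree, 3 = in-degree, 4 = bidirectional, 0 = none.

```python
import numpy as np

def build_relation_matrix(session):
    """Build the 4-type relation matrix for one session (illustrative sketch)."""
    items = list(dict.fromkeys(session))          # unique items, first-seen order
    idx = {v: i for i, v in enumerate(items)}
    n = len(items)
    # collect the directed edges between consecutive clicks
    edges = set()
    for a, b in zip(session, session[1:]):
        if a != b:
            edges.add((idx[a], idx[b]))
    A = np.zeros((n, n), dtype=int)
    np.fill_diagonal(A, 1)                        # 1: self-connection
    for i, j in edges:
        if (j, i) in edges:
            A[i, j] = 4                           # 4: bidirectional interaction
        else:
            A[i, j] = 2                           # 2: out-degree edge i -> j
            A[j, i] = 3                           # 3: in-degree edge at j from i
    return items, A

items, A = build_relation_matrix([1, 2, 3, 2])
```

For the session [1, 2, 3, 2], items 2 and 3 interact in both directions and get code 4, while the 1→2 transition yields an out-degree 2 at item 1 and an in-degree 3 at item 2.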
fourthly, establishing an attention network: conversion information between adjacent items is acquired with an attention network, which first multiplies the transformed adjacent node embeddings element-wise and then applies the corresponding relation weight; the specific formula is:

e_ij = σ(A_{r_ij}^T (W v_i ⊙ W v_j))

where σ is the LeakyReLU activation function and r_ij is the relationship between nodes v_i and v_j; the relation weights are trained as four different parameters A_self, A_in, A_out, A_inout, corresponding respectively to the self-connection, in-degree, out-degree, and bidirectional relations; W is a linear transformation whose parameters are shared by all nodes; the weights are then normalized to facilitate calculation:

α_ij = exp(e_ij) / Σ_{v_k ∈ N(v_i)} exp(e_ik)

where N(v_i) is the neighbor set of node v_i; after the weights of a node's neighbors are obtained, a weighted sum is taken in order to aggregate the information of every neighbor node:

h_i = Σ_{v_j ∈ N(v_i)} α_ij v_j

through this attention network, a local session representation that attends only to each node's own features and its neighbors is obtained;
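The relation-aware attention step can be sketched in numpy. The formulas follow the reconstruction above; the variable names and random toy inputs are assumptions for illustration, not the patent's trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 3
V = rng.normal(size=(n, d))                   # node embeddings v_i
W = rng.normal(size=(d, d))                   # shared linear transformation
# one attention vector per relation type: 1=self, 2=out, 3=in, 4=bidirectional
a_rel = {r: rng.normal(size=d) for r in (1, 2, 3, 4)}
A = np.array([[1, 2, 0],
              [3, 1, 4],
              [0, 4, 1]])                     # relation matrix from the graph step

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

H = np.zeros_like(V)
for i in range(n):
    nbrs = [j for j in range(n) if A[i, j] != 0]
    # e_ij = LeakyReLU(a_{r_ij}^T (W v_i ⊙ W v_j))
    e = np.array([leaky_relu(a_rel[A[i, j]] @ ((W @ V[i]) * (W @ V[j])))
                  for j in nbrs])
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                      # softmax over the neighborhood
    # aggregate neighbor embeddings with the normalized weights
    H[i] = sum(w * V[j] for w, j in zip(alpha, nbrs))
```

Each row of `H` is a locally aggregated node representation that mixes only the node itself and its graph neighbors, weighted by the relation-specific attention scores.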
fifthly, determining position information: the position information is a reversely learned item position embedding; a position matrix P_l = [p_1, p_2, p_3, ..., p_m] is set, where m is the length of the current session and l indexes the session, and the session after the graph layer is represented as S = [h_{s,1}, h_{s,2}, h_{s,3}, ..., h_{s,m}]; the formula fusing the i-th item with its position embedding is:

z_i = σ(W_1 [h_{s,i} || p_{m-i+1}] + b)

where σ is the tanh activation function, || denotes concatenation, and W_1, the position weight matrix, and b, a bias term, are trainable parameters; the position weight matrix is fused into the embedding so that the position information obtains an appropriate weight, because position information is only auxiliary and the real focus is the session representation; the specific formulas are:

β_i = W_4 σ(W_2 z_i + W_3 H* + c)

S* = Σ_{i=1}^{m} β_i h_{s,i}

where W_2, W_3, W_4 are weight matrices, c is a bias term, and H* is the average of all item embeddings in the current session:

H* = (1/m) Σ_{t=1}^{m} h_{s,t}

where h_{s,t} is the embedding of the t-th item in the session; multiplying the position-adjusted weights onto the corresponding session representation yields S*, the short-term session embedding, which is then passed into the self-attention layer;
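The reverse-position fusion and soft attention above can be sketched as follows. The equations are the reconstruction given in this step; the shapes, random inputs, and the use of a sigmoid as the inner activation of β are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 4, 5
Hs = rng.normal(size=(m, d))                  # item representations h_{s,i}
P = rng.normal(size=(m, d))                   # learnable position matrix
W1 = rng.normal(size=(d, 2 * d)); b = rng.normal(size=d)
W2 = rng.normal(size=(d, d));     W3 = rng.normal(size=(d, d))
W4 = rng.normal(size=d);          c = rng.normal(size=d)

# z_i = tanh(W1 [h_{s,i} ; p_{m-i+1}] + b)  -- positions taken in reverse order
Z = np.array([np.tanh(W1 @ np.concatenate([Hs[i], P[m - 1 - i]]) + b)
              for i in range(m)])

H_star = Hs.mean(axis=0)                      # H*: average of all session items

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# beta_i = W4 sigma(W2 z_i + W3 H* + c)  -- per-item attention weight
beta = np.array([W4 @ sigmoid(W2 @ Z[i] + W3 @ H_star + c) for i in range(m)])
# S* = sum_i beta_i h_{s,i}  -- short-term session embedding
S_star = (beta[:, None] * Hs).sum(axis=0)
```

Because the position index runs backwards (`m - 1 - i`), items closer to the end of the session share position embeddings consistently across sessions of different lengths.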
sixthly, establishing a self-attention layer: the long-term preference is obtained by aggregating all item embeddings in the session with self-attention; the session representation S* incorporating position embedding has already been obtained, and it is aggregated into a final session representation through the self-attention layer; the specific formula is:

F = softmax((S* W^Q)(S* W^K)^T / √d)(S* W^V)

where W^Q, W^K, W^V are the self-attention linear transformation matrices, all applied to S*; self-attention is divided into two processes: the first computes the weight coefficients from W^Q and W^K, and the second sums the W^V projections weighted by those coefficients;

finally, a residual connection is used to make the attention network more stable; during training, ReLU activation is applied first and then a linear transformation, and two layers of residual connection are used so that more information from the previous layer is retained and the training loss is reduced; the specific formulas are:

C′ = σ(σ(F) W_5 + b_2) W_6 + b_3

C* = W_7 C′ + (E − W_7) F

where σ is the ReLU activation function and b_2 and b_3 are d-dimensional bias terms; each linear transformation is followed by a Dropout layer to prevent overfitting; finally, the long-term session embedding C* is linearly combined with the short-term session embedding S* to obtain the final session embedding:

C = W_8 C* + (E − W_8) S*

where E is the d×d identity matrix, so that the final session embedding contains both short-term and long-term information;
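The sixth step can be sketched end to end in numpy: single-head self-attention over the position-fused item matrix, the two-layer residual feed-forward, and the learned mix of long- and short-term embeddings. Taking the last position of the residual output as the long-term embedding, and the random stand-in for S*, are assumptions for illustration; Dropout is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 4, 5
S = rng.normal(size=(m, d))                   # position-fused item matrix
WQ, WK, WV = (rng.normal(size=(d, d)) for _ in range(3))
W5, W6, W7, W8 = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
b2, b3 = rng.normal(size=d), rng.normal(size=d)
S_star = rng.normal(size=d)                   # stand-in short-term embedding S*

def softmax_rows(X):
    E = np.exp(X - X.max(axis=-1, keepdims=True))
    return E / E.sum(axis=-1, keepdims=True)

def relu(x):
    return np.maximum(x, 0.0)

# F = softmax((S WQ)(S WK)^T / sqrt(d)) (S WV)
F = softmax_rows((S @ WQ) @ (S @ WK).T / np.sqrt(d)) @ (S @ WV)
# C' = ReLU(ReLU(F) W5 + b2) W6 + b3  -- two-layer feed-forward
C_prime = relu(relu(F) @ W5 + b2) @ W6 + b3
# C* = W7 C' + (E - W7) F, applied row-wise
C_star = C_prime @ W7.T + F @ (np.eye(d) - W7).T
# mix long-term (last position of C*, an assumption) with short-term S*
C = W8 @ C_star[-1] + (np.eye(d) - W8) @ S_star
```

The `(E − W7)` / `(E − W8)` gates make each combination a learned interpolation: as a gate matrix approaches the identity, the transformed branch dominates; as it approaches zero, the skip branch passes through unchanged.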
seventhly, establishing a prediction layer: the final session embedding is used to predict the item the user will click next; the specific formulas are:

ẑ_i = C^T v_i

ŷ = softmax(ẑ)

where ŷ is the predicted probability of each item being clicked next by the user; cross entropy is used as the loss function:

L(ŷ) = −Σ_{i=1}^{n} ( y_i log(ŷ_i) + (1 − y_i) log(1 − ŷ_i) )

where y is the one-hot representation of the real item; the loss is minimized with the Adam optimizer, and the highest-scoring items are finally recommended to the user.
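The prediction layer and loss can be sketched as follows, using the reconstructed formulas above; the random embeddings and the choice of ground-truth index are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_items = 4, 6
C = rng.normal(size=d)                        # final session embedding
V = rng.normal(size=(n_items, d))             # candidate item embeddings v_i

z = V @ C                                     # z_hat_i = C^T v_i
y_hat = np.exp(z - z.max())
y_hat /= y_hat.sum()                          # softmax over candidate items

y = np.zeros(n_items)
y[2] = 1.0                                    # one-hot ground-truth item
eps = 1e-12                                   # numerical guard inside the logs
# cross entropy as in the claim: -(y log y_hat + (1 - y) log(1 - y_hat))
loss = -np.sum(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))

top_k = np.argsort(-z)[:20]                   # up to 20 highest-scoring items
```

In training, `loss` would be minimized with Adam over all trainable parameters; at serving time only the `top_k` ranking is needed.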
CN202111561154.4A 2021-12-16 2021-12-16 Block chain-based position attention recommendation method Pending CN114238765A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111561154.4A CN114238765A (en) 2021-12-16 2021-12-16 Block chain-based position attention recommendation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111561154.4A CN114238765A (en) 2021-12-16 2021-12-16 Block chain-based position attention recommendation method

Publications (1)

Publication Number Publication Date
CN114238765A true CN114238765A (en) 2022-03-25

Family

ID=80759086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111561154.4A Pending CN114238765A (en) 2021-12-16 2021-12-16 Block chain-based position attention recommendation method

Country Status (1)

Country Link
CN (1) CN114238765A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117370673A (en) * 2023-12-08 2024-01-09 中电科大数据研究院有限公司 Data management method and device for algorithm recommendation service
CN117370673B (en) * 2023-12-08 2024-02-06 中电科大数据研究院有限公司 Data management method and device for algorithm recommendation service


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination