CN112733030A - User interest preference capturing method - Google Patents
User interest preference capturing method
- Publication number
- CN112733030A (publication number) · CN202110043271.5A (application number)
- Authority
- CN
- China
- Prior art keywords
- user
- sequence
- interest
- interest preference
- capturing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention relates to a user interest preference capturing method and belongs to the field of click-through-rate prediction. The method constructs an interest preference capturing system around the user's dual cognitive process, which integrates empirical intuition with logical reasoning; the system learns the user's historical interaction sequence data in parallel, mines the dynamic changes and sequential relations of historical interaction behaviors, and captures the user's interest preference at the next moment. The method comprises the following steps: S1, data input; S2, data processing; S3, potential interest preference capture; S4, dynamic interest preference capture; and S5, interest preference fusion. The method describes the user's interest preference more accurately and improves the interaction experience between users and items.
Description
Technical Field
The invention belongs to the field of click rate prediction, and relates to a user interest preference capturing method.
Background
With the development of the internet and the growth of human demand for interaction and communication, social networks and social media have begun to affect people's lives. Social networks are in a period of explosive development, and social network applications of various forms have emerged, evolving from homogeneous networks containing only users or items into heterogeneous networks with diverse entities. Meanwhile, the flood of internet information has gradually awakened people's appetite for individualized content, and the traditional pure pursuit of the 'quantity' of information is turning into a craving for higher-quality, more precise content. This massive data also makes it possible to analyze and predict user behavior. Recommendation systems, as information-filtering tools, have therefore already demonstrated their role in personalized content distribution, and click-through-rate prediction, which evaluates the probability that a user clicks a given item, is a key task of such systems.
In recent years, with the great success of deep learning in research fields such as computer vision and natural language processing, researchers at home and abroad have conducted intensive research on click-through-rate prediction based on deep learning. Huang T., Zhang Z. and Zhang J. proposed a click-through-rate prediction model combining feature importance and bilinear feature interaction in "FiBiNET: combining feature importance and bilinear feature interaction for click-through rate prediction" (In Proceedings of the ACM Conference on Recommender Systems, pp. 167-177, 2019); FiBiNET uses a SENET mechanism to dynamically learn feature importance and a bilinear function to effectively learn feature interactions. Liu B., Tang R., Chen Y. et al. proposed a new feature generation model based on a convolutional neural network in "Feature generation by convolutional neural network for click-through rate prediction" (In Proceedings of the World Wide Web Conference, pp. 1119-1129, 2019); the model exploits the strength of CNNs to generate local patterns and recombines them into new features, while the deep classifier adopts an IPNN structure to learn interactions from the augmented feature space.
The existing click rate prediction model has the following problems:
1) feature-extraction approaches that start from the perspective of the user's empirical intuition focus on the interactions among the user's interest preference features, but the actual user-item interaction process also involves decisions based on the user's logical reasoning;
2) the user's interest preference changes dynamically and evolves continuously over time, with sequential dependencies, and traditional models built in a static manner cannot capture this dynamic evolution process. Therefore, an interest preference capturing method that conforms to the real user's cognitive process is needed to describe the user's next-stage interest preference features more accurately and improve the interaction experience between users and items.
Disclosure of Invention
In view of the above, the present invention provides a method for capturing user interest preferences. First, the user's historical interaction sequence is divided into several homogeneous sequences according to the interaction characteristics of the user and the items, and these homogeneous sequences serve as inputs to a potential interest preference capturing stage and a dynamic interest preference capturing stage. Secondly, inspired by the user's dual cognitive process, the potential interest preference capturing stage captures the user's potential interest preference from the homogeneous sequence nearest to the present, based on the user's empirical-intuition cognition, while the dynamic interest preference capturing stage captures the user's dynamic interest preference from the set of divided homogeneous sequences, based on the user's logical-reasoning cognition. Finally, the potential and dynamic interest preferences generated in the two stages are concatenated and fused to complete the capture of the user's interest preference for the next stage. The method describes the user's interest preference more accurately and improves the interaction experience between users and items.
In order to achieve the purpose, the invention provides the following technical scheme:
a user interest preference capturing method comprises the steps of constructing an interest preference capturing system by using a double-cognition process integrating experience intuition and logic reasoning of a user, learning historical interactive sequence data of the user in parallel through the interest preference capturing system, mining dynamic change and sequence relation of historical interactive behaviors, and capturing interest preference of the user at the next moment; the method comprises the following steps:
S1, data input: use as input a sequence X of each user's MN most recent historical interactions, constructed in chronological order; for users with fewer than MN interactions, zeros are padded at the beginning of the sequence;
S2, data processing: starting from the second item of the user's historical interaction sequence data, calculate the similarity between the current item and its preceding and following items, divide homogeneous sequences by comparing the two similarity values, and embed each divided sequence as a low-dimensional tensor;
s3, capturing potential interest preferences: learning second-order interest preference of the user by using a bilinear product module, simultaneously learning local and global interest preference of the user by using a multi-head attention mechanism and combining a convolutional neural network module and a global pooling module respectively, and splicing and recombining feature tensors output by the bilinear product module, the convolutional neural network module and the global pooling module to further obtain potential interest preference of the user at the next moment;
s4, capturing dynamic interest preference: extracting interest points contained in each homogeneous sequence by using a convolutional neural network, constructing an interest preference evolutionary graph of a user through interlayer sampling, mapping the graph into an interest sequence, and inputting the interest sequence into a double-masking gating cyclic unit to learn the dynamic interest preference of the user at the next moment;
S5, interest preference fusion: fuse the potential interest preference and the dynamic interest preference of the user captured in steps S3 and S4, output the user's interest preference at the next moment, and complete the whole interest-preference feature capturing process.
Optionally, step S2 specifically includes:
S21, using the Jaccard coefficient, obtain the similarities J_{i-1,i} and J_{i,i+1} between the current item x_i in the user history interaction sequence X = {x_1, x_2, …, x_MN} and its preceding item x_{i-1} and following item x_{i+1};
S22, compare the sizes of J_{i-1,i} and J_{i,i+1}: if J_{i-1,i} > J_{i,i+1}, the current item becomes the next item of the current homogeneous sequence; if J_{i-1,i} < J_{i,i+1}, the current item is the first item of the next homogeneous sequence;
S23, judge whether the current item x_i is the last item of the sequence X; if so, output the sequence division result; otherwise set i = i + 1 and return to step S21;
S24, embed each divided homogeneous sequence X_i ∈ {X_1, …, X_M} as a low-dimensional feature tensor E_i = [e_1, …, e_N], where e_i is the vector representation of each item after the embedding operation.
Optionally, step S3 specifically includes:
S31, input the homogeneous-sequence embedding matrix E_M of the potential interest preference capturing phase into the bilinear product module to learn the user's second-order interest preference feature matrix F_dl = [e_i · W_ij ⊙ e_j], where W_ij is a parameter matrix shared among the interactions of all field embedding vectors, and e_i and e_j are the vector representations of items after the embedding operation;
S32, input the homogeneous-sequence embedding matrix E_M of the potential interest preference capturing phase into the multi-head attention module to obtain the early-fused interest feature weighting matrix F_a = concat(H_1, H_2, …, H_k)W^O, where W^O is an additional weight matrix and H_i is the interest preference subspace output by each attention head;
S33, pass the early-fused interest feature weighting matrix F_a through two fully connected layers to output the fitted matrix F_aff;
S34, determine the convolution kernel size C_l of the convolutional layers and the sliding-window size P_l of the pooling layers in the convolutional-neural-network stage;
S35, reset the input of the first convolutional layer and perform the convolution and pooling operations until the loop ends;
S36, take the output of the last pooling layer as the local interest preference feature matrix F_mp;
S37, obtain the global interest preference feature matrix F_gp using a sliding window of the same size as the input feature matrix;
S38, concatenate the feature matrices output by steps S31, S36 and S37 and output the user's potential interest preference at the next stage, I_c = concat(F_dl, F_mp, F_gp).
Optionally, step S4 specifically includes:
S41, use the convolutional and pooling layers of a convolutional neural network to extract the set of interest nodes contained in each homogeneous-sequence embedding matrix E_k and take them as the nodes of the k-th layer, where a_k is the total number of interest nodes in the k-th layer;
S42, sample the lower-layer nodes with an optimal sampler and construct the dynamic interest preference evolution graph G; the optimal sampler is defined as the probability of sampling a k'-layer node given a k-layer node, computed from an autocorrelation function over the node features;
S43, use the mapping function map_gs to map G(V, E) to a set of variable-length interest preference sequences S: map_gs(G, S_k) = {S_1, …, S_M};
S44, use the dual-masking gated recurrent unit to learn the interconnection pattern between nodes within the sequences, thereby capturing how nodes in the next sequence link to previous nodes;
S45, update all interest preference sequences through the transfer function to obtain the user's dynamic interest preference at the next stage, I_h.
Optionally, step S5 specifically includes:
S51, concatenate the user's potential interest preference I_c and dynamic interest preference I_h obtained in steps S38 and S45 to obtain the concatenated interest preference I;
S52, take the concatenated interest preference I as the finally captured interest preference feature of the user at the next stage.
Optionally, in MN of step S1, M denotes the number of short sequences obtained after dividing the user's long historical interaction sequence, and N denotes the number of items contained in each short sequence.
Optionally, the homogeneous sequences in step S4 are the short sequences of length N obtained after sequence division, whose items have high similarity to one another.
Optionally, the dual-masking-gating cycle unit in step S4 includes an outer reset gate, an outer masking gate, an inner reset gate, and an inner masking gate.
The invention has the following beneficial effects. Aiming at the characteristic that the user's interest preference is dynamic and sequentially dependent over time, and at the low accuracy and diversity of traditional methods that construct user interest preferences in a static manner, the invention provides an interest preference capturing method that fuses the user's dual cognitive process. The interest preference capturing system, which integrates the user's empirical intuition with logical reasoning, learns the user's second-order interest preference with a bilinear product while combining a multi-head attention mechanism with a convolutional neural network to learn the user's potential interest preference. A convolutional neural network then extracts sequence features as feature nodes, a dynamic interest preference evolution graph is constructed through interlayer sampling, and a mapping function maps the graph to a feature evolution sequence, so that the dual-masking gated recurrent unit can learn the evolution of sequence features and update the user's interest preference at the next moment. The method describes the user's interest preference more accurately and improves the interaction experience between users and items.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a block diagram of the present invention;
FIG. 2 is a flow chart of the present invention for homogeneous sequence partitioning;
FIG. 3 is a diagram of a potential interest preference capture model in accordance with the present invention;
FIG. 4 is a diagram of a dynamic interest preference capture model in accordance with the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are intended only to illustrate the invention and not to limit it; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced and do not represent the size of an actual product; and it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
Please refer to fig. 1 to 4, which illustrate a user interest preference capturing method.
Step S1, data input: a sequence X constructed in chronological order from each user's most recent MN interactions is used as input; for users with fewer than MN interactions, zeros are padded at the beginning of the sequence;
Step S2, data processing: as shown in FIG. 2, starting from the second item, the similarity between the current item and its preceding and following items in the user's historical interaction sequence is calculated, homogeneous sequences are divided by comparing the two similarity values, and each divided sequence is embedded as a low-dimensional matrix. The preferred method specifically comprises the following steps:
Step S21, using the Jaccard coefficient, compute the similarities J_{i-1,i} and J_{i,i+1} between the current item x_i in the user history interaction sequence X = {x_1, x_2, …, x_MN} and its preceding item x_{i-1} and following item x_{i+1};
Step S22, compare J_{i-1,i} and J_{i,i+1}: items satisfying the constraint J_{i-1,i} > J_{i,i+1} are grouped in interaction order into the current homogeneous sequence, while J_{i-1,i} < J_{i,i+1} marks the start of the next homogeneous sequence;
Step S23, judge whether the current item x_i is the last item of the sequence X; if so, output the sequence division result set {X_1, X_2, …, X_M}; otherwise set i = i + 1 and repeat from step S21;
Step S24, embed each divided homogeneous sequence X_i = {x_1, x_2, …, x_N} as a low-dimensional feature matrix E_i = [e_1, …, e_N], where i ∈ {1, 2, …, M}.
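The division procedure of steps S21 to S23 can be sketched in Python as follows. This is a minimal illustration: the representation of each item as a set of feature tags is an assumption (the patent does not specify the sets over which the Jaccard coefficient is computed), and ties and the last item, which has no following item, are resolved by keeping the item in the current sub-sequence.

```python
def jaccard(a, b):
    """Jaccard coefficient between two items represented as feature-tag sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def split_homogeneous(seq):
    """Divide an interaction sequence into homogeneous sub-sequences.

    From the second item onward, the current item stays in the current
    sub-sequence when its similarity to the preceding item is at least its
    similarity to the following item; otherwise it starts a new sub-sequence.
    """
    if not seq:
        return []
    subseqs = [[seq[0]]]
    for i in range(1, len(seq)):
        prev_sim = jaccard(seq[i - 1], seq[i])
        next_sim = jaccard(seq[i], seq[i + 1]) if i + 1 < len(seq) else 0.0
        if prev_sim >= next_sim:
            subseqs[-1].append(seq[i])   # next item of the current homogeneous sequence
        else:
            subseqs.append([seq[i]])     # first item of the next homogeneous sequence
    return subseqs
```

For example, a four-item history whose first two items share tag "a" and whose last two share tag "x" is split into two homogeneous sub-sequences of length two.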
Step S3, capturing potential interest preference: as shown in fig. 3, the bilinear product is used to learn the second-order interest preference of the user, and meanwhile, the multi-head attention mechanism is respectively combined with the convolutional neural network and the global pooling operation to learn the local and global interest preferences of the user, and the outputs of the modules are spliced and recombined to obtain the potential interest preference of the user at the next moment. The preferable method specifically comprises the following steps:
Step S31, take the most recent homogeneous-sequence embedding matrix E_M as the input of the potential interest preference capturing stage, and use the bilinear product module to learn the user's second-order interest preference feature matrix F_dl = [e_i · W_ij ⊙ e_j], where W_ij is a parameter matrix shared among the interactions of all field embedding vectors;
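A FiBiNET-style bilinear interaction of this kind can be sketched as below. This is a sketch under assumptions: a single matrix W shared by all pairs (the "all fields" sharing named in step S31), and enumeration over all ordered item pairs i < j, which the step does not fix.

```python
import numpy as np

def bilinear_interactions(E, W):
    """Bilinear feature interaction: for each item pair (i, j),
    p_ij = (e_i @ W) * e_j, an inner product with W followed by a
    Hadamard product. E is the (n, d) sequence embedding matrix and
    W a (d, d) parameter matrix shared across all field interactions."""
    n, _ = E.shape
    rows = [(E[i] @ W) * E[j] for i in range(n) for j in range(i + 1, n)]
    return np.stack(rows)  # F_dl: n*(n-1)/2 interaction vectors
```

With n = 4 items of dimension d = 3 the result holds 6 interaction vectors, one per pair.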
Step S32, take the most recent homogeneous-sequence embedding matrix E_M as the input of the potential interest preference capturing stage, and use the multi-head attention module to obtain the early-fused interest feature weighting matrix:
F_a = Multi-attention(Q, K, V) = concat(H_1, H_2, …, H_k)W^O
where Q, K and V are the query, key and value matrices of each attention head, W^O is an additional weight matrix, and H_i is the interest preference subspace output by each attention head.
Step S33, to prevent under-fitting by the multi-head attention module, F_a is passed through two fully connected layers to output the fitted matrix F_aff = σ(W_2 σ(W_1 F_a + b_1) + b_2), where W_1 and W_2 are weight parameters, b_1 and b_2 are biases, and σ is a non-linear activation function.
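Steps S32 and S33 can be sketched as follows. The per-head projection matrices, their shapes, the scaled dot-product attention form, and the choice of tanh for σ are assumptions; the patent only fixes the concatenation F_a = concat(H_1, …, H_k)W^O and the two dense layers.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(E, heads, Wo):
    """F_a = concat(H_1, ..., H_k) @ W_O; each head holds its own
    (Wq, Wk, Wv) projections producing Q, K, V from the embeddings E."""
    outs = []
    for Wq, Wk, Wv in heads:
        Q, K, V = E @ Wq, E @ Wk, E @ Wv
        A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # scaled dot-product weights
        outs.append(A @ V)                           # H_i: one interest subspace
    return np.concatenate(outs, axis=-1) @ Wo

def fully_connected_fit(Fa, W1, b1, W2, b2, sigma=np.tanh):
    """F_aff = sigma(sigma(Fa @ W1 + b1) @ W2 + b2): two dense layers."""
    return sigma(sigma(Fa @ W1 + b1) @ W2 + b2)
```

With a length-5 sequence, two heads of key dimension 3, and W^O mapping back to d = 4, F_a and F_aff keep the sequence shape (5, 4).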
Step S34, determine the convolution kernel size C_l of each convolutional layer and the sliding-window size P_l of each pooling layer in the convolutional-neural-network stage.
Step S35, set the input of the first convolutional layer to F_aff, then cyclically obtain the output matrix FC_l of each convolutional layer and the output matrix FP_l of each pooling layer, the output of the previous round's pooling layer being the input of the next round's convolutional layer; the convolution-pooling outputs are formulated as FC_l = σ_c(conv(FP_{l-1})) and FP_l = pool(FC_l), where σ_c(·) is the non-linear activation function of the convolutional layer and FC_i is the i-th feature-mapping matrix output by the i-th convolutional layer;
Step S36, take the output matrix of the last pooling layer as the local interest preference feature matrix F_mp.
Step S37, in the global pooling stage, obtain the global interest preference feature matrix F_gp = avg(F_aff) using a sliding window of the same size as the input feature matrix;
Step S38, concatenate and recombine the second-order, local and global interest preference feature matrices generated in steps S31, S36 and S37 to obtain the user's potential interest preference at the next stage, I_c = concat(F_dl, F_mp, F_gp).
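The convolution-pooling loop of steps S34 to S36 and the global pooling of step S37 can be sketched as follows. The 1-D depthwise convolution, tanh activation, and max pooling used here are illustrative stand-ins; the patent fixes only the kernel size C_l, the pooling window P_l, and the alternation of the two layers, with global pooling reducing to a column-wise average.

```python
import numpy as np

def conv_pool_stack(F, kernels, pool_sizes):
    """Alternate convolution (kernel size C_l) and pooling (window P_l);
    each round's pooling output feeds the next round's convolutional layer."""
    out = F
    for K, p in zip(kernels, pool_sizes):
        c = K.shape[0]
        conv = np.stack([np.tanh((out[i:i + c] * K).sum(axis=0))  # FC_l rows
                         for i in range(out.shape[0] - c + 1)])
        out = np.stack([conv[i:i + p].max(axis=0)                 # FP_l rows
                        for i in range(0, conv.shape[0] - p + 1, p)])
    return out  # F_mp: local interest preference features

def global_pool(F):
    """A sliding window as large as the whole input reduces to a
    column-wise average: F_gp = avg(F_aff)."""
    return F.mean(axis=0, keepdims=True)
```

The two feature matrices, together with F_dl, can then be concatenated row-wise into I_c as in step S38.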
Step S4, dynamic interest preference capture: as shown in FIG. 4, a convolutional neural network is used to extract the interest points contained in each homogeneous sequence, an interest preference evolution graph is constructed by interlayer sampling, and the graph is mapped to an interest sequence that is input into the dual-masking gated recurrent unit to learn the user's dynamic interest preference at the next moment. The preferred method specifically comprises the following steps:
Step S41, input each homogeneous-sequence embedding matrix E_k, X_k ∈ {X_1, X_2, …, X_M}, into the convolutional neural network to extract the a_k interest preferences it contains, and take these as interest-preference evolution nodes; a scoring function then quantifies the importance between nodes of adjacent layers, and the softmax function normalizes all candidate lower-layer nodes. The forward propagation yields the probability that, given all nodes of the k-th layer, a node of the k'-th layer is sampled, where W_k' is the weight of the k'-th layer in which the node lies;
Step S42, the optimal sampler samples the lower-layer nodes to construct the dynamic interest preference evolution graph G = (V, E); the optimal sampler is defined through an autocorrelation function calculated from the node features;
Step S43, the mapping function map_gs maps G(V, E) to a set of variable-length interest preference sequences S: map_gs(G, S_k) = {S_1, …, S_M}, where an entry of S_k indicates whether there is an edge between the k-th-layer node and a k'-th-layer node, and no connection otherwise; S_k is thus the adjacency vector recording whether the k-th-layer node is linked to each of the layer nodes before it;
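A toy version of the interlayer sampling (step S42) and the graph-to-sequence mapping (step S43) could look like the sketch below. The dot-product score standing in for the patent's autocorrelation function is an assumption, as is representing the evolution graph by a plain adjacency matrix.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sample_lower_layer(upper, lower, n_samples, rng):
    """Score every lower-layer candidate against the upper-layer nodes
    (a dot-product stand-in for the autocorrelation function), normalize
    with softmax, and sample the nodes that enter the evolution graph."""
    probs = softmax(lower @ upper.mean(axis=0))
    idx = rng.choice(len(lower), size=n_samples, replace=False, p=probs)
    return np.sort(idx), probs

def graph_to_sequences(adj):
    """map_gs: one adjacency vector S_k per node, recording whether the
    k-th node links to each node before it."""
    return [adj[k, :k].copy() for k in range(1, adj.shape[0])]
```

On a fully connected 4-node graph the mapping yields adjacency vectors of lengths 1, 2 and 3, matching the variable-length sequences of step S43.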
Step S44, capture how the (M+1)-th sequence node links to previous nodes according to the interconnection pattern among the preceding sequence nodes; specifically, the dual-masking gated recurrent unit is used to learn the transfer of the user's dynamic interest preference over the mapped sequences S_k of step S43. In the gated-recurrent-unit learning process:
r_i is the outer reset gate, controlling the retention of the state information of the graph state vector st_{i-1} whose tail sequence is S_{i-1};
m_i is the outer masking gate, controlling the input information of the graph whose tail sequence is S_i, the outer reset gate and masking gate producing a cooperative result;
r_j is the inner reset gate, controlling the retention of the state information of the sequence state vector st_{i-1,j-1} whose tail node is S_{i-1,j-1};
m_j is the inner masking gate, controlling the input information of the sequence whose tail node is S_{i-1,j};
f_out is the function specifying the adjacency-vector distribution of the i-th-layer node;
f_trans is the state transfer function.
Step S45, update all sequences through the transfer function to obtain the user's dynamic interest preference at the next stage, I_h.
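The recurrent update of steps S44 and S45 can be sketched with a single reset/masking-gated cell. This is a simplified stand-in: the patent's dual-masking unit nests an inner (node-level) gate pair inside the outer (sequence-level) pair, and the weight shapes, fixed input dimension, and tanh candidate here are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def masked_gru_step(st_prev, x, P):
    """One gated update: the reset gate r controls how much of the previous
    state st_{i-1} is kept, and the masking gate m controls how much of the
    current input S_i enters the new state."""
    r = sigmoid(P["Wr"] @ x + P["Ur"] @ st_prev)           # reset gate r_i
    m = sigmoid(P["Wm"] @ x + P["Um"] @ st_prev)           # masking gate m_i
    cand = np.tanh(P["Wc"] @ x + P["Uc"] @ (r * st_prev))  # candidate state
    return (1.0 - m) * st_prev + m * cand                  # new state st_i

def capture_dynamic_preference(seqs, dim, P):
    """f_trans sketch: fold the mapped interest sequences into the state
    one vector at a time; the final state serves as I_h."""
    st = np.zeros(dim)
    for x in seqs:
        st = masked_gru_step(st, x, P)
    return st
```

Because the state starts at zero and each step interpolates between the previous state and a tanh candidate, every component of I_h stays inside (-1, 1).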
Step S5, interest preference fusion: fuse the user's potential interest preference I_c and dynamic interest preference I_h obtained in steps S38 and S45, and output the finally captured interest preference of the user at the next moment, I = concat(I_c, I_h).
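Step S5 itself is a plain concatenation. The downstream click-through-rate head below is hypothetical, added only to show how the fused preference I could be consumed in the click-rate-prediction setting named in the Technical Field; it is not part of the claimed method.

```python
import numpy as np

def fuse_preferences(I_c, I_h):
    """I = concat(I_c, I_h): the captured next-stage interest preference."""
    return np.concatenate([np.ravel(I_c), np.ravel(I_h)])

def click_probability(I, w, b=0.0):
    """Hypothetical prediction layer: sigmoid(w . I + b)."""
    return 1.0 / (1.0 + np.exp(-(w @ I + b)))
```

With a zero weight vector the head outputs the neutral probability 0.5, regardless of the fused preference.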
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.
Claims (8)
1. A user interest preference capture method, characterized by: the method comprises the steps of constructing an interest preference capturing system by using a double-cognition process integrating experience intuition and logic reasoning of a user, learning historical interactive sequence data of the user in parallel through the interest preference capturing system, mining dynamic change and sequential relation of historical interactive behaviors, and capturing interest preference of the user at the next moment; the method comprises the following steps:
S1, data input: use as input a sequence X of each user's MN most recent historical interactions, constructed in chronological order; for users with fewer than MN interactions, zeros are padded at the beginning of the sequence;
S2, data processing: starting from the second item of the user's historical interaction sequence data, calculate the similarity between the current item and its preceding and following items, divide homogeneous sequences by comparing the two similarity values, and embed each divided sequence as a low-dimensional tensor;
s3, capturing potential interest preferences: learning second-order interest preference of the user by using a bilinear product module, simultaneously learning local and global interest preference of the user by using a multi-head attention mechanism and combining a convolutional neural network module and a global pooling module respectively, and splicing and recombining feature tensors output by the bilinear product module, the convolutional neural network module and the global pooling module to further obtain potential interest preference of the user at the next moment;
s4, capturing dynamic interest preference: extracting interest points contained in each homogeneous sequence by using a convolutional neural network, constructing an interest preference evolutionary graph of a user through interlayer sampling, mapping the graph into an interest sequence, and inputting the interest sequence into a double-masking gating cyclic unit to learn the dynamic interest preference of the user at the next moment;
S5, interest preference fusion: fuse the potential interest preference and the dynamic interest preference of the user captured in steps S3 and S4, output the user's interest preference at the next moment, and complete the whole interest-preference feature capturing process.
2. The method for capturing user interest preference according to claim 1, wherein the step S2 is specifically as follows:
s21, obtaining user history interaction sequence X ═ { X ═ by using Jaccard coefficient1,x2,…xMNCurrent term x in }iWith its antecedent xi-1And the following term xi+1Similarity of (D) Ji-1,i、Ji,i+1;
S22, comparison Ji-1,iAnd Ji,i+1The size of (a) is (b),let Ji-1,i>Ji,i+1The current term belongs to the next term of the current homogenous sequence; let Ji-1,i<Ji+1,iThe current item is the first item of the next homogeneous sequence;
s23, judging the current item xiIf the result is the last item of the sequence X, outputting a sequence division result if the result is the last item of the sequence X, otherwise, i is i +1 and returning to step S21;
S24, embedding each divided homogeneous sequence X_i ∈ {X_1, …, X_M} as a low-dimensional feature tensor E_i = [e_1, …, e_N], where e_i is the vector representation of each item after the embedding operation.
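The division rule of steps S21–S23 can be sketched in Python as follows. Items are represented here as attribute sets (a hypothetical representation chosen for illustration — the claim does not fix how items are featurized), and a tie J_{i-1,i} = J_{i,i+1} keeps the item in the current sequence:

```python
def jaccard(a, b):
    """Jaccard coefficient between two items represented as attribute sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def split_homogeneous(items):
    """Split a history into homogeneous sub-sequences per steps S21-S23:
    an item opens a new sub-sequence when it is strictly more similar to
    its successor than to its predecessor."""
    if not items:
        return []
    seqs = [[items[0]]]
    for i in range(1, len(items)):
        if i == len(items) - 1:           # S23: last item has no successor
            seqs[-1].append(items[i])
            break
        j_prev = jaccard(items[i - 1], items[i])
        j_next = jaccard(items[i], items[i + 1])
        if j_prev >= j_next:              # S22: next item of current sequence
            seqs[-1].append(items[i])
        else:                             # S22: more similar to what follows,
            seqs.append([items[i]])       # so first item of the next sequence
    return seqs
```

For example, a history of two book-like items followed by two music-like items splits into two homogeneous sub-sequences, each of which would then be embedded as one tensor E_i per step S24.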
3. The method for capturing user interest preference according to claim 1, wherein the step S3 is specifically as follows:
S31, inputting the homogeneous sequence embedding matrix E_M of the potential interest preference capturing stage into the bilinear product module to learn the user's second-order interest preference feature matrix F_dl = [e_i · W_ij ⊙ e_j], where W_ij is a parameter matrix shared among the interactions of all field embedding vectors, and e_i and e_j are the vector representations of each item after the embedding operation;
S32, inputting the homogeneous sequence embedding matrix E_M of the potential interest preference capturing stage into the multi-head attention module to output the early-fusion interest feature weighting matrix F_a = concat(H_1, H_2, …, H_k)W_O, where W_O is an additional weight matrix and H_i is the interest preference subspace output by each attention head;
S33, passing the early-fusion interest feature weighting matrix F_a through two fully connected layers to output a highly fitted matrix F_aff;
S34, determining the convolution kernel size C_l of the convolutional layers and the sliding window size P_l of the pooling layers in the convolutional neural network stage;
S35, resetting the input of the first convolutional layer and executing the convolution and pooling operations until the loop ends;
S36, taking the output of the last pooling layer as the local interest preference feature matrix F_mp;
S37, obtaining the global interest preference feature matrix F_gp by using a sliding window of the same size as the input feature matrix;
S38, splicing the feature matrices output by steps S31, S36 and S37 to output the user's potential interest preference for the next stage, I_c = concat(F_dl, F_mp, F_gp).
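Per item pair, the second-order interaction of step S31 reduces to projecting e_i through the shared parameter matrix and taking a Hadamard product with e_j. A minimal pure-Python sketch with toy dimensions (the matrix W here is a hypothetical stand-in for the learned W_ij):

```python
def bilinear_interaction(e_i, e_j, W):
    """Second-order interaction of step S31: (e_i . W) Hadamard e_j.
    W is a d x d parameter matrix shared across all field pairs; in the
    patent it is learned during training, here it is supplied directly."""
    d = len(e_i)
    # projection: row vector e_i times matrix W
    projected = [sum(e_i[r] * W[r][c] for r in range(d)) for c in range(d)]
    # element-wise (Hadamard) product with e_j
    return [projected[c] * e_j[c] for c in range(d)]
```

The potential preference of step S38 is then simply the concatenation of this output with the local and global pooling features, I_c = concat(F_dl, F_mp, F_gp).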
4. The method for capturing user interest preference according to claim 1, wherein the step S4 is specifically as follows:
S41, extracting, with the convolutional and pooling layers of a convolutional neural network, the interest node set contained in each homogeneous sequence embedding matrix E_k as the nodes of the k-th layer, where a_k is the total number of interest nodes in the k-th layer;
S42, sampling the lower-layer nodes with an optimal sampler and constructing the dynamic interest preference evolution graph G, where the optimal sampler defines, for a given k-th-layer node, the probability of sampling a k'-th-layer node, computed from an autocorrelation function based on the node features;
S43, mapping G(V, E) into a variable-length interest preference sequence set S = map_gs(G, S_k) = {S_1, …, S_M} with the mapping function map_gs;
S44, learning, with the double-masking gated recurrent unit, the manner in which the nodes within each sequence are interconnected, thereby capturing how the nodes of the next sequence link to the previous nodes.
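The claim states only that the sampler of step S42 is driven by an autocorrelation function over node features; its exact formula is not reproduced in this excerpt. One plausible sketch normalizes squared feature correlations into a sampling distribution (an illustrative assumption, not the patented definition):

```python
def sampling_probs(parent_feat, candidate_feats):
    """Probability of sampling each lower-layer node given a k-th-layer
    node: squared dot-product correlation between the parent's feature
    vector and each candidate's, normalized to sum to 1."""
    scores = [sum(p * c for p, c in zip(parent_feat, f)) ** 2
              for f in candidate_feats]
    total = sum(scores)
    if total == 0:
        # no correlation signal: fall back to uniform sampling
        return [1.0 / len(candidate_feats)] * len(candidate_feats)
    return [s / total for s in scores]
```

Nodes drawn from this distribution would form the edges of the evolution graph G, which step S43 then flattens into the sequence set fed to the recurrent unit.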
5. The method for capturing user interest preference according to claim 1, wherein the step S5 is specifically as follows:
S51, splicing the user's potential interest preference I_c and dynamic interest preference I_h obtained in steps S38 and S45 to obtain the spliced interest preference I;
S52, taking the spliced interest preference I as the finally captured interest preference feature of the user for the next stage.
6. The method as claimed in claim 1, wherein for MN in step S1, M denotes the number of short sequences obtained after dividing the user's long historical interaction sequence, and N denotes the number of items contained in each short sequence.
7. The method of claim 1, wherein each homogeneous sequence in step S4 is a short sequence of length N obtained by the sequence division, in which the items have high similarity to one another.
8. The method of claim 1, wherein the double-masking gated recurrent unit in step S4 comprises an outer reset gate, an outer masking gate, an inner reset gate and an inner masking gate.
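The exact wiring of the four gates in claim 8 is not disclosed in this excerpt. As a rough scalar sketch of one reset-plus-masking gate pair (hypothetical weights w; the patented unit is vector-valued and carries both inner and outer gate pairs):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def masked_gru_step(h_prev, x, w):
    """One step of a GRU-style cell extended with a masking gate.
    This is an illustrative layout, not the patent's exact equations."""
    r = sigmoid(w["wr"] * x + w["ur"] * h_prev)        # reset gate
    m = sigmoid(w["wm"] * x + w["um"] * h_prev)        # masking gate
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h_prev))
    # the masking gate decides how much of the candidate state passes
    # through versus how much of the previous state is retained
    return m * h_cand + (1.0 - m) * h_prev
```

With all weights zero, both gates sit at 0.5 and the candidate state is zero, so the new state is simply half the previous one; training would move the gates away from this neutral point.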
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110043271.5A CN112733030B (en) | 2021-01-13 | 2021-01-13 | User interest preference capturing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112733030A true CN112733030A (en) | 2021-04-30 |
CN112733030B CN112733030B (en) | 2022-08-09 |
Family
ID=75593117
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110043271.5A Active CN112733030B (en) | 2021-01-13 | 2021-01-13 | User interest preference capturing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112733030B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110929164A (en) * | 2019-12-09 | 2020-03-27 | 北京交通大学 | Interest point recommendation method based on user dynamic preference and attention mechanism |
CN111209475A (en) * | 2019-12-27 | 2020-05-29 | 武汉大学 | Interest point recommendation method and device based on space-time sequence and social embedded ranking |
CN111369278A (en) * | 2020-02-19 | 2020-07-03 | 杭州电子科技大学 | Click rate prediction method based on long-term interest modeling of user |
CN112084450A (en) * | 2020-09-09 | 2020-12-15 | 长沙理工大学 | Click rate prediction method and system based on convolutional attention network deep session sequence |
Non-Patent Citations (1)
Title |
---|
Li Qina et al.: "Research Progress on Context-Aware Recommendation Systems Based on Deep Learning", Computer Systems & Applications *
Also Published As
Publication number | Publication date |
---|---|
CN112733030B (en) | 2022-08-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111523047B (en) | Multi-relation collaborative filtering algorithm based on graph neural network | |
Zhu et al. | A survey on graph structure learning: Progress and opportunities | |
CN111611488B (en) | Information recommendation method and device based on artificial intelligence and electronic equipment | |
CN113761359B (en) | Data packet recommendation method, device, electronic equipment and storage medium | |
CN113127737B (en) | Personalized search method and search system integrating attention mechanism | |
CN112632296B (en) | Knowledge graph-based paper recommendation method and system with interpretability and terminal | |
CN113486190A (en) | Multi-mode knowledge representation method integrating entity image information and entity category information | |
CN112232087A (en) | Transformer-based specific aspect emotion analysis method of multi-granularity attention model | |
CN113780002A (en) | Knowledge reasoning method and device based on graph representation learning and deep reinforcement learning | |
CN114519145A (en) | Sequence recommendation method for mining long-term and short-term interests of users based on graph neural network | |
CN112699310A (en) | Cold start cross-domain hybrid recommendation method and system based on deep neural network | |
CN115270007A (en) | POI recommendation method and system based on mixed graph neural network | |
CN110245310B (en) | Object behavior analysis method, device and storage medium | |
Li et al. | Anchor-based knowledge embedding for image aesthetics assessment | |
WO2022063076A1 (en) | Adversarial example identification method and apparatus | |
CN105608118B (en) | Result method for pushing based on customer interaction information | |
CN113326384A (en) | Construction method of interpretable recommendation model based on knowledge graph | |
CN112733030B (en) | User interest preference capturing method | |
CN112486467A (en) | Interactive service recommendation method based on dual interaction relation and attention mechanism | |
Zhang et al. | Graph spring network and informative anchor selection for session-based recommendation | |
Liang et al. | The graph embedded topic model | |
CN111414538A (en) | Text recommendation method and device based on artificial intelligence and electronic equipment | |
Ali et al. | Recent Trends in Neural Architecture Search Systems | |
Yuan et al. | Combining Event Segment Classification And Graph Self-Encoder For Event Prediction | |
CN115620807B (en) | Method for predicting interaction strength between target protein molecule and drug molecule |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||