CN110765353B - Processing method and device of project recommendation model, computer equipment and storage medium
- Publication number
- CN110765353B (application number CN201910984752.9A)
- Authority
- CN
- China
- Prior art keywords
- item
- time sequence
- vector
- recommendation model
- sequence
- Legal status: Active (assumed status; not a legal conclusion)
Classifications
- G06F16/9535: Search customisation based on user profiles and personalisation
- G06N3/045: Neural networks; combinations of networks
- G06N3/08: Neural networks; learning methods
Abstract
The embodiment of the application discloses a processing method and apparatus for an item recommendation model, a computer device, and a storage medium. A training time sequence is acquired, the training time sequence comprising item vectors of the historical interaction items of a target user. The structural features of the training time sequence are analyzed through an encoder layer of the item recommendation model to be trained, to obtain a condition vector corresponding to each item vector. The training time sequence and the condition vectors corresponding to the item vectors are taken as inputs of a decoder layer of the item recommendation model to be trained, and a prediction time sequence is obtained through the decoder layer. The item recommendation model to be trained is then optimized based on the training time sequence and the prediction time sequence, to obtain the item recommendation model.
Description
Technical Field
The present application relates to the field of computer technology, and in particular to a processing method and apparatus for an item recommendation model, a computer device, and a storage medium.
Background
At present, an item recommendation model can perform various tasks on input time-series data; for example, from the time-series data generated by a user's historical operations on items before the current time, it can predict the items the user may be interested in next, achieving a more effective and personalized recommendation.
Disclosure of Invention
In view of this, embodiments of the present application provide a processing method and apparatus for an item recommendation model, a computer device, and a storage medium, which can train the recommendation model based on the data both before and after each time node in a training time sequence, which is beneficial to improving the recommendation accuracy of the model.
The embodiment of the application provides a processing method for an item recommendation model, comprising the following steps:
acquiring a training time sequence, wherein the training time sequence comprises item vectors of the historical interaction items of a target user arranged in order, the order of the item vectors being the order of the times at which the target user interacted with the historical interaction items;
analyzing the structural features of the training time sequence through an encoder layer of the item recommendation model to be trained, to obtain a condition vector corresponding to each item vector in the training time sequence, the condition vector containing structural information of the training time sequence;
taking the training time sequence as the input sequence of a decoder layer of the item recommendation model to be trained, also taking the condition vector corresponding to each item vector as an input of the decoder layer, and predicting a prediction time sequence corresponding to the training time sequence through the decoder layer;
and optimizing the item recommendation model to be trained based on the differences between the item vectors corresponding to the same time nodes in the training time sequence and the prediction time sequence, to obtain an item recommendation model for recommending items.
The embodiment of the present application further provides a processing apparatus of an item recommendation model, where the processing apparatus of the item recommendation model includes:
an acquisition module, configured to acquire a training time sequence, wherein the training time sequence comprises item vectors of the historical interaction items of a target user arranged in order, the order of the item vectors being the order of the times at which the target user interacted with the historical interaction items;
an analysis module, configured to analyze the structural features of the training time sequence through an encoder layer of the item recommendation model to be trained, to obtain a condition vector corresponding to each item vector in the training time sequence, the condition vector containing structural information of the training time sequence;
a prediction module, configured to take the training time sequence as the input sequence of a decoder layer of the item recommendation model to be trained, also take the condition vector corresponding to each item vector as an input of the decoder layer, and predict a prediction time sequence corresponding to the training time sequence through the decoder layer;
and an optimization module, configured to optimize the item recommendation model to be trained based on the differences between the item vectors corresponding to the same time nodes in the training time sequence and the prediction time sequence, to obtain an item recommendation model for recommending items.
The embodiment of the application also provides computer equipment, which comprises a processor and a memory, wherein the memory stores a plurality of instructions; the processor loads instructions from the memory to perform the steps in the processing method of the item recommendation model of the present embodiment.
The embodiment of the present application also provides a storage medium on which a computer program is stored, which, when running on a computer, causes the computer to execute the steps in the processing method of the item recommendation model according to the embodiment.
The embodiment of the invention provides a processing method and apparatus for an item recommendation model, a computer device, and a storage medium. A training time sequence is acquired, the training time sequence comprising the item vectors of a target user's historical interaction items arranged in order. The structural features of the training time sequence are analyzed through an encoder layer of the item recommendation model to be trained, to obtain a condition vector corresponding to each item vector in the training time sequence, the condition vector containing structural information of the training time sequence. The training time sequence is taken as the input sequence of a decoder layer of the item recommendation model to be trained, the condition vector corresponding to each item vector is also taken as an input of the decoder layer, and a prediction time sequence corresponding to the training time sequence is predicted through the decoder layer. In this way, when the item recommendation model is trained, the recommendation result at a given time node of the training time sequence can be optimized based on both the item vectors before that time node and the item vectors after it, so that the trained item recommendation model can effectively express all known data in the training time sequence. This improves the model's utilization of the input sequence and is beneficial to improving its prediction accuracy.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1a is a schematic structural diagram of an item recommendation system according to an embodiment of the present invention;
FIG. 1b is a flow chart of a method for processing an item recommendation model provided by an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an item recommendation model according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of another item recommendation model provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a processing device of an item recommendation model according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a computer device provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of an alternative structure of the distributed system 100 applied to a blockchain system according to an embodiment of the present invention;
FIG. 7 is an alternative schematic diagram of a block structure according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In recent years, with the rapid development of big data, machine learning, and deep learning technology, sequence recommendation algorithms and models based on time-series data and deep learning have attracted increasingly wide attention and research interest in both academia and industry. This holds especially for recommendation scenarios in which user interests change greatly within a short time, such as short-video, music, shopping, and news recommendation (a user may browse hundreds of items within hours). A sequence recommendation model based on time-series data and deep learning can be conveniently deployed and iterated on top of a trained model, and is therefore suited to the stringent requirements of the current big data era on model performance and deployment timeliness.
Conventional deep-learning-based time-series recommendation models typically consist of a recurrent neural network (RNN). Combined with a data augmentation training scheme, the RNN model originally applied to natural language processing can effectively predict future sequence content from known sequence content. The main implementations are an RNN decoder layer (RNN-Decoder) structure using an autoregressive strategy and an RNN encoder layer structure using a random masking strategy.
Consider an RNN decoder layer model using an autoregressive strategy. The input sequence is {x_1, x_2, x_3, …, x_{N-1}} and the generated sequence is {x_2, x_3, x_4, …, x_N}. At any time point, for example when predicting the output at the t-th time point, the data input to the decoder layer model is only {x_1, x_2, x_3, …, x_{t-1}}, so the problem of information leakage is effectively avoided.
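To make the shift-by-one relationship concrete, the following plain-Python sketch (with a hypothetical five-item session) shows how the autoregressive decoder's input and target sequences line up:

```python
# Toy illustration with a hypothetical five-item session: to predict the
# item at step t, the autoregressive decoder only sees items 1..t-1.
session = ["x1", "x2", "x3", "x4", "x5"]

decoder_input = session[:-1]    # {x1, x2, x3, x4}
decoder_target = session[1:]    # {x2, x3, x4, x5}

for t, target in enumerate(decoder_target, start=1):
    # at step t the model has consumed session[:t] and must predict `target`
    print(f"step {t}: seen={session[:t]} -> predict {target}")
```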
Consider an RNN encoder layer model using a random masking strategy. A portion of the input sequence is randomly masked, i.e., replaced with a uniform placeholder (the masked items), and the partially masked sequence is used as the input sequence of the RNN encoder layer model. Unlike the decoder layer, if the input data at the t-th time point is masked, the prediction target at that time point is the original data at the masked t-th time point.
When either of these two RNN models predicts the data at a given time point, the data after that time point is not fully utilized. In practical application scenarios, however, the user's future selection data in the model training phase is still valid and also contains the user's personal preference information. If the user's future selection data can be reasonably utilized and accurately modeled, the prediction precision for the current time point can be greatly improved.
There is also the NextItNet model, a 1-dimensional dilated-convolution CNN model similar to an RNN decoder. It has a multi-layer network structure and can fully capture the user's interest points from previous interaction records. However, during training each current item in NextItNet only uses past user interests; the training process is unidirectional, and if a simple bidirectional network were adopted, information leakage would occur and invalidate the training.
In the embodiment of the application, a new item recommendation model with an encoder layer-decoder layer structure is provided. The encoder layer learns the overall structural information of the training time sequence; the decoder layer obtains a prediction time sequence based on the structural information learned by the encoder layer and the training time sequence; and the item recommendation model is optimized based on the training time sequence and the prediction time sequence. This strengthens the model's expression of the overall information of the training time sequence and improves its recommendation accuracy.
The following describes, by way of example, the modeling approach of the recommendation model according to an embodiment of the present application alongside the modeling approach of NextItNet.
Modeling in the NextItNet manner: for a given item set, the joint distribution of the item sequence is maximized (i.e., the probability of the i-th item occurring is greatest given that the preceding i-1 items occur), decomposed into a product of conditional probabilities. Mathematically:

p(x; θ) = ∏_{i=1}^{n} p(x_i | x_1, x_2, …, x_{i-1}; θ)
An alternative modeling approach of the present application: for a given item (e.g., video) set, the probability that each masked item (or the item following it) occurs is maximized, i.e., the probability of the i-th masked item occurring is maximal under the condition that the other items occur. Mathematically:

p(x_Δ | x; θ) = ∏_i p(x_{Δi} | x_1, …, x_{Δi-1}, c; θ)

where x_{Δi} denotes sequence data at a randomly selected and masked time point of {x_1, x_2, …, x_n}, and x_1, …, x_{Δi-1} is the sequence data preceding x_{Δi} in {x_1, x_2, …, x_n}. Predicting x_{Δi} is analogous to filling in a blank in a cloze test: in the sentence "I ___ this ___ because it is ___ cute", for example, the blanks x_{Δ1}, x_{Δ2}, x_{Δ3} correspond to "like", "dog", "very", respectively. The vector c is the final state vector ("Final Encoded State") output by the recurrent-neural-network-based encoder layer (RNN-Encoder) described below; because c encodes the entire masked sequence, the items after x_{Δi} also condition the prediction.
The embodiment of the invention provides a processing method and apparatus for an item recommendation model, a computer device, and a storage medium. Specifically, the embodiment of the invention provides a processing apparatus for an item recommendation model suitable for a computer device. The computer device may be a terminal or a server; the terminal may be a mobile phone, a tablet computer, a notebook computer, or the like, and the server may be a single server or a server cluster composed of a plurality of servers.
The embodiment of the invention describes the processing method of the item recommendation model taking a server as the computer device as an example.
Referring to fig. 1a, an item recommendation system provided by an embodiment of the present invention includes a terminal 10, a server 20, and the like; the terminal 10 and the server 20 are connected via a network, e.g. a wired or wireless network connection, etc., wherein the processing means of the item recommendation model are integrated in the server.
The server 20 may be configured to: acquire a training time sequence, wherein the training time sequence comprises the item vectors of a target user's historical interaction items arranged in order, the order of the item vectors being the order of the times at which the target user interacted with the historical interaction items; analyze the structural features of the training time sequence through an encoder layer of the item recommendation model to be trained, to obtain a condition vector corresponding to each item vector in the training time sequence, the condition vector containing structural information of the training time sequence; take the training time sequence as the input sequence of a decoder layer of the item recommendation model to be trained, also take the condition vector corresponding to each item vector as an input of the decoder layer, and predict a prediction time sequence corresponding to the training time sequence through the decoder layer; and optimize the item recommendation model to be trained based on the differences between the item vectors corresponding to the same time nodes in the training time sequence and the prediction time sequence, to obtain the item recommendation model for recommending items.
The terminal 10 may be configured to send an item recommendation request to the server, so as to trigger the server to obtain an item to be recommended to the user through the item recommendation model, and send the item to the terminal.
The server 20 is further configured to: acquire, based on the received item recommendation request, the item time sequence of the user to be recommended, wherein the item time sequence comprises the item vectors of the historical interaction items of the user to be recommended arranged in order, the order of the item vectors being the order of the interaction times of the user to be recommended on the historical interaction items; take the item time sequence as the input sequence of the item recommendation model and analyze it through the model to obtain a prediction time sequence corresponding to the item time sequence; and determine the item to be recommended to the user based on the prediction time sequence and transmit it to the terminal 10.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
The embodiments of the present invention will be described from the perspective of a processing device of an item recommendation model, which may be specifically integrated in a terminal or a server.
The method for processing the project recommendation model provided by the embodiment of the present invention may be executed by a processor of a terminal or a server, as shown in fig. 1b, and a specific flow of the method for processing the project recommendation model may be as follows:
101. Acquiring a training time sequence, wherein the training time sequence comprises item vectors of the historical interaction items of a target user arranged in order, the order of the item vectors being the order of the times at which the target user interacted with the historical interaction items;
In this embodiment, the item recommendation model may adopt session-based recommendation, which generally refers to predicting the items a user may be interested in (click, purchase, watch, etc.) at the next moment from the user's click or viewing history over a preceding period of time. In this embodiment of the present application, the target user may be any user, and the historical interaction items of the target user are items the target user has interacted with, where interaction includes but is not limited to: clicking, watching, listening, downloading, collecting, commenting, printing, or purchasing. The items in this embodiment may be content of any form, such as videos, audio, images, emoticons, messages pushed in an information stream, news, merchandise, or advertisements; the embodiment is not limited thereto.
The history interactive item may be an item interacted by the target user in a history time period, the length of the history time period, or a starting time point and an ending time point of the history time period may be set according to actual needs, for example, the history time period is 1 hour before the current time, and the like.
In this embodiment, the item vector is used to represent the corresponding historical interaction item, and the item vector may be a one-hot (one-hot) vector or a lower-dimensional vector obtained based on the one-hot vector.
In one embodiment, the server may store information for the project in the source database, including but not limited to one-hot encoding of the project, information of users who interacted with the project, and so on.
In one embodiment, the step of "obtaining a training time sequence" may include:
acquiring a one-hot coding time sequence of historical interactive items of a target user from a source database, wherein in the one-hot coding time sequence, the one-hot coding of each historical interactive item is arranged based on the interactive time of the target user for each historical interactive item;
acquiring an embedding matrix corresponding to the one-hot coding time sequence, wherein the embedding matrix comprises embedding vectors corresponding to the one-hot codes of all items;
and mapping each one-hot code in the one-hot code time sequence into a corresponding item vector based on the embedded matrix to obtain a training time sequence.
Through the embedding matrix, the one-hot codes in the one-hot coding time sequence can be mapped into a lower-dimensional embedding space while the associations between the items in the one-hot coding time sequence are preserved.
Referring to fig. 2, fig. 2 shows the structure of an optional item recommendation model to be trained of this embodiment, which may include an embedding layer, a random masking layer, an encoder layer, a decoder layer, and a softmax layer.
The embedding layer is the first layer of the item recommendation model of the application and mainly maps a high-dimensional one-hot code into a low-dimensional embedding space. The embedding layer mainly comprises an embedding matrix of the items; each row (or, in some embodiments, each column) of the embedding matrix is the embedding vector of one item, and multiplying the one-hot encoding of an item by the embedding matrix yields the lower-dimensional item vector corresponding to that item.
For the masked ("blank-filling") items, this patent may use an additional embedding vector representation, for example an all-zero embedding vector; the additional embedding vector used for each masked item may be the same or different.
For example, for the video pool of a certain player containing 1,000,000 videos, the one-hot encoding of each video is 1,000,000-dimensional. The videos the target user has watched, say 10 videos, are retrieved from the video pool, and the embedding layer maps the one-hot encodings of the 10 videos to lower-dimensional vectors of dimension n (e.g., n between 100 and 1000), i.e., to a 10 × n matrix.
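The following is a minimal sketch of such an embedding lookup, assuming TensorFlow 1.x (the experiments described later cite TensorFlow 1.9.0); the variable names, vocabulary size, and dimension n are illustrative only. In practice the one-hot multiplication is implemented as a row lookup, which is mathematically equivalent:

```python
import tensorflow as tf  # assumes TensorFlow 1.x, matching the tf 1.9.0 setup cited below

NUM_ITEMS = 1_000_000  # e.g. the 1,000,000-video pool above
EMB_DIM = 256          # n, illustrative; the text suggests roughly 100-1000
SEQ_LEN = 10           # the 10 watched videos

# Embedding matrix: one row per item, plus a reserved row (id 0, assumed)
# for the masked/blank item described above.
item_embeddings = tf.get_variable("item_embeddings", [NUM_ITEMS + 1, EMB_DIM])

# Multiplying a 1,000,000-dim one-hot vector by the matrix is equivalent to
# looking up the corresponding row, which is what is done in practice.
item_ids = tf.placeholder(tf.int32, [None, SEQ_LEN])               # batch of sessions
item_vectors = tf.nn.embedding_lookup(item_embeddings, item_ids)   # [batch, 10, n]
```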
The model in this embodiment may fully utilize context information, such as user features and the user's operation information on the items. The user features include but are not limited to user portrait (profile) information, social friend relationships, and the like; the user's operation information on the items includes but is not limited to the user's interaction time with an item, interaction location, interaction type (click, collection, play, purchase, etc.), and the like.
Taking user portrait information as the user feature as an example, a user-ID embedding matrix can first be initialized, containing the user-ID embedding vectors of all users in the source data. For each piece of session data (i.e., each training time sequence), the corresponding user-ID embedding vector is located by user ID and combined with the original item embedding matrix by concatenation (concat).
Optionally, in this embodiment, before mapping each one-hot code in the one-hot coding time sequence to a corresponding item vector based on the embedding matrix to obtain the training time sequence, the method may further include:
the user characteristics of the target user are obtained, the user characteristic embedding vectors corresponding to the target user are determined based on the user characteristics, and the user characteristic embedding vectors are fused into each embedding vector of the embedding matrix.
A user feature embedding matrix can be initialized firstly, wherein the embedding matrix contains user feature embedding vectors of all users corresponding to the source data; and acquiring a user characteristic embedding vector corresponding to the target user from the user characteristic embedding matrix based on the user characteristic of the target user.
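A minimal sketch of the concat fusion described above, again assuming TensorFlow 1.x; all names and sizes are illustrative. Operation-information embedding vectors can be fused into the corresponding item positions in the same way:

```python
import tensorflow as tf

NUM_USERS, USER_DIM, ITEM_DIM = 100_000, 32, 256  # illustrative sizes

item_vectors = tf.placeholder(tf.float32, [None, None, ITEM_DIM])  # [batch, T, n]
user_id = tf.placeholder(tf.int32, [None])                          # one user per session

user_embeddings = tf.get_variable("user_embeddings", [NUM_USERS, USER_DIM])
user_vec = tf.nn.embedding_lookup(user_embeddings, user_id)         # [batch, 32]

# Broadcast the user-feature embedding to every time node and concatenate it
# onto each item vector ("concat" fusion described above).
seq_len = tf.shape(item_vectors)[1]
user_tiled = tf.tile(tf.expand_dims(user_vec, 1), [1, seq_len, 1])
fused_vectors = tf.concat([item_vectors, user_tiled], axis=-1)      # [batch, T, n+32]
```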
Optionally, in this embodiment, before mapping each one-hot code in the one-hot coding time sequence to a corresponding item vector based on the embedding matrix to obtain the training time sequence, the method further includes:
the method comprises the steps of obtaining operation information of a target user on each historical interaction project, determining operation information embedding vectors corresponding to the historical interaction projects based on the operation information, and fusing the operation information embedding vectors of the historical interaction projects into the embedding vectors corresponding to the historical interaction projects of an embedding matrix.
A user operation information embedding matrix can first be initialized, containing the user operation information embedding vectors corresponding to the interaction items of all users in the source data. This embodiment may obtain the user operation information embedding vector corresponding to the target user from this matrix, based on the target user's operation information on the historical interaction items.
102. Analyzing the structural characteristics of the training time sequence through an encoder layer of a to-be-trained item recommendation model to obtain a condition vector corresponding to each item vector in the training time sequence, wherein the condition vector comprises the structural information of the training time sequence;
in this embodiment, before analyzing the structural features of the training time sequence through the encoder layer of the to-be-trained item recommendation model to obtain the condition vectors corresponding to the item vectors in the training time sequence, the method may further include:
selecting a part of time nodes from the training time sequence as replacement time nodes;
and in the training time sequence, replacing the item vector corresponding to the replacement time node with a preset filling vector to obtain a training time sequence after replacement.
Correspondingly, the step of analyzing the structural features of the training time sequence through the encoder layer of the to-be-trained item recommendation model to obtain the condition vector corresponding to each item vector in the training time sequence may include: and analyzing the structural characteristics of the training time sequence after replacement through an encoder layer of the recommendation model of the item to be trained to obtain a condition vector corresponding to each item vector in the training time sequence after replacement.
In this embodiment, a part of time nodes may be randomly selected from the training time sequence as the replacement time nodes. Optionally, the replacement time nodes in this embodiment may not be adjacent, or the replacement time nodes may not be the first time node and the last time node of the training time sequence.
Referring to fig. 2, the random masking layer performs the random masking operation specifically designed for the random-masking training mode in this patent. For an input sequence X = {x_1, x_2, x_3, …, x_{N-2}, x_{N-1}, x_N}, a certain proportion (e.g., 50%) of the time node data is randomly selected and masked. The "masking" operation replaces the original vectors in the input sequence with a uniform, characteristic additional vector, giving X^M = {x_1, x_{Δ2}, x_3, …, x_{Δ(N-2)}, x_{Δ(N-1)}, x_N}. The Greek letter Δ indicates that the sequence data at that time node has been randomly selected and masked.
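A minimal NumPy sketch of this random masking step; the helper name, the use of id 0 as the shared mask item, and the 50% ratio are illustrative assumptions, not from the patent:

```python
import numpy as np

MASK_RATIO = 0.5  # proportion of time nodes to mask, matching the example above

def random_mask(session_ids, mask_id=0, ratio=MASK_RATIO, rng=np.random):
    """Randomly mask a session: X -> X^M.

    mask_id is an assumed id reserved for the uniform 'masked item'.
    Returns the masked id sequence and a boolean array marking the
    replacement time nodes (the positions Delta that receive the loss)."""
    ids = np.asarray(session_ids).copy()
    n = len(ids)
    num_masked = max(1, int(n * ratio))
    # Optionally, candidates could exclude the first/last node or adjacent
    # nodes, per the constraints mentioned above; omitted here for brevity.
    delta = rng.choice(n, size=num_masked, replace=False)
    ids[delta] = mask_id
    is_masked = np.zeros(n, dtype=bool)
    is_masked[delta] = True
    return ids, is_masked

x_masked, positions = random_mask([17, 42, 5, 99, 23, 8])  # e.g. ~3 of 6 masked
```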
Optionally, in this embodiment, the encoder layer is a recurrent neural network comprising a plurality of connected cyclic coding units; the step of analyzing the structural features of the replaced training time sequence through the encoder layer of the item recommendation model to be trained, to obtain the condition vector corresponding to each item vector in the replaced training time sequence, may include:
inputting the item vectors in the replaced training time sequence into the corresponding cyclic coding units in the encoder layer based on their time nodes, wherein each cyclic coding unit is configured to: acquire the hidden layer state vector output by the previous cyclic coding unit, and obtain the hidden layer state vector output by the current cyclic coding unit based on the acquired hidden layer state vector and the input item vector;
and acquiring a hidden layer state vector output by the last cyclic coding unit of the encoder layer, and taking the hidden layer state vector as a condition vector corresponding to each item vector in the training time sequence, wherein the hidden layer state vector contains the structural information of the training time sequence after replacement.
Referring to fig. 2, the encoder layer (RNN-Encoder) of fig. 2 is implemented based on a recurrent neural network and is composed of typical RNN units or their variants. The input of this layer is the randomly masked sequence data X^M. The hidden state vector output by this layer at the last time node is denoted Enc(X^M); this is the Final Encoded State mentioned above and is also the condition vector corresponding to each item vector.
Optionally, in this embodiment, gated recurrent units (GRU) or long short-term memory units (LSTM) may be used to form the main network structure of the RNN-Encoder layer, so as to effectively alleviate the vanishing-gradient or exploding-gradient problems to which the original RNN model is prone during training.
As shown in fig. 2, the information of the replaced training time sequence at each time node is transmitted from front to back through the sequentially connected cyclic coding units, and the cyclic coding unit corresponding to the last time node can obtain a hidden state vector containing the complete structural information of the replaced training time sequence, based on the hidden state vector input by the previous cyclic coding unit and the item vector input to itself.
In this example, the condition vector of every item vector is the hidden-layer state vector output by the cyclic coding unit corresponding to the last time node. Optionally, the encoder layer in this example may be a unidirectional recurrent neural network or a bidirectional recurrent neural network.
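A minimal sketch of this unidirectional encoder, assuming TensorFlow 1.x GRU cells; tensor names and dimensions are illustrative. The final hidden state plays the role of Enc(X^M):

```python
import tensorflow as tf

EMB_DIM, HIDDEN = 256, 64  # illustrative; the experiments below use 64 units

# X^M after the embedding layer (batch, time, embedding dim)
masked_vectors = tf.placeholder(tf.float32, [None, None, EMB_DIM])

# GRU-based unidirectional encoder: the hidden state at the last time node,
# Enc(X^M), summarizes the whole masked sequence and serves as the condition
# vector shared by every item vector.
encoder_cell = tf.nn.rnn_cell.GRUCell(HIDDEN)
_, final_state = tf.nn.dynamic_rnn(encoder_cell, masked_vectors,
                                   dtype=tf.float32, scope="rnn_encoder")
condition_vector = final_state  # [batch, 64]
```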
In another example, referring to fig. 3, an encoder layer may employ a bi-directional cyclic neural network, the encoder layer including a plurality of cyclic coding units connected; the step of analyzing the structural characteristics of the training time sequence after replacement through an encoder layer of the recommended model of the item to be trained to obtain the condition vector corresponding to each item vector in the training time sequence after replacement may include:
inputting the item vectors in the replaced training time sequence into the corresponding cyclic coding units based on their time nodes, wherein each cyclic coding unit is configured to: acquire the backward hidden layer state vector output backward by the previous cyclic coding unit and the forward hidden layer state vector output forward by the next cyclic coding unit, obtain the backward hidden layer state vector output backward by the current cyclic coding unit based on the acquired backward hidden layer state vector and the input item vector, and obtain the forward hidden layer state vector output forward by the current cyclic coding unit based on the acquired forward hidden layer state vector and the input item vector;
and processing the obtained forward hidden layer state vector, backward hidden layer state vector and item vector based on each cyclic coding unit to obtain a condition vector corresponding to each item vector in the training time sequence after replacement, wherein the condition vector comprises the structure information of the training time sequence after replacement.
In the structure shown in fig. 3, transmission between the cyclic coding units is bidirectional: the information from the first time node to the last time node of the replaced training time sequence can be passed from front to back through the cyclic coding units, and the information from the last time node back to the first time node can be passed from back to front. That is, each cyclic coding unit can pass forward the information of the item vectors input to itself and to the units after it, and pass backward the information of the item vectors input to itself and to the units before it.
In this example, the condition vector of each item vector is the vector output by the cyclic coding unit corresponding to that item vector.
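A corresponding sketch of the bidirectional variant of fig. 3, under the same TensorFlow 1.x assumptions; here every time node gets its own condition vector:

```python
import tensorflow as tf

EMB_DIM, HIDDEN = 256, 64
masked_vectors = tf.placeholder(tf.float32, [None, None, EMB_DIM])  # X^M embedded

# Bidirectional encoder (fig. 3): at each time node, the forward state carries
# information from the nodes before it and the backward state carries
# information from the nodes after it, so every item vector gets its own
# per-node condition vector.
cell_fw = tf.nn.rnn_cell.GRUCell(HIDDEN)
cell_bw = tf.nn.rnn_cell.GRUCell(HIDDEN)
(out_fw, out_bw), _ = tf.nn.bidirectional_dynamic_rnn(
    cell_fw, cell_bw, masked_vectors, dtype=tf.float32, scope="bi_rnn_encoder")
condition_vectors = tf.concat([out_fw, out_bw], axis=-1)  # [batch, T, 128]
```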
103. Taking the training time sequence as an input sequence of a decoder layer of a recommended model of the item to be trained, taking a condition vector corresponding to each item vector as the input of the decoder layer, and predicting a prediction time sequence corresponding to the training time sequence through the decoder layer;
referring to fig. 2, optionally, the decoder layer of this embodiment is a cyclic neural network, and includes a plurality of cyclic decoding units connected, and the step "taking the training time sequence as an input sequence of the decoder layer of the recommendation model of the item to be trained, and also taking a condition vector corresponding to each item vector as an input of the decoder layer, and predicting a prediction time sequence corresponding to the training time sequence by the decoder layer" may include:
inputting the item vectors in the training time sequence into the corresponding cyclic decoding units based on their time nodes, and inputting the condition vector corresponding to each item vector into the cyclic decoding unit corresponding to that item vector, wherein each cyclic decoding unit is configured to: acquire the hidden layer state vector output by the previous cyclic decoding unit, and obtain the hidden layer state vector output by the current cyclic decoding unit based on the acquired hidden layer state vector, the item vector, and the condition vector;
and predicting a prediction time sequence corresponding to the training time sequence by each cyclic decoding unit based on the obtained hidden layer state vector, the item vector and the condition vector.
In this embodiment, the decoder layer may be an autoregressive decoder layer based on a recurrent neural network; like the encoder layer, it may also be composed of typical RNN units and their variants. Unlike the encoder layer, the data input to the decoder is the unmasked training time sequence X, rather than the randomly masked sequence X^M. In the structure shown in fig. 2, the final state vector Enc(X^M) output by the encoder layer is also fed, as the condition vector, into the cyclic decoding unit of every time node. For each time node, the decoder layer outputs a hidden layer state vector (hidden vector, i.e., the h vector in fig. 2).
Optionally, in this embodiment, gated recurrent units or long short-term memory units may be used to form the main network structure of the decoder layer, so as to effectively alleviate the vanishing-gradient or exploding-gradient problems to which the original RNN model is prone during training.
In this embodiment, for the item vector of each input time node, the decoder layer may predict the item vector of the next time node, thereby obtaining the prediction time sequence. For example, referring to fig. 2, for the item vector Xn-1 at time node n-1 in the training time sequence, the cyclic decoding unit outputs the prediction result for time node n (i.e., the h vector in fig. 2).
In this embodiment, the representation of the item vector at each time node in the prediction time sequence output by the decoder layer may differ from the representation of the item vectors in the training time sequence input to the decoder layer.
Each cyclic decoding unit of the decoder layer outputs a hidden layer state vector obtained from the hidden layer state vector output by the previous cyclic decoding unit, the input item vector, and the condition vector. The hidden layer state vector h can be regarded as the item vector output by that cyclic decoding unit, and the hidden layer state vectors h output by all the cyclic decoding units form the prediction time sequence output by the decoder layer.
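A minimal sketch of the conditioned decoder, assuming TensorFlow 1.x; concatenating the condition vector onto every input is one simple way to feed it into the cyclic decoding unit of each time node, and all names are illustrative:

```python
import tensorflow as tf

EMB_DIM, HIDDEN = 256, 64
item_vectors = tf.placeholder(tf.float32, [None, None, EMB_DIM])  # unmasked X, embedded
condition = tf.placeholder(tf.float32, [None, HIDDEN])            # Enc(X^M)

# Feed the same condition vector into the cyclic decoding unit of every time
# node by tiling it along time and concatenating it onto each item vector.
seq_len = tf.shape(item_vectors)[1]
cond_tiled = tf.tile(tf.expand_dims(condition, 1), [1, seq_len, 1])
decoder_inputs = tf.concat([item_vectors, cond_tiled], axis=-1)

decoder_cell = tf.nn.rnn_cell.GRUCell(HIDDEN)
hidden_vectors, _ = tf.nn.dynamic_rnn(decoder_cell, decoder_inputs,
                                      dtype=tf.float32, scope="rnn_decoder")
# hidden_vectors[:, t, :] is the h vector output at time node t (fig. 2).
```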
In other embodiments, for the model structure shown in fig. 3, the condition vectors output by each cyclic encoding unit are input into the cyclic decoding units of the same time node.
104. Optimizing the item recommendation model to be trained based on the differences between the item vectors corresponding to the same time nodes in the training time sequence and the prediction time sequence, to obtain the item recommendation model for recommending items.
In this embodiment, the step "optimizing a to-be-trained item recommendation model based on a difference between the item vectors corresponding to the same time node in the training time sequence and the prediction time sequence to obtain an item recommendation model for item recommendation", may include:
and calculating the cross entropy of the item vector corresponding to the replacement time node in the training time sequence and the prediction time sequence, and optimizing the item recommendation model to be trained by taking the minimum cross entropy as a target to obtain the item recommendation model for recommending items.
In this embodiment, in order to calculate the cross entropy, the item vectors in the prediction time sequence may first be converted into corresponding probability distributions through the softmax layer, and the cross entropy between the training time sequence and the prediction time sequence is then calculated, so as to optimize the item recommendation model to be trained with the goal of minimizing this cross entropy.
Wherein, calculating the cross entropy of the item vectors corresponding to the replacement time nodes in the training time series and the prediction time series may include:
converting each item vector of the prediction time sequence output by the decoder layer into corresponding probability distribution through a Softmax layer;
calculating cross entropy between the item vector of the replacement time node in the training time sequence and the probability distribution of the replacement time node in the prediction time sequence.
In this example, the step of "converting each item vector of the prediction time series output by the decoder layer into a corresponding probability distribution by the Softmax layer" may include: and carrying out full-connection conversion on each item vector of the prediction time sequence output by the decoder layer through a Softmax layer, and normalizing the converted item vectors into corresponding probability distribution through a Softmax function.
For example, referring to fig. 2, the replacement time nodes are 2, 5 and n-2, the item vectors of the replacement time nodes in the training time sequence are X2, X5 and Xn-2, the corresponding item vectors in the prediction time sequence are hidden layer state vectors h2, h5 and hn-2, respectively, and the Softmax layer can convert all the hidden layer state vectors output by the decoder layer into corresponding probability distributions, wherein the hidden layer state vectors h2, h5 and hn-2 are converted into corresponding probability distributions (vectors) p2, p5 and pn-2, respectively, and this embodiment may determine the cross entropy based on X2, X5 and Xn-2 in the training time sequence and p2, p5 and pn-2 in the prediction time sequence, so as to optimize the to-be-trained item recommendation model for the purpose of minimizing the cross entropy.
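A minimal sketch of this masked cross-entropy objective, assuming TensorFlow 1.x; names are illustrative. For the next-time-node variant described below, the same code applies with the mask and targets shifted by one position:

```python
import tensorflow as tf

NUM_ITEMS, HIDDEN = 1_000_000, 64
hidden_vectors = tf.placeholder(tf.float32, [None, None, HIDDEN])  # decoder h vectors
target_ids = tf.placeholder(tf.int32, [None, None])                # original item ids
is_masked = tf.placeholder(tf.bool, [None, None])                  # replacement nodes

# Fully connected projection to item logits; softmax and cross entropy are
# fused in the loss op. Only the masked (replaced) time nodes contribute.
logits = tf.layers.dense(hidden_vectors, NUM_ITEMS + 1, name="softmax_proj")
losses = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=target_ids, logits=logits)                              # [batch, T]
loss = tf.reduce_mean(tf.boolean_mask(losses, is_masked))
train_op = tf.train.AdamOptimizer(0.001).minimize(loss)            # lr as cited below
```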
In another embodiment, the step "optimizing the item recommendation model to be trained based on the difference between the item vectors corresponding to the same time node in the training time series and the prediction time series to obtain the item recommendation model for item recommendation", may include:
determining the item vectors corresponding to the time node next to the replacement time node in the training time sequence and in the prediction time sequence, calculating the cross entropy between the item vector determined from the training time sequence and the item vector determined from the prediction time sequence, and optimizing the item recommendation model to be trained with the aim of minimizing the cross entropy, to obtain the item recommendation model for item recommendation.
Determining the item vector corresponding to the time node next to the replacement time node in the training time sequence and in the prediction time sequence, and calculating the cross entropy between the item vector determined from the training time sequence and the one determined from the prediction time sequence, may include:
converting each item vector of the prediction time sequence output by the decoder layer into corresponding probability distribution through a Softmax layer;
determining the item vector corresponding to the time node next to the replacement time node in the training time sequence and the probability distribution corresponding to that next time node in the prediction time sequence, and calculating the cross entropy between the determined item vector and the corresponding probability distribution.
In this example, the step of "converting each item vector of the prediction time series output by the decoder layer into a corresponding probability distribution by the Softmax layer" may include: and carrying out full-connection conversion on each item vector of the prediction time sequence output by the decoder layer through a Softmax layer, and normalizing the converted item vectors into corresponding probability distribution through a Softmax function.
For example, referring to fig. 2, if the replacement time nodes are 2, 5, and n-2, the item vectors of the time nodes following the replacement time nodes in the training time sequence are X3, X6, and Xn-1, and the corresponding item vectors in the prediction time sequence are the hidden layer state vectors h3, h6, and hn-1, respectively. The Softmax layer can convert all the hidden layer state vectors output by the decoder layer into corresponding probability distributions, with h3, h6, and hn-1 converted into the probability distributions (vectors) p3, p6, and pn-1, respectively. This embodiment can then determine the cross entropy based on X3, X6, and Xn-1 in the training time sequence and p3, p6, and pn-1 in the prediction time sequence, so as to optimize the item recommendation model to be trained with the goal of minimizing the cross entropy.
In one embodiment, the Softmax layer may process only the item vectors in the prediction time series that need to be cross-entropy calculated, and not all the item vectors.
The step of optimizing the item recommendation model to be trained based on the differences between the item vectors corresponding to the same time nodes in the training time sequence and the prediction time sequence, to obtain the item recommendation model for recommending items, may include:
converting the item vectors positioned at the replacement time nodes in the prediction time sequence output by the decoder layer into corresponding probability distribution through a Softmax layer;
calculating cross entropy between the item vector of the replacement time node in the training time sequence and the probability distribution of the replacement time node in the prediction time sequence;
and optimizing the item recommendation model to be trained with the aim of minimizing the cross entropy to obtain an item recommendation model for recommending items.
For example, referring to fig. 2, if the replacement time nodes are 2, 5, and n-2, the item vectors of the replacement time nodes in the training time sequence are X2, X5, and Xn-2, and the corresponding item vectors in the prediction time sequence are the hidden layer state vectors h2, h5, and hn-2, respectively; the Softmax layer may convert only the hidden layer state vectors corresponding to the replacement time nodes into corresponding probability distributions, with h2, h5, and hn-2 converted into the probability distributions (vectors) p2, p5, and pn-2, respectively.
The step of optimizing the item recommendation model to be trained based on the differences between the item vectors corresponding to the same time nodes in the training time sequence and the prediction time sequence, to obtain the item recommendation model for recommending items, may include:
converting, through the Softmax layer, the item vectors at the time nodes next to the replacement time nodes in the prediction time sequence output by the decoder layer into corresponding probability distributions;
calculating a cross entropy between a term vector of a time node next to the replacement time node in the training time series and a probability distribution of a time node next to the replacement time node in the prediction time series;
and optimizing the item recommendation model to be trained with the aim of minimizing the cross entropy to obtain an item recommendation model for recommending items.
For example, referring to fig. 2, if the replacement time nodes are 2, 5, and n-2, the item vectors of the next time nodes in the training time sequence are X3, X6, and Xn-1, and the corresponding item vectors in the prediction time sequence are the hidden state vectors h3, h6, and hn-1, respectively; the Softmax layer may convert only the hidden state vectors h3, h6, and hn-1 into the corresponding probability distributions (vectors) p3, p6, and pn-1.
In the above two examples, the Softmax layer may convert only the item vectors of the replacement time nodes, or of the time nodes next to them, in the prediction time sequence into corresponding probability distributions, further reducing the amount of computation.
Referring to fig. 2, the Softmax layer is the last layer of the recommendation model of this embodiment. The hidden state vectors output by the decoder layer are input into this layer and transformed, through a full connection, into vectors whose length equals the number of items involved in the training time sequence. The transformed vectors are then normalized by the softmax function into probability distributions (the softmax values below) over the items in the training time sequence predicted at each time node. Unlike NextItNet, which needs to calculate the softmax values of all generation time nodes, this embodiment may calculate the softmax values of only a part of the generation time nodes, for example only the masked time nodes (i.e., the blanks to be filled in; the gray output nodes in fig. 2). The generated softmax values and the labels of the training time sequence are used to calculate a cross-entropy loss function, and the recommendation model in this embodiment is optimized with the goal of minimizing this cross-entropy loss function.
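A minimal sketch of this computation saving, under the same TensorFlow 1.x assumptions: the masked hidden vectors are gathered before the expensive fully connected projection, so softmax values are produced only where the loss is taken:

```python
import tensorflow as tf

NUM_ITEMS, HIDDEN = 1_000_000, 64
hidden_vectors = tf.placeholder(tf.float32, [None, None, HIDDEN])
target_ids = tf.placeholder(tf.int32, [None, None])
is_masked = tf.placeholder(tf.bool, [None, None])

# Gather the h vectors of the masked time nodes *before* the expensive full
# connection, so softmax values are computed only where the loss is taken.
masked_h = tf.boolean_mask(hidden_vectors, is_masked)        # [num_masked, 64]
masked_targets = tf.boolean_mask(target_ids, is_masked)      # [num_masked]
logits = tf.layers.dense(masked_h, NUM_ITEMS + 1, name="softmax_proj")
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=masked_targets, logits=logits))
```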
It is understood that, in this embodiment, the label of each item vector in the training time sequence may be each item vector itself.
After the trained item recommendation model is obtained, item recommendation can be performed based on the model. Optionally, in this embodiment, after the step of optimizing the item recommendation model to be trained based on the differences between the item vectors corresponding to the same time nodes in the training time sequence and the prediction time sequence, to obtain the item recommendation model for item recommendation, the method may further include:
acquiring the item time sequence of a user to be recommended, wherein the item time sequence comprises the item vectors of the historical interaction items of the user to be recommended arranged in order, the order of the item vectors being the order of the interaction times of the user to be recommended on the historical interaction items;
taking the item time sequence as the input sequence of the item recommendation model, and analyzing the item time sequence through the item recommendation model to obtain a prediction time sequence corresponding to the item time sequence;
and determining the item to be recommended to the user to be recommended based on the prediction time sequence.
The item corresponding to the last time node in the prediction time sequence can be taken as the item to be recommended to the user. In actual application, in order to use the information of the item time sequence to the greatest extent, the masking ratio of the random masking layer may optionally be set to 0.
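A minimal sketch of this serving-time readout, under the same TensorFlow 1.x assumptions; how the logits are produced is as in the training sketches above:

```python
import tensorflow as tf

NUM_ITEMS = 1_000_000
# Per-node logits produced by the trained model for an unmasked session
# (masking ratio 0 at serving time), shape [batch, T, NUM_ITEMS + 1].
logits = tf.placeholder(tf.float32, [None, None, NUM_ITEMS + 1])

# The distribution at the last time node scores every candidate as the next
# item; its top entries are the items to recommend.
last_node_logits = logits[:, -1, :]
top_values, top_item_ids = tf.nn.top_k(last_node_logits, k=10)
```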
The applicable scenarios of the embodiment of the invention include session-based recommendation systems. Items that users may be interested in (i.e., view, click, purchase, etc.) in the future may be predicted based on the viewing, clicking and purchasing behavior recorded in the recommendation system. Taking a short video APP as an example, if a user A effectively watches 300 videos in one morning (for example, the playing rate of each video is more than 80%), the videos that the user may be interested in in the future can be predicted from those 300 videos, thereby achieving a personalized recommendation effect.
The embodiment of the application is inspired by the idea of cloze (fill-in-the-blank) tasks in language testing and provides a brand-new recommendation model training scheme, in which both the past and the future interaction behavior of users in session data can be utilized, together with a data augmentation strategy for these data. The network is formed by an RNN encoder and an RNN decoder consisting of basic RNN units (or variants thereof, such as LSTM or GRU). Under the same experimental settings, the performance of the algorithm of the present invention is found to be very stable and significantly better than the latest RNN-based recommendation system models.
All data sets and related networks listed below use the following setting: an LSTM model is used, and the dimensions of the cells and of the embedding matrix are 64. During training, an Adam optimizer with a learning rate of 0.001 is used to train all models involved. The hardware environment for the data experiments was a GPU Tesla P40, with TensorFlow version 1.9.0.
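Under these reported settings, the encoder-decoder structure might be re-sketched along the following lines. This is a hedged PyTorch approximation of the described architecture, not the TensorFlow 1.9.0 code used in the experiments; the module and variable names are assumptions. It also illustrates how the encoder's last hidden state serves as the condition vector:

```python
import torch
import torch.nn as nn

class EncoderDecoderRecommender(nn.Module):
    def __init__(self, num_items, dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_items, dim)                 # embedding matrix, dim 64
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim * 2, dim, batch_first=True)    # item vector + condition vector
        self.out = nn.Linear(dim, num_items)                      # full connection before softmax

    def forward(self, replaced_seq, input_seq):
        # condition vector: final hidden state of the encoder run over the
        # replaced training time sequence
        _, (h_n, _) = self.encoder(self.embed(replaced_seq))
        cond = h_n[-1].unsqueeze(1).expand(-1, input_seq.size(1), -1)
        dec_in = torch.cat([self.embed(input_seq), cond], dim=-1)
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out)                                  # (batch, seq_len, num_items)

model = EncoderDecoderRecommender(num_items=136737)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)        # Adam, lr 0.001 as reported
```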
1. Application 1: the performance (on the dataset with sequence length 10) is as follows:
Model | MRR@All
---|---
RNN-Decoder | 0.19888
RNN-Encoder | 0.10353
RNN-Decoder-Encoder | 0.2153
In this data set, the length of each piece of data is 10, the total number of items involved is 136737, the number of training samples is 2861673, and the number of test samples is 476946. The batch size is 512.
2. Application 2: the performance (on the dataset with sequence length 20) is as follows:
Model | MRR@All
---|---
RNN-Decoder | 0.21023
RNN-Encoder | 0.15034
RNN-Decoder-Encoder | 0.23112
In this data set, the length of each piece of data is 20, the total number of items involved is 136737, the number of training samples is 1449532, and the number of test samples is 241589. The batch size is 256.
3. Application 3 (on the dataset with sequence length 100):

In this data set, the length of each piece of data is 100, the total number of items involved is 136737, the number of training samples is 322387, and the number of test samples is 53731. The batch size is 64.
4. Application 4: the performance (on the dataset with sequence length 30) is as follows:
Model | MRR@All
---|---
RNN-Decoder | 0.08365
RNN-Encoder | 0.0699
RNN-Decoder-Encoder | 0.09082
In this data set, the length of each piece of data is 30, the total number of items involved is 65997, the number of training samples is 786431, and the number of test samples is 131072. The batch size is 512.
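For reference, MRR@All ranks every candidate item by its score and averages the reciprocal rank of the ground-truth item; a minimal sketch (names and shapes are assumptions):

```python
import torch

def mrr_at_all(logits, targets):
    # logits: (n, num_items) scores at the evaluated time nodes
    # targets: (n,) ground-truth item ids
    true_scores = logits.gather(1, targets.unsqueeze(1))  # (n, 1)
    ranks = (logits > true_scores).sum(dim=1) + 1         # 1-based rank of the true item
    return (1.0 / ranks.float()).mean().item()
```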
In this embodiment, when the item recommendation model is trained, the recommendation result at a given time node in the training time sequence can be optimized based on both the item vectors before that time node and the item vectors after it, so that the trained item recommendation model can effectively exploit all known data in the training time sequence. This improves the model's utilization of the input sequence and benefits its prediction accuracy. The optimization method based on the random covering scheme minimizes the cross entropy between the predicted data and the corresponding real data of only a part of the time nodes, so that the model is trained fully and effectively while the problem of information leakage is avoided.
In addition, an embodiment of the present invention further provides a processing apparatus of an item recommendation model, and with reference to fig. 4, the processing apparatus of the item recommendation model includes:
an obtaining module 401, configured to obtain a training time sequence, where the training time sequence includes item vectors of the historical interactive items of a target user arranged in sequence, and the arrangement sequence of the item vectors is the sequence of the interaction time of the target user on the historical interactive items;
an analysis module 402, configured to analyze structural features of the training time sequence through an encoder layer of the to-be-trained item recommendation model to obtain a condition vector corresponding to each item vector in the training time sequence, where the condition vector includes structural information of the training time sequence;
a prediction module 403, configured to take the training time sequence as an input sequence of a decoder layer of the to-be-trained item recommendation model, further take the condition vector corresponding to each item vector as an input of the decoder layer, and predict, by the decoder layer, a prediction time sequence corresponding to the training time sequence;
and an optimizing module 404, configured to optimize the to-be-trained item recommendation model based on the difference between the item vectors corresponding to the same time node in the training time sequence and the prediction time sequence, so as to obtain an item recommendation model for item recommendation.
Optionally, the processing device of the item recommendation model in this embodiment further includes: a replacing module, configured to select a part of time nodes from the training time sequence as replacement time nodes before the analyzing module 402 analyzes the structural features of the training time sequence through the encoder layer of the to-be-trained item recommendation model to obtain condition vectors corresponding to the item vectors in the training time sequence; and in the training time sequence, replacing the item vector corresponding to the replacement time node with a preset filling vector to obtain a training time sequence after replacement.
Correspondingly, the analysis module 402 is specifically configured to analyze the structural features of the training time sequence after replacement through an encoder layer of the to-be-trained item recommendation model, so as to obtain a condition vector corresponding to each item vector in the training time sequence after replacement.
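The replacement step performed by the replacing module might look as follows. This is an illustrative sketch in which `fill_id` stands for the item id reserved for the preset filling vector, and the 0.3 ratio is an assumption, since the patent does not fix the coverage ratio:

```python
import torch

def randomly_replace(item_seq, fill_id, ratio=0.3):
    # item_seq: (batch, seq_len) item ids of the training time sequence
    mask = torch.rand(item_seq.shape) < ratio   # True at the replacement time nodes
    replaced = item_seq.clone()
    replaced[mask] = fill_id                    # overwrite with the preset filling id
    return replaced, mask
```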
Optionally, the optimization module 404 includes a first optimization submodule or a second optimization submodule:
a first optimization submodule, configured to calculate the cross entropy between the item vectors corresponding to the replacement time node in the training time sequence and in the prediction time sequence, and optimize the to-be-trained item recommendation model with the goal of minimizing the cross entropy, to obtain the item recommendation model for item recommendation;

and a second optimization submodule, configured to determine the item vectors corresponding to the time node next to the replacement time node in the training time sequence and in the prediction time sequence, calculate the cross entropy between the item vector determined in the training time sequence and the item vector determined in the prediction time sequence, and optimize the to-be-trained item recommendation model with the goal of minimizing the cross entropy, to obtain the item recommendation model for item recommendation.
In one embodiment, the encoder layer is a recurrent neural network comprising a plurality of connected cyclic coding units. The analysis module 402 is configured to input the item vectors in the replaced training time sequence into the corresponding cyclic coding units in the encoder layer based on their corresponding time nodes, where each cyclic coding unit is configured to acquire the hidden layer state vector output by the previous cyclic coding unit, and to obtain the hidden layer state vector output by the current cyclic coding unit based on the acquired hidden layer state vector and the input item vector. The analysis module 402 then acquires the hidden layer state vector output by the last cyclic coding unit of the encoder layer and uses it as the condition vector corresponding to each item vector in the replaced training time sequence; this hidden layer state vector contains the structure information of the replaced training time sequence.
In another embodiment, the encoder layer is a bidirectional recurrent neural network comprising a plurality of connected cyclic coding units. The analysis module 402 is configured to input the item vectors in the replaced training time sequence into the corresponding cyclic coding units based on their corresponding time nodes, where each cyclic coding unit is configured to: acquire the backward hidden layer state vector output backward by the previous cyclic coding unit and the forward hidden layer state vector output forward by the next cyclic coding unit, obtain the backward hidden layer state vector output backward by the current cyclic coding unit based on the acquired backward hidden layer state vector and the input item vector, and obtain the forward hidden layer state vector output forward by the current cyclic coding unit based on the acquired forward hidden layer state vector and the input item vector. The forward hidden layer state vector, backward hidden layer state vector and item vector obtained at each cyclic coding unit are then processed to obtain the condition vector corresponding to each item vector in the replaced training time sequence, where the condition vector contains the structure information of the replaced training time sequence.
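A hedged sketch of this bidirectional embodiment follows. The patent leaves the per-unit processing of the forward state, backward state and item vector unspecified; a linear fusion layer is assumed here:

```python
import torch
import torch.nn as nn

bi_encoder = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
fuse = nn.Linear(64 * 2 + 64, 64)   # forward state + backward state + item vector

def per_node_condition(item_vecs):
    # item_vecs: (batch, seq_len, 64) embedded replaced training time sequence;
    # each output position of the bidirectional LSTM already concatenates that
    # unit's forward and backward hidden state vectors
    states, _ = bi_encoder(item_vecs)                     # (batch, seq_len, 128)
    return fuse(torch.cat([states, item_vecs], dim=-1))   # one condition vector per item vector
```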
Optionally, the obtaining module 401 is configured to obtain, from a source database, a one-hot coding time sequence of the historical interactive items of the target user, where in the one-hot coding time sequence the one-hot codes of the historical interactive items are arranged based on the interaction times of the target user with the historical interactive items; to obtain an embedding matrix corresponding to the one-hot coding time sequence, where the embedding matrix includes the embedding vectors corresponding to the one-hot codes of all items; and to map each one-hot code in the one-hot coding time sequence into a corresponding item vector based on the embedding matrix, so as to obtain the training time sequence.
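The mapping from one-hot codes to item vectors is a matrix lookup; a minimal sketch (the 136737-item vocabulary is taken from the experiments above, the rest is assumed):

```python
import torch
import torch.nn as nn

num_items = 136737
embedding = nn.Embedding(num_items, 64)   # the embedding matrix: one row per item

def one_hot_to_item_vectors(one_hot_seq):
    # one_hot_seq: (seq_len, num_items); multiplying a one-hot row by the
    # embedding matrix is exactly a row lookup
    return one_hot_seq.float() @ embedding.weight   # (seq_len, 64) training time sequence
```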
Optionally, in an embodiment, the optimization module 404 further includes a third optimization sub-module or a fourth optimization sub-module:
a third optimization sub-module, configured to convert, in the prediction time sequence output by the decoder layer, an item vector located at a replacement time node into a corresponding probability distribution through a Softmax layer; calculating cross entropy between the item vector of the replacement time node in the training time sequence and the probability distribution of the replacement time node in the prediction time sequence; and optimizing the item recommendation model to be trained with the aim of minimizing the cross entropy to obtain an item recommendation model for recommending items.
A fourth optimization submodule, configured to convert, in the prediction time sequence output by the decoder layer, an item vector of a time node next to the replacement time node into a corresponding probability distribution through a Softmax layer; calculating a cross entropy between a term vector of a time node next to the replacement time node in the training time series and a probability distribution of a time node next to the replacement time node in the prediction time series; and optimizing the item recommendation model to be trained with the aim of minimizing the cross entropy to obtain an item recommendation model for recommending items.
Optionally, the apparatus of this embodiment further includes a fusion module, configured to, before the obtaining module 401 maps each one-hot code in the one-hot coding time sequence to a corresponding item vector based on the embedding matrix, obtain a user characteristic of the target user, determine a user characteristic embedding vector corresponding to the target user based on the user characteristic, and fuse the user characteristic embedding vector into each embedding vector of the embedding matrix; and/or obtain operation information of the target user on each historical interactive item, determine an operation information embedding vector corresponding to each historical interactive item based on the operation information, and fuse the operation information embedding vector of each historical interactive item into the embedding vector corresponding to that historical interactive item in the embedding matrix.
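A hedged sketch of the fusion step; element-wise addition is only one plausible fusion operation, as the patent does not prescribe it:

```python
import torch

def fuse_into_embeddings(embedding_matrix, user_vec):
    # embedding_matrix: (num_items, 64); user_vec: (64,) user-feature embedding.
    # The addition broadcasts the user-feature embedding over all item rows.
    return embedding_matrix + user_vec.unsqueeze(0)
```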
In this embodiment, the decoder layer is a recurrent neural network comprising a plurality of connected cyclic decoding units, and the prediction module 403 is configured to input the item vectors in the training time sequence into the corresponding cyclic decoding units based on their corresponding time nodes, and to input the condition vector corresponding to each item vector into the cyclic decoding unit corresponding to that item vector; where each cyclic decoding unit is configured to acquire the hidden layer state vector output by the previous cyclic decoding unit, and to obtain the hidden layer state vector output by the current cyclic decoding unit based on the acquired hidden layer state vector, the item vector and the condition vector. The cyclic decoding units then predict the prediction time sequence corresponding to the training time sequence based on the obtained hidden layer state vectors, item vectors and condition vectors.
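At the level of individual cyclic decoding units, the recurrence described above might be sketched as follows (a minimal per-step LSTM-cell sketch; dimensions and names are assumptions):

```python
import torch
import torch.nn as nn

cell = nn.LSTMCell(64 * 2, 64)   # input: item vector concatenated with condition vector

def decode(item_vecs, cond_vecs):
    # item_vecs, cond_vecs: (batch, seq_len, 64)
    h = torch.zeros(item_vecs.size(0), 64)    # hidden state passed between units
    c = torch.zeros_like(h)
    hidden_states = []
    for t in range(item_vecs.size(1)):        # one cyclic decoding unit per time node
        step = torch.cat([item_vecs[:, t], cond_vecs[:, t]], dim=-1)
        h, c = cell(step, (h, c))             # previous hidden state feeds the current unit
        hidden_states.append(h)
    return torch.stack(hidden_states, dim=1)  # hidden state vectors fed to the Softmax layer
```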
In this embodiment, the apparatus further includes:
a user sequence acquisition module, configured to acquire an item time sequence of a user to be recommended, where the item time sequence includes item vectors of the historical interactive items of the user to be recommended arranged in sequence, and the arrangement sequence of the item vectors is the sequence of the interaction time of the user to be recommended on the historical interactive items;

an actual prediction module, configured to take the item time sequence as an input sequence of the item recommendation model and obtain a prediction time sequence corresponding to the item time sequence through the analysis of the item time sequence by the item recommendation model;

and a determining module, configured to determine the item to be recommended to the user to be recommended based on the prediction time sequence.
When the apparatus of this embodiment trains the item recommendation model, the recommendation result at a given time node in the training time sequence can be optimized based on both the item vectors before and after that time node, so that the trained item recommendation model can effectively exploit all known data in the training time sequence, improving the model's utilization of the input sequence and benefiting its prediction accuracy. The apparatus, using the optimization method based on the random covering scheme, minimizes the cross entropy between the predicted data and the corresponding real data of only a part of the time nodes, so that the model is trained fully and effectively while the problem of information leakage is avoided.
In addition, an embodiment of the present invention further provides a computer device, where the computer device may be a terminal or a server, as shown in fig. 5, which shows a schematic structural diagram of the computer device according to the embodiment of the present invention, and specifically:
the computer device may include components such as a processor 501 of one or more processing cores, memory 502 of one or more computer-readable storage media, a power supply 503, and an input unit 504. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 5 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. Wherein:
the processor 501 is a control center of the computer device, connects various parts of the entire computer device by using various interfaces and lines, and performs various functions of the computer device and processes data by running or executing software programs and/or modules stored in the memory 502 and calling data stored in the memory 502, thereby monitoring the computer device as a whole. Optionally, processor 501 may include one or more processing cores; preferably, the processor 501 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 501.
The memory 502 may be used to store software programs and modules, and the processor 501 executes various functional applications and data processing by running the software programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like, and the data storage area may store data created according to the use of the computer device, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 501 with access to the memory 502.
The computer device further comprises a power supply 503 for supplying power to the various components, and preferably, the power supply 503 may be logically connected to the processor 501 through a power management system, so that functions of managing charging, discharging, power consumption, and the like are realized through the power management system. The power supply 503 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The computer device may also include an input unit 504, and the input unit 504 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 501 in the computer device loads the executable file corresponding to the process of one or more application programs into the memory 502 according to the following instructions, and the processor 501 runs the application programs stored in the memory 502, so as to implement various functions as follows:
acquiring a training time sequence, wherein the training time sequence comprises item vectors of historical interactive items of a target user which are arranged in sequence, and the arrangement sequence of the item vectors is the sequence of the interactive time of the target user on the historical interactive items;
analyzing the structural characteristics of the training time sequence through an encoder layer of a to-be-trained item recommendation model to obtain a condition vector corresponding to each item vector in the training time sequence, wherein the condition vector comprises structural information of the training time sequence;
taking the training time sequence as an input sequence of a decoder layer of the recommended model of the item to be trained, taking a condition vector corresponding to each item vector as an input of the decoder layer, and predicting a prediction time sequence corresponding to the training time sequence through the decoder layer;
and optimizing the to-be-trained item recommendation model based on the difference of the item vectors corresponding to the same time node in the training time sequence and the prediction time sequence to obtain an item recommendation model for recommending items.
The system related to the embodiment of the invention may be a distributed system formed by connecting a client and a plurality of nodes (computer devices in any form in an access network, such as servers and terminals) through network communication.
Taking a blockchain system as an example of a distributed system, referring to fig. 6, fig. 6 is an optional structural schematic diagram of the distributed system 100 applied to the blockchain system provided in the embodiment of the present invention, formed by a plurality of nodes 200 (computer devices in any form in an access network, such as servers and user terminals) and a client 300, where a Peer-To-Peer (P2P) network is formed between the nodes, and the P2P protocol is an application layer protocol running on top of the Transmission Control Protocol (TCP). In a distributed system, any machine, such as a server or a terminal, can join to become a node; a node comprises a hardware layer, a middle layer, an operating system layer and an application layer. In this embodiment, information such as the to-be-trained item recommendation model, the trained item recommendation model and the source database may be stored in the shared ledger of the blockchain system through the nodes, and a computer device (e.g., a terminal or a server) may acquire the training time sequence, the item time sequence and the like of the target user based on the record data stored in the shared ledger.
Referring to the functions of each node in the blockchain system shown in fig. 6, the functions involved include:
1) routing, a basic function that a node has, is used to support communication between nodes.
Besides the routing function, the node may also have the following functions:
2) the application is used for being deployed in a block chain, realizing specific services according to actual service requirements, recording data related to the realization functions to form recording data, carrying a digital signature in the recording data to represent a source of task data, and sending the recording data to other nodes in the block chain system, so that the other nodes add the recording data to a temporary block when the source and integrity of the recording data are verified successfully.
For example, the services implemented by the application include:
2.1) a wallet, for providing the function of conducting transactions of electronic money, including initiating a transaction (i.e., sending the transaction record of the current transaction to other nodes in the blockchain system, so that, after the other nodes verify it successfully, the record data of the transaction is stored in the temporary blocks of the blockchain as a response confirming that the transaction is valid); of course, the wallet also supports querying the electronic money remaining at an electronic money address;
2.2) a shared ledger, for providing functions of operations such as storage, query and modification of account data, where the record data of the operations on the account data are sent to other nodes in the blockchain system, and after the other nodes verify their validity, the record data are stored in a temporary block as a response acknowledging that the account data are valid, and a confirmation may be sent to the node initiating the operations;
2.3) smart contracts: computerized agreements that can enforce the terms of a contract, implemented by code deployed on the shared ledger and executed when certain conditions are met, for completing automated transactions according to actual business requirements, such as querying the logistics status of goods purchased by a buyer, or transferring the buyer's electronic money to the merchant's address after the buyer signs for the goods; of course, smart contracts are not limited to executing trading contracts, but may also execute contracts that process received information.
3) The blockchain, comprising a series of blocks that are connected to one another in the chronological order of their generation; new blocks cannot be removed once added to the blockchain, and the blocks record the record data submitted by the nodes in the blockchain system.
Referring to fig. 7, fig. 7 is an optional schematic diagram of a Block Structure according to an embodiment of the present invention, where each block includes the hash value of the transaction records stored in the block (the hash value of the block) and the hash value of the previous block, and the blocks are connected by these hash values to form a blockchain. A block may also include information such as a timestamp of when the block was generated. A blockchain is essentially a decentralized database, a chain of data blocks associated using cryptography, each data block containing related information for verifying the validity (anti-counterfeiting) of its information and generating the next block.
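A toy sketch of this hash linking (illustrative only; real blockchain nodes use far richer block headers than assumed here):

```python
import hashlib
import json
import time

def make_block(record_data, prev_hash):
    # Each block stores the hash of the previous block, so blocks form a
    # chain that cannot be altered without breaking every later hash.
    body = {"records": record_data, "prev_hash": prev_hash, "timestamp": time.time()}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

genesis = make_block(["training time sequence of user A"], prev_hash="0" * 64)
block_1 = make_block(["item time sequence of user B"], prev_hash=genesis["hash"])
```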
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in the processing method of the item recommendation model provided in the embodiments of the present application. For example, the instructions may perform the steps of:
acquiring a training time sequence, wherein the training time sequence comprises item vectors of historical interactive items of a target user which are arranged in sequence, and the arrangement sequence of the item vectors is the sequence of the interactive time of the target user on the historical interactive items;
analyzing the structural characteristics of the training time sequence through an encoder layer of a to-be-trained item recommendation model to obtain a condition vector corresponding to each item vector in the training time sequence, wherein the condition vector comprises structural information of the training time sequence;
taking the training time sequence as an input sequence of a decoder layer of the recommended model of the item to be trained, taking a condition vector corresponding to each item vector as an input of the decoder layer, and predicting a prediction time sequence corresponding to the training time sequence through the decoder layer;
and optimizing the to-be-trained item recommendation model based on the difference of the item vectors corresponding to the same time node in the training time sequence and the prediction time sequence to obtain an item recommendation model for recommending items.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in the method for processing any item recommendation model provided in the embodiment of the present invention, the beneficial effects that can be achieved by the method for processing any item recommendation model provided in the embodiment of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The above detailed description is given of a processing method, an apparatus, a computer device, and a storage medium of a project recommendation model provided in an embodiment of the present application, and a specific example is applied in the present application to explain the principle and an implementation of the present application, and the description of the above embodiment is only used to help understanding the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (11)
1. A processing method of an item recommendation model is characterized by comprising the following steps:
acquiring a training time sequence, wherein the training time sequence comprises item vectors of historical interactive items of a target user which are arranged in sequence, and the arrangement sequence of the item vectors is the sequence of the interactive time of the target user on the historical interactive items;
selecting a part of time nodes from the training time sequence as replacement time nodes;
replacing the item vector corresponding to the replacement time node with a preset filling vector in the training time sequence to obtain a replaced training time sequence;
analyzing the structural characteristics of the training time sequence after replacement through an encoder layer of a to-be-trained item recommendation model to obtain a condition vector corresponding to each item vector in the training time sequence after replacement, wherein the condition vector comprises structural information of the training time sequence, the encoder layer is realized based on a cyclic neural network, the to-be-trained item recommendation model further comprises a decoder layer, and the decoder layer is a cyclic neural network and comprises a plurality of connected cyclic decoding units;
inputting the item vectors in the training time sequence into the corresponding cyclic decoding units of the decoder layer based on the corresponding time nodes, and inputting the condition vector corresponding to each item vector into the cyclic decoding unit corresponding to that item vector; wherein each cyclic decoding unit is configured to: acquire the hidden layer state vector output by the previous cyclic decoding unit, and obtain the hidden layer state vector output by the current cyclic decoding unit based on the acquired hidden layer state vector, the item vector and the condition vector;
predicting a prediction time sequence corresponding to the training time sequence based on the obtained hidden layer state vector, the item vector and the condition vector through each cyclic decoding unit;
and optimizing the to-be-trained item recommendation model based on the difference of the item vectors corresponding to the same time node in the training time sequence and the prediction time sequence to obtain an item recommendation model for recommending items.
2. The method for processing the item recommendation model according to claim 1, wherein the optimizing the item recommendation model to be trained based on the difference between the item vectors corresponding to the same time node in the training time series and the prediction time series to obtain the item recommendation model for item recommendation comprises:
calculating the cross entropy between the item vectors corresponding to the replacement time node in the training time sequence and in the prediction time sequence, and optimizing the item recommendation model to be trained with the aim of minimizing the cross entropy to obtain an item recommendation model for item recommendation;

or determining the item vectors corresponding to the time node next to the replacement time node in the training time sequence and in the prediction time sequence, calculating the cross entropy between the item vector determined in the training time sequence and the item vector determined in the prediction time sequence, and optimizing the item recommendation model to be trained with the aim of minimizing the cross entropy to obtain the item recommendation model for item recommendation.
3. The method of processing the item recommendation model according to claim 1, wherein the encoder layer is a recurrent neural network comprising a plurality of recurrent coding units connected; analyzing the structural characteristics of the training time sequence after replacement through an encoder layer of the recommendation model of the item to be trained to obtain condition vectors corresponding to the item vectors in the training time sequence after replacement, wherein the condition vectors comprise:
inputting the item vectors in the replaced training time sequence into the corresponding cyclic coding units in the encoder layer based on the corresponding time nodes; wherein each cyclic coding unit is configured to: acquire the hidden layer state vector output by the previous cyclic coding unit, and obtain the hidden layer state vector output by the current cyclic coding unit based on the acquired hidden layer state vector and the input item vector;
and acquiring a hidden layer state vector output by the last cyclic coding unit of the encoder layer, and taking the hidden layer state vector as a condition vector corresponding to each item vector in the training time sequence after replacement, wherein the hidden layer state vector contains the structural information of the training time sequence after replacement.
4. The method of processing the item recommendation model according to claim 1, wherein the encoder layer is a bi-directional cyclic neural network comprising a plurality of cyclic coding units connected; analyzing the structural characteristics of the training time sequence after replacement through an encoder layer of the recommendation model of the item to be trained to obtain condition vectors corresponding to the item vectors in the training time sequence after replacement, wherein the condition vectors comprise:
inputting the item vectors in the replaced training time sequence into the corresponding cyclic coding units based on the corresponding time nodes; wherein each cyclic coding unit is configured to: acquire the backward hidden layer state vector output backward by the previous cyclic coding unit and the forward hidden layer state vector output forward by the next cyclic coding unit, obtain the backward hidden layer state vector output backward by the current cyclic coding unit based on the acquired backward hidden layer state vector and the input item vector, and obtain the forward hidden layer state vector output forward by the current cyclic coding unit based on the acquired forward hidden layer state vector and the input item vector;
and processing the obtained forward hidden layer state vector, backward hidden layer state vector and item vector based on each cyclic coding unit to obtain a condition vector corresponding to each item vector in the training time sequence after replacement, wherein the condition vector comprises the structure information of the training time sequence after replacement.
5. The method of processing the item recommendation model according to claim 1, wherein said obtaining a training time series comprises:
acquiring a one-hot coding time sequence of the historical interactive items of the target user from a source database, wherein in the one-hot coding time sequence, the one-hot codes of the historical interactive items are arranged based on the interaction time of the target user with the historical interactive items;
acquiring an embedding matrix corresponding to the one-hot coding time sequence, wherein the embedding matrix comprises embedding vectors corresponding to the one-hot codes of all items;
and mapping each one-hot code in the one-hot coding time sequence into a corresponding item vector based on the embedding matrix to obtain the training time sequence.
6. The method of processing the item recommendation model according to claim 5, further comprising, before mapping each one-hot code in the one-hot coding time sequence to a corresponding item vector based on the embedding matrix to obtain the training time sequence:
acquiring user characteristics of the target user, determining user characteristic embedded vectors corresponding to the target user based on the user characteristics, and fusing the user characteristic embedded vectors into each embedded vector of the embedded matrix;
and/or acquiring operation information of the target user on each historical interactive item, determining an operation information embedding vector corresponding to each historical interactive item based on the operation information, and fusing the operation information embedding vector of each historical interactive item into the embedding vector corresponding to that historical interactive item in the embedding matrix.
7. The method for processing the item recommendation model according to claim 1, wherein the optimizing the item recommendation model to be trained based on the difference between the item vectors corresponding to the same time node in the training time series and the prediction time series to obtain the item recommendation model for item recommendation comprises:
converting the item vector positioned at the replacement time node in the prediction time sequence output by the decoder layer into a corresponding probability distribution through a Softmax layer;
calculating cross entropy between the item vector of the replacement time node in the training time sequence and the probability distribution of the replacement time node in the prediction time sequence;
optimizing the item recommendation model to be trained with the aim of minimizing the cross entropy to obtain an item recommendation model for recommending items;
or,
converting, through a Softmax layer, the item vector located at the time node next to the replacement time node in the prediction time sequence output by the decoder layer into a corresponding probability distribution;
calculating a cross entropy between a term vector of a time node next to the replacement time node in the training time series and a probability distribution of a time node next to the replacement time node in the prediction time series;
and optimizing the item recommendation model to be trained with the aim of minimizing the cross entropy to obtain an item recommendation model for recommending items.
8. The method of claim 1, wherein after optimizing the item recommendation model to be trained based on the difference between the item vectors corresponding to the same time node in the training time series and the prediction time series to obtain an item recommendation model for item recommendation, the method further comprises:
acquiring an item time sequence of a user to be recommended, wherein the item time sequence comprises item vectors of the historical interactive items of the user to be recommended arranged in sequence, and the arrangement sequence of the item vectors is the sequence of the interaction time of the user to be recommended on the historical interactive items;

taking the item time sequence as an input sequence of the item recommendation model, and analyzing the item time sequence through the item recommendation model to obtain a prediction time sequence corresponding to the item time sequence;

and determining the item to be recommended to the user to be recommended based on the prediction time sequence.
9. An apparatus for processing an item recommendation model, comprising:
an acquisition module, configured to acquire a training time sequence, where the training time sequence includes item vectors of the historical interactive items of a target user arranged in sequence, and the arrangement sequence of the item vectors is the sequence of the interaction time of the target user on the historical interactive items;

a replacing module, configured to select a part of the time nodes from the training time sequence as replacement time nodes, and to replace, in the training time sequence, the item vector corresponding to each replacement time node with a preset filling vector to obtain a replaced training time sequence;
the analysis module is used for analyzing the structural characteristics of the training time sequence after replacement through an encoder layer of the to-be-trained item recommendation model to obtain a condition vector corresponding to each item vector in the training time sequence after replacement, wherein the condition vector comprises structural information of the training time sequence, the encoder layer is realized based on a cyclic neural network, the to-be-trained item recommendation model further comprises a decoder layer, and the decoder layer is a cyclic neural network and comprises a plurality of connected cyclic decoding units;
a prediction module, configured to input the item vectors in the training time sequence into the corresponding cyclic decoding units of the decoder layer based on the corresponding time nodes, and to input the condition vector corresponding to each item vector into the cyclic decoding unit corresponding to that item vector; wherein each cyclic decoding unit is configured to: acquire the hidden layer state vector output by the previous cyclic decoding unit, and obtain the hidden layer state vector output by the current cyclic decoding unit based on the acquired hidden layer state vector, the item vector and the condition vector;
predicting a prediction time sequence corresponding to the training time sequence based on the obtained hidden layer state vector, the item vector and the condition vector through each cyclic decoding unit;
and the optimization module is used for optimizing the to-be-trained item recommendation model based on the difference of the item vectors corresponding to the same time node in the training time sequence and the prediction time sequence to obtain an item recommendation model for recommending items.
10. A computer device comprising a processor and a memory, the memory storing a plurality of instructions; the processor loads instructions from the memory to perform the steps in the method of processing an item recommendation model according to any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when the computer program is run on a computer, causes the computer to execute a processing method of an item recommendation model according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910984752.9A CN110765353B (en) | 2019-10-16 | 2019-10-16 | Processing method and device of project recommendation model, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110765353A CN110765353A (en) | 2020-02-07 |
CN110765353B true CN110765353B (en) | 2022-03-08 |
Family
ID=69331315
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910984752.9A Active CN110765353B (en) | 2019-10-16 | 2019-10-16 | Processing method and device of project recommendation model, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110765353B (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113254795B (en) * | 2020-02-11 | 2023-11-07 | 北京京东振世信息技术有限公司 | Training method and device for recommendation model |
WO2021159448A1 (en) * | 2020-02-14 | 2021-08-19 | 中国科学院深圳先进技术研究院 | General network compression framework and compression method based on sequence recommendation system |
CN111597233B (en) * | 2020-04-03 | 2022-07-15 | 浙江工业大学 | Design mode recommendation method for resource-constrained environment |
CN111582492B (en) * | 2020-04-13 | 2023-02-17 | 清华大学 | Dissociation self-supervision learning method and device of sequence recommendation model |
CN113297418A (en) * | 2020-04-17 | 2021-08-24 | 阿里巴巴集团控股有限公司 | Project prediction and recommendation method, device and system |
CN111651671B (en) * | 2020-05-27 | 2023-11-21 | 腾讯科技(深圳)有限公司 | User object recommendation method, device, computer equipment and storage medium |
CN111538906B (en) * | 2020-05-29 | 2023-06-20 | 支付宝(杭州)信息技术有限公司 | Information pushing method and device based on privacy protection |
CN112446556B (en) * | 2021-01-27 | 2021-04-30 | 电子科技大学 | Communication network user calling object prediction method based on expression learning and behavior characteristics |
CN114115878A (en) * | 2021-11-29 | 2022-03-01 | 杭州数梦工场科技有限公司 | Workflow node recommendation method and device |
CN114048826B (en) * | 2021-11-30 | 2024-04-30 | 中国建设银行股份有限公司 | Recommendation model training method, device, equipment and medium |
CN116805255B (en) * | 2023-06-05 | 2024-04-23 | 深圳市瀚力科技有限公司 | Advertisement automatic optimizing throwing system based on user image analysis |
CN116911938A (en) * | 2023-06-08 | 2023-10-20 | 天翼爱音乐文化科技有限公司 | Service recommendation method, device, equipment and medium based on vector coding |
CN116541610B (en) * | 2023-07-06 | 2023-09-29 | 深圳须弥云图空间科技有限公司 | Training method and device for recommendation model |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108647226A (en) * | 2018-03-26 | 2018-10-12 | 浙江大学 | A kind of mixing recommendation method based on variation autocoder |
CN108874914A (en) * | 2018-05-29 | 2018-11-23 | 吉林大学 | A kind of information recommendation method based on the long-pending and neural collaborative filtering of picture scroll |
CN109446430A (en) * | 2018-11-29 | 2019-03-08 | 西安电子科技大学 | Method, apparatus, computer equipment and the readable storage medium storing program for executing of Products Show |
CN109543100A (en) * | 2018-10-31 | 2019-03-29 | 上海交通大学 | User interest modeling method and system based on Cooperative Study |
CN109635204A (en) * | 2018-12-21 | 2019-04-16 | 上海交通大学 | Online recommender system based on collaborative filtering and length memory network |
CN109754317A (en) * | 2019-01-10 | 2019-05-14 | 山东大学 | Merge interpretation clothes recommended method, system, equipment and the medium of comment |
CN110232480A (en) * | 2019-03-01 | 2019-09-13 | 电子科技大学 | The item recommendation method and model training method realized using the regularization stream of variation |
CN110287412A (en) * | 2019-06-10 | 2019-09-27 | 腾讯科技(深圳)有限公司 | Content recommendation method, recommended models generation method, equipment and storage medium |
CN110309427A (en) * | 2018-05-31 | 2019-10-08 | 腾讯科技(深圳)有限公司 | A kind of object recommendation method, apparatus and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7590616B2 (en) * | 2006-11-17 | 2009-09-15 | Yahoo! Inc. | Collaborative-filtering contextual model based on explicit and implicit ratings for recommending items |
CN108984731A (en) * | 2018-07-12 | 2018-12-11 | 腾讯音乐娱乐科技(深圳)有限公司 | Sing single recommended method, device and storage medium |
Non-Patent Citations (5)
Title |
---|
Improved Recurrent Neural Networks for Session-based Recommendations; Yong Kiam Tan; arXiv; 2016-09-16; pp. 1-6 *
Modeling Embedding Dimension Correlations via Convolutional Neural Collaborative Filtering; Xiaoyu Du; ACM Transactions on Information Systems; 2019-09-30; pp. 1-22 *
Modeling the Past and Future Contexts for Session-based Recommendation; Fajie Yuan; arXiv; 2019-06-11; pp. 1-10 *
Session-based Recommendations with Recurrent Neural Networks; Balazs Hidasi; arXiv; 2016-03-29; pp. 1-10 *
A Survey of Recommendation Systems Based on Deep Learning; Huang Liwei; Chinese Journal of Computers; 2018-07-31; pp. 1619-1647 *
Also Published As
Publication number | Publication date |
---|---|
CN110765353A (en) | 2020-02-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110765353B (en) | Processing method and device of project recommendation model, computer equipment and storage medium | |
CN110287412B (en) | Content recommendation method, recommendation model generation method, device, and storage medium | |
CN111008336A (en) | Content recommendation method, device and equipment and readable storage medium | |
CN111026977B (en) | Information recommendation method and device and storage medium | |
CN111079015A (en) | Recommendation method and device, computer equipment and storage medium | |
CN112559896A (en) | Information recommendation method, device, equipment and computer readable storage medium | |
CN110598070A (en) | Application type identification method and device, server and storage medium | |
CN112348592A (en) | Advertisement recommendation method and device, electronic equipment and medium | |
CN114416313A (en) | Task scheduling method and device based on resource prediction model | |
CN113283948A (en) | Generation method, device, equipment and readable medium of prediction model | |
CN114297470A (en) | Content recommendation method, device, equipment, medium and computer program product | |
CN110674181B (en) | Information recommendation method and device, electronic equipment and computer-readable storage medium | |
CN115935185A (en) | Training method and device for recommendation model | |
CN115168721A (en) | User interest recommendation method and system integrating collaborative transformation and temporal perception | |
CN113011911B (en) | Data prediction method and device based on artificial intelligence, medium and electronic equipment | |
CN115358807A (en) | Article recommendation method and device, storage medium and electronic equipment | |
CN114429384B (en) | Intelligent product recommendation method and system based on e-commerce platform | |
CN115455292A (en) | Product recommendation and model training method and device based on target recommendation model | |
CN117009912A (en) | Information recommendation method and training method for neural network model for information recommendation | |
CN111784377B (en) | Method and device for generating information | |
CN115329183A (en) | Data processing method, device, storage medium and equipment | |
CN110727705A (en) | Information recommendation method and device, electronic equipment and computer-readable storage medium | |
Zhao et al. | Deep hierarchical reinforcement learning based recommendations via multi-goals abstraction | |
CN118569865B (en) | Data processing method and system for multi-platform aggregate payment | |
CN113706204B (en) | Deep learning-based rights issuing method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40020932 |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |