CN107730306B - Movie scoring prediction and preference estimation method based on multi-dimensional preference model - Google Patents
- Publication number
- CN107730306B (application CN201710880804.9A)
- Authority
- CN
- China
- Prior art keywords
- rdd
- preference
- value
- conditional probability
- elements
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0201—Market modelling; Market analysis; Collecting market data
- G06Q30/0202—Market predictions or forecasting for commercial activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Accounting & Taxation (AREA)
- Development Economics (AREA)
- Finance (AREA)
- Strategic Management (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Entrepreneurship & Innovation (AREA)
- General Physics & Mathematics (AREA)
- Marketing (AREA)
- General Business, Economics & Management (AREA)
- Economics (AREA)
- Game Theory and Decision Science (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a movie score prediction and preference estimation method based on a multi-dimensional preference model. The method uses a multi-dimensional preference model to express user preferences over multiple dimensions; on the basis of this model it applies variable elimination to obtain an elimination result set that integrates node information, then generates a joint set, and finally performs score prediction and preference estimation from the joint set. The whole process carries out parallel inference on the Spark computing framework, making full use of Spark's strengths on large-scale data and improving the efficiency of score prediction and preference estimation.
Description
Technical Field
The invention belongs to the technical field of artificial intelligence and information processing, and particularly relates to a movie scoring prediction and preference estimation method based on a multi-dimensional preference model.
Background
With the rapid development of the Internet, more and more people take part in online activities. In movie communities, rating movies to judge their quality has become increasingly popular among movie enthusiasts. With the rapid expansion of well-known movie communities such as Douban Movies, a user must spend a great deal of time finding the movies he or she likes. For a movie community, providing personalized services and recommending movies to users in a targeted manner is therefore an important safeguard against losing users.
In a movie community, user preference is a user's liking of, or tendency to choose, a movie; it affects the user's final rating of the movie, and the rating in turn reflects the preference. Users' ratings of movies generate a large amount of user behavior data. Understanding the knowledge contained in these data, building a user preference model, and predicting movie ratings or estimating user preferences on the basis of the constructed model have attracted more and more attention.
Taking the Douban movie community as an example, in the rating data the "user" has the two attributes "occupation" and "residence", while the "movie" has attributes in multiple dimensions, including "director", "screenwriter", "starring", "genre", "language" and "release time"; the user has a corresponding preference or tendency toward each dimension of movie attributes, which together form the user's multi-dimensional preferences. Using several latent variables to describe the user preferences of the individual dimensions, and using a Bayesian network containing these latent variables to represent the dependencies among the variables, is an effective modeling method; such a model is referred to for short as a multi-dimensional preference model. Predicting a user's rating of a movie and estimating the user's preferences over the movie's dimensions on the basis of the multi-dimensional preference model provides a quantitative computation mechanism and more accurate results for problems such as personalized information services, user targeting and click-through-rate prediction in movie communities, and is therefore of great significance.
Known score prediction and preference estimation methods are based on models such as collaborative filtering. For example, Liu et al. (Journal of Software, 2015) proposed a collaborative ranking algorithm based on a local low-rank assumption on the rating matrix to predict ratings; Zheng et al. (Chinese Journal of Computers, 2016) predicted ratings with a context-aware method; Liu et al. (Journal of Computer Applications, 2015) proposed a preference estimation method based on matrix factorization; Luo et al. (Computer Science, 2017) analyzed user preferences dynamically by decomposing multi-dimensional matrices; and the authors of patent 105205184 (2015) proposed a preference extraction method based on latent variable models. These methods predict ratings or estimate preferences well, but they cannot be applied to score prediction or preference estimation based on a multi-dimensional preference model. The multi-dimensional preference model can express user preferences over multiple dimensions, and score prediction and preference estimation based on it are more accurate and closer to the actual situation.
The multi-dimensional preference model, generally given by experts, is a Bayesian network with several latent variables; it represents the dependencies between variables, and the uncertainty of those dependencies, in a qualitative and quantitative manner. The core of score prediction and preference estimation based on the multi-dimensional preference model is probabilistic inference over the Bayesian network. The conditional probability table sets of traditional Bayesian networks are small, but inference still has exponential time complexity. For example, Ma et al. (Journal of Computer Research and Development, 2015) applied Bayesian network inference to research on attacks, and Zhan et al. (University of Electronic Science and Technology of China, 2014) combined maximal prime subgraph decomposition with a genetic algorithm to propose a hybrid Bayesian network inference method, improving inference efficiency to a certain extent. A multi-dimensional preference model, however, contains several latent variables; the size of the conditional probability table set of each node grows sharply with the number of latent variables, which further increases the complexity of computing marginal probabilities. At the same time, preference estimation and score prediction have strict timeliness requirements, so the above methods are not applicable to user preference estimation and score prediction based on a multi-dimensional preference model.
When estimating a user's preferences over the dimensions of a movie and predicting the user's rating of the movie, the efficiency bottleneck lies in Bayesian network probabilistic inference. For the inference problem of Bayesian networks with large-scale conditional probability tables, known methods either optimize the inference process to remove repeated computation or extend it in parallel. For example, Hu et al. (Journal of Software, 2011) proposed BN-EJTR, an interestingness computation and pruning method for frequent itemsets and frequent attribute sets based on domain knowledge, together with a Bayesian network inference algorithm based on extended junction tree elimination; Yue et al. (patent 201310709499.9, 2017) proposed a MapReduce-based parallel inference method for large-scale Bayesian networks; and Sun et al. (patent 2011110319410, 2012) introduced user feedback to improve inference speed. These approaches improve the inference efficiency of Bayesian networks to a certain extent, and can therefore improve the efficiency of score prediction and preference estimation, but they still cannot fully satisfy the strict timeliness requirements of score prediction and preference estimation.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a movie scoring prediction and preference estimation method based on a multi-dimensional preference model.
In order to achieve the above object, the movie score prediction and preference estimation method based on the multidimensional preference model of the present invention comprises the following steps:
S1: reading the multi-dimensional preference model of the movie, including the model structure G = (V, E) and the set of conditional probability tables θ, where G is the directed acyclic graph structure of the model, V is the set of nodes in the graph, E is the set of directed edges, and θ is the set of conditional probability tables of all nodes;
S2: generating an item set Name_RDD from the node set V of the multi-dimensional preference model, where RDD denotes a resilient distributed dataset; Name_RDD is represented in set form as {R, C1, C2, …, CM, I1, I2, …, IN, L1, L2, …, LN}, where R represents the user score, Ci denotes the i-th item of user information, i = 1, 2, …, M, with M the number of items of user information, Ij denotes the movie information of the j-th dimension, j = 1, 2, …, N, with N the number of items of movie information, and Lj represents the user's preference for the movie information of the j-th dimension;
defining a conditional probability set F_RDD whose elements have the format P(v) or P(v|π(v)), where v is the index of the current node, v ∈ V, π(v) is the set of parent nodes of node v, and P(v) or P(v|π(v)) denotes the conditional probability corresponding to node v in the conditional probability table set θ; the conditional probability set F_RDD is generated as follows:
applying the transformation operation map to Name_RDD: for each node v in Name_RDD, query the multi-dimensional preference model G to judge whether v has a parent node; if it has no parent node, convert the element to P(v); if it has a parent node, convert the element to P(v|π(v));
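The construction of F_RDD described in steps S1–S2 can be sketched, without Spark, as a plain map over node names; the parent table below is a hypothetical toy model, not the patent's actual network:

```python
# Minimal sketch of step S2: build the conditional-probability set F_RDD
# from the item set Name_RDD. The parent sets here are invented toy values.
parents = {
    "C1": [],            # user-information node, no parents -> P(C1)
    "L1": ["C1"],        # preference node depends on user information
    "I1": ["L1"],        # movie-information node depends on the preference
    "R":  ["L1", "I1"],  # score depends on preference and movie information
}
name_rdd = ["C1", "L1", "I1", "R"]

def to_factor(v):
    pa = parents[v]
    return "P(%s)" % v if not pa else "P(%s|%s)" % (v, ",".join(pa))

# in Spark this would be name_rdd.map(to_factor)
f_rdd = [to_factor(v) for v in name_rdd]
print(f_rdd)
```

A list comprehension stands in for the Spark map; on an actual RDD the same function would be passed to `map` unchanged.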
S3: applying the transformation operation filter to the conditional probability set F_RDD, selecting the elements whose v is user information and dividing them into the user-information conditional probability set filter1_F_RDD, and dividing the remaining elements into the non-user-information conditional probability set filter0_F_RDD;
S4: determining the elimination order of the nodes by maximum cardinality search, recording the elimination order set as ρ, and defining an elimination result set RF_RDD, generated as follows:
selecting elimination elements from the elimination order set ρ in order; for the current elimination element x ∈ ρ, search all elements of the non-user-information conditional probability set filter0_F_RDD: for each element P(v|π(v)) in filter0_F_RDD, if v or π(v) is the same as x, take that element as a summation term of the elimination element x and delete it from filter0_F_RDD; otherwise perform no operation. Let the number of summation terms of element x be K and denote the summation terms by fk(x), k = 1, 2, …, K; the corresponding summation formula Σx F(x) is obtained, with F(x) = f1(x)*…*fK(x). Put the generated summation formula into the elimination result set RF_RDD, then select the next elimination element, until the elements of the elimination order set ρ have all been traversed. If the non-user-information conditional probability set filter0_F_RDD is not empty after the traversal, put its elements directly into the elimination result set RF_RDD; otherwise perform no operation;
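The elimination loop of S4 can be illustrated on symbolic factor strings; the factors and elimination order below are hypothetical examples, not drawn from a real model:

```python
# Sketch of step S4: for each elimination variable x, collect the factors
# that mention x into one summation formula sum_x f1(x)*...*fK(x).
# Factors are (v, pi(v)) pairs; all values are illustrative only.
factors = [("L1", ["C1"]), ("I1", ["L1"]), ("R", ["L1", "I1"])]
rho = ["I1", "R"]  # hypothetical elimination order

def fmt(v, pa):
    return "P(%s|%s)" % (v, ",".join(pa))

rf_rdd, remaining = [], list(factors)
for x in rho:
    # factors whose node or parent set mentions x become summation terms
    terms = [(v, pa) for (v, pa) in remaining if v == x or x in pa]
    remaining = [f for f in remaining if f not in terms]
    if terms:
        rf_rdd.append("sum_%s[%s]" % (x, "*".join(fmt(v, pa) for v, pa in terms)))
# factors left over after the traversal enter the result set unchanged
rf_rdd.extend(fmt(v, pa) for v, pa in remaining)
print(rf_rdd)
```

As in the patent's description, each matched factor is removed from the pool and grouped under one symbolic summation; the leftover factor P(L1|C1) passes through untouched.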
S5: applying the transformation operation union to the elimination result set RF_RDD, merging all elements of the user-information conditional probability set filter1_F_RDD into RF_RDD, and generating the joint set union_RDD;
s6: when the scoring prediction is needed, the following method is adopted:
S6.1: applying the transformation operation map to the joint set union_RDD, and replacing each summation-formula element whose summation variable is an element of the movie information set I or is the score R with the expression inside the summation brackets, generating the score conditional probability set R_RDD;
s6.2: generating a score prediction formula, wherein the specific steps comprise:
S6.2.1: reading the user information EC = {C1 = c1, C2 = c2, …, CM = cM} and the movie information of multiple dimensions EI = {I1 = i1, I2 = i2, …, IN = iN} for which score prediction is required; denote the union of EC and EI by E, and denote {C1, C2, …, CM, I1, I2, …, IN} by CI;
S6.2.2: applying the transformation operation map to the score conditional probability set R_RDD: for each element of the form Σx F(x) or P(v|π(v)) in R_RDD, if x, v or π(v) belongs to the set CI, replace it with the corresponding element of the set E, generating the evidence set RE_RDD;
S6.2.3: defining a query score probability table set RS_RDD, generated as follows: apply the transformation operation map to the evidence set RE_RDD; for each element, query the conditional probability table set θ, and if the probability value corresponding to the element of the evidence set RE_RDD can be obtained, replace the element with that probability value;
S6.2.4: applying the transformation operation filter to the query score probability table set RS_RDD: elements of the form P(v|π(v)) or Σx F(x) are divided into filter0_RS_RDD, and the remaining elements into filter1_RS_RDD. Sort the elements of filter0_RS_RDD in reverse order according to the order of v or x in the variable set; select the last element y; if the preceding element y' is of the form Σx F(x), replace that preceding element with Σx F(x)*y, otherwise replace it with y'*y; then delete element y and continue selecting the current last element for processing, until only one element remains. Then multiply the expression of the last remaining element by all elements of filter1_RS_RDD, and denote the product expression as the score prediction term hr(R);
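The right-to-left folding of S6.2.4 can be sketched on symbolic strings, assuming every element of filter0 is a summation formula "sum_x[...]"; the concrete factor strings and evidence value are placeholders:

```python
# Sketch of step S6.2.4: fold the sorted summation formulas from the end
# inward, nesting each removed element y inside its predecessor, then
# multiply by the user-information factors. All strings are illustrative.
filter0 = ["sum_L1[P(L1|C1)]", "sum_L2[P(L2|C1)]"]  # sorted in reverse order
filter1 = ["P(C1=doctor)"]                          # evidence-substituted

while len(filter0) > 1:
    y = filter0.pop()          # current last element
    y_prev = filter0[-1]
    # y' has the form sum_x[...]; splice y inside it: sum_x[...*y]
    filter0[-1] = y_prev[:-1] + "*" + y + "]"

hr = "*".join(filter1 + filter0)   # score prediction term hr(R)
print(hr)
```

The splice places each inner sum inside the scope of the outer one, mirroring the nested-summation expression that the step builds.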
s6.2.5: generating a score prediction formula according to the score prediction term hr (R) as follows:
S6.3: defining a score prediction result set RMap_RDD with the format <key, value>, generated as follows: traverse all values of the score R; take the current value of R as key, and record the result of substituting the current value of R into the score prediction formula as value;
S6.4: applying the action operation collect to RMap_RDD to gather the contents of the score prediction result set RMap_RDD, then applying the foreach function to RMap_RDD, traversing all <key, value> pairs in the score prediction result set RMap_RDD, and selecting the key of the <key, value> pair with the largest value as the score prediction result;
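Steps S6.3–S6.4 reduce to an argmax over the <key, value> pairs; a minimal sketch with invented probability values:

```python
# Sketch of S6.3/S6.4: evaluate the score prediction formula for every
# candidate score R and pick the key whose value is largest.
# The probability values below are invented for illustration.
rmap_rdd = {1: 0.05, 2: 0.10, 3: 0.20, 4: 0.40, 5: 0.25}  # R -> probability
predicted_score = max(rmap_rdd, key=rmap_rdd.get)
print(predicted_score)  # the score with the highest predicted probability
```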
s7: when preference estimation is needed, the method comprises the following steps:
S7.1: determining the movie information set L' for which preference estimation is needed; applying the transformation operation map to the joint set union_RDD, and when a summation variable belongs to the set L', replacing the corresponding summation-formula element with the expression inside the summation brackets, generating the preference conditional probability set L_RDD;
s7.2: generating a preference estimation formula, which comprises the following specific steps:
S7.2.1: reading the user information EC = {C1 = c1, C2 = c2, …, CM = cM} for which preference estimation is required, and denoting {C1, C2, …, CM} by C;
S7.2.2: applying the transformation operation map to the preference conditional probability set L_RDD: for each element of the form Σx F(x) or P(v|π(v)) in L_RDD, if x, v or π(v) belongs to the set C, replace it with the corresponding element of EC, generating the preference evidence set LE_RDD;
S7.2.3: defining a query preference probability table set LS_RDD, generated as follows: apply the transformation operation map to the preference evidence set LE_RDD; for each element, query the conditional probability table set θ, and if the probability value corresponding to the element of the preference evidence set LE_RDD can be obtained, replace the element with that probability value;
S7.2.4: applying the transformation operation filter to the query preference probability table set LS_RDD: elements of the form P(v|π(v)) or Σx F(x) are divided into filter0_LS_RDD, and the remaining elements into filter1_LS_RDD. Sort the elements of filter0_LS_RDD in reverse order according to the order of v or x in the variable set; select the last element y; if the preceding element y' is of the form Σx F(x), replace that preceding element with Σx F(x)*y, otherwise replace it with y'*y; then delete element y and continue selecting the current last element for processing, until only one element remains. Then multiply the expression of the last remaining element by all elements of filter1_LS_RDD, and denote the product expression as the preference estimation term hl(L');
s7.2.5: generating a preference estimation formula from hl (L'):
S7.3: defining a preference estimation result set LMap_RDD with the format <key, value>, generated as follows: traverse all value combinations of the movie information set L' for which preference estimation is performed; take the values of the items of movie information in the current set L' as key, and record the result of substituting them into the preference estimation formula as value;
s7.4: and applying action operation collection to the LMap _ RDD, collecting the content of the preference estimation result set LMap _ RDD, further applying a foreach function to the LMap _ RDD, traversing the preference estimation result set, and selecting a key value of a maximum value < key, value > pair as a preference estimation result.
The movie score prediction and preference estimation method based on a multi-dimensional preference model according to the invention uses a multi-dimensional preference model to express user preferences over multiple dimensions; on the basis of this model it applies variable elimination to obtain an elimination result set that integrates node information, then generates a joint set, and finally performs score prediction and preference estimation from the joint set. The whole process carries out parallel inference on the Spark computing framework, making full use of Spark's strengths on large-scale data and improving the efficiency of score prediction and preference estimation.
Drawings
FIG. 1 is a flowchart of an embodiment of a movie scoring prediction and preference estimation method based on a multi-dimensional preference model according to the present invention;
FIG. 2 is a flow chart of an embodiment of score prediction in the present invention;
FIG. 3 is a flow diagram of an embodiment of the present invention for generating a scoring prediction formula;
FIG. 4 is a flow diagram of an embodiment of preference estimation in the present invention;
FIG. 5 is a flow diagram of an embodiment of the present invention for generating a preference estimation formula;
fig. 6 is a diagram of a multi-dimensional preference model of the movie in the present embodiment.
Detailed Description
The embodiments of the present invention are described below with reference to the accompanying drawings so that those skilled in the art can better understand the invention. It should be expressly noted that in the following description, detailed descriptions of known functions and designs are omitted where they might obscure the subject matter of the invention.
Examples
Probabilistic inference in a Bayesian network computes the conditional probability P(Q = q | E = e) from the structure of the Bayesian network and its set of conditional probability tables, where Q is the query variable and E is the evidence variable; its essence is to search the conditional probability table set and to use the conditional independencies in the Bayesian network to simplify the computation of the joint distribution. A movie has attributes in multiple dimensions, and a multi-dimensional preference model can describe the user's preferences for these attributes well, but inference based on the multi-dimensional preference model is difficult. Variable elimination is an effective inference algorithm: by computing marginal conditional probabilities in a specific order it avoids repeated computation and can effectively reduce the complexity of inference. Probabilistic inference based on the multi-dimensional preference model is the core of preference estimation and score prediction; Spark is an efficient memory-based parallel computing framework, and extending the probabilistic inference in parallel with this framework can improve inference efficiency. The invention targets the multi-dimensional preference model of movies and, building on inference by variable elimination, completes movie score prediction and preference estimation with Spark as the computing framework.
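As a concrete illustration of the variable elimination idea described above, the marginal P(B) of a toy two-node network A → B is obtained by summing A out once rather than enumerating the full joint distribution; all numbers are invented:

```python
# Variable elimination on a toy network A -> B: P(B=b) = sum_a P(a)*P(b|a).
p_a = {0: 0.6, 1: 0.4}                    # P(A), illustrative numbers
p_b_given_a = {(0, 0): 0.9, (0, 1): 0.1,  # P(B|A)
               (1, 0): 0.2, (1, 1): 0.8}

def marginal_b(b):
    # eliminate A by summing over all of its values
    return sum(p_a[a] * p_b_given_a[(a, b)] for a in p_a)

print(round(marginal_b(0), 2), round(marginal_b(1), 2))
```

With more variables, performing these sums in a good order keeps each intermediate factor small, which is exactly what the elimination order ρ controls in the method above.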
FIG. 1 is a flowchart of an embodiment of a movie score prediction and preference estimation method based on a multi-dimensional preference model according to the present invention. As shown in FIG. 1, the movie scoring prediction and preference estimation method based on the multi-dimensional preference model of the present invention includes two parts, the first part is preprocessing, reading the structure and conditional probability table set of the multi-dimensional preference model, calculating and storing the intermediate result; the second part is movie scoring prediction and preference estimation, user information and movie information are read to generate a scoring prediction formula or a preference estimation formula, calculation is carried out according to a stored intermediate result and a conditional probability table set to obtain a result sequence of possible scoring value probability distribution or possible preference value probability distribution of the movie, the maximum value of the result sequence is selected as a result of the movie scoring prediction or the preference estimation, and the operation of the second part can be repeated for any number of times on the basis of the completion of the first part. The first part includes the following specific steps S101 to S106:
s101: reading a multi-dimensional preference model:
reading the multi-dimensional preference model of the movie, which is generally constructed from movie rating data or given by experts, including the model structure G = (V, E) and the set of conditional probability tables θ, where G is the directed acyclic graph structure of the model, V is the set of nodes in the graph, E is the set of directed edges, edges between nodes represent direct dependencies, and θ is the set of conditional probability tables of all nodes.
S102: generating a conditional probability set F _ RDD:
the generation of the conditional probability set is divided into two steps:
S2.1: generating an item set Name_RDD from the node set V of the multi-dimensional preference model, where RDD denotes a Resilient Distributed Dataset; Name_RDD is represented in set form as {R, C1, C2, …, CM, I1, I2, …, IN, L1, L2, …, LN}, i.e. the node set V, where R represents the user score, Ci denotes the i-th item of user information, i = 1, 2, …, M, with M the number of items of user information, Ij denotes the movie information of the j-th dimension, j = 1, 2, …, N, with N the number of items of movie information, and Lj represents the user's preference for the movie information of the j-th dimension. The user score takes several values, for example {1, 2, 3, 4, 5} points; user information generally includes "occupation", "residence" and the like, and movie information generally includes "director", "era", "starring" and the like.
S2.2: defining a conditional probability set F _ RDD, wherein the element format of the conditional probability set F _ RDD is P (V) or P (V | pi (V)), V is a node sequence number, V belongs to V, pi (V) is a parent node set of a node V, P (V) or P (V | pi (V)) represents the conditional probability corresponding to the node V in the conditional probability table set theta, and the generation mode of the conditional probability set F _ RDD is as follows:
applying the transformation operation map to Name_RDD: for each node v in Name_RDD, query the multi-dimensional preference model G to judge whether v has a parent node; if it has no parent node, convert the element to P(v); if it has a parent node, convert the element to P(v|π(v)).
S103: dividing a conditional probability set F _ RDD:
dividing the elements of the conditional probability set F_RDD according to whether they contain only user information, obtaining the two sets filter1_F_RDD and filter0_F_RDD, as follows: apply the transformation operation filter to the conditional probability set F_RDD, select the elements whose v is user information and divide them into the user-information conditional probability set filter1_F_RDD, and divide the remaining elements into the non-user-information conditional probability set filter0_F_RDD.
According to the characteristics of the multi-dimensional preference model of the movie, when v is movie information Ij, a user preference Lj or the user score R, the node has parent nodes, so the elements of the non-user-information conditional probability set filter0_F_RDD all have the format P(v|π(v)).
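The split performed in S103 is a single filter over the factor strings; a sketch with hypothetical node names and factors:

```python
# Sketch of step S103: split F_RDD by whether the factor's node v is a
# user-information node. Node names and factors are invented examples.
user_info_nodes = {"C1", "C2"}
f_rdd = ["P(C1)", "P(C2)", "P(L1|C1)", "P(R|L1,I1)"]

def node_of(factor):
    # extract v from "P(v)" or "P(v|pi(v))"
    return factor[2:-1].split("|")[0]

# in Spark: f_rdd.filter(lambda f: node_of(f) in user_info_nodes), etc.
filter1_f_rdd = [f for f in f_rdd if node_of(f) in user_info_nodes]
filter0_f_rdd = [f for f in f_rdd if node_of(f) not in user_info_nodes]
print(filter1_f_rdd, filter0_f_rdd)
```

Consistent with the remark above, every element landing in filter0_f_rdd here has the conditional form P(v|π(v)).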
S104: the elimination processing obtains an elimination result set RF _ RDD:
determining the elimination order of the nodes by maximum cardinality search, recording the elimination order set as ρ, and defining an elimination result set RF_RDD, generated as follows:
selecting elimination elements from the elimination order set ρ in order; for the current elimination element x ∈ ρ, search all elements of the non-user-information conditional probability set filter0_F_RDD: for each element P(v|π(v)) in filter0_F_RDD, if v or π(v) is the same as x, take that element as a summation term of the elimination element x and delete it from filter0_F_RDD; otherwise perform no operation. Let the number of summation terms of element x be K and denote the summation terms by fk(x), k = 1, 2, …, K; the corresponding summation formula Σx F(x) is obtained, with F(x) = f1(x)*…*fK(x). Put the generated summation formula into the elimination result set RF_RDD, then select the next elimination element, until the elements of the elimination order set ρ have all been traversed. If the non-user-information conditional probability set filter0_F_RDD is not empty after the traversal, put its elements directly into the elimination result set RF_RDD; otherwise perform no operation.
S105: generating a union set unit _ RDD:
merging the user-information conditional probability set filter1_F_RDD and the elimination result set RF_RDD: apply the transformation operation union to the elimination result set RF_RDD, merging all elements of the user-information conditional probability set filter1_F_RDD into RF_RDD and generating the joint set union_RDD.
The steps of the second part movie score prediction and preference estimation are S106 and S107, and the specific steps are as follows:
s106: and (3) score prediction:
fig. 2 is a flow chart of an embodiment of score prediction in the present invention. As shown in fig. 2, the scoring prediction in the present invention comprises the following steps:
s201: generating a scoring conditional probability set R _ RDD:
from the joint set union_RDD, the summation-formula elements whose summation variable is movie information Ij or the score R are selected and modified. The specific operation is as follows: apply the transformation operation map to union_RDD, and replace each summation-formula element whose summation variable is an element of the movie information set I or is the score R with the expression inside the summation brackets, generating the score conditional probability set R_RDD.
S202: generating a score prediction formula:
Fig. 3 is a flow chart of an embodiment of generating a score prediction formula in the present invention. As shown in Fig. 3, the specific steps of generating the score prediction formula are:
S301: reading the information for score prediction:
The user information requiring score prediction, E_C = {C1=c1, C2=c2, …, CM=cM}, and the movie information of a plurality of dimensions, E_I = {I1=i1, I2=i2, …, IN=iN}, are read; the union of E_C and E_I is denoted E, and {C1, C2, …, CM, I1, I2, …, IN} is denoted CI.
When score prediction is actually performed, a specific value need not exist for every item of user information or movie information; in that case the score of a broad class of users for a broad class of movies is predicted. For example, if the user information in the multidimensional preference model includes "occupation" and "residence" and the movie information includes "director" and "era", while the information supplied for score prediction is only that the occupation is doctor and that the era is the 2000s, then the predicted score is that of users whose occupation is doctor for movies of the 2000s.
S302: information replacement:
A transformation operation map is applied to the scoring conditional probability set R_RDD. For each element of the form Σ_x F(x) or P(v|π(v)) in R_RDD, if x, v or π(v) belongs to the set CI, it is replaced by the corresponding element of the set E; that is, the information requiring score prediction is substituted into the relevant elements of the scoring conditional probability set R_RDD, generating an evidence set RE_RDD.
S303: generating a query score probability table set RS_RDD:
A query score probability table set RS_RDD is defined and generated as follows: a transformation operation map is applied to the evidence set RE_RDD and, for each element, all elements of the conditional probability table set Θ are queried; if the probability value corresponding to the element of RE_RDD can be obtained, the element is replaced by that probability value.
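Step S303 amounts to a table lookup. A minimal sketch, assuming the conditional probability tables Θ are available as a dictionary from element descriptions to probability values (the dictionary, its keys, and the values are hypothetical):

```python
# Hypothetical CPT lookup: replace an element by its probability value
# when Theta has one, otherwise keep the element unchanged.
theta = {'P(C1=lawyer)': 0.27, 'P(C2=Beijing)': 0.38}
re_rdd = ['P(C1=lawyer)', 'P(C2=Beijing)', 'sum_L1 P(L1|C1,C2)']
rs_rdd = [theta.get(e, e) for e in re_rdd]
```

Elements with a summation variable that was not substituted away, such as the last one here, survive the lookup unchanged and are handled in step S304.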
S304: generating a score prediction term hR(R):
A transformation operation filter is applied to divide the query score probability table set RS_RDD: elements of the form P(v|π(v)) or Σ_x F(x) are divided into filter0_RS_RDD, and the remaining elements into filter1_RS_RDD. The elements of filter0_RS_RDD are sorted in reverse order of v or x in the elimination order set ρ, and the last element y is selected: if the preceding element y' is of the form Σ_x F(x), the preceding element is replaced by Σ_x(F(x)·y), i.e., y is moved inside the summation; otherwise it is replaced by y'·y. Element y is then deleted and the now-last element, i.e., the element obtained by the replacement, is processed in the same way, until only one element remains. The expression of the last remaining element is then multiplied by all elements of filter1_RS_RDD, and the product expression is denoted the score prediction term hR(R).
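The nesting procedure of step S304 can be sketched symbolically, under the assumption that elements are strings and summations are written as `sum_x(...)`; the function name and string encoding are illustrative:

```python
# Sketch of the nesting step: repeatedly merge the last element y into
# its predecessor y', inside the sum when y' is a summation and as a
# plain product otherwise, then multiply by the filter1 elements.
def fold_term(ordered_elems, multipliers):
    elems = list(ordered_elems)
    while len(elems) > 1:
        y = elems.pop()                            # current last element
        prev = elems[-1]
        if prev.startswith('sum_'):                # y' of the form sum_x F(x)
            elems[-1] = prev[:-1] + '*' + y + ')'  # move y inside the sum
        else:
            elems[-1] = prev + '*' + y             # otherwise y' * y
    expr = elems[0]
    for m in multipliers:                          # filter1 elements
        expr = m + '*' + expr
    return expr
```

On three toy elements this produces the expected nested expression, with the conditional probabilities pushed inside the innermost applicable sum.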
S305: generating the score prediction formula:
The score prediction formula is generated from the score prediction term hR(R); it evaluates, for each value r of the score R, a quantity proportional to the posterior probability: P(R = r | E) ∝ hR(r).
S203: computing the score prediction result set RMap_RDD:
A score prediction result set RMap_RDD of format <key, value> is defined and generated as follows: all values of the score R are traversed; the current value of R is taken as the key, and the result computed by substituting that value into the score prediction formula is recorded as the value.
S204: obtaining the score prediction result:
An action operation collect is applied to RMap_RDD, gathering the contents of the score prediction result set RMap_RDD; a foreach function is then applied to RMap_RDD, traversing all <key, value> pairs of the score prediction result set and selecting the key of the pair with the largest value as the score prediction result.
S107: preference estimation:
Fig. 4 is a flow chart of a specific embodiment of preference estimation in the present invention. As shown in Fig. 4, preference estimation in the present invention comprises the following steps:
S401: generating a preference conditional probability set L_RDD:
The movie information set L' requiring preference estimation is determined, and the summation-formula elements whose summation variable belongs to the set L' are screened out of the union set union_RDD and modified. The specific operation is: a transformation operation map is applied to union_RDD and, when the summation variable belongs to the set L', the corresponding summation-formula element is replaced by the expression inside the brackets of the summation formula, generating the preference conditional probability set L_RDD.
S402: generating a preference estimation formula:
Fig. 5 is a flow chart of an embodiment of generating a preference estimation formula in the present invention. As shown in Fig. 5, the specific steps of generating the preference estimation formula are:
S501: reading the information for preference estimation:
The user information requiring preference estimation, E_C = {C1=c1, C2=c2, …, CM=cM}, is read, and {C1, C2, …, CM} is denoted C.
S502: information replacement:
A transformation operation map is applied to the preference conditional probability set L_RDD. For each element of the form Σ_x F(x) or P(v|π(v)) in L_RDD, if x, v or π(v) belongs to the set C, it is replaced by the corresponding element of the set E_C; that is, the information requiring preference estimation is substituted into the relevant elements of the preference conditional probability set L_RDD, generating a preference evidence set LE_RDD.
S503: generating a query preference probability table set LS_RDD:
A query preference probability table set LS_RDD is defined and generated as follows: a transformation operation map is applied to the preference evidence set LE_RDD and, for each element, all elements of the conditional probability table set Θ are queried; if the probability value corresponding to the element of LE_RDD can be obtained, the element is replaced by that probability value.
S504: generating a preference estimation term hL(L'):
A transformation operation filter is applied to divide the query preference probability table set LS_RDD: elements of the form P(v|π(v)) or Σ_x F(x) are divided into filter0_LS_RDD, and the remaining elements into filter1_LS_RDD. The elements of filter0_LS_RDD are sorted in reverse order of v or x in the elimination order set ρ, and the last element y is selected: if the preceding element y' is of the form Σ_x F(x), the preceding element is replaced by Σ_x(F(x)·y); otherwise it is replaced by y'·y. Element y is then deleted and the now-last element is processed in the same way, until only one element remains. The expression of the last remaining element is then multiplied by all elements of filter1_LS_RDD, and the product expression is denoted the preference estimation term hL(L').
S505: generating the preference estimation formula:
The preference estimation formula is generated from hL(L'); it evaluates, for each value combination l' of L', a quantity proportional to the posterior probability: P(L' = l' | E_C) ∝ hL(l').
S403: computing the preference estimation result set LMap_RDD:
A preference estimation result set LMap_RDD of format <key, value> is defined and generated as follows: all value combinations of the movie information set L' requiring preference estimation are traversed; the values of the movie information in the current combination are taken as the key, and the result of substituting them into the preference estimation formula is recorded as the value.
S404: obtaining the preference estimation result:
An action operation collect is applied to LMap_RDD, gathering the contents of the preference estimation result set LMap_RDD; a foreach function is then applied to LMap_RDD, traversing the preference estimation result set and selecting the key of the <key, value> pair with the largest value as the preference estimation result.
In order to better illustrate the technical effect of the invention, a specific example is used below for description of the technical scheme and experimental verification.
First, a movie multidimensional preference model is read; the multidimensional preference model in this embodiment is generated from movie scoring data. Table 1 is an example of the movie scoring data in this embodiment.
Score R | Occupation C1 | Residence C2 | Director I1 | Era I2 | Lead actor I3
---|---|---|---|---|---
4 | Doctor | Beijing | Zhang Yimou | 2010s | Zhang Ziyi
5 | Lawyer | Shanghai | Chen Kaige | 2010s | Wang Baoqiang
3 | Lawyer | Beijing | Zhang Yimou | 1990s | Gong Li
4 | Teacher | Kunming | Stephen Chow | 2010s | Huang Bo
5 | Doctor | Shanghai | Spielberg | 1990s | Tom Hanks

TABLE 1
Fig. 6 is a diagram of a multi-dimensional preference model of the movie in the present embodiment.
First, the item set Name_RDD is generated from the movie scoring data, and then the conditional probability set F_RDD is generated from the multidimensional preference model shown in Fig. 6, which can be expressed as:
F_RDD = {P(C1), P(C2), P(L1|C1,C2), P(L2|C1,C2), P(L3|C1,C2), P(I1|L1),
P(I2|L2), P(I3|L3), P(R|L1,L2,L3,I1,I2,I3)}
The transformation operation filter is applied to the conditional probability set F_RDD, dividing it into two RDDs: filter0_RDD, whose elements involve neither "occupation" nor "residence", and filter1_RDD, whose elements involve only "occupation" and "residence", namely:
filter0_RDD={P(L1|C1,C2),P(L2|C1,C2),P(L3|C1,C2),
P(I1|L1),P(I2|L2),P(I3|L3),P(R|L1,L2,L3,I1,I2,I3)}
filter1_RDD={P(C1),P(C2)}
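This split can be checked with a plain-Python stand-in for the Spark filter transformation; the `(v, parents)` tuple encoding of the factors is an illustrative assumption:

```python
# Separate the factors over user information ("occupation" C1,
# "residence" C2) from all other factors; tuple encoding is illustrative.
user_info = {'C1', 'C2'}
f_rdd = [('C1', ()), ('C2', ()),
         ('L1', ('C1', 'C2')), ('L2', ('C1', 'C2')), ('L3', ('C1', 'C2')),
         ('I1', ('L1',)), ('I2', ('L2',)), ('I3', ('L3',)),
         ('R', ('L1', 'L2', 'L3', 'I1', 'I2', 'I3'))]
filter1_rdd = [f for f in f_rdd if f[0] in user_info]      # P(C1), P(C2)
filter0_rdd = [f for f in f_rdd if f[0] not in user_info]  # the other seven
```

Note that only the child variable v decides the split; factors such as P(L1|C1,C2) stay in filter0_RDD even though user information appears among their parents.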
Then, according to the maximum potential search method, the elimination order set ρ = {R, I1, L1, I2, L2, I3, L3} is determined, and elimination is carried out on the non-user-information conditional probability set filter0_RDD, obtaining the elimination result set RF_RDD, whose contents are:
RF_RDD = {Σ_R P(R|L1,L2,L3,I1,I2,I3), Σ_I1 P(I1|L1), Σ_L1 P(L1|C1,C2), Σ_I2 P(I2|L2), Σ_L2 P(L2|C1,C2), Σ_I3 P(I3|L3), Σ_L3 P(L3|C1,C2)}
Finally, a union function is applied to the elimination result set RF_RDD, merging all elements of the user-information conditional probability set filter1_RDD into RF_RDD and generating the union set union_RDD:
union_RDD = RF_RDD ∪ {P(C1), P(C2)}
then, score prediction is carried out, and the specific process is as follows:
First, the map function is applied to union_RDD, replacing the summation formulas Σ_R P(R|L1,L2,L3,I1,I2,I3), Σ_I1 P(I1|L1), Σ_I2 P(I2|L2) and Σ_I3 P(I3|L3) with P(R|L1,L2,L3,I1,I2,I3), P(I1|L1), P(I2|L2) and P(I3|L3) respectively, generating the scoring conditional probability set R_RDD:
R_RDD = {P(C1), P(C2), Σ_L1 P(L1|C1,C2), Σ_L2 P(L2|C1,C2), Σ_L3 P(L3|C1,C2), P(R|L1,L2,L3,I1,I2,I3), P(I1|L1), P(I2|L2), P(I3|L3)}
Then the user information C1 = lawyer, C2 = Beijing and the scoring-object information I1 = Crime, I2 = 1990s, I3 = Huoge are read in; substituting this information into the scoring conditional probability set R_RDD generates the evidence set RE_RDD:
Then all elements of the conditional probability table set Θ are queried to obtain the probability values corresponding to the elements of the evidence set RE_RDD, generating the query score probability table set RS_RDD:
Applying the transformation operation filter to divide the query score probability table set RS_RDD gives:
filter1_RS_RDD = {0.27, 0.38}
The elements of filter0_RS_RDD are sorted in reverse order of the elimination order set ρ = {R, I1, L1, I2, L2, I3, L3}:
The last element P(R|L1,L2,L3, I1=Crime, I2=1990s, I3=Huoge) is selected; since its preceding element P(I1=Crime|L1) is not of the summation form Σ_x F(x), the preceding element is replaced by P(I1=Crime|L1)*P(R|L1,L2,L3, I1=Crime, I2=1990s, I3=Huoge). The new last element is then selected and its preceding element is replaced in the same way; continuing by analogy, finally only one element remains:
This element is then multiplied by all elements of filter1_RS_RDD, generating the score prediction term hR(R):
The following score prediction formula is generated:
All possible values {5, 4, 3, 2, 1} of the score R are traversed; with the current value of R as key and the result of the score prediction formula as value, the result set R_Map = {<5, 0.33>, <4, 0.21>, <3, 0.16>, <2, 0.14>, <1, 0.14>} is generated.
Finally, all <key, value> pairs in R_Map are traversed and the key of the pair with the largest value is selected as the score prediction result; in this embodiment the score prediction result is 5.
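The final selection reduces to an argmax over the <key, value> pairs; a quick check with the R_Map values of this example:

```python
# R_Map values from the worked example; the prediction is the key of
# the pair with the largest value.
r_map = {5: 0.33, 4: 0.21, 3: 0.16, 2: 0.14, 1: 0.14}
prediction = max(r_map.items(), key=lambda kv: kv[1])[0]
print(prediction)  # -> 5
```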
Next, preference estimation is performed. To simplify the process, it is assumed in this embodiment that the movie information set L' to be estimated contains only one preference, L1, i.e., the preference for genre. The specific process of preference estimation is as follows:
First, the map function is applied to union_RDD, replacing Σ_L1 P(L1|C1,C2) with P(L1|C1,C2) and generating the preference conditional probability set L_RDD:
Then the user information C1 = lawyer, C2 = Beijing is read in; substituting this information into the preference conditional probability set L_RDD generates the evidence set LE_RDD:
Then all elements of the conditional probability table set Θ are queried to obtain the probability values corresponding to the elements of LE_RDD, generating the query preference probability table set LS_RDD:
The preference estimation term hL(L1) is generated:
The following preference estimation formula is generated:
All possible values of L1 are traversed; with the current value of L1 as key and the formula's result as value, the result set L_Map = {<comedy, 0.30>, <war, 0.27>, <horror, 0.15>, <animation, 0.22>} is generated.
Finally, all <key, value> pairs in L_Map are traversed, and the key "comedy" of the largest-value pair <comedy, 0.30> is selected as the preference estimation result.
Although an illustrative embodiment of the present invention has been described above to facilitate understanding by those skilled in the art, it should be understood that the invention is not limited to the scope of that embodiment. To those skilled in the art, various changes are apparent so long as they fall within the spirit and scope of the invention as defined by the appended claims, and all matter utilizing the inventive concept is protected.
Claims (1)
1. A movie scoring prediction and preference estimation method based on a multi-dimensional preference model is characterized by comprising the following steps:
S1: reading a movie multidimensional preference model, comprising a model structure G and a conditional probability table set Θ, where G = (V, E) is the directed acyclic graph structure of the model, V is the set of nodes in the graph, E is the set of directed edges, and Θ is the set of the conditional probability tables of all nodes;
S2: generating an item set Name_RDD from the node set V of the multidimensional preference model, where RDD denotes a resilient distributed dataset and Name_RDD in set form is {R, C1, C2, …, CM, I1, I2, …, IN, L1, L2, …, LN}, where R denotes the user score, Ci denotes the i-th item of user information, i = 1, 2, …, M, M denotes the number of items of user information, Ij denotes the movie information of the j-th dimension, j = 1, 2, …, N, N denotes the number of items of movie information, and Lj denotes the user's preference for the movie information of the j-th dimension;
defining a conditional probability set F_RDD whose elements have the format P(v) or P(v|π(v)), where v is the index of the current node, v ∈ V, π(v) is the set of parent nodes of node v, and P(v) or P(v|π(v)) denotes the conditional probability corresponding to node v in the conditional probability table set Θ, the conditional probability set F_RDD being generated as follows:
applying a transformation operation map to Name_RDD and, for each node v in Name_RDD, querying the multidimensional preference model to judge whether a parent node exists: if no parent node exists, converting the element into P(v); if a parent node exists, converting the element into P(v|π(v));
S3: applying a transformation operation filter to the conditional probability set F_RDD, selecting the elements whose v is user information and dividing them into a user-information conditional probability set filter1_F_RDD, and dividing the remaining elements into a non-user-information conditional probability set filter0_F_RDD;
S4: determining the elimination order of the nodes according to the maximum potential search method, recording the elimination order set as ρ, and defining an elimination result set RF_RDD, generated as follows:
selecting elimination variables from the elimination order set ρ in elimination order; for the current elimination variable x ∈ ρ, searching all elements of the non-user-information conditional probability set filter0_F_RDD: for each element P(v|π(v)) in filter0_F_RDD, if v equals x or x ∈ π(v), taking the element as a summation term of the elimination variable x and deleting it from filter0_F_RDD, otherwise performing no operation; letting the number of summation terms of variable x be K and denoting each term f_k(x), k = 1, 2, …, K, obtaining the corresponding summation formula Σ_x F(x), where F(x) = f_1(x)*…*f_K(x); putting the generated summation formula into the elimination result set RF_RDD and selecting the next elimination variable, until the elements of the elimination order set ρ have all been traversed; if the non-user-information conditional probability set filter0_F_RDD is not empty after the traversal, putting its remaining elements directly into the elimination result set RF_RDD, otherwise performing no operation;
S5: applying a transformation operation union to the elimination result set RF_RDD, merging all elements of the user-information conditional probability set filter1_F_RDD into RF_RDD, and generating a union set union_RDD;
S6: when score prediction is required, adopting the following method:
S6.1: applying a transformation operation map to union_RDD, replacing each summation-formula element whose summation variable belongs to the movie information set I or is the score R with the expression inside the brackets of the summation formula, and generating a scoring conditional probability set R_RDD;
S6.2: generating a score prediction formula, the specific steps comprising:
S6.2.1: reading the user information requiring score prediction, E_C = {C1=c1, C2=c2, …, CM=cM}, and the movie information of a plurality of dimensions, E_I = {I1=i1, I2=i2, …, IN=iN}; denoting the union of E_C and E_I as E, and {C1, C2, …, CM, I1, I2, …, IN} as CI;
S6.2.2: applying a transformation operation map to the scoring conditional probability set R_RDD and, for each element of the form Σ_x F(x) or P(v|π(v)) in R_RDD, if x, v or π(v) belongs to the set CI, replacing it with the corresponding element of the set E, generating an evidence set RE_RDD;
S6.2.3: defining a query score probability table set RS_RDD, generated as follows: applying a transformation operation map to the evidence set RE_RDD and, for each element, querying all elements of the conditional probability table set Θ; if the probability value corresponding to the element of RE_RDD can be obtained, replacing the element with that probability value;
S6.2.4: applying a transformation operation filter to divide the query score probability table set RS_RDD, dividing the elements of the form P(v|π(v)) or Σ_x F(x) into filter0_RS_RDD and the remaining elements into filter1_RS_RDD; sorting the elements of filter0_RS_RDD in reverse order of v or x in the elimination order set ρ; selecting the last element y: if the preceding element y' is of the form Σ_x F(x), replacing the preceding element by Σ_x(F(x)·y), otherwise replacing it by y'·y; then deleting element y and continuing to process the now-last element in the same way until only one element remains; then multiplying the expression of the last remaining element by all elements of filter1_RS_RDD, the product expression being denoted the score prediction term hR(R);
S6.2.5: generating the score prediction formula from the score prediction term hR(R), which evaluates, for each value r of the score R, a quantity proportional to the posterior probability: P(R = r | E) ∝ hR(r);
S6.3: defining a score prediction result set RMap_RDD of format <key, value>, generated as follows: traversing all values of the score R, taking the current value of R as the key, and recording as the value the result computed by substituting that value into the score prediction formula;
S6.4: applying an action operation collect to RMap_RDD, gathering the contents of the score prediction result set RMap_RDD, then applying a foreach function to RMap_RDD, traversing all <key, value> pairs of the score prediction result set, and selecting the key of the pair with the largest value as the score prediction result;
S7: when preference estimation is required, adopting the following method:
S7.1: determining the movie information set L' requiring preference estimation, applying a transformation operation map to union_RDD and, when the summation variable belongs to the set L', replacing the corresponding summation-formula element with the expression inside the brackets of the summation formula, generating a preference conditional probability set L_RDD;
S7.2: generating a preference estimation formula, the specific steps being:
S7.2.1: reading the user information requiring preference estimation, E_C = {C1=c1, C2=c2, …, CM=cM}, and denoting {C1, C2, …, CM} as C;
S7.2.2: applying a transformation operation map to the preference conditional probability set L_RDD and, for each element of the form Σ_x F(x) or P(v|π(v)) in L_RDD, if x, v or π(v) belongs to the set C, replacing it with the corresponding element of the set E_C, generating a preference evidence set LE_RDD;
S7.2.3: defining a query preference probability table set LS_RDD, generated as follows: applying a transformation operation map to the preference evidence set LE_RDD and, for each element, querying all elements of the conditional probability table set Θ; if the probability value corresponding to the element of LE_RDD can be obtained, replacing the element with that probability value;
S7.2.4: applying a transformation operation filter to divide the query preference probability table set LS_RDD, dividing the elements of the form P(v|π(v)) or Σ_x F(x) into filter0_LS_RDD and the remaining elements into filter1_LS_RDD; sorting the elements of filter0_LS_RDD in reverse order of v or x in the elimination order set ρ; selecting the last element y: if the preceding element y' is of the form Σ_x F(x), replacing the preceding element by Σ_x(F(x)·y), otherwise replacing it by y'·y; then deleting element y and continuing to process the now-last element in the same way until only one element remains; then multiplying the expression of the last remaining element by all elements of filter1_LS_RDD, the product expression being denoted the preference estimation term hL(L');
S7.2.5: generating the preference estimation formula from hL(L'), which evaluates, for each value combination l' of L', a quantity proportional to the posterior probability: P(L' = l' | E_C) ∝ hL(l');
S7.3: defining a preference estimation result set LMap_RDD of format <key, value>, generated as follows: traversing all value combinations of the movie information set L' requiring preference estimation, taking the values of the movie information in the current combination as the key, and recording as the value the result of substituting them into the preference estimation formula;
S7.4: applying an action operation collect to LMap_RDD, gathering the contents of the preference estimation result set LMap_RDD, then applying a foreach function to LMap_RDD, traversing the preference estimation result set, and selecting the key of the <key, value> pair with the largest value as the preference estimation result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710880804.9A CN107730306B (en) | 2017-09-26 | 2017-09-26 | Movie scoring prediction and preference estimation method based on multi-dimensional preference model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107730306A CN107730306A (en) | 2018-02-23 |
CN107730306B true CN107730306B (en) | 2021-02-02 |
Family
ID=61208014
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710880804.9A Active CN107730306B (en) | 2017-09-26 | 2017-09-26 | Movie scoring prediction and preference estimation method based on multi-dimensional preference model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107730306B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108520450B (en) * | 2018-03-21 | 2021-09-24 | 电子科技大学 | Recommendation method and system for local low-rank matrix approximation based on implicit feedback information |
CN109064209A (en) * | 2018-06-28 | 2018-12-21 | 四川斐讯信息技术有限公司 | A kind of advertisement placement method and server based on portfolio |
CN110134828B (en) * | 2019-04-29 | 2021-02-23 | 北京物资学院 | Video off-shelf detection method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103793853A (en) * | 2014-01-21 | 2014-05-14 | 中国南方电网有限责任公司超高压输电公司检修试验中心 | Overhead power transmission line running state assessment method based on bidirectional Bayesian network |
WO2015160415A2 (en) * | 2014-01-31 | 2015-10-22 | The Trustees Of Columbia University In The City Of New York | Systems and methods for visual sentiment analysis |
CN106570525A (en) * | 2016-10-26 | 2017-04-19 | 昆明理工大学 | Method for evaluating online commodity assessment quality based on Bayesian network |
Non-Patent Citations (1)
Title |
---|
Movie scoring data analysis and a user behavior preference model; Hu Miaoyuan; China Master's Theses Full-text Database, Information Science and Technology; 2017-02-15; full text *
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |