CN111522886B - Information recommendation method, terminal and storage medium


Info

Publication number
CN111522886B
Authority
CN
China
Prior art keywords
label
tag
information
similarity
target
Prior art date
Legal status
Active
Application number
CN201910045755.6A
Other languages
Chinese (zh)
Other versions
CN111522886A (en)
Inventor
周志红
郭叶
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Communications Ltd Research Institute
Priority to CN201910045755.6A
Publication of CN111522886A
Application granted
Publication of CN111522886B
Legal status: Active
Anticipated expiration

Abstract

Embodiments of the invention disclose an information recommendation method, a terminal and a storage medium. The method comprises: acquiring a first tag set of user interest information and a second tag set of recommendation information, wherein the first tag set comprises at least one first tag identifying the user interest information and the second tag set comprises at least one second tag identifying the recommendation information; obtaining, from a preset knowledge graph, N third tags having an entity relationship with the first tag, wherein N is a positive integer; and determining the similarity between the recommendation information and the user interest information based on the first tag set, the N third tags having an entity relationship with the first tag, and a word vector library, wherein the word vector library comprises at least word vectors of the first tag and word vectors of the second tag.

Description

Information recommendation method, terminal and storage medium
Technical Field
The present invention relates to computer technologies, and in particular, to an information recommendation method, a terminal, and a storage medium.
Background
Information recommendation methods aim primarily at recommending to a user the information or goods the user may be interested in, so as to help the user choose goods, improve the user experience, and increase user stickiness. Current recommendation methods usually take user behavior data (i.e., interaction data between the user and items) and item content data (i.e., content attributes of the item itself, such as its name and profile) as the basic data, and complete the recommendation according to the degree of matching between the user's interests and the items, or the similarity between different items. Common algorithms include content-based recommendation, collaborative filtering and matrix factorization. However, these algorithms are mainly based on the user's historical browsing records and therefore recommend similar content, so the recommendation coverage is narrow and it is difficult to follow shifts in the user's interests.
Disclosure of Invention
In order to solve the above technical problems, embodiments of the invention are expected to provide an information recommendation method, a terminal and a storage medium that enlarge the information recommendation range by expanding the user's interest tags.
The technical scheme of the invention is realized as follows:
the embodiment of the invention provides an information recommendation method, which comprises the following steps:
acquiring a first tag set of user interest information and a second tag set of recommendation information; the first tag set comprises at least one first tag for identifying user interest information, and the second tag set comprises at least one second tag for identifying recommendation information;
obtaining, from a preset knowledge graph, N third tags having an entity relationship with the first tag; wherein N is a positive integer;
determining the similarity between the recommendation information and the user interest information based on the first tag set, the N third tags having an entity relationship with the first tag, and a word vector library; the word vector library at least comprises word vectors of the first tag and word vectors of the second tag.
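The following minimal Python sketch (not part of the patent text) illustrates one possible in-memory representation of the objects named in these three steps; all names and example values are assumptions for illustration only.

# Assumed, illustrative data structures for the tag sets and the word vector library.

# First tag set of the user interest information: first tag -> weight value s_j.
first_tag_set = {"football": 0.8, "world cup": 0.5, "fitness": 0.3}

# Second tag set of one piece of recommendation information: a list of second tags y_i.
second_tag_set = ["football", "striker", "training"]

# Third tags obtained from the preset knowledge graph: third tag -> hop count
# (one possible relationship parameter) with respect to a first tag.
third_tags = {"striker": 1, "goalkeeper": 1, "stadium": 2}

# Word vector library containing at least the word vectors of the first and second tags.
word_vector_library = {
    "football": [0.12, -0.03, 0.44],
    "striker":  [0.10, -0.01, 0.40],
    "training": [0.02,  0.20, 0.05],
}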
In the above solution, the determining, based on the first tag set, N third tags having an entity relationship with the first tag, and a word vector library, the similarity between the recommendation information and the user interest information includes: determining similarity scores of at least one second tag in the second tag set based on the first tag set, N third tags having entity relationships with the first tag, and a word vector library; and accumulating and summing the similarity scores of the at least one second label, and taking the accumulated result as the similarity between the recommended information and the user interest information.
In the above scheme, the first tag set further includes a weight value of the first tag; the determining similarity scores of at least one second tag in the second tag set based on the first tag set, the N third tags having entity relationships with the first tag, and a word vector library, includes: when the target second label is the same as the first label, calculating a similarity score of the target second label according to the weight value of the first label in the first label set; or when the target second label is the same as the third label, calculating a similarity score of the target second label according to the weight value of the first label in the first label set and a relation parameter for representing the entity relation between the third label and the first label; or when the target second label is different from the first label and the third label, calculating the similarity score of the target second label according to the word vector library and the weight value of the first label in the first label set; the target second tag is any one second tag in the second tag set.
In the above scheme, when the target second tag is the same as the first tag, calculating the similarity score of the target second tag according to the weight value of the first tag in the first tag set includes: obtaining a similarity score of the target second label according to the weight value and the first contribution coefficient of the first label in the first label set;
When the target second tag is the same as the third tag, calculating a similarity score of the target second tag according to the weight value of the first tag in the first tag set and a relationship parameter for representing the entity relationship between the third tag and the first tag, including: obtaining a similarity score of the target second label according to a weight value of a first label in the first label set, a relation parameter used for representing an entity relation between a third label and the first label and a second contribution coefficient;
when the target second tag is different from the first tag and the third tag, calculating a similarity score of the target second tag according to the word vector library and the weight value of the first tag in the first tag set, including: acquiring word vectors of the target second tag and word vectors of at least one first tag in the first tag set from the word vector library; and obtaining a similarity score of the target second label according to the similarity value of the word vector of the target second label and the word vector of the at least one first label, the weight value of the first label in the first label set and the third contribution coefficient.
In the above scheme, the method further comprises: based on the similarity between the recommendation information and the user interest information, pushing the recommendation information to a user according to a preset recommendation strategy;
the preset recommendation strategy comprises the following steps: pushing the recommendation information to a user when the similarity is larger than a recommendation threshold; or when at least two pieces of recommended information are contained, pushing the recommended information to the user according to the sequence from the high similarity to the low similarity.
In the above scheme, the method further comprises: obtaining a corpus; constructing a knowledge graph of the corpus to obtain the preset knowledge graph;
and calculating the word vector of each word in the corpus based on the preset word vector training model to obtain the word vector library.
In the above scheme, the preset word vector training model is obtained based on a word2vec model combined with the TransE algorithm.
The embodiment of the invention also provides a terminal, which is characterized in that the terminal comprises:
the acquisition unit is used for acquiring a first tag set of user interest information and a second tag set of recommendation information; the first tag set comprises at least one first tag for identifying user interest information, and the second tag set comprises at least one second tag for identifying recommendation information;
the training unit is used for obtaining, from a preset knowledge graph, N third tags having an entity relationship with the first tag; wherein N is a positive integer;
the processing unit is used for determining the similarity between the recommendation information and the user interest information based on the first tag set, the N third tags having an entity relationship with the first tag, and a word vector library; the word vector library at least comprises word vectors of the first tag and word vectors of the second tag.
The embodiment of the invention also provides another terminal, which comprises: a processor and a memory configured to store a computer program capable of running on the processor,
wherein the processor is configured to execute the steps of the method described above when the computer program is run.
There is also provided in an embodiment of the invention a computer readable storage medium having stored thereon a computer program, characterized in that the computer program when executed by a processor implements the steps of the method described above.
By adopting the above technical solution, N third tags related to the first tag are obtained by diffusing the first tag over the knowledge graph and are used to expand the interest tags (in the embodiments of the invention, the interest tags are the first tags), and the similarity between the recommendation information and the user interest information is determined in combination with the first tag set and the word vector library. In this way, by expanding the user's interest tags, the user's potential interest tags can be discovered and the information recommendation range is enlarged.
Drawings
FIG. 1 is a schematic flow chart of a method for recommending information according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a second flow chart of an information recommendation method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a third flow chart of an information recommendation method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a component structure of a functional architecture of a terminal system according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a training structure of a word vector training model according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a structure of a Ripple network of an interest tag according to an embodiment of the present invention;
FIG. 7 is a fourth flowchart of an information recommendation method according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a first component structure of a terminal according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a second component structure of a terminal according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a third component structure of a terminal according to an embodiment of the present invention.
Detailed Description
For a more complete understanding of the nature and the technical content of the embodiments of the present invention, reference should be made to the following detailed description of embodiments of the invention, taken in conjunction with the accompanying drawings, which are meant to be illustrative only and not limiting of the embodiments of the invention.
Example 1
As shown in fig. 1, the information recommendation method includes:
Step 101: acquiring a first tag set of user interest information and a second tag set of recommendation information; the first tag set comprises at least one first tag for identifying user interest information, and the second tag set comprises at least one second tag for identifying recommendation information;
step 102: obtaining, from a preset knowledge graph, N third tags having an entity relationship with the first tag; wherein N is a positive integer;
step 103: determining the similarity between the recommendation information and the user interest information based on the first tag set, the N third tags having an entity relationship with the first tag, and a word vector library; the word vector library at least comprises word vectors of the first tag and word vectors of the second tag.
Here, the execution subject of steps 101 to 103 may be a processor of the terminal having the information recommendation function.
Here, the user interest tag set may be obtained from the user's behavior data in a recent period, such as clicking, downloading, liking, favoriting, searching and browsing: keywords are extracted from the behavior data, each keyword may serve as an interest tag of the user, and multiple keywords form the user interest tag set.
The information to be recommended may be an item, an official account, a website, an article, or the like. For example, the information to be recommended may be the A most recently added items or the B items on sale, where A and B are positive integers. Keywords are extracted from the information to be recommended as second tags, and the multiple second tags are used as the second tag set of the recommendation information.
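The patent does not name a keyword extraction tool for building the second tag set; purely as an illustration, and assuming Chinese item descriptions, the third-party jieba library's TF-IDF keyword extraction could be used roughly as follows.

# Illustrative only: jieba is one common Chinese keyword extractor, not mandated by the patent.
import jieba.analyse

def extract_second_tags(item_text, top_k=10):
    # Return the top_k keywords of the item description to serve as second tags.
    return jieba.analyse.extract_tags(item_text, topK=top_k)

second_tag_set = extract_second_tags("最新上架的足球训练装备，适合前锋日常训练")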
In practical application, the method further comprises the following steps: obtaining a corpus; constructing a knowledge graph of a corpus to obtain a preset knowledge graph; based on a preset word vector training model, calculating the word vector of each word in the corpus to obtain a word vector library.
Illustratively, a knowledge graph of the corpus is constructed by taking the Baidu Encyclopedia entry library or the Wikipedia entry library as the corpus. The knowledge graph comprises not only the words of the corpus but also the entity relationships among the words, and it may be a knowledge graph in any language, such as Chinese, English or Japanese.
A Ripple network of the first tag is obtained from the knowledge graph; the Ripple network of the first tag comprises the first tag, N third tags having an entity relationship with the first tag (where N is a positive integer), and the relationship parameters used to characterize the entity relationship between each third tag and the first tag. Here, the Ripple network of the first tag is the result of expanding the user interest tag.
The word vector of each word in the corpus is calculated using the preset word vector training model to obtain the word vector library. Here, because the corpus contains the vocabulary that most users use daily, the obtained word vector library contains the word vector of the first tag and the word vector of the second tag. Alternatively, the preset word vector training model is used to obtain the word vector of the first tag and the word vector of the second tag, and the obtained word vectors are used to build the word vector library.
The word vector training model may be, for example, any of the following: the LSA (Latent Semantic Analysis) matrix decomposition model, the PLSA (Probabilistic Latent Semantic Analysis) probabilistic latent semantic model, the LDA (Latent Dirichlet Allocation) document generation model, or the word2vec model. Preferably, the word vector training model is obtained by combining a word2vec model with the TransE algorithm.
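As a hedged illustration of the plain word2vec option listed above (the preferred word2vec+TransE combination is described in Example IV), a word vector library could be built from a tokenized corpus with the gensim library roughly as follows; the corpus contents and parameter values are assumptions.

# Sketch of building a word vector library with plain word2vec (gensim);
# the TransE-regularized variant preferred by the patent is not shown here.
from gensim.models import Word2Vec

tokenized_corpus = [
    ["user", "likes", "football", "match"],
    ["striker", "scores", "goal", "in", "stadium"],
]

model = Word2Vec(sentences=tokenized_corpus, vector_size=100, window=5,
                 min_count=1, sg=0, negative=5)  # sg=0 -> CBOW with negative sampling

word_vector_library = {word: model.wv[word] for word in model.wv.index_to_key}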
In practical applications, step 103 specifically includes: and determining a corresponding similarity calculation method by judging whether a second label in the second label set is the same as the first label or the third label, and finally obtaining the similarity of the recommended information and the user interest information.
In some embodiments, the method further comprises: based on the similarity between the recommendation information and the user interest information, pushing the recommendation information to the user according to a preset recommendation strategy;
the preset recommendation strategy comprises the following steps: pushing recommendation information to a user when the similarity is larger than a recommendation threshold; or when at least two pieces of recommended information are contained, pushing the recommended information to the user according to the sequence from the high similarity to the low similarity.
That is, when the similarity of the recommendation information is greater than the recommendation threshold, the recommendation information is transmitted to the user terminal, and the recommendation information is displayed on the user terminal.
When similarities for at least two pieces of recommendation information are obtained, the pieces of recommendation information whose similarity is greater than the recommendation threshold are sent to the user terminal one by one in descending order of similarity, and the display time order, or the front-to-back display position, is set on the user terminal according to the descending similarity. Alternatively, all recommendation information whose similarity is greater than the recommendation threshold is sent to the terminal at the same time, and the display time order or display position is set on the user terminal according to the similarity.
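A minimal sketch of the two pushing strategies just described, assuming the similarities have already been computed; the threshold value and the data shapes are illustrative assumptions.

# Push strategy sketch: threshold filtering plus descending-similarity ordering.
def select_recommendations(similarities, recommend_threshold=0.5):
    # similarities: piece of recommendation information -> similarity with the user interest information.
    above_threshold = {item: s for item, s in similarities.items() if s > recommend_threshold}
    # Push in order from high similarity to low similarity.
    return sorted(above_threshold, key=above_threshold.get, reverse=True)

push_order = select_recommendations({"item_A": 0.9, "item_B": 0.4, "item_C": 0.7})
# push_order == ["item_A", "item_C"]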
By adopting the technical scheme, N third labels related to the first label are obtained based on the diffusion mode of the first label in the knowledge graph and are used for expanding the interest label (namely the first label), and the similarity of the recommended information and the user interest information is determined by combining the first label set and the word vector library. Therefore, by expanding the interest labels of the users, the potential interest labels of the users can be found, and the information recommendation range is enlarged.
Example two
In order to further embody the object of the present invention, on the basis of the first embodiment of the present invention, as shown in fig. 2, the information recommendation method specifically includes:
step 201: acquiring a first tag set of user interest information and a second tag set of recommendation information; the first label set comprises at least one first label for identifying user interest information, and the second label set comprises at least one second label for identifying recommendation information.
Step 202: obtaining N third labels with entity relation with the first label from a preset knowledge graph; wherein N is a positive integer;
step 203: determining similarity scores of at least one second tag in the second tag set based on the first tag set, N third tags having entity relationships with the first tag, and a word vector library;
step 204: and accumulating and summing the similarity scores of the at least one second label, and taking the accumulated result as the similarity between the recommendation information and the user interest information.
Here, the execution subject of steps 201 to 204 may be a processor of the terminal having the information recommendation function.
Here, the user interest tag set may be obtained from the user's behavior data in a recent period, such as clicking, downloading, liking, favoriting, searching and browsing: keywords are extracted from the behavior data, each keyword may serve as an interest tag of the user, and multiple keywords form the user interest tag set.
The information to be recommended may be an item, an official account, a website, an article, or the like. For example, the information to be recommended may be the A most recently added items or the B items on sale. Keywords are extracted from the information to be recommended as second tags, and the multiple second tags are used as the second tag set of the recommendation information.
In practical application, the method further comprises the following steps: obtaining a corpus; constructing a knowledge graph of a corpus to obtain a preset knowledge graph; based on a preset word vector training model, calculating the word vector of each word in the corpus to obtain a word vector library.
In practical application, in step 203, a similarity score of each second tag in the second tag set is obtained based on the first tag set, the N third tags having an entity relationship with the first tag, and the word vector library. The similarity score is used to characterize the similarity between a second tag and the first tag set, and the similarity between the recommendation information identified by the second tag set and the user interest information identified by the first tag set is determined by calculating the similarity between each second tag in the second tag set and the first tag set.
Further, based on the similarity score of each second tag in the second tag set, the similarity between the recommended information and the user interest information is obtained.
Example III
In order to further embody the object of the present invention, on the basis of the first embodiment of the present invention, as shown in fig. 3, the information recommendation method specifically includes:
step 301: acquiring a first tag set of user interest information and a second tag set of recommendation information; the first tag set comprises at least one first tag for identifying user interest information, and the second tag set comprises at least one second tag for identifying recommendation information;
step 302: obtaining, from a preset knowledge graph, N third tags having an entity relationship with the first tag; wherein N is a positive integer;
step 303: when the target second label is the same as the first label, calculating a similarity score of the target second label based on the first label set;
step 304: when the target second label is the same as the third label, calculating a similarity score of the target second label based on the first label set and a relation parameter for representing the entity relation between the third label and the first label;
step 305: when the second label of the target is different from the first label and the third label, calculating a similarity score of the second label of the target based on the word vector library and the first label set;
Step 306: and accumulating and summing the similarity scores of the at least one second label, and taking the accumulated result as the similarity between the recommendation information and the user interest information.
Here, the execution subject of steps 301 to 306 may be a processor of the terminal having the information recommendation function.
Here, the user interest tag set may be obtained from the user's behavior data in a recent period, such as clicking, downloading, liking, favoriting, searching and browsing: keywords are extracted from the behavior data, each keyword may serve as an interest tag of the user, and multiple keywords form the user interest tag set.
The information to be recommended may be an item, an official account, a website, an article, or the like. For example, the information to be recommended may be the A most recently added items or the B items on sale. Keywords are extracted from the information to be recommended as second tags, and the multiple second tags are used as the second tag set of the recommendation information.
In practical application, the method further comprises the following steps: obtaining a corpus; constructing a knowledge graph of a corpus to obtain a preset knowledge graph; based on a preset word vector training model, calculating the word vector of each word in the corpus to obtain a word vector library.
It should be noted that, before executing steps 303 to 305, it is further required to determine whether the second label of the target is the same as the first label or the third label, and if the second label of the target is the same as any one of the first labels in the first label set, executing step 303; if the target second tag is the same as any of the third tags, performing step 304; otherwise, step 305 is performed. Only one of steps 303 to 305 is performed when calculating the similarity score.
In practical application, after obtaining the similarity score of each second tag in the second tag set by using steps 303 to 305, accumulating and summing all the similarity scores, and taking the accumulated result as the similarity between the recommendation information and the interest information of the user.
In practical applications, the first tag set includes, in addition to the at least one first tag, a weight value of each first tag. The weight value of a first tag is used to characterize the user's interest preference: a larger weight value indicates greater user interest in a certain type of information, for example information the user has browsed frequently or liked recently, while a smaller weight value indicates less interest, for example information the user browses only occasionally.
Illustratively, the first tag set of the user interest information is [x_1:s_1, x_2:s_2, x_3:s_3, …, x_j:s_j, …, x_n:s_n], where x_j is a first tag, s_j is the weight value of the first tag x_j, and j is a positive integer less than or equal to n.
The second tag set of the recommendation information is [y_1, y_2, y_3, …, y_i, …, y_m], where y_i is a second tag and i is a positive integer less than or equal to m.
The N third tags corresponding to the first tag x_j are [z_1, z_2, z_3, …, z_q, …, z_N], where z_q is a third tag and q is a positive integer less than or equal to N.
If the second tag y_i is the same as the first tag x_j, the similarity score of the second tag y_i is s_j.
If the second tag y_i is the same as the third tag z_q, the similarity score of the second tag y_i is given by a formula (shown only as an image in the original) combining the weight value s_j with the relationship parameter between z_q and the first tag x_j.
If the second tag y_i is neither the first tag x_j nor the third tag z_q, the similarity score of the second tag y_i is max(cos) × s_j, where max(cos) is the maximum cosine value between the word vector of the second tag y_i and the word vectors of the first tags x_j.
In some embodiments, corresponding contribution coefficients may be further set for calculation methods of different similarity scores according to the importance degree of the second label, where a larger contribution coefficient indicates a higher likelihood of being user interest information.
Exemplary contribution coefficients specifically include: a first contribution coefficient, a second contribution coefficient, and a third contribution coefficient.
Step 303 specifically includes: obtaining a similarity score of a target second label according to the weight value and the first contribution coefficient of the first label in the first label set;
step 304 specifically includes: obtaining a similarity score of a target second label according to a weight value of a first label in the first label set, a relation parameter used for representing an entity relation between a third label and the first label and a second contribution coefficient;
step 305 specifically includes: acquiring word vectors of the target second labels from a word vector library and word vectors of at least one first label in the first label set; and obtaining the similarity score of the target second label according to the similarity value of the word vector of the target second label and the word vector of at least one first label, the weight value of the first label in the first label set and the third contribution coefficient.
Specifically, if the second tag y_i is the same as the first tag x_j, the similarity score of the second tag y_i is a_0 · s_j.
If the second tag y_i is the same as the third tag z_q, the similarity score of the second tag y_i is given by a formula (shown only as an image in the original) combining a_1, the weight value s_j and the relationship parameter between z_q and the first tag x_j.
If the second tag y_i is neither the first tag x_j nor the third tag z_q, the similarity score of the second tag y_i is max(cos) × a_2 · s_j, where max(cos) is the maximum cosine value between the word vector of the second tag y_i and the word vectors of the first tags x_j.
Here a_0 is the first contribution coefficient, a_1 is the second contribution coefficient, and a_2 is the third contribution coefficient.
Here, by adding different contribution coefficients, the accuracy of the recommendation information similarity calculation can be improved.
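The sketch below assembles the three cases above into one scoring routine. The first and third cases follow the formulas given (a_0 · s_j and max(cos) × a_2 · s_j); the second-case formula appears only as an image in the original, so the hop-based attenuation used here (a_1 · s_j divided by the hop count) is purely an assumed placeholder, as are the coefficient values.

# Hedged sketch of the three-branch similarity score for one target second tag.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def second_tag_score(tag, first_tags, third_tag_hops, vectors, a0=1.0, a1=0.5, a2=0.3):
    # first_tags: first tag -> weight s_j; third_tag_hops: third tag -> (source first tag, hops).
    if tag in first_tags:                      # case 1: same as a first tag
        return a0 * first_tags[tag]
    if tag in third_tag_hops:                  # case 2: same as a third tag
        source, hops = third_tag_hops[tag]
        # NOTE: this attenuation by hop count is an assumption; the patent's formula is an image.
        return a1 * first_tags[source] / hops
    # case 3: neither -> choose the first tag with the maximum cosine value to its word vector
    best = max(first_tags, key=lambda x: cosine(vectors[tag], vectors[x]))
    return a2 * cosine(vectors[tag], vectors[best]) * first_tags[best]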
Example IV
In the embodiment of the present invention, an application scenario is specifically provided, fig. 4 shows a system function architecture of a terminal with an information recommendation function, including: the system comprises a data access layer, a data processing layer, a recommendation model layer and an application layer.
(1) The data access layer. The basic data it relies on comprises the following two parts:
user behavior data, which mainly describes the interaction behaviors between the user and items, such as clicking, downloading, liking and favoriting;
item content description data, which mainly describes the attributes of an item, such as its name, category, keywords and introduction.
(2) The data processing layer, which is mainly used to extract tags from the item content description data and the user behavior data, specifically:
Item tags: extracted from information such as the category, keywords and introduction; in the embodiments of the invention, the item tags are the second tags.
User interest tags: the interest tags of the user are calculated mainly from the user's recent behaviors, through the items with which the user has interacted. Different score values are assigned to user behaviors representing different degrees of interest; the tags of the items on which the user performed related operations within a recent period are counted, and the scores of each tag are accumulated as the weight of that tag, so that a group of tags of the user is obtained.
For example, the first tag set of the user interest information is [x_1:s_1, x_2:s_2, x_3:s_3, …, x_j:s_j, …, x_n:s_n], where x_j is a first tag, s_j is the weight value of the first tag x_j, and j is a positive integer less than or equal to n.
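A minimal sketch of the weight accumulation just described: user behaviors receive assumed score values, the tags of the items the user interacted with in the recent period are counted, and each tag's scores are accumulated as its weight. The behavior names, score values and item tags are illustrative assumptions.

# Accumulate behavior scores per item tag to form the first tag set [x_j : s_j].
from collections import defaultdict

BEHAVIOR_SCORES = {"click": 1.0, "download": 2.0, "like": 3.0, "favorite": 4.0}  # assumed values

def build_user_interest_tags(recent_behaviors, item_tags):
    # recent_behaviors: list of (item_id, behavior); item_tags: item_id -> list of tags.
    weights = defaultdict(float)
    for item_id, behavior in recent_behaviors:
        for tag in item_tags.get(item_id, []):
            weights[tag] += BEHAVIOR_SCORES.get(behavior, 0.0)
    return dict(weights)

first_tag_set = build_user_interest_tags(
    [("i1", "click"), ("i1", "favorite"), ("i2", "like")],
    {"i1": ["football", "world cup"], "i2": ["fitness"]},
)
# e.g. {"football": 5.0, "world cup": 5.0, "fitness": 3.0}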
(3) The recommendation model layer, which mainly comprises three modules, in turn: a word vector training module, a user tag Ripple network module and a similarity calculation module.
First, the word vector of each tag is obtained through the word vector training module; then a user tag Ripple network is generated according to the knowledge graph to expand the user's interests, yielding the interest tag Ripple network; finally, the similarity with each item is calculated from the user interest tags in combination with the user tag Ripple network to obtain the final recommendation result.
(4) The application layer, which responds to the user's recommendation request and displays the recommendation result to the user.
The word vector training module, the user tag Ripple network module and the similarity calculation module of the recommendation model layer are each described in detail below.
1) word2vec+TransE word vector training module
The word2vec+TransE word vector training module combines the entity relationships of the knowledge graph with context information to train more effective word vectors. The specific flow is as follows:
here, the information of the info box extracted from the encyclopedic entry is mainly used to form the triplet information (h, r, t), where h and t are two related entities, and r is a relationship, for example (hundred degrees, board length, li Yanhong). Assuming that the information is a fact, adding these triples information in the process of training word2vec model makes the h and t of the associated entities somewhat closer, and can also be calculated as a regularization constraint. For example, triplet information is category information, which indicates which field a word belongs to.
In order to fuse this with the word2vec model, the TransE idea is used: an objective function over (h+r, t) is defined as a probability function (the formula is given only as an image in the original), computed over the vector formed from w_i and r, i.e. w_i + r. Here θ_t denotes the parameter corresponding to the word t, L_t(w) = 1 indicates that the entity relationship holds, and L_t(w) = 0 indicates that it does not.
Thus, a model objective function can be constructed based on word2vec and TransE (the formula is given only as an image in the original). Its first part is the CBOW-based word2vec model and its second part is the relational word vector model, where γ is a parameter balancing the contribution of the two models, C is the size of the whole corpus, and Content(w_i) is the context of w_i. During training, the softmax is also computed approximately using Negative Sampling. The solving process of the relational word vector model is as follows (the formula is given only as an image in the original):
In Negative Sampling, the triples are likewise divided into positive and negative samples. Following the Local Closed World assumption, triples not present in the knowledge graph are regarded as negative samples; that is, when (w_i, r, t) holds, the corresponding t is a positive sample and other words are negative samples. For example, (Baidu, chairman, Li Yanhong) is a positive sample while (Baidu, chairman, Ma Yun) is a negative sample. For a given word w_i and a corresponding relationship r, the objective likelihood function to be trained is given only as an image in the original.
it can be seen that the solution process for the word vector training model is substantially similar to the solution process for the word2vec model. Specifically, the triplet information in the knowledge graph of the corpus is added into the word2vec model, so that not only is the context information of the corpus considered, but also the entity relationship in the triplet information is integrated, and the word vector obtained through training is beneficial to improving the accuracy of subsequent similarity calculation.
As shown in fig. 5, the right half is the process of obtaining the word vector training model. First, a triple information base is constructed from the Baidu Encyclopedia entry library; each triple records an entity relationship among words, where h and t are two related entities and r is the relationship, and the collection of triples forms the triple information base. The word2vec+TransE word vector training module, which uses both the entity relationships among words in the triple information base and the context information of words in the Baidu Encyclopedia entry library, can train more effective word vectors and helps improve the accuracy of the subsequent similarity calculation. In the left half, a Chinese knowledge graph is constructed from the Baidu Encyclopedia entry library, yielding the Chinese knowledge graph corresponding to the entry library; this knowledge graph may be formed from the triples contained in the triple information base.
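Because the exact objective formulas above are given only as images, the sketch below shows one plausible negative-sampling form of the relational part in the spirit of TransE: for a true triple (h, r, t) the vector h + r is pushed towards t and away from sampled negative tails. Everything here, including the sigmoid-based loss, is an assumption rather than the patent's exact formulation.

# Assumed sketch of a TransE-style relational loss with negative sampling (numpy).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relational_loss(h_vec, r_vec, t_vec, neg_t_vecs):
    # Positive term: (h + r) should score highly against the true tail t;
    # negative terms: (h + r) should score low against sampled wrong tails.
    query = h_vec + r_vec
    loss = -np.log(sigmoid(query @ t_vec))
    for neg in neg_t_vecs:
        loss += -np.log(sigmoid(-(query @ neg)))
    return loss

# Toy usage: (Baidu, chairman, Li Yanhong) as the positive triple, one random negative tail.
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, 50))
loss_value = relational_loss(h, r, t, [rng.normal(size=50)])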
2) Ripple network module
According to the Ripple network idea in the knowledge graph, the propagation process of the user's interests over the knowledge graph is simulated. The whole process resembles the propagation of water ripples: a user's interests propagate outwards layer by layer over the knowledge graph, centered on the entities in the user's history records, which represents the gradual attenuation of the user's interests as they diffuse over the knowledge graph.
A user interest Ripple network is built for a user by extracting an interest tag x_1 of the user; its entity relationships in the knowledge graph are as follows:
the diffusion range of entity 1 in the Chinese knowledge graph is the Ripple network of the user interest tag x_1. As shown in fig. 6, the innermost entities 2, 3, 4 and 5 are the "one-hop" relationship entities of entity 1; entity 6, entity 7, entity 8, entity 9 and entity 10 of the middle circle are the "two-hop" relationship entities of entity 1; and the outermost entity 11 is a "three-hop" relationship entity of entity 1.
The entity relationships of the different circles also represent the spread of the user's interests, which spread outwards like the "water ripples" in the figure while the degree of interest becomes smaller and smaller. The closer an entity is to the user tag entity 1, the more likely it is to be a potential interest tag of the user. The Ripple network can effectively expand the user's interests with differing degrees of interest, which alleviates the problem of sparse user behaviors.
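A minimal sketch of building the multi-hop diffusion of one interest tag by breadth-first search over knowledge-graph triples; the triple layout and the maximum number of hops are assumptions for illustration.

# BFS expansion of one interest tag over (head, relation, tail) triples, up to max_hops.
from collections import deque

def ripple_expand(seed_tag, triples, max_hops=3):
    # triples: iterable of (head, relation, tail); returns entity -> hop count from seed_tag.
    neighbors = {}
    for h, _, t in triples:
        neighbors.setdefault(h, set()).add(t)
        neighbors.setdefault(t, set()).add(h)
    hops, queue = {seed_tag: 0}, deque([seed_tag])
    while queue:
        entity = queue.popleft()
        if hops[entity] >= max_hops:
            continue
        for nxt in neighbors.get(entity, ()):
            if nxt not in hops:
                hops[nxt] = hops[entity] + 1
                queue.append(nxt)
    hops.pop(seed_tag)
    return hops  # e.g. {"entity 2": 1, "entity 6": 2, "entity 11": 3}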
Based on the above system framework, a calculation method of the similarity of the recommended information is also exemplarily provided, as shown in fig. 7, the specific steps of the information recommendation method include the following steps:
1. acquiring a user interest tag set;
here, the user interest tag set may obtain keywords of the behavior data according to the behavior data of the user in the last period, such as clicking, downloading, praying, collecting, searching, browsing, etc., where each keyword may be used as an interest tag of the user, and multiple keywords may be used to form the user interest tag set.
2. Extracting and generating an item tag of each item from the set of items;
here, the recommended information is a recommended item, and the recommended item includes a recommended item in the item set, and the recommended item may be a new item recently added, a free-selling item, or the like. For example, an item tag set of a recommended item M in the item set is [ y ] 1 ,y 2 ,y 3 ,…y i …y m ]Wherein y is i Is an item tag (i.e., a second tag); i is a positive integer less than or equal to m.
3. Judging article label y i Whether or not it is an interest tag x j
4. If article label y i Is interest tag x j Item tag y i Is scored as alpha 0 s j
Wherein s is j Is interest tag x j Weight value of alpha 0 Is the first contribution coefficient.
5. If the item tag y_i is not an interest tag x_j, determine whether the item tag is in the Ripple network of the interest tags;
that is, determine whether the item tag is a third tag in the Ripple network of an interest tag, i.e., a tag having an entity relationship with that interest tag.
6. If the item tag y_i is a third tag in the Ripple network of an interest tag x_j, the similarity score of the item tag y_i is given by a formula (shown only as an image in the original) combining α_1, s_j and λ,
where λ indicates that, in the Ripple network, the item tag y_i has a λ-hop entity relationship with the interest tag x_j, α_1 is the second contribution coefficient, and s_j is the weight value of the interest tag x_j.
In practice, if the item tag y_i has multiple groups of related entities among the user's interest tags, the minimum λ is selected to calculate the similarity score.
7. If the item tag y_i is not in the Ripple network of the interest tags, obtain from the word vector library the word vector of the item tag y_i and the word vector of each interest tag, and calculate the similarity score of the item tag y_i as max(cos) × α_2 · s_j,
where max(cos) is the maximum cosine value between the item tag y_i and the interest tags x_j, s_j is the weight value of the interest tag x_j corresponding to the maximum cosine value, and α_2 is the third contribution coefficient.
The contribution coefficients α_0, α_1 and α_2 are score coefficients for the different cases and can be set as needed. Here the magnitude relation between the contribution coefficients may be α_0 > α_1 > α_2: the larger the contribution coefficient, the more reliable the similarity score obtained from the corresponding formula and the more likely the tag reflects the user's interest; for example, α_0 = 1, α_1 = 0.5, α_2 = 0.3.
Here, the cosine value between word vectors is calculated with a cosine similarity algorithm to represent the similarity between the item tag and the interest tags; other similarity measures, such as the Euclidean distance or the Manhattan distance, may also be adopted. The interest tag corresponding to the maximum cosine value is selected from all the calculated cosine values, and its weight value is used to calculate the similarity score of the item tag y_i.
Using steps 3 to 7, the similarity score corresponding to each item tag in the item tag set [y_1, y_2, y_3, …, y_i, …, y_m] of the recommended item M can be obtained, denoted [score_1, score_2, score_3, score_4, …, score_m].
8. Sum the similarity scores to obtain the similarity Y_M of the item M:
Y_M = score_1 + score_2 + … + score_m.
Here Y_M is specifically the similarity between the item M and the user's interest tags.
Here, the similarity of each item in the item set is calculated sequentially using steps 3-8.
9. Rank the items in the item set by their similarity and generate the recommendation result from the ranking.
Here, the similarities of the items in the item set are ranked to generate a ranking result, and the items are recommended to the user in order from the front of the ranking to the back.
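Putting steps 3 to 9 together, the sketch below sums the per-tag scores of each candidate item into Y_M and ranks the candidates; the per-tag scoring function is passed in as a callable so the branch logic above can be plugged in, and all names and values are illustrative.

# Score every candidate item (sum of its tag scores) and rank by similarity.
def rank_items(item_tag_sets, score_tag):
    # item_tag_sets: item -> list of item tags; score_tag: callable returning one tag's score.
    similarities = {
        item: sum(score_tag(tag) for tag in tags)  # Y_M = score_1 + ... + score_m
        for item, tags in item_tag_sets.items()
    }
    return sorted(similarities.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage with a hypothetical per-tag scoring function.
ranking = rank_items(
    {"item_M": ["football", "striker"], "item_N": ["cooking"]},
    score_tag=lambda tag: {"football": 1.0, "striker": 0.25, "cooking": 0.1}.get(tag, 0.0),
)
# ranking == [("item_M", 1.25), ("item_N", 0.1)]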
By adopting the above technical solution, the triple information of the knowledge graph is combined with the word2vec model, so that context knowledge is fused with the similarity of entity relationships; in the user-interest-based recommendation process, knowledge graph information is added using the idea of the Ripple network, and by expanding the user's interest tags the user's potential interest tags can be discovered, thereby enlarging the information recommendation range.
Example five
Based on the same inventive concept, an embodiment of the present invention further provides a terminal, as shown in fig. 8, the terminal 80 includes:
an obtaining unit 801, configured to obtain a first tag set of user interest information and a second tag set of recommendation information; the first tag set comprises at least one first tag for identifying user interest information, and the second tag set comprises at least one second tag for identifying recommendation information;
a training unit 802, configured to obtain, from a preset knowledge graph, N third tags having an entity relationship with the first tag; wherein N is a positive integer;
A processing unit 803, configured to determine, based on the first tag set, N third tags having an entity relationship with the first tag, and a word vector library, a similarity between the recommendation information and the user interest information; the word vector library at least comprises word vectors of a first label and word vectors of a second label.
In some embodiments, the processing unit 803 is specifically configured to determine, based on the first tag set, N third tags having an entity relationship with the first tag, and the word vector library, a similarity score of at least one second tag in the second tag set; and accumulating and summing the similarity scores of the at least one second label, and taking the accumulated result as the similarity between the recommendation information and the user interest information.
In some embodiments, the first set of tags further includes a weight value for the first tag; the processing unit 803 is specifically configured to calculate a similarity score of the second label of the target according to the weight value of the first label in the first label set when the second label of the target is the same as the first label;
or when the target second label is the same as the third label, calculating the similarity score of the target second label according to the weight value of the first label in the first label set and the relation parameter for representing the entity relation between the third label and the first label;
Or when the second label of the target is different from the first label and the third label, calculating the similarity score of the second label of the target according to the word vector library and the weight value of the first label in the first label set;
the target second tag is any one second tag in the second tag set.
In some embodiments, the processing unit 803 is specifically configured to obtain, when the target second tag is the same as the first tag, a similarity score of the target second tag according to the weight value and the first contribution coefficient of the first tag in the first tag set;
or when the target second label is the same as the third label, according to the weight value of the first label in the first label set, the relationship parameter used for representing the entity relationship between the third label and the first label and the second contribution coefficient, obtaining the similarity score of the target second label;
or when the target second label is different from the first label and the third label, acquiring the word vector of the target second label and the word vector of at least one first label in the first label set from the word vector library; and obtaining the similarity score of the target second label according to the similarity value of the word vector of the target second label and the word vector of at least one first label, the weight value of the first label in the first label set and the third contribution coefficient.
In some embodiments, the processing unit 803 is further configured to push the recommendation information to the user according to a preset recommendation policy based on a similarity between the recommendation information and the interest information of the user;
the preset recommendation strategy comprises the following steps: pushing recommendation information to a user when the similarity is larger than a recommendation threshold; or when at least two pieces of recommended information are contained, pushing the recommended information to the user according to the sequence from the high similarity to the low similarity.
In some embodiments, training unit 802 is further configured to obtain a corpus; constructing a knowledge graph of a corpus to obtain a preset knowledge graph; based on a preset word vector training model, calculating the word vector of each word in the corpus to obtain a word vector library.
In some embodiments, the preset word vector training model is obtained based on a word2vec model combined with the TransE algorithm.
By adopting the technical scheme, N third labels related to the first label are obtained based on the diffusion mode of the first label in the knowledge graph and are used for expanding the interest labels, and the similarity of the recommended information and the interest information of the user is determined by combining the first label set and the word vector library. Therefore, by expanding the interest labels of the users, the potential interest labels of the users can be found, and the information recommendation range is enlarged.
Example six
Based on the same inventive concept, the embodiment of the present invention further provides a second terminal, as shown in fig. 9, the terminal 90 includes: a communication interface 901 and a processor 902;
a communication interface 901, configured to perform data transmission between a terminal and an external device;
a processor 902, configured to obtain a first tag set of user interest information and a second tag set of recommendation information; the first tag set comprises at least one first tag for identifying user interest information, and the second tag set comprises at least one second tag for identifying recommendation information;
the processor 902 is further configured to obtain N third labels having an entity relationship with the first label from a preset knowledge graph; wherein N is a positive integer;
the processor 902 is further configured to determine, based on the first tag set, N third tags having an entity relationship with the first tag, and the word vector library, a similarity between the recommendation information and the user interest information; the word vector library at least comprises word vectors of a first label and word vectors of a second label.
In some embodiments, the processor 902 is specifically configured to determine a similarity score of at least one second tag in the second tag set based on the first tag set, the N third tags having an entity relationship with the first tag, and the word vector library; and accumulating and summing the similarity scores of the at least one second label, and taking the accumulated result as the similarity between the recommendation information and the user interest information.
In some embodiments, the first set of tags further includes a weight value for the first tag; the processor 902 is specifically configured to calculate a similarity score of the second label of the target according to the weight value of the first label in the first label set when the second label of the target is the same as the first label;
or when the target second label is the same as the third label, calculating the similarity score of the target second label according to the weight value of the first label in the first label set and the relation parameter for representing the entity relation between the third label and the first label;
or when the second label of the target is different from the first label and the third label, calculating the similarity score of the second label of the target according to the word vector library and the weight value of the first label in the first label set;
the target second tag is any one second tag in the second tag set.
In some embodiments, the processor 902 is specifically configured to obtain a similarity score of the second label of the target according to the weight value and the first contribution coefficient of the first label in the first label set when the second label of the target is the same as the first label;
or when the target second label is the same as the third label, according to the weight value of the first label in the first label set, the relationship parameter used for representing the entity relationship between the third label and the first label and the second contribution coefficient, obtaining the similarity score of the target second label;
Or when the target second label is different from the first label and the third label, acquiring the word vector of the target second label and the word vector of at least one first label in the first label set from the word vector library; and obtaining the similarity score of the target second label according to the similarity value of the word vector of the target second label and the word vector of at least one first label, the weight value of the first label in the first label set and the third contribution coefficient.
In some embodiments, the processor 902 is further configured to push the recommendation information to the user through the communication interface 901 according to a preset recommendation policy based on the similarity between the recommendation information and the interest information of the user;
the preset recommendation strategy comprises the following steps: pushing recommendation information to a user when the similarity is larger than a recommendation threshold; or when at least two pieces of recommended information are contained, pushing the recommended information to the user according to the sequence from the high similarity to the low similarity.
In some embodiments, the processor 902 is further configured to obtain a corpus; constructing a knowledge graph of a corpus to obtain a preset knowledge graph; based on a preset word vector training model, calculating the word vector of each word in the corpus to obtain a word vector library.
In some embodiments, the preset word vector training model is obtained based on a word2vec model combined with the TransE algorithm.
By adopting the technical scheme, N third labels related to the first label are obtained based on the diffusion mode of the first label in the knowledge graph and are used for expanding the interest labels, and the similarity of the recommended information and the interest information of the user is determined by combining the first label set and the word vector library. Therefore, by expanding the interest labels of the users, the potential interest labels of the users can be found, and the information recommendation range is enlarged.
Example seven
Based on the hardware implementation of each unit in the above terminal, another terminal is further provided in the embodiment of the present application, as shown in fig. 10, where the terminal 100 includes: a processor 1001 and a memory 1002 configured to store a computer program capable of running on the processor;
wherein the processor 1001 is configured to execute the method steps in the aforementioned embodiments when running a computer program.
Of course, in actual practice, the various components in the terminal 100 would be coupled together via a bus system 1003, as shown in FIG. 10. It is appreciated that the bus system 1003 is used to implement connective communication between these components. The bus system 1003 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration, the various buses are labeled as bus system 1003 in fig. 10.
In practical applications, the processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field-Programmable Gate Array (FPGA), a controller, a microcontroller and a microprocessor. It can be understood that the electronic device used to implement the above processor function may be different for different apparatuses, and the embodiments of the present application do not specifically limit it.
The memory may be a volatile memory, such as a Random-Access Memory (RAM); or a non-volatile memory, such as a Read-Only Memory (ROM), a flash memory, a Hard Disk Drive (HDD) or a Solid-State Drive (SSD); or a combination of the above types of memories, and it provides instructions and data to the processor.
In an exemplary embodiment, the present application also provides a computer readable storage medium, e.g. a memory 1002 comprising a computer program executable by the processor 1001 of the terminal 100 to perform the aforementioned method steps.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks and/or block diagram block or blocks.
The above is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (10)

1. An information recommendation method, the method comprising:
acquiring a first tag set of user interest information and a second tag set of recommendation information; the first tag set comprises at least one first tag for identifying user interest information, and the second tag set comprises at least one second tag for identifying recommendation information;
obtaining N third tags having an entity relationship with the first tag from a preset knowledge graph; wherein N is a positive integer;
determining similarity scores of at least one second tag in the second tag set based on the first tag set, the N third tags having an entity relationship with the first tag, and a word vector library; summing the similarity scores of the at least one second tag, and taking the summation result as the similarity between the recommendation information and the user interest information; wherein the word vector library at least comprises word vectors of the first tag and word vectors of the second tag;
based on the similarity between the recommendation information and the user interest information, pushing the recommendation information to the user according to a preset recommendation policy.
2. The method of claim 1, wherein the first tag set further comprises a weight value of the first tag;
the determining similarity scores of at least one second tag in the second tag set based on the first tag set, the N third tags having entity relationships with the first tag, and a word vector library, includes:
when the target second tag is the same as the first tag, calculating a similarity score of the target second tag according to the weight value of the first tag in the first tag set;
or, when the target second tag is the same as the third tag, calculating a similarity score of the target second tag according to the weight value of the first tag in the first tag set and a relationship parameter representing the entity relationship between the third tag and the first tag;
or, when the target second tag is different from both the first tag and the third tag, calculating a similarity score of the target second tag according to the word vector library and the weight value of the first tag in the first tag set;
the target second tag is any one second tag in the second tag set.
3. The method of claim 2, wherein
when the target second tag is the same as the first tag, calculating a similarity score of the target second tag according to the weight value of the first tag in the first tag set, including:
obtaining a similarity score of the target second tag according to the weight value of the first tag in the first tag set and a first contribution coefficient;
when the target second tag is the same as the third tag, calculating a similarity score of the target second tag according to the weight value of the first tag in the first tag set and a relationship parameter for representing the entity relationship between the third tag and the first tag, including:
obtaining a similarity score of the target second tag according to the weight value of the first tag in the first tag set, a relationship parameter representing the entity relationship between the third tag and the first tag, and a second contribution coefficient;
when the target second tag is different from the first tag and the third tag, calculating a similarity score of the target second tag according to the word vector library and the weight value of the first tag in the first tag set, including:
acquiring word vectors of the target second tag and word vectors of at least one first tag in the first tag set from the word vector library;
and obtaining a similarity score of the target second tag according to a similarity value between the word vector of the target second tag and the word vector of the at least one first tag, the weight value of the first tag in the first tag set, and a third contribution coefficient.
4. The method of claim 1, wherein
the preset recommendation policy comprises:
pushing the recommendation information to the user when the similarity is greater than a recommendation threshold;
or, when there are at least two pieces of recommendation information, pushing the recommendation information to the user in descending order of similarity.
5. The method of claim 1, wherein
the method further comprises the steps of: obtaining a corpus;
constructing a knowledge graph of the corpus to obtain the preset knowledge graph;
and calculating the word vector of each word in the corpus based on a preset word vector training model to obtain the word vector library.
6. The method of claim 5, wherein the preset word vector training model is based on a word2vec model combined with a TransE algorithm.
7. A terminal, the terminal comprising:
the acquisition unit is used for acquiring a first tag set of user interest information and a second tag set of recommendation information; the first tag set comprises at least one first tag for identifying user interest information, and the second tag set comprises at least one second tag for identifying recommendation information;
the training unit is used for obtaining N third tags having an entity relationship with the first tag from a preset knowledge graph; wherein N is a positive integer;
the processing unit is used for determining similarity scores of at least one second tag in the second tag set based on the first tag set, the N third tags having an entity relationship with the first tag, and a word vector library; summing the similarity scores of the at least one second tag, and taking the summation result as the similarity between the recommendation information and the user interest information; wherein the word vector library at least comprises word vectors of the first tag and word vectors of the second tag;
The processing unit is further configured to push the recommendation information to a user according to a preset recommendation policy based on similarity between the recommendation information and the user interest information.
8. A terminal, the terminal comprising: a communication interface and a processor;
the communication interface is used for carrying out data transmission between the terminal and the external equipment;
the processor is used for acquiring a first tag set of user interest information and a second tag set of recommendation information; the first tag set comprises at least one first tag for identifying user interest information, and the second tag set comprises at least one second tag for identifying recommendation information;
the processor is further configured to obtain N third tags having an entity relationship with the first tag from a preset knowledge graph; wherein N is a positive integer;
the processor is further configured to determine similarity scores of at least one second tag in the second tag set based on the first tag set, the N third tags having an entity relationship with the first tag, and a word vector library; sum the similarity scores of the at least one second tag, and take the summation result as the similarity between the recommendation information and the user interest information; wherein the word vector library at least comprises word vectors of the first tag and word vectors of the second tag;
The processor is further configured to push the recommendation information to a user according to a preset recommendation policy based on similarity between the recommendation information and the user interest information.
9. A terminal, the terminal comprising: a processor and a memory configured to store a computer program capable of running on the processor,
wherein the processor is configured to perform the steps of the method of any of claims 1 to 6 when the computer program is run.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN201910045755.6A 2019-01-17 2019-01-17 Information recommendation method, terminal and storage medium Active CN111522886B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910045755.6A CN111522886B (en) 2019-01-17 2019-01-17 Information recommendation method, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910045755.6A CN111522886B (en) 2019-01-17 2019-01-17 Information recommendation method, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111522886A CN111522886A (en) 2020-08-11
CN111522886B true CN111522886B (en) 2023-05-09

Family

ID=71910307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910045755.6A Active CN111522886B (en) 2019-01-17 2019-01-17 Information recommendation method, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111522886B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967934B (en) * 2020-08-12 2022-11-18 上海辰山植物园 Intelligent recommendation method for green plant application in online shopping mall
CN111932308A (en) * 2020-08-13 2020-11-13 中国工商银行股份有限公司 Data recommendation method, device and equipment
CN112800326B (en) * 2021-01-18 2022-03-15 吉林大学 Improved Ripp-MKR recommendation method combining multitask learning and knowledge graph
CN113326203B (en) * 2021-06-22 2022-08-12 深圳前海微众银行股份有限公司 Information recommendation method, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103593792A (en) * 2013-11-13 2014-02-19 复旦大学 Individual recommendation method and system based on Chinese knowledge mapping
WO2017101317A1 (en) * 2015-12-14 2017-06-22 乐视控股(北京)有限公司 Method and apparatus for displaying intelligent recommendations on different terminals
CN107122399A (en) * 2017-03-16 2017-09-01 中国科学院自动化研究所 Combined recommendation system based on Public Culture knowledge mapping platform
CN107688606A (en) * 2017-07-26 2018-02-13 北京三快在线科技有限公司 The acquisition methods and device of a kind of recommendation information, electronic equipment
CN108334632A (en) * 2018-02-26 2018-07-27 深圳市腾讯计算机系统有限公司 Entity recommends method, apparatus, computer equipment and computer readable storage medium
CN109033101A (en) * 2017-06-08 2018-12-18 华为软件技术有限公司 Label recommendation method and device
CN109189937A (en) * 2018-08-22 2019-01-11 阿里巴巴集团控股有限公司 A kind of characteristic relation recommended method and device, a kind of calculating equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103729360A (en) * 2012-10-12 2014-04-16 腾讯科技(深圳)有限公司 Interest label recommendation method and system
US20170169341A1 (en) * 2015-12-14 2017-06-15 Le Holdings (Beijing) Co., Ltd. Method for intelligent recommendation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Lü Xueqiang; Wang Teng; Li Xuewei; Dong Zhi'an. Research on movie recommendation algorithm based on content and interest drift model. Application Research of Computers, 2017, (03), full text. *
Zhu Yuhan. A hybrid recommendation method based on user interest tags. Dianzi Zhizuo (Practical Electronics), 2018, (22), full text. *

Also Published As

Publication number Publication date
CN111522886A (en) 2020-08-11

Similar Documents

Publication Publication Date Title
CN108287864B (en) Interest group dividing method, device, medium and computing equipment
CN111460130B (en) Information recommendation method, device, equipment and readable storage medium
CN107220352B (en) Method and device for constructing comment map based on artificial intelligence
WO2022041979A1 (en) Information recommendation model training method and related device
CN111522886B (en) Information recommendation method, terminal and storage medium
US8918348B2 (en) Web-scale entity relationship extraction
KR20200094627A (en) Method, apparatus, device and medium for determining text relevance
CN111539197B (en) Text matching method and device, computer system and readable storage medium
US20120271821A1 (en) Noise Tolerant Graphical Ranking Model
WO2014107801A1 (en) Methods and apparatus for identifying concepts corresponding to input information
CN110597962A (en) Search result display method, device, medium and electronic equipment
CN110390052B (en) Search recommendation method, training method, device and equipment of CTR (China train redundancy report) estimation model
CN116601626A (en) Personal knowledge graph construction method and device and related equipment
Chen et al. Topic sense induction from social tags based on non-negative matrix factorization
CN112989208A (en) Information recommendation method and device, electronic equipment and storage medium
CN111931516A (en) Text emotion analysis method and system based on reinforcement learning
CN115374781A (en) Text data information mining method, device and equipment
CN111639255B (en) Recommendation method and device for search keywords, storage medium and electronic equipment
Wong et al. An unsupervised method for joint information extraction and feature mining across different web sites
CN112926308B (en) Method, device, equipment, storage medium and program product for matching text
CN109672706B (en) Information recommendation method and device, server and storage medium
CN110008396B (en) Object information pushing method, device, equipment and computer readable storage medium
CN117435685A (en) Document retrieval method, document retrieval device, computer equipment, storage medium and product
CN111460808A (en) Synonymous text recognition and content recommendation method and device and electronic equipment
CN110929526A (en) Sample generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant