CN112559879B - Interest model training method, interest point recommending method, device and equipment - Google Patents


Info

Publication number
CN112559879B
CN112559879B
Authority
CN
China
Prior art keywords
interest
model
type
historical
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011550110.7A
Other languages
Chinese (zh)
Other versions
CN112559879A (en)
Inventor
陈浩
张澍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011550110.7A priority Critical patent/CN112559879B/en
Publication of CN112559879A publication Critical patent/CN112559879A/en
Application granted granted Critical
Publication of CN112559879B publication Critical patent/CN112559879B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Remote Sensing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure provides an interest model training method and a point-of-interest recommendation method, apparatus, and device. The disclosure relates to the field of data processing technology, and in particular to the fields of artificial intelligence, deep learning, and mapping. The scheme includes: acquiring historical points of interest from historical behavior data; and training an interest model using the word vectors of the historical points of interest and the vector of the scene requirement. Points of interest are recommended according to the user interests determined by the interest model, so that users' personalized needs can be met and user experience and search efficiency are improved.

Description

Interest model training method, interest point recommending method, device and equipment
Technical Field
The disclosure relates to the field of data processing technology, and in particular to artificial intelligence fields such as deep learning.
Background
With the rapid development of society, maps are used more and more frequently as travel tools, and more and more users search for points of interest such as restaurants and scenic spots through map search and "discover nearby" features. However, in existing map applications, the search and nearby-recommendation results are fixed rankings based on the popularity and quality of points of interest.
Disclosure of Invention
The disclosure provides an interest model training method and a point-of-interest recommendation method, apparatus, and device.
According to an aspect of the present disclosure, there is provided an interest model training method, including:
acquiring historical interest points from historical behavior data;
and training an interest model by using the word vector of the historical interest point and the vector of the scene requirement.
According to another aspect of the present disclosure, there is provided a point of interest recommendation method, including:
inputting historical behavior data into a first type interest model to obtain first type interest information;
inputting historical behavior data into a second type interest model to obtain second type interest information, wherein the second type interest model is trained using the interest model training method of the present disclosure;
and recalling and sorting based on the scene requirement, the first type of interest information and the second type of interest information to obtain recommended interest points.
According to another aspect of the present disclosure, there is provided an interest model training apparatus, including:
the historical interest point acquisition module is used for acquiring historical interest points from the historical behavior data;
and the interest model training module is used for training an interest model by utilizing the word vector of the historical interest points and the vector of the scene requirement.
According to another aspect of the present disclosure, there is provided a point of interest recommendation apparatus, including:
the first type interest information acquisition module is used for inputting historical behavior data into the first type interest model to obtain first type interest information;
the second type interest information acquisition module is used for inputting the historical behavior data into a second type interest model to obtain second type interest information, and the second type interest model is trained by the interest model training device;
and the recommended interest point determining module is used for recalling and sorting based on the scene requirement, the first type of interest information and the second type of interest information to obtain recommended interest points.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the interest model training method or the point of interest recommendation method in any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the interest model training method or the interest point recommendation method in any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the interest model training method or the point of interest recommendation method in any of the embodiments of the present disclosure.
According to the technology disclosed by the disclosure, the interest model is trained by integrating the word vector of the interest point and the vector of the scene demand, so that the trained interest model can obtain the user interest vector matched with the scene demand. By using the interest vector to recall the interest points, the interest points meeting the personalized requirements of the user can be recommended, and the user experience and the searching efficiency are improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of an interest model training method in accordance with an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method of interest model training in accordance with another embodiment of the present disclosure;
FIG. 3 is a flow chart of a point of interest recommendation method according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of a point of interest recommendation method according to another embodiment of the present disclosure;
FIG. 5 is a flow chart of a point of interest recommendation method according to another embodiment of the present disclosure;
FIG. 6 is a diagram of a system implementing a point of interest recommendation method in accordance with another embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a trained attention sequence-to-sequence model in accordance with another embodiment of the present disclosure;
FIG. 8 is an application schematic diagram of a long-term interest application module in accordance with another embodiment of the present disclosure;
FIG. 9a is an example of a scenario in which a general demand word is entered in the search box of a map application;
FIG. 9b is an example diagram of recommendation results in the related art for the scenario of entering a general demand word in the search box;
FIG. 10a is an example of a scenario in which "discover nearby" is clicked in a map application;
FIG. 10b is an example diagram of recommendation results in the related art for clicking "discover nearby";
FIG. 11a is an example diagram of the recommendation effect of the related art;
FIG. 11b is an example diagram of the recommendation effect for the same user behavior as FIG. 11a, according to an embodiment of the present disclosure;
FIG. 12a is an example diagram of the recommendation effect of the related art;
FIG. 12b is an example diagram of the recommendation effect for the same user behavior as FIG. 12a, according to an embodiment of the present disclosure;
FIG. 13 is a schematic block diagram of an interest model training apparatus according to an embodiment of the present disclosure;
FIG. 14 is a schematic block diagram of an interest model training apparatus according to an embodiment of the present disclosure;
FIG. 15 is a schematic block diagram of a point of interest recommendation device according to an embodiment of the present disclosure;
FIG. 16 is a schematic block diagram of a point of interest recommendation device in accordance with an embodiment of the present disclosure;
FIG. 17 is a block diagram of an electronic device used to implement the interest model training method or the point of interest recommendation method of embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, in which various details of the embodiments are included to facilitate understanding and should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
FIG. 1 is a flow diagram of a method of interest model training in accordance with an embodiment of the present disclosure. The method may include:
s11, acquiring historical interest points from historical behavior data.
S12, training an interest model by using word vectors of historical interest points and vectors of scene requirements.
According to this embodiment, historical points of interest are obtained from the user's various historical behaviors, and the interest model is trained using vectors constructed from the historical points of interest together with the vector of the scene requirement; the trained interest model can then generate the user's interest vector. Because the interest model is trained on the vectors of historical points of interest, the resulting interest vector reflects the user's long-term interest habits. Moreover, since the user's historical behavior covers many kinds of points of interest, not all of them match the current scene requirement. The embodiment therefore combines the word vectors of the points of interest with the vector of the scene requirement during training, so that the trained interest model produces a user interest vector matched to the scene requirement. Recalling points of interest with this interest vector yields results that better fit the user's scene requirement, rather than the same fixed recommendations for everyone. By understanding the user's long-term interest habits and current needs, points of interest that meet the user's personalized needs can be recommended, improving user experience and search efficiency.
In addition, in this embodiment, the word vectors corresponding to the historical points of interest and the scene requirement serve as the input of the interest model, and the interest vector is derived from this input. The interest vector can carry richer information and can be better integrated into the recall model, thereby improving the training effect.
In embodiments of the present disclosure, a point of interest (Point of Interest, POI) in a geographic information system may be any marked location, such as a house, shop, post box, or bus stop in a map application.
For example, the embodiment of the disclosure can be applied to point-of-interest recommendation in a map application: an interest model capturing the interests of map users is trained, and the user interests are then used to recall map points of interest. Specifically, the historical behavior data may include various interaction behavior data between the user and the map application, obtained from the map application's log data. The historical points of interest may include points of interest involved in the user's interactions with the map application, such as searched, clicked, or navigated-to points of interest. The scene requirement may include user information, location information (e.g., whether the user is in the city of their home address), time information, weather information, and the like. The interest model can understand the user's long-term interest habits and the scene requirement, and produce a user interest vector meeting the scene requirement, so that points of interest are recommended to the user using that vector. For example, for a user who likes Japanese cuisine, when the user searches for food or browses nearby restaurants in the map application, the interest model can produce interest vector features representing this preference, so that more Japanese restaurants are recommended. Likewise, for a user who frequently takes self-driving trips, when the user searches for scenic spots, the interest model can produce interest vector features representing this preference, so that more scenic spots suitable for self-driving tours are recommended.
FIG. 2 is a flow chart of interest model training according to another embodiment of the present disclosure. The interest model training method of this embodiment may include the steps of the above-described embodiment. In this embodiment, the interest model training method may further include:
s21, processing the historical interest points by using a word vector tool to obtain word vectors of the historical interest points.
Illustratively, word2vec may be employed as the word vector tool. The word vector tool can generate vector representations with clear semantics for points of interest from a large number of user historical behavior sequences. These semantics are mainly reflected among similar points of interest, whose vectors also have high similarity; for example, the vectors of two similar scenic parks have a high degree of similarity.
Because a word vector has many dimensions, it can carry rich information and better reflect the multidimensional characteristics of a point of interest, making it easier to discover relationships among points of interest. Taking the two parks above as an example: if they are presented directly as text, their relevance is hard for the interest model to learn; but after they are converted into word vectors, the interest model can capture their relevance well from the values of each dimension.
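As a rough illustration of how context-based vectors capture POI similarity, the following sketch builds simple co-occurrence vectors from hypothetical behavior sequences in plain Python; a real implementation would use word2vec as described above, and all POI names here are invented:

```python
from collections import defaultdict
import math

def cooccurrence_vectors(sequences, window=2):
    """Build simple co-occurrence vectors for each POI.

    A lightweight stand-in for word2vec: POIs that appear in similar
    contexts end up with similar vectors.
    """
    vocab = sorted({poi for seq in sequences for poi in seq})
    index = {poi: i for i, poi in enumerate(vocab)}
    vecs = {poi: [0.0] * len(vocab) for poi in vocab}
    for seq in sequences:
        for i, poi in enumerate(seq):
            lo, hi = max(0, i - window), min(len(seq), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vecs[poi][index[seq[j]]] += 1.0
    return vecs

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 for zero vectors)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical behavior sequences: two parks visited in identical contexts.
sequences = [
    ["subway", "park_a", "cafe"],
    ["subway", "park_b", "cafe"],
    ["mall", "cinema", "restaurant"],
]
vecs = cooccurrence_vectors(sequences)
print(cosine(vecs["park_a"], vecs["park_b"]))  # ~1.0: identical contexts
print(cosine(vecs["park_a"], vecs["cinema"]))  # 0.0: no shared context
```

The key point the sketch demonstrates is the one made in the paragraph above: relatedness that is invisible in the raw POI text becomes measurable once each POI is a point in a vector space.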
In one embodiment, S12 of the interest model training method may include:
s22, adding an attention module in the recall model, wherein the input of the attention module comprises a vector of a scene demand and a word vector of a historical interest point, and the output of the attention module comprises a correlation weight of the scene demand and the historical interest point;
s23, performing end-to-end training on the recall model added with the attention module to obtain an interest model.
Specifically, in S22, after the relevance weights between the scene requirement and the historical points of interest are obtained, the word vectors of the historical points of interest in the user's historical behavior sequence are weighted and summed to obtain the interest vector. The interest vector then serves as input to the downstream modules of the recall model, which recall points of interest.
Specifically, in S23, training the recall model end to end includes: taking the interest model as a module of the recall model, using the word vectors of the user's historical points of interest and the vector of the user's demand scene as inputs of the recall model, and using the user's expected points of interest as supervision for the recall model's output. That is, the learning process does not divide the task into artificial sub-problems; instead, the recall model directly learns the mapping from the raw data to the desired output. Once the recall model is trained, the attention module has been trained as well, and the interest model can be obtained from the trained attention module.
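The weighted-sum step of S22 can be sketched as dot-product attention in plain Python; the vectors, their dimensionality, and the scoring function are illustrative assumptions rather than the patent's exact formulation:

```python
import math

def attention_interest_vector(scene_vec, history_vecs):
    """Weight each historical POI vector by its relevance to the scene
    requirement (dot-product scores + softmax), then sum the weighted
    vectors into a single interest vector."""
    scores = [sum(s * h for s, h in zip(scene_vec, hv)) for hv in history_vecs]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(scene_vec)
    interest = [sum(w * hv[d] for w, hv in zip(weights, history_vecs))
                for d in range(dim)]
    return interest, weights

# Hypothetical 3-d vectors: the scene requirement aligns with the first POI.
scene = [1.0, 0.0, 0.0]
history = [[2.0, 0.0, 0.0],   # strongly relevant historical POI
           [0.0, 1.0, 0.0],   # irrelevant historical POI
           [0.0, 0.0, 1.0]]   # irrelevant historical POI
vec, w = attention_interest_vector(scene, history)
print(w)  # the first weight dominates
```

In the trained model these weights come from learned parameters rather than raw dot products, but the shape of the computation — relevance scores, normalization, weighted sum — is the same.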
Fig. 3 is a flowchart of a method for recommending points of interest according to an embodiment of the present disclosure. The method may include:
s31, inputting the historical behavior data into the first interest model to obtain first interest information.
S32, inputting the historical behavior data into a second type of interest model to obtain second type of interest information, wherein the second type of interest model is trained by adopting the interest model training method of the embodiment of the disclosure.
S33, recalling and sorting are conducted based on scene requirements, the first type of interest information and the second type of interest information, and recommended interest points are obtained.
In the embodiment of the disclosure, different types of interest information are obtained through the first type interest model and the second type interest model, respectively. For example, the first type interest information may include specific points of interest, interest tags (also referred to as interest categories), brands, travel habits, and the like; that is, the first type interest model directly captures user interests that can be interpreted as specific names and categories. The second type interest information may include interest vectors; that is, the second type interest model implicitly embodies the user's interests as a multidimensional vector representation. The first type of interest information reflects user interests intuitively, while the second type can carry richer information and can be better integrated into a common neural-network-based recall model. Recalling and ranking points of interest based on both allows the user to be understood better, so that more points of interest matching the user are shown in the map's retrieval and recommendation results, improving user experience and search efficiency.
Illustratively, embodiments of the present disclosure may be applied to a map application. When a user enters a search or recommendation scene of the map, such as "discover nearby", the interest service is requested to obtain the user's explicit and implicit interests, and different content is recommended according to those interests. For example, for a user who likes Japanese cuisine, more Japanese restaurants are recommended when the user searches for food or browses nearby restaurants. For a user who frequently takes self-driving trips, more scenic spots suitable for self-driving tours are recommended when the user searches for scenic spots.
In one embodiment, the first type of interest model is trained using an attention model, a training sample of the attention model comprising: historical behavior data during a first time period and historical behavior data during a second time period, wherein the first time period precedes the second time period.
Specifically, to efficiently and accurately mine the user's explicit interests, the attention model may adopt an attention sequence-to-sequence (attention seq2seq) model to model the user's interests. The attention sequence-to-sequence model is a common model in natural language processing, with language translation as a typical application scenario. Here it is applied to fit the relationship between the user's interests and the user's behavior.
Illustratively, the user's behavior in a second time period (e.g., the last 20 days) may be fitted from the user's behavior history in a first time period (e.g., the first 60 days). Among the vast number of user behaviors there are regularities of interest. For example, a user who often went to hot pot and Sichuan-cuisine restaurants in the first 60 days also went to a skewer hot pot restaurant in the last 20 days; a user who liked traveling by self-driving in the first 60 days visited a scenic spot suitable for self-driving in the last 20 days. A large number of such input and output sequences can be constructed from the map's rich user behavior history, and a model can be trained to capture such regularities.
Based on this idea, the user's behavior history data of the first 60 days and of the last 20 days are obtained, and historical behavior sequences are constructed from them, each containing multiple points of interest. The constructed sequence data serve as the attention seq2seq training data: the user's behavior sequence of the first 60 days is the model's input, and the behavior sequence of the last 20 days is the supervision for the model's output. Furthermore, each point of interest in the training data can be converted into its interest category for training; for example, the interest category of a Sichuan-cuisine restaurant is "Sichuan cuisine", so the trained model can output the user's interest-category sequence.
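The construction of input/output sequence pairs described above could be sketched as follows; the log format (user id mapped to `(day, interest_category)` events) and the helper name are hypothetical, chosen only to illustrate the 60-day/20-day split:

```python
from datetime import date, timedelta

def build_training_pairs(user_logs, split_days=20, history_days=60):
    """Split each user's behavior log into an input sequence (first 60 days)
    and a target sequence (last 20 days) for seq2seq training.

    `user_logs` maps user id -> list of (day, interest_category) events.
    """
    pairs = []
    for user, events in user_logs.items():
        if not events:
            continue
        last_day = max(d for d, _ in events)
        cutoff = last_day - timedelta(days=split_days)
        start = cutoff - timedelta(days=history_days)
        # Input: categories in the 60 days before the cutoff.
        src = [c for d, c in sorted(events) if start <= d <= cutoff]
        # Target: categories in the last 20 days.
        tgt = [c for d, c in sorted(events) if d > cutoff]
        if src and tgt:
            pairs.append((src, tgt))
    return pairs

# Hypothetical log for one user, matching the example in the text.
logs = {
    "u1": [(date(2020, 1, 1), "hot_pot"),
           (date(2020, 1, 20), "sichuan"),
           (date(2020, 3, 5), "skewers")],  # falls in the last 20 days
}
print(build_training_pairs(logs))
```

Each resulting `(src, tgt)` pair corresponds to one training example: the model learns to predict the later interest-category sequence from the earlier one.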
Further, the attention sequence-to-sequence model finds, via an attention mechanism, the content in the input sequence associated with the output sequence, and infers the output sequence from it. The attention mechanism can determine the relevance of each interest category in the input sequence to each interest category in the output sequence. For example, if the input sequence contains the three categories Jiangsu-Zhejiang cuisine, Sichuan cuisine, and hot pot, and the output sequence contains skewer hot pot, then Jiangsu-Zhejiang cuisine is weakly relevant to skewer hot pot, while Sichuan cuisine and hot pot are both more relevant to it. The attention mechanism can adaptively extract useful combinations from a relatively unordered user behavior sequence, effectively improving the accuracy and efficiency of interest extraction.
In one embodiment, the first type of interest model is an explicit interest model and the first type of interest information is explicit interest information.
The second type of interest model is an implicit interest model, and the second type of interest information is implicit interest information.
Specifically, the first type interest model may directly capture user interests that can be interpreted as specific names and categories. The first type interest information may include specific points of interest, interest tags (also referred to as interest categories), brands, travel habits, and the like, and reflects user interests intuitively.
Specifically, the second type interest model may implicitly embody the user's interests as a multidimensional vector representation; the second type interest information may include interest vectors. The second type of interest information can carry richer information and can be better integrated into a common neural-network-based recall model.
Fig. 4 is a flowchart of a point of interest recommendation method according to another embodiment of the present disclosure. The point of interest recommendation method of this embodiment may include the steps of the above-described embodiment. In this embodiment, S33, recalling and ranking based on the scene requirement, the first type interest information, and the second type interest information to obtain recommended points of interest, includes:
s41, recall is carried out on first type interest information output based on the first type interest model, a first type interest point queue is obtained, and the first type interest information comprises interest labels.
Specifically, in S41, first, based on the scene requirement, an interest list corresponding to the scene requirement is matched, and an interest tag may be included in the interest list; then, according to the interest list and the first interest information, obtaining first interest information meeting scene requirements; and then recalling according to the first type of interest information meeting the scene requirement to obtain a first type of interest point queue.
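The interest-list matching step of S41 might look like the following sketch, where the tag names and the `poi_index` structure are invented for illustration:

```python
def explicit_recall(scene_interest_list, user_interest_tags, poi_index, limit=5):
    """Recall POIs whose interest tag both matches the scene's interest
    list and appears among the user's explicit interest tags.

    `poi_index` maps interest tag -> candidate POIs (a hypothetical index).
    """
    # Keep only user tags admitted by the current scene requirement.
    matched_tags = [t for t in user_interest_tags if t in scene_interest_list]
    queue = []
    for tag in matched_tags:
        queue.extend(poi_index.get(tag, []))
    return queue[:limit]

# Hypothetical data: a food scene admits these tags; the user likes hot pot
# and self-driving tours, but only hot pot fits the scene.
scene_tags = {"hot_pot", "sichuan", "japanese"}
user_tags = ["hot_pot", "self_driving"]
index = {"hot_pot": ["hotpot_a", "hotpot_b"], "japanese": ["sushi_c"]}
print(explicit_recall(scene_tags, user_tags, index))
```

The sketch mirrors the filtering order in the text: the scene requirement narrows the user's explicit interests first, and only the surviving tags drive the recall.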
S42, recalling based on the second type interest information output by the second type interest model to obtain a second type point-of-interest queue, where the second type interest information includes interest vectors.
Specifically, in S42, since the second type interest model can produce an interest vector meeting the scene requirement from the scene requirement and the user's historical behavior data, the recall model performs a recall based on this interest vector to obtain the second type point-of-interest queue.
S43, inputting the first type and second type point-of-interest queues into a ranking model for re-ranking to obtain the recommended point-of-interest queue.
Two point-of-interest queues are recalled from two dimensions of interest information: explicit interest (interest tags) and implicit interest (interest vectors). The two queues are then merged and comprehensively ranked, and the recommended point-of-interest queue is output. This combines the advantages of explicit and implicit interests, better captures the user's interest habits, and recommends points of interest that better meet the user's needs.
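The merge-and-re-rank step could be sketched as below; the scoring function stands in for the ranking model of S43, and all POI names are hypothetical:

```python
def merge_and_rerank(explicit_queue, implicit_queue, score_fn, top_k=10):
    """Merge the explicit-interest and implicit-interest recall queues,
    de-duplicate while preserving first-seen order, and re-rank with a
    scoring function (a stand-in for the ranking model)."""
    seen, candidates = set(), []
    for poi in explicit_queue + implicit_queue:
        if poi not in seen:
            seen.add(poi)
            candidates.append(poi)
    # Python's sort is stable, so ties keep their recall order.
    return sorted(candidates, key=score_fn, reverse=True)[:top_k]

# Hypothetical queues; "hotpot_b" was recalled by both models.
explicit = ["hotpot_a", "hotpot_b", "sichuan_c"]
implicit = ["hotpot_b", "skewer_d"]
# Toy score keyed on name length, standing in for the ranking model's score.
ranked = merge_and_rerank(explicit, implicit, score_fn=len, top_k=3)
print(ranked)
```

A production ranking model would score each candidate from many features (relevance, popularity, distance, and the two interest signals); the sketch only shows the queue merge and the single ranking pass over the union.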
In one embodiment, the first type of interest is an explicit interest and the second type of interest is an implicit interest; correspondingly, the first type interest model is an explicit interest model and the second type interest model is an implicit interest model.
FIG. 5 is an example diagram of an interest mining application scenario. For example, behavior sequences may be extracted from map logs in a map application. Explicit interest mining and implicit interest mining are performed based on the extracted behavior sequences. Explicit interest mining may be combined with an interest list matched based on the user's scene requirement for explicit interest recall. Implicit interest mining may be combined with a scene vector derived based on the user's scene requirement for implicit interest recall. The explicit interest recall and the implicit interest recall can then be integrated and ranked for interest recommendation, to obtain a recommended interest point or interest point queue.
Fig. 6 is a flowchart of a system implementing the above method; the system may include a long-term interest generation module 61 and a long-term interest application module 62. The individual modules of the system are described below:
long-term interest generation module 61:
the long-term interest generation module 61 is configured to extract regular behaviors from the user's long-term behavior log (map log), and to mine the user's explicit interests (first type of interest information) and implicit interests (second type of interest information).
The long-term interest generation module 61 includes an explicit interest mining sub-module 611 and an implicit interest mining sub-module 612. The goal of the explicit interest mining sub-module 611 is to mine intuitively interpretable interests, such as the user's cuisine preferences, preferred travel modes, brand preferences, and so forth. The goal of the implicit interest mining sub-module 612 is to refine the user's historical behavior data into a multidimensional interest vector that serves as a feature of the recall model of the recommender system.
Explicit interest mining submodule 611:
the goal of the explicit interest mining sub-module 611 is to capture intuitively interpretable interests from the user's historical behavior. For example, assuming that 60% of the food interest points historically retrieved by user A are hot pot restaurants, it can be inferred that this user has a significant interest in hot pot. Assuming again that user B travels to attractions multiple times using driving navigation, it can be inferred that this user has a preference for self-driving tours. This type of interest information, which may be specific to a particular interest point, interest tag, or brand, is referred to as explicit interest in this embodiment.
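The hot pot inference above can be illustrated with a simple share-of-history rule (a toy sketch; the 0.5 threshold and the tag names are assumptions, not part of the embodiment, whose actual mining uses the attention model described below):

```python
from collections import Counter

def infer_explicit_interests(tagged_history, threshold=0.5):
    """Return interest tags whose share of the user's tagged history
    exceeds a threshold (here an illustrative 0.5)."""
    counts = Counter(tagged_history)
    total = len(tagged_history)
    return {tag for tag, n in counts.items() if n / total > threshold}

# 4 of 5 food interactions are hot pot, so hot pot qualifies as an explicit interest.
history = ["hot pot", "hot pot", "hot pot", "sushi", "hot pot"]
print(infer_explicit_interests(history))  # {'hot pot'}
```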
In order to efficiently and accurately mine the user's explicit interests, the explicit interest mining sub-module 611 trains using an attention sequence-to-sequence (attention seq2seq) model, resulting in an explicit interest model (also referred to as a first type of interest model). The attention sequence-to-sequence model is a common model in natural language processing, a typical application scenario being language translation. Here the model is employed to fit the relationship between the user's interests and the user's behavior.
For example, the user's behavior in the last 20 days may be fitted from the user's behavior history over the first 60 days. Among the vast number of user behaviors, there are certain patterns of interest. For example, a user who frequently went to hot pot and Sichuan cuisine restaurants in the first 60 days went to a chuan chuan xiang (skewer hot pot) restaurant once in the last 20 days. A user who liked traveling in the first 60 days, with self-driving as the travel mode, went to a scenic spot suitable for self-driving once in the last 20 days. A large number of such input sequences and output sequences can be constructed from the map's rich user historical behavior, and a model can be trained to capture such patterns.
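The construction of such input/output sequence pairs from a 60-day history window and a 20-day target window can be sketched as follows (the field names and dates are hypothetical):

```python
from datetime import date, timedelta

def build_training_pairs(behaviors, split_day, history_days=60, target_days=20):
    """Split one user's dated behaviors into an input sequence (the 60 days
    before split_day) and an output sequence (the 20 days from split_day).

    `behaviors` is a list of (date, poi_category) pairs; the window lengths
    follow the 60/20-day example in the text.
    """
    history_start = split_day - timedelta(days=history_days)
    target_end = split_day + timedelta(days=target_days)
    inputs = [c for d, c in behaviors if history_start <= d < split_day]
    outputs = [c for d, c in behaviors if split_day <= d < target_end]
    return inputs, outputs

split = date(2020, 9, 1)
behaviors = [
    (date(2020, 7, 10), "hot pot"),
    (date(2020, 8, 5), "sichuan cuisine"),
    (date(2020, 9, 10), "chuan chuan xiang"),
]
src, tgt = build_training_pairs(behaviors, split)
print(src, tgt)  # ['hot pot', 'sichuan cuisine'] ['chuan chuan xiang']
```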
The effect of training the attention sequence-to-sequence model may be shown in fig. 7. Referring to fig. 7, the input sequence (the user's history) contains three categories, Jiangzhe cuisine, Sichuan cuisine and hot pot, and the output sequence (the user's subsequent behavior) contains chuan chuan xiang. The attention sequence-to-sequence model finds content in the input sequence that is associated with the output sequence through an attention mechanism, and deduces the output sequence from it. The thickness of the attention-layer lines in fig. 7 reflects the correlation between input and output: the correlation between Jiangzhe cuisine and chuan chuan xiang is very weak, while the correlations of Sichuan cuisine and of hot pot with chuan chuan xiang are relatively high. The attention mechanism can adaptively extract useful combinations from a relatively unordered sequence of user behaviors, thereby effectively improving the accuracy and efficiency of interest extraction.
For capturing a user's explicit interests, the traditional method is based on statistics: counting the number of interactions with each interest point category, interest point brand, travel mode, etc. in the user's history, and then selecting the items whose counts exceed a preset threshold as the user's interests. This statistical approach makes it difficult to capture the interactions between user interests. For example, a user who prefers both hot pot and Sichuan cuisine is likely also interested in chuan chuan xiang, but statistics alone cannot identify this transfer of interest. Therefore, the simple statistical method cannot make full use of the user's behavior data, and its interest capturing capability is weak.
Compared with the traditional method, the present method, based on the attention sequence-to-sequence model, can adaptively extract useful combinations from relatively unordered user behavior sequences and deduce output sequences from those combinations; for example, in fig. 7, the combination of the two preferences for hot pot and Sichuan cuisine implies an interest in chuan chuan xiang. The transfer of interest is thus identified, effectively improving the accuracy and efficiency of interest extraction.
Implicit interest mining sub-module 612:
the implicit interest mining sub-module 612 implicitly embodies the user's interests through a multidimensional vector representation. Specifically, its goal is to encode the user's behavior history (POI A, POI B, POI C, POI D …) into an interest vector. Compared with explicit interests, implicit interests (i.e., interest vectors) not only contain more information, but can also be better incorporated into common neural-network-based recall and ranking models, because a vector-expressed interest can be directly added to the model as an input feature.
The construction of implicit interests relies on the recall model of the recommendation system; the user's interests are trained end-to-end by adding the user's behavior history to the model. The whole process is divided into the following steps:
(a) User behavior sequence construction:
according to the user's various behaviors on the map, including searching, clicking, navigating and the like, the POIs the user has interacted with are collected, and the user's historical behavior sequence is constructed.
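Step (a) can be sketched as follows (the log field names 'time', 'action' and 'poi' are hypothetical; real map logs would have a richer schema):

```python
def build_behavior_sequence(log_events):
    """Collect the POIs the user interacted with (search, click, navigate)
    into a time-ordered historical behavior sequence."""
    interactions = {"search", "click", "navigate"}
    relevant = [e for e in log_events if e["action"] in interactions]
    relevant.sort(key=lambda e: e["time"])
    return [e["poi"] for e in relevant]

logs = [
    {"time": 2, "action": "click", "poi": "POI B"},
    {"time": 1, "action": "search", "poi": "POI A"},
    {"time": 3, "action": "pan_map", "poi": None},   # not a POI interaction
    {"time": 4, "action": "navigate", "poi": "POI C"},
]
print(build_behavior_sequence(logs))  # ['POI A', 'POI B', 'POI C']
```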
(b) Word vector construction:
the user's historical behavior sequence is made up of POIs, which need to be abstracted into word vectors before they can be input into the model as features.
A word vector tool (such as word2vec) and the user's POI interaction sequences are used to generate a word vector for each POI. The word vector tool can generate vector expressions with definite semantics for POIs from a large number of user behavior sequences. This semantics is mainly reflected in similar POIs having highly similar vectors; for example, the similarity between the vectors of the Summer Palace and the Old Summer Palace is high.
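To illustrate the idea that POIs appearing in similar contexts receive similar vectors, here is a toy co-occurrence embedding (an actual implementation would train word2vec, e.g. via gensim, on the POI interaction sequences; the sequences below are hypothetical):

```python
import math
from itertools import combinations

def cooccurrence_vectors(sequences):
    """Toy stand-in for a word-vector tool: represent each POI by its
    co-occurrence counts with every other POI across user sequences."""
    vocab = sorted({p for seq in sequences for p in seq})
    index = {p: i for i, p in enumerate(vocab)}
    vectors = {p: [0.0] * len(vocab) for p in vocab}
    for seq in sequences:
        for a, b in combinations(sorted(set(seq)), 2):
            vectors[a][index[b]] += 1.0
            vectors[b][index[a]] += 1.0
    return vectors

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# The two palaces are visited in similar contexts, so their vectors align.
seqs = [
    ["Summer Palace", "restaurant X", "hotel Y"],
    ["Old Summer Palace", "restaurant X", "hotel Y"],
    ["office Z", "subway station"],
]
v = cooccurrence_vectors(seqs)
sim = cosine(v["Summer Palace"], v["Old Summer Palace"])  # high similarity
```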
(c) Modeling the interest characteristics:
users often have a relatively clear goal when searching or browsing nearby; for example, a user searching for food needs nearby restaurants, and a user clicking on attractions in the nearby page needs attractions worth visiting. The user's historical behavior includes various kinds of interest points, such as restaurants, scenic spots, companies, residential areas, etc. Not all historical behaviors are related to the user's current requirement, so when modeling interests it is necessary to model the direct relationship between the user's current requirement and the historical behaviors.
The approach adopted here is to add a user-demand-scene attention module over the behavior sequence in the recall model. The inputs of the module are the user's demand scene vector and the word vector of each historical interest point, and the output of the module is the correlation weight between the two. Each historical behavior interest point is weighted and summed according to the weights output by the attention module, to obtain an interest vector that can be input to downstream modules.
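The demand-scene attention described above can be sketched as a softmax-weighted sum (all vector values here are hypothetical; in the embodiment the weights are learned end-to-end inside the recall model):

```python
import math

def attention_interest_vector(scene_vec, history_vecs):
    """Weight each historical POI vector by its softmaxed dot product with
    the demand scene vector, then sum into a single interest vector."""
    scores = [sum(s * h for s, h in zip(scene_vec, hv)) for hv in history_vecs]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(scene_vec)
    return [sum(w * hv[i] for w, hv in zip(weights, history_vecs))
            for i in range(dim)]

scene = [1.0, 0.0]          # e.g. a "food" demand scene
history = [[0.9, 0.1],      # a food POI
           [0.1, 0.9]]      # an attraction POI
iv = attention_interest_vector(scene, history)
# The food POI receives the larger weight, so it dominates the interest vector.
```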
(d) The recall model is trained end-to-end. After the whole recall model converges, the generation of the interest model is completed.
Long-term-interest application module 62:
the long-term interest application module 62 is configured to determine a recommended recall ranking result according to the user's interests. Specifically, the results of search and nearby discovery are adjusted using the user's mined explicit and implicit interests, so as to better meet the user's interests.
As shown in fig. 8, the long-term-interest application module 62 may implement the following application content:
the first is the application of explicit interests; the specific application process is as follows:
(a) According to the current user scene requirement, the explicit interests meeting the scene requirement are screened from the explicit interests mined by the explicit interest mining sub-module. The matching between explicit interests and scene requirements can be set through manual rules, and the screening is completed through the set manual rules.
(b) According to the screened explicit interests meeting the scene requirement, interest points conforming to those interests are recalled. A specific recall mode may input the user's explicit interests and the user's position into a corresponding search engine module or model to obtain the interest points. For example, for a user who prefers Japanese food, more sushi restaurants and izakayas near the user can be recalled. For example, the interest tag determined by the explicit interest service in FIG. 8 may recall the corresponding interest point (poi) queue.
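Step (b) can be illustrated with a simplified tag-and-distance filter (the POI data, tag names and 3 km radius are assumptions; the embodiment actually feeds the explicit interests and user position into a search engine module):

```python
import math

def recall_by_tag(interest_tags, user_pos, pois, radius_km=3.0):
    """Recall nearby POIs whose tag matches one of the user's explicit
    interest tags, within a given radius of the user's position."""
    def dist_km(a, b):
        # Flat-earth approximation (~111 km per degree), fine for short ranges.
        return math.hypot(a[0] - b[0], a[1] - b[1]) * 111.0
    return [p["name"] for p in pois
            if p["tag"] in interest_tags and dist_km(user_pos, p["pos"]) <= radius_km]

pois = [
    {"name": "sushi bar", "tag": "japanese food", "pos": (39.91, 116.40)},
    {"name": "izakaya",   "tag": "japanese food", "pos": (39.92, 116.41)},
    {"name": "noodles",   "tag": "chinese food",  "pos": (39.91, 116.40)},
    {"name": "far sushi", "tag": "japanese food", "pos": (40.50, 116.40)},
]
print(recall_by_tag({"japanese food"}, (39.91, 116.40), pois))
# ['sushi bar', 'izakaya']
```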
(c) The recalled contents of the various interests, together with the user's degree of attention to each interest, are transmitted to the downstream ranking module of the product, and the comprehensively ranked output queue is displayed to the user.
The second is the application of implicit interests; the specific application process is as follows:
because the modeling of implicit interests relies on the recall model of the overall system and the training is done end-to-end, the practical application is to input the user's historical behavior as a feature into the recall model. The interest model within the recall model then extracts the interest vector matching the user's requirement according to the different user requirements, and uses it as an input feature of downstream modules to return a personalized interest point queue for the user (i.e., the implicit personalized recall of the model in fig. 8).
Embodiments of the present disclosure may be applied to map applications; for example, when a user enters a general demand word in the search box or clicks on the discover-nearby feature, a list of recommended interest points may be determined. To better illustrate the technical effects achieved by the embodiments of the present disclosure, the following description refers to the application scenarios of fig. 9 to 12.
Fig. 9a, 9b, 10a and 10b are examples of the related art. Specifically, fig. 9a shows an example of a scene in which a general demand word is entered in the search box, and fig. 9b shows an example of the recommendation results after the user enters a general demand word in the search box in the related art; the related art gives a fixed search result list. Fig. 10a shows an example of a scene of clicking the discover-nearby feature, and fig. 10b shows an example of the recommendation results after the user clicks discover-nearby in the related art, which gives a fixed nearby result list.
It can be seen that the results of search and nearby discovery are fixed results ordered by the popularity and quality of the interest points. These results cannot change according to the different interests of different users. For example, in a food search or nearby restaurant discovery, some users have shown a preference for Japanese food in their previous behavior, while others have shown a preference for Sichuan cuisine; showing the same result list to both groups of users cannot meet their long-term interest demands. Similarly, when users in Beijing search for nearby attractions, the head results, such as the Forbidden City and the Summer Palace, are always returned, which cannot meet local users' touring needs.
Fig. 11a and 11b, and fig. 12a and 12b, compare the recommendation effects of the related art and the embodiments of the present disclosure for the same user behavior. As the comparison of fig. 11a and 11b shows, when the disclosed embodiment (fig. 11b) recognizes that the user has an interest preference for Japanese food, the result list shows more Japanese food than the related art (fig. 11a). Compared with the related art (fig. 12a), the embodiment of the disclosure (fig. 12b) presents more nearby scenic spots suitable for self-driving to users who prefer self-driving tours.
Compared with the related art, in terms of understanding the user, the embodiments of the present disclosure can identify the user's various long-term interests and habits, such as travel habits and preferences for interest point categories, brands, price ranges and the like. In terms of helping the user, the map's returned results when the user looks for interest points can be adjusted for users with different interest preferences, so that the map better meets the user's interest requirements. Embodiments of the present disclosure may achieve effects such as the following: for a user who prefers self-driving travel, more interest points around the city suitable for self-driving are recommended in the returned list when searching for scenic spots; a user living in Shanghai may be recommended head attractions such as the Forbidden City, while a user in Beijing may prefer non-head attractions such as the Olympic Forest Park.
FIG. 13 is a block diagram of an interest model training apparatus according to an embodiment of the present disclosure. The apparatus may include:
a historical interest point obtaining module 131, configured to obtain a historical interest point from the historical behavior data;
the interest model training module 132 is configured to train an interest model by using word vectors of historical interest points and vectors of scene requirements.
In one embodiment, as shown in fig. 14, the interest model training apparatus further includes:
the word vector obtaining module 141 is configured to process the historical interest point by using a word vector tool to obtain a word vector of the historical interest point.
In one embodiment, the interest model training module 132 is specifically configured to:
and adding an attention module in the recall model, wherein the input of the attention module comprises a vector of scene requirements and a word vector of historical interest points, and the output of the attention module comprises a correlation weight of the scene requirements and the historical interest points.
And carrying out end-to-end training on the recall model added with the attention module to obtain an interest model.
Fig. 15 is a block diagram of a point of interest recommendation device according to an embodiment of the present disclosure. The apparatus may include:
the first type interest information obtaining module 151 is configured to input historical behavior data into a first type interest model to obtain first type interest information.
The second type interest information obtaining module 152 is configured to input historical behavior data into a second type interest model to obtain second type interest information, where the second type interest model is trained by using the interest model training device according to the embodiment of the present disclosure.
The recommended interest point determining module 153 is configured to recall and sort based on the scene requirement, the first type of interest information and the second type of interest information, and obtain recommended interest points.
In one embodiment, the first type of interest model is trained using an attention model, a training sample of the attention model comprising: historical behavior data during a first time period and historical behavior data during a second time period, wherein the first time period precedes the second time period.
In one embodiment, the first type of interest model is an explicit interest model and the first type of interest information is explicit interest information;
the second type of interest model is an implicit interest model, and the second type of interest information is implicit interest information.
In one embodiment, as shown in fig. 16, the recommended point of interest determination module includes:
the first type interest point queue obtaining sub-module 161 is configured to recall based on the first type of interest information output by the first type interest model, to obtain a first type interest point queue, where the first type of interest information includes an interest tag.
The second type interest point queue obtaining sub-module 162 is configured to recall based on the second type of interest information output by the second type interest model, to obtain a second type interest point queue, where the second type of interest information includes an interest vector.
The recommended interest point queue obtaining sub-module 163 is configured to input the first type interest point queue and the second type interest point queue into a ranking model for re-ranking, to obtain a recommended interest point queue.
For the functions of each unit, module or sub-module in the apparatuses of the embodiments of the present disclosure, reference may be made to the corresponding description in the above interest model training method, which is not repeated here.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 17 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 17, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, mouse, etc.; an output unit 807, such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the respective methods and processes described above, such as an interest model training method or an interest point recommendation method. For example, in some embodiments, the interest model training method or the interest point recommendation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the interest model training method or the point of interest recommendation method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the interest model training method or the point of interest recommendation method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described herein can be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display apparatus (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (12)

1. An interest model training method, comprising:
acquiring historical interest points from historical behavior data;
training an interest model by using word vectors of historical interest points and vectors of scene requirements;
wherein the method further comprises:
processing the historical interest points by using a word vector tool to obtain word vectors of the historical interest points;
wherein training the interest model using the word vector of the historical interest point and the vector of the scene requirement comprises:
adding an attention module in a recall model, wherein the input of the attention module comprises a vector of a scene demand and a word vector of a historical interest point, and the output of the attention module comprises a correlation weight of the scene demand and the historical interest point;
and carrying out end-to-end training on the recall model added with the attention module to obtain an interest model.
2. A point of interest recommendation method, comprising:
inputting historical behavior data into a first type of interest model to obtain first type of interest information;
inputting the historical behavior data into a second type of interest model to obtain second type of interest information, wherein the second type of interest model is trained by the method of claim 1;
and recalling and sorting based on scene requirements, the first type of interest information and the second type of interest information to obtain recommended interest points.
3. The method of claim 2, wherein the first type of interest model is trained using an attention model, a training sample of the attention model comprising: historical behavior data over a first time period and historical behavior data over a second time period, wherein the first time period precedes the second time period.
4. The method of claim 3, wherein the first type of interest model is an explicit interest model and the first type of interest information is explicit interest information;
the second type of interest model is an implicit interest model, and the second type of interest information is implicit interest information.
5. The method of claim 3, wherein recalling and ordering based on scene needs, the first type of interest information, and the second type of interest information, results in recommended points of interest, comprising:
recalling based on the first type of interest information output by the first type of interest model, to obtain a first type of interest point queue, wherein the first type of interest information comprises interest tags;
recalling based on the second type of interest information output by the second type of interest model, to obtain a second type of interest point queue, wherein the second type of interest information comprises interest vectors;
and inputting the first type of interest point queue and the second type of interest point queue into a ranking model for re-ranking, to obtain a recommended interest point queue.
6. An interest model training apparatus, comprising:
the historical interest point acquisition module is used for acquiring historical interest points from the historical behavior data;
the interest model training module is used for training an interest model by utilizing word vectors of historical interest points and vectors of scene requirements;
wherein the apparatus further comprises:
the word vector acquisition module is used for processing the historical interest points by using a word vector tool to obtain word vectors of the historical interest points;
the interest model training module is specifically configured to:
adding an attention module in a recall model, wherein the input of the attention module comprises a vector of a scene demand and a word vector of a historical interest point, and the output of the attention module comprises a correlation weight of the scene demand and the historical interest point;
and carrying out end-to-end training on the recall model added with the attention module to obtain an interest model.
7. A point of interest recommendation device, comprising:
a first type interest information acquisition module, configured to input historical behavior data into a first type of interest model to obtain first type interest information;
a second type interest information acquisition module, configured to input the historical behavior data into a second type of interest model to obtain second type interest information, wherein the second type of interest model is trained using the apparatus of claim 6;
and a recommended interest point determination module, configured to perform recall and sorting based on a scene requirement, the first type interest information, and the second type interest information to obtain recommended points of interest.
8. The apparatus of claim 7, wherein the first type of interest model is trained using an attention model, a training sample of the attention model comprising: historical behavior data over a first time period and historical behavior data over a second time period, wherein the first time period precedes the second time period.
9. The apparatus of claim 7, wherein the first type of interest model is an explicit interest model, and the first type of interest information is explicit interest information; the second type of interest model is an implicit interest model, and the second type of interest information is implicit interest information.
10. The apparatus of claim 7, wherein the recommended point of interest determination module comprises:
a first type interest point queue obtaining sub-module, configured to perform recall based on the first type of interest information output by the first type of interest model to obtain a first type of interest point queue, wherein the first type of interest information comprises interest tags;
a second type interest point queue obtaining sub-module, configured to perform recall based on the second type of interest information output by the second type of interest model to obtain a second type of interest point queue, wherein the second type of interest information comprises interest vectors;
and a recommended interest point queue obtaining sub-module, configured to input the first type of interest point queue and the second type of interest point queue into a ranking model for reordering to obtain a recommended interest point queue.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-5.
CN202011550110.7A 2020-12-24 2020-12-24 Interest model training method, interest point recommending method, device and equipment Active CN112559879B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011550110.7A CN112559879B (en) 2020-12-24 2020-12-24 Interest model training method, interest point recommending method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011550110.7A CN112559879B (en) 2020-12-24 2020-12-24 Interest model training method, interest point recommending method, device and equipment

Publications (2)

Publication Number Publication Date
CN112559879A CN112559879A (en) 2021-03-26
CN112559879B true CN112559879B (en) 2023-10-03

Family

ID=75033367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011550110.7A Active CN112559879B (en) 2020-12-24 2020-12-24 Interest model training method, interest point recommending method, device and equipment

Country Status (1)

Country Link
CN (1) CN112559879B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113220816A (en) * 2021-05-19 2021-08-06 北京百度网讯科技有限公司 Data processing method, device and equipment for POI (Point of interest) of electronic map
CN113254782B (en) * 2021-06-15 2023-05-05 济南大学 Question-answering community expert recommendation method and system
CN113656698B (en) * 2021-08-24 2024-04-09 北京百度网讯科技有限公司 Training method and device for interest feature extraction model and electronic equipment
CN115329211B (en) * 2022-08-01 2023-06-06 山东省计算中心(国家超级计算济南中心) Personalized interest recommendation method based on self-supervision learning and graph neural network
CN117992676B (en) * 2024-04-02 2024-06-07 福建省君诺科技成果转化服务有限公司 Intelligent scientific and technological achievement recommendation method based on big data

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8645366B1 (en) * 2011-12-30 2014-02-04 Google Inc. Generating recommendations of points of interest
US8782034B1 (en) * 2011-08-17 2014-07-15 Google Inc. Utilizing information about user-visited places to recommend novel spaces to explore
CN104063383A (en) * 2013-03-19 2014-09-24 北京三星通信技术研究有限公司 Information recommendation method and device
US9282161B1 (en) * 2012-10-26 2016-03-08 Amazon Technologies, Inc. Points of interest recommendations
CN106919641A (en) * 2017-01-12 2017-07-04 北京三快在线科技有限公司 A kind of interest point search method and device, electronic equipment
CN110020144A (en) * 2017-11-21 2019-07-16 腾讯科技(深圳)有限公司 A kind of recommended models method for building up and its equipment, storage medium, server
CN110930203A (en) * 2020-02-17 2020-03-27 京东数字科技控股有限公司 Information recommendation model training method and device and information recommendation method and device
CN111708876A (en) * 2020-06-16 2020-09-25 北京百度网讯科技有限公司 Method and device for generating information

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8782034B1 (en) * 2011-08-17 2014-07-15 Google Inc. Utilizing information about user-visited places to recommend novel spaces to explore
US8645366B1 (en) * 2011-12-30 2014-02-04 Google Inc. Generating recommendations of points of interest
US9282161B1 (en) * 2012-10-26 2016-03-08 Amazon Technologies, Inc. Points of interest recommendations
CN104063383A (en) * 2013-03-19 2014-09-24 北京三星通信技术研究有限公司 Information recommendation method and device
CN106919641A (en) * 2017-01-12 2017-07-04 北京三快在线科技有限公司 A kind of interest point search method and device, electronic equipment
CN110020144A (en) * 2017-11-21 2019-07-16 腾讯科技(深圳)有限公司 A kind of recommended models method for building up and its equipment, storage medium, server
CN110930203A (en) * 2020-02-17 2020-03-27 京东数字科技控股有限公司 Information recommendation model training method and device and information recommendation method and device
CN111708876A (en) * 2020-06-16 2020-09-25 北京百度网讯科技有限公司 Method and device for generating information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ma Libo; Qin Xiaolin. Topic-, Location-, and Category-Aware Point-of-Interest Recommendation. Computer Science. 2020, (09), full text. *

Also Published As

Publication number Publication date
CN112559879A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
CN112559879B (en) Interest model training method, interest point recommending method, device and equipment
CN112612957B (en) Recommendation method of interest points and training method and device of recommendation model of interest points
CN110941740B (en) Video recommendation method and computer-readable storage medium
US20090158161A1 (en) Collaborative search in virtual worlds
KR20200003106A (en) Information retrieval methods, devices and systems
JP2018517959A (en) Selecting a representative video frame for the video
CN104850546B (en) Display method and system of mobile media information
CN109657140A (en) Information-pushing method, device, computer equipment and storage medium
CN109168047B (en) Video recommendation method and device, server and storage medium
TW201214173A (en) Methods and apparatus for displaying content
CN109241243B (en) Candidate document sorting method and device
CN111666292B (en) Similarity model establishment method and device for retrieving geographic position
CN111814077B (en) Information point query method, device, equipment and medium
CN110597962A (en) Search result display method, device, medium and electronic equipment
US10592514B2 (en) Location-sensitive ranking for search and related techniques
CN112632379A (en) Route recommendation method and device, electronic equipment and storage medium
CN111400586A (en) Group display method, terminal, server, system and storage medium
CN112231580B (en) Information recommendation method and device based on artificial intelligence, electronic equipment and storage medium
CN114265981A (en) Recommendation word determining method, device, equipment and storage medium
CN112597389A (en) Control method and device for realizing article recommendation based on user behavior
CN111159242B (en) Client reordering method and system based on edge calculation
CN111666461A (en) Method, apparatus, device and computer storage medium for retrieving geographical location
CN109791545A (en) The contextual information of resource for the display including image
KR20200133976A (en) Contents Curation Method and Apparatus thereof
US20170109411A1 (en) Assisted creation of a search query

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant