EP2162828A1 - Recommendation system with multiple integrated recommendation tools

Recommendation system with multiple integrated recommendation tools

Info

Publication number
EP2162828A1
Authority
EP
European Patent Office
Prior art keywords
scores
recommendations
candidate
recommender
user
Prior art date
Legal status
Withdrawn
Application number
EP08771411A
Other languages
German (de)
English (en)
Other versions
EP2162828A4 (fr)
Inventor
Kushal Chakrabarti
James D. Chan
George M. Ionkov
Sung H. Kim
Brett W. Witt
Current Assignee
Amazon Technologies Inc
Original Assignee
Amazon Technologies Inc
Priority claimed from US 11/772,010 (US 7,949,659 B2)
Priority claimed from US 11/771,914 (US 8,260,787 B2)
Application filed by Amazon Technologies Inc
Publication of EP 2162828 A1
Publication of EP 2162828 A4

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising

Definitions

  • Web sites and other types of interactive systems commonly include recommendation systems for providing personalized recommendations of items stored or represented in a data repository.
  • the recommendations are typically generated based on monitored user activities or behaviors, such as item purchases, item viewing events, item rentals, and/or other types of item selection actions.
  • the recommendations are additionally or alternatively based on users' explicit ratings of items.
  • item-to-item similarity mappings may be generated periodically based on computer-detected correlations between the item purchases, item viewing events, or other types of item selection actions of a population of users. Once generated, a dataset of item-to-item mappings may be used to identify and recommend items similar to those already "known" to be of interest to the user.
  • a recommendations system for selecting items to recommend to users.
  • the system comprises a recommendation engine comprising a plurality of recommenders.
  • Each recommender corresponds to a different type of reason for recommending items, and is operative to: retrieve item preference data reflective of actions performed by a user; generate candidate recommendations responsive to a subset of the item preference data; identify one or more reasons for recommending the candidate recommendations; and score the candidate recommendations to provide relative indications of the strength of the candidate recommendations.
  • the recommendations system also comprises a normalization engine operative to normalize the scores of the candidate recommendations provided by each recommender.
  • the recommendations system further comprises a candidate selector component operative to: select at least a portion of the candidate recommendations based on the normalized scores to provide as recommendations to the user, and output the recommendations with associated reasons for recommending the items.
  • a computer-implemented method of selecting items to recommend comprises: retrieving item preference data reflective of actions performed by a user; and providing the item preference data to a plurality of recommenders, each recommender corresponding to a different type of reason for recommending items. Each recommender is operative to generate candidate recommendations responsive to a subset of the item preference data, and to identify one or more reasons for recommending the candidate recommendations. The method also comprises selecting at least a portion of the candidate recommendations to provide as recommendations to the user; and outputting the recommendations with associated reasons for recommending the items.
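The end-to-end method above can be sketched in Python. The recommender signature, the helper names, and the top-N cut are illustrative assumptions, not the patent's implementation:

```python
from typing import Callable, List, Tuple

# Hypothetical signature: a recommender maps item preference data to
# (item, score, reason) triples. All names here are illustrative.
Recommender = Callable[[dict], List[Tuple[str, float, str]]]

def recommend(preference_data: dict,
              recommenders: List[Recommender],
              top_n: int = 5) -> List[Tuple[str, float, str]]:
    """Run every recommender on the user's preference data, pool the
    candidate recommendations, and return the strongest ones together
    with their reasons."""
    candidates: List[Tuple[str, float, str]] = []
    for recommender in recommenders:
        candidates.extend(recommender(preference_data))
    # Keep the N most highly scored candidates.
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:top_n]
```

Each recommender remains a black box here, which mirrors the modularity the disclosure emphasizes: recommenders can be added or removed without changing the surrounding pipeline.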
  • the apparatus comprises means for retrieving item preference data reflective of actions performed by a user; and means for providing the item preference data to a plurality of recommenders.
  • Each recommender corresponds to a different type of reason for recommending items, and is operative to: generate candidate recommendations responsive to a subset of the item preference data, and identify one or more reasons for recommending the candidate recommendations.
  • the apparatus further includes means for selecting at least a portion of the candidate recommendations to provide as recommendations to the user.
  • the method comprises receiving scores for candidate recommendations from first and second recommenders configured to provide recommendations to a target user, the first recommender operative to assign the scores to the candidate recommendations using a different scoring scale from the second recommender.
  • the method also comprises, for each recommender, normalizing the scores assigned by the recommender by: calculating a range of scores, the range comprising a difference between a minimum score and a maximum score, and calculating normalized scores as a function of the range.
  • the method further comprises using the normalized scores to select at least a portion of the candidate recommendations to recommend to the target user.
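One way to realize the range-based normalization described above is a min-max rescaling. This is a sketch under the assumption that scores are mapped onto [0, 1]; the claim only requires normalizing as a function of the range:

```python
def normalize_by_range(scores):
    """Min-max normalization: rescale one recommender's scores onto
    [0, 1] using the range (maximum score minus minimum score), so that
    scores from recommenders with different scales become comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:  # degenerate case: all scores identical
        return [1.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]
```

For example, a recommender emitting scores 90, 105, and 120 would be normalized to 0.0, 0.5, and 1.0, directly comparable with a recommender whose raw scale runs from -10,000 to 10,000.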
  • Also disclosed is a computer-implemented method of normalizing item recommendation scores comprising: receiving scores for candidate recommendations from first and second recommenders configured to provide recommendations to a target user, the first recommender configured to assign the scores to the candidate recommendations using a different scoring scale from the second recommender; for each recommender, normalizing the scores assigned by the recommender by: combining the scores for at least some of the candidate recommendations to generate a combined score, and calculating normalized scores as a function of the combined score and the scores for at least some of the candidate recommendations; and using the normalized scores to select at least a portion of the candidate recommendations to recommend to the target user.
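The combined-score normalization could, for example, divide each score by the recommender's total score mass. This particular function is an assumption; the claim only requires that normalized scores be a function of the combined score and the individual scores:

```python
def normalize_by_sum(scores):
    """Normalize scores as a function of their combined total: each
    normalized score is the item's share of the recommender's summed
    score mass. Assumes non-negative scores."""
    total = sum(scores)
    if total == 0:
        return [0.0 for _ in scores]
    return [s / total for s in scores]
```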
  • Yet another disclosed computer-implemented method of normalizing item recommendation scores comprises: receiving scores for candidate recommendations from first and second recommenders configured to provide recommendations to a target user, the first recommender operative to assign the scores to the candidate recommendations using a different scoring scale from the second recommender; for each recommender, normalizing the scores assigned by the recommender by assigning percentile rankings to the scores and using the percentile rankings as normalized scores; and using the normalized scores to select at least a portion of the candidate recommendations to recommend to the target user.
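The percentile-ranking variant might look like the following sketch, where each score is replaced by the percentage of the recommender's scores that fall strictly below it (one of several reasonable percentile definitions):

```python
def normalize_by_percentile(scores):
    """Assign each score its percentile ranking within the recommender's
    own score list: the percentage of that recommender's scores that
    fall strictly below it."""
    n = len(scores)
    return [100.0 * sum(1 for other in scores if other < s) / n
            for s in scores]
```

Because percentiles always land on a 0-100 scale, the original scoring scales of the individual recommenders drop out entirely.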
  • the system comprises a plurality of recommenders operative to assign scores to candidate recommendations using different scoring scales; and a normalization engine operative to normalize scores assigned by the plurality of recommenders.
  • the normalization engine is operative to: calculate a range of scores, the range comprising a difference between a minimum score and a maximum score, and calculate normalized scores as a function of the range.
  • the system also comprises a candidate selector operative to use the normalized scores to select at least a portion of the candidate recommendations to recommend to a target user.
  • Another system for normalizing item recommendation scores comprises: a plurality of recommenders operative to assign scores to candidate recommendations using different scoring scales; and a normalization engine operative to normalize scores assigned by the plurality of recommenders.
  • the normalization engine is operative to combine the scores for at least some of the candidate recommendations to generate a combined score, and to calculate normalized scores as a function of the combined score and of the scores for at least some of the candidate recommendations.
  • the system also comprises a candidate selector operative to use the normalized scores to select at least a portion of the candidate recommendations to recommend to a target user.
  • Another system for normalizing item recommendation scores comprises: a plurality of recommenders operative to assign scores to candidate recommendations using different scoring scales; and a normalization engine operative to normalize scores assigned by the plurality of recommenders.
  • the normalization engine is configured to assign percentile rankings to the scores and to use the percentile rankings as normalized scores.
  • the system also comprises a candidate selector operative to use the normalized scores to select at least a portion of the candidate recommendations to recommend to a target user.
  • FIGURE 1 illustrates an embodiment of a recommendation system
  • FIGURE 2 illustrates an embodiment of a process for generating item recommendations for a user
  • FIGURE 3A illustrates an embodiment of a process for generating tag-based item recommendations for a user
  • FIGURE 3B illustrates another embodiment of a process for generating tag-based item recommendations for a user
  • FIGURE 4 illustrates an embodiment of a process for normalizing item recommendation scores
  • FIGURE 5 illustrates another embodiment of a process for normalizing item recommendation scores
  • FIGURE 6 illustrates yet another embodiment of a process for normalizing item recommendation scores
  • FIGURE 7 illustrates yet another embodiment of a process for normalizing item recommendation scores
  • FIGURE 8 illustrates a portion of a web page showing an example recommendation interface
  • FIGURE 9 illustrates one example of how the various recommendation features may be implemented in the context of a web-based electronic catalog system.
  • the processes are described primarily in the context of a system that recommends catalog items to users of an e-commerce web site that provides functionality for users to browse and make purchases from an electronic catalog of items.
  • the disclosed processes can also be used in other types of systems, and can be used to recommend other types of items, such as but not limited to web sites, news articles, blogs, podcasts, travel destinations, service providers, other users, events, discussion boards, photos and other images, videos, tagged items, and user-generated lists of items.
  • the disclosed processes need not be implemented as part of, or in conjunction with, a web site.
  • a significant deficiency in existing recommendation systems is that they typically use a single, monolithic algorithm for generating recommendations. These algorithms are often inflexible and not easily adapted to producing recommendations targeted at different customer wants or needs. For example, a recommendation algorithm might recommend items because they are similar to an item the customer purchased. However, on a given day the customer might be interested in shopping for a friend's birthday or exploring new interests rather than buying items similar to what the customer already owns.
  • the recommendation system 100 includes multiple recommenders 112 for generating recommendations that target users' varied interests.
  • the recommenders 112 provide reasons for recommending items that can be more compelling than reasons provided by other systems, thereby increasing consumer confidence in the recommendations.
  • the various components of the recommendation system 100 may be implemented as software applications, modules, or components on one or more computers, such as servers. While the various components are illustrated separately, they may share some or all of the same underlying logic or code.
  • the recommendation system 100 receives item preference data 102 and uses the item preference data 102 to produce personalized item recommendations for a target user.
  • the item preference data 102 is reflective of actions performed by the user. These actions might include, for example, purchasing items, rating items, adding items to the user's wish list, providing data on the user's friends, tagging items, searching for items, and the like.
  • the item preference data 102 may include browse history data, purchase history data, friends data, tags data, and many other types of data.
  • the item preference data 102 is provided to a recommendation engine 110.
  • the recommendation engine 110 includes multiple recommenders 112.
  • each recommender 112 may be implemented as a component or algorithm that generates personalized item recommendations targeted to a different interest or need of a user.
  • the multiple recommenders 112 of the recommendation engine 110 can provide more effective recommendations than the monolithic algorithms of currently available systems.
  • each recommender 112 analyzes a subset of the item preference data to identify items as candidate recommendations for recommending to a user.
  • Each recommender 112 also identifies one or more reasons for recommending the items. As discussed below, different recommenders 112 may use different types of item preference data to select candidate items to recommend. Different recommenders 112 may also provide different types of reasons for recommending items.
  • a particular recommender 112 might retrieve the user's purchase history data. Using this data, the recommender 112 can find items owned by the user that are part of a series. A series might include, for instance, books in a trilogy, movies and their sequels, or all albums by a musician. If the user has purchased fewer than all the items in the series, the recommender 112 might select the remaining items as candidate recommendations and provide a reason such as, "this item is recommended because you purchased items A and B, and this item would complete your series."
  • this reason can be more compelling than a reason such as "because you purchased items A and B, and this item is similar." Users may therefore be more inclined to trust the reasons provided by the recommenders 112.
  • a recommender 112 might obtain data about a user's friends. This friends data might include information on the friends' birthdays, their wish lists, and their purchase histories. Using this data, a recommender 112 might suggest gifts that could be bought for a friend's upcoming birthday and provide a reason such as "this item is recommended because your friend John's birthday is on July 5th, and this item is on his wish list." Provided with such a reason, the user might be more inclined to buy the item.
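The series-completion recommender described above might be sketched as follows. The scoring formula (share of the series owned, boosted for better-selling items) and the data shapes are illustrative assumptions:

```python
def complete_series_candidates(purchased, series, sales_rank):
    """Sketch of a series-completion recommender: if the user owns some
    but not all items in a series, recommend the missing items, citing
    the owned items in the reason."""
    owned = [item for item in series if item in purchased]
    missing = [item for item in series if item not in purchased]
    if not owned or not missing:
        return []
    reason = ("recommended because you purchased %s, and this item "
              "would complete your series" % " and ".join(owned))
    # Lower sales rank means a better seller, so divide by the rank;
    # items missing from the rank table get an assumed default of 1000.
    return [(item,
             (len(owned) / len(series)) / sales_rank.get(item, 1000),
             reason)
            for item in missing]
```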
  • item preference data 102 may be used by the recommenders 112 to generate candidate recommendations and corresponding reasons.
  • browse history data (e.g., data on user searches, clicks, and the like) may similarly be used to generate candidate recommendations and corresponding reasons.
  • Purchase history data and/or wish list data might be used to provide a recommendation with the reason, "because this item might be interesting to an early adopter such as you.”
  • browse history data on a browse node of interest to the user (e.g., a category browsed by the user) might be used to recommend top sellers in that browse node.
  • Various other forms of item preference data 102 may be used to provide recommendations with reasons such as "because you recently moved,” “because you bought an item that may need replacing,” “because most people upgrade their DVD player after two years,” or the like.
  • recommenders 112 may each provide the same candidate recommendation along with a different reason for that recommendation. For instance, several recommenders 112 may be used to recommend a particular war movie because 1) a user recently rated several war movies, 2) it is the best-selling movie in the war movie category, and 3) it was nominated for two Academy Awards. Using multiple reasons may provide further motivation to the user to view or buy an item.
  • the user may also see greater diversity in the reasons that are provided. For example, the user may see one recommendation that is based on an item the user purchased, another based on one or more search queries submitted by the user, and another based on an item listed on a friend's wish list.
  • the diversity of recommendations and reasons provided to the user may heighten user interest in the recommendations.
  • At least some of the recommenders 112 are modular. Recommenders 112 can therefore be selectively added to or removed from the recommendation engine 110. As more diverse items or services are added to an online catalog, for instance, new recommenders 112 can be added that target different user interests. Conversely, some recommenders 112 may be removed from the recommendation engine 110 if they become less useful.
  • Some of the recommenders 112 may use particular types of behavior-based associations to select candidate items to recommend.
  • one recommender may use purchase-based item associations, as generated by mining the purchase histories of large numbers of users, to select candidate items similar to those purchased or owned by the target user.
  • a particular recommender may use item-viewing-based associations, as generated by mining the item viewing histories of large numbers of users, to select candidate items similar to those recently viewed by the target user.
  • Another recommender may use behavior-based associations between particular search queries and items to select candidate items that are related to the search history of the target user.
  • recommenders may select candidate items that are unusually popular in the particular geographic region of the target user, or that are unusually popular among users whose email addresses contain the same domain name (e.g., nasa.gov) as the target user. Examples of recommendation methods that use these approaches are described in the following U.S. patent documents, the disclosures of which are hereby incorporated by reference in their entirety: U.S. Patent Nos. 6,853,982 and 6,963,850, and U.S. Appl. No. 10/966,827, filed October 15, 2004.
  • because the recommenders 112 are modular, they can be added to an existing recommendation system to improve the quality of recommendations provided by the system.
  • the recommenders 112 in certain implementations score the candidate recommendations.
  • the scores can provide indications of the relative strength of the candidate recommendations.
  • Each recommender uses one or more factors to generate the scores.
  • a recommender 112 that provides recommendations to complete series of items owned by the user might base scores on the total number of items in a series, the number of those items owned by the user, and the sales rank of the items not owned by the user.
  • One or more of the recommenders 112 may further take into account negative feedback provided by a user when generating and scoring candidate recommendations, as described in related U.S. Patent Application No. 11/752,251, filed May 22, 2007, and titled "Probabilistic Recommendation System," the disclosure of which is hereby incorporated by reference in its entirety.
  • Negative feedback may be used for items the user has explicitly rated poorly, such as by designating as "not interested" or by rating two stars or less on a scale of 1-5 stars (see FIGURE 7).
  • Other types of negative feedback, including implicit negative feedback, may be used to score candidate recommendations.
  • negative feedback can cause a candidate recommendation to receive a negative score.
  • a candidate recommendation may also have an overall score that is the sum of both positive scores and negative scores.
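Combining positive and negative contributions into an overall score can be as simple as summing them per item; the event-list representation used here is an assumption:

```python
def overall_scores(scored_events):
    """Sum each item's positive and negative score contributions into a
    single overall score. Strong negative feedback (e.g., a low star
    rating) can drive an item's total below zero."""
    totals = {}
    for item, score in scored_events:
        totals[item] = totals.get(item, 0.0) + score
    return totals
```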
  • the scoring used by each recommender 112 may be based on factors that might be pertinent to one recommender 112 but not another. For instance, recommendations for top sellers in a browse node of interest to the user might score items based on their relative sales ranking. However, relative sales ranking might not be relevant to recommendations for items an early adopter might buy, since there may be little sales data for these items.
  • the resulting scores from each recommender 112 can have different scoring scales.
  • One recommender 112 might output, for example, scores in a range of -10,000 to 10,000, whereas another recommender 112 might output scores in a range of 90 to 120. It can be difficult to compare scores from these different score ranges.
  • the same score outputted by different recommenders may have different meanings because the underlying scoring methodologies may be different. For instance, a score of "2" from one recommender that has a scoring scale of 0 to 100 may have a different meaning than a score of "2" from a recommender that has a scoring scale of 1 to 5.
  • a normalization engine 120 normalizes the scores from the various recommenders 112 to produce normalized scores.
  • the normalized scores enable the candidate recommendations generated by each recommender 112 to be more easily compared.
  • Many different algorithms may be used to normalize the scores. A few example embodiments of these algorithms are described below, with respect to FIGURES 4 through 6.
  • the normalization engine 120 facilitates adding or removing modular recommenders 112 to the recommendation engine 110. The normalization engine 120 facilitates this by normalizing scores from any recommender 112 added to the recommendation engine 110. Consequently, recommenders 112 may be added that use different scoring scales from the other recommenders 112.
  • the normalization engine 120 also facilitates removing recommenders 112 from the recommendation engine 110 because scores from the remaining recommenders 112 are normalized and can therefore still be compared.
  • the normalization engine 120 can also apply weights to the output from each recommender 112.
  • the weights in one embodiment are multipliers that effectively increase or decrease candidate recommendations' normalized scores. Weights may be applied to emphasize the output of certain recommenders 112 over others. Because some recommenders 112 may produce stronger recommendations than others, applying weights emphasizes the stronger recommendations and deemphasizes the weaker recommendations.
  • the weights may be adjusted for each user to reflect the user's preferences. For instance, if a particular user demonstrates an affinity for items selected by a particular recommender, that recommender's selections may be weighted more heavily for this particular user. These weights may also be adjusted over time to reflect the user's changing interests.
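Per-user recommender weights could be applied as simple multipliers on the normalized scores, as in this sketch (the weight values and data shapes are illustrative):

```python
def apply_weights(candidates_by_recommender, weights):
    """Apply per-user, per-recommender weight multipliers to normalized
    scores, so recommenders whose picks the user has responded to count
    for more. Recommenders without an entry default to a weight of 1.0."""
    weighted = []
    for name, candidates in candidates_by_recommender.items():
        w = weights.get(name, 1.0)
        weighted.extend((item, score * w) for item, score in candidates)
    return weighted
```

A feedback loop could raise a recommender's weight when the user clicks or buys its recommendations and lower it otherwise, tracking the user's changing interests over time.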
  • multiple recommenders 112 will generate the same candidate recommendation.
  • One option in this scenario is to add the scores for the candidate provided by each recommender 112. Adding the scores causes a candidate to appear stronger, indicating that candidates provided by multiple recommenders may be good candidates.
  • a potential problem with this approach is that when two recommenders 112 generate a poor candidate, the addition of the scores makes the candidate look stronger than it should.
  • the normalization engine 120 in one embodiment therefore applies exponential decay to the scores, such that scores for the same item are given exponentially less weight as more recommenders 112 recommend the same item. Other decay functions may also be used, such as linear decay.
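One plausible form of the exponential decay is to weight an item's k-th-highest score by decay**k, so each additional recommender contributes exponentially less; the decay factor of 0.5 is an assumed parameter:

```python
def combine_with_decay(scores_for_item, decay=0.5):
    """Combine one item's scores from several recommenders, weighting
    the k-th highest score by decay**k so each additional recommender
    adds exponentially less to the combined score."""
    total = 0.0
    for k, score in enumerate(sorted(scores_for_item, reverse=True)):
        total += score * (decay ** k)
    return total
```

Three recommenders each scoring an item 1.0 thus combine to 1.75 rather than 3.0, which tempers the boost a mediocre candidate gets from merely being recommended often.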
  • the normalization engine 120 passes the candidate recommendations to the candidate selector 130.
  • the candidate selector 130 selects a subset of the candidate recommendations to recommend to the user based on the candidates' normalized scores. For example, the candidate selector 130 may select the N most highly scored candidates to recommend. Alternatively, the candidate selector 130 may select a different subset. For example, in some cases it can be beneficial to show recommendations that are not determined to be the best in order to provide fresh recommendations to the user, among other reasons.
  • the candidate selector 130 may provide the entire set of candidates to the user. Because this set is typically large (e.g., several thousand items), a user interface used to display the recommendations may allow the user to page or scroll through this recommendations set from highest to lowest ranking. Because users commonly do not take the time to scroll or page through the entire set of recommendations, the practical effect is the same as selecting a subset, i.e., the user is only presented with those items falling near the top of the list.
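A candidate selector that mostly takes the top-scored items but reserves a few slots for lower-ranked picks, to keep recommendations fresh, might look like this sketch; the fresh_fraction knob is an assumption:

```python
import random

def select_candidates(scored, n=5, fresh_fraction=0.2, rng=None):
    """Select mostly the top-scored candidates, but fill a fraction of
    the slots with randomly chosen lower-ranked items so that the
    recommendations shown to the user stay fresh."""
    rng = rng or random.Random(0)
    ranked = sorted(scored, key=lambda c: c[1], reverse=True)
    n_fresh = int(n * fresh_fraction)
    picks = ranked[:n - n_fresh]          # guaranteed top-scored slots
    rest = ranked[n - n_fresh:]           # pool for the fresh slots
    picks += rng.sample(rest, min(n_fresh, len(rest)))
    return picks
```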
  • the candidate selector 130 may output, with the recommendations, associated reasons for recommending the items. As described above, a single reason may be provided for each recommendation, or multiple reasons may be provided.
  • FIGURE 2 illustrates an embodiment of a process 200 for generating item recommendations for a user.
  • the process 200 is implemented in one embodiment by a recommendation system, such as the recommendation system 100 of FIGURE 1.
  • the process 200 begins at 202 by retrieving item preference data associated with a user. This step may be performed by a recommendation engine, such as the recommendation engine 110 of FIGURE 1. At 204, the process 200 generates candidate recommendations using multiple recommenders. In an embodiment, this step is performed by analyzing item preference data to identify one or more reasons for recommending candidate recommendations to a user.
  • the process 200 scores the candidate recommendations. This step may also be performed by the recommenders.
  • the scores can provide indications of the relative strength of the candidate recommendations.
  • the process 200 in one embodiment scores candidate recommendations from different recommenders using scoring scales that may be based on factors pertinent to one recommender but not another. In an embodiment, the process 200 also provides negative feedback scores.
  • the process 200 normalizes scores from each recommender.
  • This step may be performed by a normalization engine, such as the normalization engine 120 of FIGURE 1.
  • the normalized scores enable the candidate recommendations to be more easily compared.
  • this step further includes assigning weights to the scores provided by the recommenders so that some recommenders may be emphasized over others.
  • the process 200 may also normalize scores using an exponential decay function, to reduce the effect of the same item being recommended by multiple recommenders.
  • the process 200 selects candidates based on the normalized scores. This step may be performed by a candidate selector, such as the candidate selector 130 of FIGURE 1.
  • the process 200 may select a subset of most highly scored candidates to recommend, or alternatively, provide a different subset of the entire set of candidates as recommendations.
  • the process 200 outputs recommendations with reasons for recommending the candidate items. This step may also be performed by a candidate selector.
  • FIGURE 3A illustrates an embodiment of a process 300A for generating tag-based item recommendations for a user.
  • the process 300A is implemented in one embodiment by a recommender, such as one of the recommenders 112 of FIGURE 1.
  • Items are tagged in certain embodiments through a user interface that allows users to flexibly apply user-defined tags to individual items in an electronic catalog.
  • the tags may, for example, be in the form of textual annotations or labels that are typed in by users, although other forms of content are possible.
  • the tags and tag-item assignments created by each user are stored persistently in association with the user, and may be kept private to the user or exposed to others.
  • a user can flexibly define personal item categories or groupings. For example, a user might create the tag "work” for tagging items relevant to the user's profession, or might create the tag "Tom" for tagging potential items to purchase for a friend or family member named Tom.
  • the users may also have the option to make their tags "public," meaning that these tags are exposed to other users. Further details on how tags are created are described in U.S. Patent Application No. 11/281,886, filed November 17, 2005, and titled "Recommendations Based on Item Tagging Activities of Users," the disclosure of which is hereby incorporated by reference in its entirety.
  • the process 300A begins at 302 by identifying a tagged item associated with, although not necessarily tagged by, a target user. This step is performed in one embodiment by searching item preference data of the target user to find tagged items that the user has purchased, added to a wish list or shopping cart, rated, searched for, or the like.
  • the tags associated with the tagged items need not have been created by the user, although they may have been in some instances. In one embodiment, only public tags are used.
  • the process 300A selects one or more of the tags associated with the tagged item. As items can have multiple tags, the process 300A may select the most popular tag, which may be a tag most frequently attached to the item. Alternatively, the process 300A may select other tags, such as the top three most popular tags.
  • the process 300A at 306 performs a search using one or more of the selected tags.
  • the search results are related to the information contained in the tags. Since the tags describe a product associated with the user, at least some of the search results may include items that the user would find interesting.
  • the process 300A uses at least some of the items in the search result list as candidate recommendations. The process 300A might score the items, for instance, based on search result relevance scores returned by the search engine. In addition, the process 300A may also provide reasons for recommending the items.
  • a user might have purchased a movie in the past starring the fictional character James Bond™.
  • the process 300A can select this movie from the item preference data of the user and determine what tags, if any, are associated with the item. Some possible tags might be "James Bond” and "adventure.”
  • the process 300A may then perform a keyword search of an electronic database or catalog using these tags as keywords. The scope of this search may optionally be limited to a particular type of item or collection of items, such as "all products” or "all movies.”
  • the search results might include more James Bond™ movies, James Bond™ books, other action or adventure movies, and so forth. Since at least some of these items are probably related to the movie purchased by the user, some or all of these items may be used as recommendations. Additionally, the process 300A may provide a reason for recommending the items that includes a reference to the tag searched on, such as "recommended because you purchased a movie starring James Bond."
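The tag-based flow of process 300A (pick the item's most popular tag, search on it, reuse the search scores) might be sketched as follows; the search function and data shapes are stand-ins for a catalog search engine:

```python
def tag_based_candidates(tagged_item, tag_counts, search):
    """Sketch of process 300A: take the most popular tag on an item
    associated with the user, run a keyword search on that tag, and
    reuse the search relevance scores for the candidates. `search` is a
    stand-in returning a list of (item, relevance_score) pairs."""
    counts = tag_counts[tagged_item]
    top_tag = max(counts, key=counts.get)  # most frequently applied tag
    reason = ('recommended because you purchased an item tagged "%s"'
              % top_tag)
    return [(item, score, reason)
            for item, score in search(top_tag)
            if item != tagged_item]  # do not recommend the item itself
```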
  • FIGURE 3B illustrates another embodiment of a process 300B for generating tag-based item recommendations for a user.
  • the process 300B is also implemented in one embodiment by a recommender, such as one of the recommenders 112 of FIGURE 1.
  • the process 300B begins by identifying a number N of tagged items associated with a target user at 320.
  • the items may be associated with the user through the user's purchases, items added to a wish list or shopping cart, items the user rated, items the user searched for, or the like.
  • the process 300B identifies all of the items associated with a user.
  • the process 300B identifies a subset of these items, such as items that were more recently associated with the user.
  • the process 300B can reduce the processing burden on a system implementing the process 300B.
  • the process 300B identifies tags associated with the N items. Since each item may have multiple tags, there may be a large number of tags among the N items. From this group of tags, the process 300B selects tags at 324 that satisfy specified criteria. For instance, the process 300B might select a threshold number of the most popular tags, such as the ten most popular tags. Or the process 300B might select all tags that were applied to an item a threshold number of times, such as 3 times.
  • the process 300B then performs a search to obtain a list of scored items at 326.
  • the process 300B does this in one embodiment by sending the tags to a search engine, which performs a search for each tag separately.
  • the search engine returns a ranked list of scored items for each tag searched on.
  • the scores may be based on, for example, the search result scores for each item.
  • the process 300B performs a search for all of the tags at once, using a logical OR operation.
  • the process 300B at 328 merges the lists of scored items while adding scores of alike items. Merging the lists of scored items includes re-ranking the scored items according to their search results scores to produce a single ranked list of items. The scores of alike items (items appearing in multiple lists) are added to increase the scores, and hence rankings, of these items.
  • the process 300B selects a set of top scored items from the merged list to provide as candidate recommendations. This step can include, for instance, selecting a threshold number of items, such as 10 items, or selecting items having a score above a threshold score.
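A sketch of steps 328 and 330, in which the per-tag result lists are merged, the scores of alike items are summed, and a set of top-scored items is kept. The function name and data shapes are illustrative.

```python
from collections import defaultdict

def merge_and_select(scored_lists, top_n):
    """Merge per-tag search result lists into one ranked list.

    Items appearing in multiple lists have their scores added,
    raising their ranking; the top_n items become the candidate
    recommendations.
    """
    combined = defaultdict(float)
    for scored_list in scored_lists:
        for item, score in scored_list:
            combined[item] += score  # alike items accumulate score
    ranked = sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_n]
```

Selecting by a score threshold instead of a count would replace the final slice with a filter on the score value.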
  • FIGURE 4 illustrates an embodiment of a process 400 for normalizing item recommendation scores.
  • the process 400 is implemented in one embodiment by a normalization engine, such as the normalization engine 120 of FIGURE 1.
  • the process 400 begins at 402 by receiving candidate recommendation scores from a recommender. As described above, the scores received from one recommender may differ in scale from scores received from other recommenders. At 404, the process 400 calculates the range of the scores by subtracting the minimum score from the maximum score. Thus, for example, if the minimum score assigned to a candidate recommendation is 10, and the maximum score is 120, then the range is 120 - 10, or 110.
  • the process 400 at 406 subtracts the minimum score value from each score provided by the recommender in order to generate a set of translated scores.
  • This step causes the normalized scores to be less than or equal to 1 after step 408. In some embodiments, this step is optional.
  • the process 400 divides the translated scores by the range to produce normalized scores.
  • the resulting set of normalized scores in one embodiment ranges from 0 to 1.
  • the process 400 can be illustrated by an example.
  • Two sets of scores from different recommenders might be as follows: a first set of 1, 3, 5, 2 and a second set of 60, 40, 20, and 10.
  • the score sets are then divided by the ranges 4 and 50, respectively, to generate normalized scores 0, 0.5, 1, and 0.25 for the first set and 1, 0.6, 0.2, and 0 for the second set. Since the scores from each set lie in the same range, they may be compared. Thus, for example, a candidate selector that chooses the top three items from these score sets would choose the item in the first set having score 1 and the items in the second set having scores 1 and 0.6, assuming that the scores from each set are weighted equally.
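The min-max normalization of process 400 is compact enough to state directly; this sketch reproduces the worked example above and assumes at least two distinct scores so the range is nonzero.

```python
def normalize(scores):
    """Min-max normalize one recommender's scores into [0, 1].

    Subtract the minimum from each score (translation), then divide
    by the range (maximum minus minimum).
    """
    lo, hi = min(scores), max(scores)
    rng = hi - lo
    return [(s - lo) / rng for s in scores]
```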
  • negative scores may be provided by recommenders.
  • the process 400 can also normalize these negative scores. However, when both positive and negative scores are normalized together according to the process 400, the normalized negative scores may be in the range of 0 to 1. Negative scores may therefore have positive normalized scores, eliminating the benefit of adding negative scores to positive scores. In some embodiments, the process 400 overcomes this problem by analyzing negative and positive scores separately. The normalized negative scores can then be subtracted from the positive scores.
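One way to realize the separate handling of negative scores described above is sketched below. Normalizing negative magnitudes separately and then negating them, so that negatives land in [-1, 0], is an assumption; the patent says only that normalized negative scores are subtracted from the positive ones.

```python
def normalize_signed(scores):
    """Normalize positive and negative scores separately.

    Positives are min-max normalized into [0, 1]; the magnitudes of
    negatives are min-max normalized and then negated, into [-1, 0].
    """
    def minmax(vals):
        lo, hi = min(vals), max(vals)
        rng = hi - lo
        return {v: (v - lo) / rng if rng else 0.0 for v in vals}

    pos = [s for s in scores if s >= 0]
    neg = [-s for s in scores if s < 0]  # magnitudes of negatives
    pos_norm = minmax(pos) if pos else {}
    neg_norm = minmax(neg) if neg else {}
    return [pos_norm[s] if s >= 0 else -neg_norm[-s] for s in scores]
```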
  • the process 400 normalizes scores dynamically.
  • the process 400 normalizes the scores using a window, which may be a list or the like.
  • the window might include, for example, a list of 10,000 scores.
  • the number of scores in the window increases until a maximum number of scores is reached, such as 10,000 scores.
  • the window is reset (e.g., by removing the old scores), and the window begins again to receive new scores.
  • each new score added to the window causes an old score to be removed.
  • the window may not include all of the scores generated by a particular recommender.
  • the minimum and maximum scores provided by the recommender may therefore not be in the window. Accordingly, in certain embodiments, the minimum and maximum scores are generated dynamically as the scores are received into the window.
  • the minimum and maximum scores are generated dynamically by determining if a new score inserted into the window is less than a previous minimum score or greater than a previous maximum score. If either of these conditions hold, then the new score is considered to be the new minimum or maximum.
  • An initial guess of the minimum and maximum scores may be provided when the window is first generated or reset.
  • the minimum and maximum are not evaluated for each new score received by the process 400. Instead, the scores are sampled periodically or probabilistically to evaluate for a new minimum or maximum score. Thus, for example, every 100th score may be evaluated to determine if it is a new maximum or minimum. As the number of scores received in the window increases over time, in some embodiments the minimum and maximum scores stabilize or converge. In certain embodiments, if the window is reset, the calculation of minimum and maximum scores restarts.
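The windowed, sampled min/max tracking described above can be sketched as follows. The class name, the periodic (every Nth score) sampling policy, and the default initial guess are illustrative choices; the text also permits probabilistic sampling.

```python
class WindowedMinMax:
    """Track approximate min/max over a bounded window of scores.

    Every `sample_every`-th score is checked against the running
    min/max; after `window_size` scores the window resets and the
    calculation restarts from the initial guess.
    """

    def __init__(self, window_size=10000, sample_every=100,
                 init_min=0.0, init_max=1.0):
        self.window_size = window_size
        self.sample_every = sample_every
        self.init = (init_min, init_max)  # initial guess on reset
        self.reset()

    def reset(self):
        self.count = 0
        self.min, self.max = self.init

    def add(self, score):
        self.count += 1
        if self.count % self.sample_every == 0:  # periodic sampling
            if score < self.min:
                self.min = score
            if score > self.max:
                self.max = score
        if self.count >= self.window_size:  # window full: reset
            self.reset()

    def normalize(self, score):
        rng = self.max - self.min
        return (score - self.min) / rng if rng else 0.0
```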
  • Recommendation scores may be normalized over multiple computers, servers, processors, processing cores, or the like (collectively, "computing devices") to balance processing loads.
  • when windowing techniques are used, differences in normalization can occur among the different computing devices. For example, if the same recommender on different computing devices provides different scores to a normalization engine, the minimums and maximums on these computing devices might be calculated differently. The resulting normalized scores might be inconsistent across the different computing devices.
  • This inconsistency can undesirably cause different recommendations to be displayed to the same user at different times.
  • Refreshing a web page of recommendations can cause a different computing device to generate the recommendations in some embodiments. If the normalization scores are different on each computing device, the refreshed recommendations might be different from the previously-displayed recommendations. These different recommendations may create user confusion and cause user mistrust in the recommendations.
  • the process 400 may reduce the number of digits of precision in each score. In effect, the process 400 selects a subset of digits used in the scores. Thus, a score of 0.529 might be modified to become simply 0.5.
  • outliers in a set of scores can skew the distribution of normalized scores.
  • Outliers include scores that are much smaller or much larger than most of the other scores. For example, in a set of scores 1, 2, 5, and 1,001, the score 1,001 might be an outlier. Outliers can skew the normalized distribution by affecting the range. In the above example, the range is 1,000. Dividing the various scores by this number (after translation by the minimum value) yields normalized scores 0, 0.001, 0.004, and 1. The outlier in this example overwhelmingly dominates the other normalized scores.
  • Outliers may indicate very strong recommendations and therefore may be desirable to keep. However, when outliers overpower the other recommendations (such as in the above example), it may be desirable to discard the outliers.
  • One way of doing this is to have each recommender remove the outliers. For example, a recommender could set a threshold and remove scores above the threshold (or below the threshold, in the case of low-valued outliers).
  • Another way to remove outliers when dynamic normalization is used is to use the window technique described above, periodically resetting the window. For example, instead of using every score or even a sample of every score to generate minimums and maximums, the minimums and maximums could be reset after a certain number of scores (e.g., after 1000 scores) have been normalized.
  • the impact of outliers is lessened because the reset causes old minimums and maximums to be ignored for future calculations.
  • Yet another way of reducing the impact of outliers is taking the Nth largest (or Nth smallest) score as the maximum (or minimum) score. For instance, the second-to-largest score may be chosen as the maximum score instead of the largest score.
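Taking the Nth largest and Nth smallest scores as the range endpoints can be as simple as the sketch below (the function name is illustrative, and at least 2N scores are assumed).

```python
def robust_range(scores, n=2):
    """Return (min, max) using the n-th smallest and n-th largest
    scores, so a single extreme outlier cannot dominate the range."""
    s = sorted(scores)
    return s[n - 1], s[-n]
```

Applied to the outlier example above (1, 2, 5, and 1,001 with N = 2), the range endpoints become 2 and 5, and the score 1,001 no longer stretches the normalized distribution.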
  • FIGURE 5 illustrates another embodiment of a process 500 for normalizing item recommendation scores.
  • the process 500 is implemented in one embodiment by a normalization engine, such as the normalization engine 120 of FIGURE 1.
  • the process 500 begins at 502 by receiving candidate recommendation scores from a recommender.
  • the process 500 determines an original range of the scores. This original range may be determined, for example, by subtracting a minimum score from a maximum score. This range may be calculated dynamically using the window techniques described above.
  • the process 500 determines a new range.
  • This new range includes a new minimum value and a new maximum value.
  • the new range is 0 to 1.
  • Another example range might be -10 to 10.
  • Other ranges may be chosen without limitation.
  • the process 500 maps the scores from the original range to the new range using a mathematical transformation.
  • the transformation in one embodiment is a nonlinear transformation.
  • the transformation in certain embodiments takes the form
  • Expression (1) illustrates that for each Item Score, a new score is generated as a function of the Item Scores, the Old Range, and the New Range.
  • the normalized scores in expression (2) are computed in the same or a similar manner as the normalized scores of FIG. 4.
  • the minimum value in expression (2) is subtracted from each item score to produce translated scores, which are divided by the range.
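Because expressions (1) and (2) are not reproduced in this text, the following linear mapping is an assumption consistent with the surrounding description: translate by the minimum, divide by the original range, then scale and shift into the new range.

```python
def map_to_range(scores, new_min, new_max):
    """Map scores from their original range onto [new_min, new_max].

    A linear version of the process 500 mapping; the patent also
    contemplates nonlinear transformations.
    """
    lo, hi = min(scores), max(scores)
    old_range = hi - lo
    new_range = new_max - new_min
    return [new_min + (s - lo) / old_range * new_range for s in scores]
```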
  • the process 500 can also use the techniques of the process 400 to calculate negative scores, to increase consistency among normalized scores across multiple computing devices, and to minimize the impact of outliers.
  • FIGURE 6 illustrates another embodiment of a process 600 for normalizing item recommendation scores.
  • the process 600 is implemented in one embodiment by a normalization engine, such as the normalization engine 120 of FIGURE 1.
  • the process 600 begins at 602 by receiving candidate recommendation scores from a recommender.
  • the process 600 determines whether a minimum score from the set of received candidate recommendation scores is different from a desired normalized minimum score.
  • the desired normalized minimum score in one embodiment is the value that will be chosen as the minimum score in the normalized range of scores.
  • the process 600 at 606 translates each score in the set of candidate recommendation scores by a difference between the minimum candidate recommendation score and the desired normalized minimum score.
  • a set of candidate recommendation scores might have a minimum score of 90 on a scale of 90 to 100. If the desired normalized minimum score is 0, the minimum score and the desired normalized minimum score differ by 90. Accordingly, each candidate recommendation score will be translated (e.g., subtracted) by 90, resulting in a new set of scores ranging from a minimum of 0 to a maximum of 10.
  • Translating the candidate recommendation scores advantageously enables sets of scores from different recommenders having different minimum scores to be more easily compared.
  • After translating the scores, the process 600 proceeds to step 608. If, however, the minimum candidate recommendation score is determined to be the same as the desired normalized minimum score at 604, the process 600 proceeds directly to step 608. In addition, it should be noted that in alternative embodiments, steps 604 and 606 may be omitted.
  • the process 600 in certain embodiments, combines the scores of all the items to create a combined score. In one embodiment, combining the scores is done by summing the scores. In another embodiment, block 608 is performed by computing a moving average of a subset of the scores and multiplying the average by the number of scores in the moving average.
  • the moving average may be implemented, for example, by using one or more of the window techniques described above. A moving average can reduce the processing burden on a computing system by reducing the number of calculations to be performed, since the average of all the scores is not computed each time a new score is received. In an embodiment, the moving average is an exponential moving average.
  • the process 600 calculates normalized scores by using the combined score and the candidate recommendation scores. This step is performed, for example, by dividing each candidate recommendation score by the combined score. In embodiments where the desired minimum normalized score is 0, the normalized scores might range from 0 to 1.
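Steps 606 through 610 of process 600 can be sketched as follows. The simple sum is used here as the combined score rather than the moving-average variant, and the function name is illustrative.

```python
def sum_normalize(scores, desired_min=0.0):
    """Normalize scores by their combined (summed) score.

    First translate the scores so the minimum equals desired_min,
    then divide each translated score by the sum of all of them.
    """
    shift = min(scores) - desired_min
    translated = [s - shift for s in scores]
    total = sum(translated)
    return [s / total for s in translated] if total else translated
```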
  • the process 600 may use a window technique, such as described above with respect to FIG. 4, to calculate the minimum candidate recommendation score.
  • the process 600 of certain embodiments can also use the techniques of the process 400 to calculate negative scores, to increase consistency among normalized scores across multiple computing devices, and to minimize the impact of outliers.
  • the process 600 also reduces the impact of outliers by periodically resetting a window of scores when window techniques are used. Resetting the window removes the impact of previous outliers. Conversely, the number of scores in the window could be allowed to increase (e.g., the window would be reset at longer intervals), spreading out the effect of outliers on the normalized scores.
  • FIGURE 7 illustrates yet another embodiment of a process 700 for normalizing item recommendation scores.
  • the process 700 is implemented in one embodiment by a normalization engine, such as the normalization engine 120 of FIGURE 1.
  • the process 700 begins at 702 by receiving candidate recommendation scores from a recommender. Thereafter, the process 700 assigns percentile rankings to the scores.
  • a score's percentile ranking (or equivalently, a candidate recommendation's percentile ranking) reflects the strength of a particular candidate's score.
  • a candidate recommendation in the 95th percentile has a score that is higher than 95% of the other candidates' scores.
  • the percentile rankings may be used to determine the weakness of a candidate's score. For example, a candidate recommendation in the 95th percentile in this implementation might have a score that is lower than 95% of the other candidates' scores.
  • the percentile rankings may be assigned in a variety of ways. One way is to calculate the mean and variance values of the set of candidate recommendation scores and use these values to derive the percentile rankings from a normal distribution having the calculated mean and variance.
  • the percentile rankings generated from the normal distribution may be obtained from a lookup table or the like.
  • the process 700 in one embodiment may use a window technique, such as described above with respect to FIG. 4, to calculate the mean and variance values.
  • percentiles may be calculated using the following formula:
  • the percentile rankings are generated dynamically using a window of scores, using similar techniques to those described above with respect to FIG. 4.
  • the window is implemented as a sorted list of scores, where an old score is removed from the list each time a new score is inserted into the list. Since the scores are sorted, a percentile ranking can be derived from each score's position or rank in the list using, for example, expression (2). For example, the first position in the list might be ranked 1st, the second position might be ranked 2nd, and so on.
  • the list is sorted automatically as new scores are inserted into the list. The position in the list where the new score is inserted can be determined by searching the list to find the correct position for the new score. In one embodiment, the new score replaces an old score in the same position in the list. Alternatively, the oldest score in the list, regardless of position, is removed from the list when the new score is inserted.
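The sorted-list window for percentile rankings might be sketched with Python's `bisect` module as below. The eviction policy shown (removing the oldest score regardless of position) is one of the alternatives the text mentions, and the class name is illustrative.

```python
import bisect

class PercentileWindow:
    """Sorted fixed-size window of scores.

    New scores are inserted in sorted order; once the window is
    full, each insertion evicts the oldest score. A score's rank in
    the sorted list yields its percentile ranking.
    """

    def __init__(self, max_size=1000):
        self.max_size = max_size
        self.sorted_scores = []
        self.order = []  # insertion order, for evicting the oldest

    def add(self, score):
        if len(self.order) >= self.max_size:
            oldest = self.order.pop(0)
            self.sorted_scores.remove(oldest)  # O(n); fine for a sketch
        bisect.insort(self.sorted_scores, score)  # keeps list sorted
        self.order.append(score)

    def percentile(self, score):
        """Fraction of window scores strictly below `score`."""
        rank = bisect.bisect_left(self.sorted_scores, score)
        return rank / len(self.sorted_scores)
```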
  • the process 700 at 706 uses the percentile rankings as normalized scores.
  • using percentile rankings as normalized scores reduces the sensitivity of the normalized scores to outliers.
  • the process 700 may not need to account for unusually low or high scores.
  • Percentile rankings are generally insensitive to outliers because the rankings of successively-ranked scores tend to be independent of the quantitative difference between those scores. For example, a first set of scores 1, 2, and 3 would be ranked the same way as a second set of scores 1, 2, and 100.
  • the process 700 of certain embodiments can also use the techniques of the process 400 to increase consistency among normalized scores across multiple computing devices.
  • negative scores may be calculated separately from positive scores, as described above.
  • percentile rankings can be reversed, such that an item with a very negative score will have a very low percentile ranking.
  • FIGURE 8 illustrates a portion of a web page showing an example recommendation interface.
  • the example recommendations page 800 displays recommendations for a user.
  • the recommendations page 800 includes various details about the listed products 810 (four products shown), and includes buttons for adding each product to an electronic shopping cart or wish list.
  • the recommendation page 800 also includes a set of controls 812 for rating, indicating ownership of, and indicating a lack of interest in, each listed product 810.
  • the recommendations system may use this information and other information to improve the recommendations it makes. In an embodiment, this process is stateless, such that no information about which items have been recommended to which users need be retained.
  • a refresh option 818 allows a user to see an updated list of recommendations, which may be updated when the user adjusts the controls 812.
  • One or more reasons 814 are displayed for recommending each item.
  • the item "The Arctic Incident” includes the reason 814a "Recommended because you said you owned The Eternity Code (Artemis Fowl, Book 3), and this item will complete your series.”
  • the reasons 814 provide compelling reasons for recommending items.
  • FIGURE 9 illustrates a set of components that may be included in an electronic catalog website 986 to implement the recommendation functions described above.
  • the system may also include functionality for users to perform various types of item-related actions such as purchasing items, tagging items, adding items to personal wish lists and shopping carts, rating items, reviewing items, etc.
  • the arrows in FIGURE 9 show the general flow of information between components.
  • the system may be accessed by user computers 988 over the Internet. Although shown as personal computers for purposes of illustration, the user computers 988 may include various other types of computing devices, including Personal Digital Assistants (PDAs), wireless phones, set-top television boxes, etc.
  • the system 986 comprises web servers 990 which process HTTP (Hypertext Transfer Protocol) requests received over the Internet from the user computers 988 that run web browser software.
  • the web servers 990 dynamically generate content-dependent web pages according to user-specific information.
  • the web servers 990 access a repository of web page templates 992 that specify the layout and format of product detail pages, recommendations pages, and various other types of web pages.
  • the web servers 990 populate these templates with information that is typically dependent upon the identity of the particular user, as may be determined, for example, using browser cookies.
  • the web servers 990 retrieve catalog content for particular products from a Catalog Service 994, which includes or accesses a repository 996 of item content.
  • the item content may, for example, include photos, reviews, price and availability data, and other types of descriptive information about particular products that are available to purchase, rent, download, review, post for sale, etc. via the web site 986.
  • the web servers 990 also communicate with a tagging service 998 that maintains a database 900 of user-specific tag data.
  • the tag data stored for each user may, for example, include a set of tag-item ID pairs, optionally together with various other types of data such as permission data and a creation timestamp.
  • the tagging service 998 may receive both read requests from the web servers (e.g., when a user requests a page that displays personal tag data), and update requests (e.g., when a user tags an item).
  • each tag is stored in association with the corresponding user, meaning that if two or more users create identical tags, these tags are treated as separate and distinct from each other.
  • the tags may also be stored in association with one or more items in the electronic catalog.
  • the web servers 990 also communicate with a search engine 904 that allows users to search for information stored in the item content and tag data repositories 996 and 900.
  • the search engine may be used to generate recommendations by searching using tags of various items as keywords.
  • the web servers 990 also access a recommendations service 901 which generates item recommendations.
  • the recommendation service 901 may include multiple recommenders and a normalization engine as shown in FIGURE 1 and described above.
  • a web server 990 sends a request to the recommendations service 901, which responds with a list of recommended items according to the systems and processes described above with respect to FIGURES 1-8.
  • the recommendation service 901 may generate the recommendations in real time in response to a particular user action.
  • when a user clicks on a link that invokes the presentation of personalized recommendations, the system generates and returns item recommendations in real time as follows. Initially, a web server 990 sends a request to the recommendation service 901. The recommendation service then responds by invoking some or all of its recommenders 112. The recommenders 112 may, but need not, be invoked in parallel. Each invoked recommender 112 responds by retrieving item preference data 902, which may be distributed over several servers. Each recommender 112 then generates a list of candidate items for the user, together with associated scores and reasons.
  • the normalization engine 120 normalizes the scores as described above, and the candidate selector 130 then uses the normalized scores to select particular candidate items to recommend, and/or to rank the candidate items for display.
  • the recommendation service 901 then returns the ranked list of items and the associated reasons to the web server 990.
  • the web server 990 uses this information, together with item data retrieved from the catalog service 994 (and possibly other services 906), to generate and return a recommendations page of the type shown in FIGURE 8.
  • Reasons are generated in one implementation by providing several predefined reason types that identify different kinds of reason text.
  • a lookup table or the like may be provided, for example, that maps reason types to reason text. For instance, a reason type "A" might map to the reason text "because you purchased item X," and a reason type "B” might map to the reason text "because item X is on your friend's wish list.”
  • the recommenders 112 pass reason types along with candidate recommendations to the normalizer 120.
  • the normalizer 120 passes the reason types and candidate recommendations to the candidate selector 130, which passes certain recommendations along with their reason types to a user interface component (not shown).
  • the user interface component matches reason types with reason text according to the lookup table and displays the recommendations with the associated reason text to a user (see, e.g., FIG. 8).
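The reason-type lookup might be as simple as a table keyed by type code. The codes and text below follow the "A"/"B" example in the description; the table contents and fallback text are otherwise hypothetical.

```python
# Hypothetical reason-type table following the example in the text.
REASON_TEXT = {
    "A": "because you purchased item {item}",
    "B": "because item {item} is on your friend's wish list",
}

def render_reason(reason_type, item):
    """Map a compact reason type, carried with each candidate through
    the normalizer and candidate selector, to display text at the
    user interface layer."""
    template = REASON_TEXT.get(reason_type, "recommended for you")
    return template.format(item=item)
```

Passing the compact type code through the pipeline, and expanding it only at the user interface, keeps the recommenders decoupled from presentation wording.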
  • a particular recommender 112 may not return any candidate items. This may be the case where, for example, the user has not engaged in a particular type of user activity on which the recommender is based, or where the recommender otherwise relies on some type of user data that is not available for the particular user.
  • the recommendations service 901 also communicates with the tagging service in certain embodiments to obtain tagging data useful for producing recommendations, according to the process 300 described with respect to FIG. 3 above.
  • the recommendations service 901 also optionally communicates with one or more other services 906, such as a friends service that allows the user to save birthday and interest data about friends.
  • the web servers 990 also access one or more additional repositories of user data, logically represented in FIGURE 9 as item preference data 902. Because a group of individuals can share an account, a given "user" may include multiple individuals (e.g., two family members that share a computer). As illustrated by FIGURE 9, the data stored for each user may include one or more of the following types of information (among other things) that can be used to generate recommendations in accordance with the invention: (a) the user's purchase history, including dates of purchase, (b) a history of items recently viewed by the user, (c) the user's item ratings profile, if any, and (d) items tagged by the user. Various other types of user information, such as wish list/registry contents, email addresses, shipping addresses, shopping cart contents, and browse (e.g., clickstream) histories, may additionally be stored.
  • the various components of the web site system 986 may run, for example, on one or more servers (not shown). In one embodiment, various components in or communicating with the recommendations service 901 are replicated across multiple machines to accommodate heavy loads.
  • Each of the processes and algorithms described above may be embodied in, and fully automated by, code modules executed by one or more computers or computer processors.
  • the code modules may be stored on any type of computer-readable medium or computer storage device.
  • the processes and algorithms may also be implemented partially or wholly in application-specific circuitry.
  • the results of the disclosed processes and process steps may be stored, persistently or otherwise, in any type of computer storage.

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Several embodiments concern a recommendations system (100) for selecting items to recommend to a user. The system comprises a recommendation engine (110) comprising a plurality of recommenders (112). Each recommender (112) identifies or corresponds to a different type of reason for recommending items. In one embodiment, each recommender (112) retrieves item preference data (102) and generates candidate recommendations responsive to a subset of that data. The recommenders (112) also score the candidate recommendations. In certain embodiments, a normalization engine (120) normalizes the candidate recommendation scores provided by each recommender (112). A candidate selector (130) selects at least some of the candidate recommendations, based on the normalized scores, to provide as recommendations to the user. The recommendations may be provided to the user together with the associated reasons for recommending the items.
EP08771411A 2007-06-29 2008-06-18 Système de recommandation à multiples outils de recommandation intégrés Withdrawn EP2162828A4 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US11/772,010 US7949659B2 (en) 2007-06-29 2007-06-29 Recommendation system with multiple integrated recommenders
US11/771,914 US8260787B2 (en) 2007-06-29 2007-06-29 Recommendation system with multiple integrated recommenders
PCT/US2008/067404 WO2009006029A1 (fr) 2007-06-29 2008-06-18 Système de recommandation à multiples outils de recommandation intégrés

Publications (2)

Publication Number Publication Date
EP2162828A1 true EP2162828A1 (fr) 2010-03-17
EP2162828A4 EP2162828A4 (fr) 2010-09-15

Family

ID=40226457

Family Applications (1)

Application Number Title Priority Date Filing Date
EP08771411A Withdrawn EP2162828A4 (fr) 2007-06-29 2008-06-18 Système de recommandation à multiples outils de recommandation intégrés

Country Status (2)

Country Link
EP (1) EP2162828A4 (fr)
WO (1) WO2009006029A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7991650B2 (en) 2008-08-12 2011-08-02 Amazon Technologies, Inc. System for obtaining recommendations from multiple recommenders

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6134532A (en) * 1997-11-14 2000-10-17 Aptex Software, Inc. System and method for optimal adaptive matching of users to most relevant entity and information in real-time
US6182050B1 (en) * 1998-05-28 2001-01-30 Acceleration Software International Corporation Advertisements distributed on-line using target criteria screening with method for maintaining end user privacy
US6317722B1 (en) * 1998-09-18 2001-11-13 Amazon.Com, Inc. Use of electronic shopping carts to generate personal recommendations
US7912868B2 (en) * 2000-05-02 2011-03-22 Textwise Llc Advertisement placement method and system using semantic analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"STATEMENT IN ACCORDANCE WITH THE NOTICE FROM THE EUROPEAN PATENT OFFICE DATED 1 OCTOBER 2007 CONCERNING BUSINESS METHODS - PCT / ERKLAERUNG GEMAESS DER MITTEILUNG DES EUROPAEISCHEN PATENTAMTS VOM 1.OKTOBER 2007 UEBER GESCHAEFTSMETHODEN - PCT / DECLARATION CONFORMEMENT AU COMMUNIQUE DE L'OFFICE EUROP" 20071101, 1 November 2007 (2007-11-01), XP002456414 *
See also references of WO2009006029A1 *

Also Published As

Publication number Publication date
WO2009006029A1 (fr) 2009-01-08
EP2162828A4 (fr) 2010-09-15

Similar Documents

Publication Publication Date Title
US8260787B2 (en) Recommendation system with multiple integrated recommenders
US7949659B2 (en) Recommendation system with multiple integrated recommenders
US8751507B2 (en) Recommendation system with multiple integrated recommenders
US8117072B2 (en) Promoting strategic documents by bias ranking of search results on a web browser
US7249058B2 (en) Method of promoting strategic documents by bias ranking of search results
US7272573B2 (en) Internet strategic brand weighting factor
US9342563B2 (en) Interface for a universal search
US11036795B2 (en) System and method for associating keywords with a web page
US8606770B2 (en) User-directed product recommendations
US8301623B2 (en) Probabilistic recommendation system
US10373230B2 (en) Computer-implemented method for recommendation system input management
US7603367B1 (en) Method and system for displaying attributes of items organized in a searchable hierarchical structure
CN102667768B (zh) Dynamic search suggestions and category-specific completions
US8356248B1 (en) Generating context-based timelines
US20110035329A1 (en) Search Methods and Systems Utilizing Social Graphs as Filters
US8239399B2 (en) Providing tools for navigational search query results
US20090164453A1 (en) System and method for providing real-time search results on merchandise
US20080275863A1 (en) Selecting advertisements based upon search results
US11321761B2 (en) Computer-implemented method for recommendation system input management
US20140351052A1 (en) Contextual Product Recommendation Engine
WO2015048292A2 (fr) Method for displaying and navigating internet search results
US20140201620A1 (en) Method and system for intelligent web site information aggregation with concurrent web site access
US20240070210A1 (en) Suggesting keywords to define an audience for a recommendation about a content item
EP2162828A1 (fr) Recommendation system with multiple integrated recommenders
US20240331003A1 (en) Determining and presenting attributes for search

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20100120

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA MK RS

A4 Supplementary search report drawn up and despatched

Effective date: 20100816

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20110315