US20140108282A1 - Adaptive rating system and method - Google Patents

Adaptive rating system and method

Info

Publication number
US20140108282A1
US20140108282A1 (US 2014/0108282 A1); application US 14/103,901
Authority
US
United States
Prior art keywords
rating
user
product
processor
category
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/103,901
Inventor
Darren Pulito
Lou Vastardis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
VineLoop LLC
Original Assignee
VineLoop LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by VineLoop LLC filed Critical VineLoop LLC
Priority to US14/103,901
Publication of US20140108282A1
Status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0282 Rating or review of business operators or products

Definitions

  • Social networking websites such as those hosted on Facebook™ and Yahoo!™ provide network services to facilitate interaction between users. Typically, users who sign up for these services are able to establish connections with other users. As the popularity of such network services has increased, many social networking websites service millions of users, with many individual users having large networks including hundreds or even thousands of connections to other users.
  • Users of such network services may be interested in requesting information or assistance from other users with whom they have established a connection or other members in the network to whom they don't have an established connection.
  • the development of systems and methods for users of such network services to request and retrieve relevant information from other users within a social network would be useful to users.
  • the present invention is embodied in adaptive rating methods and systems.
  • the adaptive rating method includes receiving a first rating for a first product from a user, receiving a second rating for a second product from the user, identifying a conflict with a processor by comparing the first rating and the second rating, soliciting feedback from the user to remedy the conflict, and adjusting at least one of the first or second ratings with the processor responsive to feedback from the user.
  • the steps of the method may be embodied in computer executable instructions stored on a non-transient machine readable medium that cause a processor to perform the method when executed by the processor.
  • the system includes a processor configured to receive a first rating for a first product from a user, receive a second rating for a second product from the user, identify a conflict with a processor by comparing the first rating and the second rating, solicit feedback from the user to remedy the conflict, and adjust at least one of the first or second ratings with the processor responsive to feedback from the user.
  • FIG. 1 is a system diagram depicting an exemplary system in accordance with aspects of the present invention
  • FIG. 2 is a flow chart depicting exemplary steps for requesting and retrieving information in accordance with aspects of the present invention
  • FIG. 2A is a block diagram illustrating the establishment of a category-based network and the establishment of trusted information resource contacts within the category-based network in accordance with an aspect of the present invention
  • FIG. 2B is a table depicting exemplary categories and sub-categories for use with the present invention.
  • FIG. 3 is a block diagram illustrating a pending category trust request in accordance with aspects of the present invention.
  • FIG. 3A is a block diagram illustrating established trusted information resource contacts of a user for a category in accordance with aspects of the present invention
  • FIGS. 3B and 3C are block diagrams illustrating established trusted information resource contacts of established trusted information resource contacts in accordance with aspects of the present invention.
  • FIG. 3D is a flow chart of exemplary steps for requesting information on other products related to a product of interest to the user in accordance with an aspect of the present invention
  • FIG. 4 is a flow chart of exemplary steps for adapting a rating scale in accordance with aspects of the present invention
  • FIG. 4A is a flow chart of exemplary sub-steps for performing steps of the flow chart of FIG. 4 ;
  • FIGS. 5A , 5B, and 5C are illustrations of a rating scale in accordance with aspects of the present invention.
  • FIGS. 6A and 6B are illustrative representations of an exemplary comparative rating scale in accordance with an aspect of the present invention.
  • An aspect of the present invention provides a system that supports the natural human tendency for learning and changing behavior; a system that is rooted in how individual users naturally seek out trusted information resources to provide them with what they deem as valuable information.
  • the system extends the existence of an individual user's relationship beyond their immediate circle of contacts by perpetuating “trusted” knowledge sharing category-based networks extending from their existing social networks.
  • the value of indirect relationships beyond the first degree of an individual user's social graph is extended so that individual user can receive a greater number of useful: (1) trusted recommendations; (2) trusted search results; and/or (3) trusted answers to questions.
  • Embodiments of the present invention allow a user of a social network to request information from other users.
  • the information request can include, for example, a question for dissemination to other users, a search request for information maintained in an electronic database, and/or an alert request for information once it is added to the database.
  • a user builds one or more category-based networks based on categories they have in common with other network users (e.g., investing, wine, fitness regimens, book-types, movie-types, restaurants, music-types, etc.). Users are able to establish a select number of users within each category-based network as trusted information resource contacts (TIRCs; e.g., the users they trust most within a specific category and/or from whom they desire to receive rating information).
  • users are able to filter valuable user-generated content (UGC; such as questions and answers, reviews, ratings) from a network of trusted resources (e.g., other users they may view as experts) including the user's established TIRCs, the user's established TIRCs' TIRCs, etc.
  • FIG. 1 is a diagram illustrating an exemplary system 100 in which exemplary embodiments of the present invention may operate.
  • the system 100 includes multiple user devices 102 a - n in communication with a host server 104 over a network 106 such as the Internet, an intranet, a wide area network (WAN), a local area network (LAN), or other communication network capable of transporting data.
  • Each of the user devices 102 includes memory 108 and a processor 110 such as a microcontroller, microprocessor, an application specific integrated circuit (ASIC), and/or a state machine coupled to the memory 108 .
  • Memory 108 may be a conventional computer-readable medium, such as a random access memory (RAM).
  • processor 110 executes computer-executable program instructions stored in memory 108 . Suitable memory 108 and processors 110 will be understood by one of skill in the art from the description herein.
  • User devices 102 a - n may also include a number of input/output (I/O) devices (not shown) such as a mouse, a CD-ROM, DVD, a keyboard, a display, or other input or output devices.
  • Exemplary user devices 102 include personal computers, digital assistants, personal digital assistants, cellular phones, mobile phones, smart phones, pagers, digital tablets, laptop computers, Internet appliances, and processor-based devices.
  • a user device 102 a may be any type of device capable of communication with a network 106 and of interaction with one or more application programs.
  • user devices 102 a - n may operate on any operating system capable of supporting a browser or browser-enabled application, such as Microsoft™ Windows™.
  • the user devices 102 a - n shown include, for example, personal computers executing a browser application program such as Microsoft Corporation's Internet Explorer™.
  • the illustrated host server 104 includes a processor 116 and a memory 118 .
  • processor 116 executes a social network application program (SNAP) 112 stored in memory 118 .
  • SNAP 112 allows users, such as user 103 a , to interact with and participate in a computer-based social network (herein “social network”).
  • a social network can refer to a computer network connecting users, such as people or organizations.
  • An example of a social network in which the present invention may be implemented is Facebook™.
  • a social network can comprise user profiles that can be associated with other user profiles.
  • Each user profile may represent a user and a user can be, for example, a person, an organization, a business, a corporation, a community, a fictitious person, an institution, information source, or other entity.
  • Each profile can contain entries, and each entry can comprise information associated with a profile.
  • Memory 118 may be a conventional computer-readable medium, such as a random access memory (RAM).
  • processor 116 executes computer-executable program instructions stored in memory 118 . Suitable memory 118 will be understood by one of skill in the art from the description herein.
  • Host server 104 , depicted as a single computer system, may be implemented as a network of computers. Examples of a host server 104 are servers, mainframe computers, networked computers, processor-based devices, and similar types of systems and devices.
  • Processor 110 and processor 116 can be any of a number of computer processors, such as processors from Intel Corporation of Santa Clara, Calif., and Motorola Corporation of Schaumburg, Ill., which will be understood by one of skill in the art from the description herein.
  • SNAP 112 can include a category-based information processor 120 .
  • processor 120 enables a user 103 to establish trusted information resource contacts/relationships with other users that are based on categories and to request information from these TIRCs.
  • Processor 120 can cause the display of information provided by one or more users 103 of the social network on a user device 102 .
  • Processor 120 in some embodiments, can generate, distribute, and/or update a search record. Multiple processors and other hardware can be provided to perform operations associated with embodiments of the present invention.
  • Host server 104 also provides access to electronic data storage elements, such as a social network storage element, in the example shown in FIG. 1 , an electronic social network database 122 , which may be stored in memory 118 of host server 104 or external to host server 104 as illustrated.
  • the social network database 122 may be physically attached or otherwise in communication with the social network engine 112 by way of a network or other connection.
  • the social network database 122 can be used to store users' member profiles including TIRCs of those users.
  • Electronic data storage elements may include any one or combination of methods for storing data, including without limitation, arrays, hash tables, lists, and pairs. Other similar types of data storage devices can be accessed by the host server 104 .
  • SNAP 112 can receive data comprising the user profiles from the social network database 122 and can also send data comprising user profiles to the social network database 122 for storage.
  • host server 104 may comprise a single physical or logical server.
  • the system 100 shown in FIG. 1 is merely exemplary, and is used to help explain the social network and adaptive rating systems and methods illustrated in FIGS. 2-6 .
  • FIG. 2 depicts a flow chart 200 of exemplary steps for retrieving information about a category of interest from a social network in accordance with an aspect of the present invention.
  • the social network includes multiple user networks where each user network includes multiple users.
  • the steps of flow chart 200 will be described with reference to the system 100 depicted in FIG. 1 to facilitate description. Other systems in which the steps of flow chart 200 may be carried out will be understood by one of skill in the art from the description herein.
  • information associated with users is stored in a database.
  • information generated by users 103 may be stored in social network database 122 .
  • the information may include ratings and reviews of products, answers to questions, links, or any other form of user-generated content (UGC). All forms of information may be generated and stored by users of the social network prior to receiving a request for information. Additionally, information generated and stored after a request for information may be used to satisfy a standing request.
  • FIG. 2A depicts an exemplary user network 250 including multiple contacts/friends 255 a-x (24 contacts in the illustrated embodiment) within a user's network.
  • Contacts 255 of the user may be associated with a category such as a category or sub-category (described below) to build a category-based network.
  • contacts 255 x, t, p, l, h and d are associated with a category (e.g., wine) to build category-based network 265 .
  • Step 204 may be performed for every user 103 within social network database 122 .
  • User category-based networks such as category-based network 265 may be built based on the user associating one or more contacts 255 with a particular category 260 .
  • the user may unilaterally assign contacts 255 to one or more category-based networks.
  • the host server 104 may create a graphical user interface (GUI) for display on a user device 102 .
  • the GUI may display each contact 255 of the user along with a series of check boxes corresponding to categories next to each user. The user may then simply select the appropriate check boxes to associate contacts with a category.
  • bilateral agreement may be necessary to establish a category-based network 265 .
  • the host server 104 may create a GUI for display on a user device 102 .
  • the GUI may display each contact 255 of the user along with a series of check boxes corresponding to categories next to each user. Selection of category check boxes associated with a particular contact 255 may result in an email message to that contact requesting consent.
  • the contact may then be associated with the category and become a member of the category-based network 265 upon a positive response to the consent request.
  • FIG. 2B depicts exemplary categories 275 and sub-categories 276 associated with particular categories with which users may be associated.
  • the sub-categories provide finer granularity for categorizing. For example, a category may be “wine” and a subcategory may be “varietal” (Cabernet, Merlot, Zinfandel, etc.).
  • contacts are established as TIRCs (e.g., experts) from which the user desires to receive information.
  • the TIRCs form a set 270 of one or more contacts 255 of the user that are associated with the category and are established as TIRCs of the user for that category.
  • the user sends a trusted information resource request to one or more contacts 255 for a category/subcategory requesting that those contacts become TIRCs of the user for that category/subcategory.
  • the user may send trusted information resource requests to three of the contacts 255 (e.g., contacts 255 x, p, d ) within category-based network 265 to become TIRCs of the user for the category/subcategory.
  • the trusted information resource requests for the category are received by the host server 104 , which forwards the trusted information resource requests to the intended contacts 255 x, p, d and waits for a response.
  • the trusted information resource requests are pending and a trusted information resource relationship has not been established, which is illustrated in FIG. 3 .
  • the host server 104 then establishes each user from which a positive response to the trusted information resource request is received as a TIRC of the user.
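For illustration, the following minimal Python sketch (not part of the patent text; the class and method names are assumptions) shows one way a host server might track category-based networks, pending trusted-information-resource requests, and established TIRCs:

```python
# Minimal sketch of category-based network and TIRC bookkeeping.
# Data layout and names are illustrative assumptions, not the patent's implementation.
from collections import defaultdict

class CategoryNetworkStore:
    def __init__(self):
        # category -> user -> set of contacts in that user's category-based network
        self.category_networks = defaultdict(lambda: defaultdict(set))
        # category -> user -> contacts with pending trusted-information-resource requests
        self.pending_tirc = defaultdict(lambda: defaultdict(set))
        # category -> user -> established TIRCs
        self.tircs = defaultdict(lambda: defaultdict(set))

    def associate_contact(self, user, contact, category):
        """Add a contact to the user's category-based network (unilateral case)."""
        self.category_networks[category][user].add(contact)

    def send_tirc_request(self, user, contact, category):
        """Record a trusted-information-resource request awaiting the contact's response."""
        if contact in self.category_networks[category][user]:
            self.pending_tirc[category][user].add(contact)

    def respond_tirc_request(self, user, contact, category, accepted):
        """On a positive response, establish the contact as a TIRC of the user."""
        self.pending_tirc[category][user].discard(contact)
        if accepted:
            self.tircs[category][user].add(contact)

store = CategoryNetworkStore()
store.associate_contact("user", "255x", "wine")
store.send_tirc_request("user", "255x", "wine")
store.respond_tirc_request("user", "255x", "wine", accepted=True)
print(store.tircs["wine"]["user"])  # {'255x'}
```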
  • FIG. 3A depicts the establishment of a set 270 of trusted information resource relationships between the user and contacts 255 x, p, d for category-based network 265 .
  • FIG. 3B illustrates the trusted connections between the user and contacts 255 x and p turned on, and the trusted connection to expert 255 d turned off.
  • the user is able to retrieve information from TIRCs 255 x and 255 p (but not 255 d ), and from the TIRCs with which contacts 255 x and 255 p have active trusted connections (e.g., 255 xa, xb, xc and 255 pa, pb, pc ); and from the active TIRCs of contacts 255 xa, xb, xc and 255 pa, pb, pc , etc.
  • FIG. 3C illustrates the trusted connections between the user and contacts 255 x and d turned on and the trusted connection to expert 255 p turned off.
  • the user is able to retrieve information from TIRCs 255 x and 255 d (but not 255 p ), and from the TIRCs with which contacts 255 x and 255 d have active trusted connections (e.g., 255 xa, xb, da , and db , but not 255 xc ); and from the active TIRCs of contacts 255 xa, xb, da , and db .
  • a contact such as contact 255 xc in FIG. 3C may be designated as inactive by the user with which that contact has a trusted information resource connection (e.g., by contact 255 x for 255 xc ).
  • a user requesting the search may designate one or more TIRCs of their TIRCs as inactive for purposes of generating search results for queries by that user. For example, a user may designate contact 255 xc as inactive if the user does not want results from that contact (e.g., does not trust that contact's recommendations based on past experience).
  • designation of a contact as inactive for the user's queries only renders that contact inactive from the user's viewpoint and does not render that contact inactive as a TIRC of other users (e.g., contact 255 xc may remain an active TIRC of contact 255 x for contact 255 x and other users unless contact 255 x designates contact 255 xc as inactive).
  • the number of active TIRCs per category may be limited. In an exemplary embodiment, the number of active TIRCs per category is limited to ten or less and, more preferably, to three or less. Step 208 may be performed for every user 103 within social network database 122 .
  • an information request is received that specifies a category.
  • the host server 104 receives an information request from a user 103 .
  • the information request may include content filtering information such as the standard filters 277 a and/or advanced filters 277 b set forth in FIG. 2B .
  • the host server 104 may generate and present a GUI (not shown) to the user 103 for submitting an information request.
  • the information request GUI may include a series of check boxes associated with various categories/sub-categories and a submit button.
  • an information request may be generated by selecting one or more categories/subcategories and selecting the submit button.
  • the GUI may include a text box for entering a question for submission to a user's trusted information resources.
  • the GUI may further include check boxes or other means for entering filter information for standard filters 277 a and/or advanced filters 277 b.
  • a first set of users within the user's network are identified that are associated with the category (i.e., contacts 255 in category-based network 265 ) and that are established as TIRCs for that category (i.e., contacts 255 in set 270 ).
  • the host server 104 identifies the first set of users by examining the social network database 122 based on the category specified in the information request and the user's established TIRCs for that category.
  • the first set of users may be thought of as “experts” from the viewpoint of the user.
  • a second set of users within the category-based networks of the first set of users are identified that are associated with the category and that are designated as TIRCs for the category by the first set of users.
  • the host server 104 identifies the second set of users by examining the social network database 122 based on the category specified in the information request and the established TIRCs of the first set of users for that category.
  • the second set of users may be thought of as “experts” of the first set of users, e.g., the expert's experts.
  • the steps of block 214 may be repeated to obtain information from TIRCs that are farther removed from the user, e.g., the expert's expert's expert, the expert's expert's expert's expert, and so on.
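The following Python sketch illustrates the kind of degree-limited traversal described for blocks 212 and 214 (expert, expert's expert, and so on), honoring per-user inactive designations; the data layout and function name are assumptions, not the patent's implementation:

```python
# Breadth-first walk over established TIRCs for a category, up to max_degrees
# of separation, skipping contacts the requesting user has marked inactive.
def collect_tircs(tircs_by_user, user, category, max_degrees, inactive_for_user=frozenset()):
    """tircs_by_user: dict mapping (user, category) -> set of that user's TIRCs.
    Returns a dict mapping each found TIRC to its degree of separation."""
    found = {}
    frontier = {user}
    for degree in range(1, max_degrees + 1):
        next_frontier = set()
        for contact in frontier:
            for tirc in tircs_by_user.get((contact, category), set()):
                if tirc in inactive_for_user or tirc in found or tirc == user:
                    continue
                found[tirc] = degree
                next_frontier.add(tirc)
        frontier = next_frontier
    return found

# Example mirroring FIG. 3B: the user trusts 255x and 255p, who each trust three contacts.
tircs_by_user = {
    ("user", "wine"): {"255x", "255p"},
    ("255x", "wine"): {"255xa", "255xb", "255xc"},
    ("255p", "wine"): {"255pa", "255pb", "255pc"},
}
print(collect_tircs(tircs_by_user, "user", "wine", max_degrees=2))
```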
  • the host server 104 retrieves information from the database 122 for identified users (e.g., those identified in steps 212 and/or 214 ) corresponding to the information request.
  • the information may be ratings and/or reviews of products within the selected category (step 210 ), or answers to questions within the selected category. For example, assume the category is action films.
  • the host server 104 may retrieve all ratings and/or reviews of action films by the TIRCs identified in steps 212 and/or 214 . If a user has a question associated with a category, the information may be retrieved by disseminating the question to the identified users and gathering responses from the identified users.
  • retrieved information is provided to a user.
  • information retrieved by the host server 104 from the database 122 at block 216 is transmitted to the client device 102 from which the information request was received (step 210 ) where it may be viewed by the user 103 .
  • the exemplary steps described above enable a user to monitor new ratings, reviews and other UGC of their TIRCs within a desired category and the TIRCs of these TIRCs, etc.; search ratings, reviews and other UGC of TIRCs within a desired category and the TIRCs of these TIRCs, etc.; and send questions to or communicate directly with TIRCs within a desired category and to/with the TIRCs of these TIRCs, etc.
  • Monitoring, searching, and sending functionality is described in further detail below:
  • Monitoring—user 103 can set personal preferences within the social network to receive information through direct links established through extended category-based networks of users identified as TIRCs within those category-based networks.
  • the information from these TIRCs can include ratings, reviews, links, UGC, etc.
  • the user receives the information automatically, e.g., periodically or as it is posted by users.
  • the information can be filtered by criteria such as set forth in standard filters 277 a and/or advanced filters 277 b ( FIG. 2B ), including by way of non-limiting example, the degrees of separation from the TIRC, the status of active TIRC designations, the number of UGC posts, ratings or reviews within a specific topic category by each TIRC, and the social network communities' approval or rating of a TIRC's UGC, ratings, reviews, etc.
  • a user may set their “monitor” preferences to notify them of reviews down to the third degree of separation by TIRCs within category-based networks for a particular category (e.g., Italian restaurants) with a particular rating (e.g., above 9.3).
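As a hedged illustration of applying these "monitor" preferences (degree of separation and rating threshold), the following sketch filters incoming reviews; the review fields are assumed for the example:

```python
# Keep only reviews in the chosen category, within the chosen degree of
# separation, and above the rating threshold. Field names are assumptions.
def filter_monitored_reviews(reviews, category, max_degree, min_rating):
    """reviews: iterable of dicts with 'category', 'degree', and 'rating' keys."""
    return [
        r for r in reviews
        if r["category"] == category
        and r["degree"] <= max_degree
        and r["rating"] > min_rating
    ]

reviews = [
    {"category": "Italian restaurants", "degree": 2, "rating": 9.5, "item": "Trattoria A"},
    {"category": "Italian restaurants", "degree": 4, "rating": 9.8, "item": "Osteria B"},
    {"category": "Italian restaurants", "degree": 1, "rating": 8.9, "item": "Ristorante C"},
]
# Matches the example in the text: third degree or closer, rating above 9.3.
print(filter_monitored_reviews(reviews, "Italian restaurants", max_degree=3, min_rating=9.3))
```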
  • FIG. 3D depicts a flowchart 300 of exemplary steps for monitoring reviews in accordance with one aspect of the present invention.
  • an information request is received (e.g., at host server 104 ) from a user identifying a particular product (e.g., product T5 from a group of products including products T1-T6).
  • a category/subcategory associated with the identified product is identified.
  • the host server 104 may identify the category/subcategory (e.g., Napa Cabernets) associated with product T5 by comparing a product identifier (e.g., UPC code) for product T5 with entries in a database.
  • TIRCs of the user for the identified category are identified.
  • host server 104 identifies TIRCs for the identified category as described above for blocks 212 and 214 of flow chart 200 .
  • host server 104 determines if the TIRCs have reviewed the product identified by the user.
  • host server 104 compares a product identifier of the identified product to product identifiers of all products reviewed by the TIRCs. If there is not a match, processing ends at block 310 . If there is a match, indicating that one or more of the TIRCs have reviewed the identified product, processing proceeds at block 312 .
  • host server 104 determines for each TIRC that has reviewed the identified product whether they rated another product the same as or higher than the identified product. If no TIRC has rated any other products within the category equal to or greater than they rated the identified product, processing ends at block 314 . If one or more TIRCs rated one or more other products equal to or greater than the identified product, processing ends at block 316 with information for those products being transmitted to the user device 102 of the user 103 requesting the information. This process allows a user to quickly and easily identify other products that the user may wish to try because they were rated by the user's expert, expert's expert, and/or expert's expert's expert, as equal to or better than the identified product.
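A minimal sketch of this FIG. 3D comparison (blocks 308-316), assuming a simple mapping of each TIRC to their product ratings; the names and data shape are illustrative only:

```python
# For each TIRC that reviewed the identified product, return any other product
# in the category that the TIRC rated equal to or higher than it.
def related_products(reviews_by_tirc, product_id):
    """reviews_by_tirc: dict mapping TIRC -> dict of product_id -> rating."""
    suggestions = {}
    for tirc, ratings in reviews_by_tirc.items():
        if product_id not in ratings:          # block 308: TIRC has not reviewed the product
            continue
        benchmark = ratings[product_id]
        for other, rating in ratings.items():  # block 312: compare the TIRC's other ratings
            if other != product_id and rating >= benchmark:
                suggestions.setdefault(other, []).append((tirc, rating))
    return suggestions

reviews_by_tirc = {
    "255x": {"T5": 8.7, "T2": 9.1, "T6": 8.2},
    "255p": {"T5": 9.0, "T3": 9.0},
}
print(related_products(reviews_by_tirc, "T5"))  # {'T2': [('255x', 9.1)], 'T3': [('255p', 9.0)]}
```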
  • the information can be filtered by criteria such as set forth in standard filters 277 a and/or advanced filters 277 b ( FIG. 2B ), including by way of non-limiting example, the degrees of separation from the TIRC, the status of active TIRC designations, the number of UGC posts, ratings or reviews within a specific topic category by each TIRC, and the social network communities' approval or rating of a TIRC's UGC, ratings, reviews, etc.
  • a user may search for ratings, reviews, or other valuable UGC by scanning the barcode on Malcolm Gladwell's book “Outliers” in order to receive relevant information from up to the fifth degree of separation within his trusted resource or expert category-based network for books.
  • the TIRC can filter questions to answer based on, for example, the degrees of separation from the questioning user.
  • the answers can be filtered by criteria such as set forth in standard filters 277 a and advanced filters 277 b ( FIG. 2B ), including by way of non-limiting example, the degrees of separation from the TIRC, the status of active TIRC designations, the social network communities' approval or rating of a TIRC's answers, and other indications of credibility or status.
  • a user may send a question out to his trusted resource network for wine, “I am going to San Francisco next month. If I have two days in Napa, what wineries should I try to schedule a tasting?”
  • Another aspect of the present invention relates to an adaptive rating system and method that ensures that ratings of entities (e.g., a product, person, service, experience, etc.) remain relevant for a user as that user's level of experience matures. For example, a user rating a bottle of wine may have a different rating opinion after having rated 50 bottles of wine than after rating three bottles of wine.
  • the present invention enables past and/or new ratings to be automatically adjusted in order to make them more relevant.
  • FIG. 4 depicts a flow chart 400 of exemplary steps for adapting ratings and FIG. 4A depicts a flow chart 450 of exemplary sub-steps within the steps of flow chart 400 .
  • the steps of flow charts 400 and 450 will be described with reference to the system 100 depicted in FIG. 1 to facilitate description. Other suitable systems will be understood by one of ordinary skill in the art from the description herein.
  • a first rating for a first product is received from a user.
  • the rating may be a rating on a scale of 1 to 10 (e.g., a nine) for a product within a category or within a subcategory (e.g., a wine or a California Pinot Noir).
  • processor 116 may be coupled to a receiver (not shown) that receives the rating from a user 103 via user device 102 over network 106 .
  • a second rating for a second product is received from the user.
  • the rating may be a rating on a scale of 1 to 10 (e.g., a nine) for another product within the category or subcategory (e.g., a wine or a California Pinot Noir).
  • processor 116 may be coupled to a receiver (not shown) that receives the rating from the user 103 via user device 102 over network 106 .
  • FIG. 5A depicts a user attempting to rate a second/new product the same as a first/benchmark product (e.g., as a “9”).
  • FIG. 4A depicts exemplary sub-steps for identifying a potential conflict (step 406 ).
  • processor 116 compares the first rating to the second rating.
  • processor 116 determines if the first rating equals the second rating. If the ratings are equal, processor 116 identifies a potential conflict and processing proceeds at block 408 . If the ratings are not equal, processing ends at block 456 .
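A minimal sketch of this equality test follows; the data shape is an assumption for illustration:

```python
# A potential conflict is flagged when a new rating equals an existing rating
# already stored for the same category (blocks 452-454).
def identify_conflict(existing_ratings, new_rating):
    """existing_ratings: dict of product -> rating already stored for the category."""
    return [product for product, rating in existing_ratings.items() if rating == new_rating]

existing = {"first_product": 9.0}
conflicting = identify_conflict(existing, 9.0)   # the user tries to rate a new product a 9
print(conflicting)                               # ['first_product'] -> feedback is solicited
```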
  • feedback is solicited from the user to remedy the potential conflict.
  • processor 116 solicits feedback to remedy the potential conflict.
  • FIG. 4A depicts exemplary sub-steps for soliciting feedback to remedy the potential conflict (step 408 ).
  • processor 116 determines if the second rating is accurate based on the current rating scale for the category.
  • the current rating scale includes at least one rating of a product (e.g., the first rating for the first product).
  • processor 116 sends a first inquiry to the user asking if the second rating is accurate based on the current rating scale (e.g., should the second product have the same rating as the first product). If the second rating is inaccurate (e.g., no, the first and second products are not equivalent to the user rating the products), processing proceeds at block 462 . If the second rating is accurate (e.g., yes, the first and second products are essentially equivalent to the user rating the products), processing ends at block 460 .
  • processor 116 receives a comparative rating between the first product and the second product.
  • processor 116 sends a rating scale such as depicted in FIG. 5B for display by user device 102 to solicit feedback from user 103 .
  • the depicted rating scale provides a number of sub-intervals in the vicinity of the first product rating for selection by user 103 . For example, if the second product is a little better than the first product and the first product has a rating by user 103 of “9”, the user may select a slightly higher rating, e.g., “9.5” on the rating scale. In this case, the comparative rating would be “0.5” better.
  • conversely, if the second product is a little worse than the first product, the user may select a slightly lower rating, e.g., “8.5” on the rating scale.
  • in this case, the comparative rating would be “0.5” worse.
  • the user may enter the comparative rating in other well known manners, e.g., by typing in a comparative value or other value from which a comparative value may be obtained.
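The feedback step might be sketched as follows; the function signature and prompt handling are assumptions, with the comparative value taken to come from a finer-grained scale like the one in FIG. 5B:

```python
# If the user confirms the products are truly equal, the equal rating stands;
# otherwise the comparative rating chosen near the benchmark is used, and its
# offset from the benchmark is reported. (Illustrative assumption only.)
def resolve_conflict(benchmark_rating, products_equal, comparative_rating=None):
    """Return (rating_to_store, offset_from_benchmark) for the new product."""
    if products_equal:
        return benchmark_rating, 0.0            # e.g. both products remain a "9"
    offset = comparative_rating - benchmark_rating
    return comparative_rating, offset           # e.g. 9.5 is "0.5 better" than the 9 benchmark

print(resolve_conflict(9.0, products_equal=False, comparative_rating=9.5))  # (9.5, 0.5)
```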
  • the first or second rating is adapted responsive to the feedback solicited from the user.
  • processor 116 adapts the first or second rating.
  • FIG. 4A depicts an exemplary sub-step for adapting the first or second rating (step 410 ).
  • processor 116 proportionally adjusts the first rating based on the comparative rating.
  • the rating of a first product is only adjusted when the first product has the maximum value rating on the rating scale (e.g., a value of “10” on a ten-point scale) and a maximum value rating is received for a second product that the user believes should have a higher rating than the first rating.
  • consider, for example, a first product having a rating of 10 as previously rated by the user. If the user attempts to rate a second product as a 10, similar to as illustrated in FIG. 5A , the system (e.g., processor 116 ) will identify a conflict. Feedback will then be solicited from the user to determine if the second product should have the same rating as the first product. If the user indicates that it should not have the same value, the user submits a comparative rating of the second product to the first product, e.g., a rating of 9.1-9.9 or 10.1-10.9. In an exemplary embodiment, if a rating of 10.1 to 10.9 were received from the user (e.g., a rating of 10.6), the second product would then be established as a benchmark for a rating of 10 and the first product (and any other previously rated products for the category) would be proportionally re-rated, e.g., by processor 116 .
  • in this example, because the first product had a rating of 10 and the second product was given a comparative rating of 10.6, the second product would be established as a 10. It will be understood that the system could be applied to many ratings for many products, in which case all the previously rated products may be automatically adjusted in a manner similar to the first product.
  • the host server 104 may then proportionally adjust the ratings of the products to a standardized scale in which the rating of the highest rated product is set to the top value of the standardized scale and the ratings of the other products are proportionally adjusted.
  • in an exemplary embodiment, the standardized scale is a ten-point scale.
  • the host server 104 then receives a rating from the user 103 for a product within the category that is higher than the rating of the highest rated product within that category, e.g., product 4 equals 10.9. Finally (STEP FOUR), the host server 104 adjusts the new rating to the highest rating and proportionally adjusts the other ratings.
  • ratings are proportionally adjusted whenever a potential conflict is identified and a comparative rating (e.g., higher and/or lower) is received from the user.
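A minimal sketch of the proportional re-rating follows, assuming a simple dictionary of prior ratings; the function name and rounding are illustrative choices, not the patent's implementation:

```python
# When a new product is rated above the current benchmark (e.g. 10.9 against a 10),
# the new product becomes the benchmark "10" and all prior ratings are scaled
# down proportionally to the standardized scale.
def rebenchmark(ratings, new_product, comparative_rating, scale_top=10.0):
    """ratings: dict of product -> rating on the standardized scale (0..scale_top)."""
    if comparative_rating <= scale_top:
        ratings[new_product] = comparative_rating
        return ratings
    factor = scale_top / comparative_rating
    adjusted = {product: round(rating * factor, 2) for product, rating in ratings.items()}
    adjusted[new_product] = scale_top   # the new product becomes the benchmark for 10
    return adjusted

ratings = {"product_1": 10.0, "product_2": 8.4, "product_3": 6.0}
print(rebenchmark(ratings, "product_4", comparative_rating=10.9))
# {'product_1': 9.17, 'product_2': 7.71, 'product_3': 5.5, 'product_4': 10.0}
```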
  • aspects of the adaptive rating system may include by way of non-limiting example:
  • a rating system where the entity (product, person, service, experience, etc.) with the highest rating serves as the benchmark against which all lower rated products or experiences are ranked within a specific category.
  • a rating system where a process requires the user, when attempting to rate an entity with a rating equal to that of an existing entity, to confirm that the ratings are truly equal; if they are not, the rating of the new entity must be set either greater than or less than that of the previously rated benchmark entity.
  • the present invention is capable of adjusting ratings as a user's tastes mature and experience within a category/subcategory evolves, while keeping scores based on a relative scale. For example, a user tries a mid-tier Bordeaux as one of their first wine experiences and gives it a 10. As the user tries other wines they do not enjoy as much, they will rate them less than 10 (using the mid-tier Bordeaux as the top of the scale). The user may eventually try a Bordeaux they enjoy more than any other they have previously experienced. When the user tries to give it a score of 10, the adaptive rating system/method requires them to rate this Bordeaux in comparison to the mid-tier Bordeaux that is currently serving as their benchmark for “10”. If the user feels they are equal, both remain a 10.
  • if the user instead rates the new Bordeaux higher, e.g., a 10.5, the 10.5 Bordeaux becomes the new benchmark for “10”.
  • the previous mid-tier Bordeaux that represented 10, along with all the wines that were rated in comparison to the mid-tier Bordeaux, are automatically adjusted in relation to the new 10-point scale now established by the 10.5 Bordeaux.
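As a worked instance of this proportional adjustment (the 8.0 prior rating is an assumed example; the passage only specifies the 10 and 10.5 values), each previously rated wine is scaled by the ratio of the old and new benchmarks:

```latex
\[
  r_{\text{adjusted}} = r_{\text{previous}} \times \frac{10}{10.5},
  \qquad 10 \times \frac{10}{10.5} \approx 9.5,
  \qquad 8 \times \frac{10}{10.5} \approx 7.6
\]
```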
  • with the rating scale maintaining a True10™ rating system, an individual rating becomes significantly more valuable and relevant to users within a network, making one's own ratings more accurate for oneself and more meaningful and relevant to others.
  • the adapted score makes an expert's ratings or recommendations more relevant, which can be further enhanced by considering additional features, including, but not limited to:
  • a trust index: how many people directly trust a person as a TIRC (e.g., expert) for a specific category.
  • a like index: the degree to which other users “like” the answers, recommendations, and/or ratings of an expert.
  • a reviewer/expert may be evaluated on a scale of 0 to 10 based on the following four characteristics: (1) number of reviews written (“WRITTEN”), (2) number of reviews read by other users (“READ”), (3) number of times identified as a TIRC by other users (“EXPERT”), and (4) number of times reviews were identified by other users as helpful (“HELP”).
  • Each evaluation characteristic may be assigned a weight coefficient correlated with its contribution to an overall evaluation to obtain a final evaluation score, e.g., ranging from 0 to 10. Maximum values for one or more characteristics may be designated.
  • The evaluation may use the following variables: (1) i = 1 . . . N, where N is the total number of reviewers/experts; (2) W i , the number of reviews written by the ith reviewer/expert; (3) W max , the maximum number of reviews written by any reviewer/expert; (4) R i , the number of reviews by the ith reviewer/expert that were read by other users; (5) R max , the maximum number of read reviews for any reviewer/expert; (6) E i , the number of times the ith reviewer/expert was identified as a TIRC by other users; (7) E max , the maximum number of TIRC identifications for any reviewer/expert; (8) H i , the number of reviews by the ith reviewer/expert identified as helpful; and (9) H max , the maximum number of reviews identified as helpful for any reviewer/expert.
  • An exemplary algorithm for determining a weight for each reviewer/expert i may be as set forth in equation (1).
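Equation (1) itself is not reproduced in this text; a hedged reconstruction consistent with the listed variables is a weighted sum of each characteristic normalized by its maximum and scaled to the 0-10 range, with assumed weight coefficients a, b, c, and d that sum to 1:

```latex
\[
  S_i = 10\left( a\,\frac{W_i}{W_{\max}} + b\,\frac{R_i}{R_{\max}}
        + c\,\frac{E_i}{E_{\max}} + d\,\frac{H_i}{H_{\max}} \right),
  \qquad a + b + c + d = 1
\]
```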
  • FIGS. 6A and 6B depict exemplary user interfaces for rating products.
  • a user is presented with a portion of a rating scale 600 , e.g., integers 8, 9, and 10 of a ten-point scale.
  • the host server 104 may present the rating scale horizontally on a user device 102 .
  • a user 103 may select a rating by moving an indicator along the rating scale 600 and selecting a particular point on the rating scale when the position of the indicator corresponds to the desired rating.
  • the user may utilize a user input device such as a mouse (not shown) to move the indicator and may depress a key on the mouse to make a rating selection.
  • if a rating conflict is identified, e.g., by host server 104 as described above with reference to block 458 (e.g., the user tries to rate a new product as a “9” and there is an existing product rated as a “9”), the user is presented with a comparative rating scale such as depicted in FIG. 6B for use in making a comparative rating.
  • the host server 104 may present the comparative rating scale 602 in an orientation other than the orientation of the rating scale 600 , e.g., vertically, on a user device 102 .
  • comparative rating scale 602 has finer granularity than rating scale 600 .
  • the user may then be required to select a comparative rating on the comparative rating scale 602 between the next greater value, “10,” and the next lower value, “8,” e.g., between 8.1 and 9.9, using an input device such as a mouse moving vertically along the comparative rating scale 602 .
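A small sketch of deriving those bounds from the conflicting integer rating follows; the 0.1 step width is an assumption matching the 8.1-9.9 example:

```python
# The finer comparative scale runs between the next integer below and the next
# integer above the conflicting rating, exclusive of the endpoints.
def comparative_bounds(conflicting_rating, step=0.1):
    lower = int(conflicting_rating) - 1 + step   # e.g. 8.1 for a conflict at "9"
    upper = int(conflicting_rating) + 1 - step   # e.g. 9.9 for a conflict at "9"
    return lower, upper

print(comparative_bounds(9))  # (8.1, 9.9), matching the example in the text
```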
  • one or more of the various components and steps described above may be implemented through software that configures a server to perform the function of these components and/or steps.
  • This software may be embodied in a non-transient machine readable storage medium, e.g., a magnetic disc, an optical disk, a memory card, or other tangible medium capable of storing instructions.
  • The instructions, when executed by a computer, such as a server, cause the computer to execute a method for performing the function of one or more components and/or steps described above.

Abstract

The present invention is embodied in adaptive rating methods and systems. The adaptive rating method includes receiving a first rating for a first product from a user, receiving a second rating for a second product from the user, identifying a conflict with a processor by comparing the first rating and the second rating, soliciting feedback from the user to remedy the conflict, and adjusting at least one of the first or second ratings with the processor responsive to feedback from the user. The steps of the method may be embodied in computer executable instructions stored on a non-transient machine readable medium that cause a processor to perform the method when executed by the processor. The system includes a processor configured to perform the steps of the method.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 13/686,757, filed Nov. 27, 2012, which is a continuation of U.S. patent application Ser. No. 13/023,884 (now U.S. Pat. No. 8,321,355) filed Feb. 9, 2011, and claims priority to U.S. Provisional Patent Application Ser. No. 61/423,309, filed Dec. 15, 2010, entitled Expert Rating System for Social Network Method and System, the entireties of which are expressly incorporated herein by reference as if set forth in their entireties.
  • BACKGROUND OF THE INVENTION
  • Social networking websites such as those hosted on Facebook™ and Yahoo!™ provide network services to facilitate interaction between users. Typically, users who sign up for these services are able to establish connections with other users. As the popularity of such network services has increased, many social networking websites service millions of users, with many individual users having large networks including hundreds or even thousands of connections to other users.
  • Users of such network services may be interested in requesting information or assistance from other users with whom they have established a connection or other members in the network to whom they don't have an established connection. The development of systems and methods for users of such network services to request and retrieve relevant information from other users within a social network would be useful to users.
  • SUMMARY OF THE INVENTION
  • The present invention is embodied in adaptive rating methods and systems. The adaptive rating method includes receiving a first rating for a first product from a user, receiving a second rating for a second product from the user, identifying a conflict with a processor by comparing the first rating and the second rating, soliciting feedback from the user to remedy the conflict, and adjusting at least one of the first or second ratings with the processor responsive to feedback from the user. The steps of the method may be embodied in computer executable instructions stored on a non-transient machine readable medium that cause a processor to perform the method when executed by the processor.
  • The system includes a processor configured to receive a first rating for a first product from a user, receive a second rating for a second product from the user, identify a conflict with a processor by comparing the first rating and the second rating, solicit feedback from the user to remedy the conflict, and adjust at least one of the first or second ratings with the processor responsive to feedback from the user.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention is best understood from the following detailed description when read in connection with the accompanying drawings, with like elements having the same reference numerals. When a plurality of similar elements are present, a single reference numeral may be assigned to the plurality of similar elements with a small letter designation referring to specific elements. When referring to the elements collectively or to a non-specific one or more of the elements, the small letter designation may be dropped. The letter “n” may represent a non-specific number of elements. Also, lines without arrows connecting components may represent a bi-directional exchange between these components. Included in the drawings are the following figures:
  • FIG. 1 is a system diagram depicting an exemplary system in accordance with aspects of the present invention;
  • FIG. 2 is a flow chart depicting exemplary steps for requesting and retrieving information in accordance with aspects of the present invention;
  • FIG. 2A is a block diagram illustrating the establishment of a category-based network and the establishment of trusted information resource contacts within the category-based network in accordance with an aspect of the present invention;
  • FIG. 2B is a table depicting exemplary categories and sub-categories for use with the present invention;
  • FIG. 3 is a block diagram illustrating a pending category trust request in accordance with aspects of the present invention;
  • FIG. 3A is a block diagram illustrating established trusted information resource contacts of a user for a category in accordance with aspects of the present invention;
  • FIGS. 3B and 3C are block diagrams illustrating established trusted information resource contacts of established trusted information resource contacts in accordance with aspects of the present invention;
  • FIG. 3D is a flow chart of exemplary steps for requesting information on other products related to a product of interest to the user in accordance with an aspect of the present invention;
  • FIG. 4 is a flow chart of exemplary steps for adapting a rating scale in accordance with aspects of the present invention;
  • FIG. 4A is a flow chart of exemplary sub-steps for performing steps of the flow chart of FIG. 4;
  • FIGS. 5A, 5B, and 5C are illustrations of a rating scale in accordance with aspects of the present invention; and
  • FIGS. 6A and 6B are illustrative representations of an exemplary comparative rating scale in accordance with an aspect of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The inventors have recognized that the growing adoption within social media is creating a growing state of diminished utility for users. As the current social media products are establishing an increasing number of relationships, a state of information overload is beginning to occur. The reason is that the current social media models fail to address each user's true passions, how they learn, and why they try or buy. The inventors have further recognized that users are most strongly influenced by small numbers of individuals with whom they have trusting interpersonal relationships. Thus, larger social circles or social networks do not translate into improved social utility. An aspect of the present invention provides a system that supports the natural human tendency for learning and changing behavior; a system that is rooted in how individual users naturally seek out trusted information resources to provide them with what they deem as valuable information. The system extends the existence of an individual user's relationship beyond their immediate circle of contacts by perpetuating “trusted” knowledge sharing category-based networks extending from their existing social networks. Thus, the value of indirect relationships beyond the first degree of an individual user's social graph is extended so that individual user can receive a greater number of useful: (1) trusted recommendations; (2) trusted search results; and/or (3) trusted answers to questions.
  • Embodiments of the present invention allow a user of a social network to request information from other users. The information request can include, for example, a question for dissemination to other users, a search request for information maintained in an electronic database, and/or an alert request for information once it is added to the database. In an exemplary embodiment, a user builds one or more category-based networks based on categories they have in common with other network users (e.g., investing, wine, fitness regimens, book-types, movie-types, restaurants, music-types, etc.). Users are able to establish a select number of users within each category-based network as trusted information resource contacts (TIRCs; e.g., the users they trust most within a specific category and/or from whom they desire to receive rating information). In doing so, users are able to filter valuable user-generated content (UGC; such as questions and answers, reviews, ratings) from a network of trusted resources (e.g., other users they may view as experts) including the user's established TIRCs, the user's established TIRCs' TIRCs, etc.
  • FIG. 1 is a diagram illustrating an exemplary system 100 in which exemplary embodiments of the present invention may operate. The system 100 includes multiple user devices 102 a-n in communication with a host server 104 over a network 106 such as the Internet, an intranet, a wide area network (WAN), a local area network (LAN), or other communication network capable of transporting data. Through user devices 102 a-n, users 103 a-n can communicate over the network 106 with each other and with other systems and devices coupled to the network 106.
  • Each of the user devices 102 includes memory 108 and a processor 110 such as a microcontroller, microprocessor, an application specific integrated circuit (ASIC), and/or a state machine coupled to the memory 108. Memory 108 may be a conventional computer-readable medium, such as a random access memory (RAM). In an exemplary embodiment, processor 110 executes computer-executable program instructions stored in memory 108. Suitable memory 108 and processors 110 will be understood by one of skill in the art from the description herein.
  • User devices 102 a-n may also include a number of input/output (I/O) devices (not shown) such as a mouse, a CD-ROM, DVD, a keyboard, a display, or other input or output devices. Exemplary user devices 102 include personal computers, digital assistants, personal digital assistants, cellular phones, mobile phones, smart phones, pagers, digital tablets, laptop computers, Internet appliances, and processor-based devices. In general, a user device 102 a may be any type of device capable of communication with a network 106 and of interaction with one or more application programs. In an exemplary embodiment, user devices 102 a-n may operate on any operating system capable of supporting a browser or browser-enabled application, such as Microsoft™ Windows™. The user devices 102 a-n shown include, for example, personal computers executing a browser application program such as Microsoft Corporation's Internet Explorer™.
  • The illustrated host server 104 includes a processor 116 and a memory 118. In an exemplary embodiment, processor 116 executes a social network application program (SNAP) 112 stored in memory 118. SNAP 112 allows users, such as user 103 a, to interact with and participate in a computer-based social network (herein “social network”). A social network can refer to a computer network connecting users, such as people or organizations. An example of a social network in which the present invention may be implemented is Facebook™.
  • A social network can comprise user profiles that can be associated with other user profiles. Each user profile may represent a user and a user can be, for example, a person, an organization, a business, a corporation, a community, a fictitious person, an institution, information source, or other entity. Each profile can contain entries, and each entry can comprise information associated with a profile. Memory 118 may be a conventional computer-readable medium, such as a random access memory (RAM). In an exemplary embodiment, processor 116 executes computer-executable program instructions stored in memory 118. Suitable memory 118 will be understood by one of skill in the art from the description herein.
  • Host server 104, depicted as a single computer system, may be implemented as a network of computers. Examples of a host server 104 are servers, mainframe computers, networked computers, processor-based devices, and similar types of systems and devices. Processor 110 and processor 116 can be any of a number of computer processors, such as processors from Intel Corporation of Santa Clara, Calif., and Motorola Corporation of Schaumburg, Ill., which will be understood by one of skill in the art from the description herein.
  • SNAP 112 can include a category-based information processor 120. In an exemplary embodiment, processor 120 enables a user 103 to establish trusted information resource contacts/relationships with other users that are based on categories and to request information from these TIRCs. Processor 120 can cause the display of information provided by one or more users 103 of the social network on a user device 102. Processor 120, in some embodiments, can generate, distribute, and/or update a search record. Multiple processors and other hardware can be provided to perform operations associated with embodiments of the present invention.
  • Host server 104 also provides access to electronic data storage elements, such as a social network storage element, in the example shown in FIG. 1, an electronic social network database 122, which may be stored in memory 118 of host server 104 or external to host server 104 as illustrated. The social network database 122 may be physically attached or otherwise in communication with the social network engine 112 by way of a network or other connection. The social network database 122 can be used to store users' member profiles including TIRCs of those users. Electronic data storage elements may include any one or combination of methods for storing data, including without limitation, arrays, hash tables, lists, and pairs. Other similar types of data storage devices can be accessed by the host server 104. SNAP 112 can receive data comprising the user profiles from the social network database 122 and can also send data comprising user profiles to the social network database 122 for storage.
  • It should be noted that the present invention may comprise systems having different architecture than that which is shown in FIG. 1. For example, in some systems according to the present invention, host server 104 may comprise a single physical or logical server. The system 100 shown in FIG. 1 is merely exemplary, and is used to help explain the social network and adaptive rating systems and methods illustrated in FIGS. 2-6.
• FIG. 2 depicts a flow chart 200 of exemplary steps for retrieving information about a category of interest from a social network in accordance with an aspect of the present invention. In an exemplary embodiment, the social network includes multiple user networks where each user network includes multiple users. The steps of flow chart 200 will be described with reference to the system 100 depicted in FIG. 1 to facilitate description. Other systems in which the steps of flow chart 200 may be carried out will be understood by one of skill in the art from the description herein.
• At block 202, information associated with users is stored in a database. In an exemplary embodiment, information generated by users 103 may be stored in social network database 122. The information may include ratings and reviews of products, answers to questions, links, or any other form of user-generated content (UGC). All forms of information may be generated and stored by users of the social network prior to receiving a request for information. Additionally, information generated and stored after a request for information may be used to satisfy a standing request.
• At block 204, user category-based networks associated with categories are built. FIG. 2 a depicts an exemplary user network 250 including multiple contacts/friends 255 a-x (24 contacts in the illustrated embodiment) within a user's network. Contacts 255 of the user may be associated with a category or sub-category (described below) to build a category-based network. In the illustrated embodiment, contacts 255 x, t, p, l, h and d are associated with a category (e.g., wine) to build category-based network 265. Step 204 may be performed for every user 103 within social network database 122.
• User category-based networks, such as category-based network 265, may be built based on the user associating one or more contacts 255 with a particular category 260. In an exemplary embodiment, the user may unilaterally assign contacts 255 to one or more category-based networks. For example, the host server 104 may create a graphical user interface (GUI) for display on a user device 102. The GUI may display each contact 255 of the user along with a series of check boxes corresponding to categories next to each contact. The user may then simply select the appropriate check boxes to associate contacts with a category.
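• The unilateral association just described can be represented by a simple mapping from (user, category) pairs to sets of contacts. The sketch below is a minimal illustration of such a store, assuming an in-memory Python dictionary; the data structure and names are assumptions for illustration only, not the disclosed implementation.

```python
# Minimal sketch of storing unilateral category assignments (block 204);
# the structure and function names are illustrative assumptions.
from collections import defaultdict

# (user_id, category) -> set of contact ids forming the category-based network
category_networks = defaultdict(set)

def assign_contact_to_category(user_id, contact_id, category):
    """Record that the user associated a contact with a category
    (e.g., via a selected check box in the GUI described above)."""
    category_networks[(user_id, category)].add(contact_id)

# Example: user 103a builds category-based network 265 for "wine"
for contact in ["255x", "255t", "255p", "255l", "255h", "255d"]:
    assign_contact_to_category("103a", contact, "wine")
```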
• In an alternative exemplary embodiment, bilateral agreement may be necessary to establish a category-based network 265. For example, the host server 104 may create a GUI for display on a user device 102. The GUI may display each contact 255 of the user along with a series of check boxes corresponding to categories next to each contact. Selection of category check boxes associated with a particular contact 255 may result in an email message to that contact requesting consent. The contact may then be associated with the category and become a member of the category-based network 265 upon a positive response to the consent request.
• FIG. 2B depicts exemplary categories 275 and sub-categories 276 associated with particular categories with which users may be associated. The sub-categories provide finer granularity for categorizing. For example, a category may be "wine" and a sub-category may be "varietal" (e.g., Cabernet, Merlot, Zinfandel, etc.).
• Referring back to FIG. 2, at block 208, contacts are established as TIRCs (e.g., experts) from which the user desires to receive information. The TIRCs form a set 270 of one or more contacts 255 of the user that are associated with the category and are established as TIRCs of the user for that category. In an exemplary embodiment, the user sends a trusted information resource request to one or more contacts 255 for a category/subcategory requesting that those contacts become TIRCs of the user for that category/subcategory. For example, the user may send trusted information resource requests to three of the contacts 255 (e.g., contacts 255 x, p, d) within category-based network 265 to become TIRCs of the user for the category/subcategory. The trusted information resource requests for the category are received by the host server 104, which forwards the trusted information resource requests to the intended contacts 255 x, p, d and waits for a response. At this point, the trusted information resource requests are pending and a trusted information resource relationship has not been established, which is illustrated in FIG. 3. The host server 104 then establishes each user from which a positive response to the trusted information resource request is received as a TIRC of the user. FIG. 3A depicts the establishment of a set 270 of trusted information resource relationships between the user and contacts 255 x, p, d for category-based network 265.
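• As a rough illustration of the request/response flow in block 208, the sketch below models pending trusted information resource requests and establishes a TIRC only on a positive response. All names (TrustRequest, respond, etc.) are hypothetical; this is one of many possible implementations, not the literal behavior of host server 104.

```python
# Sketch of block 208: trust requests are held as pending until the contact
# consents; only then is the contact established as a TIRC for the category.
from dataclasses import dataclass

@dataclass
class TrustRequest:
    requester: str
    contact: str
    category: str
    status: str = "pending"   # "pending", "accepted", or "declined"

tircs = {}      # (user_id, category) -> set of established TIRCs
pending = []    # outstanding trust requests

def send_trust_request(requester, contact, category):
    request = TrustRequest(requester, contact, category)
    pending.append(request)   # host server forwards the request and waits (FIG. 3)
    return request

def respond(request, accepted):
    request.status = "accepted" if accepted else "declined"
    if accepted:              # FIG. 3A: trusted relationship established
        tircs.setdefault((request.requester, request.category), set()).add(request.contact)

# Example: requests to contacts 255x, 255p, 255d for the "wine" category
for contact in ["255x", "255p", "255d"]:
    respond(send_trust_request("user", contact, "wine"), accepted=True)
```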
• In an exemplary embodiment, once TIRCs are established, the user can individually turn the TIRCs on (active) and off (inactive) as desired. FIG. 3B illustrates the trusted connections between the user and contacts 255 x and p turned on, and the trusted connection to expert 255 d turned off. In this arrangement, the user is able to retrieve information from TIRCs 255 x and 255 p (but not 255 d), and from the TIRCs with which contacts 255 x and 255 p have active trusted connections (e.g., 255 xa, xb, xc and 255 pa, pb, pc); and from the active TIRCs of contacts 255 xa, xb, xc and 255 pa, pb, pc, etc.
  • FIG. 3C illustrates the trusted connections between the user and contacts 255 x and d turned on and the trusted connection to expert 255 p turned off. In this arrangement, the user is able to retrieve information from TIRCs 255 x and 255 d (but not 255 p), and from the TIRCs with which contacts 255 x and 255 d have active trusted connections (e.g., 255 xa, xb, da, and db, but not 255 xc); and from the active TIRCs of contacts 255 xa, xb, da, and db. In an exemplary embodiment, a contact such as contact 255 xc in FIG. 3C may be designated as inactive by the user with which that contact has a trusted information resource connection (e.g., by contact 255 x for 255 xc).
• In an additional embodiment, to improve search results a user requesting the search may designate one or more TIRCs of their TIRCs as inactive for purposes of generating search results for queries by that user. For example, a user may designate contact 255 xc as inactive if the user does not want results from that contact (e.g., does not trust that contact's recommendations based on past experience). In accordance with this embodiment, designation of a contact as inactive for the user's queries only renders that contact inactive from the user's viewpoint and does not render that contact inactive as a TIRC of other users (e.g., contact 255 xc may remain an active TIRC of contact 255 x for contact 255 x and other users unless contact 255 x designates contact 255 xc as inactive).
  • The number of active TIRCs per category may be limited. In an exemplary embodiment, the number of active TIRCs per category is limited to ten or less and, more preferably, to three or less. Step 208 may be performed for every user 103 within social network database 122.
  • At block 210, an information request is received that specifies a category. In an exemplary embodiment, the host server 104 receives an information request from a user 103. The information request may include content filtering information such as the standard filters 277 a and/or advanced filters 277 b set forth in FIG. 2B. The host server 104 may generate and present a GUI (not shown) to the user 103 for submitting an information request. The information request GUI may include a series of check boxes associated with various categories/sub-categories and a submit button. In an exemplary embodiment, an information request may be generated by selecting one or more categories/subcategories and selecting the submit button. Additionally, the GUI may include a text box for entering a question for submission to a user's trusted information resources. The GUI may further include check boxes or other means for entering filter information for standard filters 277 a and/or advanced filters 277 b.
  • At block 212, a first set of users within the user's network are identified that are associated with the category (i.e., contacts 255 in category-based network 265) and that are established as TIRCs for that category (i.e., contacts 255 in set 270). In an exemplary embodiment, the host server 104 identifies the first set of users by examining the social network database 122 based on the category specified in the information request and the user's established TIRCs for that category. The first set of users may be thought of as “experts” from the viewpoint of the user.
• At block 214, a second set of users within the category-based networks of the first set of users are identified that are associated with the category and that are designated as TIRCs for the category by the first set of users. In an exemplary embodiment, the host server 104 identifies the second set of users by examining the social network database 122 based on the category specified in the information request and the TIRCs established by the first set of users for that category. The second set of users may be thought of as "experts" of the first set of users, e.g., the expert's experts. The steps of block 214 may be repeated to obtain information from TIRCs that are farther removed from the user, e.g., the expert's expert's expert, the expert's expert's expert's expert, and so on.
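• The identification in blocks 212 and 214 can be viewed as a breadth-first walk over active trusted connections, out to a chosen degree of separation. The following is a hedged sketch of that walk over an assumed adjacency mapping; the function name, parameters, and data model are illustrative assumptions, not the literal implementation of host server 104 or database 122.

```python
# Sketch of blocks 212/214: starting from the requesting user, follow active
# TIRC links for the requested category out to max_degree hops, collecting
# the "experts", the "expert's experts", and so on.
from collections import deque

def identify_tircs(user, category, active_tircs, max_degree=2):
    """active_tircs: (user_id, category) -> set of that user's active TIRCs.
    Returns all TIRCs reachable within max_degree degrees of separation."""
    identified = set()
    frontier = deque([(user, 0)])
    visited = {user}
    while frontier:
        current, degree = frontier.popleft()
        if degree == max_degree:
            continue
        for tirc in active_tircs.get((current, category), set()):
            if tirc not in visited:
                visited.add(tirc)
                identified.add(tirc)   # degree 1 = block 212, degree 2+ = block 214
                frontier.append((tirc, degree + 1))
    return identified
```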
  • At block 216, information is retrieved for identified users. In an exemplary embodiment, the host server 104 retrieves information from the database 122 for identified users (e.g., those identified in steps 212 and/or 214) corresponding to the information request. The information may be ratings and/or reviews of products within the selected category (step 210), or answers to questions within the selected category. For example, assume the category is action films. The host server 104 may retrieve all ratings and/or reviews of action films by the TIRCs identified in steps 212 and/or 214. If a user has a question associated with a category, the information may be retrieved by disseminating the question to the identified users and gathering responses from the identified users.
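• Given the identified users, the retrieval in block 216 amounts to filtering stored content by category and author. A minimal sketch follows, assuming reviews are stored as dictionaries with "category" and "author" fields; the field names are assumptions, not the actual schema of database 122.

```python
# Sketch of block 216: pull ratings/reviews in the requested category that
# were authored by the identified TIRCs.
def retrieve_information(reviews, category, identified_users):
    return [review for review in reviews
            if review["category"] == category
            and review["author"] in identified_users]
```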
  • At block 218, retrieved information is provided to a user. In an exemplary embodiment, information retrieved by the host server 104 from the database 122 at block 216 is transmitted to the client device 102 from which the information request was received (step 210) where it may be viewed by the user 103.
  • The exemplary steps described above enable a user to monitor new ratings, reviews and other UGC of their TIRCs within a desired category and the TIRCs of these TIRCs, etc.; search ratings, reviews and other UGC of TIRCs within a desired category and the TIRCs of these TIRCs, etc.; and send questions to or communicate directly with TIRCs within a desired category and to/with the TIRCs of these TIRCs, etc. Monitoring, searching, and sending functionality is described in further detail below:
• Monitoring—user 103 can set personal preferences within the social network to receive information through direct links established through extended category-based networks of users identified as TIRCs within those category-based networks. The information from these TIRCs can include ratings, reviews, links, UGC, etc. Within this mode of functionality the user receives the information automatically, e.g., periodically or as it is posted by users. The information can be filtered by criteria such as set forth in standard filters 277 a and/or advanced filters 277 b (FIG. 2B), including by way of non-limiting example, the degrees of separation from the TIRC, the status of active TIRC designations, the number of UGC posts, ratings or reviews within a specific topic category by each TIRC, and the social network communities' approval or rating of a TIRC's UGC, ratings, reviews, etc.
  • As an example, a user may set their “monitor” preferences to notify them of reviews down to the third degree of separation by TIRCs within category-based networks for a particular category (e.g., Italian restaurants) with a particular rating (e.g., above 9.3).
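• Such "monitor" preferences reduce to a predicate applied to each new piece of content as it is posted. Below is a hedged sketch, assuming each incoming review carries its category, its author's degree of separation from the monitoring user, and its rating; the field names are illustrative only.

```python
# Sketch of applying monitor preferences (e.g., Italian restaurants, up to the
# third degree of separation, rating above 9.3); field names are assumptions.
def matches_monitor_preferences(review, preferences):
    return (review["category"] == preferences["category"]
            and review["degree_of_separation"] <= preferences["max_degree"]
            and review["rating"] > preferences["min_rating"])

preferences = {"category": "Italian restaurants", "max_degree": 3, "min_rating": 9.3}
# Notify the user for every new review where matches_monitor_preferences(...) is True.
```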
• FIG. 3D depicts a flowchart 300 of exemplary steps for monitoring reviews in accordance with one aspect of the present invention. At block 302, an information request is received (e.g., at host server 104) from a user identifying a particular product (e.g., product T5 from a group of products including products T1-T6). At block 304, a category/subcategory associated with the identified product is identified. For example, the host server 104 may identify the category/subcategory (e.g., Napa Cabernets) associated with product T5 by comparing a product identifier (e.g., UPC code) for product T5 with entries in a database.
  • At block 306, TIRCs of the user for the identified category are identified. In an exemplary embodiment, host server 104 identifies TIRCs for the identified category as described above for blocks 212 and 214 of flow chart 200.
• At block 308, host server 104 determines if the TIRCs have reviewed the product identified by the user. In an exemplary embodiment, host server 104 compares a product identifier of the identified product to product identifiers of all products reviewed by the TIRCs. If there is not a match, processing ends at block 310. If there is a match, indicating that one or more of the TIRCs have reviewed the identified product, processing proceeds at block 312.
• At block 312, host server 104 determines for each TIRC that has reviewed the identified product whether they rated another product the same or higher than the identified product. If no TIRC has rated any other products within the category equal to or greater than they rated the identified product, processing ends at block 314. If one or more TIRCs rated one or more other products equal to or greater than the identified product, processing ends at block 316 with information for those products being transmitted to the user device 102 of the user 103 requesting the information. This process allows a user to quickly and easily identify other products that the user may wish to try because they were rated by the user's expert, expert's expert, and/or expert's expert's expert, as equal to or better than the identified product.
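• Blocks 308 through 316 can be sketched as a comparison of each TIRC's review set against the identified product. The sketch below assumes reviews are kept as per-TIRC mappings from product identifier to rating; this data model and the function name are assumptions for illustration.

```python
# Sketch of blocks 308-316: for each TIRC that reviewed the identified product,
# find other products in the category rated equal to or higher than it.
def equal_or_better_products(product_id, tirc_reviews):
    """tirc_reviews: tirc_id -> {product_id: rating}.
    Returns tirc_id -> list of (other_product, rating) to transmit to the user."""
    results = {}
    for tirc, reviews in tirc_reviews.items():
        if product_id not in reviews:
            continue                      # block 310: this TIRC has not reviewed it
        benchmark = reviews[product_id]
        better = [(pid, rating) for pid, rating in reviews.items()
                  if pid != product_id and rating >= benchmark]
        if better:
            results[tirc] = better        # block 316: send product information
    return results
```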
  • Searching—user 103 can search for ratings, reviews, user generated content, and published content by keywords, pictures, dimensional barcodes, non-dimensional barcodes, UPC codes, geocode, GPS coordinates, and more, through direct links established through extended category-based networks of users identified as TIRCs within a category. Within this mode of functionality the user actively requests the information. The information can be filtered by criteria such as set forth in standard filters 277 a and/or advanced filters 277 b (FIG. 2B), including by way of non-limiting example, the degrees of separation from the TIRC, the status of active TIRC designations, the number of UGC posts, ratings or reviews within a specific topic category by each TIRC, and the social network communities' approval or rating of a TIRC's UGC, ratings, reviews, etc.
• As an example, a user may search for ratings, reviews, or other valuable UGC by scanning the barcode on Malcolm Gladwell's book "Outliers" in order to receive relevant information from up to the fifth degree of separation within his trusted resource or expert category-based network for books.
• Q&A'ing—user 103 can send questions to be answered through direct links established through extended category-based networks of users identified as TIRCs within a category. Within this mode of functionality the user actively requests answers to questions. The TIRC can filter questions to answer based on, for example, the degrees of separation from the questioning user. The answers can be filtered by criteria such as set forth in standard filters 277 a and advanced filters 277 b (FIG. 2B), including by way of non-limiting example, the degrees of separation from the TIRC, the status of active TIRC designations, the social network communities' approval or rating of a TIRC's answers, and other indications of credibility or status.
  • As an example, a user may send a question out to his trusted resource network for wine, “I am going to San Francisco next month. If I have two days in Napa, what wineries should I try to schedule a tasting?”
• Another aspect of the present invention relates to an adaptive rating system and method that ensures that ratings of entities (e.g., a product, person, service, experience, etc.) remain relevant for a user as that user's level of experience matures. For example, a user rating a bottle of wine may have a different rating opinion after having rated 50 bottles of wine than after rating three bottles of wine. The present invention enables past and/or new ratings to be automatically adjusted in order to make them more relevant.
• FIG. 4 depicts a flow chart 400 of exemplary steps for adapting ratings and FIG. 4A depicts a flow chart 450 of exemplary sub-steps within the steps of flow chart 400. The steps of flow charts 400 and 450 will be described with reference to the system 100 depicted in FIG. 1 to facilitate description. Other suitable systems will be understood by one of ordinary skill in the art from the description herein.
  • At block 402, a first rating for a first product is received from a user. The rating may be a rating on a scale of 1 to 10 (e.g., a nine) for a product within a category or within a subcategory (e.g., a wine or a California Pinot Noir). In an exemplary embodiment, processor 116 may be coupled to a receiver (not shown) that receives the rating from a user 103 via user device 102 over network 106.
• At block 404, a second rating for a second product is received from the user. The rating may be a rating on a scale of 1 to 10 (e.g., a nine) for another product within the category or subcategory (e.g., a wine or a California Pinot Noir). In an exemplary embodiment, processor 116 may be coupled to a receiver (not shown) that receives the rating from the user 103 via user device 102 over network 106. FIG. 5A depicts a user attempting to rate a second/new product the same as a first/benchmark product (e.g., as a "9").
  • Referring back to FIG. 4, at block 406, a potential conflict is identified between the first rating and the second rating. In an exemplary embodiment, processor 116 identifies the potential conflict. FIG. 4A depicts exemplary sub-steps for identifying a potential conflict (step 406). At sub-step 452, processor 116 compares the first rating to the second rating. At sub-step 454, processor 116 determines if the first rating equals the second rating. If the ratings are equal, processor 116 identifies a potential conflict and processing proceeds at block 408. If the ratings are not equal, processing ends at block 456.
  • At block 408, feedback is solicited from the user to remedy the potential conflict. In an exemplary embodiment, processor 116 solicits feedback to remedy the potential conflict.
• FIG. 4A depicts exemplary sub-steps for soliciting feedback to remedy the potential conflict (step 408). At sub-step 458, processor 116 determines if the second rating is accurate based on the current rating scale for the category. The current rating scale includes at least one rating of a product (e.g., the first rating for the first product). In an exemplary embodiment, processor 116 sends a first inquiry to the user asking if the second rating is accurate based on the current rating scale (e.g., should the second product have the same rating as the first product). If the second rating is inaccurate (e.g., no, the first and second products are not equivalent in the view of the user rating the products), processing proceeds at block 462. If the second rating is accurate (e.g., yes, the first and second products are essentially equivalent in the view of the user rating the products), processing ends at block 460.
  • At sub-step 462, processor 116 receives a comparative rating between the first product and the second product. In an exemplary embodiment, processor 116 sends a rating scale such as depicted in FIG. 5B for display by user device 102 to solicit feedback from user 103. The depicted rating scale provides a number of sub-intervals in the vicinity of the first product rating for selection by user 103. For example, if the second product is a little better than the first product and the first product has a rating by user 103 of “9”, the user may select a slightly higher rating, e.g., “9.5” on the rating scale. In this case, the comparative rating would be “0.5” better. Similarly, if the second product is a little worse than the first product, the user may select a slightly lower rating, e.g., “8.5” on the rating scale. In this case, the comparative rating would be “0.5” worse. The user may enter the comparative rating in other well known manners, e.g., by typing in a comparative value or other value from which a comparative value may be obtained.
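• The sub-intervals presented in the vicinity of the first product's rating (FIG. 5B) can be generated as a short list of tenths around the benchmark value. The following is a minimal sketch under the assumptions of a ten-point scale, 0.1 granularity, and a configurable width; the function name and defaults are illustrative, not part of the disclosure.

```python
# Sketch of building a comparative rating scale around an existing rating,
# e.g., around a "10" the user may select 9.1-9.9 or 10.1-10.9 (FIG. 5C shows 10.6),
# or around a "9" the user may pick values such as 8.5 or 9.5.
def comparative_scale(benchmark, half_width=0.9, step=0.1):
    n = int(round(half_width / step))
    return [round(benchmark + i * step, 1) for i in range(-n, n + 1)]

# comparative_scale(9.0, half_width=0.5) -> [8.5, 8.6, ..., 9.4, 9.5]
```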
• At block 410, the first or second rating is adapted responsive to the feedback solicited from the user. In an exemplary embodiment, processor 116 adapts the first or second rating. FIG. 4A depicts an exemplary sub-step for adapting the first or second rating (step 410). At sub-step 464, processor 116 proportionally adjusts the first rating based on the comparative rating. In an exemplary embodiment, the rating of a first product is only adjusted when the first product has the maximum value rating on the rating scale (e.g., a value of "10" on a ten-point scale) and a maximum value rating is received for a second product that the user believes should have a higher rating than the first rating.
• As an illustrative example, consider a first product having a rating of 10 as previously rated by the user. If the user attempts to rate a second product as a 10, similar to the scenario illustrated in FIG. 5A, the system (e.g., processor 116) will identify a conflict. Feedback will then be solicited from the user to determine if the second product should have the same rating as the first product. If the user indicates that it should not have the same value, the user submits a comparative rating of the second product to the first product, e.g., a rating of 9.1-9.9 or 10.1-10.9. In an exemplary embodiment, if a rating of 10.1 to 10.9 were received from the user (e.g., 10.6 as illustrated in FIG. 5C), the second product would then be established as the benchmark for a rating of 10 and the first product (and any other previously rated products for the category) would be proportionally re-rated, e.g., by processor 116. For example, if the first product had a rating of 10 and the second product was given a comparative rating of 10.6, the first product would be given a rating of 9.4 (10.0−0.6=9.4) and the second product would be established as a 10. It will be understood that the system could be applied to many ratings for many products, in which case all the previously rated products may be automatically adjusted in a manner similar to the first product.
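• A hedged sketch of the re-benchmarking just described follows, assuming per-category ratings are kept as a mapping from product to rating on a ten-point scale; the function name and signature are illustrative assumptions rather than the actual implementation.

```python
# Sketch of establishing a new benchmark: if the comparative rating exceeds the
# top of the scale (e.g., 10.6), the new product becomes the "10" and all
# previously rated products are proportionally re-rated.
def rebenchmark(ratings, new_product, comparative_rating, scale_max=10.0):
    if comparative_rating <= scale_max:
        ratings[new_product] = comparative_rating
        return ratings
    factor = (comparative_rating - scale_max) / scale_max   # (10.6 - 10) / 10 = 0.06
    adjusted = {product: round(r - r * factor, 2) for product, r in ratings.items()}
    adjusted[new_product] = scale_max                        # new benchmark for "10"
    return adjusted

# Example from the text: the first product drops from 10 to 9.4
# rebenchmark({"first product": 10.0}, "second product", 10.6)
# -> {"first product": 9.4, "second product": 10.0}
```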
• For example, as a first step (STEP ONE), ratings may be received by the host server 104 from a user 103 rating multiple products within a category, e.g., product 1=3, product 2=5, and product 3=8. As a second step (STEP TWO), the host server 104 may proportionally adjust the ratings of the products to a standardized scale in which the rating of the highest rated product is set to the top value of the standardized scale and the ratings of the other products are proportionally adjusted. For example, if the standardized scale is a ten-point scale, product 3 may be set to 10 and products 1 and 2 may be proportionally adjusted, e.g., product 1 equals 4 (⅜×10=3.75) and product 2 equals 6 (⅝×10=6.25). Next (STEP THREE), the host server 104 receives a rating for a product within the category from the user 103 that is higher than the rating of the highest rated product within that category, e.g., product 4 equals 10.9. Finally (STEP FOUR), the host server 104 adjusts the new rating to the highest rating and proportionally adjusts the other ratings using an adjustment factor equal to (new benchmark rating−10)/10, i.e., New Score=Old Score−Old Score×(10.9−10)/10=Old Score−Old Score×0.09. For example, product 4 is set equal to 10; product 1 is set equal to 4 (4−4×0.09=3.64); product 2 is set equal to 5 (6−6×0.09=5.46); and product 3 is set equal to 9 (10−10×0.09=9.1). In another embodiment, ratings are proportionally adjusted whenever a potential conflict is identified and a comparative rating (e.g., higher and/or lower) is received from a user.
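• The normalization in STEP TWO and the proportional adjustment in STEP FOUR can be sketched as follows. This assumes the displayed integer ratings are simply rounded from the underlying values, an assumption made only to reproduce the numbers in the example above; the function names are illustrative.

```python
# STEP TWO: rescale raw ratings so the highest-rated product sits at the top
# of the standardized ten-point scale.
def normalize_to_scale(raw_ratings, scale_max=10.0):
    top = max(raw_ratings.values())
    return {product: r / top * scale_max for product, r in raw_ratings.items()}

# STEP FOUR: apply the adjustment factor (new benchmark - 10) / 10.
def apply_adjustment(ratings, new_benchmark_rating, scale_max=10.0):
    factor = (new_benchmark_rating - scale_max) / scale_max   # (10.9 - 10) / 10 = 0.09
    return {product: r - r * factor for product, r in ratings.items()}

raw = {"product 1": 3, "product 2": 5, "product 3": 8}
normalized = normalize_to_scale(raw)
# -> product 1: 3.75, product 2: 6.25, product 3: 10.0
adjusted = apply_adjustment({"product 1": 4, "product 2": 6, "product 3": 10}, 10.9)
# -> product 1: 3.64, product 2: 5.46, product 3: 9.1 (displayed as 4, 5, 9)
```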
  • Aspects of the adaptive rating system may include by way of non-limiting example:
• a) A rating system where the entity (product, person, service, experience, etc.) with the highest rating serves as the benchmark against which all lower rated products or experiences are ranked within a specific category.
  • b) A process that requires the user to rate any new entities in relation to the value of current benchmarks within a specific category.
  • c) A rating system where a process requires the user, when attempting to rate an entity that has an equal rating to an existing entity, to confirm that the rating of the entity is truly equal, where if the rating of the new entity is not equal, the rating of the new entity has to be set either greater than or less than the previous benchmark for that entity.
• d) A process in which, when the user indicates that the rating of a new (or re-rated) entity is greater than the current highest benchmark, all the ratings of entities weighted in relation to the former benchmark are adjusted proportionally.
• The present invention is capable of adjusting ratings as a user's tastes mature and experience within a category/subcategory evolves, while keeping scores based on a relative scale. For example, a user tries a mid-tier Bordeaux as one of their first wine experiences and gives it a 10. As the user tries other wines they do not enjoy as much, they rate them less than 10 (using the mid-tier Bordeaux as the top of the scale). The user may eventually try a Bordeaux they enjoy more than any other they have previously experienced. When they try to give it a score of 10, the adaptive rating system/method requires them to rate this Bordeaux in comparison to the mid-tier Bordeaux that is currently serving as their benchmark for "10". If the user feels they are equal, both remain a 10. If the user rates the new Bordeaux greater than the current standing mid-tier Bordeaux (e.g., 10.5), the 10.5 Bordeaux becomes the new benchmark for "10". The previous mid-tier Bordeaux that represented 10, along with all the wines that were rated in comparison to it, is automatically adjusted in relation to the new 10-point scale now established by the 10.5 Bordeaux. By adapting the rating scale (maintaining a True10™ rating system), an individual rating becomes significantly more valuable and relevant to users within a network, making one's own ratings more accurate for themselves and more meaningful and relevant to others.
  • The adapted score makes an expert's ratings or recommendations more relevant, which can be further enhanced by considering additional features, including, but not limited to:
  • a trust index: how many people directly trust a person as a TIRC (e.g., expert) for a specific category.
  • a like index: the degree to which other users “like” the answers, recommendations, and/or ratings of an expert.
  • an experience index: how many products the expert has rated, questions they have answered, etc.
• For example, a reviewer/expert may be evaluated on a scale of 0 to 10 based on the following four characteristics: (1) number of reviews written ("WRITTEN"), (2) number of reviews read by other users ("READ"), (3) number of times identified as a TIRC by other users ("EXPERT"), and (4) number of times reviews were identified by other users as helpful ("HELP"). For each characteristic, a maximum point level (e.g., 10) may be given to the reviewer/expert with the largest number of reviews/customer indications. Each evaluation characteristic may be assigned a weight coefficient correlated with its contribution to an overall evaluation to obtain a final evaluation score, e.g., ranging from 0 to 10. Maximum values for one or more characteristics may be designated. In one example, WRITTEN has a weight of 0.2 (KW=0.2), READ has a weight of 0.2 (KR=0.2), EXPERT has a weight of 0.5 (KE=0.5), and HELP has a weight of 0.1 (KH=0.1). Input variables may include: (1) i, the reviewer's index (i=0 . . . N, where N is the total number of reviewers); (2) Wi, number of reviews written by the ith reviewer; (3) Wmax, maximum number of reviews written by any reviewer/expert; (4) Ri, number of reviews by the ith reviewer/expert that were read by other users; (5) Rmax, maximum number of read reviews by any reviewer/expert; (6) Ei, number of times the ith reviewer/expert was identified as a TIRC by other users; (7) Emax, maximum number of TIRC identifications; (8) Hi, number of reviews by the ith reviewer/expert identified as helpful; (9) Hmax, maximum number of reviews identified as helpful for any reviewer/expert. An exemplary algorithm for determining a weight for each reviewer/expert, i, may be as set forth in equation (1).
• $EV = K_W \frac{W_i}{W_{\max}} \cdot 10 + K_R \frac{R_i}{R_{\max}} \cdot 10 + K_E \frac{E_i}{E_{\max}} \cdot 10 + K_H \frac{H_i}{H_{\max}} \cdot 10 \quad (1)$
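• A minimal sketch of equation (1) follows, using the illustrative weight coefficients described above; the weights, maxima, and function name are assumptions for this example and not prescribed by the disclosure.

```python
# Weighted 0-10 evaluation of reviewer/expert i per equation (1).
def reviewer_evaluation(w_i, r_i, e_i, h_i, w_max, r_max, e_max, h_max,
                        kw=0.2, kr=0.2, ke=0.5, kh=0.1):
    return (kw * w_i / w_max * 10
            + kr * r_i / r_max * 10
            + ke * e_i / e_max * 10
            + kh * h_i / h_max * 10)

# A reviewer at the maximum of every characteristic scores 10.0 when the
# weights sum to 1, e.g.:
# reviewer_evaluation(50, 200, 12, 40, 50, 200, 12, 40) -> 10.0
```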
• FIGS. 6A and 6B depict exemplary user interfaces for rating products. In FIG. 6A, a user is presented with a portion of a rating scale 600, e.g., integers 8, 9, and 10 of a ten-point scale. The host server 104 may present the rating scale horizontally on a user device 102. A user 103 may select a rating by moving an indicator along the rating scale 600 and selecting a particular point on the rating scale when the position of the indicator corresponds to the desired rating. For example, the user may utilize a user input device such as a mouse (not shown) to move the indicator and may depress a key on the mouse to make a rating selection. If a rating conflict is identified, e.g., by host server 104 as described above with reference to block 458 (e.g., the user tries to rate a new product as a "9" and there is an existing product rated as a "9"), the user is presented with a comparative rating scale such as depicted in FIG. 6B for use in making a comparative rating. The host server 104 may present the comparative rating scale 602 in an orientation other than the orientation of the rating scale 600, e.g., vertically, on a user device 102. In the illustrated embodiment, comparative rating scale 602 has finer granularity than rating scale 600. The user may then be required to select a comparative rating on the comparative rating scale 602 between the next greater value, "10," and the next lower value, "8," e.g., between 8.1 and 9.9, using an input device such as a mouse moving vertically along the comparative rating scale 602.
• It is contemplated that one or more of the various components and steps described above may be implemented through software that configures a server to perform the function of these components and/or steps. This software may be embodied in a non-transient machine-readable storage medium, e.g., a magnetic disc, an optical disk, a memory card, or other tangible medium capable of storing instructions. The instructions, when executed by a computer, such as a server, cause the computer to execute a method for performing the function of one or more components and/or steps described above.
  • Although the invention is illustrated and described herein with reference to specific embodiments, the invention is not intended to be limited to the details shown. Rather, various modifications may be made in the details within the scope and range of equivalents of the claims and without departing from the invention.

Claims (10)

What is claimed:
1. An adaptive rating method comprising:
receiving a first rating for a first product from a user;
receiving a second rating for a second product from the user;
identifying a conflict with a processor by comparing the first rating and the second rating;
soliciting feedback from the user to remedy the conflict; and
adjusting at least one of the first or second ratings with the processor responsive to feedback from the user;
wherein the first product and second product are similar.
2. The method of claim 1, wherein the identifying step comprises: comparing the first and second ratings; and identifying a conflict if the first and second ratings are equal.
3. The method of claim 1, wherein the soliciting step comprises:
sending a first inquiry to the user asking if the first rating is incorrect;
receiving a response from the user that the first rating is incorrect;
sending a second inquiry to the user soliciting a comparative rating between the first product and the second product; and
receiving the comparative rating from the user.
4. The method of claim 3, wherein the step of sending the second inquiry includes:
sending a rating scale depicting the first rating for the first product; and
requesting that the user identify the comparative rating for the second product to the first product.
5. The method of claim 3, wherein the adapting step comprises:
proportionally adjusting the first rating based on the comparative rating.
6. An adaptive rating system comprising:
a receiver that receives a first rating for a first product from a user and a second rating for a second product from the user;
a processor coupled to the receiver, the processor configured to identify a conflict by comparing the first rating and the second rating, solicit feedback from the user to remedy the conflict, and adjust at least one of the first or second ratings responsive to feedback from the user;
wherein the first and second product are obtained from the same vendor.
7. The system of claim 6, wherein the processor identifies the conflict by comparing the first and second ratings and identifying a conflict if the first and second ratings are equal.
8. The system of claim 6, wherein the processor solicits feedback by sending a first inquiry to the user asking if the first rating is incorrect, receiving a response from the user that the first rating is incorrect, sending a second inquiry to the user soliciting a comparative rating between the first product and the second product, and receiving the comparative rating from the user.
9. The system of claim 8, wherein the processor sends the second inquiry by sending a rating scale depicting the first rating for the first product and requesting that the user identify the comparative rating for the second product to the first product.
10. The system of claim 8, wherein the processor adapts the rating scale by proportionally adjusting the first rating based on the comparative rating.
US14/103,901 2010-12-15 2013-12-12 Adaptive rating system and method Abandoned US20140108282A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/103,901 US20140108282A1 (en) 2010-12-15 2013-12-12 Adaptive rating system and method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US42330910P 2010-12-15 2010-12-15
US13/023,884 US8321355B2 (en) 2010-12-15 2011-02-09 Adaptive rating system and method
US13/686,757 US8650133B2 (en) 2010-12-15 2012-11-27 Adaptive rating system and method
US14/103,901 US20140108282A1 (en) 2010-12-15 2013-12-12 Adaptive rating system and method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/686,757 Continuation US8650133B2 (en) 2010-12-15 2012-11-27 Adaptive rating system and method

Publications (1)

Publication Number Publication Date
US20140108282A1 true US20140108282A1 (en) 2014-04-17

Family

ID=46235672

Family Applications (4)

Application Number Title Priority Date Filing Date
US13/023,847 Abandoned US20120158844A1 (en) 2010-12-15 2011-02-09 Social network information system and method
US13/023,884 Expired - Fee Related US8321355B2 (en) 2010-12-15 2011-02-09 Adaptive rating system and method
US13/686,757 Expired - Fee Related US8650133B2 (en) 2010-12-15 2012-11-27 Adaptive rating system and method
US14/103,901 Abandoned US20140108282A1 (en) 2010-12-15 2013-12-12 Adaptive rating system and method

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US13/023,847 Abandoned US20120158844A1 (en) 2010-12-15 2011-02-09 Social network information system and method
US13/023,884 Expired - Fee Related US8321355B2 (en) 2010-12-15 2011-02-09 Adaptive rating system and method
US13/686,757 Expired - Fee Related US8650133B2 (en) 2010-12-15 2012-11-27 Adaptive rating system and method

Country Status (1)

Country Link
US (4) US20120158844A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD731520S1 (en) * 2013-06-20 2015-06-09 Tencent Technology (Shenzhen) Company Limited Portion of a display screen with animated graphical user interface
USD734356S1 (en) * 2013-06-20 2015-07-14 Tencent Technology (Shenzhen) Company Limited Portion of a display screen with animated graphical user interface
US9418375B1 (en) 2015-09-30 2016-08-16 International Business Machines Corporation Product recommendation using sentiment and semantic analysis
USD769909S1 (en) * 2012-03-07 2016-10-25 Apple Inc. Display screen or portion thereof with graphical user interface
USD776705S1 (en) 2013-10-22 2017-01-17 Apple Inc. Display screen or portion thereof with graphical user interface
USD786272S1 (en) 2012-03-06 2017-05-09 Apple Inc. Display screen or portion thereof with graphical user interface
USD900123S1 (en) * 2018-02-12 2020-10-27 Acordo Certo—Reparacao E Manutencao Automovel, LTA Display screen or portion thereof with graphical user interface
USD915420S1 (en) * 2019-03-07 2021-04-06 Intuit Inc. Display screen with graphical user interface
USD937890S1 (en) 2018-06-03 2021-12-07 Apple Inc. Electronic device with graphical user interface

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120158844A1 (en) * 2010-12-15 2012-06-21 VineLoop, LLC Social network information system and method
US9870424B2 (en) * 2011-02-10 2018-01-16 Microsoft Technology Licensing, Llc Social network based contextual ranking
US9235863B2 (en) 2011-04-15 2016-01-12 Facebook, Inc. Display showing intersection between users of a social networking system
US20130041837A1 (en) * 2011-08-12 2013-02-14 Accenture Global Services Limited Online Data And In-Store Data Analytical System
US9189965B2 (en) * 2012-06-29 2015-11-17 International Business Machines Corporation Enhancing posted content in discussion forums
CA2880492A1 (en) * 2012-08-01 2014-02-06 Sears Brands, Llc Contests and sweepstakes
US9858317B1 (en) 2012-12-03 2018-01-02 Google Inc. Ranking communities based on members
US20140298265A1 (en) * 2013-03-04 2014-10-02 Triptease Limited Photo-review creation
US9792330B1 (en) 2013-04-30 2017-10-17 Google Inc. Identifying local experts for local search
CN104252518B (en) * 2014-03-13 2016-08-24 腾讯科技(深圳)有限公司 Information displaying method and device
US20150348188A1 (en) * 2014-05-27 2015-12-03 Martin Chen System and Method for Seamless Integration of Trading Services with Diverse Social Network Services
US9070088B1 (en) 2014-09-16 2015-06-30 Trooly Inc. Determining trustworthiness and compatibility of a person
US20160110778A1 (en) * 2014-10-17 2016-04-21 International Business Machines Corporation Conditional analysis of business reviews
US20170124468A1 (en) * 2015-10-30 2017-05-04 International Business Machines Corporation Bias correction in content score
US10235699B2 (en) 2015-11-23 2019-03-19 International Business Machines Corporation Automated updating of on-line product and service reviews

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6487541B1 (en) 1999-01-22 2002-11-26 International Business Machines Corporation System and method for collaborative filtering with applications to e-commerce
EP1234251B1 (en) 1999-11-03 2011-04-06 Sublinks ApS Method, system, and computer readable medium for managing resource links
US20020103692A1 (en) 2000-12-28 2002-08-01 Rosenberg Sandra H. Method and system for adaptive product recommendations based on multiple rating scales
US7418447B2 (en) 2001-01-16 2008-08-26 Cogentex, Inc. Natural language product comparison guide synthesizer
US20030220980A1 (en) 2002-05-24 2003-11-27 Crane Jeffrey Robert Method and system for providing a computer network-based community-building function through user-to-user ally association
US7310612B2 (en) 2003-08-13 2007-12-18 Amazon.Com, Inc. Personalized selection and display of user-supplied content to enhance browsing of electronic catalogs
US7822631B1 (en) 2003-08-22 2010-10-26 Amazon Technologies, Inc. Assessing content based on assessed trust in users
US20050198031A1 (en) 2004-03-04 2005-09-08 Peter Pezaris Method and system for controlling access to user information in a social networking environment
US8898239B2 (en) * 2004-03-05 2014-11-25 Aol Inc. Passively populating a participant list with known contacts
AU2005258080A1 (en) 2004-06-18 2006-01-05 Pictothink Corporation Network content organization tool
US7359894B1 (en) 2004-06-30 2008-04-15 Google Inc. Methods and systems for requesting and providing information in a social network
WO2006008778A1 (en) * 2004-07-23 2006-01-26 Fitneck S.R.L. Clothes buttoning system with self-regulation of size
US20060184464A1 (en) 2004-11-22 2006-08-17 Nec Laboratories America, Inc. System and methods for data analysis and trend prediction
US7409362B2 (en) 2004-12-23 2008-08-05 Diamond Review, Inc. Vendor-driven, social-network enabled review system and method with flexible syndication
US7761399B2 (en) 2005-08-19 2010-07-20 Evree Llc Recommendation networks for ranking recommendations using trust rating for user-defined topics and recommendation rating for recommendation sources
WO2008020312A2 (en) 2006-04-28 2008-02-21 Berger Jacqueline M Social networking and dating platform and method
US20080059891A1 (en) 2006-08-31 2008-03-06 Eyal Herzog System and a method for improving digital media content using automatic process
US8494436B2 (en) * 2006-11-16 2013-07-23 Watertown Software, Inc. System and method for algorithmic selection of a consensus from a plurality of ideas
US7930302B2 (en) * 2006-11-22 2011-04-19 Intuit Inc. Method and system for analyzing user-generated content
US7756756B1 (en) 2007-09-12 2010-07-13 Amazon Technologies, Inc. System and method of providing recommendations
US20090192884A1 (en) 2008-01-28 2009-07-30 Ren-Yi Lo Method and system for incentive-based knowledge-integrated collaborative change management
US8407286B2 (en) 2008-05-15 2013-03-26 Yahoo! Inc. Method and apparatus for utilizing social network information for showing reviews
US8442974B2 (en) 2008-06-27 2013-05-14 Wal-Mart Stores, Inc. Method and system for ranking web pages in a search engine based on direct evidence of interest to end users
EP2297685A1 (en) * 2008-07-04 2011-03-23 Yogesh Chunilal Rathod Methods and systems for brands social networks (bsn) platform
US20100082419A1 (en) * 2008-10-01 2010-04-01 David Hsueh-Chi Au-Yeung Systems and methods of rating an offer for a products
US10489747B2 (en) 2008-10-03 2019-11-26 Leaf Group Ltd. System and methods to facilitate social media
US8176510B2 (en) * 2008-11-12 2012-05-08 United Video Properties, Inc. Systems and methods for detecting inconsistent user actions and providing feedback
US20100217717A1 (en) * 2009-02-24 2010-08-26 Devonwood Logistics, Inc. System and method for organizing and presenting evidence relevant to a set of statements
US20110252031A1 (en) * 2009-12-31 2011-10-13 Michael Blumenthal Method, Device, and System for Analyzing and Ranking Products
US20120158844A1 (en) * 2010-12-15 2012-06-21 VineLoop, LLC Social network information system and method

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD786272S1 (en) 2012-03-06 2017-05-09 Apple Inc. Display screen or portion thereof with graphical user interface
USD769909S1 (en) * 2012-03-07 2016-10-25 Apple Inc. Display screen or portion thereof with graphical user interface
USD731520S1 (en) * 2013-06-20 2015-06-09 Tencent Technology (Shenzhen) Company Limited Portion of a display screen with animated graphical user interface
USD734356S1 (en) * 2013-06-20 2015-07-14 Tencent Technology (Shenzhen) Company Limited Portion of a display screen with animated graphical user interface
USD776705S1 (en) 2013-10-22 2017-01-17 Apple Inc. Display screen or portion thereof with graphical user interface
USD831696S1 (en) 2013-10-22 2018-10-23 Apple Inc. Display screen or portion thereof with set of graphical user interfaces
US9418375B1 (en) 2015-09-30 2016-08-16 International Business Machines Corporation Product recommendation using sentiment and semantic analysis
US9595053B1 (en) 2015-09-30 2017-03-14 International Business Machines Corporation Product recommendation using sentiment and semantic analysis
US9704185B2 (en) 2015-09-30 2017-07-11 International Business Machines Corporation Product recommendation using sentiment and semantic analysis
USD900123S1 (en) * 2018-02-12 2020-10-27 Acordo Certo—Reparacao E Manutencao Automovel, LTA Display screen or portion thereof with graphical user interface
USD937890S1 (en) 2018-06-03 2021-12-07 Apple Inc. Electronic device with graphical user interface
USD915420S1 (en) * 2019-03-07 2021-04-06 Intuit Inc. Display screen with graphical user interface
USD986257S1 (en) 2019-03-07 2023-05-16 Intuit Inc. Display screen with graphical user interface

Also Published As

Publication number Publication date
US20120158611A1 (en) 2012-06-21
US20120158844A1 (en) 2012-06-21
US8650133B2 (en) 2014-02-11
US8321355B2 (en) 2012-11-27
US20130185222A1 (en) 2013-07-18

Similar Documents

Publication Publication Date Title
US8650133B2 (en) Adaptive rating system and method
US9380016B2 (en) Social network information system and method
US9223849B1 (en) Generating a reputation score based on user interactions
US9069945B2 (en) User validation in a social network
US10147037B1 (en) Method and system for determining a level of popularity of submission content, prior to publicizing the submission content with a question and answer support system
Liu et al. Can response management benefit hotels? Evidence from Hong Kong hotels
US20170351653A1 (en) System and method of aggregating networked data and integrating polling data to generate entity-specific scoring metrics
US8271516B2 (en) Social networks service
US20120303415A1 (en) System and method of providing recommendations
US20160232474A1 (en) Methods and systems for recommending crowdsourcing tasks
US9619846B2 (en) System and method for relevance-based social network interaction recommendation
JP4801469B2 (en) Post processing device
US20190050119A1 (en) Collaborative Peer Review System and Method of Use
US20150100581A1 (en) Method and system for providing assistance to a responder
US20130212229A1 (en) Social networking information system and method
JP4361906B2 (en) Post processing device
US20160292161A1 (en) Organizational fit
US20130013457A1 (en) Social network information system and method
US20170220935A1 (en) Member feature sets, group feature sets and trained coefficients for recommending relevant groups
US20170223122A1 (en) Systems and methods for timely propagation of network content
US20150293988A1 (en) System and Method for Opinion Sharing and Recommending Social Connections
CN112100511B (en) Preference degree data obtaining method and device and electronic equipment
US20140006298A1 (en) Adaptive rating system and method
US10380205B2 (en) Algorithm for selecting and scoring suggested action
US11295353B2 (en) Collaborative peer review search system and method of use

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION