US20210042809A1 - System and method for intuitive content browsing


Info

Publication number
US20210042809A1
Authority
US
United States
Prior art keywords
content
focal point
visual representation
browsed
dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/082,424
Inventor
Roi KLIPER
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Joan and Irwin Jacobs Technion Cornell Institute
Original Assignee
Joan and Irwin Jacobs Technion Cornell Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/940,396 (now U.S. Pat. No. 10,460,286)
Application filed by Joan and Irwin Jacobs Technion Cornell Institute
Priority to US17/082,424
Publication of US20210042809A1

Classifications

    • G06Q 30/0625 - Electronic shopping [e-shopping]; item investigation; directed, with specific intent or strategy
    • G06F 16/9577 - Retrieval from the web; browsing optimisation; optimising the visualization of content, e.g. distillation of HTML documents
    • G06Q 10/087 - Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G06Q 20/12 - Payment architectures specially adapted for electronic shopping systems
    • G06Q 20/20 - Point-of-sale [POS] network systems
    • G06Q 20/202 - Interconnection or interaction of plural electronic cash registers [ECR] or to host computer, e.g. network details, transfer of information from host to ECR or from ECR to ECR
    • G06Q 30/0641 - Electronic shopping [e-shopping]; shopping interfaces
    • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] using icons
    • G06F 3/0485 - Interaction techniques based on graphical user interfaces [GUI]; scrolling or panning

Definitions

  • The present disclosure relates generally to displaying content, and more particularly to intuitively organizing content to allow for interactions in two-dimensional and three-dimensional space.
  • User queries must contain sufficient information to identify relevant material.
  • Although algorithms used in tandem with, e.g., search engines have been developed to provide a much greater likelihood of finding relevant content for even basic queries, users may nevertheless face challenges in accurately finding particularly relevant content due to the arcane rules utilized in accepting user queries. For example, users can find more accurate content using logical operators that may not be known or understood by the average person.
  • Refinement may include submitting a refined query to the retrieval system and receiving new results respective of the refined query, thereby effectively submitting a new search.
  • Such refinement wastes time and computing resources due to the submission of additional queries, even for users who are familiar with the idiosyncrasies of web-based content searches.
  • Additionally, inexperienced users may be frustrated by the inability to properly refine their searches to obtain the desired results.
  • As an example, a user living in New York City seeking to purchase wine may submit a query of “wine.”
  • Upon receiving search results related to wine generally, the user may wish to refine his search to focus on red wine and, as a result, enters a refined query of “red wine.”
  • Subsequently, the user may wish to further refine his search to focus on red wine originating from France and, thus, enters a refined query of “red wine France.”
  • The results of this search may include content related to red wine being sold in France and/or to red wine originating from France being sold anywhere in the world.
  • The user may further need to refine his search to focus on French red wine that can be bought locally and, therefore, enters a further refined query of “red wine France in New York.”
  • Each of the refinements requires the user to manually enter a refined query and submit the query for a new search, thereby wasting the user's time and unnecessarily using computing resources.
  • Existing solutions for refining content queries often involve offering predetermined potential refined queries and directing users to content upon user interactions with the potential refined queries.
  • The potential refined queries may be based on, e.g., queries submitted by previous users.
  • However, previous user queries do not always accurately capture a user's current needs, particularly when the user is not aware of his or her needs. For example, a user seeking to buy chocolate may initially enter the query “chocolate” before ultimately deciding that she would like to buy dark chocolate made in Zurich, Switzerland.
  • Potential refinements offered based on the initial query may include “dark chocolate,” “white chocolate,” “milk chocolate,” and “Swiss chocolate,” none of which entirely captures the user's ultimate needs. Thus, the user may need to perform several refinements and resend queries multiple times before arriving at the desired content.
  • When viewing search results or otherwise viewing content, the user is typically presented with display options such as organizing content according to various organizational schemes (e.g., list form, grid form, and so on) and/or based on different ordering schemes (e.g., by date or time, relevancy to a query, alphabetical order, and so on).
  • Users viewing content related to a particular book may wish to view content related to books by the same author, about the same subject, from the same genre or literary era, and so on.
  • Users may be able to reorganize displayed content by, e.g., changing the organizational scheme, submitting refinement queries, changing the ordering scheme, and so on.
  • Intuitive content organization and navigation therefore serve an important role in improving the overall user experience by increasing user engagement and allowing for more efficient retrieval and/or viewing of content.
  • Such improvements to user experience may be particularly important in the search engine context, as improved user experience may result in increased use of search engine services and/or purchases of products.
  • Certain embodiments disclosed herein include a method for intuitive content browsing.
  • the method includes determining, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item; identifying, based on the request and the determined initial focal point, the content to be browsed; generating, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and sending, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
  • Certain embodiments disclosed herein also include a non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process, the process comprising: determining, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item; identifying, based on the request and the determined initial focal point, the content to be browsed; generating, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and sending, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
  • Certain embodiments disclosed herein also include a system for intuitive content browsing.
  • the system comprises: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: determine, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item; identify, based on the request and the determined initial focal point, the content to be browsed; generate, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and send, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
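The claimed flow can be sketched as a short pipeline. This is a minimal illustration, not the actual implementation: every function and field name below (determine_focal_point, identify_content, and so on) is a hypothetical stand-in for the corresponding claim step.

```python
# Minimal sketch of the claimed browsing flow; all names are hypothetical
# and only illustrate the order of operations recited in the claims.
from dataclasses import dataclass, field

@dataclass
class VisualRepresentation:
    focal_point: str                            # content item displayed prominently
    items: list = field(default_factory=list)   # content organized around the focal point

def determine_focal_point(request: dict) -> str:
    # e.g., derived from the user query carried in the browse request
    return request.get("query", "")

def identify_content(request: dict, focal_point: str) -> list:
    # placeholder: would query retrieval systems using the focal point as a seed
    return [f"{focal_point} result {i}" for i in range(5)]

def generate_representation(content: list, focal_point: str) -> VisualRepresentation:
    # organize the identified content with respect to the focal point
    return VisualRepresentation(focal_point=focal_point, items=content)

def handle_browse_request(request: dict) -> VisualRepresentation:
    focal = determine_focal_point(request)
    content = identify_content(request, focal)
    # the result would then be sent to the user device for display
    return generate_representation(content, focal)

rep = handle_browse_request({"query": "red wine"})
print(rep.focal_point)   # "red wine"
print(len(rep.items))    # 5
```

The ordering mirrors the claim language: focal point first, then content identification seeded by it, then a representation organized around it.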
  • FIG. 1 is a network diagram utilized to describe the various disclosed embodiments.
  • FIG. 2 is a flowchart illustrating a method for organizing content according to an embodiment.
  • FIG. 3 is a screenshot illustrating a spherical organization of content.
  • FIG. 4 is a flowchart illustrating a method for displaying content that may be intuitively browsed according to an embodiment.
  • FIG. 5 is a schematic diagram of a visual representation generator according to an embodiment.
  • FIG. 1 shows an example network diagram 100 utilized to describe the various disclosed embodiments.
  • The network diagram 100 includes a network 110, a user device 120, a visual representation generator 130, a plurality of content retrieval systems 140-1 through 140-n (hereinafter referred to individually as a retrieval system 140 and collectively as retrieval systems 140, merely for simplicity purposes), and an inventory management system 150.
  • The network 110 may be the Internet, the world-wide web (WWW), a local area network (LAN), a wide area network (WAN), a metro area network (MAN), or any other network configured to enable communication between the elements of the network diagram 100.
  • The user device 120 may be a personal computer (PC), a personal digital assistant (PDA), a mobile phone, a smart phone, a tablet computer, a wearable computing device, an e-reader, a game console, or any other device equipped with browsing capabilities.
  • The content retrieval systems 140 may include, but are not limited to, search engines or other sources from which content may be retrieved. Alternatively or collectively, the content retrieval systems 140 may include or be communicatively connected to one or more data sources which can be queried or crawled for content.
  • The user device 120 may further include a browsing agent 125 installed therein.
  • The browsing agent 125 may be, but is not limited to, a mobile application, a virtual application, a web application, a native application, and the like. In certain configurations, the browsing agent 125 can be realized as an add-on or plug-in for a web browser. In other configurations, the browsing agent 125 is a web browser.
  • The user device 120 may receive a user query or otherwise receive a request to display content (e.g., via the browsing agent 125) and send, to the visual representation generator 130, a request to generate a visual representation of the content to be browsed.
  • The request to generate a visual representation may include, but is not limited to, the user query, the content to be browsed, an identifier of the content to be browsed, or a combination thereof.
  • The user query may include a text query or a voice query.
  • Alternatively, the user query may be submitted through a user gesture, e.g., tapping on a certain image or key word.
  • The visual representation generator 130 is configured to receive the request to generate a visual representation and to determine an initial focal point based on the request.
  • The initial focal point includes content to be initially displayed prominently (e.g., before navigation) to the user.
  • Options for prominently displaying the initial focal point include: displaying the initial focal point as larger than other content; displaying the initial focal point in a center, top, or other portion of a display; displaying the focal point with at least one prominence marker (e.g., a letter, a number, a symbol, a graphic, a color, etc.); displaying the focal point with a higher brightness or resolution than other content; displaying the focal point using one or more animations (e.g., displaying the focal point as moving up and down); a combination thereof; and the like.
  • For example, a most recent image of a dog may be selected as the initial focal point such that, when the visual representation is initially displayed to the user, the image of the dog is the largest and centermost image appearing on a display (not shown) of the user device 120.
  • Determining an initial focal point based on the request may further include pre-processing the user query.
  • Pre-processing the user query may include, but is not limited to, correcting typos, enriching the query with information related to the user (e.g., a browsing history, a current location, etc.), and so on.
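The pre-processing described above can be sketched as follows. This is an illustrative assumption, not the disclosed implementation: the vocabulary, the similarity cutoff, and the user profile fields are all hypothetical stand-ins.

```python
# Hedged sketch of query pre-processing: typo correction against a known
# vocabulary, then enrichment with user context such as a current location.
import difflib

VOCABULARY = ["wine", "red wine", "chocolate", "shoes", "beer"]  # illustrative

def preprocess_query(query: str, user: dict) -> str:
    # correct the query term against the vocabulary (cutoff is an assumption)
    matches = difflib.get_close_matches(query.lower(), VOCABULARY, n=1, cutoff=0.6)
    query = matches[0] if matches else query
    # enrich with user information, e.g., the user's current location
    location = user.get("location")
    return f"{query} in {location}" if location else query

print(preprocess_query("winne", {"location": "New York"}))  # "wine in New York"
```

A real system would likely use a richer spelling model and more profile signals; the shape of the step (correct, then enrich) is what matters here.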
  • The initial focal point may include a web site utilized as a seed for a search.
  • For example, the initial focal point for a search based on the user query “buy shoes” may be a web site featuring a large variety of shoes.
  • The visual representation generator 130 is configured to retrieve content from the retrieval systems 140 based on a focal point. The first time content is retrieved for the request, the initial focal point is used.
  • The retrieval systems 140 may search using the user query with respect to the focal point. Alternatively or collectively, the visual representation generator 130 may crawl through one or more of the retrieval systems 140 for the content.
  • The retrieved content may include, but is not limited to, search results, content to be displayed, or both.
  • The visual representation generator 130 may be configured to query an inventory management system 150 and to receive, from the inventory management system 150, a probability that one or more vendors have a sufficient inventory of a product based on the user query and the focal point.
  • An example implementation of an inventory management system for returning probabilities that vendors have sufficient inventories of products is described further in the above-referenced U.S. patent application Ser. No. 14/940,396 filed on Nov. 13, 2015, assigned to the common assignee, which is hereby incorporated by reference for all that it contains.
  • The visual representation generator 130 is further configured to organize the retrieved content.
  • The content may be organized around the focal point. Accordingly, the content, when initially organized, may be organized around the initial focal point.
  • The visual representation generator 130 may be configured to receive user interactions respective of the organized content and to determine a current focal point based on the user interactions.
  • The content may be organized as points on a sphere and displayed to the user.
  • The sphere may be displayed in a three-dimensional (3D) plane (i.e., using a stereoscopic display) or in a two-dimensional (2D) plane (i.e., such that the sphere appears to be 3D merely via optical illusion).
  • The visual representation generator 130 is configured to generate a visual representation including a browsing environment and a plurality of dynamic content elements.
  • Each dynamic content element includes content or representations of content to be browsed.
  • The focal point includes one of the dynamic content elements.
  • The browsing environment may include, but is not limited to, images, videos, or other visual illustrations of an environment in which the content is to be browsed.
  • The visual illustrations may be two-dimensional, three-dimensional, and the like.
  • The browsing environment may include visual illustrations of a real location (e.g., a store or other physical location), of a non-real location (e.g., a cartoon library, a virtual store, an imaginary combination of real stores, or any other virtual or fictional location), or any other visual illustrations (for example, a visual illustration showing text, objects, people, animals, solid colors, patterns, combinations thereof, and the like).
  • The browsing environment is rendered at the beginning of browsing and may be static (i.e., remaining the same as content is browsed) or dynamic (i.e., re-rendered or otherwise updated as content is browsed).
  • For example, the browsing environment may include images showing a physical store in which products represented by the dynamic content elements are sold, where the images are updated to show different areas of the physical store as the user “moves” through the store by navigating among dynamic elements representing products sold in the store.
  • Alternatively, the browsing environment may include a static image illustrating a library, where the static image remains constant as the user navigates among dynamic elements representing books in the library.
  • The dynamic content elements may be updated and rendered in real-time as the user browses. Updating the dynamic content elements may include, but is not limited to, changing the content to be displayed, updating information related to each content item (e.g., updating a value for a number of items in stock when the content represents a purchasable product), dynamically organizing the dynamic content elements, a combination thereof, and the like. Dynamic organization of the dynamic content elements may be based on one or more dynamic organization rules.
  • Such dynamic organization rules may be based on, but are not limited to, the amount of inventory in stock for store products (e.g., a current inventory or projected future inventory), popularity of content (e.g., content which is trending may be organized closer to the focal point), relevance to user interests (e.g., content that is more relevant to current user interests may be organized closer to the focal point), combinations thereof, and the like.
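One way to realize such dynamic organization rules is to score each element and place the highest-scoring elements closest to the focal point. The weights and field names below are illustrative assumptions; the disclosure does not prescribe a particular formula.

```python
# Sketch of dynamic organization rules: each element gets a score combining
# inventory pressure, popularity, and relevance; higher score = closer to
# the focal point. Weights (0.5/0.3/0.2) are arbitrary illustrative choices.

def organization_score(element: dict) -> float:
    # lower inventory pulls an element toward the focal point, as do higher
    # popularity and higher relevance to current user interests
    inventory_pressure = 1.0 / (1 + element.get("inventory", 0))
    return (0.5 * inventory_pressure
            + 0.3 * element.get("popularity", 0.0)
            + 0.2 * element.get("relevance", 0.0))

def organize(elements: list) -> list:
    # closest-to-focal-point first
    return sorted(elements, key=organization_score, reverse=True)

elements = [
    {"name": "item A", "inventory": 100, "popularity": 0.2, "relevance": 0.5},
    {"name": "item B", "inventory": 2,   "popularity": 0.1, "relevance": 0.4},
    {"name": "item C", "inventory": 50,  "popularity": 0.9, "relevance": 0.9},
]
print([e["name"] for e in organize(elements)])  # ['item C', 'item B', 'item A']
```

Because the score reads live fields, re-running `organize` as inventory updates arrive yields the real-time reorganization described above.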
  • For example, the visual representation may represent an online store, with the browsing environment showing a storefront image and the dynamic content elements including product listings.
  • The product listings may include, but are not limited to, text (e.g., product descriptions, product information such as inventory and price, etc.), images (e.g., images of the product), videos (e.g., videos demonstrating the product), sound (e.g., sound including customer reviews), combinations thereof, and the like.
  • The dynamic content elements are rendered in real-time as the user browses.
  • The rendered dynamic content elements include information related to the product listings, where the information includes at least a current or projected inventory.
  • The rendered dynamic content elements are organized in real-time based on dynamic organization rules.
  • The dynamic organization rules are based on inventory such that lower inventory items or items having minimal inventory (e.g., having an amount of inventory below a predetermined threshold) are organized closer to the focal point. Such organization may be useful for, e.g., incentivizing users to buy lower stock products. As the user browses, inventory information for the product listings is updated and the dynamic content elements are organized accordingly.
  • FIG. 3 shows an example screenshot illustrating a content sphere 300 which is a spherical visual representation of content.
  • The content sphere 300 is organized around a focal point 310.
  • A plurality of images 320 act as points on the sphere representing content. If the focal point changes, the sphere may be rotated to show a different icon as the focal point.
  • A horizontal axis 330 and a vertical axis 340 visually represent potential directions for changing the focal point to view additional content. For example, the user may gesture horizontally to view content oriented along the horizontal axis 330 and may gesture vertically to view content oriented along the vertical axis 340.
  • The images 320 may include icons, textual information, widgets, or any other representation or presentation of the displayed content.
  • The axes 330 and 340 may be adaptably changed as the user selects new content to be the focal point 310 (e.g., by providing user gestures with respect to one of the images 320), as the user rotates the content sphere 300 (e.g., by providing user gestures with respect to the axes 330 and 340), or both. That is, the visual representation generator 130 is configured to predict (through a learning process) the user's path as the user browses via the presented content sphere 300. As an example, if the focal point 310 includes content related to President Bill Clinton, then rotating the content sphere 300 down along the vertical axis 340 may return results related to the U.S. in the 1990s. On the other hand, rotating the content sphere 300 to the right along the horizontal axis 330 may return results related to the Democratic party.
  • Axes of interest may be initially predefined, and then adaptably modified. For example, when the focal point 310 is changed to content related to President Obama by rotating the content sphere 300 along the horizontal axis 330, the content available by rotating along the vertical axis 340 may become content related to the U.S. in the 2000s.
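The axis semantics above can be pictured as a mapping from (current focal point, rotation direction) to a related facet of content. The lookup table below is a hypothetical stand-in for the learned prediction; a real system would compute these facets rather than enumerate them.

```python
# Illustrative sketch of adaptive axes: each axis direction maps the current
# focal point to a related content facet. The table mirrors the examples in
# the text and is purely a stand-in for the described learning process.

AXIS_FACETS = {
    ("Bill Clinton", "down"):  "the U.S. in the 1990s",
    ("Bill Clinton", "right"): "the Democratic party",
    ("Barack Obama", "down"):  "the U.S. in the 2000s",
}

def rotate(focal_point: str, direction: str) -> str:
    # fall back to the current focal point when no facet is predicted
    return AXIS_FACETS.get((focal_point, direction), focal_point)

print(rotate("Bill Clinton", "down"))   # "the U.S. in the 1990s"
print(rotate("Barack Obama", "down"))   # "the U.S. in the 2000s"
```

Note how changing the focal point (Clinton to Obama) changes what the same vertical rotation returns, which is the adaptive behavior described above.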
  • Each content item can be presented in different virtual settings.
  • For example, a lipstick may be presented in the context of cosmetics and then again as part of a Halloween costume, thus providing a continually new browsing experience.
  • The display may include a graphical user interface for receiving user interactions with respect to the spherically organized search results.
  • For example, the search results for the query “alcoholic drinks” may be displayed as points on the content sphere 300 based on an initial focal point of a website featuring beer. Results from the initial focal point may be initially displayed as the focal point 310.
  • A new focal point 310 may be determined as a web site for another type of alcoholic beverage (e.g., wine, vodka, and so on).
  • Alternatively, a new focal point may be determined as a web site for a particular brand of beer.
  • The example content sphere 300 shown in FIG. 3 is merely an example of a visual representation and is not limiting on the disclosed embodiments.
  • The content sphere 300 is shown as having two axes merely for illustrative purposes. Different numbers of axes may be equally utilized, any of which may be, but are not necessarily, horizontal or vertical. For example, diagonal axes may be utilized in addition to or instead of horizontal and vertical axes. Further, the axes may be three-dimensional without departing from the scope of the disclosure.
  • The content sphere 300 may be navigated by moving closer to or farther away from a center point of the sphere.
  • The content may be shown in a shape other than a spherical shape without departing from the scope of the disclosure. It should also be noted that the content sphere 300 is shown as having a solid black background surrounding the images 320 merely as an example illustration. Other browsing environments (e.g., other colors, patterns, static or dynamic images, videos, combinations thereof, etc.) may be equally utilized without departing from the scope of the disclosed embodiments. For example, the content may be rendered as three-dimensional representations of shelves and aisles of a real store, where the view of the shelves and aisles is updated as the user browses through images of products in the store.
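One concrete way to place content items as points on such a sphere is a Fibonacci lattice, which spreads N points roughly evenly over the surface; this placement algorithm is an assumption for illustration, as the disclosure does not prescribe one. The 2D projection shows how the "sphere appears to be 3D merely via optical illusion" idea can be faked on a flat display.

```python
# Sketch: place N content items evenly on a unit sphere (Fibonacci lattice),
# then project to 2D, scaling each icon by depth so nearer items look larger.
import math

def sphere_points(n: int, radius: float = 1.0):
    """Return n (x, y, z) points roughly evenly spread on a sphere."""
    golden_angle = math.pi * (3 - math.sqrt(5))
    points = []
    for i in range(n):
        y = 1 - 2 * (i + 0.5) / n            # latitude: from near +1 to near -1
        r = math.sqrt(max(0.0, 1 - y * y))   # radius of the horizontal circle at y
        theta = golden_angle * i             # longitude spirals around the sphere
        points.append((radius * r * math.cos(theta),
                       radius * y,
                       radius * r * math.sin(theta)))
    return points

def project(point, size=100):
    # orthographic 2D projection: drop z, scale icon size by depth so items
    # "nearer" the viewer (larger z) render larger, mimicking a 3D sphere
    x, y, z = point
    scale = 0.5 + 0.5 * (z + 1) / 2
    return (x, y, size * scale)

pts = sphere_points(12)
print(len(pts))  # 12
```

Rotating the sphere to bring a new focal point forward then amounts to rotating these coordinates and re-projecting.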
  • FIG. 2 is an example flowchart 200 illustrating a method for refining search results according to an embodiment.
  • The method may be performed by a visual representation generator (e.g., the visual representation generator 130, FIG. 1).
  • The method may be utilized to adaptively update visual representations of search results (e.g., search results displayed as the content sphere 300, FIG. 3).
  • At S 210, a query by a user of a user device is received.
  • The query may be received in the form of text, multimedia content, and so on.
  • For example, the query may be a textual query or a voice query.
  • The query may be preprocessed by, e.g., correcting typos, enriching the query with user information, and so on.
  • At S 220, a focal point is determined based on the received query.
  • The focal point may include a web site to be utilized as a seed for a search based on the query.
  • The determination may include identifying one or more web sites related to the user query.
  • The seed web site may be selected from among the identified web sites based on, e.g., relative validity of the sites (e.g., numbers of legitimate clicks or presence of malware). For example, a user query for “cheese” may result in identification of web sites related to grocery stores, restaurants, and so on.
  • The seed website may be utilized as the initial focal point for the query such that content related to the seed website is displayed as the focal point prior to user interactions with respect to the visual representation.
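Seed-site selection by relative validity can be sketched as a scoring pass over candidate sites. The scoring rule and the candidate fields below are illustrative assumptions, not part of the disclosure.

```python
# Sketch of selecting a seed web site from candidates by relative validity;
# here, malware-flagged sites are disqualified and legitimate clicks count
# in their favor. Fields and URLs are hypothetical examples.

def validity(site: dict) -> float:
    if site.get("has_malware", False):
        return -1.0                      # disqualify flagged sites
    return float(site.get("legitimate_clicks", 0))

def select_seed(candidates: list) -> dict:
    # highest-validity candidate becomes the initial focal point's seed
    return max(candidates, key=validity)

candidates = [
    {"url": "grocery.example",    "legitimate_clicks": 5400, "has_malware": False},
    {"url": "restaurant.example", "legitimate_clicks": 9100, "has_malware": True},
    {"url": "cheese.example",     "legitimate_clicks": 7200, "has_malware": False},
]
print(select_seed(candidates)["url"])  # "cheese.example"
```

Note that the most-clicked candidate loses here because it is flagged, matching the idea that validity weighs both legitimate clicks and the presence of malware.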
  • At S 230, at least one retrieval system is queried with respect to the received user query to retrieve search results.
  • The focal point is further sent to the retrieval systems as a seed for the search.
  • The retrieval systems may include, but are not limited to, search engines, inventory management systems, and other systems capable of retrieving content respective of queries.
  • S 230 may further include querying at least one inventory management system for probabilities that products indicated in the search results are in stock at particular merchants.
  • The probabilities may be utilized to, e.g., enrich or otherwise provide more information related to the search results.
  • For example, the probability that a brand of shoe is in stock at a particular merchant may be provided in, e.g., a top left corner of an icon representing content related to the brand of shoe sold by the merchant.
  • a visual representation is generated based on the search results and the identified focal point.
  • the visual representation may include points representing particular search results (e.g., a particular web page or a portion thereof).
  • the visual representation may include a graphical user interface for receiving user interactions respective of the search results.
  • S 240 includes organizing the search results respective of the focal point.
  • the search results may be organized graphically.
  • the organization may include assigning the search results to points on, e.g., a sphere or other geometrical organization of the search results.
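One possible way to assign search results to evenly spaced points on a sphere, not necessarily the approach of the disclosed embodiments, is the Fibonacci lattice:

```python
import math

def sphere_points(n):
    """Distribute n points roughly evenly over a unit sphere using the
    Fibonacci lattice; returns a list of (x, y, z) tuples, one per
    search result to be placed."""
    points = []
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    for i in range(n):
        y = 1.0 - 2.0 * i / max(n - 1, 1)      # height from 1 down to -1
        r = math.sqrt(max(0.0, 1.0 - y * y))   # ring radius at height y
        theta = golden * i
        points.append((math.cos(theta) * r, y, math.sin(theta) * r))
    return points
```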
  • the generated visual representation may include a browsing environment and a plurality of dynamic content elements.
  • the browsing environment may include, but is not limited to, images, videos, or other visual illustrations of an environment (e.g., a real location, a virtual location, background colors and patterns, etc.) in which content is to be browsed.
  • the browsing environment may be static, or may be dynamically updated in real-time as a user browses the content.
  • the dynamic content elements are updated in real-time as a user browses the content.
  • the dynamic content elements may be further reorganized in real-time based on the user browsing.
  • the generated visual representation may include a static storefront image and a plurality of dynamic elements updated in real-time to show product listings with current inventories, where the dynamic elements are organized such that lower inventory content items are closer to the focal point than higher inventory items.
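The inventory-based ordering in the example above, where lower-inventory items sit closest to the focal point, reduces to a simple sort; the element fields are illustrative:

```python
def order_by_inventory(elements):
    """Order dynamic content elements so that lower-inventory items come
    first, i.e., closest to the focal point. Each element is a dict
    with hypothetical 'name' and 'inventory' fields."""
    return sorted(elements, key=lambda e: e["inventory"])
```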
  • the visual representation of the organized search results is caused to be displayed to the user.
  • S 250 may include sending the visual representation to a user device (e.g., the user device 120 , FIG. 1 ).
  • the visual representation may be displayed as a three-dimensional (3D) representation of the search results.
  • At S 260, at least one user input is received with respect to the displayed visual representation.
  • the user inputs may include, but are not limited to, key strokes, mouse clicks, mouse movements, user gestures on a touch screen (e.g., tapping, swiping), movement of a user device (as detected by, e.g., an accelerometer, a global positioning system (GPS), a gyroscope, etc.), voice commands, and the like.
  • At S 270, based on the received user inputs, the search results are refined.
  • S 270 may include determining a current focal point based on the user inputs and the visual representation.
  • S 270 includes updating the visual representation using a website of the new focal point as a seed for the search.
  • At S 280, it is determined whether additional user inputs have been received and, if so, execution continues with S 260; otherwise, execution terminates.
  • S 280 includes determining if the additional user inputs include a new or modified query and, if so, execution may continue with S 210 .
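The overall S 210–S 280 control flow can be sketched as a loop; the `search` and `refine` callables stand in for the retrieval and refinement steps and are placeholders, not the disclosed implementations:

```python
def browse_loop(query, inputs, search, refine):
    """Skeleton of the refinement flow: run an initial search, then fold
    each user input into a refined result set, returning the sequence
    of result sets shown to the user."""
    results = search(query)                    # initial query and retrieval
    history = [results]
    for user_input in inputs:                  # S 260: receive user input
        results = refine(results, user_input)  # S 270: refine the results
        history.append(results)
    return history                             # S 280: no more inputs, done
```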
  • FIG. 4 is an example flowchart 400 illustrating a method for displaying content for intuitive browsing according to an embodiment.
  • the method may be performed by a visual representation generator (e.g., the visual representation generator 130 ).
  • the visual representation generator may query, crawl, or otherwise obtain content from content retrieval systems (e.g., the content retrieval systems 140 ).
  • the method may be performed by a user device (e.g., the user device 120 ) based on locally available content, retrieved content, or both.
  • a request to display content is received.
  • the request may include, but is not limited to, a query, content to be displayed, an identifier of content to be displayed, an identifier of at least one source of content, a combination thereof, and so on.
  • a focal point is determined.
  • the focal point may be determined based on, but not limited to, the query, the content to be displayed, the designated content sources, information about a user (e.g., a user profile, a browsing history, demographic information, etc.), combinations thereof, and so on.
  • the focal point may be, but is not limited to, content, a source of content, a category or other grouping of content, a representation thereof, and so on.
  • the focal point may be related to a website to be used as a seed for a search with respect to the query (e.g., a web crawl).
  • the identified content may be related to the focal point.
  • the identified content may be stored locally, or may be retrieved from at least one data source (e.g., the content retrieval systems 140 or the inventory management system 150, FIG. 1).
  • the identified content may include, but is not limited to, content from the same or similar web sources, content that is contextually related to content of the focal point (e.g., belonging to a same category or otherwise sharing common or related information), and so on. Similarity of content may be based on matching content items or sources against one another. In an embodiment, content and sources may be considered similar if they match above a predefined threshold.
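A minimal sketch of threshold-based similarity matching follows; the Jaccard-style metric and the default threshold value are assumptions of this illustration, not prescribed by the disclosure:

```python
def are_similar(tokens_a, tokens_b, threshold=0.5):
    """Return True if two token sets match above a predefined threshold,
    using Jaccard similarity (intersection over union) as one possible
    matching score."""
    a, b = set(tokens_a), set(tokens_b)
    if not a or not b:
        return False
    return len(a & b) / len(a | b) > threshold
```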
  • the identified content is organized with respect to the focal point.
  • the organization may be based on one or more axes.
  • the axes may represent different facets of the determined content such as, but not limited to, creator (e.g., an artist, author, director, editor, publisher, etc.), geographic location, category of subject matter, type of content, genre, time of publication, and any other point of similarity among content.
  • content related to a focal point of a particular movie may be organized based on one or more axes such as, but not limited to, movies featuring the same actor(s), movies by the same director, movies by the same publisher, movies within the same genre, movies originating in the same country, movies from a particular decade or year, other media related to the movie (e.g., a television show tying into the movie), merchandise or other products related to the movie, and so on.
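Grouping content under navigation axes, as in the movie example above, might be sketched as follows; the axis predicates and field names are hypothetical:

```python
def group_by_axes(items, axes):
    """Group content items under each navigation axis. `axes` maps an
    axis name to a predicate over an item; a single item may appear on
    several axes (e.g., same director and same genre)."""
    return {
        name: [item for item in items if predicate(item)]
        for name, predicate in axes.items()
    }
```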
  • a visual representation of the organized content is generated.
  • the visual representation may include points, each point representing at least a portion of the identified content.
  • the visual representation may include a graphical user interface for receiving user interactions respective of the search results.
  • the visual representation may be spherical, may allow a user to change axes by gesturing horizontally, and may allow a user to change content within an axis by gesturing vertically.
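The gesture-to-navigation mapping described above can be sketched as a small state update; the wrap-around behavior at the ends of an axis is an assumption of this sketch:

```python
def navigate(state, gesture):
    """Map gestures onto a spherical layout: horizontal gestures switch
    the active axis, vertical gestures move along the current axis.
    `state` holds 'axis', 'index', 'num_axes', and 'axis_length'."""
    axis, index = state["axis"], state["index"]
    if gesture == "left":
        axis = (axis - 1) % state["num_axes"]
    elif gesture == "right":
        axis = (axis + 1) % state["num_axes"]
    elif gesture == "up":
        index = (index - 1) % state["axis_length"]
    elif gesture == "down":
        index = (index + 1) % state["axis_length"]
    return {**state, "axis": axis, "index": index}
```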
  • the visual representation may be three-dimensional.
  • the generated visual representation is caused to be displayed on a user device.
  • S 460 includes sending the visual representation to the user device.
  • the visual representation may be updated when, e.g., an amount of content that has been displayed is above a predefined threshold, a number of user interactions is above a predefined threshold, a refined or new query is received, and the like.
  • a request to display content is received.
  • the request includes the query “the thinker.”
  • a focal point including an image of the sculpture “The Thinker” by Auguste Rodin is determined.
  • Content related to “The Thinker,” including various sculptural and artistic works, is determined using the website in which the image is shown as a seed for a search.
  • the content is organized spherically based on axes including other famous sculptures, sculptures by French sculptors, art by Auguste Rodin, works featuring “The Thinker,” sculptures created in the late 1800s, and versions of “The Thinker” made from different materials.
  • a visual representation of the spherically organized content is generated and caused to be displayed on a user device.
  • FIG. 5 is an example schematic diagram of the visual representation generator 130 according to an embodiment.
  • the visual representation generator 130 includes a processing circuitry 510 coupled to a memory 515 , a storage 520 , and a network interface 530 .
  • the components of the visual representation generator 130 may be communicatively connected via a bus 540 .
  • the processing circuitry 510 may be realized as one or more hardware logic components and circuits.
  • illustrative types of hardware logic components include field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), Application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), and the like, or any other hardware logic components that can perform calculations or other manipulations of information.
  • the memory 515 may be volatile (e.g., RAM, etc.), non-volatile (e.g., ROM, flash memory, etc.), or a combination thereof.
  • computer readable instructions to implement one or more embodiments disclosed herein may be stored in the storage 520 .
  • the memory 515 is configured to store software.
  • Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code).
  • the instructions, when executed by the one or more processors, cause the processing circuitry 510 to perform the various processes described herein. Specifically, the instructions, when executed, cause the processing circuitry 510 to perform generation of visual representations of content for intuitive browsing, as discussed hereinabove.
  • the storage 520 may be magnetic storage, optical storage, and the like, and may be realized, for example, as flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs), or any other medium which can be used to store the desired information.
  • the network interface 530 allows the visual representation generator 130 to communicate with the user device 120, the content retrieval systems 140, the inventory management system 150, or a combination thereof, for the purpose of, for example, obtaining requests, obtaining content, obtaining probabilities, querying, sending visual representations, and the like.
  • search results may be organized as points on different sides of a cube such that user interactions may cause the displayed cube side to change, thereby changing the search results being displayed.
  • the content may be organized based on the subject matter of the content. For example, the content may be organized differently for queries for restaurants than for requests to display documents on a user device.
  • any reference to an element herein using a designation such as “first,” “second,” and so forth does not generally limit the quantity or order of those elements. Rather, these designations are generally used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. Also, unless stated otherwise, a set of elements comprises one or more elements.
  • the phrase “at least one of” followed by a listing of items means that any of the listed items can be utilized individually, or any combination of two or more of the listed items can be utilized. For example, if a system is described as including “at least one of A, B, and C,” the system can include A alone; B alone; C alone; A and B in combination; B and C in combination; A and C in combination; or A, B, and C in combination.
  • the various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof.
  • the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices.
  • the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
  • the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces.
  • the computer platform may also include an operating system and microinstruction code.
  • a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.

Abstract

A method and system for intuitive content browsing. The method includes determining, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item; identifying, based on the request and the determined initial focal point, the content to be browsed; generating, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and sending, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. application Ser. No. 15/404,827, filed Jan. 12, 2017, now allowed, which claims the benefit of U.S. Provisional Application No. 62/279,125 filed on Jan. 15, 2016. This application is also a continuation-in-part of U.S. patent application Ser. No. 14/940,396 filed on Nov. 13, 2015, now U.S. Pat. No. 10,460,286. The Ser. No. 14/940,396 application claims the benefit of U.S. Provisional Application No. 62/147,771 filed on Apr. 15, 2015, and of U.S. Provisional Application No. 62/079,804 filed on Nov. 14, 2014. The contents of the above-referenced applications are hereby incorporated by reference.
  • TECHNICAL FIELD
  • The present disclosure relates generally to displaying content, and more particularly to intuitively organizing content to allow for interactions in two-dimensional and three-dimensional space.
  • BACKGROUND
  • As the Internet becomes increasingly prevalent in modern society, the amount of information available to the average user has increased astronomically. Consequently, systems for retrieving and browsing web-based content are used much more frequently. Such systems often accept a user query for content to browse in the form of, for example, text, multimedia content (e.g., images, videos, audio), and so on. The widespread availability of new content has led to further development of display mechanisms allowing users to consume and interact with data. For example, touch screens allowing users to intuitively interact with displayed content and three-dimensional virtual reality displays allowing users to view content in an immersive environment have become available to the average person. These evolving display mechanisms provide varied and improved user experiences as time goes on.
  • To obtain relevant content, user queries must contain sufficient information to identify relevant material. Although algorithms used in tandem with, e.g., search engines, have been developed to provide a much greater likelihood of finding relevant content for even basic queries, users may nevertheless face challenges in accurately finding particularly relevant content due to the arcane rules utilized in accepting user queries. For example, users can find more accurate content using logical operators that may not be known or understood by the average person.
  • The challenges in utilizing existing content retrieval systems cause further difficulties for users seeking to refine queries. Refinement may include submitting a refined query to the retrieval system and receiving new results respective of the refined query, thereby effectively submitting a new search. As a result, refinement wastes time and computing resources due to the submission of additional queries, even for users that are familiar with the idiosyncrasies of web-based content searches. Further, inexperienced users may be frustrated by the inability to properly refine their searches to obtain the desired results.
  • As an example, a user living in New York City seeking to purchase wine may submit a query of “wine.” Upon viewing search results related to wine generally, the user may wish to refine his search to focus on red wine and, as a result, enter a refined query of “red wine.” The user may wish to further refine his search to focus on red wine originating from France and, thus, enter a refined query of “red wine France.” The results of this search may include content related to red wine being sold in France and/or to red wine originating from France being sold anywhere in the world. The user may further need to refine his search to focus on French red wine that can be bought locally and, therefore, enter a further refined query of “red wine France in New York.” Each of the refinements requires the user to manually enter a refined query and submit the query for a new search, thereby wasting the user's time and unnecessarily using computing resources.
  • Existing solutions for refining content queries often involve offering predetermined potential refined queries and directing users to content upon user interactions with the potential refined queries. The potential refined queries may be based on, e.g., queries submitted by previous users. However, previous user queries do not always accurately capture a user's current needs, particularly when the user is not aware of his or her needs. For example, a user seeking to buy chocolate may initially enter the query “chocolate” before ultimately deciding that she would like to buy dark chocolate made in Zurich, Switzerland. Potential refinements offered based on the initial query may include “dark chocolate,” “white chocolate,” “milk chocolate,” and “Swiss chocolate,” none of which entirely captures the user's ultimate needs. Thus, the user may need to perform several refinements and resend queries multiple times before arriving at the desired content.
  • Moreover, for users seeking to purchase products, it may be difficult to determine in which stores the products are physically available. To find such information, a user may need to visit e-commerce websites of stores until he or she finds a store that lists the item as “in stock.” Nevertheless, such listings may be outdated or otherwise inaccurate, thereby causing user frustration and a need to conduct further searches.
  • Further, when viewing search results or otherwise viewing content, the user is typically presented with display options such as organizing content in various organizational schemes (e.g., list form, grid form, and so on) and/or based on different ordering schemes (e.g., by date or time, relevancy to a query, alphabetical order, and so on). For example, a user viewing content related to a particular book may wish to view content related to books by the same author, about the same subject, from the same genre or literary era, and so on. To view this additional content, users may be able to reorganize displayed content by, e.g., changing the organizational scheme, submitting refinement queries, changing the ordering scheme, and so on.
  • To this end, it is desirable to provide content in ways that are intuitive and therefore easily digestible by the average user. Intuitive content organization and navigation therefore serve an important role in improving the overall user experience by increasing user engagement and allowing for more efficient retrieval and/or viewing of content. Such improvements to user experience may be particularly important in the search engine context, as improved user experience may result in increased use of search engine services and/or purchases of products.
  • Additionally, some solutions exist for allowing users to browse stores remotely via remote-controlled on-premises cameras (i.e., cameras disposed in the store) or preexisting (e.g., static) photos or images of the premises and inventory. These solutions allow users to intuitively engage with content as they would in a physical store, but are typically limited to displaying store contents as they were previously (based on previously captured images) or as they currently are (e.g., via live video feed or images otherwise captured in real-time), but not based on potential future configurations (e.g., based on predicted inventories and other future changes).
  • It would therefore be advantageous to provide a solution that would overcome the deficiencies of the prior art.
  • SUMMARY
  • A summary of several example embodiments of the disclosure follows. This summary is provided for the convenience of the reader to provide a basic understanding of such embodiments and does not wholly define the breadth of the disclosure. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. For convenience, the term “some embodiments” may be used herein to refer to a single embodiment or multiple embodiments of the disclosure.
  • Certain embodiments disclosed herein include a method for intuitive content browsing. The method includes determining, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item; identifying, based on the request and the determined initial focal point, the content to be browsed; generating, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and sending, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
  • Certain embodiments disclosed herein also include a non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process, the process comprising: determining, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item; identifying, based on the request and the determined initial focal point, the content to be browsed; generating, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and sending, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
  • Certain embodiments disclosed herein also include a system for intuitive content browsing. The system comprises: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: determine, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item; identify, based on the request and the determined initial focal point, the content to be browsed; generate, based on the identified content and the focal point, a visual representation of the identified content, wherein the generated visual representation includes the identified content organized with respect to the initial focal point; and send, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing and other objects, features, and advantages of the disclosed embodiments will be apparent from the following detailed description taken in conjunction with the accompanying drawings.
  • FIG. 1 is a network diagram utilized to describe the various disclosed embodiments.
  • FIG. 2 is a flowchart illustrating a method for organizing content according to an embodiment.
  • FIG. 3 is a screenshot illustrating a spherical organization of content.
  • FIG. 4 is a flowchart illustrating a method for displaying content that may be intuitively browsed according to an embodiment.
  • FIG. 5 is a schematic diagram of a visual representation generator according to an embodiment.
  • DETAILED DESCRIPTION
  • It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts through several views.
  • FIG. 1 shows an example network diagram 100 utilized to describe the various disclosed embodiments. The network diagram 100 includes a network 110, a user device 120, a visual representation generator 130, a plurality of content retrieval systems 140-1 through 140-n (hereinafter referred to individually as a content retrieval system 140 and collectively as content retrieval systems 140, merely for simplicity purposes) and an inventory management system 150.
  • The network 110 may be the Internet, the world-wide-web (WWW), a local area network (LAN), a wide area network (WAN), a metro area network (MAN), and other networks configured to enable communication between the elements of the network diagram 100. The user device 120 may be a personal computer (PC), a personal digital assistant (PDA), a mobile phone, a smart phone, a tablet computer, a wearable computer device, an e-reader, a game console, or any other device equipped with browsing capabilities. The content retrieval systems 140 may include, but are not limited to, search engines or other sources of content from which content may be retrieved. Alternatively or collectively, the content retrieval systems 140 may include or be communicatively connected to one or more data sources which can be queried or crawled for content.
  • The user device 120 may further include a browsing agent 125 installed therein. The browsing agent 125 may be, but is not limited to, a mobile application, a virtual application, a web application, a native application, and the like. In certain configurations, the browsing agent 125 can be realized as an add-on or plug-in for a web browser. In other configurations, the browsing agent 125 is a web browser. The user device 120 may receive a user query or otherwise receive a request to display content (e.g., via the browsing agent 125) and to send, to the visual representation generator 130, a request to generate a visual representation of the content to be browsed. The request to generate a visual representation may include, but is not limited to, the user query, the content to be browsed, an identifier of the content to be browsed, or a combination thereof. The user query may include a text query or a voice query. The user query may be submitted through a user gesture, e.g., tapping on a certain image or key word.
  • In an embodiment, the visual representation generator 130 is configured to receive the request to generate a visual representation and to determine an initial focal point based on the request. The initial focal point includes content to be initially displayed prominently (e.g., before navigation) to the user. Non-limiting examples of prominently displaying the initial focal point include displaying the initial focal point as larger than other content; displaying the initial focal point in a center, top, or other portion of a display; displaying the focal point with at least one prominence marker (e.g., a letter, a number, a symbol, a graphic, a color, etc.); displaying the focal point with a higher brightness or resolution than other content; displaying the focal point using one or more animations (e.g., displaying the focal point as moving up and down); a combination thereof; and the like. For example, if the content to be browsed includes images of a dog, a most recent image of a dog may be selected as the initial focal point such that, when the visual representation is initially displayed to the user, the image of the dog is the largest and centermost image appearing on a display (not shown) of the user device 120.
  • In a further embodiment, determining an initial focal point based on the request may further include pre-processing the user query. Pre-processing the user query may include, but is not limited to, correcting typos, enriching the query with information related to the user (e.g., a browsing history, a current location, etc.), and so on. In another embodiment, the initial focal point may include a web site utilized as a seed for a search. As an example, the initial focal point for a search based on the user query “buy shoes” may be a web site featuring a large variety of shoes.
  • In an embodiment, the visual representation generator 130 is configured to retrieve content from the retrieval systems 140 based on a focal point. For the first time content is retrieved for the request, the initial focal point is used. The retrieval systems 140 may search using the user query with respect to the focal point. Alternatively or collectively, the visual representation generator 130 may crawl through one or more of the retrieval systems 140 for the content. The retrieved content may include, but is not limited to, search results, content to be displayed, or both.
  • In another embodiment, the visual representation generator 130 may be configured to query an inventory management system 150 and to receive, from the inventory management system 150, a probability that one or more vendors have a sufficient inventory of a product based on the user query and the focal point. An example implementation of an inventory management system for returning probabilities that vendors have sufficient inventories of products is described further in the above-referenced U.S. patent application Ser. No. 14/940,396 filed on Nov. 13, 2015, assigned to the common assignee, which is hereby incorporated by reference for all that it contains.
  • In an embodiment, the visual representation generator 130 is further configured to organize the retrieved content. The content may be organized around the focal point. Accordingly, the content, when initially organized, may be organized around the initial focal point. The visual representation generator 130 may be configured to receive user interactions respective of the organized content and to determine a current focal point based on the user interactions. In an embodiment, the content may be organized as points on a sphere and displayed to the user. The sphere may be displayed in a three-dimensional (3D) plane (i.e., using a stereoscopic display) or in a two-dimensional (2D) plane (i.e., such that the sphere appears to be 3D merely via optical illusion).
  • In another embodiment, the visual representation generator 130 is configured to generate a visual representation including a browsing environment and a plurality of dynamic content elements. Each dynamic content element includes content or representations of content to be browsed. In an embodiment, the focal point includes one of the dynamic content elements.
  • The browsing environment may include, but is not limited to, images, videos, or other visual illustrations of an environment in which the content is to be browsed. The visual illustrations may be two-dimensional, three-dimensional, and the like. The browsing environment may include visual illustrations of a real location (e.g., a store or other physical location), of a non-real location (e.g., a cartoon library, a virtual store, an imaginary combination of real stores, or any other virtual or fictional location), or any other visual illustrations (for example, a visual illustration showing text, objects, people, animals, solid colors, patterns, combinations thereof, and the like).
  • In an embodiment, the browsing environment is rendered at the beginning of browsing and may be static (i.e., remaining the same as content is browsed) or dynamic (i.e., re-rendered or otherwise updated as content is browsed). As a non-limiting example, the browsing environment may include images showing a physical store in which products represented by the dynamic content elements are sold, where the images are updated to show different areas in the physical store as the user “moves” through the store by navigating among dynamic elements representing products sold in the store. As another non-limiting example, the browsing environment may include a static image illustrating a library, where the static image remains constant as the user navigates among dynamic elements representing books in the library.
  • The dynamic content elements may be updated and rendered in real-time as the user browses. Updating the dynamic content elements may include, but is not limited to, changing the content to be displayed, updating information related to each content (e.g., updating a value for a number of items in stock when the content represents a purchasable product), dynamically organizing the dynamic content elements, a combination thereof, and the like. Dynamic organization of the dynamic content elements may be based on one or more dynamic organization rules. Such dynamic organization rules may be based on, but are not limited to, amount of inventory in stock for store products (e.g., a current inventory or projected future inventory), popularity of content (e.g., content which is trending may be organized closer to the focal point), relevance to user interests (e.g., content that is more relevant to current user interests may be organized closer to the focal point), combinations thereof, and the like.
  • As a non-limiting example for a visual representation including a browsing environment and a plurality of dynamic content elements, the visual representation may represent an online store, with the browsing environment showing a storefront image and the dynamic content elements including product listings. The product listings may include, but are not limited to, text (e.g., product descriptions, product information such as inventory and price, etc.), images (e.g., images of the product), videos (e.g., videos demonstrating the product), sound (e.g., sound including customer reviews), combinations thereof, and the like. The dynamic content elements are rendered in real-time as the user browses. The rendered dynamic content elements include information related to the product listings, where the information includes at least a current or projected inventory. The rendered dynamic content elements are organized in real-time based on dynamic organization rules. The dynamic organization rules are based on inventory such that lower inventory items or items having minimal inventory (e.g., having an amount of inventory below a predetermined threshold) are organized closer to the focal point. Such organization may be useful for, e.g., incentivizing users to buy lower stock products. As the user browses, inventory information for the product listings is updated and the dynamic content elements are organized accordingly.
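A minimal sketch of the inventory-based dynamic organization rule in this example: listings are ordered so the lowest-inventory items land nearest the focal point (index 0), and items below a predetermined threshold are flagged as having minimal inventory. The function and field names are illustrative assumptions.

```python
def organize_by_inventory(listings, minimal_threshold=10):
    """Order product listings so lower-inventory items sit closer to the
    focal point (index 0), flagging items whose inventory falls below a
    predetermined threshold."""
    ordered = sorted(listings, key=lambda p: p["inventory"])
    for p in ordered:
        p["minimal_inventory"] = p["inventory"] < minimal_threshold
    return ordered

listings = [
    {"name": "sneaker", "inventory": 42},
    {"name": "boot", "inventory": 3},
    {"name": "sandal", "inventory": 17},
]
# The scarce "boot" ends up nearest the focal point.
ordered = organize_by_inventory(listings)
```

Re-running this as inventory values update while the user browses would reproduce the real-time reorganization described above.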
  • FIG. 3 shows an example screenshot illustrating a content sphere 300 which is a spherical visual representation of content. The content sphere 300 is organized around a focal point 310. A plurality of images 320 act as points on the sphere representing content. If the focal point changes, the sphere may be rotated to show a different icon as the focal point. A horizontal axis 330 and a vertical axis 340 visually represent potential directions for changing the focal point to view additional content. For example, the user may gesture horizontally to view content oriented along the horizontal axis 330 and may gesture vertically to view content oriented along the vertical axis 340. It should be noted that the images 320 may include icons, textual information, widgets, or any other representation or presentation of the displayed content.
  • The axes 330 and 340 may be adaptably changed as the user selects new content to be the focal point 310 (e.g., by providing user gestures with respect to one of the images 320), as the user rotates the content sphere 300 (e.g., by providing user gestures with respect to the axes 330 and 340), or both. That is, the visual representation generator 130 is configured to predict (through a learning process) the user's path as the user browses via the presented content sphere 300. As an example, if the focal point 310 includes content related to President Bill Clinton, then rotating the content sphere 300 down along the vertical axis 340 may return results related to the US in the 1990s. On the other hand, rotating the content sphere 300 to the right along the horizontal axis 330 may return results related to the Democratic party.
  • Further, as the content of the focal point 310 changes, the related content available via the axes 330 and 340 may change. Thus, axes of interest may be initially predefined, and then adaptably modified. For example, when the focal point 310 is changed to content related to President Obama by rotating the content sphere 300 along the horizontal axis 330, the content available by rotating along the vertical axis 340 may become content related to the US in the 2000s.
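The adaptive axes can be illustrated with a small lookup table standing in for the learned prediction of the user's path: for each focal point, each axis leads to different related content, and changing the focal point changes what the axes return. The `AXIS_MAP` table and its entries are hypothetical placeholders for what a learning process would produce.

```python
# Hypothetical axis map: for each focal point, the related content
# reached along each axis. In the described system this association
# would be predicted by a learning process, not hard-coded.
AXIS_MAP = {
    "Bill Clinton": {"horizontal": "Democratic party",
                     "vertical": "US in the 1990s"},
    "Barack Obama": {"horizontal": "Democratic party",
                     "vertical": "US in the 2000s"},
}

def rotate(focal, axis, axis_map=AXIS_MAP):
    """Return the related content reached by rotating the sphere along
    the given axis from the current focal point."""
    return axis_map[focal][axis]
```

Moving the focal point from "Bill Clinton" to "Barack Obama" changes what the vertical axis yields, mirroring the example in the text.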
  • It should be appreciated that, by adaptably changing the content along the axes of interest (e.g., the axes 330 and 340), the user may be provided with an endless browsing experience. Further, each content item can be presented in different virtual settings. As an example, a lipstick may be presented in the context of cosmetics and then again as part of a Halloween costume, thus providing a continually new browsing experience.
  • In a further embodiment, the display may include a graphical user interface for receiving user interactions with respect to the spherically organized search results. As a non-limiting example, the search results for the query “alcoholic drinks” may be displayed as points on the content sphere 300 based on an initial focal point of a website featuring beer. Results from the initial focal point may be initially displayed as the focal point 310. When a user rotates (e.g., swipes or moves) a mouse icon horizontally across the sphere 300, a new focal point 310 may be determined as a web site for another type of alcoholic beverage (e.g., wine, vodka, and so on). When a user swipes or moves a mouse icon vertically across the content sphere 300, a new focal point may be determined as a web site for a particular brand of beer.
  • It should be noted that the example content sphere 300 shown in FIG. 3 is merely an example of a visual representation and is not limiting on the disclosed embodiments. In particular, the content sphere 300 is shown as having two axes merely for illustrative purposes. Different numbers of axes may be equally utilized, any of which may be, but are not necessarily, horizontal or vertical. For example, diagonal axes may be utilized in addition to or instead of horizontal and vertical axes. Further, the axes may be three-dimensional without departing from the scope of the disclosure. For example, the content sphere 300 may be navigated by moving closer or farther away from a center point of the sphere.
  • It should further be noted that the content may be shown in a shape other than a spherical shape without departing from the scope of the disclosure. It should also be noted that the content sphere 300 is shown as having a solid black background surrounding the images 320 merely as an example illustration. Other browsing environments (e.g., other colors, patterns, static or dynamic images, videos, combinations thereof, etc.) may be equally utilized without departing from the scope of the disclosed environments. For example, the content may be rendered as three-dimensional representations of shelves and aisles of a real store, where the view of the shelves and aisles is updated as the user browses through images of products in the store.
  • FIG. 2 is an example flowchart 200 illustrating a method for refining search results according to an embodiment. In an embodiment, the method may be performed by a visual representation generator (e.g., the visual representation generator 130, FIG. 1). The method may be utilized to adaptively update visual representations of search results (e.g., search results displayed as the content sphere 300, FIG. 3).
  • At S210, a query by a user of a user device is received. The query may be received in the form of text, multimedia content, and so on. The query may be a textual query or a voice query.
  • At optional S215, the query may be preprocessed by, e.g., correcting typos, enriching the query with user information, and so on.
  • At S220, a focal point is determined based on the received query. The focal point may include a web site to be utilized as a seed for a search based on the query. The determination may include identifying one or more web sites related to the user query. The seed web site may be selected from among the identified web sites based on, e.g., relative validity of the sites (e.g., numbers of legitimate clicks or presence of malware). For example, a user query for “cheese” may result in identification of web sites related to grocery stores, restaurants, and so on. The seed website may be utilized as the initial focal point for the query such that content related to the seed website is displayed as the focal point prior to user interactions with respect to the visual representation.
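One plausible reading of the seed selection at S220, sketched under stated assumptions: among the web sites identified for the query, pick the one with the best relative validity, here scored as legitimate clicks zeroed out by malware presence. The scoring rule and field names are illustrative, not taken from the patent.

```python
def select_seed_site(candidates):
    """Pick the seed web site for the initial focal point: among sites
    identified for the query, prefer the highest relative validity.
    Here validity is (hypothetically) legitimate clicks, discarded
    entirely when malware is present."""
    def validity(site):
        return 0 if site["has_malware"] else site["legit_clicks"]
    return max(candidates, key=validity)

# Candidate sites identified for the example query "cheese".
sites = [
    {"url": "grocery.example", "legit_clicks": 900, "has_malware": False},
    {"url": "spam.example", "legit_clicks": 5000, "has_malware": True},
    {"url": "restaurant.example", "legit_clicks": 400, "has_malware": False},
]
seed = select_seed_site(sites)
```

The selected site would then serve as the initial focal point and as the seed passed to the retrieval systems at S230.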
  • At S230, at least one retrieval system is queried with respect to the received user query to retrieve search results. The focal point is further sent to the retrieval systems as a seed for the search. The retrieval systems may include, but are not limited to, search engines, inventory management systems, and other systems capable of retrieving content respective of queries.
  • In an embodiment, S230 may further include querying at least one inventory management system for probabilities that products indicated in the search results are in stock at particular merchants. The probabilities may be utilized to, e.g., enrich or otherwise provide more information related to the search results. As a non-limiting example, the probability that a brand of shoe is in stock at a particular merchant may be provided in, e.g., a top left corner of an icon representing content related to the brand of shoe sold by the merchant.
  • At S240, a visual representation is generated based on the search results and the identified focal point. The visual representation may include points representing particular search results (e.g., a particular web page or a portion thereof). The visual representation may include a graphical user interface for receiving user interactions respective of the search results. In an embodiment, S240 includes organizing the search results respective of the focal point. In a further embodiment, the search results may be organized graphically. In yet a further embodiment, the organization may include assigning the search results to points on, e.g., a sphere or other geometrical organization of the search results.
  • In another embodiment, the generated visual representation may include a browsing environment and a plurality of dynamic content elements. The browsing environment may include, but is not limited to, images, videos, or other visual illustrations of an environment (e.g., a real location, a virtual location, background colors and patterns, etc.) in which content is to be browsed. The browsing environment may be static, or may be dynamically updated in real-time as a user browses the content. The dynamic content elements are updated in real-time as a user browses the content. The dynamic content elements may be further reorganized in real-time based on the user browsing. As a non-limiting example, the generated visual representation may include a static storefront image and a plurality of dynamic elements updated in real-time to show product listings with current inventories, where the dynamic elements are organized such that lower inventory content items are closer to the focal point than higher inventory items.
  • At S250, the visual representation of the organized search results is caused to be displayed to the user. In an embodiment, S250 may include sending the visual representation to a user device (e.g., the user device 120, FIG. 1). In an embodiment, the visual representation may be displayed as a three-dimensional (3D) representation of the search results.
  • At S260, at least one user input is received with respect to the displayed visual representation. The user inputs may include, but are not limited to, key strokes, mouse clicks, mouse movements, user gestures on a touch screen (e.g., tapping, swiping), movement of a user device (as detected by, e.g., an accelerometer, a global positioning system (GPS), a gyroscope, etc.), voice commands, and the like.
  • At S270, based on the received user inputs, the search results are refined. In an embodiment, S270 may include determining a current focal point based on the user inputs and the visual representation. In a further embodiment, S270 includes updating the visual representation using a website of the new focal point as a seed for the search.
  • At S280, it is determined whether additional user inputs have been received and, if so, execution continues with S260; otherwise, execution terminates. In another embodiment, S280 includes determining if the additional user inputs include a new or modified query and, if so, execution may continue with S210.
  • FIG. 4 is an example flowchart 400 illustrating a method for displaying content for intuitive browsing according to an embodiment. In an embodiment, the method may be performed by a visual representation generator (e.g., the visual representation generator 130). The visual representation generator may query, crawl, or otherwise obtain content from content retrieval systems (e.g., the content retrieval systems 140). In another embodiment, the method may be performed by a user device (e.g., the user device 120) based on locally available content, retrieved content, or both.
  • At S410, a request to display content is received. The request may include, but is not limited to, a query, content to be displayed, an identifier of content to be displayed, an identifier of at least one source of content, a combination thereof, and so on.
  • At S420, based on the request, a focal point is determined. The focal point may be determined based on, but not limited to, the query, the content to be displayed, the designated content sources, information about a user (e.g., a user profile, a browsing history, demographic information, etc.), combinations thereof, and so on. The focal point may be, but is not limited to, content, a source of content, a category or other grouping of content, a representation thereof, and so on. In an embodiment, the focal point may be related to a website to be used as a seed for a search with respect to the query (e.g., a web crawl).
  • At S430, content to be browsed with respect to the focal point is identified. The identified content may be related to the focal point. The identified content may be stored locally, or may be retrieved from at least one data source (e.g., the content retrieval systems 140 or the inventory management system 150, FIG. 1). As examples, the identified content may include, but is not limited to, content from the same or similar web sources, content that is contextually related to content of the focal point (e.g., belonging to a same category or otherwise sharing common or related information), and so on. Similarity of content may be based on matching the content. In an embodiment, content and sources may be similar if they match above a predefined threshold.
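The predefined-threshold similarity test at S430 might look like the following sketch, where Jaccard overlap of tags stands in for whatever matching the retrieval systems actually use; the tag representation, score, and threshold value are all assumptions for illustration.

```python
def is_similar(content_a, content_b, threshold=0.5):
    """Treat two content items as similar if their match score exceeds
    a predefined threshold. Jaccard overlap of tag sets is used here as
    a stand-in match score."""
    tags_a, tags_b = set(content_a["tags"]), set(content_b["tags"])
    if not tags_a or not tags_b:
        return False
    score = len(tags_a & tags_b) / len(tags_a | tags_b)
    return score > threshold

# Two sculptures sharing most tags match; unrelated items do not.
similar = is_similar(
    {"tags": ["sculpture", "rodin", "bronze"]},
    {"tags": ["sculpture", "rodin", "marble"]},
    threshold=0.4,
)
```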
  • At S440, the identified content is organized with respect to the focal point. In an embodiment, the organization may be based on one or more axes. The axes may represent different facets of the determined content such as, but not limited to, creator (e.g., an artist, author, director, editor, publisher, etc.), geographic location, category of subject matter, type of content, genre, time of publication, and any other point of similarity among content. As a non-limiting example, content related to a focal point of a particular movie may be organized based on one or more axes such as, but not limited to, movies featuring the same actor(s), movies by the same director, movies by the same publisher, movies within the same genre, movies originating in the same country, movies from a particular decade or year, other media related to the movie (e.g., a television show tying into the movie), merchandise or other products related to the movie, and so on.
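The facet-based organization at S440 can be sketched as grouping the identified content by the facets it shares with the focal item, one axis per facet. The facet names and data layout are illustrative assumptions.

```python
def build_axes(focal_item, catalog, facets=("director", "genre", "decade")):
    """Group catalog items into one axis per facet, where each axis
    holds the items sharing that facet's value with the focal item."""
    axes = {}
    for facet in facets:
        axes[facet] = [
            item for item in catalog
            if item is not focal_item
            and item.get(facet) == focal_item.get(facet)
        ]
    return axes

focal = {"title": "Movie A", "director": "D1", "genre": "sci-fi",
         "decade": "1990s"}
catalog = [
    focal,
    {"title": "Movie B", "director": "D1", "genre": "drama",
     "decade": "2000s"},
    {"title": "Movie C", "director": "D2", "genre": "sci-fi",
     "decade": "1990s"},
]
axes = build_axes(focal, catalog)
```

Each axis then maps naturally onto a browsing direction (e.g., horizontal for one facet, vertical for another) in the visual representation generated at S450.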
  • At S450, a visual representation of the organized content is generated. The visual representation may include points, each point representing at least a portion of the identified content. The visual representation may include a graphical user interface for receiving user interactions respective of the search results. In an embodiment, the visual representation may be spherical, may allow a user to change axes by gesturing horizontally, and may allow a user to change content within an axis by gesturing vertically. In another embodiment, the visual representation may be three-dimensional.
  • At S460, the generated visual representation is caused to be displayed on a user device. In an embodiment, S460 includes sending the visual representation to the user device. In an embodiment, the visual representation may be updated when, e.g., an amount of content that has been displayed is above a predefined threshold, a number of user interactions is above a predefined threshold, a refined or new query is received, and the like.
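The update conditions at S460 reduce to a simple predicate; the threshold values below are placeholders for the predefined thresholds mentioned above, and the parameter names are hypothetical.

```python
def should_update(displayed_count, interaction_count, new_query,
                  display_threshold=50, interaction_threshold=20):
    """Decide whether the visual representation should be re-generated:
    when enough content has been displayed, enough user interactions
    have occurred, or a refined or new query has been received."""
    return bool(displayed_count > display_threshold
                or interaction_count > interaction_threshold
                or new_query)
```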
  • As a non-limiting example, a request to display content is received. The request includes the query “the thinker.” A focal point including an image of the sculpture “The Thinker” by Auguste Rodin is determined. Content related to “The Thinker,” including various sculptural and artistic works, is determined using the website in which the image is shown as a seed for a search. The content is organized spherically based on axes including other famous sculptures, sculptures by French sculptors, art by Auguste Rodin, works featuring “The Thinker,” sculptures created in the late 1800s, and versions of “The Thinker” made from different materials. A visual representation of the spherically organized content is generated and caused to be displayed on a user device.
  • FIG. 5 is an example schematic diagram of the visual representation generator 130 according to an embodiment. The visual representation generator 130 includes a processing circuitry 510 coupled to a memory 515, a storage 520, and a network interface 530. In another embodiment, the components of the visual representation generator 130 may be communicatively connected via a bus 540.
  • The processing circuitry 510 may be realized as one or more hardware logic components and circuits. For example, and without limitation, illustrative types of hardware logic components that can be used include field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), and the like, or any other hardware logic components that can perform calculations or other manipulations of information.
  • The memory 515 may be volatile (e.g., RAM, etc.), non-volatile (e.g., ROM, flash memory, etc.), or a combination thereof. In one configuration, computer readable instructions to implement one or more embodiments disclosed herein may be stored in the storage 520.
  • In another embodiment, the memory 515 is configured to store software. Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the one or more processors, cause the processing circuitry 510 to perform the various processes described herein. Specifically, the instructions, when executed, cause the processing circuitry 510 to perform generation of visual representations of content for intuitive browsing, as discussed hereinabove.
  • The storage 520 may be magnetic storage, optical storage, and the like, and may be realized, for example, as flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs), or any other medium which can be used to store the desired information.
  • The network interface 530 allows the visual representation generator 130 to communicate with the user device 120, the content retrieval systems 140, the inventory management system 150, or a combination thereof, for the purpose of, for example, obtaining requests, obtaining content, obtaining probabilities, querying, sending visual representations, combinations thereof, and the like.
  • It should be understood that the embodiments described herein are not limited to the specific architecture illustrated in FIG. 5, and other architectures may be equally used without departing from the scope of the disclosed embodiments.
  • It should be noted that various embodiments described herein are discussed with respect to content from search results merely for simplicity purposes and without limitation on the disclosed embodiments. Other content, including content preexisting on a device, content available via a storage or data source, and the like, may be displayed and browsed without departing from the scope of the disclosure.
  • It should be further noted that the embodiments described herein are discussed with respect to a spherical representation of content merely for simplicity purposes and without limitation on the disclosed embodiments. Other geometrical representations may be utilized with points on the geometric figures representing search results without departing from the scope of the disclosure. For example, the search results may be organized as points on different sides of a cube such that user interactions may cause the displayed cube side to change, thereby changing the search results being displayed.
  • It should also be noted that various examples for changing content are provided merely for the sake of illustration and without limitation on the disclosed embodiments. Content may be organized in other ways without departing from the scope of the disclosure.
  • It should be further noted that the content may be organized based on the subject matter of the content. For example, the content may be organized differently for queries for restaurants than for requests to display documents on a user device.
  • It should be understood that any reference to an element herein using a designation such as “first,” “second,” and so forth does not generally limit the quantity or order of those elements. Rather, these designations are generally used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. Also, unless stated otherwise, a set of elements comprises one or more elements.
  • As used herein, the phrase “at least one of” followed by a listing of items means that any of the listed items can be utilized individually, or any combination of two or more of the listed items can be utilized. For example, if a system is described as including “at least one of A, B, and C,” the system can include A alone; B alone; C alone; A and B in combination; B and C in combination; A and C in combination; or A, B, and C in combination.
  • The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.
  • All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosed embodiment and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosed embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

Claims (19)

What is claimed is:
1. A method for intuitive content browsing, comprising:
determining, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item to be initially displayed prominently;
identifying, based on the request and the determined initial focal point, the content to be browsed;
generating, based on the identified content and the initial focal point, a visual representation of the identified content, wherein the generated visual representation organizes the identified content as at least a plurality of dynamic elements, one of which is the content item to be initially displayed prominently, using a base browsing environment wherein the generated visual representation arranges the identified content with respect to the initial focal point and wherein the initial focal point is located on the base browsing environment and the item to be initially displayed prominently is represented in a manner of prominence, wherein generating the visual representation further includes predicting a user path as the user browses the visual representation; and
sending, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
2. The method of claim 1, wherein the generated visual representation includes at least one axis, wherein the generated visual representation is browsed along each of the at least one axis, and wherein the browsing of the displayed visual representation includes selecting, based on at least one user input, a new focal point, wherein the displayed visual representation is updated with respect to the new focal point.
3. The method of claim 1, wherein the request includes a query, wherein the content to be browsed includes at least search results.
4. The method of claim 3, wherein the determined focal point includes content of a web site, wherein identifying the content to be browsed further comprises:
searching, based on the query, in at least one content retrieval system for the content to be browsed, wherein the web site is utilized as a seed for the search;
querying at least one inventory management system for probabilities that products indicated in the search results are available from at least one merchant;
determining, based on at least one user input, a new focal point, the new focal point including content of a web site; and
updating the visual representation based on the new focal point.
5. The method of claim 1, wherein the dynamic content elements are dynamically organized based on dynamic organization rules.
6. The method of claim 5, wherein the dynamic organization rules are based on inventory such that items are organized closer to the focal point based on ascending order of their inventory.
7. The method of claim 1, wherein the base browsing environment shows a storefront image and the dynamic content elements include product listings.
8. The method of claim 1, wherein the base browsing environment includes at least one visual illustration, each dynamic content element includes one of the content to be browsed, and wherein the at least one dynamic content element is updated in real-time as the identified content is browsed, wherein the browsing environment is updated in real-time as the identified content is browsed.
9. The method of claim 1, wherein the dynamic content elements are updated and rendered in real-time as the identified content is browsed.
10. A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process, the process comprising:
determining, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item to be initially displayed prominently;
identifying, based on the request and the determined initial focal point, the content to be browsed;
generating, based on the identified content and the initial focal point, a visual representation of the identified content, wherein the generated visual representation organizes the identified content as at least a plurality of dynamic elements, one of which is the content item to be initially displayed prominently, using a base browsing environment wherein the generated visual representation arranges the identified content with respect to the initial focal point and wherein the initial focal point is located on the base browsing environment and the item to be initially displayed prominently is represented in a manner of prominence, wherein generating the visual representation further includes predicting a user path as the user browses the visual representation; and
sending, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
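The determine-identify-generate-send process recited in claim 10 can be sketched end to end. This is a minimal illustration, not the patented implementation: the term-overlap relevance score, the catalog of title strings, and the distance-by-rank layout are all assumptions of this example.

```python
def score(item, query):
    """Naive relevance: how many query terms appear in the item title."""
    return sum(term in item.lower() for term in query.lower().split())

def build_visual_representation(query, catalog):
    """Sketch of the claimed process: determine a focal point, identify
    the content to browse, and arrange it around the focal point."""
    # 1) Determine the initial focal point (the item displayed prominently).
    focal = max(catalog, key=lambda item: score(item, query))
    # 2) Identify the content to be browsed.
    content = [item for item in catalog if score(item, query) > 0]
    # 3) Arrange content by distance from the focal point: the focal
    #    point at distance 0, less relevant items farther away.
    ordered = sorted(content, key=lambda item: -score(item, query))
    # 4) The result would then be sent to the user device for display.
    return {"focal_point": focal,
            "elements": {item: d for d, item in enumerate(ordered)}}

rep = build_visual_representation(
    "red running shoes",
    ["red running shoes", "blue running shoes", "red scarf", "toaster"],
)
```

The claims also recite predicting a user path through the representation; that step is omitted here, since the claims do not specify the prediction model.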
11. A system for intuitive content browsing, comprising:
a processing circuitry; and
a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to:
determine, based on a request to browse content, an initial focal point for a visual representation of the content, wherein the initial focal point represents a content item to be initially displayed prominently;
identify, based on the request and the determined initial focal point, the content to be browsed;
generate, based on the identified content and the initial focal point, a visual representation of the identified content, wherein the generated visual representation organizes the identified content as at least a plurality of dynamic elements, one of which is the content item to be initially displayed prominently, using a base browsing environment wherein the generated visual representation arranges the identified content with respect to the initial focal point and wherein the initial focal point is located on the base browsing environment and the item to be initially displayed prominently is represented in a manner of prominence, wherein generating the visual representation further includes predicting a user path as the user browses the visual representation; and
send, to a user device, the generated visual representation for display, wherein the identified content is browsed via the displayed visual representation with respect to the focal point.
12. The system of claim 11, wherein the generated visual representation includes at least one axis, wherein the generated visual representation is browsed along each of the at least one axis, and wherein the system is further configured to:
select, based on at least one user input, a new focal point, wherein the displayed visual representation is updated with respect to the new focal point.
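Claim 12's behavior — browsing along an axis and re-centering the view on a new focal point — can be sketched on a single axis. The class name, the clamping policy at the ends of the content, and the distance-based view are assumptions of this example, not claim language.

```python
class BrowsableAxis:
    """Minimal sketch: content laid out along one axis, browsed by
    moving the focal point; the view is re-derived around it."""

    def __init__(self, items, focal_index=0):
        self.items = list(items)
        self.focal_index = focal_index

    def browse(self, steps):
        # A user input moves the focal point along the axis,
        # clamped to the extent of the content.
        self.focal_index = max(0, min(len(self.items) - 1,
                                      self.focal_index + steps))
        return self.view()

    def view(self):
        # Each element's prominence is its distance from the focal point.
        return {item: abs(i - self.focal_index)
                for i, item in enumerate(self.items)}

axis = BrowsableAxis(["a", "b", "c", "d"])
axis.browse(2)
```

A multi-axis version would keep one such index per axis and combine the per-axis distances into a single layout.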
13. The system of claim 11, wherein the request includes a query, wherein the content to be browsed includes at least search results.
14. The system of claim 13, wherein the determined focal point includes content of a web site, wherein the system is further configured to:
search, based on the query, in at least one content retrieval system for the content to be browsed, wherein the web site is utilized as a seed for the search;
query at least one inventory management system for probabilities that products indicated in the search results are available from at least one merchant;
determine, based on at least one user input, a new focal point, the new focal point including content of a web site; and
update the visual representation based on the new focal point.
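Claims 13 and 14 describe seeding a search with the focal-point web site's content and querying an inventory management system for availability probabilities. A toy sketch, assuming a set-overlap retrieval stub and hypothetical catalog and availability data:

```python
def search_with_seed(query, seed_terms, retrieve):
    """Expand the user's query with terms drawn from the focal-point
    web site's content, then hand it to a content retrieval system."""
    expanded = query.split() + list(seed_terms)
    return retrieve(expanded)

def rank_by_availability(results, availability):
    """Order results by the probability that each product is available
    from a merchant, as returned by an inventory management system."""
    return sorted(results, key=lambda p: availability.get(p, 0.0),
                  reverse=True)

# Toy retrieval system: returns catalog entries sharing any query term.
catalog = {"espresso machine": {"espresso", "machine", "coffee"},
           "coffee grinder": {"coffee", "grinder"},
           "tea kettle": {"tea", "kettle"}}

def retrieve(terms):
    return [name for name, tokens in catalog.items()
            if tokens & set(terms)]

results = search_with_seed("grinder", ["coffee"], retrieve)
ranked = rank_by_availability(results, {"coffee grinder": 0.9,
                                        "espresso machine": 0.4})
```

Here the seed term "coffee" (standing in for content of the focal-point web site) pulls in the espresso machine, and availability ranking puts the in-stock grinder first.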
15. The system of claim 11, wherein the dynamic content elements are dynamically organized based on dynamic organization rules.
16. The system of claim 15, wherein the dynamic organization rules are based on inventory such that items are organized closer to the focal point based on ascending order of their inventory.
17. The system of claim 11, wherein the base browsing environment shows a storefront image and the dynamic content elements include product listings, and wherein the base browsing environment includes at least one visual illustration, each dynamic content element includes one of the content to be browsed, and wherein the at least one dynamic content element is updated in real-time as the identified content is browsed.
18. The system of claim 11, wherein the browsing environment is updated in real-time as the identified content is browsed.
19. The system of claim 11, wherein the dynamic content elements are updated and rendered in real-time as the identified content is browsed.
US17/082,424 2014-11-14 2020-10-28 System and method for intuitive content browsing Abandoned US20210042809A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/082,424 US20210042809A1 (en) 2014-11-14 2020-10-28 System and method for intuitive content browsing

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201462079804P 2014-11-14 2014-11-14
US201562147771P 2015-04-15 2015-04-15
US14/940,396 US10460286B2 (en) 2014-11-14 2015-11-13 Inventory management system and method thereof
US201662279125P 2016-01-15 2016-01-15
US15/404,827 US10825069B2 (en) 2014-11-14 2017-01-12 System and method for intuitive content browsing
US17/082,424 US20210042809A1 (en) 2014-11-14 2020-10-28 System and method for intuitive content browsing

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/404,827 Continuation US10825069B2 (en) 2014-11-14 2017-01-12 System and method for intuitive content browsing

Publications (1)

Publication Number Publication Date
US20210042809A1 (en) 2021-02-11

Family

ID=58634940

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/404,827 Active 2036-04-14 US10825069B2 (en) 2014-11-14 2017-01-12 System and method for intuitive content browsing
US17/082,424 Abandoned US20210042809A1 (en) 2014-11-14 2020-10-28 System and method for intuitive content browsing

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/404,827 Active 2036-04-14 US10825069B2 (en) 2014-11-14 2017-01-12 System and method for intuitive content browsing

Country Status (1)

Country Link
US (2) US10825069B2 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108495051B (en) 2013-08-09 2021-07-06 热成像雷达有限责任公司 Method for analyzing thermal image data using multiple virtual devices and method for associating depth values with image pixels
US20160274778A1 (en) * 2015-03-20 2016-09-22 Alexander Li User interface exposing content at a direct focal point
US10574886B2 (en) 2017-11-02 2020-02-25 Thermal Imaging Radar, LLC Generating panoramic video for video management systems
US10459622B1 (en) * 2017-11-02 2019-10-29 Gopro, Inc. Systems and methods for interacting with video content
US11430037B2 (en) * 2019-09-11 2022-08-30 Ebay Korea Co. Ltd. Real time item listing modification
US11601605B2 (en) 2019-11-22 2023-03-07 Thermal Imaging Radar, LLC Thermal imaging camera device

Citations (2)

Publication number Priority date Publication date Assignee Title
US20090259632A1 (en) * 2008-04-15 2009-10-15 Yahoo! Inc. System and method for trail identification with search results
US8352869B2 (en) * 2009-02-24 2013-01-08 Ebay Inc. Systems and methods for providing multi-directional visual browsing on an electronic device

Family Cites Families (129)

Publication number Priority date Publication date Assignee Title
US7010577B1 (en) 1998-09-11 2006-03-07 L. V. Partners, L.P. Method of controlling a computer using an embedded unique code in the content of DVD media
US6515656B1 (en) 1999-04-14 2003-02-04 Verizon Laboratories Inc. Synchronized spatial-temporal browsing of images for assessment of content
WO2000065509A2 (en) 1999-04-22 2000-11-02 Qode.Com, Inc. System and method for providing electronic information upon receipt of a scanned bar code
US6326988B1 (en) 1999-06-08 2001-12-04 Monkey Media, Inc. Method, apparatus and article of manufacture for displaying content in a multi-dimensional topic space
US7181438B1 (en) * 1999-07-21 2007-02-20 Alberti Anemometer, Llc Database access system
US6868525B1 (en) * 2000-02-01 2005-03-15 Alberti Anemometer Llc Computer graphic display visualization system and method
US8234218B2 (en) 2000-10-10 2012-07-31 AddnClick, Inc Method of inserting/overlaying markers, data packets and objects relative to viewable content and enabling live social networking, N-dimensional virtual environments and/or other value derivable from the content
US20020059117A1 (en) 2000-11-10 2002-05-16 Aranet, Inc Methods of generating revenue using streaming video with associated links
US20020143669A1 (en) 2001-01-22 2002-10-03 Scheer Robert H. Method for managing inventory within an integrated supply chain
US7085736B2 (en) 2001-02-27 2006-08-01 Alexa Internet Rules-based identification of items represented on web pages
US20040030741A1 (en) * 2001-04-02 2004-02-12 Wolton Richard Ernest Method and apparatus for search, visual navigation, analysis and retrieval of information from networks with remote notification and content delivery
US7027999B2 (en) 2001-04-06 2006-04-11 Fms, Inc. Method of and apparatus for forecasting item availability
US7406659B2 (en) 2001-11-26 2008-07-29 Microsoft Corporation Smart links
US8938478B2 (en) 2001-12-28 2015-01-20 Google Inc. Dynamic presentation of web content
US20120272134A1 (en) 2002-02-06 2012-10-25 Chad Steelberg Apparatus, system and method for a media enhancement widget
US8195597B2 (en) * 2002-02-07 2012-06-05 Joseph Carrabis System and method for obtaining subtextual information regarding an interaction between an individual and a programmable device
US7136875B2 (en) 2002-09-24 2006-11-14 Google, Inc. Serving advertisements based on content
JP2004110548A (en) * 2002-09-19 2004-04-08 Fuji Xerox Co Ltd Usability evaluation support apparatus and method
US7596513B2 (en) 2003-10-31 2009-09-29 Intuit Inc. Internet enhanced local shopping system and method
US20050210008A1 (en) * 2004-03-18 2005-09-22 Bao Tran Systems and methods for analyzing documents over a network
US20070300142A1 (en) 2005-04-01 2007-12-27 King Martin T Contextual dynamic advertising based upon captured rendered text
US7886024B2 (en) 2004-07-01 2011-02-08 Microsoft Corporation Sharing media objects in a network
US20060100924A1 (en) 2004-11-05 2006-05-11 Apple Computer, Inc. Digital media file with embedded sales/marketing information
US7975019B1 (en) 2005-07-15 2011-07-05 Amazon Technologies, Inc. Dynamic supplementation of rendered web pages with content supplied by a separate source
US8700586B2 (en) 2005-10-31 2014-04-15 Yahoo! Inc. Clickable map interface
US7693912B2 (en) * 2005-10-31 2010-04-06 Yahoo! Inc. Methods for navigating collections of information in varying levels of detail
US20120203661A1 (en) 2011-02-04 2012-08-09 Life Technologies Corporation E-commerce systems and methods
EP2011017A4 (en) 2006-03-30 2010-07-07 Stanford Res Inst Int Method and apparatus for annotating media streams
US20070244831A1 (en) 2006-04-18 2007-10-18 Kuo James Shaw-Han System and method for secure online transaction
US7747749B1 (en) * 2006-05-05 2010-06-29 Google Inc. Systems and methods of efficiently preloading documents to client devices
US20080209308A1 (en) 2006-05-22 2008-08-28 Nicholas Andrew Brackney Content reference linking purchase model
US8626594B2 (en) 2006-06-15 2014-01-07 Google Inc. Ecommerce-enabled advertising
US7685192B1 (en) * 2006-06-30 2010-03-23 Amazon Technologies, Inc. Method and system for displaying interest space user communities
US20090300528A1 (en) * 2006-09-29 2009-12-03 Stambaugh Thomas M Browser event tracking for distributed web-based processing, spatial organization and display of information
US8756510B2 (en) * 2006-10-17 2014-06-17 Cooliris, Inc. Method and system for displaying photos, videos, RSS and other media content in full-screen immersive view and grid-view using a browser feature
US10387891B2 (en) * 2006-10-17 2019-08-20 Oath Inc. Method and system for selecting and presenting web advertisements in a full-screen cinematic view
US9071730B2 (en) 2007-04-14 2015-06-30 Viap Limited Product information display and purchasing
US20080294974A1 (en) * 2007-05-24 2008-11-27 Nokia Corporation Webpage history view
US8694363B2 (en) 2007-06-20 2014-04-08 Ebay Inc. Dynamically creating a context based advertisement
US8744118B2 (en) 2007-08-03 2014-06-03 At&T Intellectual Property I, L.P. Methods, systems, and products for indexing scenes in digital media
US9569806B2 (en) 2007-09-04 2017-02-14 Apple Inc. Dynamic presentation of location-specific information
US20170116640A1 (en) 2007-10-15 2017-04-27 Adam Sah Communicating Context Information Among Portable Program Modules
US8010901B1 (en) * 2007-10-26 2011-08-30 Sesh, Inc. System and method for automated synchronized co-browsing
US20090271289A1 (en) 2007-11-20 2009-10-29 Theresa Klinger System and method for propagating endorsements
US20090148045A1 (en) 2007-12-07 2009-06-11 Microsoft Corporation Applying image-based contextual advertisements to images
US8244590B2 (en) 2007-12-21 2012-08-14 Glyde Corporation Software system for decentralizing ecommerce with single page buy
US8504945B2 (en) * 2008-02-01 2013-08-06 Gabriel Jakobson Method and system for associating content with map zoom function
US20090248548A1 (en) 2008-03-26 2009-10-01 30 Second Software, Inc. Method for location based inventory lookup
US20090271250A1 (en) 2008-04-25 2009-10-29 Doapp, Inc. Method and system for providing an in-site sales widget
WO2009137368A2 (en) 2008-05-03 2009-11-12 Mobile Media Now, Inc. Method and system for generation and playback of supplemented videos
US20090281899A1 (en) 2008-05-07 2009-11-12 Peigen Jiang Method for placing advertisement on web pages
WO2010022000A2 (en) 2008-08-18 2010-02-25 Ipharro Media Gmbh Supplemental information delivery
US8170927B2 (en) 2008-09-30 2012-05-01 Carefusion 303, Inc. Adaptive critical low level management
WO2010045143A2 (en) * 2008-10-16 2010-04-22 The University Of Utah Research Foundation Automated development of data processing results
US9336528B2 (en) 2008-12-16 2016-05-10 Jeffrey Beaton System and method for overlay advertising and purchasing utilizing on-line video or streaming media
US8291348B2 (en) * 2008-12-31 2012-10-16 Hewlett-Packard Development Company, L.P. Computing device and method for selecting display regions responsive to non-discrete directional input actions and intelligent content analysis
US20100185525A1 (en) 2009-01-16 2010-07-22 John Hazen Controlling presentation of purchasing information based on item availability
US8539359B2 (en) * 2009-02-11 2013-09-17 Jeffrey A. Rapaport Social network driven indexing system for instantly clustering people with concurrent focus on same topic into on-topic chat rooms and/or for generating on-topic search results tailored to user preferences regarding topic
WO2010096624A2 (en) 2009-02-19 2010-08-26 Scvngr, Inc. Location-based advertising method and system
US20150294377A1 (en) * 2009-05-30 2015-10-15 Edmond K. Chow Trust network effect
US9245263B2 (en) 2009-06-23 2016-01-26 Jwl Ip Holdings Llc Systems and methods for scripted content delivery
US20100332496A1 (en) 2009-06-26 2010-12-30 Microsoft Corporation Implicit product placement leveraging identified user ambitions
US20110004517A1 (en) 2009-06-26 2011-01-06 The Jungle U LLC Dialogue advertising
US8289340B2 (en) * 2009-07-30 2012-10-16 Eastman Kodak Company Method of making an artistic digital template for image display
US8274523B2 (en) * 2009-07-30 2012-09-25 Eastman Kodak Company Processing digital templates for image display
US20110035275A1 (en) 2009-08-05 2011-02-10 Scott Frankel Method and apparatus for embedded graphical advertising
US20110099088A1 (en) 2009-10-28 2011-04-28 Adgregate Markets, Inc. Various methods and apparatuses for completing a transaction order through an order proxy system
US8977639B2 (en) * 2009-12-02 2015-03-10 Google Inc. Actionable search results for visual queries
US9405772B2 (en) * 2009-12-02 2016-08-02 Google Inc. Actionable search results for street view visual queries
US8606329B2 (en) * 2009-12-18 2013-12-10 Nokia Corporation Method and apparatus for rendering web pages utilizing external rendering rules
US9760885B1 (en) 2010-03-23 2017-09-12 Amazon Technologies, Inc. Hierarchical device relationships for geolocation-based transactions
US8830225B1 (en) * 2010-03-25 2014-09-09 Amazon Technologies, Inc. Three-dimensional interface for content location
US8631029B1 (en) * 2010-03-26 2014-01-14 A9.Com, Inc. Evolutionary content determination and management
US20110298830A1 (en) * 2010-06-07 2011-12-08 Palm, Inc. Single Point Input Variable Zoom
US20120206647A1 (en) 2010-07-01 2012-08-16 Digital Zoom, LLC System and method for tagging streamed video with tags based on position coordinates and time and selectively adding and using content associated with tags
US8392261B2 (en) 2010-07-15 2013-03-05 Google Inc. Local shopping and inventory
US8682739B1 (en) 2010-07-30 2014-03-25 Amazon Technologies, Inc. Identifying objects in video
US20120095881A1 (en) 2010-10-15 2012-04-19 Glyde Corporation Atomizing e-commerce
WO2012082924A2 (en) 2010-12-14 2012-06-21 Soorena Salari Apparatus, system, and method for a micro commerce ad
US20120167146A1 (en) 2010-12-28 2012-06-28 White Square Media Llc Method and apparatus for providing or utilizing interactive video with tagged objects
US9111306B2 (en) 2011-01-10 2015-08-18 Fujifilm North America Corporation System and method for providing products from multiple websites
TWI546700B (en) * 2011-01-13 2016-08-21 宏達國際電子股份有限公司 Portable electronic device, and control method and computer program product of the same
US8401911B1 (en) 2011-03-22 2013-03-19 Google Inc. Display of popular, in-stock products of a merchant
US20120311453A1 (en) * 2011-05-31 2012-12-06 Fanhattan Llc System and method for browsing and accessing media content
US20130138510A1 (en) 2011-11-28 2013-05-30 Navionics Spa Systems and methods for facilitating electronic sales transactions through published content
US9123064B2 (en) 2012-02-29 2015-09-01 American Express Travel Related Services Company, Inc. Online transactions using an embedded storefront widget
US11010795B2 (en) 2012-03-30 2021-05-18 Rewardstyle, Inc. System and method for affiliate link generation
US9430784B1 (en) 2012-03-30 2016-08-30 David Frederick System for E-commerce accessibility
US9117238B2 (en) 2012-04-18 2015-08-25 Ebay Inc. Method, system, and medium for generating a mobile interface indicating traffic level for local merchants
US9311669B2 (en) 2012-04-26 2016-04-12 Ribbon Payments, Inc. System and method for selling a product through an adaptable purchase interface
US9436961B2 (en) 2012-04-26 2016-09-06 Ribbon Payments, Inc. System and method for selling a product through an adaptable purchase interface
US9619833B2 (en) 2012-06-04 2017-04-11 Paypal, Inc. Wish list transactions through smart TV
US10176633B2 (en) * 2012-06-05 2019-01-08 Apple Inc. Integrated mapping and navigation application
US20130332262A1 (en) 2012-06-11 2013-12-12 John Hunt Internet marketing-advertising reporting (iMar) system
US9262535B2 (en) * 2012-06-19 2016-02-16 Bublup Technologies, Inc. Systems and methods for semantic overlay for a searchable space
US20140046789A1 (en) 2012-08-09 2014-02-13 Ebay, Inc. Fast Transactions
US20140058850A1 (en) 2012-08-22 2014-02-27 Christopher S. Reckert Direct commercial offering and purchase method from social network sites
WO2014047455A2 (en) 2012-09-21 2014-03-27 Fitzpatrick Heather Marie System and method for providing electronic commerce data
US20140129919A1 (en) 2012-11-07 2014-05-08 Eric R. Benson Method for Embedding Captured Content from one location to a host location
US9285958B1 (en) * 2012-11-19 2016-03-15 Amazon Technologies, Inc. Browser interface for accessing predictive content
US9710433B2 (en) * 2012-11-30 2017-07-18 Yahoo! Inc. Dynamic content mapping
US20140180832A1 (en) 2012-12-05 2014-06-26 Mulu, Inc. Methods and systems for populating content on a web page
US8963865B2 (en) * 2012-12-14 2015-02-24 Barnesandnoble.Com Llc Touch sensitive device with concentration mode
US20150178786A1 (en) 2012-12-25 2015-06-25 Catharina A.J. Claessens Pictollage: Image-Based Contextual Advertising Through Programmatically Composed Collages
US20150302011A1 (en) 2012-12-26 2015-10-22 Rakuten, Inc. Image management device, image generation program, image management method, and image management program
US10514541B2 (en) * 2012-12-27 2019-12-24 Microsoft Technology Licensing, Llc Display update time reduction for a near-eye display
US11182821B2 (en) 2013-07-26 2021-11-23 Exxcelon Corporation System and method of saving deal offers to be applied at a point-of-sale (POS) of a retail store
US9933864B1 (en) * 2013-08-29 2018-04-03 Amazon Technologies, Inc. Steady content display
US9978078B2 (en) 2013-09-25 2018-05-22 Retailmenot, Inc. Tracking offers across multiple channels
US20180225885A1 (en) * 2013-10-01 2018-08-09 Aaron Scott Dishno Zone-based three-dimensional (3d) browsing
MX358612B (en) * 2013-10-01 2018-08-27 Dishno Aaron Three-dimensional (3d) browsing.
US9319745B2 (en) 2013-10-16 2016-04-19 VidRetal, Inc. Media player system for product placements
US20150120453A1 (en) 2013-10-25 2015-04-30 Palo Alto Research Center Incorporated Real-time local offer targeting and delivery system
US20150227972A1 (en) 2014-02-07 2015-08-13 Huang (Joy) Tang System and methods for identifying and promoting tagged commercial products
US20160012142A1 (en) * 2014-03-26 2016-01-14 Charles J. W. Reed System and Method for Parallel Content Delivery and Focus Engine for Content Results
US20150278783A1 (en) 2014-03-31 2015-10-01 Comr.Se Corp. Native e-commerce transactables for familiar user environments
US20150302424A1 (en) 2014-04-18 2015-10-22 Mavatar Technologies, Inc. Systems and methods for providing content provider-driven shopping
US10638194B2 (en) 2014-05-06 2020-04-28 At&T Intellectual Property I, L.P. Embedding interactive objects into a video session
US20150350729A1 (en) 2014-05-28 2015-12-03 United Video Properties, Inc. Systems and methods for providing recommendations based on pause point in the media asset
US9965796B2 (en) 2014-06-26 2018-05-08 Paypal, Inc. Social media buttons with payment capability
US9799143B2 (en) * 2014-08-15 2017-10-24 Daqri, Llc Spatial data visualization
US10007333B2 (en) * 2014-11-07 2018-06-26 Eye Labs, LLC High resolution perception of content in a wide field of view of a head-mounted display
US10460286B2 (en) 2014-11-14 2019-10-29 The Joan and Irwin Jacobs Technion-Cornell Institute Inventory management system and method thereof
US10102565B2 (en) 2014-11-21 2018-10-16 Paypal, Inc. System and method for content integrated product purchasing
US9760924B2 (en) 2015-02-02 2017-09-12 Kibo Software, Inc. Automatic search of local inventory
US20160267569A1 (en) 2015-03-10 2016-09-15 Google Inc. Providing Search Results Comprising Purchase Links For Products Associated With The Search Results
CN106339398B (en) * 2015-07-09 2019-10-18 广州市动景计算机科技有限公司 A kind of pre-reading method of Webpage, device and intelligent terminal
US10491711B2 (en) * 2015-09-10 2019-11-26 EEVO, Inc. Adaptive streaming of virtual reality data
US20170200141A1 (en) 2016-01-13 2017-07-13 Glossaread Technologies Private Limited Methods and systems for managing electronic contents in electronic publication


Also Published As

Publication number Publication date
US20170124622A1 (en) 2017-05-04
US10825069B2 (en) 2020-11-03

Similar Documents

Publication Publication Date Title
US20210042809A1 (en) System and method for intuitive content browsing
AU2022271460B2 (en) Matching content to a spatial 3D environment
US20200183966A1 (en) Creating Real-Time Association Interaction Throughout Digital Media
US11907240B2 (en) Method and system for presenting a search result in a search result card
US10540055B2 (en) Generating interactive content items based on content displayed on a computing device
US9619829B2 (en) Evolutionary content determination and management
US20200311126A1 (en) Methods to present search keywords for image-based queries
TWI573042B (en) Gesture-based tagging to view related content
KR101820256B1 (en) Visual search and three-dimensional results
US20140195890A1 (en) Browser interface for accessing supplemental content associated with content pages
US20150242525A1 (en) System for referring to and/or embedding posts within other post and posts within any part of another post
US9830388B2 (en) Modular search object framework
US9460167B2 (en) Transition from first search results environment to second search results environment
US20150317354A1 (en) Intent based search results associated with a modular search object framework
US20140195337A1 (en) Browser interface for accessing supplemental content associated with content pages
US20150317319A1 (en) Enhanced search results associated with a modular search object framework
US9619519B1 (en) Determining user interest from non-explicit cues
JP2008146492A (en) Information providing device, information providing method, and computer program
WO2014110048A1 (en) Browser interface for accessing supple-mental content associated with content pages
US10437902B1 (en) Extracting product references from unstructured text
US10628848B2 (en) Entity sponsorship within a modular search object framework
WO2017123746A1 (en) System and method for intuitive content browsing
KR101701952B1 (en) Method, apparatus and computer program for displaying serch information

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION