WO2015168580A1 - Computerized systems and methods for providing information related to displayed content - Google Patents

Computerized systems and methods for providing information related to displayed content

Info

Publication number
WO2015168580A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
interface
content
displayed
token
Prior art date
Application number
PCT/US2015/028827
Other languages
French (fr)
Inventor
Richard Carl Gossweiler III
Krishna Bharat
Kenneth Wayne Dauber
Erwin Sing Tam
Original Assignee
Google Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google Inc. filed Critical Google Inc.
Publication of WO2015168580A1 publication Critical patent/WO2015168580A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements

Definitions

  • the present disclosure relates to computerized systems and methods for providing information related to displayed content and, more generally, to interactive display interfaces and information retrieval technologies.
  • the present disclosure relates to computerized systems and methods for providing a user with information related to content displayed in a particular region of a display interface.
  • a large display device may present an interface that allows one user to interact with content in a first region of the interface, while other users interact with content in other regions of the interface.
  • Content providers may find it advantageous to provide such displays in public spaces.
  • a large display could be placed at an airport, present a variety of advertisements, and allow users to interact with the advertisements while they wait for their airplanes to board.
  • Such a large display could provide interactive advertisements to many users at a lower cost than providing the advertisements on a number of single advertisement displays.
  • Embodiments of the present disclosure relate to computerized systems and methods for providing information related to displayed content.
  • embodiments of the present disclosure relate to solutions for providing a user with information related to content that is displayed in a particular region of a display interface.
  • Computerized systems and methods are disclosed that allow a user to identify a particular interactive region of a display interface that displayed content in which the user is interested. Once the interactive region has been identified, the computerized systems and methods may provide information related to the content that was displayed in that region to a device of the user.
  • In accordance with some embodiments, there is provided a computer-implemented method for providing information related to displayed content. The method comprises operations performed by one or more processors.
  • the operations include identifying tokens displayed in an interface at a first device, the tokens including a first token associated with a first user interacting with a first interactive region of the interface, and a second token associated with a second user interacting with a second interactive region of the interface.
  • the operations also include causing representations of the first and second tokens to be presented at a second device.
  • the operations further include receiving an indication that the representation of the first token has been selected at the second device.
  • the operations still further include causing information related to content presented in the first interactive region to be sent to the second device based on the received indication.
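
Taken together, the four operations above amount to a small lookup flow: map each interactive region to its token and its content, show token representations on the user's device, and resolve a selection back to the region's content. The Python sketch below is illustrative only; the data structures, names, and token values are assumptions rather than the claimed implementation.

```python
# Illustrative sketch only; names and data structures are assumptions,
# not the claimed implementation.

# Hypothetical state describing what the display device (first device) shows.
tokens_by_region = {"region_a": "red-circle", "region_b": "blue-square"}
content_by_region = {
    "region_a": ["interactive ad for product X", "product X purchase page URL"],
    "region_b": ["chess game", "game rules URL"],
}

def representations_for_second_device():
    """Build representations of the displayed tokens for the user's device."""
    return [{"token": token, "label": token.replace("-", " ")}
            for token in tokens_by_region.values()]

def handle_selection(selected_token):
    """Given a selected representation, return information related to the
    content displayed in the region associated with that token."""
    for region, token in tokens_by_region.items():
        if token == selected_token:
            return content_by_region[region]
    return []

# Example: the user selects the representation of the first token.
print(representations_for_second_device())
print(handle_selection("red-circle"))
```
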
  • a computer-implemented system for providing information related to displayed content.
  • the system comprises a memory device that stores instructions and one or more processors that execute the instructions.
  • the one or more processors execute the instructions to identify tokens displayed in an interface at a first device, the tokens including a first token associated with a first user interacting with a first interactive region of the interface, and a second token associated with a second user interacting with a second interactive region of the interface.
  • the one or more processors also execute the instructions to cause representations of the first and second tokens to be presented at a second device.
  • the one or more processors further execute the instructions to receive an indication that the representation of the first token has been selected on the second device.
  • the one or more processors still further execute the instructions to cause information related to content presented in the first interactive region to be sent to the second device based on the received indication.
  • a non-transitory computer-readable medium that stores instructions.
  • the instructions, when executed by one or more processors, cause the one or more processors to perform a method.
  • the method comprises identifying tokens displayed in an interface at a first device, the tokens including a first token associated with a first user interacting with a first interactive region of the interface, and a second token associated with a second user interacting with a second interactive region of the interface.
  • the method also comprises causing representations of the first and second tokens to be presented on a second device.
  • the method further comprises receiving an indication that the representation of the first token has been selected on the second device.
  • the method still further comprises causing information related to content presented in the first interactive region to be sent to the second device based on the received indication.
  • FIG. 1A illustrates an example environment for presenting content, consistent with embodiments of the present disclosure.
  • FIG. 1B illustrates an example user interface, consistent with embodiments of the present disclosure.
  • FIG. 2 illustrates an example computing environment for implementing embodiments and features consistent with the present disclosure.
  • FIG. 3 illustrates a flowchart of an example method for providing information related to displayed content, consistent with embodiments of the present disclosure.
  • FIG. 4 illustrates a flowchart of another example method for providing information related to displayed content, consistent with embodiments of the present disclosure.
  • FIG. 5 illustrates an example interface for displaying content, consistent with embodiments of the present disclosure.
  • FIG. 6A illustrates an example user interface screen for selecting a representation of a token, consistent with embodiments of the present disclosure.
  • FIG. 6B illustrates an example user interface screen for displaying links corresponding to content items, consistent with embodiments of the present disclosure.
  • FIG. 7 illustrates a flowchart of an example method implemented by a display device, consistent with embodiments of the present disclosure.
  • FIG. 8 illustrates a flowchart of an example method implemented by a client device, consistent with embodiments of the present disclosure.
  • FIG. 9 illustrates a flowchart of another example method implemented by a client device, consistent with embodiments of the present disclosure.
  • FIG. 10 illustrates a flowchart of an example method for displaying merged content, consistent with embodiments of the present disclosure.
  • FIG. 11 illustrates an example environment for displaying merged content, consistent with embodiments of the present disclosure.
  • FIG. 12 illustrates an example computer system for implementing embodiments and features consistent with the present disclosure.
  • Embodiments of the present disclosure relate to computerized systems and methods for providing information related to displayed content.
  • Embodiments of the present disclosure include systems and methods that may provide a user with information related to content displayed in a particular region of a display interface.
  • a display interface may include interactive regions that display one or more types of content.
  • a user may interact with a region of the interface, and may wish to receive information related to that content on a second device.
  • the interface may associate a token with the user, and may display the token in the region of the interface with which the user interacts.
  • the user may select a representation of the token on the second device to identify the region with which he/she was interacting, and information related to the content displayed in that region can then be provided to the second device.
  • One or more advantages may be achieved by providing content via a display device that supports user interaction. For example, a greater variety of content can be displayed. Moreover, unlike static displays (e.g., posters, signs), an interactive display can present content that is targeted to a particular user's interest. With an interactive display, a user may be able to interact with content to, for example, learn additional information about an advertised product, search for information, play a game, look up directions to a place of interest, and more. Users may find the information provided by such displays to be more useful and/or more engaging. Content providers may find advertisements on such displays to be more effective.
  • Content providers may also find that it is more cost effective to provide a display supporting multiple user interaction than a display supporting interaction of a single user.
  • a single large display could present an interface with multiple interactive regions, allowing multiple users to interact with the interface at the same time.
  • Such a display could be useful in public spaces, such as airports.
  • users could learn about products advertised on a large public display by interacting with advertisements while waiting for their airplanes to board.
  • providing such a display presents some challenges to content providers. For example, a user interacting with a region of a displayed interface may wish to further interact with content displayed in the region, but be unwilling or unable to do so at that time.
  • a user interacting with advertisement content may arrive at a purchase screen, and may wish to purchase an advertised product, but be unwilling to enter his/her credit card information on the public display.
  • a user may be unable to further interact with content because he/she has to catch a flight that is ready to depart.
  • Embodiments of the present disclosure can address the challenges associated with providing displays supporting multiple user interaction.
  • embodiments of the present disclosure provide computerized systems and methods that may display a token in the region of the multiuser display interface with which a user interacts, and that may allow the user to select a representation of the token on a device of the user. Selecting the representation may allow the computerized systems and methods disclosed herein to provide the user's device with information related to the content that was displayed in the region, such as the content itself, so that the user can continue to interact with the content using the user's device.
  • Embodiments of the present disclosure also provide a user-friendly and private way to receive information related to displayed content at a device of a user. For example, by selecting a representation of a token to identify content in which a user is interested, passwords and/or other more complicated ways of authenticating to a device are not needed. No information regarding the user's device needs to be presented on the display and the user's device remains private.
  • FIG. 1A illustrates an example environment 100 in which interactive content may be presented to one or more users, consistent with embodiments of the present disclosure.
  • environment 100 may be a public environment.
  • a public environment may be, for example, an airport terminal, bus station, mall, store, shopping center, railroad station, subway station, park, zoo, stadium, public building, school campus, parking lot, area along a roadway, hospital, museum, wayside station, or any other location frequented by large groups of people.
  • a public environment is not so limited.
  • the term "public" may refer to any environment in which two or more individuals are present.
  • a display device may present an interactive interface including interactive content, such as interactive interface 160 of FIG. 1A.
  • a display device 210 may present interactive interface 160 across one or more display panels.
  • interactive interface 160 may be displayed across one large display panel or across multiple display panels.
  • the display panel(s) may be display panel(s) 1250, as further described with reference to FIG. 12.
  • Display panel(s) may include, for example, one or more cathode ray tube (CRT) displays, liquid crystal displays (LCDs), plasma displays, light emitting diode (LED) displays, touch screen type displays, projector displays (e.g., images projected on a screen or surface, holographic images, etc.), organic light emitting diode (OLED) displays, field emission displays (FEDs), active matrix displays, vacuum fluorescent displays (VFDs), 3-dimensional (3-D) displays, electronic paper (e-ink) displays, microdisplays, or any combination of the above types of displays.
  • Interactive interface 160 may allow one or more users to interact with content displayed in interface 160 through one or more types of user inputs.
  • the one or more types of user inputs may include one or more gestures, motions, voice commands, touch screen inputs, button presses, key presses on a physical or virtual keyboard, etc.
  • user input may include a gesture, motion, or voice command captured by one or more sensors mounted on or in the vicinity of a display device 210.
  • Sensors may include one or more video cameras, infrared sensors, ultrasound sensors, radio frequency sensors, or microphones.
  • User input may further include button or key presses on a touch screen or virtual or physical keyboard.
  • Interface 160 may include a certain number of interactive regions.
  • a display device 210 may present interface 160 across several display panels, and may configure interface 160 to include an interactive region for each panel.
  • a display device 210 may dynamically configure interface 160 to display a certain number of interactive regions. For example, the number of interactive regions may be dynamically configured based on a number of users that a display device 210 detects within the vicinity of display device 210, or that display device 210 detects as interacting with interface 160. In such an example scenario, two interactive regions may be displayed when two users are detected in the vicinity of display device 210, and interface 160 may be divided into three interactive regions when a third user enters the vicinity of display device 210.
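
The dynamic configuration described above can be thought of as recomputing the region layout whenever the set of detected users changes. A minimal sketch, assuming equal-width regions and a hypothetical list of detected users:

```python
# Illustrative sketch: one interactive region per detected user.
# The detection mechanism and region geometry are assumptions.

def layout_regions(display_width, detected_users):
    """Return equal-width interactive regions, one per detected user."""
    count = max(1, len(detected_users))
    width = display_width / count
    return [{"user": user, "x_start": i * width, "x_end": (i + 1) * width}
            for i, user in enumerate(detected_users)]

# Two users detected -> two regions; a third user arriving -> three regions.
print(layout_regions(3840, ["user_a", "user_b"]))
print(layout_regions(3840, ["user_a", "user_b", "user_c"]))
```
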
  • a user interacting with interface 160 may interact with content within an interactive region, such as region A 150 or region B 152.
  • each user interacting with interface 160 may be assigned his/her own interactive region for interacting with content.
  • each interactive region may be configured to allow a single user to interact with content within the region.
  • a token may be displayed in association with an interactive region.
  • a token may be displayed within an interactive region.
  • tokens may be displayed in each interactive region within interface 160.
  • tokens may be displayed in each interactive region within interface 160 with which a user interacts.
  • a token could be displayed outside an interactive region with which it is associated.
  • a representation of the layout of interface 160 could be displayed as a legend in interface 160, along with tokens positioned in the legend such that a user could infer which interactive region corresponds to a given token.
  • a token may include a color, image, number, character, character string, shape, 3-dimensional object, avatar, and/or any other item that could be used to identify a particular region.
  • a user may select a particular token to be displayed. For example, a user could choose a particular color, image, number, character, character string, 3-dimensional object, avatar, and/or other item to display in association with the interactive region with which the user interacts.
  • a token is first displayed in an interactive region of interface 160 when a user enters the vicinity of the region.
  • one or more sensors could detect when a user enters the vicinity of a particular interactive region, and display device 210 could associate a token with the user and display the token in response to the detection of the user.
  • a token could be displayed when a user begins interacting with an interactive region.
  • a token may be displayed when a user finishes interacting with an interactive region.
  • a token may be displayed the entire time a user interacts with an interactive region.
  • a token may be displayed upon a user input from a user. For example, a token can be displayed based upon a particular user input, such as a particular gesture.
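
One way to realize the token-display triggers described above is to assign a token to a region the first time a qualifying event (entering the vicinity, beginning to interact, or a particular gesture) is observed for that region. The sketch below is a rough illustration; the event names and the small pool of color tokens are assumptions.

```python
# Illustrative sketch: associating a token with a region when a user is first
# detected. Token values and event names are assumptions.

import itertools

_token_pool = itertools.cycle(["red", "green", "blue", "yellow"])
_assigned_tokens = {}  # region id -> token

def on_user_event(region_id, event):
    """Display a token for a region when a user enters, interacts, or gestures."""
    if event in ("entered_vicinity", "began_interacting", "display_gesture"):
        if region_id not in _assigned_tokens:
            _assigned_tokens[region_id] = next(_token_pool)
        return {"region": region_id, "token": _assigned_tokens[region_id]}
    return None

print(on_user_event("region_a", "entered_vicinity"))  # assigns and displays a token
```
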
  • a content item displayed within an interactive region of interface 160 may include any type of content in which a user may be interested. This may include, for example, documents, presentations, text, news, articles, maps, geographic information, rating information, review information, polling information, directories, pricing information, advertisements, product information, information regarding days and hours of operation for establishments, audio, video, pictures, images, social network information, games, weather information, event information (e.g., information regarding appointments, holidays, events), stock market information, web pages, URLs, search results, and/or any other type of content in which users may be interested.
  • information related to a content item may include the content item itself, or any other information related to a content item, such as documents, presentations, text, news, articles, maps, geographic information, rating information, review information, polling information, directories, pricing information, advertisements, product information, information regarding days and hours of operation for establishments, audio, video, pictures, images, social network information, games, weather information, event information (e.g., information regarding appointments, holidays, events), stock market information, web pages, URLs, search results, and/or any other type of information which may be related to a content item.
  • information related to a content item may be a representation of the content item that is configured for display on a particular client device 220.
  • information related to a content item may be a representation of the content item that is configured for display on a smartphone.
  • content items may be presented in interface 160 in such a way that users may interact with the content items.
  • one or more software applications may be executed on a display device 210 to allow users to interact with the content.
  • interface 160 may be comprised of one or more web pages rendered by one or more web browsers stored on display device 210.
  • Interface 160 may present content in such a way that users may, for example, request quotes for particular stocks, search for information using a search engine, purchase an advertised product, play an audio or video file, vote in a poll, search for directions to a destination, look up a store's location and hours, check a social network feed, etc.
  • Users may interact with content displayed in interface 160 using one or more gestures, motions, key presses, button presses, voice commands, etc.
  • one or more microphones mounted on, or in the vicinity of, display device 210 may detect voice commands, and display device 210 may process the voice commands to determine user input.
  • a user may press keys or buttons to interact with content items displayed in interface 160.
  • the keys and/or buttons could be physical and/or virtual (e.g., provided on a touch screen interface).
  • one or more video cameras mounted on, or in the vicinity of, display device 210 may capture user gestures.
  • a display device 210 may analyze the user gestures in the captured video to determine user input. For example, display device 210 may interpret a user pointing at a particular content item as a selection of the content item, and/or may interpret a user making a swiping gesture as a manipulation of a content item.
  • the one or more video cameras may detect various characteristics of users interacting with interface 160. For example, a display device 210 may analyze captured video to determine a user's age, gender, hair color, eye color, and/or other user characteristics.
  • Display device 210 may then identify content items that may be desirable to a user having such characteristics, and may present the content items to the user in interface 160.
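
As a rough illustration of matching content to detected characteristics, the sketch below scores catalog items by how many of their tags overlap the characteristics inferred from the captured video. The tag vocabulary and catalog entries are hypothetical.

```python
# Illustrative sketch: choosing content items for detected user characteristics.
# The characteristics, tags, and content identifiers are assumptions.

def select_content(characteristics, catalog):
    """Rank catalog items by how many tags match the detected characteristics."""
    def score(item):
        return len(set(item["tags"]) & set(characteristics))
    return sorted(catalog, key=score, reverse=True)

catalog = [
    {"id": "sports_news", "tags": ["adult", "sports"]},
    {"id": "cartoon_game", "tags": ["child"]},
    {"id": "local_weather", "tags": []},
]
print(select_content(["adult", "sports"], catalog))
```
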
  • Content items may be stored locally at display device 210, or may be received by a display device 210 from one or more servers 240.
  • one or more servers 240 may send content updates to a display device 210, such as real-time updates.
  • server(s) 240 may provide a display device 210 with updated weather content, stock quotes, trending articles, etc.
  • display device 210 may request content items from server(s) 240 as a user interacting with interface 160 requests content items.
  • two users may be interacting with interface 160.
  • the two users may be simultaneously interacting with interface 160.
  • User A 120 may be interacting with content A 130 in interactive region A 150 of interface 160.
  • User B 122 may be interacting with content B 132 in interactive region B 152 of interface 160.
  • Content A 130 and content B 132 may be the same content, or different content.
  • a token A 140 may be displayed in association with interactive region A 150, and a token B 142 may be displayed in association with interactive region B 152.
  • token A 140 may be displayed in interactive region A 150, and/or token B 142 may be displayed in interactive region B 152, though the disclosure is not so limited.
  • token A 140 is different than token B 142.
  • a user interacting with interface 160 may wish to receive information related to content displayed in interface 160 on a different device, such as a client device 220. For example, a user may wish to continue interacting with content displayed in interface 160 on a client device 220, or may wish to receive additional information related to the content displayed in interface 160 on a client device 220.
  • FIG. 1B illustrates an example user interface 110 that may be displayed on a client device 220, and that may allow a user to receive information related to content displayed in interface 160 on client device 220.
  • representations of the tokens displayed in interface 160 may be displayed in user interface 110.
  • a representation of a token may include a color, image, number, character, character string, shape, 3-dimensional object, avatar, and/or any other item that could be used to identify a particular token that was displayed in an interactive interface of a display device 210.
  • a representation of a token may be the same as the token.
  • a token that is displayed in a red color may be represented by a representation that is displayed in the red color.
  • a representation of a token may differ from the token.
  • a token that is displayed in a red color may be represented by a representation that displays the character string "red.”
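
The relationship between a token and its representation can be as simple as either echoing the token or describing it. A minimal sketch, assuming a token modeled as a small dictionary:

```python
# Illustrative sketch: a representation may reproduce the token itself (e.g. the
# same red color) or describe it (e.g. the string "red"). Names are assumptions.

def representation_for(token, describe=False):
    """Return either the token itself or a textual description of it."""
    if describe and "color" in token:
        return token["color"]          # e.g. the character string "red"
    return dict(token)                 # e.g. the same red color swatch

red_token = {"color": "red"}
print(representation_for(red_token))                 # same as the token
print(representation_for(red_token, describe=True))  # "red"
```
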
  • a user may select a representation in user interface 110 to identify an interactive region that contains content in which the user is interested. For example, user A 120 may be interested in receiving content from interface 160. As previously noted with reference to FIG. 1A, user A 120 may have been interacting with content A 130 in interactive region A 150. User interface 110 may be displayed on a client device 220 of user A 120. User A 120 may wish to receive information related to content A 130 on client device 220. Accordingly, user A may select representation A 170 on user interface 110 to identify that interactive region A is the interactive region that displayed the content in which user A was interested. In response to selecting representation A 170, information related to content A 130 may be received on client device 220.
  • the above example describes a situation in which a user selects a representation of a token that identifies an interactive region of interface 160 with which the user interacted.
  • user A 120 may view content B 132 while user B 122 is interacting with interactive region B 152.
  • User A 120 may be interested in content B 132, despite the fact that it is user B 122 that is interacting with content B 132.
  • user A 120 may still be able to receive information related to content B 132 on his/her client device 220 by selecting representation B 172 on user interface 110.
  • In some cases, user A 120 may be interested in both content A 130 and content B 132. In such a case, user A 120 may select both representation A 170 and representation B 172 in user interface 110, and may receive information related to content A 130 and information related to content B 132 in response to the selection.
  • Although user interface 110 has been described above as being a user interface 110 of a client device 220 of user A 120, the disclosure is not so limited.
  • a user interface 110 may also be displayed on a client device 220 of user B 122 when user B 122 is interested in receiving information related to content displayed in interface 160 on his/her client device 220. Similar to the example described above with respect to user A 120, user B 122 may select a representation A 170 when user B 122 wishes to receive information related to content A 130 on a client device 220 of user B 122. User B 122 may select a representation B 172 when user B wishes to receive information related to content B 132 on a client device 220 of user B 122. User B 122 may also select a representation A 170 and a representation B 172 when user B wishes to receive information related to content A 130 and information related to content B 132 on a client device 220 of user B.
  • a user that has not interacted with interface 160 may be interested in receiving information related to content displayed in interface 160 on his/her client device 220.
  • the user may view content A 130 and/or content B 132, despite having not interacted with interface 160, and may request information related to content A 130 and/or content B 132 by selecting a representation A 170 and/or a representation B 172 on the user's client device 220.
  • Although FIG. 1A illustrates two users interacting with interface 160, and illustrates interface 160 as including two interactive regions, the disclosure is not so limited. Any number of users may interact with an interface 160, and any number of interactive regions may be displayed in an interface 160.
  • Although FIG. 1B illustrates two representations as being displayed in user interface 110, the disclosure is not so limited. Any number of representations may be displayed in a user interface 110 of a client device 220. In some embodiments, the number of representations displayed in user interface 110 corresponds to the number of interactive regions included in an interface 160, the number of tokens displayed in an interface 160, or the number of users interacting with an interface 160, though the disclosure is not so limited.
  • FIG. 2 is a block diagram of an example computing environment 200 for implementing embodiments and features of the present disclosure. The arrangement and number of components in computing environment 200 is provided for purposes of illustration. Additional arrangements, numbers of components, and other modifications may be made, consistent with the present disclosure.
  • computing environment 200 may include one or more client devices 220.
  • a client device 220 could be a mobile phone, smart phone, tablet, netbook, electronic reader, personal digital assistant (PDA), personal computer, laptop computer, smart watch, gaming device, desktop computer, set-top box, television, personal organizer, portable electronic device, smart appliance, navigation device, and/or other types of computing devices.
  • a client device 220 may be implemented with hardware devices and/or software applications running thereon.
  • a user may use a client device 220 to communicate with display device(s) 210 and/or server(s) 240 over network(s) 230.
  • a client device 220 may communicate by transmitting data to and/or receiving data from display device(s) 210 and/or server(s) 240.
  • client device(s) 220 may be implemented using a computer system, such as computer system 1200 of FIG. 12.
  • Computing environment 200 may also include one or more display device(s) 210.
  • a display device 210 could be a display, smart display, server, personal computer, desktop computer, mobile phone, smart phone, tablet, netbook, electronic reader, personal digital assistant (PDA), smart watch, gaming device, set-top box, television, personal organizer, portable electronic device, smart appliance, navigation device, and/or other types of computing devices.
  • a display device 210 may be a computing device configured to present an interactive interface across one or more display panels, consistent with the embodiments further disclosed herein.
  • a display device 210 may be implemented with hardware devices and/or software applications running thereon.
  • a display device 210 may communicate with client device(s) 220 and/or server(s) 240 over network(s) 230. For example, a display device 210 may communicate by transmitting data to and/or receiving data from client device(s) 220 and/or server(s) 240. In some embodiments, one or more of display device(s) 210 may be implemented using a computer system, such as computer system 1200 of FIG. 12.
  • Computing environment 200 may further include one or more server(s) 240.
  • server(s) 240 could include any combination of one or more of web servers, databases, mainframe computers, general-purpose computers, personal computers, or other types of computing devices.
  • one or more of server(s) 240 may be configured to host a web page, implement a search engine 245, index information, store information, and/or retrieve information.
  • a server 240 may be a standalone computing system or apparatus, or it may be part of a larger system.
  • server(s) 240 may represent distributed servers that are remotely located and communicate over a communications network, or over a dedicated network, such as a local area network (LAN).
  • Server(s) 240 may include one or more back-end servers for carrying out one or more aspects of the present disclosure.
  • Server(s) 240 may be implemented as a server system comprising a plurality of servers, or a server farm comprising a load balancing system and a plurality of servers.
  • a server 240 may be implemented with hardware devices and/or software applications running thereon.
  • a server 240 may communicate with client device(s) 220 and/or display device(s) 210 over network(s) 230.
  • a server 240 may communicate by transmitting data to and/or receiving data from client device(s) 220 and/or display device(s) 210.
  • one or more of server(s) 240 may be implemented using a computer system, such as computer system 1200 of FIG. 12.
  • a user can submit a query to a search engine 245 within server(s) 240.
  • the query may be transmitted through network(s) 230 to server(s) 240.
  • Server(s) 240 may include, or may be connected to, an index database and/or a search engine.
  • Server(s) 240 may respond to the query by generating search results, which are transmitted through network(s) 230 to client(s) 220 and/or display device(s) 210 in a form that may be presented to the user (e.g., a search results web page to be displayed in a web browser running on client(s) 220 and/or display device(s) 210).
  • when the query is received by the search engine, the search engine identifies resources that match the query.
  • the search engine may also identify a particular "snippet" or section of each resource that is relevant to the query (or of the highest ranked resources that are relevant to the query).
  • the search engine may include an indexing engine that indexes resources (e.g., web pages, images, or news articles on the Internet) found in a corpus (e.g., a collection or repository of content), an index database that stores the index information, and/or a ranking engine (or other software) that may rank the resources that match the query.
  • the indexing engine may index information using traditional techniques.
  • the ranking engine may have access to one or more scoring functions that are, for example, associated with the ranking engine.
  • the ranking engine may select a scoring function from the set of scoring functions.
  • the ranking engine may base the selection on user input.
  • the ranking engine may select a scoring function based on instructions received from a scoring functions evaluator.
  • the ranking engine may select multiple scoring functions and send multiple sets of ranked search results, one corresponding to each selected scoring function, to client(s) 220 and/or display device(s) 210.
  • the ranking engine may rank search results that are responsive to the query by determining one or more signals for the search result and the query, sending those signals to one of the scoring functions, receiving a score from the scoring function for each search result, and then ranking the search results based on the received scores.
  • the ranking engine and scoring functions may communicate according to commands specified in an application programming interface (API).
  • the API may specify interfaces used by the ranking engine and the scoring function to implement and invoke a series of commands for sharing data.
  • the API may specify a command used by a scoring function to receive scoring data from the ranking engine, and/or may specify a command used by a ranking engine to request a score from a scoring function.
  • data may be passed between the scoring function and the ranking engine in messages encoded according to a messaging format.
  • the messaging format may be specified by the API, or may be separate from the API.
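
To make the ranking-engine/scoring-function exchange concrete, the sketch below passes signals in JSON-encoded messages and returns a score per result. The message fields, the scoring rule, and the function names are assumptions, not the API actually specified by the disclosure.

```python
# Illustrative sketch of a ranking engine requesting scores from a scoring
# function via JSON messages. Field names and scoring rule are assumptions.

import json

def score_result(message_json):
    """Hypothetical scoring function: receives signals, returns a score."""
    signals = json.loads(message_json)["signals"]
    score = signals.get("term_frequency", 0.0) + 0.5 * signals.get("quality", 0.0)
    return json.dumps({"score": score})

def rank(query, results):
    """Hypothetical ranking engine: sends signals for each result, ranks by score."""
    scored = []
    for result in results:
        request = json.dumps({"query": query, "signals": result["signals"]})
        score = json.loads(score_result(request))["score"]
        scored.append((score, result["url"]))
    return [url for _, url in sorted(scored, reverse=True)]

results = [
    {"url": "http://example.com/a", "signals": {"term_frequency": 0.02, "quality": 0.9}},
    {"url": "http://example.com/b", "signals": {"term_frequency": 0.05, "quality": 0.2}},
]
print(rank("new york", results))
```
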
  • Examples of signals include information about the query itself, for example, the terms of the query, an identifier of the user who submitted the query, and a categorization of the user who submitted the query (e.g., the geographic location from where the query was submitted, the language of the user who submitted the query, interests of the user who submitted the query, or a type of client device 220 or display device 210 used to submit the query (e.g., mobile device, laptop, desktop computer)).
  • the identification of the user may be, for example, a user name and/or the Internet Protocol (IP) address of a client device 220 or display device 210.
  • the geographic location from where the query was submitted may be, for example, a continent, a country, a state, a city, and/or geographic coordinates, such as latitude and/or longitude.
  • Signals may also include information about the terms of the query, for example, the locations where a query term appears in the title, body, and/or text of anchors in a search result, where a query term appears in anchors pointing to the search result, how a term is used in the search result (e.g., in the title of the search result, in the body of the search result, and/or in a link in the search result), the term frequency (e.g., the number of times the term appears in a corpus of documents in the same language as the query divided by the total number of terms in the corpus), and/or the document frequency (e.g., the number of documents in a corpus of documents that contain the query term divided by the total number of documents in the corpus).
  • signals include information about the search result, for example, a measure of the quality of the search result, a universal resource locator (URL) of the search result, a geographic location where the search result is hosted, when server(s) 240 first added the search result to an index, a language of the search result, a size of the search result (e.g., number of tokens and/or file size), a length of a title of the search result, and/or a length of the text of source anchors for links pointing to a document.
  • signals include information about anchor text for links pointing to the search result, for example, the text itself and the total number of tokens (e.g., words) in the anchor text. For example, if an anchor pointing to the search result has the text "NY" and another anchor has the text "New York,” then the signals may include the text "NY” and "New York” as well as the number of tokens in the text: one from “NY” and two from “New York” for a total of three tokens.
  • Other anchor signals for links pointing to the search result may include a number of documents in the domain of the search result that have a link pointing to the search result with given anchor text, and/or a number of documents from different domains than the search result that have a link pointing to the search result with given anchor text.
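
The example signals above (term frequency, document frequency, and anchor-text token counts) can be computed directly from a corpus and a set of anchor texts. A small sketch, using a toy corpus and whitespace tokenization as assumptions:

```python
# Illustrative sketch of computing the example signals described above.
# The corpus and tokenization scheme are assumptions.

corpus = [
    "new york travel guide",
    "guide to the best pizza in new york",
    "seattle travel guide",
]

def term_frequency(term, corpus):
    """Occurrences of the term divided by the total number of terms in the corpus."""
    all_terms = " ".join(corpus).split()
    return all_terms.count(term) / len(all_terms)

def document_frequency(term, corpus):
    """Documents containing the term divided by the total number of documents."""
    return sum(term in doc.split() for doc in corpus) / len(corpus)

def anchor_token_count(anchors):
    """Total number of tokens (words) across anchor texts, e.g. "NY" + "New York" = 3."""
    return sum(len(anchor.split()) for anchor in anchors)

print(term_frequency("york", corpus))          # 2 occurrences / 15 terms
print(document_frequency("york", corpus))      # 2 documents / 3 documents
print(anchor_token_count(["NY", "New York"]))  # 3
```
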
  • the ranking engine may also provide additional information to the scoring function, for example, scoring parameters.
  • server(s) 240 may receive the scoring parameters with the query.
  • Computing environment 200 may still further include one or more networks 230.
  • Network(s) 230 may connect server(s) 240 with client device(s) 220, may connect server(s) 240 with display device(s) 210, and/or may connect display device(s) 210 with client device(s) 220.
  • Network(s) 230 may provide for the exchange of information, such as queries for information and results, between client device(s) 220 and server(s) 240, between client device(s) 220 and display device(s) 210, or between server(s) 240 and display device(s) 210.
  • Network(s) 230 may include one or more types of networks interconnecting display device(s) 210, client device(s) 220, and/or server(s) 240.
  • one client device 220 may communicate with server(s) 240 using a different type of network than a second client device 220 may use to communicate with server(s) 240.
  • a display device 210 may communicate with server(s) 240 by using a different type of network than a client device 220 may use to communicate with server(s) 240.
  • a display device 210 may communicate with client device(s) 220 using a different type of network than is used by server(s) 240 to communicate with display device(s) 210 and/or client device(s) 220.
  • Network(s) 230 may include one or more wide area networks (WANs), metropolitan area networks (MANs), local area networks (LANs), personal area networks (PANs), or any combination of these networks.
  • Network(s) 230 may include a combination of a variety of different network types, including Internet, intranet, Ethernet, twisted-pair, coaxial cable, fiber optic, cellular, satellite, IEEE 802.11, terrestrial, Bluetooth, infrared, wireless universal serial bus (wireless USB), and/or other types of wired or wireless networks.
  • FIG. 3 illustrates a flowchart of an example method 300, consistent with embodiments of the present disclosure.
  • Example method 300 may be implemented in a computing environment (see, e.g., FIG. 2) using one or more computer systems (see, e.g., FIG. 12).
  • method 300 may be performed by one or more servers 240 or by one or more display devices 210.
  • a token that is or was displayed in an interactive interface of a first device may be identified as being associated with an interactive region of the interactive interface of the first device.
  • storage medium(s) may be referenced to identify one or more of the tokens that are and/or were displayed in the interactive interface.
  • a plurality of tokens that are and/or were displayed in the interactive interface of the first device may be identified.
  • all of the tokens that are and/or were displayed in the interactive interface of the first device may be identified.
  • a content item that is or was displayed in an interactive interface may be identified as being associated with an interactive region of the interactive interface of the first device.
  • storage medium(s) may be referenced to identify one or more of the content items that are and/or were displayed in the interactive interface.
  • a plurality of content items that are and/or were displayed in the interactive interface of the first device may be identified.
  • all of the content items that are and/or were displayed in the interactive interface of the first device may be identified.
  • data may be sent to a second device, such as a client device 220, to cause a representation of the token to be displayed on the second device.
  • a computer system 1200 may cause the representation to be displayed on the second device.
  • computer system 1200 may cause representations of a plurality of the tokens that were displayed in an interface of the first device to be displayed on the second device.
  • computer system 1200 may cause representations of all of the tokens that were displayed on the interface of the first device to be displayed on the second device.
  • Computer system 1200 may cause the representation(s) to be displayed at the second device by transmitting data, such as HTML data, XML data, or other instructions to the second device over network(s) 230.
  • the representation(s) may be displayed at the second device in a user interface, such as user interface 110 of FIG. 1B.
  • an indication that the representation of the token has been selected at the second device may be received.
  • the indication may be received from the second device over network(s) 230 in response to a user selection of the representation.
  • more than one representation may be selected by a user. Accordingly, one or more indications of selected representations may be received.
  • In step 350, information related to the content items that were displayed in the interactive region identified by the selected token representation may be sent to the second device.
  • a computer system 1200 may cause the information to be sent to the second device.
  • Computer system 1200 may determine which token(s) correspond to the selected representation(s), and which content item(s) correspond to the interactive portion(s) associated with those token(s).
  • Computer system 1200 may then determine which content items are and/or were displayed in those interactive portion(s).
  • Information related to the determined content items may then be sent to the second device over network(s) 230.
  • Information related to a content item may include the content item, other information related to the content item, or a combination thereof. For example, if a user had interacted with an interactive advertisement, the interactive advertisement and/or other information, such as a URL of a web page describing the advertised product, may be sent to the second device.
  • method 300 may be performed by a display device 210.
  • computing environment 200 may not require server(s) 240.
  • tokens, content items, and information related to the content items may be stored locally on storage medium(s) within a display device 210.
  • a display device 210 may identify the token(s) (step 310) and content items (step 320) by referencing the storage medium(s), and may cause representation(s) of the token(s) to be displayed at client device(s) 220 (step 330) by sending data indicative of the token(s) to client device(s) 220 over network(s) 230.
  • a display device 210 may also receive indication(s) that representation(s) have been selected from client device(s) 220 over network(s) 230 (step 340). A display device 210 may further send information related to the content items(s) that were displayed in the interactive portion(s) identified by the selected representation(s) to the client device(s) (step 350).
  • method 300 may be performed by one or more server(s) 240.
  • server(s) 240 may identify token(s) that are and/or were displayed in an interface of a first device, such as a display device 210.
  • server(s) 240 may receive information from the first device indicating that a token is or was displayed in association with an interactive region of the interface of the display device.
  • a plurality of tokens that are and/or were displayed in association with interactive regions of the interface may be identified.
  • all of the tokens that are and/or were displayed in association with interactive regions of the interface may be identified.
  • the first device may periodically transmit information regarding one or more tokens that are and/or were displayed in association with interactive regions of the interface to server(s) 240 over network(s) 230.
  • server(s) 240 may periodically poll the first device over network(s) 230 to receive information about tokens that are and/or were displayed in association with interactive regions of the interface.
  • server(s) 240 may be aware of the tokens that are and/or were displayed in association with interactive regions of the interface.
  • server(s) 240 may have provided instructions to the first device over network(s) 230 to control the configuration and/or display of the interactive regions and/or tokens of the interface.
  • server(s) 240 may store information about the tokens that are and/or were displayed in association with interactive regions of the interface, and may reference storage medium(s) to identify one or more of the tokens that were presented.
  • Server(s) 240 performing step 320 may identify content item(s) that are and/or were displayed in an interface of the first device. For example, server(s) 240 may receive information over network(s) 230 from the first device indicating that a content item is and/or was displayed in an interactive region of the interface. In some embodiments, a plurality of content items that are and/or were displayed in association with interactive regions of the interface may be identified. In some embodiments, all of the content items that are and/or were displayed in association with interactive regions of the interface may be identified.
  • the first device may periodically transmit information regarding content item(s) that are and/or were displayed in association with interactive regions of the interface to server(s) 240 over network(s) 230.
  • server(s) 240 may periodically poll the first device over network(s) 230 to receive information about content items that are and/or were displayed in association with interactive regions of the interface.
  • server(s) 240 may be aware of the content items that are and/or were displayed in association with interactive regions of the interface.
  • server(s) 240 may provide instructions to the first device over network(s) 230 to configure and/or display interactive regions and/or content items within the interface.
  • server(s) 240 may store information about the content items that are and/or were displayed in association with interactive regions of the interface of the display device, and may reference storage medium(s) to identify the content item(s) that were presented.
  • server(s) 240 may cause representation(s) of the token(s) to be displayed on the second device.
  • server(s) 240 may cause representations of a plurality of the tokens that are and/or were displayed in the interface of the first device to be displayed on the second device.
  • server(s) 240 may cause representations of all of the tokens that were displayed in the interface of the first device be displayed on the second device.
  • Server(s) 240 may cause the representation(s) to be displayed on the second device based on a signal or user command received over network(s) 230 from the second device.
  • Server(s) 240 may cause the representation(s) to be displayed on the second device by transmitting data, such as HTML data, XML data, or other instructions to the second device over network(s) 230.
  • the representation(s) may be displayed at the second device in a user interface, such as user interface 110 of FIG. 1B.
  • an indication that a representation of a token has been selected at the second device may be received at server(s) 240.
  • the indication may be received from the second device over network(s) 230 in response to a user selection of the representation.
  • a first indication corresponding to the representation of the first token may be received from the second device, and a second indication corresponding to the representation of the second token may be received from the second device.
  • a single indication may be received from the second device indicating that representations of the two tokens have been selected.
  • server(s) may determine the token(s) represented by the selected representation(s), and may determine the content item(s) that were displayed in the interactive region(s) identified by the token(s).
  • Information related to the determined content item(s) may then be sent to the second device over network(s) 230.
  • the information related to the content items may include the same content item(s), other information related to content item(s), or a combination thereof. For example, if a user had interacted with an interactive advertisement for a product, the interactive advertisement and/or additional information, such as a URL to a web page describing the product, may be sent to the second device over network(s) 230.
  • the information related to the determined content item(s) may be representations of the determined content item(s) configured for display on a particular client device 220, such as a phone.
  • Server(s) 240 may be connected to a plurality of display devices 210 over network(s) 230.
  • server(s) 240 may be connected to hundreds, or even thousands, of display devices 210. Accordingly, server(s) 240 may need to identify a display device 210 with which a user interacted, such as the first device described above. In accordance with some embodiments described below, server(s) 240 may identify the first device.
  • the first device may be identified based on a notification received from a client device 220, such as the second device described above.
  • the notification may represent data that identifies the first device, such as a universal resource locator (URL), code, character string, bar code, quick response (QR) code, or any other data that can be used to identify an item.
  • the identifying data may be a location of the second device, and the first device may be identified based on the location.
  • the second device may determine its location via one or more of global positioning satellite (GPS) signals, cellular signals, base station signals, and/or any other signals to determine the location of an electronic device. Once the second device has determined its location, information regarding that location may be used to identify the first device.
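
One simple way to identify the first device from the second device's reported location is a nearest-neighbor lookup over the stored display locations. The sketch below uses planar distance on latitude/longitude purely for illustration; the device identifiers and coordinates are hypothetical.

```python
# Illustrative sketch: identifying the display device (first device) nearest to
# the reported location of the user's device. IDs and coordinates are assumptions.

import math

display_locations = {
    "terminal_a_display": (47.4436, -122.3016),
    "terminal_b_display": (47.4399, -122.3090),
}

def nearest_display(client_lat, client_lon):
    """Pick the stored display whose location is closest to the client's location."""
    def distance(entry):
        lat, lon = entry[1]
        return math.hypot(lat - client_lat, lon - client_lon)
    return min(display_locations.items(), key=distance)[0]

print(nearest_display(47.4430, -122.3020))  # -> "terminal_a_display"
```
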
  • the first device may be identified by a URL.
  • the URL may be marked on or near the first device, or may be presented in the interface displayed by the first device.
  • a user using the second device may then type the URL into a web browser, and the second device may connect to a web page for the first device. With the second device connected, one or more web servers 240 may then carry out method 300.
  • server(s) 240 may receive the notification from the second device over network(s) 230. Upon reception of the notification, server(s) 240 may identify the first device. For example, server(s) 240 may consult one or more databases, tables, or lists stored locally or remotely on server(s) 240 to determine which display device 210 corresponds to the first device. If the data represented by the notification matches identification data stored on server(s) 240 for the first device, server(s) 240 may determine that the user of the second device was interacting with the first device.
  • server(s) 240 may consult the one or more databases, tables, or lists to identify locations of display devices 210, and may identify the first device based on a determination that a stored location of the first device is closest to a location indicated by the location information. Once the first device has been identified, server(s) may associate the first device with the second device.
  • step 320 may precede step 310, or may be conducted in parallel with step 310.
  • step 320 may not be performed until after step 340 has been performed. In such embodiments, only the content item(s) associated with the region(s) indicated by the selected representation(s) need be identified.
  • example method 300 has been described with respect to a single client device 220 (e.g., second device) that communicates with server(s) 240 and/or a display device 210 (e.g., first device). However, the disclosure is not so limited. As noted previously, method 300 may be performed (e.g., by server(s) 240 or display device(s) 210) for any number of client devices 220. For example, method 300 may also be performed for a second user interacting with an interactive region of the interface displayed on the first device, such as user B 122 of FIG. 1A.
  • a computer system 1200 may cause representations of the tokens displayed on the interface of the first device to be sent to a device (e.g., third device) of the second user over network(s) 230.
  • an indication that one of the representations was selected by the second user on the third device may be received at the computer system over network(s) 230.
  • the second user may select a representation of a second token that corresponds to a second interactive region with which the second user had interacted, and a second indication corresponding to the second user's selection may be received over network(s) 230.
  • FIG. 4 illustrates a flowchart of an example method 400, consistent with embodiments of the present disclosure.
  • Example method 400 may be implemented in a computing environment (see, e.g., FIG. 2) using one or more computer systems (see, e.g., FIG. 12).
  • method 400 may be performed by one or more servers 240 or by one or more display devices 210.
  • a plurality of content items may be displayed within an interactive region of an interface of a display device 210.
  • the first device discussed with reference to method 300 may display a plurality of content items in the interactive region of the interface with which the user interacts.
  • An example of such an interface is illustrated in FIG. 5, which is discussed further below.
  • step 350 of method 300 may be performed by performing method 400.
  • step 410 may be performed.
  • a computer system 1200 may identify a token based on the selected representation, and content items that were displayed in the interactive region associated with the token.
  • the computer system may then cause a listing of links corresponding to the content items to be presented on the second device.
  • computer system 1200 may send instructions or commands over network(s) 230 that cause the listing to be presented.
  • An example listing is illustrated in user interface 620 of FIG. 6B.
  • at step 420, an indication may be received over network(s) 230 that a link presented in the listing has been selected by the user at the second device.
  • at step 430, the computer system may cause information related to a content item corresponding to the selected link to be sent to the second device over network(s) 230.
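  • A minimal sketch of this token-to-listing-to-link flow is shown below, assuming hypothetical in-memory mappings from tokens to displayed content items and from content items to related information; none of the names are taken from the disclosure:

        # Hypothetical associations recorded by the display device: which content
        # items were shown in the interactive region tied to each token.
        CONTENT_BY_TOKEN = {
            "token-red-square": ["news-headlines", "science-news",
                                 "mall-directory", "weather"],
        }

        INFO_BY_CONTENT = {
            "weather": "Current conditions and a five-day forecast.",
            "mall-directory": "Store listing with floor map.",
        }

        def build_link_listing(selected_representation):
            """Step 410: resolve the token behind the selected representation and
            return links for the content items displayed in its region."""
            token = selected_representation["token_id"]   # representation -> token
            return [{"link_id": item, "label": item.replace("-", " ").title()}
                    for item in CONTENT_BY_TOKEN.get(token, [])]

        def handle_link_selection(link_id):
            """Steps 420-430: after the user picks a link, return the related
            information to be sent to the second device."""
            return INFO_BY_CONTENT.get(link_id, "No additional information stored.")

        listing = build_link_listing({"token_id": "token-red-square"})
        print([entry["label"] for entry in listing])
        print(handle_link_selection("weather"))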
  • FIG. 5 illustrates an example interactive interface 500 of a display device 210, such as the first device discussed with respect to methods 300 and 400.
  • Interface 500 may include a first interactive region 560 and a second interactive region 562.
  • a first token 520 may be displayed in association with first interactive region 560, and a second token 522 may be displayed in association with second interactive region 562.
  • a first headline 530 may be displayed in first interactive region 560 and a second headline 532 may be displayed in second interactive region 562.
  • Headlines 530, 532 may include, for example, a trending topic or top story of the day.
  • a plurality of content items may be displayed in each of the interactive regions.
  • content item 540 may display news headlines
  • content item 542 may display science news
  • content item 544 may display a mall directory
  • content item 546 may display the current weather.
  • a user interacting with first interactive region 560 may interact with these content items to, for example, learn more about the news, store locations in a mall, science discoveries, or the upcoming weather.
  • Content item 552 may display a chess game
  • content item 554 may display an advertisement for shoes
  • content item 556 may display information about a cafe
  • content item 558 may display a user's social network information.
  • a user interacting with second interactive region 562 may interact with these content items to, for example, play a chess match, learn about a product (e.g., shoes), review menu items at a restaurant, or keep updated on their friends' lives.
  • An interface of a display device 210 is not limited to the example illustration of FIG. 5. Any number of content items may be displayed in an interactive region, and a displayed content item may have any size or shape. Moreover, any number of interactive regions may be displayed in interface 500.
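  • For illustration only, one way an interface layout such as that of FIG. 5 might be represented internally is sketched below; the dictionary structure, region and token identifiers, and the lookup helper are assumptions, not part of the disclosure:

        # Hypothetical in-memory model of interface 500: each interactive region
        # carries a token, an optional headline, and its displayed content items.
        interface_500 = {
            "region-560": {
                "token": "token-520",
                "headline": "Top story of the day",
                "content_items": ["news-headlines-540", "science-news-542",
                                  "mall-directory-544", "weather-546"],
            },
            "region-562": {
                "token": "token-522",
                "headline": "Trending topic",
                "content_items": ["chess-game-552", "shoe-ad-554",
                                  "cafe-info-556", "social-feed-558"],
            },
        }

        def content_for_token(interface, token_id):
            """Return the content items displayed in the region associated
            with the given token, regardless of region count or size."""
            for region in interface.values():
                if region["token"] == token_id:
                    return region["content_items"]
            return []

        print(content_for_token(interface_500, "token-522"))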
  • FIGs. 6A and 6B illustrate example user interface screens 610 and 620 displayed on a client device 220 of a user, such as the second device described with respect to methods 300 and 400.
  • Screen 610 may be displayed on the second device when step 330 of method 300 is performed.
  • FIG. 6A illustrates an example of a user interface screen that may be displayed to a user interacting with interface 500.
  • first representation 630 may correspond to first token 520 of interface 500
  • second representation 632 may correspond to second token 522 of interface 500.
  • Screen 610 may also display an instruction 670 to the user.
  • FIG. 6B illustrates an example of a user interface screen 620 that may be displayed to the user after the user has selected a representation on the second device, such as a selection of first representation 630 of screen 610.
  • screen 620 may be displayed on the second device when step 410 of method 400 is performed.
  • Screen 620 may display the selected representation 640 on the screen to inform the user that screen 620 corresponds to the interactive region associated with the token represented by representation 640.
  • Screen 620 may also display links for each of the content items that were displayed in that interactive region.
  • link 660 may correspond to content item 540 of interface 500
  • link 662 may correspond to content item 544 of interface 500
  • link 664 may correspond to content item 542 of interface 500
  • link 666 may correspond to content item 546 of interface 500. Selection of any of the links may cause information related to the content item identified by the link to be sent to the second device.
  • Screen 620 may also include an instruction 650 to the user.
  • FIG. 7 illustrates a flowchart of an example method 700, consistent with embodiments of the present disclosure.
  • Example method 700 may be implemented in a computing environment (see, e.g., FIG. 2) using one or more computer systems (see, e.g., FIG. 12).
  • method 700 may be performed by one or more display devices 210.
  • a user may be detected.
  • a display device 210 may detect that a user moves into a vicinity of the display device via one or more sensors (e.g., video cameras, motion sensors, microphones, etc.).
  • the display device may receive content.
  • the display device may receive content from local storage medium(s) or from server(s) 240 over network(s) 230. The content may be received based on a user interaction, a user preference stored locally or at server(s) 240, or as part of a periodic update.
  • the display device may display the received content as part of an interactive interface displayed on one or more display panel(s).
  • the display device may display a token.
  • the display device may display a token in association with an interactive region of the interface with which the user interacts.
  • the token may be automatically determined by the display device, or determined based on an instruction received from server(s) 240 over network(s) 230.
  • the display device may select the token for display based on a determination by the display device or server(s) 240 that the token is not already being displayed at the display device, that the token is not being displayed at a certain number of display devices, or that the token is not being displayed at any of display devices 210.
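  • A minimal sketch of such a token selection, assuming a hypothetical candidate pool and a set of tokens reported as already in use; the pool contents and function name are illustrative only:

        import random

        # Hypothetical pool of candidate tokens; tokens_in_use would be the set of
        # tokens currently shown on this display device (or reported by server(s) 240).
        TOKEN_POOL = ["red-square", "blue-circle", "green-triangle", "gold-star"]

        def select_token(tokens_in_use):
            """Pick a token that is not already being displayed, so each
            interactive region can be identified unambiguously."""
            available = [t for t in TOKEN_POOL if t not in tokens_in_use]
            if not available:
                raise RuntimeError("No unused tokens available")
            return random.choice(available)

        print(select_token({"red-square", "blue-circle"}))  # e.g. "gold-star"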
  • the user may choose the token that is displayed.
  • the user may interact with the display device to browse through a listing of available tokens, and may select one for display in association with the user.
  • the display device may provide display information.
  • the display device may provide the display information to storage medium(s) on the display device for storage, and/or may provide the display information over network(s) 230 for storage at server(s) 240.
  • the display device may provide the display information to server(s) 240 over network(s) 230 periodically, or based on a request received over network(s) 230 from server(s) 240. For example, server(s) 240 may regularly poll the display device over network(s) 230 for the display information.
  • the display device may, for example, store information about content that has been displayed and/or tokens that have been displayed.
  • the display device may also store information about locations in the interface in which the content has been displayed, in which the tokens have been displayed, and/or in which the interactive regions have been provided.
  • the display device may further store information defining associations between tokens and content that have been displayed in association with each other, between tokens and interactive regions that have been provided in association with each other, and/or between content and regions that have been provided in association with each other.
  • the display device may further store information about user interaction histories and/or browsing histories. For example, the display device may store a history of user inputs performed by a user.
  • the display device may also store a history of content with which a user has interacted. Any of the stored display information may be provided over network(s) 230 for storage at server(s) 240.
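  • The display-information record described above might, purely as an illustration, be kept as a simple log of timestamped entries; the field names and serialization below are assumptions rather than the disclosed format:

        import json
        import time

        # Hypothetical log of display information kept by the display device and
        # periodically provided to server(s) 240.
        display_log = []

        def record_display_event(region_id, token_id, content_id, location):
            """Store which content and token were shown, where in the interface,
            and when, so tokens and content can later be matched to regions."""
            display_log.append({
                "timestamp": time.time(),
                "region": region_id,
                "token": token_id,
                "content": content_id,
                "location": location,        # e.g. pixel bounds of the region
            })

        def export_display_information():
            """Serialize the log for transmission to a server on request or poll."""
            return json.dumps(display_log)

        record_display_event("region-560", "token-520", "weather-546",
                             {"x": 0, "y": 0, "w": 960, "h": 540})
        print(export_display_information())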
  • the display information may be used by one or more computer systems 1200 of computing environment 200 to carry out various aspects of the disclosure.
  • the display information may be used to help one or more computer systems 1200 (e.g., server(s) 240 or display device(s) 210) identify tokens associated with regions of an interface of the device, such as in step 310 of method 300.
  • the display information may also be used to help the computer system(s) 1200 identify content associated with regions of an interface of the device, such as in step 320 of method 300.
  • Although FIG. 7 illustrates performing the steps of method 700 in a particular order, the disclosure is not so limited. For example, steps 710-750 may be performed in any order, or may be performed in parallel.
  • FIG. 8 illustrates a flowchart of an example method 800, consistent with embodiments of the present disclosure.
  • Example method 800 may be implemented in a computing environment (see, e.g., FIG. 2) using one or more computer systems (see, e.g., FIG. 12).
  • method 800 may be performed by one or more client devices 220.
  • a client device 220 may store a software program that, when executed, allows the client device to communicate with display device(s) 210 and/or server(s) 240, and/or to carry out method 800.
  • the software application may be provided to the client device from a content provider.
  • the software application may be a web browser application.
  • the client device receives display device identification information.
  • the display device identification information may be acquired based on some user action. For example, a user may see information that identifies a display device 210 displayed in association with the display device, and may enter the information into a software application stored on the second device.
  • the information could be a URL, code, character string, bar code, number (e.g., serial number), QR code, or any other data that can be used to identify an item.
  • the user may enter the identification information into the client device.
  • the user could type the URL into a web browser, type in a code, type in a character string, capture a bar code or QR code with a camera of the client device, etc.
  • a software application stored on the client device may prompt the user with a request to determine identification information for the display device.
  • the client device could present the user with a prompt indicating that a nearby display device is trying to communicate with the client device over network(s) 230.
  • the display device may wish to send a code or other data to the client device over network(s) 230 that would allow the client device to identify the display device.
  • the client device may prompt the user with a request to allow the client device to determine its location. If authorized, the client device may determine its location via one or more global positioning satellite (GPS) signals, cellular signals, base station signals, or any other signals known to one of skill in the art in determining the location of the client device. Information regarding the location of the client device may then be used to identify the display device based on the proximity of the display device to the location of the client device. For example, location information may be transmitted to server(s) 240 over network(s) 230. Server(s) 240 may consult a database, listing, or table of display devices, and may identify the display device by determining that it is the display device closest in proximity to the client device.
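  • A minimal sketch of the proximity lookup just described, assuming a hypothetical table of display device coordinates and a standard great-circle distance; the coordinates and device names are illustrative only:

        import math

        # Hypothetical table of display device locations (latitude, longitude).
        DISPLAY_LOCATIONS = {
            "display-17": (37.6213, -122.3790),   # e.g. an airport terminal
            "display-42": (37.7749, -122.4194),   # e.g. a downtown mall
        }

        def _haversine_km(a, b):
            """Great-circle distance between two (lat, lon) pairs, in kilometers."""
            lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
            dlat, dlon = lat2 - lat1, lon2 - lon1
            h = (math.sin(dlat / 2) ** 2
                 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
            return 2 * 6371.0 * math.asin(math.sqrt(h))

        def identify_display_by_location(client_location):
            """Return the display device closest to the client device's location."""
            return min(DISPLAY_LOCATIONS,
                       key=lambda d: _haversine_km(DISPLAY_LOCATIONS[d], client_location))

        print(identify_display_by_location((37.62, -122.38)))   # -> display-17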
  • the device identification information may be automatically captured by the second device.
  • a software application running on the second device may automatically capture the device identification information using one or more cameras, video cameras, microphones, GPS signals, cellular signals, base station signals, or signals received from the first device over network(s) 230.
  • method 300 may be performed by the display device.
  • the client device may not need to identify the display device, because the client device may communicate directly with the display device over network(s) 230.
  • step 810 may be skipped.
  • the client device may send an indication of a request to display representations over network(s) 230.
  • a user interacting with the interface of the display device may provide a user input on the client device to request that representations of the tokens being displayed in the interface be presented on the client device.
  • the user may enter the request before, after, or at any time during, his/her interaction with the interface of the first device.
  • the user may enter the request so long as the user remains within a certain proximity of the display device.
  • the client device may automatically request to display the representations after receiving the display device identification information in step 810.
  • the client device may automatically display the representations on the client device so long as the user interacts with, or remains in proximity of, the display device.
  • the user may enter the request at any time, even hours, days, months, or years after interacting with the display device.
  • the user could enter the display device identification information into an application on the client device, along with the time and/or date in which the user interacted with the display device.
  • the display device and/or server(s) 240 may search through a history of stored display information and determine the tokens, content, and/or interactive regions presented on the display device at that time and/or date.
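  • Such a time-based search through stored display information might look like the sketch below; the history entries and the tolerance window are hypothetical and only illustrate the idea of matching a user-supplied time and date:

        from datetime import datetime, timedelta

        # Hypothetical display history, e.g. accumulated from display information
        # provided by the display device to server(s) 240.
        history = [
            {"shown_at": datetime(2015, 5, 1, 14, 5), "token": "token-520",
             "region": "region-560", "content": "weather-546"},
            {"shown_at": datetime(2015, 5, 1, 14, 20), "token": "token-522",
             "region": "region-562", "content": "shoe-ad-554"},
        ]

        def tokens_displayed_around(when, window_minutes=30):
            """Return the tokens (and their regions) presented on the display
            device near the user-supplied time and date."""
            window = timedelta(minutes=window_minutes)
            return {(entry["token"], entry["region"])
                    for entry in history
                    if abs(entry["shown_at"] - when) <= window}

        print(tokens_displayed_around(datetime(2015, 5, 1, 14, 10)))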
  • representations of the tokens displayed in the interface of the display device may be displayed on the client device.
  • the representations may be displayed in one or more user interface screens, such as a screen of user interface 110 of FIG. 1B, or screen 610 of FIG. 6A.
  • the client device may send an indication that a representation was selected over network(s) 230.
  • the indication may be sent to server(s) 240 over network(s) 230, or directly to the display device over network(s) 230 in some embodiments where the client device communicates directly with the display device.
  • the user may select more than one of the representations, and one or more indications may be sent over network(s) 230 to identify the selected representations.
  • the client device may receive information related to content that was displayed in association with the token(s) corresponding to the selected representation(s) over network(s) 230.
  • FIG. 9 illustrates a flowchart of an example method 900, consistent with embodiments of the present disclosure.
  • Example method 900 may be implemented in a computing environment (see, e.g., FIG. 2) using one or more computer systems (see, e.g., FIG. 12).
  • method 900 may be performed by one or more client devices 220.
  • a client device 220 may store a software program that, when executed, allows the client device to communicate with display device(s) 210 and/or server(s) 240, and/or to perform method 900.
  • a plurality of content items may be displayed within an interactive region of an interface of a display device 210.
  • the display device may display a plurality of content items in the interactive region of the interface with which the user interacts.
  • An example of such an interface is illustrated in FIG. 5.
  • step 850 of method 800 may be performed by performing method 900.
  • step 910 may be performed.
  • the client device may display a list of links corresponding to content items that were displayed in association with the token represented by the representation.
  • FIG. 6B illustrates an example user interface screen 620 that displays a listing of links corresponding to content items.
  • the links correspond to the content items displayed in first interactive region 560 of interface 500 of FIG. 5.
  • the client device may send an indication over network(s) 230 that a link presented in the listing was selected by the user.
  • the indication may be sent over network(s) 230 to server(s) 240.
  • the indication may be sent over network(s) 230 to the display device in some embodiments where the client device communicates directly with the display device.
  • the client device may receive information related to a content item corresponding to the selected link over network(s) 230.
  • FIG. 10 illustrates a flowchart of an example method 1000, consistent with embodiments of the present disclosure.
  • Example method 1000 may be implemented in a computing environment (see, e.g., FIG. 2) using one or more computer systems (see, e.g., FIG. 12).
  • method 1000 may be performed by one or more display devices 210.
  • a display device 210 may display a first content in a first interactive region of an interactive interface based on a first user's interactions with the first region. For example, the first user may select, open, or otherwise manipulate content displayed in the first region, such that a first content is displayed in the first region at a given time.
  • FIG. 11 illustrates an example environment 1100 in which method 1000 may be implemented. As shown in FIG. 11, first content A 1120 may be displayed in first interactive region A 1140 of an interactive interface 1110 based on a first user A 1150's interactions with first interactive region A 1140. As further illustrated in FIG. 11, token A 1130 may be displayed in association with first interactive region A 1140.
  • the display device may display a second content in a second interactive region of the interface based on a second user's interactions with the second region.
  • the second user may select, open, or otherwise manipulate content displayed in the second region, such that a second content is displayed in the second region at the given time.
  • second content B 1122 may be displayed in second interactive region B 1142 of an interactive interface 1110 based on a second user B 1152's interactions with second interactive region B 1142.
  • token B 1132 may be displayed in association with second interactive region B 1142.
  • the display device may determine and display a third content, such as a merged content, relating to the first content and the second content in a third region of the interface.
  • the merged content may be determined by the display device, or by server(s) 240.
  • the merged content may be determined by identifying content that has common or similar traits to the displayed first content and the displayed second content. For example, if merged content were to be determined based on the example content items illustrated in interactive regions 560, 562 of FIG. 5, the merged content may include advertisements for rain boots, sandals, or snow boots based on content item 546 displayed in interactive region 560 and content item 554 displayed in interactive region 562.
  • the merged content may also include, for example, news about the health benefits of coffee, based on content item 542 displayed in interactive region 560 and content item 556 displayed in interactive region 562.
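  • One illustrative way to determine merged content from common or similar traits is sketched below; the trait tags, candidate items, and scoring rule are assumptions used only to make the idea concrete:

        # Hypothetical trait tags attached to content items shown in the two
        # regions; merged content is chosen where candidate traits overlap both.
        CONTENT_TRAITS = {
            "weather-546": {"weather", "outdoors"},
            "shoe-ad-554": {"footwear", "shopping", "outdoors"},
            "science-news-542": {"science", "health"},
            "cafe-info-556": {"coffee", "food", "health"},
        }

        MERGED_CANDIDATES = {
            "rain-boot-ad": {"weather", "footwear", "outdoors"},
            "coffee-health-article": {"coffee", "health", "science"},
        }

        def determine_merged_content(first_items, second_items):
            """Rank candidate merged content by how many traits it shares with
            content displayed in the first region and in the second region."""
            first_traits = set().union(*(CONTENT_TRAITS[i] for i in first_items))
            second_traits = set().union(*(CONTENT_TRAITS[i] for i in second_items))

            def score(candidate):
                traits = MERGED_CANDIDATES[candidate]
                return len(traits & first_traits) + len(traits & second_traits)

            return max(MERGED_CANDIDATES, key=score)

        print(determine_merged_content(["weather-546", "science-news-542"],
                                       ["shoe-ad-554", "cafe-info-556"]))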
  • the merged content may be displayed in a region separate from the first interactive region and the second interactive region. In some embodiments, the merged content may be displayed between the first interactive region and the second interactive region. In FIG. 11, for example, merged content 1124 is displayed in a region C 1144 that is between region A 1140 and region B 1142.
  • a token, such as token C 1134 shown in FIG. 11, may be associated with the region displaying the merged content.
  • One or more computer systems 1200 may perform one or more of methods 300, 400, 700, 800, or 900 to enable a user, such as user A 1150 or user B 1152, to select a representation of token C on his/her client device 220 to receive information related to the merged content at his/her client device 220.
  • Interface 1110 may present merged content based on commonalities or similarities among any number of content items being displayed on interface 1110. For example, if a third user were interacting with interface 1110, merged content 1124 may be presented based on commonalities or similarities between content A 1120, content B 1122, and a content presented in a region with which the third user interacts.
  • Method 1000 may allow users using a multi-user interface to interact with one another.
  • An interface displaying merged content based on method 1000 may also present merged content that is desirable to one or more users interacting with the interface, but which none of those users would have identified on his/her own.
  • a user may select multiple representations of tokens when the user desires to receive information related to content displayed in more than one interactive region.
  • merged content, such as that described with reference to FIGs. 10 and 11, may be provided based on the multiple representations that were selected by the user. For example, first content that was displayed in a first interactive region of an interface and second content that was displayed in a second interactive region of the interface may be identified based on the tokens corresponding to the selected representations, and merged content may be determined based on commonalities or similarities between the first content and the second content. The merged content, and/or information related to the merged content, may then be provided to the user's client device 220.
  • information related to content that was displayed in an interactive region may be received at a client device 220 in response to a user selecting a representation of a token associated with the interactive region at the client device 220.
  • Information related to content is not limited to information related to content that was displayed when the representation was selected. Rather, information related to content that was displayed at any time while a user was interacting with an interactive region may be provided in response to the selected representation. In some embodiments, all of the content that was displayed while a user was interacting with an interactive region may be provided in response to the selected representation.
  • a list of links corresponding to all of the content that was displayed while a user was interacting with an interactive region may be displayed, such as the list illustrated in FIG. 6B, and a user may select a link corresponding to any one or more of the content items he/she viewed in order to receive information related to those content items.
  • a display device 210 may be configured to save information related to one or more content items displayed in an interactive region with which a user is interacting in a storage, such as an electronic shopping cart, for later retrieval by the user. For example, as a user interacts with content items in an interactive region of an interface, the user may provide a particular user input requesting that information related to a particular content item be placed in a shopping cart. The information in the shopping cart may be stored at a display device 210 or at server(s) 240.
  • the user can identify only those content items in which he/she is interested while interacting with the interface, and may "checkout" and receive information related to the content items at his/her client device 220 by providing a certain user input, or by selecting a representation of the token associated with the interactive region.
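  • A minimal sketch of such a per-token shopping cart and "checkout" step follows; the storage layout and function names are hypothetical and not taken from the disclosure:

        # Hypothetical per-token "shopping cart" kept at the display device or at
        # server(s) 240 while the user interacts with an interactive region.
        carts = {}

        def save_for_later(token_id, content_id):
            """Store a content item the user flagged while interacting with the
            region associated with the given token."""
            carts.setdefault(token_id, []).append(content_id)

        def checkout(token_id):
            """Return (and clear) everything saved for this token, so the related
            information can be sent to the user's client device."""
            return carts.pop(token_id, [])

        save_for_later("token-520", "mall-directory-544")
        save_for_later("token-520", "weather-546")
        print(checkout("token-520"))   # -> ['mall-directory-544', 'weather-546']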
  • a user may select to receive additional information, such as a browsing history, associated with the user's interaction with an interactive region after selecting the representation of the token associated with the interactive region.
  • a browsing history may include, for example, a listing of all of the content that was displayed in the interactive region while the user was interacting with the region.
  • a first user using a first client device 220 may share a representation of a token with a second user using a second client device 220 over network(s) 230.
  • the first user may share the representation with a friend using a messaging, electronic mail, or social networking application.
  • the second user may receive information related to content that was displayed in the interactive region associated with the represented token.
  • information related to content displayed in interactive region(s) associated with the represented token(s) may be continuously displayed on the client device. For example, if a user has selected a representation of a token associated with a first interactive region, information related to content displayed in the first interactive region may be continuously and dynamically displayed on the client device while the user interacts with the first interactive region. For example, if a user opens a content item in the first interactive region, a representation of that content item may be displayed on the client device. In some embodiments, information related to each of the content items displayed in the first interactive region may be dynamically displayed on the client device as the user manipulates the content items.
  • the user may manipulate the content displayed in a first interactive region based on information related to the content that is displayed on the client device. For example, a user may open a content item in the first interactive region by opening a representation of the content item displayed on the client device.
  • the user may select to move content items between the interface displayed by the display device and a display of the client device. For example, once a user has identified a particular interactive region by selecting a representation of a token associated with that region, the user may move content from the interface displayed by the display device to the client device. For instance, a user may move a web page from being displayed in the interactive region in the interface of the display device to the display of the client device.
  • the user may move content from the client device to the display device.
  • a user may move a web page from being displayed on the display of the client device to the interactive region of the interface of the display device.
  • a user may move content items back and forth between the interface of the display device and the display of the client device.
  • a user may move a web page from being displayed in the interactive region of the interface of the display device to being displayed in the display of the client device, may then manipulate the web page on the client device, and may then move the manipulated web page back to being displayed in the interactive region of the interface.
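  • The back-and-forth hand-off described above might, purely as an illustration, be modeled as transferring a small state record between the display interface and the client device; the registries and field names below are assumptions:

        # Hypothetical content registries for the display interface and the client
        # device; "moving" a web page is modeled as transferring its state record.
        display_region_contents = {"region-560": {"url": "https://example.com/story",
                                                  "scroll": 0}}
        client_contents = {}

        def move_to_client(region_id, client_id):
            """Transfer a content item from an interactive region to the client."""
            item = display_region_contents.pop(region_id, None)
            if item is not None:
                client_contents[client_id] = item
            return item

        def move_to_display(client_id, region_id):
            """Transfer a (possibly manipulated) content item back to the region."""
            item = client_contents.pop(client_id, None)
            if item is not None:
                display_region_contents[region_id] = item
            return item

        move_to_client("region-560", "phone-abc")
        client_contents["phone-abc"]["scroll"] = 1200   # user manipulates it on the phone
        move_to_display("phone-abc", "region-560")
        print(display_region_contents)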
  • computer system(s) 1200 may transform content and/or information related to content, in a manner that allows it to be displayed on a device of a different type. Based on signals or data received from a client device 220, the computer system(s) may identify a type of client device 220, and one or more transformations that need to be performed to content and/or information related to content based on the type.
  • the computer system(s) may identify that the client device is a smartphone, and that content needs to be resized accordingly to fit on the display of the smartphone.
  • the computer system(s) may identify that content requires too much processing power for the smartphone, and may convert the content into a form that requires less processing power.
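  • A minimal sketch of such a device-type transformation, assuming a hypothetical profile table keyed by the reported device type; the profiles and fields are illustrative only:

        # Hypothetical transformation profiles keyed by reported client device type.
        TRANSFORMATIONS = {
            "smartphone": {"max_width": 720, "video": "low-bitrate"},
            "tablet": {"max_width": 1280, "video": "medium-bitrate"},
            "desktop": {"max_width": 1920, "video": "original"},
        }

        def transform_content(content, device_type):
            """Resize and re-encode a content description so it suits the client
            device identified from its signals or data."""
            profile = TRANSFORMATIONS.get(device_type, TRANSFORMATIONS["desktop"])
            return {
                "title": content["title"],
                "width": min(content["width"], profile["max_width"]),
                "video": profile["video"],
            }

        item = {"title": "Shoe advertisement", "width": 3840, "video": "original"}
        print(transform_content(item, "smartphone"))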
  • FIG. 12 is a block diagram illustrating an example computer system 1200 that may be used for implementing embodiments consistent with the present disclosure, including the example systems and methods described herein.
  • Computer system 1200 may include one or more computing devices 1280.
  • Computer system 1200 may be used to implement client device(s) 220, display device(s) 210, and/or server(s) 240.
  • the arrangement and number of components in computer system 1200 is provided for purposes of illustration. Additional arrangements, number of components, or other modifications may be made, consistent with the present disclosure.
  • a computing device 1280 may include one or more processors 1210 for executing instructions.
  • processors suitable for the execution of instructions include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a computing device 1280 may also include one or more input/output (I/O) devices 1220.
  • I/O devices 1220 may include keys, buttons, mice, joysticks, styluses, gesture sensors (e.g., video cameras), motion sensors (e.g., infrared sensors, ultrasound sensors, etc.), voice sensors (e.g., microphones), etc. Keys and/or buttons may be physical and/or virtual (e.g., provided on a touch screen interface).
  • a computing device 1280 may include one or more storage devices configured to store data and/or software instructions used by processor(s) 1210 to perform operations consistent with disclosed embodiments.
  • a computing device 1280 may include main memory 1230 configured to store one or more software programs that, when executed by processor(s) 1210, cause processor(s) 1210 to perform functions or operations consistent with disclosed embodiments.
  • main memory 1230 may include NOR or NAND flash memory devices, read only memory (ROM) devices, random access memory (RAM) devices, etc.
  • a computing device 1280 may also include one or more storage medium(s) 1240.
  • storage medium(s) 1240 may include hard drives, solid state drives, tape drives, redundant array of independent disks (RAID) arrays, etc.
  • Although FIG. 12 illustrates only one main memory 1230 and one storage medium 1240, a computing device 1280 may include any number of main memories 1230 and storage mediums 1240.
  • Although FIG. 12 illustrates main memory 1230 and storage medium 1240 as part of computing device 1280, main memory 1230 and/or storage medium 1240 may be located remotely, and computing device 1280 may be able to access main memory 1230 and/or storage medium 1240 via network(s) 230.
  • Storage medium(s) 1240 may be configured to store data, and may store data received from one or more of server(s) 240, client device(s) 220, and display device(s) 210.
  • the data may take or represent various content or information forms, such as documents, presentations, textual content, mapping information, geographic information, rating information, review information, polling information, directory information, pricing information, advertising information, product information, information regarding days and hours of operation for establishments, search indexes, news, audio files, video files, image files, user profile information, social network information, markup information (e.g., hypertext markup language (HTML) information, extensible markup language (XML) information), games, software applications, weather information, event information (e.g., information regarding appointments, holidays, events), stock market information, and any other type of information and/or content in which users may be interested, or any combination thereof.
  • the data may further include other data received, stored, and/or inferred by a computer system 1200, such as data regarding locations of display device(s) 210, identities of display device(s) 210, locations of client device(s) 220, identities of client device(s) 220, tokens displayed at display device(s) 210, token(s) displayed and/or selected at client device(s) 220, content items displayed at display device(s) 210, content items provided to client device(s) 220, information provided to client device(s) 220, user interaction information (e.g., interaction histories, browsing histories), user preference information, and/or any other data used for carrying out embodiments consistent with the disclosure.
  • a computing device 1280 may also include one or more displays 1250 for displaying data and information.
  • Display(s) 1250 may be implemented using one or more display panels, which may include, for example, one or more cathode ray tube (CRT) displays, liquid crystal displays (LCDs), plasma displays, light emitting diode (LED) displays, touch screen type displays, projector displays (e.g., images projected on a screen or surface, holographic images, etc.), organic light emitting diode (OLED) displays, field emission displays (FEDs), active matrix displays, vacuum fluorescent (VFR) displays, 3-dimensional (3-D) displays, electronic paper (e-ink) displays, microdisplays, or any combination of the above types of displays.
  • a computing device 1280 may further include one or more communications interfaces 1260.
  • Communications interface(s) 1260 may allow software and/or data to be transferred between server(s) 240, client device(s) 220, and/or display device(s) 210. Examples of communications interface(s) 1260 may include modems, network interface cards (e.g., an Ethernet card), communications ports, personal computer memory card international association (PCMCIA) slots and cards, antennas, etc. Communications interface(s) 1260 may transfer software and/or data in the form of signals, which may be electronic, electromagnetic, optical, and/or other types of signals. The signals may be provided to/from communications interface(s) 1260 via a communications path (e.g., network 230), which may be implemented using wired, wireless, cable, fiber optic, radio frequency (RF), and/or other communications channels.
  • a server 240 may include a main memory 1230 that stores a single program or multiple programs and may additionally execute one or more programs located remotely from server 240.
  • a display device 210 and/or client device 220 may execute one or more remotely stored programs instead of, or in addition to, programs stored on these devices.
  • a server 240 may be capable of accessing separate server(s) and/or computing devices that generate, maintain, and provide web sites, and/or event creation and notification servers.
  • the computer-implemented methods disclosed herein may be executed, for example, by one or more processors that receive instructions from one or more non-transitory computer-readable storage mediums.
  • systems consistent with the present disclosure may include at least one processor and memory, and the memory may be a non-transitory computer-readable medium.
  • a non-transitory computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, magnetic strip storage, semiconductor storage, optical disc storage, magneto-optical disc storage, and/or any other known physical storage medium. Singular terms, such as “memory” and “computer-readable storage medium,” may additionally refer to multiple structures, such as a plurality of memories and/or computer-readable storage mediums.
  • a "memory" may comprise any type of computer-readable storage medium unless otherwise specified.
  • a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the one or more processors to perform steps or stages consistent with embodiments disclosed herein. Additionally, one or more computer-readable storage mediums may be utilized in implementing a computer-implemented method.
  • Computer programs based on the written description and methods of this specification are within the skill of a software developer.
  • the various programs or program modules can be created using a variety of programming techniques.
  • program sections or program modules can be designed in or by means of Java, C, C++, assembly language, or any such programming languages.
  • One or more of such software sections or modules can be integrated into a computer system or existing communications software.

Abstract

Computer-implemented systems, methods, and computer-readable media are provided for providing information related to displayed content. In accordance with some embodiments, a first user interacting with first content in a first region of an interface may select a representation of a first token displayed in association with the first region, and receive information related to the first content on a device of the first user. Also, in accordance with some embodiments, a second user interacting with second content in a second region of the interface may select a representation of a second token displayed in association with the second region, and may receive information related to the second content on a device of the second user.

Description

COMPUTERIZED SYSTEMS AND METHODS FOR PROVIDING INFORMATION RELATED
TO DISPLAYED CONTENT
This application claims priority to U.S. Provisional Patent Application No. 61/987,322, filed on May 1, 2014, the entire contents of which are incorporated herein by reference.
DESCRIPTION
Background
[001 ] The present disclosure relates to computerized systems and methods for providing information related to displayed content and, more generally, to interactive display interfaces and information retrieval technologies. By way of example, and without limitation, the present disclosure relates to computerized systems and methods for providing a user with information related to content displayed in a particular region of a display interface.
[002] The use of electronic devices to access content has grown significantly over the years. People can now interact with content using a variety of devices, such as personal computers, laptops, tablets, personal digital assistants (PDAs), personal organizers, mobile phones, smart-phones, and other devices. These devices are typically designed with the intention that one user will interact with the device at a time.
[003] Some devices are designed to accommodate multiple users at the same time. For example, a large display device may present an interface that allows one user to interact with content in a first region of the interface, while other users interact with content in other regions of the interface. Content providers may find it advantageous to provide such displays in public spaces. For example, a large display could be placed at an airport, present a variety of advertisements, and allow users to interact with the advertisements while they wait for their airplanes to board. Such a large display could provide interactive advertisements to many users at a lower cost than providing the advertisements on a number of single advertisement displays. SUMMARY
[004] Embodiments of the present disclosure relate to computerized systems and methods for providing information related to displayed content. In addition, embodiments of the present disclosure relate to solutions for providing a user with information related to content that is displayed in a particular region of a display interface.
[005] In accordance with some embodiments of the present disclosure, computerized systems and methods are provided that allow a user to identify a particular interactive region of a display interface that displayed content in which the user is interested. Once the interactive region has been identified, the computerized systems and methods may provide information related to the content that was displayed in that region to a device of the user. [006] In accordance with some embodiments, there is provided a computer-implemented method for providing information related to displayed content. The method comprises operations performed by one or more processors. The operations include identifying tokens displayed in an interface at a first device, the tokens including a first token associated with a first user interacting with a first interactive region of the interface, and a second token associated with a second user interacting with a second interactive region of the interface. The operations also include causing representations of the first and second tokens to be presented at a second device. The operations further include receiving an indication that the representation of the first token has been selected at the second device. The operations still further include causing information related to content presented in the first interactive region to be sent to the second device based on the received indication.
[007] Furthermore, in accordance with some embodiments, there is provided a computer- implemented system for providing information related to displayed content. The system comprises a memory device that stores instructions and one or more processors that execute the instructions. The one or more processors execute the instructions to identify tokens displayed in an interface at a first device, the tokens including a first token associated with a first user interacting with a first interactive region of the interface, and a second token associated with a second user interacting with a second interactive region of the interface. The one or more processors also execute the instructions to cause representations of the first and second tokens to be presented at a second device. The one or more processors further execute the instructions to receive an indication that the representation of the first token has been selected on the second device. The one or more processors still further execute the instructions to cause information related to content presented in the first interactive region to be sent to the second device based on the received indication.
[008] Additionally, in accordance with some embodiments, there is provided a non-transitory computer-readable medium that stores instructions. The instructions, when executed by one or more processors, cause the one or more processors to perform a method. The method comprises identifying tokens displayed in an interface at a first device, the tokens including a first token associated with a first user interacting with a first interactive region of the interface, and a second token associated with a second user interacting with a second interactive region of the interface. The method also comprises causing representations of the first and second tokens to be presented on a second device. The method further comprises receiving an indication that the representation of the first token has been selected on the second device. The method still further comprises causing information related to content presented in the first interactive region to be sent to the second device based on the received indication.
[009] Before explaining example embodiments consistent with the present disclosure in detail, it is to be understood that the disclosure is not limited in its application to the details of constructions and to the arrangements set forth in the following description or illustrated in the drawings. The disclosure is capable of embodiments in addition to those described and is capable of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein, as well as in the abstract, are for the purpose of description and should not be regarded as limiting.
[010] It is to be understood that both the foregoing general description and the following detailed description are explanatory only and are not restrictive of the claimed subject matter. BRIEF DESCRIPTION OF THE DRAWINGS
[011] The accompanying drawings, which are incorporated in and constitute part of this specification, and together with the description, illustrate and serve to explain the principles of various example embodiments.
[012] FIG. 1A illustrates an example environment for presenting content, consistent with embodiments of the present disclosure.
[013] FIG. 1B illustrates an example user interface, consistent with embodiments of the present disclosure.
[014] FIG. 2 illustrates an example computing environment for implementing embodiments and features consistent with the present disclosure.
[015] FIG. 3 illustrates a flowchart of an example method for providing information related to displayed content, consistent with embodiments of the present disclosure.
[016] FIG. 4 illustrates a flowchart of another example method for providing information related to displayed content, consistent with embodiments of the present disclosure.
[017] FIG. 5 illustrates an example interface for displaying content, consistent with embodiments of the present disclosure.
[018] FIG. 6A illustrates an example user interface screen for selecting a representation of a token, consistent with embodiments of the present disclosure.
[019] FIG. 6B illustrates an example user interface screen for displaying links corresponding to content items, consistent with embodiments of the present disclosure.
[020] FIG. 7 illustrates a flowchart of an example method implemented by a display device, consistent with embodiments of the present disclosure.
[021 ] FIG. 8 illustrates a flowchart of an example method implemented by a client device, consistent with embodiments of the present disclosure.
[022] FIG. 9 illustrates a flowchart of another example method implemented by a client device, consistent with embodiments of the present disclosure.
[023] FIG. 10 illustrates a flowchart of an example method for displaying merged content, consistent with embodiments of the present disclosure.
[024] FIG. 11 illustrates an example environment for displaying merged content, consistent with embodiments of the present disclosure.
[025] FIG. 12 illustrates an example computer system for implementing embodiments and features consistent with the present disclosure.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[026] Reference will now be made in detail to the present embodiments of the disclosure, certain examples of which are illustrated in the accompanying drawings.
[027] Embodiments of the present disclosure relate to computerized systems and methods for providing information related to displayed content. Embodiments of the present disclosure include systems and methods that may provide a user with information related to content displayed in a particular region of a display interface. For example, a display interface may include interactive regions that display one or more types of content. A user may interact with a region of the interface, and may wish to receive information related to that content on a second device. The interface may associate a token with the user, and may display the token in the region of the interface with which the user interacts. The user may select a representation of the token on the second device to identify the region with which he/she was interacting, and information related to the content displayed in that region can then be provided to the second device.
[028] One or more advantages may be achieved by providing content via a display device that supports user interaction. For example, a greater variety of content can be displayed. Moreover, unlike static displays (e.g., posters, signs), an interactive display can present content that is targeted to a particular user's interest. With an interactive display, a user may be able to interact with content to, for example, learn additional information about an advertised product, search for information, play a game, look up directions to a place of interest, and more. Users may find the information provided by such displays to be more useful and/or more engaging. Content providers may find advertisements on such displays to be more effective.
[029] Content providers may also find that it is more cost effective to provide a display supporting multiple user interaction than a display supporting interaction of a single user. For example, a single large display could present an interface with multiple interactive regions, allowing multiple users to interact with the interface at the same time. Such a display could be useful in public spaces, such as airports. For example, users could learn about products advertised on a large public display by interacting with advertisements while waiting for their airplanes to board. However, providing such a display presents some challenges to content providers. For example, a user interacting with a region of a displayed interface may wish to further interact with content displayed in the region, but be unwilling or unable to do so at that time. As one example, a user interacting with advertisement content may arrive at a purchase screen, and may wish to purchase an advertised product, but be unwilling to enter his/her credit card information on the public display. As another example, a user may be unable to further interact with content, because they have to catch a flight that is ready to depart.
[030] Embodiments of the present disclosure can address the challenges associated with providing displays supporting multiple user interaction. For example, embodiments of the present disclosure provide computerized systems and methods that may display a token in the region of the multiuser display interface with which a user interacts, and that may allow the user to select a representation of the token on a device of the user. Selecting the representation may allow the computerized systems and methods disclosed herein to provide the user's device with information related to the content that was displayed in the region, such as the content itself, so that the user can continue to interact with the content using the user's device.
[031] Embodiments of the present disclosure also provide a user-friendly and private way to receive information related to displayed content at a device of a user. For example, by selecting a representation of a token to identify content in which a user is interested, passwords and/or other more complicated ways of authenticating to a device are not needed. No information regarding the user's device needs to be presented on the display and the user's device remains private.
[032] FIG. 1 A illustrates an example environment 100 in which interactive content may be presented to one or more users, consistent with embodiments of the present disclosure. In some embodiments, environment 100 may be a public environment. A public environment may be, for example, an airport terminal, bus station, mall, store, shopping center, railroad station, subway station, park, zoo, stadium, public building, school campus, parking lot, area along a roadway, hospital, museum, wayside station, or any other location frequented by large groups of people. However, a public environment is not so limited. The term "public" may refer to any environment in which two or more individuals are present.
[033] A display device (e.g., a display device 210 of FIG. 2) may present an interactive interface including interactive content, such as interactive interface 160 of FIG. 1 A. In some embodiments, a display device 210 may present interactive interface 160 across one or more display panels. For example, interactive interface 160 may be displayed across one large display panel or across multiple display panels. In some embodiments, the display panel(s) may be display panel(s) 1250, as further described with reference to FIG. 12. Display panel(s) may include, for example, one or more cathode ray tube (CRT) displays, liquid crystal displays (LCDs), plasma displays, light emitting diode (LED) displays, touch screen type displays, projector displays (e.g., images projected on a screen or surface, holographic images, etc.), organic light emitting diode (OLED) displays, field emission displays (FEDs), active matrix displays, a vacuum fluorescent (VFR) displays, 3-dimensional (3-D) displays, electronic paper (e-ink) displays, microdisplays, or any combination of the above types of displays.
[034] Interactive interface 160 may allow one or more users to interact with content displayed in interface 160 through one or more types of user inputs. The one or more types of user inputs may include one or more gestures, motions, voice commands, touch screen inputs, button presses, key presses on a physical or virtual keyboard, etc. For example, user input may include a gesture, motion, or voice command captured by one or more sensors mounted on or in the vicinity of a display device 210. Sensors may include one or more video cameras, infrared sensors, ultrasound sensors, radio frequency sensors, or microphones. User input may further include button or key presses on a touch screen or virtual or physical keyboard. In some embodiments, a user may press a button or key on a separate user device, such as a client device 220, and the input could be communicated over a network (e.g., network 230) to display device 210. [035] Interface 160 may include a certain number of interactive regions. In some embodiments, a display device 210 may present interface 160 across several display panels, and may configure interface 160 to include an interactive region for each panel. In some embodiments, a display device 210 may dynamically configure interface 160 to display a certain number of interactive regions. For example, the number of interactive regions may be dynamically configured based on a number of users that a display device 21 0 detects within the vicinity of display device 210, or that display device 210 detects as interacting with interface 160. In such an example scenario, two interactive regions may be displayed when two users are detected in the vicinity of display device 210, and interface 160 may be divided into three interactive regions when a third user enters the vicinity of display device 210.
[036] A user interacting with interface 160 may interact with content within an interactive region, such as region A 150 or region B 152. In some embodiments, each user interacting with interface 160 may be assigned his/her own interactive region for interacting with content. For example, each interactive region may be configured to allow a single user to interact with content within the region. A token may be displayed in association with an interactive region. In some embodiments, a token may be displayed within an interactive region. In some embodiments, tokens may be displayed in each interactive region within interface 160. In some embodiments, tokens may be displayed in each interactive region within interface 160 with which a user interacts. Alternatively, a token could be displayed outside an interactive region with which it is associated. For example, a representation of the layout of interface 160 could be displayed as a legend in interface 160, along with tokens positioned in the legend such that a user could infer which interactive region corresponds to a given token.
[037] A token may include a color, image, number, character, character string, shape, 3-dimensional object, avatar, and/or any other item that could be used to identify a particular region. In some embodiments, a user may select a particular token to be displayed. For example, a user could choose a particular color, image, number, character, character string, 3-dimensional object, avatar, and/or other item to display in association with the interactive region with which the user interacts.
[038] In some embodiments, a token is first displayed in an interactive region of interface 160 when a user enters the vicinity of the region. For example, one or more sensors could detect when a user enters the vicinity of a particular interactive region, and display device 210 could associate a token with the user and display the token in response to the detection of the user. In some embodiments, a token could be displayed when a user begins interacting with an interactive region. In some embodiments, a token may be displayed when a user finishes interacting with an interactive region. In some
embodiments, a token may be displayed the entire time a user interacts with an interactive region. In some embodiments, a token may be displayed upon a user input from a user. For example, a token can be displayed based upon a particular user input, such as a particular gesture.
[039] A content item displayed within an interactive region of interface 160 may include any type of content in which a user may be interested. This may include, for example, documents, presentations, text, news, articles, maps, geographic information, rating information, review information, polling information, directories, pricing information, advertisements, product information, information regarding days and hours of operation for establishments, audio, video, pictures, images, social network information, games, weather information, event information (e.g., information regarding appointments, holidays, events), stock market information, web pages, URLs, search results, and/or any other type of content in which users may be interested.
[040] As further disclosed herein, information related to a content item may include the content item itself, or any other information related to a content item, such as documents, presentations, text, news, articles, maps, geographic information, rating information, review information, polling information, directories, pricing information, advertisements, product information, information regarding days and hours of operation for establishments, audio, video, pictures, images, social network information, games, weather information, event information (e.g., information regarding appointments, holidays, events), stock market information, web pages, URLs, search results, and/or any other type of information which may be related to a content item. In some example embodiments, information related to a content item may be a representation of the content item that is configured for display on a particular client device 220. For example, information related to a content item may be a representation of the content item that is configured for display on a smartphone.
[041] In some embodiments, content items may be presented in interface 160 in such a way that users may interact with the content items. For example, one or more software applications may be executed on a display device 210 to allow users to interact with the content. In some embodiments, interface 160 may comprise one or more web pages rendered by one or more web browsers stored on display device 210. Interface 160 may present content in such a way that users may, for example, request quotes for particular stocks, search for information using a search engine, purchase an advertised product, play an audio or video file, vote in a poll, search for directions to a destination, look up a store's location and hours, check a social network feed, etc.
[042] Users may interact with content displayed in interface 160 using one or more gestures, motions, key presses, button presses, voice commands, etc. In some embodiments, one or more microphones mounted on, or in the vicinity of, display device 210 may detect voice commands, and display device 210 may process the voice commands to determine user input. In some embodiments, a user may press keys or buttons to interact with content items displayed in interface 160. The keys and/or buttons could be physical and/or virtual (e.g., provided on a touch screen interface).
[043] In some embodiments, one or more video cameras mounted on, or in the vicinity of, display device 210 may capture user gestures. A display device 210 may analyze the user gestures in the captured video to determine user input. For example, display device 210 may interpret a user pointing at a particular content item as a selection of the content item, and/or may interpret a user making a swiping gesture as a manipulation of a content item. In some embodiments, the one or more video cameras may detect various characteristics of users interacting with interface 160. For example, a display device 210 may analyze captured video to determine a user's age, gender, hair color, eye color, and/or other user characteristics. Display device 210 may then identify content items that may be desirable to a user having such characteristics, and may present the content items to the user in interface 160. [044] Content items may be stored locally at display device 210, or may be received by a display device 210 from one or more servers 240. In some embodiments, one or more servers 240 may send content updates to a display device 210, such as real-time updates. For example, server(s) 240 may provide a display device 210 with updated weather content, stock quotes, trending articles, etc. In some embodiments, display device 210 may request content items from server(s) 240 as a user interacting with interface 160 requests content items.
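Returning to the gesture interpretation described in paragraph [043] above, the following Python sketch illustrates, under assumed gesture labels, how captured gestures might be mapped to user inputs such as selecting or manipulating a content item. The mapping, the gesture names, and the function names are hypothetical examples only.

```python
# Hypothetical mapping from detected gestures to interface actions.
from typing import Callable, Dict

def select_item(item: str) -> str:
    return f"selected {item}"          # pointing at a content item selects it

def move_item(item: str) -> str:
    return f"moved {item}"             # a swiping gesture manipulates (moves) it

GESTURE_ACTIONS: Dict[str, Callable[[str], str]] = {
    "point": select_item,
    "swipe": move_item,
}

def handle_gesture(gesture: str, target_item: str) -> str:
    """Interpret a captured gesture as a user input on the targeted content item."""
    action = GESTURE_ACTIONS.get(gesture)
    return action(target_item) if action else "unrecognized gesture"

print(handle_gesture("point", "weather_widget"))   # selected weather_widget
print(handle_gesture("swipe", "news_headlines"))   # moved news_headlines
```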
[045] In the example environment illustrated in FIG. 1A, two users may be interacting with interface 160. In some embodiments, the two users may be simultaneously interacting with interface 160. User A 120 may be interacting with content A 130 in interactive region A 150 of interface 160. User B 122 may be interacting with content B 132 in interactive region B 152 of interface 160. Content A 130 and content B 132 may be the same content, or different content. A token A 140 may be displayed in association with interactive region A 150, and a token B 142 may be displayed in association with interactive region B 152. In some embodiments, token A 140 may be displayed in interactive region A 150, and/or token B 142 may be displayed in interactive region B 152, though the disclosure is not so limited. In some example embodiments, token A 140 is different from token B 142.
[046] A user interacting with interface 160, such as user A 120 and/or user B 122, may wish to receive information related to content displayed in interface 160 on a different device, such as a client device 220. For example, a user may wish to continue interacting with content displayed in interface 160 on a client device 220, or may wish to receive additional information related to the content displayed in interface 160 on a client device 220.
[047] FIG. 1B illustrates an example user interface 110 that may be displayed on a client device 220, and that may allow a user to receive information related to content displayed in interface 160 on client device 220. In some embodiments, representations of the tokens displayed in interface 160 may be displayed in user interface 110. A representation of a token may include a color, image, number, character, character string, shape, 3-dimensional object, avatar, and/or any other item that could be used to identify a particular token that was displayed in an interactive interface of a display device 210. In some embodiments, a representation of a token may be the same as the token. For example, a token that is displayed in a red color may be represented by a representation that is displayed in the red color. In some embodiments, a representation of a token may differ from the token. For example, a token that is displayed in a red color may be represented by a representation that displays the character string "red."
[048] A user may select a representation in user interface 110 to identify an interactive region that contains content in which the user is interested. For example, user A 120 may be interested in receiving content from interface 160. As previously noted with reference to FIG. 1A, user A 120 may have been interacting with content A 130 in interactive region A 150. User interface 110 may be displayed on a client device 220 of user A 120. User A 120 may wish to receive information related to content A 130 on client device 220. Accordingly, user A may select representation A 170 on user interface 110 to identify that interactive region A is the interactive region that displayed the content in which user A was interested. In response to selecting representation A 170, information related to content
A 130 may be received on client device 220.
[049] The above example describes a situation in which a user selects a representation of a token that identifies an interactive region of interface 160 with which the user interacted. However, the disclosure is not so limited. For example, user A 120 may view content B 132 while user B 122 is interacting with interactive region B 152. User A 120 may be interested in content B 132, despite the fact that it is user B 122 that is interacting with content B 132. However, user A 120 may still be able to receive information related to content B 132 on his/her client device 220 by selecting representation B 172 on user interface 110.
[050] In some embodiments, user A 120 may be interested in content A 130 and content B
132. In such embodiments, user A 120 may select both representation A 170 and representation B 172 in user interface 1 10, and may receive information related to content A 130 and information related to content B 132, in response to the selection.
[051] Although user interface 110 has been described above as being a user interface 110 of a client device 220 of user A 120, the disclosure is not so limited. A user interface 110 may also be displayed on a client device 220 of user B 122 when user B 122 is interested in receiving information related to content displayed in interface 160 on his/her client device 220. Similar to the example described above with respect to user A 120, user B 122 may select a representation A 170 when user B 122 wishes to receive information related to content A 130 on a client device 220 of user B 122. User B 122 may select a representation B 172 when user B wishes to receive information related to content B 132 on a client device 220 of user B 122. User B 122 may also select a representation A 170 and a representation B 172 when user B wishes to receive information related to content A 130 and information related to content B 132 on a client device 220 of user B.
[052] In some embodiments, a user that has not interacted with interface 160 may be interested in receiving information related to content displayed in interface 160 on his/her client device 220. For example, the user may view content A 130 and/or content B 132, despite having not interacted with interface 160, and may request information related to content A 130 and/or content B 132 by selecting a representation A 170 and/or a representation B 172 on the user's client device 220.
[053] While FIG. 1A illustrates two users as interacting with interface 160, and illustrates interface 160 as including two interactive regions, the disclosure is not so limited. Any number of users may interact with an interface 160 and any number of interactive regions may be displayed in an interface 160. And while FIG. 1B illustrates two representations as being displayed in user interface 110, the disclosure is not so limited. Any number of representations may be displayed in a user interface 110 of a client device 220. In some embodiments, the number of representations displayed in user interface 110 corresponds to the number of interactive regions included in an interface 160, the number of tokens displayed in an interface 160, or the number of users interacting with an interface 160, though the disclosure is not so limited.
[054] FIG. 2 is a block diagram of an example computing environment 200 for implementing embodiments and features of the present disclosure. The arrangement and number of components in computing environment 200 are provided for purposes of illustration. Additional arrangements, numbers of components, and other modifications may be made, consistent with the present disclosure.
[055] As shown in FIG. 2, computing environment 200 may include one or more client devices 220. By way of example, a client device 220 could be a mobile phone, smart phone, tablet, netbook, electronic reader, personal digital assistant (PDA), personal computer, laptop computer, smart watch, gaming device, desktop computer, set-top box, television, personal organizer, portable electronic device, smart appliance, navigation device, and/or other types of computing devices. In some embodiments, a client device 220 may be implemented with hardware devices and/or software applications running thereon. A user may use a client device 220 to communicate with display device(s) 210 and/or server(s) 240 over network(s) 230. A client device 220 may communicate by transmitting data to and/or receiving data from display device(s) 210 and/or server(s) 240. In some embodiments, one or more of client device(s) 220 may be implemented using a computer system, such as computer system 1200 of FIG. 12.
[056] Computing environment 200 may also include one or more display device(s) 210. By way of example, a display device 210 could be a display, smart display, server, personal computer, desktop computer, mobile phone, smart phone, tablet, netbook, electronic reader, personal digital assistant (PDA), smart watch, gaming device, set-top box, television, personal organizer, portable electronic device, smart appliance, navigation device, and/or other types of computing devices. In some example embodiments, a display device 210 may be a computing device configured to present an interactive interface across one or more display panels, consistent with the embodiments further disclosed herein. In some embodiments, a display device 210 may be implemented with hardware devices and/or software applications running thereon. A display device 210 may communicate with client device(s) 220 and/or server(s) 240 over network(s) 230. For example, a display device 210 may communicate by transmitting data to and/or receiving data from client device(s) 220 and/or server(s) 240. In some embodiments, one or more of display device(s) 210 may be implemented using a computer system, such as computer system 1200 of FIG. 12.
[057] Computing environment 200 may further include one or more server(s) 240. By way of example, server(s) 240 could include any combination of one or more of web servers, databases, mainframe computers, general-purpose computers, personal computers, or other types of computing devices. In some embodiments, one or more of server(s) 240 may be configured to host a web page, implement a search engine 245, index information, store information, and/or retrieve information. In some embodiments, a server 240 may be a standalone computing system or apparatus, or it may be part of a larger system. For example, server(s) 240 may represent distributed servers that are remotely located and communicate over a communications network, or over a dedicated network, such as a local area network (LAN). Server(s) 240 may include one or more back-end servers for carrying out one or more aspects of the present disclosure. [058] Server(s) 240 may be implemented as a server system comprising a plurality of servers, or a server farm comprising a load balancing system and a plurality of servers. In some embodiments, a server 240 may be implemented with hardware devices and/or software applications running thereon. A server 240 may communicate with client device(s) 220 and/or display device(s) 210 over network(s) 230. For example, a server 240 may communicate by transmitting data to and/or receiving data from client device(s) 220 and/or display device(s) 210. In some embodiments, one or more of server(s) 240 may be implemented using a computer system, such as computer system 1200 of FIG. 12.
[059] In some embodiments, a user can submit a query to a search engine 245 within server(s) 240. When the user submits a query, the query may be transmitted through network(s) 230 to server(s) 240. Server(s) 240 may include, or may be connected to, an index database and/or a search engine. Server(s) 240 may respond to the query by generating search results, which are transmitted through network(s) 230 to client(s) 220 and/or display device(s) 210 in a form that may be presented to the user (e.g., a search results web page to be displayed in a web browser running on client(s) 220 and/or display device(s) 210).
[060] In some embodiments, when the query is received by the search engine, the search engine identifies resources that match the query. The search engine may also identify a particular "snippet" or section of each resource that is relevant to the query (or of the highest ranked resources that are relevant to the query). The search engine may include an indexing engine that indexes resources (e.g., web pages, images, or news articles on the Internet) found in a corpus (e.g., a collection or repository of content), an index database that stores the index information, and/or a ranking engine (or other software) that may rank the resources that match the query. The indexing engine may index information using traditional techniques.
[061] The ranking engine may have access to one or more scoring functions that are, for example, associated with the ranking engine. The ranking engine may select a scoring function from the set of scoring functions. The ranking engine may base the selection on user input. Alternatively, the ranking engine may select a scoring function based on instructions received from a scoring functions evaluator. In some implementations, the ranking engine may select multiple scoring functions and send multiple sets of ranked search results, one corresponding to each selected scoring function, to client(s) 220 and/or display device(s) 210.
[062] In some embodiments, the ranking engine may rank search results that are responsive to the query by determining one or more signals for the search result and the query, sending those signals to one of the scoring functions, receiving a score from the scoring function for each search result, and then ranking the search results based on the received scores. The ranking engine and scoring functions may communicate according to commands specified in an application programming interface (API). In general, the API may specify interfaces used by the ranking engine and the scoring function to implement and invoke a series of commands for sharing data. For example, the API may specify a command used by a scoring function to receive scoring data from the ranking engine, and/or may specify a command used by a ranking engine to request a score from a scoring function. In some implementations, data may be passed between the scoring function and the ranking engine in messages encoded according to a messaging format. The messaging format may be specified by the API, or may be separate from the API.
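By way of illustration only, the following Python sketch shows one possible form of the ranking-engine/scoring-function interaction described above, in which signals for each search result are passed to a scoring function and the results are ordered by the returned scores. The signal names and the weights used in simple_scoring_function are assumptions, not part of this disclosure.

```python
# Hypothetical sketch of a ranking engine invoking a scoring function.
from typing import Callable, Dict, List, Tuple

Signals = Dict[str, float]
ScoringFunction = Callable[[Signals], float]

def simple_scoring_function(signals: Signals) -> float:
    # Assumed weighted combination of two example signals.
    return 2.0 * signals.get("title_match", 0.0) + 1.0 * signals.get("body_match", 0.0)

def rank(results: List[str],
         signals_per_result: Dict[str, Signals],
         score: ScoringFunction) -> List[Tuple[str, float]]:
    """Score each search result and order the results by descending score."""
    scored = [(result, score(signals_per_result[result])) for result in results]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

signals = {"doc1": {"title_match": 1.0, "body_match": 0.2},
           "doc2": {"title_match": 0.0, "body_match": 0.9}}
print(rank(["doc1", "doc2"], signals, simple_scoring_function))  # doc1 ranks first
```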
[063] Examples of signals include information about the query itself, for example, the terms of the query, an identifier of the user who submitted the query, and a categorization of the user who submitted the query (e.g., the geographic location from where the query was submitted, the language of the user who submitted the query, interests of the user who submitted the query, or a type of client device 220 or display device 210 used to submit the query (e.g., mobile device, laptop, desktop computer)). The identification of the user may be, for example, a user name and/or the Internet Protocol (IP) address of a client device 220 or display device 210. The geographic location from where the query was submitted may be, for example, a continent, a country, a state, a city, and/or geographic coordinates, such as latitude and/or longitude.
[064] Signals may also include information about the terms of the query, for example, the locations where a query term appears in the title, body, and/or text of anchors in a search result, where a query term appears in anchors pointing to the search result, how a term is used in the search result (e.g., in the title of the search result, in the body of the search result, and/or in a link in the search result), the term frequency (e.g., the number of times the term appears in a corpus of documents in the same language as the query divided by the total number of terms in the corpus), and/or the document frequency (e.g., the number of documents in a corpus of documents that contain the query term divided by the total number of documents in the corpus).
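The term-frequency and document-frequency signals defined above can be illustrated with a short Python sketch; the three-document corpus below is invented purely for illustration.

```python
# Hypothetical corpus used only to illustrate the two signal definitions above.
from typing import List

corpus: List[str] = [
    "new york weather today",
    "weather in new york",
    "chess openings for beginners",
]

def term_frequency(term: str, docs: List[str]) -> float:
    """Occurrences of the term divided by the total number of terms in the corpus."""
    all_terms = [t for doc in docs for t in doc.split()]
    return all_terms.count(term) / len(all_terms)

def document_frequency(term: str, docs: List[str]) -> float:
    """Documents containing the term divided by the total number of documents."""
    return sum(1 for doc in docs if term in doc.split()) / len(docs)

print(term_frequency("weather", corpus))      # 2 / 12
print(document_frequency("weather", corpus))  # 2 / 3
```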
[065] Further examples of signals include information about the search result, for example, a measure of the quality of the search result, a universal resource locator (URL) of the search result, a geographic location where the search result is hosted, when server(s) 240 first added the search result to an index, a language of the search result, a size of the search result (e.g., number of tokens and/or file size), a length of a title of the search result, and/or a length of the text of source anchors for links pointing to a document.
[066] Other examples of signals include information about anchor text for links pointing to the search result, for example, the text itself and the total number of tokens (e.g., words) in the anchor text. For example, if an anchor pointing to the search result has the text "NY" and another anchor has the text "New York," then the signals may include the text "NY" and "New York" as well as the number of tokens in the text: one from "NY" and two from "New York" for a total of three tokens. Other anchor signals for links pointing to the search result may include a number of documents in the domain of the search result that have a link pointing to the search result with given anchor text, and/or a number of documents from different domains than the search result that have a link pointing to the search result with given anchor text.
[067] The ranking engine may also provide additional information to the scoring function, for example, scoring parameters. In some implementations, server(s) 240 may receive the scoring parameters with the query. [068] Computing environment 200 may still further include one or more networks 230.
Network(s) 230 may connect server(s) 240 with client device(s) 220, may connect server(s) 240 with display device(s) 210, and/or may connect display device(s) 210 with client device(s) 220. Network(s) 230 may provide for the exchange of information, such as queries for information and results, between client device(s) 220 and server(s) 240, between client device(s) 220 and display device(s) 210, or between server(s) 240 and display device(s) 210. Network(s) 230 may include one or more types of networks interconnecting display device(s) 210, client device(s) 220, and/or server(s) 240. For example, one client device 220 may communicate with server(s) 240 using a different type of network than a second client device 220 may use to communicate with server(s) 240. As another example, a display device 210 may communicate with server(s) 240 by using a different type of network than a client device 220 may use to communicate with server(s) 240. As a further example, a display device 210 may communicate with client device(s) 220 using a different type of network than is used by server(s) 240 to communicate with display device(s) 210 and/or client device(s) 220.
[069] Network(s) 230 may include one or more wide area networks (WANs), metropolitan area networks (MANs), local area networks (LANs), personal area networks (PANs), or any combination of these networks. Network(s) 230 may include a combination of a variety of different network types, including Internet, intranet, Ethernet, twisted-pair, coaxial cable, fiber optic, cellular, satellite, IEEE 802.11, terrestrial, Bluetooth, infrared, wireless universal serial bus (wireless USB), and/or other types of wired or wireless networks.
[070] FIG. 3 illustrates a flowchart of an example method 300, consistent with embodiments of the present disclosure. Example method 300 may be implemented in a computing environment (see, e.g., FIG. 2) using one or more computer systems (see, e.g., FIG. 12). In some embodiments, method 300 may be performed by one or more servers 240 or by one or more display devices 210.
[071] In step 310, a token that is or was displayed in an interactive interface of a first device, such as a display device 210, may be identified as being associated with an interactive region of the interactive interface of the first device. For example, storage medium(s) may be referenced to identify one or more of the tokens that are and/or were displayed in the interactive interface. In some embodiments, a plurality of tokens that are and/or were displayed in the interactive interface of the first device may be identified. In some embodiments, all of the tokens that are and/or were displayed in the interactive interface of the first device may be identified.
[072] In step 320, a content item that is or was displayed in an interactive interface may be identified as being associated with an interactive region of the interactive interface of the first device. For example, storage medium(s) may be referenced to identify one or more of the content items that are and/or were displayed in the interactive interface. In some embodiments, a plurality of content items that are and/or were displayed in the interactive interface of the first device may be identified. In some embodiments, all of the content items that are and/or were displayed in the interactive interface of the first device may be identified.
[073] In step 330, data may be sent to a second device, such as a client device 220, to cause a representation of the token to be displayed on the second device. For example, a computer system 1200, such as a server 240 or display device 210, may cause the representation to be displayed on the second device. In some embodiments, computer system 1200 may cause representations of a plurality of the tokens that were displayed in an interface of the first device to be displayed on the second device. In some embodiments, computer system 1200 may cause representations of all of the tokens that were displayed on the interface of the first device to be displayed on the second device. Computer system 1200 may cause the representation(s) to be displayed at the second device by transmitting data, such as HTML data, XML data, or other instructions to the second device over network(s) 230. The representation(s) may be displayed at the second device in a user interface, such as user interface 110 of FIG. 1B.
[074] In step 340, an indication that the representation of the token has been selected at the second device may be received. The indication may be received from the second device over network(s) 230 in response to a user selection of the representation. As previously noted, more than one representation may be selected by a user. Accordingly, one or more indications of selected
representations may be received.
[075] In step 350, information related to the content items that were displayed in the interactive region identified by the selected token representation may be sent to the second device. For example, a computer system 1200 may cause the information to be sent to the second device. Computer system 1200 may determine which token(s) correspond to the selected representation(s), and which content item(s) correspond to the interactive portion(s) associated with those token(s). Computer system 1200 may then determine which content items are and/or were displayed in those interactive portion(s). Information related to the determined content items may then be sent to the second device over network(s) 230. Information related to a content item may include the content item, other information related to the content item, or a combination thereof. For example, if a user had interacted with an interactive advertisement, the interactive advertisement and/or other information, such as a URL of a web page describing the advertised product, may be sent to the second device.
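A minimal Python sketch of the lookup performed in step 350 follows; the dictionaries standing in for stored display information, and all identifiers in them, are hypothetical.

```python
# Hypothetical stored display information: representation -> token -> region -> content.
from typing import Dict, List

representation_to_token: Dict[str, str] = {"rep_A": "token_A", "rep_B": "token_B"}
token_to_region: Dict[str, str] = {"token_A": "region_A", "token_B": "region_B"}
region_to_content: Dict[str, List[str]] = {
    "region_A": ["news_headlines", "weather"],
    "region_B": ["chess_game", "shoe_advertisement"],
}

def content_for_selection(selected_representations: List[str]) -> List[str]:
    """Return the content items displayed in the regions identified by the selections."""
    items: List[str] = []
    for rep in selected_representations:
        token = representation_to_token[rep]
        region = token_to_region[token]
        items.extend(region_to_content[region])
    return items

print(content_for_selection(["rep_A"]))           # content items from region A
print(content_for_selection(["rep_A", "rep_B"]))  # content items from both regions
```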
[076] In some embodiments, method 300 may be performed by a display device 210. In such embodiments, computing environment 200 may not require server(s) 240. For example, tokens, content items, and information related to the content items may be stored locally on storage medium(s) within a display device 210. Accordingly, a display device 210 may identify the token(s) (step 310) and content items (step 320) by referencing the storage medium(s), and may cause representation(s) of the token(s) to be displayed at client device(s) 220 (step 330) by sending data indicative of the token(s) to client device(s) 220 over network(s) 230. A display device 210 may also receive indication(s) that representation(s) have been selected from client device(s) 220 over network(s) 230 (step 340). A display device 210 may further send information related to the content item(s) that were displayed in the interactive portion(s) identified by the selected representation(s) to the client device(s) (step 350).
[077] Alternatively, method 300 may be performed by one or more server(s) 240. In performing step 310 of method 300, server(s) 240 may identify token(s) that are and/or were displayed in an interface of a first device, such as a display device 210. For example, server(s) 240 may receive information from the first device indicating that a token is or was displayed in association with an interactive region of the interface of the display device. In some embodiments, a plurality of tokens that are and/or were displayed in association with interactive regions of the interface may be identified. In some embodiments, all of the tokens that are and/or were displayed in association with interactive regions of the interface may be identified.
[078] In some embodiments, the first device may periodically transmit information regarding one or more tokens that are and/or were displayed in association with interactive regions of the interface to server(s) 240 over network(s) 230. In some embodiments, server(s) 240 may periodically poll the first device over network(s) 230 to receive information about tokens that are and/or were displayed in association with interactive regions of the interface. In some embodiments, server(s) 240 may be aware of the tokens that are and/or were displayed in association with interactive regions of the interface. For example, server(s) 240 may have provided instructions to the first device over network(s) 230 to control the configuration and/or display of the interactive regions and/or tokens of the interface. In such embodiments, server(s) 240 may store information about the tokens that are and/or were displayed in association with interactive regions of the interface, and may reference storage medium(s) to identify one or more of the tokens that were presented.
[079] Server(s) performing step 320 may identify content item(s) that are and/or were displayed in an interface of the first device. For example, server(s) 240 may receive information over network(s) 230 from the first device indicating that a content item is and/or was displayed in an interactive region of the interface. In some embodiments, a plurality of content items that are and/or were displayed in association with interactive regions of the interface may be identified. In some
embodiments, all of the content items that are and/or were displayed in association with interactive regions of the interface may be identified.
[080] In some embodiments, the first device may periodically transmit information regarding content item(s) that are and/or were displayed in association with interactive regions of the interface to server(s) 240 over network(s) 230. In some embodiments, server(s) 240 may periodically poll the first device over network(s) 230 to receive information about content items that are and/or were displayed in association with interactive regions of the interface. In some embodiments, server(s) 240 may be aware of the content items that are and/or were displayed in association with interactive regions of the interface. For example, server(s) 240 may provide instructions to the first device over network(s) 230 to configure and/or display interactive regions and/or content items within the interface. In such embodiments, server(s) 240 may store information about the content items that are and/or were displayed in association with interactive regions of the interface of the display device, and may reference storage medium(s) to identify the content item(s) that were presented.
[081] In step 330, server(s) 240 may cause representation(s) of the token(s) to be displayed on the second device. In some embodiments, server(s) 240 may cause representations of a plurality of the tokens that are and/or were displayed in the interface of the first device to be displayed on the second device. In some embodiments, server(s) 240 may cause representations of all of the tokens that were displayed in the interface of the first device to be displayed on the second device. Server(s) 240 may cause the representation(s) to be displayed on the second device based on a signal or user command received over network(s) 230 from the second device. Server(s) 240 may cause the representation(s) to be displayed on the second device by transmitting data, such as HTML data, XML data, or other instructions to the second device over network(s) 230. The representation(s) may be displayed at the second device in a user interface, such as user interface 110 of FIG. 1B.
[082] In step 340, an indication that a representation of a token has been selected at the second device may be received at server(s) 240. The indication may be received from the second device over network(s) 230 in response to a user selection of the representation. In some embodiments where representations of two tokens have been selected at the second device, a first indication corresponding to the representation of the first token may be received from the second device, and a second indication corresponding to the representation of the second token may be received from the second device.
Alternatively, a single indication may be received from the second device indicating that representations of the two tokens have been selected.
[083] In step 350, server(s) 240 may determine the token(s) represented by the selected representation(s), and may determine the content item(s) that were displayed in the interactive region(s) identified by the token(s). Information related to the determined content item(s) may then be sent to the second device over network(s) 230. The information related to the content items may include the content item(s) themselves, other information related to the content item(s), or a combination thereof. For example, if a user had interacted with an interactive advertisement for a product, the interactive advertisement and/or additional information, such as a URL to a web page describing the product, may be sent to the second device over network(s) 230. In some embodiments, the information related to the determined content item(s) may be representations of the determined content item(s) configured for display on a particular client device 220, such as a phone.
[084] Server(s) 240 may be connected to a plurality of display devices 210 over network(s) 230. For example, server(s) 240 may be connected to hundreds, or even thousands, of display devices 210. Accordingly, server(s) 240 may need to identify a display device 210 with which a user interacted, such as the first device described above. In accordance with some embodiments described below, server(s) 240 may identify the first device.
[085] The first device may be identified based on a notification received from a client device 220, such as the second device described above. The notification may represent data that identifies the first device, such as a universal resource locator (URL), code, character string, bar code, quick response (QR) code, or any other data that can be used to identify an item.
[086] In some embodiments, the identifying data may be a location of the second device, and the first device may be identified based on the location. For example, the second device may determine its location via one or more of global positioning satellite (GPS) signals, cellular signals, base station signals, and/or any other signals suitable for determining the location of an electronic device. Once the second device has determined its location, information regarding that location may be used to identify the first device.
[087] In some embodiments, the first device may be identified by a URL. The URL may be marked on or near the first device, or may be presented in the interface displayed by the first device. A user using the second device may then type the URL into a web browser and the second device may connect to a web page for the first device. With the second device connected, one or more web servers 240 may then carry out method 300.
[088] In some embodiments, server(s) 240 may receive the notification from the second device over network(s) 230. Upon reception of the notification, server(s) 240 may identify the first device. For example, server(s) 240 may consult one or more databases, tables, or lists stored locally or remotely on server(s) 240 to determine which display device 210 corresponds to the first device. If the data represented by the notification matches identification data stored on server(s) 240 for the first device, server(s) 240 may determine that the user of the second device was interacting with the first device. If the notification represents location information of the second device, server(s) 240 may consult the one or more databases, tables, or lists to identify locations of display devices 210, and may identify the first device based on a determination that a stored location of the first device is closest to a location indicated by the location information. Once the first device has been identified, server(s) 240 may associate the first device with the second device.
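By way of illustration only, one way to identify the display device closest to a reported client location is sketched below in Python using a great-circle (haversine) distance; the device names and coordinates are invented, and this is not the only distance measure that could be used.

```python
# Hypothetical lookup of the display device nearest to the client device's location.
import math
from typing import Dict, Tuple

def haversine_km(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Great-circle distance in kilometers between two (latitude, longitude) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def closest_display(client_location: Tuple[float, float],
                    display_locations: Dict[str, Tuple[float, float]]) -> str:
    """Return the display device whose stored location is nearest to the client."""
    return min(display_locations,
               key=lambda device: haversine_km(client_location, display_locations[device]))

displays = {"airport_terminal_display": (37.615, -122.390),
            "mall_entrance_display": (37.332, -121.895)}
print(closest_display((37.620, -122.380), displays))  # airport_terminal_display
```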
[089] Although FIG. 3 illustrates performing the steps of method 300 in a particular order, the disclosure is not so limited. For example, step 320 may precede step 310, or may be conducted in parallel with step 310. In some embodiments, step 320 may not be performed until after step 340 has been performed. In such embodiments, only the content item(s) associated with the region(s) indicated by the selected representation(s) need be identified.
[090] Additionally, for illustrative purposes, example method 300 has been described with respect to a single client device 220 (e.g., second device) that communicates with server(s) 240 and/or a display device 210 (e.g., first device). However, the disclosure is not so limited. As noted previously, method 300 may be performed (e.g., by server(s) 240 or display device(s) 210) for any number of client devices 220. For example, method 300 may also be performed for a second user interacting with an interactive region of the interface displayed on the first device, such as user B 122 of FIG. 1A. In step 330, a computer system 1200 (e.g., server(s) 240 or display device(s) 210) may cause representations of the tokens displayed on the interface of the first device to be sent to a device (e.g., third device) of the second user over network(s) 230. In step 340, an indication that one of the representations was selected by the second user on the third device may be received at the computer system over network(s) 230. For example, the second user may select a representation of a second token that corresponds to a second interactive region with which the second user had interacted, and a second indication corresponding to the second user's selection may be received over network(s) 230. In step 350, the computer system may cause information related to the content presented in the second interactive region to be sent over network(s) 230 to the third device based on the received second indication.
[091] FIG. 4 illustrates a flowchart of an example method 400, consistent with embodiments of the present disclosure. Example method 400 may be implemented in a computing environment (see, e.g., FIG. 2) using one or more computer systems (see, e.g., FIG. 12). In some embodiments, method 400 may be performed by one or more servers 240 or by one or more display devices 210.
[092] In some embodiments of the disclosure, a plurality of content items may be displayed within an interactive region of an interface of a display device 210. For example, the first device discussed with reference to method 300 may display a plurality of content items in the interactive region of the interface with which the user interacts. An example of such an interface is illustrated in FIG. 5, which is discussed further below. In such embodiments, step 350 of method 300 may be performed by performing method 400. For example, after receiving an indication over network(s) 230 that a representation of a token has been selected at the second device, step 410 may be performed. In step 410, a computer system 1200 (e.g., server(s) 240 or display device(s) 210) may identify a token based on the selected representation, and content items that were displayed in the interactive region associated with the token. The computer system may then cause a listing of links corresponding to the content items to be presented on the second device. For example, computer system 1200 may send instructions or commands over network(s) 230 that cause the listing to be presented. An example listing is illustrated in user interface 620 of FIG. 6B.
[093] In step 420, an indication may be received over network(s) 230 that a link presented in the listing has been selected by the user at the second device. In step 430, the computer system may cause information related to a content item corresponding to the selected link to be sent to the second device over network(s) 230.
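A short Python sketch of method 400 follows, under assumed data: a listing of links is built for the content items that were displayed in the identified interactive region, and a selected link is then resolved to its related information. The example content items and URLs are hypothetical.

```python
# Hypothetical content items displayed in one interactive region, keyed by name.
from typing import Dict, List

content_items: Dict[str, str] = {
    "news_headlines": "https://example.com/news",
    "current_weather": "https://example.com/weather",
}

def build_link_listing(items: Dict[str, str]) -> List[Dict[str, str]]:
    """Step 410: one link entry per content item displayed in the region."""
    return [{"label": name, "url": url} for name, url in items.items()]

def resolve_selected_link(listing: List[Dict[str, str]], selected_index: int) -> str:
    """Steps 420-430: return information (here, a URL) related to the selected link."""
    return listing[selected_index]["url"]

listing = build_link_listing(content_items)
print([entry["label"] for entry in listing])   # labels shown to the user
print(resolve_selected_link(listing, 0))       # information for the selected link
```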
[094] FIG. 5 illustrates an example interactive interface 500 of a display device 210, such as the first device discussed with respect to methods 300 and 400. Interface 500 may include a first interactive region 560 and a second interactive region 562. A first token 520 may be displayed in association with first interactive region 560, and a second token 522 may be displayed in association with second interactive region 562. In some embodiments, a first headline 530 may be displayed in first interactive region 560 and a second headline 532 may be displayed in association with second interactive region 562. Headlines 530, 532 may include, for example, a trending topic or top story of the day.
Additionally, a plurality of content items may be displayed in each of the interactive regions.
[095] For example, content item 540 may display news headlines, content item 542 may display science news, content item 544 may display a mall directory, and content item 546 may display the current weather. A user interacting with first interactive region 560 may interact with these content items to, for example, learn more about the news, store locations in a mall, science discoveries, or the upcoming weather.
[096] Content item 552 may display a chess game, content item 554 may display an advertisement for shoes, content item 556 may display information about a cafe, and content item 558 may display a user's social network information. A user interacting with second interactive region 562 may interact with these content items to, for example, play a chess match, learn about a product (e.g., shoes), review menu items at a restaurant, or keep updated on their friends' lives.
[097] An interface of a display device 210 is not limited to the example illustration of FIG. 5. Any number of content items may be displayed in an interactive region, and a displayed content item may have any size or shape. Moreover, any number of interactive regions may be displayed in interface 500.
[098] FIGs. 6A and 6B illustrate example user interface screens 610 and 620 displayed on a client device 220 of a user, such as the second device described with respect to methods 300 and 400. Screen 610 may be displayed on the second device when step 330 of method 300 is performed. FIG. 6A illustrates an example of a user interface screen that may be displayed to a user interacting with interface 500. For example, first representation 630 may correspond to first token 520 of interface 500, and second representation 632 may correspond to second token 522 of interface 500. Screen 610 may also display an instruction 670 to the user.
[099] FIG. 6B illustrates an example of a user interface screen 620 that may be displayed to the user after the user has selected a representation on the second device, such as a selection of first representation 630 of screen 610. For example, screen 620 may be displayed on the second device when step 410 of method 400 is performed. Screen 620 may display the selected representation 640 on the screen to inform the user that screen 620 corresponds to the interactive region associated with the token represented by representation 640. Screen 620 may also display links for each of the content items that were displayed in that interactive region. For example, link 660 may correspond to content item 540 of interface 500, link 662 may correspond to content item 544 of interface 500, link 664 may correspond to content item 542 of interface 500, and link 666 may correspond to content item 546 of interface 500. Selection of any of the links may cause information related to the content item identified by the link to be sent to the second device. Screen 620 may also include an instruction 650 to the user.
[0100] FIG. 7 illustrates a flowchart of an example method 700, consistent with embodiments of the present disclosure. Example method 700 may be implemented in a computing environment (see, e.g., FIG. 2) using one or more computer systems (see, e.g., FIG. 12). In some embodiments, method 700 may be performed by one or more display devices 210.
[0101] In step 710, a user may be detected. For example, a display device 210 may detect that a user moves into the vicinity of the display device via one or more sensors (e.g., video cameras, motion sensors, microphones, etc.). In step 720, the display device may receive content. For example, the display device may receive content from local storage medium(s) or from server(s) 240 over network(s) 230. The content may be received based on a user interaction, a user preference stored locally or at server(s) 240, or as part of a periodic update. In step 730, the display device may display the received content as part of an interactive interface displayed on one or more display panel(s).
[0102] In step 740, the display device may display a token. For example, the display device may display a token in association with an interactive region of the interface with which the user interacts. In some embodiments, the token may be automatically determined by the display device, or determined based on an instruction received from server(s) 240 over network(s) 230. For example, the display device may select the token for display based on a determination by the display device or server(s) 240 that the token is not already being displayed at the display device, that the token is not being displayed at a certain number of display devices, or that the token is not being displayed at any of display devices 210. In some embodiments, the user may choose the token that is displayed. For example, the user may interact with the display device to browse through a listing of available tokens, and may select one for display in association with the user.
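One possible token-selection rule for step 740, choosing a token that is not already in use, is sketched below in Python; the token values and the first-unused policy are assumptions made for illustration.

```python
# Hypothetical pool of tokens and a simple "first unused" selection policy.
from typing import List, Set

AVAILABLE_TOKENS: List[str] = ["red", "blue", "green", "star", "triangle"]

def select_token(tokens_in_use: Set[str]) -> str:
    """Pick a token that is not currently displayed (locally or across display devices)."""
    for token in AVAILABLE_TOKENS:
        if token not in tokens_in_use:
            return token
    raise RuntimeError("no unused token available")

print(select_token({"red", "blue"}))  # "green"
```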
[0103] In step 750, the display device may provide display information. The display device may provide the display information to storage medium(s) on the display device for storage, and/or may provide the display information over network(s) 230 for storage at server(s) 240. The display device may provide the display information to server(s) 240 over network(s) 230 periodically, or based on a request received over network(s) 230 from server(s) 240. For example, server(s) 240 may regularly poll the display device over network(s) 230 for the display information.
[0104] The display device may, for example, store information about content that has been displayed and/or tokens that have been displayed. The display device may also store information about locations in the interface in which the content has been displayed, in which the tokens have been displayed, and/or in which the interactive regions have been provided. The display device may further store information defining associations between tokens and content that have been displayed in association with each other, between tokens and interactive regions that have been provided in association with each other, and/or between content and regions that have been provided in association with each other. The display device may further store information about user interaction histories and/or browsing histories. For example, the display device may store a history of user inputs performed by a user. The display device may also store a history of content with which a user has interacted. Any of the stored display information may be provided over network(s) 230 for storage at server(s) 240.
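The display information described above might be recorded in a structure along the following lines; this Python sketch and its field names are illustrative assumptions, not a schema defined by this disclosure.

```python
# Hypothetical record of display information for one display device.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DisplayInformation:
    device_id: str
    token_by_region: Dict[str, str] = field(default_factory=dict)          # region -> token
    content_by_region: Dict[str, List[str]] = field(default_factory=dict)  # region -> content items
    interaction_history: List[str] = field(default_factory=list)           # user inputs, in order

record = DisplayInformation(
    device_id="display_device_210_airport",
    token_by_region={"region_A": "red", "region_B": "blue"},
    content_by_region={"region_A": ["news_headlines", "weather"],
                       "region_B": ["chess_game", "shoe_advertisement"]},
    interaction_history=["user A selected news_headlines", "user B opened chess_game"],
)
print(record.token_by_region["region_A"])  # token displayed in association with region A
```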
[0105] The display information may be used by one or more computer systems 1200 of computing environment 200 to carry out various aspects of the disclosure. For example, the display information may be used to help one or more computer systems 1200 (e.g., server(s) 240 or display device(s) 210) identify tokens associated with regions of an interface of the device, such as in step 310 of method 300. The display information may also be used to help the computer system(s) 1200 identify content associated with regions of an interface of the device, such as in step 320 of method 300.
[0106] Although FIG. 7 illustrates performing the steps of method 700 in a particular order, the disclosure is not so limited. For example, steps 710-750 may be performed in any order, or may be performed in parallel.
[0107] FIG. 8 illustrates a flowchart of an example method 800, consistent with embodiments of the present disclosure. Example method 800 may be implemented in a computing environment (see, e.g., FIG. 2) using one or more computer systems (see, e.g., FIG. 12). In some embodiments, method 800 may be performed by one or more client devices 220. For example, a client device 220 may store a software program that, when executed, allows the client device to communicate with display device(s) 210 and/or server(s) 240, and/or to carry out method 800. The software application may be provided to the client device from a content provider. In some embodiments, the software application may be a web browser application.
[0108] In step 810, the client device receives display device identification information. In some embodiments, the display device identification information may be acquired based on some user action. For example, a user may see information that identifies a display device 210 displayed in association with the display device, and may enter the information into a software application stored on the client device. For example, the information could be a URL, code, character string, bar code, number (e.g., serial number), QR code, or any other data that can be used to identify an item. In some embodiments, the user may enter the identification information into the client device. For example, the user could type the URL into a web browser, type in a code, type in a character string, capture a bar code or QR code with a camera of the client device, etc.
[0109] In some embodiments, a software application stored on the client device may prompt the user with a request to determine identification information for the display device. For example, the client device could present the user with a prompt indicating that a nearby display device is trying to communicate with the client device over network(s) 230. For example, the display device may wish to send a code or other data to the client device over network(s) 230 that would allow the client device to identify the display device.
[0110] In some embodiments, the client device may prompt the user with a request to allow the client device to determine its location. If authorized, the client device may determine its location via one or more global positioning satellite (GPS) signals, cellular signals, base station signals, or any other signals known to one of skill in the art for determining the location of the client device. Information regarding the location of the client device may then be used to identify the display device based on the proximity of the display device to the location of the client device. For example, location information may be transmitted to server(s) 240 over network(s) 230. Server(s) 240 may consult a database, listing, or table of display devices, and may identify the display device by determining that it is the display device closest in proximity to the client device.
[0111] Alternatively, the device identification information may be automatically captured by the client device. For example, a software application running on the client device may automatically capture the device identification information using one or more cameras, video cameras, microphones, GPS signals, cellular signals, base station signals, or signals received from the display device over network(s) 230.
[0112] As disclosed previously, in some embodiments, method 300 may be performed by the display device. In such embodiments, the client device may not need to identify the display device, because the client device may communicate directly with the display device over network(s) 230.
Accordingly, in such embodiments, step 810 may be skipped.
[0113] In step 820, the client device may send an indication of a request to display representations over network(s) 230. For example, a user interacting with the interface of the display device may provide a user input on the client device to request that representations of the tokens being displayed in the interface be presented on the client device. In some embodiments, the user may enter the request before, after, or at any time during his/her interaction with the interface of the display device. In some embodiments, the user may enter the request so long as the user remains within a certain proximity of the display device. In some embodiments, the client device may automatically request to display the representations after receiving the display device identification information in step 810. In some embodiments, the client device may automatically display the representations on the client device so long as the user interacts with, or remains in proximity of, the display device.
[0114] In some embodiments, the user may enter the request at any time, even hours, days, months, or years after interacting with the display device. For example, the user could enter the display device identification information into an application on the client device, along with the time and/or date in which the user interacted with the display device. The display device and/or server(s) 240 may search through a history of stored display information and determine the tokens, content, and/or interactive regions presented on the display device at that time and/or date.
[0115] In step 830, representations of the tokens displayed in the interface of the display device may be displayed on the client device. For example, the representations may be displayed in one or more user interface screens, such as a screen of user interface 110 of FIG. 1B, or screen 610 of FIG. 6A.
[0116] In step 840, the client device may send an indication that a representation was selected over network(s) 230. The indication may be sent to server(s) 240 over network(s) 230, or directly to the display device over network(s) 230 in some embodiments where the client device communicates directly with the display device. As previously discussed above, the user may select more than one of the representations, and one or more indications may be sent over network(s) 230 to identify the selected representations. In step 850, the client device may receive information related to content that was displayed in association with the token(s) corresponding to the selected representation(s) over network(s) 230.
[0117] FIG. 9 illustrates a flowchart of an example method 900, consistent with embodiments of the present disclosure. Example method 900 may be implemented in a computing environment (see, e.g., FIG. 2) using one or more computer systems (see, e.g., FIG. 12). In some embodiments, method 900 may be performed by one or more client devices 220. For example, a client device 220 may store a software program that, when executed, allows the client device to communicate with display device(s) 210 and/or server(s) 240, and/or to perform method 900.
[0118] In some embodiments of the disclosure, a plurality of content items may be displayed within an interactive region of an interface of a display device 210. For example, the display device may display a plurality of content items in the interactive region of the interface with which the user interacts. An example of such an interface is illustrated in FIG. 5. In such embodiments, step 850 of method 800 may be performed by performing method 900. For example, after sending an indication of a user selection of a representation over network(s) 230, step 910 may be performed. In step 910, the client device may display a list of links corresponding to content items that were displayed in association with the token represented by the representation. For example, FIG. 6B illustrates an example user interface screen 620 that displays a listing of links corresponding to content items. In the example of FIG. 6B, the links correspond to the content items displayed in first interactive region 560 of interface 500 of FIG. 5.
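A minimal sketch of step 910 is shown below: the client builds the list of links from the content items that were displayed in association with the selected token. The token-to-content mapping and its field names are hypothetical.

```python
# Hypothetical mapping from a token to the content items that were displayed
# in the associated interactive region (cf. first interactive region 560 of FIG. 5).
CONTENT_BY_TOKEN = {
    "A": [
        {"title": "Rain boots - spring sale", "url": "https://example.com/boots"},
        {"title": "Health benefits of coffee", "url": "https://example.com/coffee"},
    ],
}

def links_for_token(token):
    """Step 910 sketch: produce the list of links to present on the client device."""
    return [(item["title"], item["url"]) for item in CONTENT_BY_TOKEN.get(token, [])]

for title, url in links_for_token("A"):
    print(f"{title}: {url}")
```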
[0119] In step 920, the client device may send an indication over network(s) 230 that a link presented in the listing was selected by the user. For example, the indication may be sent over network(s) 230 to server(s) 240. Alternatively, the indication may be sent over network(s) 230 to the display device in some embodiments where the client device communicates directly with the display device. In step 930, the client device may receive information related to a content item corresponding to the selected link over network(s) 230.
[0120] FIG. 10 illustrates a flowchart of an example method 1000, consistent with embodiments of the present disclosure. Example method 1000 may be implemented in a computing environment (see, e.g., FIG. 2) using one or more computer systems (see, e.g., FIG. 12). In some embodiments, method 1000 may be performed by one or more display devices 210.
[0121] In step 1010, a display device 210 may display a first content in a first interactive region of an interactive interface based on a first user's interactions with the first region. For example, the first user may select, open, or otherwise manipulate content displayed in the first region, such that a first content is displayed in the first region at a given time. FIG. 11 illustrates an example environment 1100 in which method 1000 may be implemented. As shown in FIG. 11, first content A 1120 may be displayed in first interactive region A 1140 of an interactive interface 1110 based on a first user A 1150's interactions with first interactive region A 1140. As further illustrated in FIG. 11, token A 1130 may be displayed in association with first interactive region A 1140.
[0122] In step 1020, the display device may display a second content in a second interactive region of the interface based on a second user's interactions with the second region. For example, the second user may select, open, or otherwise manipulate content displayed in the second region, such that a second content is displayed in the second region at the given time. As shown in FIG. 11, second content B 1122 may be displayed in second interactive region B 1142 of the interactive interface 1110 based on a second user B 1152's interactions with second interactive region B 1142. As further illustrated in FIG. 11, token B 1132 may be displayed in association with second interactive region B 1142.
[0123] In step 1030, the display device may determine and display a third content, such as a merged content, relating to the first content and the second content in a third region of the interface. The merged content may be determined by the display device, or by server(s) 240. In some embodiments, the merged content may be determined by identifying content that has common or similar traits to the displayed first content and the displayed second content. For example, if merged content were to be determined based on the example content items illustrated in interactive regions 560, 562 of FIG. 5, the merged content may include advertisements for rain boots, sandals, or snow boots based on content item 546 displayed in interactive region 560 and content item 554 displayed in interactive region 562. The merged content may also include, for example, news about the health benefits of coffee, based on content item 542 displayed in interactive region 560 and content item 556 displayed in interactive region 562.
[0124] The merged content may be displayed in a region separate from the first interactive region and the second interactive region. In some embodiments, the merged content may be displayed between the first interactive region and the second interactive region. In FIG. 11, for example, merged content 1124 is displayed in a region C 1144 that is between region A 1140 and region B 1142. In some embodiments, a token, such as token C 1134 shown in FIG. 11, may be associated with the region displaying the merged content. One or more computer systems 1200 may perform one or more of methods 300, 400, 700, 800, or 900 to enable a user, such as user A 1150 or user B 1152, to select a representation of token C on his/her client device 220 to receive information related to the merged content at his/her client device 220.
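One way such a trait-based determination could be sketched in Python is shown below. The trait sets, the candidate catalog, and the intersection heuristic are hypothetical simplifications; the disclosed embodiments are not limited to any particular similarity measure.

```python
# Hypothetical content descriptors; each displayed content carries a set of traits.
CONTENT_TRAITS = {
    "content A (region A 1140)": {"footwear", "weather", "coffee"},
    "content B (region B 1142)": {"footwear", "coffee", "news"},
}

# Hypothetical catalog of candidate items that could be shown as merged content.
CANDIDATES = {
    "rain boots advertisement": {"footwear", "weather"},
    "coffee health article":    {"coffee", "news"},
    "stock ticker":             {"finance"},
}

def merged_content(displayed=CONTENT_TRAITS, candidates=CANDIDATES):
    """Step 1030 sketch: pick candidates sharing a trait common to every displayed content."""
    common = set.intersection(*displayed.values())  # traits shared by all regions
    return [name for name, traits in candidates.items() if traits & common]

print(merged_content())  # -> ['rain boots advertisement', 'coffee health article']
```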
[0125] Although two users are described with reference to method 1000 and environment 1100, any number of users may interact with an interactive interface, such as interface 1110. Interface 1110 may present merged content based on commonalities or similarities among any number of content items being displayed on interface 1110. For example, if a third user were interacting with interface 1110, merged content 1124 may be presented based on commonalities or similarities between content A 1120, content B 1122, and a content presented in a region with which the third user interacts.
[0126] Method 1000 may allow multiple users of a multi-user interface to interact with one another. An interface displaying merged content based on method 1000 may also present merged content that is desirable to one or more users interacting with the interface, but that none of the users would have identified on his/her own.
[0127] As previously discussed, a user may select multiple representations of tokens when the user desires to receive information related to content displayed in more than one interactive region. In some embodiments, merged content, such as that described with reference to FIGs. 10 and 11, may be provided based on the multiple representations that were selected by the user. For example, first content that was displayed in a first interactive region of an interface and second content that was displayed in a second interactive region of the interface may be identified based on the tokens corresponding to the selected representations, and merged content may be determined based on commonalities or similarities between the first content and the second content. The merged content, and/or information related to the merged content, may then be provided to the user's client device 220.
[0128] As has been noted throughout this disclosure, information related to content that was displayed in an interactive region may be received at a client device 220 in response to a user selecting a representation of a token associated with the interactive region at the client device 220. Information related to content, as discussed throughout this disclosure, is not limited to information related to content that was displayed when the representation was selected. Rather, information related to content that was displayed at any time while a user was interacting with an interactive region may be provided in response to the selected representation. In some embodiments, all of the content that was displayed while a user was interacting with an interactive region may be provided in response to the selected representation. Alternatively, a list of links corresponding to all of the content that was displayed while a user was interacting with an interactive region may be displayed, such as the list illustrated in FIG. 6B, and a user may select a link corresponding to any one or more of the content items he/she viewed in order to receive information related to those content items.
[0129] In some embodiments, a display device 210 may be configured to save information related to one or more content items displayed in an interactive region with which a user is interacting in a storage, such as an electronic shopping cart, for later retrieval by the user. For example, as a user interacts with content items in an interactive region of an interface, the user may provide a particular user input requesting that information related to a particular content item be placed in a shopping cart. The information in the shopping cart may be stored at a display device 210 or server(s) 240. Accordingly, the user can identify only those content items in which he/she is interested while interacting with the interface, and may "check out" and receive information related to the content items at his/her client device 220 by providing a certain user input, or by selecting a representation of the token associated with the interactive region.
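A minimal sketch of such a per-region "shopping cart" is given below, assuming an in-memory store keyed by token; the class, method names, and storage layout are hypothetical.

```python
from collections import defaultdict

class RegionCart:
    """Sketch of a shopping cart of content items saved while a user interacts
    with an interactive region; names and storage layout are hypothetical."""

    def __init__(self):
        self._items = defaultdict(list)  # token -> saved content items

    def save(self, token, content_item):
        """Called when the user requests that a content item be kept for later."""
        self._items[token].append(content_item)

    def checkout(self, token):
        """Return (and clear) everything saved for the region identified by token."""
        return self._items.pop(token, [])

cart = RegionCart()
cart.save("A", {"title": "Rain boots - spring sale", "url": "https://example.com/boots"})
print(cart.checkout("A"))
```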
[0130] In some embodiments, a user may select to receive additional information, such as a browsing history, associated with the user's interaction with an interactive region after selecting the representation of the token associated with the interactive region. A browsing history may include, for example, a listing of all of the content that was displayed in the interactive region while the user was interacting with the region.
[0131] In some embodiments, a first user using a first client device 220 may share a representation of a token with a second user using a second client device 220 over network(s) 230. For example, the first user may share the representation with a friend using a messaging, electronic mail, or social networking application. Upon receiving the representation, the second user may receive information related to content that was displayed in the interactive region associated with the represented token.
[0132] In some embodiments, after a user has selected token representation(s) on a client device 220, information related to content displayed in interactive region(s) associated with the represented token(s) may be continuously displayed on the client device. For example, if a user has selected a representation of a token associated with a first interactive region, information related to content displayed in the first interactive region may be continuously and dynamically displayed on the client device while the user interacts with the first interactive region. For instance, if the user opens a content item in the first interactive region, a representation of that content item may be displayed on the client device. In some embodiments, information related to each of the content items displayed in the first interactive region may be dynamically displayed on the client device as the user manipulates the content items.
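One way this continuous mirroring could be modeled is a simple publish/subscribe pattern, sketched below; the class and callback names are hypothetical and the real embodiments could use any push or polling mechanism over network(s) 230.

```python
class InteractiveRegion:
    """Sketch of dynamic mirroring: client devices that selected a region's token
    are notified whenever the content shown in that region changes."""

    def __init__(self, token):
        self.token = token
        self._subscribers = []  # callables standing in for client devices 220

    def subscribe(self, client_callback):
        self._subscribers.append(client_callback)

    def display(self, content_item):
        """Called when the user opens or manipulates a content item in the region."""
        for notify in self._subscribers:
            notify(self.token, content_item)

region = InteractiveRegion("A")
region.subscribe(lambda token, item: print(f"client shows {item!r} for token {token}"))
region.display("rain boots advertisement")
```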
[0133] In some embodiments, the user may manipulate the content displayed in a first interactive region based on information related to the content that is displayed on the client device. For example, a user may open a content item in the first interactive region by opening a representation of the content item displayed on the client device.
[0134] In some embodiments, the user may select to move content items between the interface displayed by the display device and a display of the client device. For example, once a user has identified a particular interactive region by selecting a representation of a token associated with that region, the user may move content, such as a web page, from being displayed in the interactive region of the interface of the display device to the display of the client device. Alternatively, the user may move content, such as a web page, from being displayed on the display of the client device to the interactive region of the interface of the display device. In some embodiments, a user may move content items back and forth between the interface of the display device and the display of the client device. For example, a user may move a web page from being displayed in the interactive region of the interface of the display device to being displayed on the display of the client device, may then manipulate the web page on the client device, and may then move the manipulated web page back to being displayed in the interactive region of the interface.
[0135] In moving content items between two different types of devices, such as between the display device and the client device, various transformations to the content may need to be made. In some embodiments disclosed herein, computer system(s) 1200 (e.g., server(s) 240 or display device(s) 210) may transform content and/or information related to content in a manner that allows it to be displayed on a device of a different type. Based on signals or data received from a client device 220, the computer system(s) may identify a type of the client device 220, and one or more transformations that need to be performed on content and/or information related to content based on the type. For example, the computer system(s) may identify that the client device is a smartphone, and that content needs to be resized accordingly to fit on the display of the smartphone. Alternatively, the computer system(s) may identify that content requires too much processing power for the smartphone, and may convert the content into a form that requires less processing power.
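The sketch below illustrates one possible rule-based adaptation of content to an identified client device type; the rule table, field names, and thresholds are hypothetical examples rather than part of the disclosure.

```python
# Hypothetical transformation rules keyed by client device type.
TRANSFORMS = {
    "smartphone": {"max_width_px": 412, "video": "downscale-720p"},
    "tablet":     {"max_width_px": 1024, "video": "keep"},
}

def transform_for_device(content, device_type):
    """Paragraph [0135] sketch: adapt a content descriptor to the identified device type."""
    rules = TRANSFORMS.get(device_type, {})
    adapted = dict(content)
    if "width_px" in adapted and "max_width_px" in rules:
        adapted["width_px"] = min(adapted["width_px"], rules["max_width_px"])
    if adapted.get("kind") == "video" and rules.get("video") == "downscale-720p":
        adapted["resolution"] = "1280x720"  # reduce processing/bandwidth demands
    return adapted

page = {"kind": "video", "width_px": 1920, "resolution": "3840x2160"}
print(transform_for_device(page, "smartphone"))
```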
[0136] FIG. 12 is a block diagram illustrating an example computer system 1200 that may be used for implementing embodiments consistent with the present disclosure, including the example systems and methods described herein. Computer system 1200 may include one or more computing devices 1280. Computer system 1200 may be used to implement display device(s) 210, client device(s) 220, and/or server(s) 240. The arrangement and number of components in computer system 1200 are provided for purposes of illustration. Additional arrangements, number of components, or other modifications may be made, consistent with the present disclosure.
[0137] As shown in FIG. 12, a computing device 1280 may include one or more processors 1210 for executing instructions. Processors suitable for the execution of instructions include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. A computing device 1280 may also include one or more input/output (I/O) devices 1220. By way of example, I/O devices 1220 may include keys, buttons, mice, joysticks, styluses, gesture sensors (e.g., video cameras), motion sensors (e.g., infrared sensors, ultrasound sensors, etc.), voice sensors (e.g., microphones), etc. Keys and/or buttons may be physical and/or virtual (e.g., provided on a touch screen interface).
[0138] A computing device 1280 may include one or more storage devices configured to store data and/or software instructions used by processor(s) 1210 to perform operations consistent with disclosed embodiments. For example, a computing device 1280 may include main memory 1230 configured to store one or more software programs that, when executed by processor(s) 1210, cause processor(s) 1210 to perform functions or operations consistent with disclosed embodiments. By way of example, main memory 1230 may include NOR or NAND flash memory devices, read only memory (ROM) devices, random access memory (RAM) devices, etc. A computing device 1280 may also include one or more storage medium(s) 1240. By way of example, storage medium(s) 1240 may include hard drives, solid state drives, tape drives, redundant array of independent disks (RAID) arrays, etc. Although FIG. 12 illustrates only one main memory 1230 and one storage medium 1240, a computing device 1280 may include any number of main memories 1230 and storage mediums 1240. Further, although FIG. 12 illustrates main memory 1230 and storage medium 1240 as part of computing device 1280, main memory 1230 and/or storage medium 1240 may be located remotely and computing device 1280 may be able to access main memory 1230 and/or storage medium 1240 via network(s) 230.
[0139] Storage medium(s) 1240 may be configured to store data, and may store data received from one or more of server(s) 240, client device(s) 220, and display device(s) 210. The data may take or represent various content or information forms, such as documents, presentations, textual content, mapping information, geographic information, rating information, review information, polling information, directory information, pricing information, advertising information, product information, information regarding days and hours of operation for establishments, search indexes, news, audio files, video files, image files, user profile information, social network information, markup information (e.g., hypertext markup language (HTML) information, extensible markup language (XML) information), games, software applications, weather information, event information (e.g., information regarding appointments, holidays, events), stock market information, and any other type of information and/or content in which users may be interested, or any combination thereof. The data may further include other data received, stored, and/or inferred by a computer system 1200, such as data regarding locations of display device(s) 210, identities of display device(s) 210, locations of client device(s) 220, identities of client device(s) 220, tokens displayed at display device(s) 210, token(s) displayed and/or selected at client device(s) 220, content items displayed at display device(s) 210, content items provided to client device(s) 220, information provided to client device(s) 220, user interaction information (e.g., interaction histories, browsing histories), user preference information, and/or any other data used for carrying out embodiments consistent with the disclosure.
[0140] A computing device 1280 may also include one or more displays 1250 for displaying data and information. Display(s) 1250 may be implemented using one or more display panels, which may include, for example, one or more cathode ray tube (CRT) displays, liquid crystal displays (LCDs), plasma displays, light emitting diode (LED) displays, touch screen type displays, projector displays (e.g., images projected on a screen or surface, holographic images, etc.), organic light emitting diode (OLED) displays, field emission displays (FEDs), active matrix displays, vacuum fluorescent displays (VFDs), 3-dimensional (3-D) displays, electronic paper (e-ink) displays, microdisplays, or any combination of the above types of displays.
[0141] A computing device 1280 may further include one or more communications interfaces 1260. Communications interface(s) 1260 may allow software and/or data to be transferred between server(s) 240, display device(s) 210, and/or client device(s) 220. Examples of communications interface(s) 1260 may include modems, network interface cards (e.g., an Ethernet card), communications ports, personal computer memory card international association (PCMCIA) slots and cards, antennas, etc. Communications interface(s) 1260 may transfer software and/or data in the form of signals, which may be electronic, electromagnetic, optical, and/or other types of signals. The signals may be provided to/from communications interface(s) 1260 via a communications path (e.g., network 230), which may be implemented using wired, wireless, cable, fiber optic, radio frequency (RF), and/or other communications channels.
[0142] The disclosed embodiments are not limited to separate programs or computers configured to perform dedicated tasks. For example, a server 240 may include a main memory 1230 that stores a single program or multiple programs and may additionally execute one or more programs located remotely from server 240. Similarly, a display device 210 and/or client device 220 may execute one or more remotely stored programs instead of, or in addition to, programs stored on these devices. In some example embodiments, a server 240 may be capable of accessing separate server(s) and/or computing devices that generate, maintain, and provide web sites, and/or event creation and notification servers.
[0143] The computer-implemented methods disclosed herein may be executed, for example, by one or more processors that receive instructions from one or more non-transitory computer-readable storage mediums. Similarly, systems consistent with the present disclosure may include at least one processor and memory, and the memory may be a non-transitory computer-readable medium.
[0144] As used herein, a non-transitory computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, magnetic strip storage, semiconductor storage, optical disc storage, magneto-optical disc storage, and/or any other known physical storage medium. Singular terms, such as "memory" and "computer-readable storage medium," may additionally refer to multiple structures, such as a plurality of memories and/or computer-readable storage mediums.
[0145] As referred to herein, a "memory" may comprise any type of computer-readable storage medium unless otherwise specified. A computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the one or more processors to perform steps or stages consistent with embodiments disclosed herein. Additionally, one or more computer-readable storage mediums may be utilized in implementing a computer-implemented method.
[0146] The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to the precise forms or embodiments disclosed. Modifications and adaptations of the embodiments will be apparent from consideration of the specification and practice of the disclosed embodiments. For example, the described implementations include hardware and software, but systems and methods consistent with the present disclosure can be implemented as hardware alone.
[0147] Computer programs based on the written description and methods of this specification are within the skill of a software developer. The various programs or program modules can be created using a variety of programming techniques. For example, program sections or program modules can be designed in or by means of Java, C, C++, assembly language, or any such programming languages. One or more of such software sections or modules can be integrated into a computer system or existing communications software.
[0148] Moreover, while illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations and/or alterations based on the present disclosure. The elements in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application, which examples are to be construed as non-exclusive. Further, the steps of the disclosed methods can be modified in any manner, including reordering steps and/or inserting or deleting steps.
[0149] The many features and advantages of the disclosure are apparent from the detailed specification, and thus, it is intended that the appended claims cover all systems and methods which fall within the true spirit and scope of the disclosure. As used herein, the indefinite articles "a" and "an" mean "one or more" in open-ended claims containing the transitional phrases "comprising," "including," and/or "having." Further, since numerous modifications and variations will readily occur to those skilled in the art, it is not desired to limit the disclosure to the exact construction and operation illustrated and described, and accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope of the disclosure.

Claims

WHAT IS CLAIMED IS:
1. A computer-implemented method for providing information related to displayed content, the method comprising the following operations performed by one or more processors:
identifying tokens displayed in an interface at a first device, the tokens including a first token associated with a first user interacting with a first interactive region of the interface, and a second token associated with a second user interacting with a second interactive region of the interface;
causing representations of the first and second tokens to be presented at a second device;
receiving an indication that the representation of the first token has been selected at the second device; and
causing information related to content presented in the first interactive region to be sent to the second device based on the received indication.
2. The computer-implemented method according to claim 1, wherein the first token is displayed in the interface at the first device when the first user begins interacting with the interface.
3. The computer-implemented method according to claim 1, wherein the indication is a first indication, further comprising:
receiving a second indication that the representation of the second token has been selected at the second device; and
causing information related to content presented in the second interactive region to be sent to the second device based on the received second indication.
4. The computer-implemented method according to claim 1, further comprising:
receiving a notification from the second device; and
associating the first device with the second device based on the notification.
5. The computer-implemented method according to claim 4, wherein the notification includes location information, further comprising identifying the first device from a plurality of devices based on the location information.
6. The computer-implemented method according to claim 1, wherein the indication is a first indication, further comprising:
causing representations of the first and second tokens to be presented at a third device;
receiving a second indication that the representation of the second token has been selected at the third device; and
causing information related to content presented in the second interactive region to be sent to the third device based on the received second indication.
7. The computer-implemented method according to claim 1, wherein the interface allows the first user to interact with the first interactive region while the second user interacts with the second interactive region.
8. The computer-implemented method according to claim 1, wherein the content presented in the first interactive region includes at least one of advertising, shopping, mapping, news, or directory content.
9. The computer-implemented method according to claim 1, wherein the information includes at least one of a universal resource locator (URL), software application, map direction, or product description.
10. The computer-implemented method according to claim 1, wherein the first token displayed in the first interactive region of the interface includes at least one of a color, letter, number, character string, or image.
11. The computer-implemented method according to claim 1, wherein the first device is a public display device and the second device is a device of the first user.
12. A computer-implemented system for providing information related to displayed content, comprising:
a memory device that stores instructions; and
one or more processors that execute the instructions to:
identify tokens displayed in an interface at a first device, the tokens including a first token associated with a first user interacting with a first interactive region of the interface, and a second token associated with a second user interacting with a second interactive region of the interface;
cause representations of the first and second tokens to be presented at a second device;
receive an indication that the representation of the first token has been selected on the second device; and
cause information related to content presented in the first interactive region to be sent to the second device based on the received indication.
13. The computer-implemented system according to claim 12, wherein the first token is displayed on the first device when the first user begins interacting with the first device.
14. The computer-implemented system according to claim 12, wherein the indication is a first indication, and the one or more processors further execute the instructions to:
receive a second indication that the representation of the second token has been selected on the second device; and
cause information related to content presented in the second interactive region to be sent to the second device based on the received second indication.
15. The computer-implemented system according to claim 12, wherein the one or more processors further execute the instructions to:
receive a notification from the second device; and
associate the first device with the second device based on the notification.
16. The computer-implemented system according to claim 15, wherein the notification includes location information, and the one or more processors further execute the instructions to identify the first device from a plurality of devices based on the location information.
17. The computer-implemented system according to claim 12, wherein the indication is a first indication, and the one or more processors further execute the instructions to:
cause representations of the first and second tokens to be presented at a third device;
receive a second indication that the representation of the second token has been selected at the third device; and
cause information related to content presented in the second interactive region to be sent to the third device based on the received second indication.
18. The computer-implemented system according to claim 12, wherein the content presented in the first interactive region includes at least one of advertising, shopping, mapping, news, or directory content.
19. The computer-implemented system according to claim 12, wherein the information includes at least one of a universal resource locator (URL), software application, map direction, or product description.
20. A non-transitory computer-readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform a method, the method comprising:
identifying tokens displayed in an interface at a first device, the tokens including a first token associated with a first user interacting with a first interactive region of the interface, and a second token associated with a second user interacting with a second interactive region of the interface;
causing representations of the first and second tokens to be presented on a second device;
receiving an indication that the representation of the first token has been selected on the second device; and
causing information related to content presented in the first interactive region to be sent to the second device based on the received indication.
PCT/US2015/028827 2014-05-01 2015-05-01 Computerized systems and methods for providing information related to displayed content WO2015168580A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461987322P 2014-05-01 2014-05-01
US61/987,322 2014-05-01

Publications (1)

Publication Number Publication Date
WO2015168580A1 true WO2015168580A1 (en) 2015-11-05

Family

ID=53366244

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/028827 WO2015168580A1 (en) 2014-05-01 2015-05-01 Computerized systems and methods for providing information related to displayed content

Country Status (1)

Country Link
WO (1) WO2015168580A1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110124318A1 (en) * 2009-11-21 2011-05-26 Steven Curtis Ayer System/method for wireless internet advertising utilizing GPS and/or an identifier code via smartphone/smart device interaction
US20110289535A1 (en) * 2009-12-16 2011-11-24 Mozaik Multimedia Personalized and Multiuser Interactive Content System and Method
WO2014032088A1 (en) * 2012-08-27 2014-03-06 Red Propaganda Pty Ltd A method, application server, mobile communication device and multimedia terminal for displaying interactive multimedia
EP2708984A2 (en) * 2012-09-12 2014-03-19 Samsung Electronics Co., Ltd. Display Apparatus for Multiuser and Method Thereof

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111013139A (en) * 2019-11-12 2020-04-17 北京字节跳动网络技术有限公司 Role interaction method, system, medium and electronic device
CN111013139B (en) * 2019-11-12 2023-07-25 北京字节跳动网络技术有限公司 Role interaction method, system, medium and electronic equipment


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 15727744
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 15727744
    Country of ref document: EP
    Kind code of ref document: A1