US9508194B1 - Utilizing content output devices in an augmented reality environment - Google Patents

Utilizing content output devices in an augmented reality environment

Info

Publication number
US9508194B1
Authority
US
United States
Prior art keywords
display device
content
environment
display
recited
Prior art date
Legal status
Expired - Fee Related
Application number
US12/982,457
Inventor
William Spencer Worley, III
Current Assignee
Amazon Technologies Inc
Original Assignee
Amazon Technologies Inc
Priority date
Filing date
Publication date
Application filed by Amazon Technologies Inc
Priority to US12/982,457
Assigned to RAWLES LLC (assignment of assignors interest; see document for details). Assignors: WORLEY III, WILLIAM SPENCER
Assigned to AMAZON TECHNOLOGIES, INC. (assignment of assignors interest; see document for details). Assignors: RAWLES LLC
Application granted
Publication of US9508194B1

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 19/00: Manipulating 3D models or images for computer graphics
                    • G06T 19/006: Mixed reality
            • G06F: ELECTRIC DIGITAL DATA PROCESSING
                • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
                    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
                        • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
                            • G06F 3/012: Head tracking input arrangements
                        • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
                            • G06F 3/0304: Detection arrangements using opto-electronic means

Definitions

  • Augmented reality environments enable interaction among users and virtual or computer-generated objects.
  • interactions may include electronic input, verbal input, physical input relating to the manipulation of physical objects, and so forth.
  • While augmented reality environments may provide custom display devices for creating the environment, these environments do not utilize existing infrastructure to complement these custom devices.
  • FIG. 1 depicts an example environment that includes an augmented reality functional node (ARFN) and multiple other output devices.
  • the ARFN may identify the presence of the other output devices and cause these devices to output content for the purpose of enhancing a user's consumption of the content.
  • FIG. 2 depicts the environment of FIG. 1 when the ARFN initially projects content onto a display medium, before transitioning the display of the content to a television that the ARFN has identified as residing within the environment.
  • FIG. 3 depicts the environment of FIG. 1 when the ARFN again projects content onto a display medium.
  • the ARFN has identified the presence of a stereo system and, therefore, sends audio content for output by the stereo system to supplement the user's consumption of the projected content.
  • FIG. 4 depicts the environment of FIG. 1 when the user is viewing the television described above.
  • the ARFN detects the content that the user is viewing on the television and, in response, begins projecting supplemental content on a projection surface.
  • FIG. 5 shows an example augmented reality functional node and selected components.
  • FIGS. 6-7 are illustrative processes of the ARFN determining the presence of an output device and, in response, utilizing the output device to output content for the purpose of enhancing a user's experience.
  • FIGS. 8-10 are illustrative processes of the ARFN complementing the operation of existing output devices in the environment of FIG. 1 for the purpose of enhancing a user's experience.
  • An augmented reality system may be configured to interact with objects within a scene and generate an augmented reality environment.
  • the augmented reality environment allows for virtual objects and information to merge and interact with real-world objects, and vice versa.
  • the environment may include one or more augmented reality functional nodes (ARFNs) that are configured to project content onto different non-powered and/or powered display mediums.
  • an ARFN may include a projector that projects still images, video, and the like onto walls, tables, prescribed stationary or mobile projection screens and the like.
  • the ARFN may include a camera to locate and track a particular display medium for continuously projecting the content onto the medium, even as the medium moves.
  • the environment includes multiple ARFNs that hand off the projection of content between one another as the medium moves between different zones of the environment.
  • the ARFNs described herein may be free from any output devices, and may solely control other components in the environment to create the augmented reality environment.
  • While the ARFN may project content within the environment as discussed above, the ARFN may also interactively control other components within the environment (e.g., other content output devices) to further enhance a user's experience while consuming the content.
  • the ARFN may identify that a flat-screen television exists within the same environment as the ARFN (e.g., within a same room).
  • the ARFN may instruct the television to display certain content. For instance, if the ARFN is projecting a particular movie, this node may determine that a user viewing the movie would have a better viewing experience if the movie were displayed on the television. As such, the ARFN may stream or otherwise provide an instruction to the television to begin displaying the movie. By doing so, the user is now able to view the previously projected content on a display device (here, the television) that is better suited for outputting the content.
  • the ARFN projects an electronic book (eBook), such as one of the books of the “Harry Potter” series by J. K. Rowling, for a user within the environment.
  • the ARFN again identifies the presence of the television as described above.
  • the ARFN may instruct the television to display a trailer for the particular Harry Potter movie that is based on the eBook that the ARFN currently projects.
  • the television may begin displaying the trailer and, as such, the user is able to view the trailer that is associated with the eBook that the user currently reads.
  • the ARFN identifies a stereo system that resides within the environment.
  • the ARFN may instruct the stereo system to output a song or other audio associated with the eBook that the ARFN currently projects. As such, the user may enjoy the soundtrack to “Harry Potter” while reading the projected Harry Potter eBook.
  • the ARFN may utilize multiple display devices in some instances. For instance, the ARFN may instruct the television to display the trailer for “Harry Potter” while instructing the stereo system to output the audio associated with this trailer. While a few example output devices have been discussed, it is to be appreciated that the ARFN may utilize any number of other output devices capable of outputting content that the ARFN projects, content that is supplemental to the projected content, or any other form of content.
  • the ARFN may identify and utilize other devices within the environment for the benefit of the user within the environment. For instance, envision that a camera of the ARFN identifies when the user wakes up in the morning. Having learned the user's routine of making coffee shortly after waking up, the ARFN may send an instruction to the coffee maker to begin its brewing process, even before the user has made his way to the kitchen. In another example, the ARFN may identify (e.g., via the camera) when the user returns home from work in the evening.
  • the ARFN may instruct the oven to turn on to a certain pre-heated temperature in anticipation of the user using the oven to cook dinner
  • the ARFN may work in unison with any device within the environment and capable of communicating with the ARFN directly or indirectly, as described below.
  • the ARFN may itself complement the operation of the existing output devices.
  • the ARFN may identify existing output devices using any of the techniques discussed briefly above.
  • the ARFN may identify any content that these output devices currently output. For instance, a camera of the ARFN may capture images of a sporting event or a movie that a television outputs and, with these images, the ARFN may identify the content.
  • the ARFN may either take over the display of the content when appropriate, or may project related content. For instance, the ARFN may determine whether it should project the sporting event being displayed by the television in the above example. The ARFN may make this determination with reference to characteristics of the content, characteristics of the television, characteristics of the projector, preferences of a user in the room, in response to receiving an explicit instruction, or based on any other factor or combination of factors. After making this determination, the ARFN may begin projecting the content and may, in some instances, instruct the television to cease the display of the content. In some instances, the ARFN may receive the content from the television, from a content provider that was providing the content to the television, or from another content provider.
  • the ARFN may project content that is related to the output content. For instance, at least partly in response to determining that the television is displaying a particular sporting event, the ARFN may retrieve and project content that is related (e.g., supplemental) to the event. For instance, the ARFN may navigate to a website to retrieve and project statistics associated with the teams that the television currently displays. For instance, the ARFN may identify players that are being televised on the television and that are on a fantasy sports team of a user viewing the sporting event. The ARFN may, in response, retrieve statistics of these players during the game being televised, during the season, or the like. These statistics may include the number of fantasy points that these player(s) have acquired, or any other type of statistic. Additionally or alternatively, the ARFN may begin playing audio associated with the sporting event or may supplement the display of the event in any other way.
  • Example Augmented Reality Environment describes one non-limiting environment that may implement the described techniques for utilizing existing output devices within an environment.
  • Example Output Device Utilization follows and describes several examples where an ARFN from FIG. 1 may use identified devices to output previously-projected content or to supplement the projected content.
  • Example Processes follows, before a brief conclusion ends the discussion. This brief introduction, including section titles and corresponding summaries, is provided for the reader's convenience and is not intended to limit the scope of the claims, nor the sections that follow.
  • FIG. 1 shows an illustrative augmented reality environment 100 that includes at least one augmented reality functional node (ARFN) 102 .
  • While FIG. 1 illustrates four ARFNs 102 , other embodiments may include more or fewer nodes.
  • the ARFN 102 may function to project content onto different display mediums within the environment 100 , as well as identify other output devices within the environment to utilize for outputting this content or to otherwise supplement the projected content.
  • FIG. 1 illustrates that the ARFN 102 may project any sort of visual content (e.g., eBooks, videos, images, etc.) onto a projection area 104 .
  • the projection area 104 here comprises a portion of the wall, the projection area 104 may comprise the illustrated table, chair, floor, or any other illustrated or non-illustrated surface within the environment 100 .
  • the illustrated user may carry a prescribed powered or non-powered display medium that the ARFN 102 tracks and projects content onto.
  • the environment 100 further includes a television 106 , a tablet computing device 108 , and a stereo system 110 .
  • the ARFN 102 may identify each of these output devices and may utilize one or more of these devices for outputting the previously-projected content, for outputting content that is supplemental to the projected content, and/or to output any other form of content.
  • the ARFN 102 may identify and utilize a mobile phone, a laptop computer, a desktop computer, an electronic book reader device, a personal digital assistant (PDA), a portable music player, and/or any other computing, display, or audio device configured to output content visually, audibly, tactilely, or in any other manner.
  • the ARFN 102 includes a projector 112 , a camera 114 , one or more interfaces 116 , and a computing device 118 . Some or all of the components within the ARFN 102 may couple to one another in a wired manner, in a wireless manner, or in any other way. Furthermore, while FIG. 1 illustrates the components as residing adjacent to one another, some or all of the components may reside remote from one another in some instances.
  • the projector 112 functions to project content onto a projection surface 104 , as described above.
  • the camera 114 may be used to track a mobile projection surface, to identify the devices within the environment, to identify user behavior, and the like, as described below.
  • the interfaces 116 may enable the components within the ARFN 102 to communicate with one another, and/or may enable the ARFN 102 to communicate with other entities within and outside of the environment 100 .
  • the interfaces 116 may include wired or wireless interfaces that allow the ARFN to communicate with the devices 106 , 108 , and 110 over a local area network (LAN), a wide area network (WAN), or over any other sort of network.
  • the ARFN 102 may communicate directly with these devices (e.g., wired or wirelessly), or may communicate with these devices via third party devices, such as set-top boxes, content providers that communicate with the devices, and the like.
  • the interfaces 116 may allow the ARFN 102 to receive content from local or remote content providers for the purpose of projecting or otherwise outputting this content, or for sending the content to other devices in the environment 100 .
  • the ARFN may include or otherwise couple to the computing device 118 .
  • This computing device 118 may reside within a housing of the ARFN 102 , or may reside at another location. In either instance, the computing device 118 comprises one or more processors 120 and memory 122 .
  • the memory 122 may include computer-readable storage media (“CRSM”), which includes any available physical media accessible by a computing device to implement the instructions stored thereon.
  • CRSM may include, but is not limited to, random access memory (“RAM”), read-only memory (“ROM”), electrically erasable programmable read-only memory (“EEPROM”), flash memory or other memory technology, compact disk read-only memory (“CD-ROM”), digital versatile disks (“DVD”) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computing device.
  • the memory 122 may store several modules such as instructions, datastores, and so forth, each of which is configured to execute on the processors 120 . While FIG. 1 illustrates a few example modules, it is to be appreciated that the computing device 118 may include other components found in traditional computing devices, such as an operating system, system buses, and the like. Furthermore, while FIG. 1 illustrates the memory 122 as storing the modules and other components, this data may be stored on other local or remote storage devices that are accessible to the computing device 118 via a local network, a wide area network, or the like. Similarly, the processors 120 may reside locally to one another, or may be remote from one another to form a distributed system.
  • the memory 122 may store or otherwise have access to an ancillary device identification module 124 , a datastore 126 storing indications of one or more identified devices 126 ( 1 ), . . . , 126 (N), a content output module 128 , a content datastore 130 , and an ancillary device adjustment module 132 .
  • the ancillary device identification module 124 functions to identify devices within the environment 100 that may be available for outputting content in lieu of, or in addition to, the ARFN 102 .
  • the ancillary device identification module 124 may function to identify the television 106 , the tablet device 108 , and the stereo system 110 .
  • the module 124 may identify devices that reside outside of the environment 100 .
  • the module 124 may identify these devices in any number of ways.
  • the scanning module 134 may work in combination with the camera 114 to scan the environment 100 for the devices.
  • the camera 114 may seek unique identifiers associated with the devices and may pass these identifiers back to the scanning module 134 for identification of a device.
  • one or more of the devices within the environment 100 may include a unique bar code, serial number, brand/model name, and/or any other identifier that the camera may read and pass back to the scanning module 134 .
  • the scanning module 134 may attempt to determine the identity of the device and, in some instances, one or more characteristics of the device. For instance, if the camera reads and returns a unique bar code on the television 106 , the scanning module may map this bar code to the make and model of the television. With this information, the ancillary device module 124 may determine one or more characteristics of the television 106 , such as a resolution of the display of the television, a size of the display, a type of the display (e.g., LCD, etc.) and the like.
  • the camera 114 may also provide an indication of the television's location within the room.
  • the ancillary device identification module 124 may then store this information in association with the device in the datastore 126 . That is, the module 124 may store the determined characteristics of the television 106 in the datastore 126 , thereby allowing the ARFN to later reference information associated with the television 106 when determining whether to utilize another device for outputting content.
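As a concrete illustration of the identify-and-store flow described above, the sketch below maps an identifier read by the camera (such as a bar code) to device characteristics and records them in a datastore analogous to datastore 126. The catalog contents, field names, and the DeviceRecord and AncillaryDeviceDatastore classes are illustrative assumptions, not structures defined by the patent.

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Hypothetical catalog mapping a scanned identifier (bar code, serial number,
# brand/model string, etc.) to known device characteristics.
DEVICE_CATALOG: Dict[str, Dict] = {
    "0123456789012": {
        "type": "television",
        "make": "ExampleBrand",
        "model": "EX-46",
        "display": {"diagonal_in": 46, "resolution": (1920, 1080), "panel": "LCD"},
    },
}

@dataclass
class DeviceRecord:
    identifier: str
    characteristics: Dict
    location: Optional[tuple] = None  # (x, y, z) position within the room, if known

class AncillaryDeviceDatastore:
    """Stands in for datastore 126: identified devices keyed by identifier."""
    def __init__(self) -> None:
        self._devices: Dict[str, DeviceRecord] = {}

    def store(self, record: DeviceRecord) -> None:
        self._devices[record.identifier] = record

    def lookup(self, identifier: str) -> Optional[DeviceRecord]:
        return self._devices.get(identifier)

def identify_from_scan(identifier: str, location=None) -> Optional[DeviceRecord]:
    """Map an identifier read by the camera to known device characteristics."""
    entry = DEVICE_CATALOG.get(identifier)
    if entry is None:
        return None  # unknown device; fall back to other identification techniques
    return DeviceRecord(identifier=identifier, characteristics=entry, location=location)

# Usage: the scanning module passes back a bar code read off the television.
datastore = AncillaryDeviceDatastore()
record = identify_from_scan("0123456789012", location=(2.1, 0.9, 1.4))
if record is not None:
    datastore.store(record)
```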
  • the camera 114 may read a brand name on a device, such as the television 106 , and may pass this brand name back to the scanning module 134 .
  • the camera 114 and the scanning module 134 may together estimate dimensions of the television 106 with reference to objects of known size within the room, and/or utilizing any of the structured light techniques described in the related application incorporated by reference above.
  • the camera 114 and the scanning module 134 may also identify any other salient features that may help in identifying the television 106 , such as the color of the television, the layout of the controls, and the like.
  • the scanning module 134 may determine or deduce the identity of the television 106 .
  • the scanning module 134 may make this determination by comparing the aggregated information with information accessible via the interfaces 116 . For instance, the scanning module 134 may access, via the interfaces 116 , a website associated with the identified brand name to compare the estimated dimensions and the like against models described on the site. The module 134 may then identify a particular model based on this comparison. Furthermore, and as discussed above, the module 134 may store this identification in the datastore 126 for later access by the ARFN 102 .
  • the ancillary device identification module 124 may identify the devices within the environment 100 with use of the signal monitoring module 136 .
  • the signal monitoring module 136 may attempt to monitor or listen for signals sent to or from devices within the environment. For instance, if a device is controllable via a remote control, the module 136 may monitor signals sent between the remote control and the corresponding device to help identify the device.
  • the illustrated user may control the television 106 via a remote control that sends infrared or another type of wireless signal to the television 106 for operating the television 106 .
  • the signal monitoring module 136 may monitor these wireless signals and may use these unique signals to deduce the identity of the television 106 .
  • the ancillary device identification module 124 may utilize the monitored signals, the images captured by the camera 114 , and/or other information to best deduce the most likely identity of the device, such as the television 106 .
  • the ancillary device identification module 124 may utilize the signal transmission module 138 to identify the devices within the environment 100 .
  • the signal transmission module 138 may send signals (e.g., infrared or other wireless signals) to the device in an attempt to elicit a response from the device.
  • the module 138 may determine the identity of the device based at least in part on which of the transmitted signals successfully elicited the response.
  • the signal transmission module 138 may send multiple different infrared signals to the television 106 in an attempt to turn on the television 106 when it is currently off, or to otherwise control the television in an identifiable manner.
  • the module 138 may reference a manual of known signals used by different brands and/or models to determine which signal the television 106 responds to. Each of these signals may be sent at a different frequency or may be encoded uniquely.
  • the module 138 may map this signal back to a particular make and/or model of device. The module 138 may then store this device identification in the datastore 126 , as discussed above.
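The probe-and-map behavior attributed to the signal transmission module 138 could be sketched as follows. The helper functions send_ir_code() and device_responded(), and the table of power-on codes, are hypothetical stand-ins rather than APIs defined by the patent.

```python
import time

# Illustrative table of known power-on codes for different makes/models.
KNOWN_POWER_CODES = {
    "BrandA model X1": 0x20DF10EF,
    "BrandB model Y7": 0xE0E040BF,
    "BrandC model Z3": 0x00000A90,
}

def send_ir_code(code: int) -> None:
    """Hypothetical: drive the ARFN's infrared emitter with the given code."""
    pass

def device_responded() -> bool:
    """Hypothetical: e.g., the camera 114 observes the screen turning on."""
    return False

def probe_device(timeout_s: float = 2.0):
    """Send known codes one by one; the first that elicits a response identifies the device."""
    for label, code in KNOWN_POWER_CODES.items():
        send_ir_code(code)
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            if device_responded():
                return label  # map the successful signal back to a make and model
            time.sleep(0.1)
    return None  # no response; fall back to other identification techniques
```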
  • the ancillary device identification module 124 may identify the devices in any other number of ways.
  • a user of the ARFN 102 may explicitly identify the devices, or the devices may include applications for sending their identifications to the ARFN 102 .
  • a user may download an application to each of her output devices in the environment, with the application functioning to determine the identity of the corresponding device and send this information to the ARFN 102 for storage in the datastore 126 .
  • FIGS. 2-3 illustrate and describe such an application in further detail.
  • the content output module 128 may utilize these devices to output content when appropriate. As illustrated, the content output module 128 may access content stored in the content datastore 130 for outputting on the projector 112 and/or on one of the devices within the environment 100 identified in the datastore 126 . This content may be persistently stored in the content datastore 130 , or the content may comprise data that is being streamed or otherwise provided “on demand” to the ARFN 102 .
  • the content may comprise eBooks, videos, songs, images, pictures, games, productivity applications, or any other type of content that the illustrated user may consume and/or interact with.
  • the content output module 128 may utilize one or more of the devices identified in the datastore 126 .
  • the module 128 stores or otherwise has access to an analysis module 140 and an instruction module 142 .
  • the analysis module 140 functions to determine when it is appropriate to utilize one of the output devices identified in the datastore 126
  • the instruction module 142 functions to instruct the devices to output content in response to such a determination.
  • the analysis module 140 may determine to utilize one of the other output devices in any number of ways. For instance, the user within the environment may explicitly instruct the ARFN 102 to utilize one of these devices. For example, envision that the ARFN 102 is projecting a movie onto a wall of the environment 100 , and that the user requests to switch the output to the television 106 and the stereo system 110 . In response, the content output module 128 may cease the projecting of the content and the instruction module 142 may instruct the television 106 and the stereo system 110 to begin outputting the visual and audio components of the movie, respectively. In response, the television 106 may begin displaying the movie and the speakers may begin outputting the audio content of the movie at a location where the projecting of the movie left off.
  • the user may provide this instruction via a gesture received by the camera 114 , via a physical control (e.g., a remote control) used to operate the ARFN 102 , via a sound command (e.g., a voice command from a user) received by a microphone of the ARFN 102 , or in any other suitable way.
  • the analysis module 140 may determine to utilize an output device other than the projector 112 with reference to the content and/or with reference to characteristics of the available devices themselves. For instance, when the ARFN 102 identifies the presence of the television 106 , the ARFN 102 may determine one or more display characteristics (e.g., size, resolution, etc.) of the television 106 , as discussed above. When projecting a movie or in response to receiving a request to project a movie, the analysis module may compare these characteristics (e.g., the display characteristics) to characteristics of the projector 112 and potentially to other devices in the environment 100 . Based on this comparison, the analysis module 140 may determine which device or combination of devices is best suited for outputting the content. In the movie example, the ARFN may determine that the television 106 and the stereo system 110 together are best suited to output the movie and, in response, the instruction module 142 may instruct these devices to do so.
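A minimal sketch of the kind of comparison the analysis module 140 might perform when deciding which device is best suited to output the content. The scoring function, its weights, and the attribute names are assumptions made for illustration only.

```python
def suitability_score(device: dict, content: dict) -> float:
    """Higher is better: compares a device's display characteristics to the content."""
    score = 0.0
    if content.get("kind") == "video":
        # Prefer displays whose resolution covers the content's resolution.
        score += min(device.get("resolution_px", 0) / content.get("resolution_px", 1), 1.0)
    # Prefer larger displays, normalized against a nominal 60-inch reference.
    score += min(device.get("diagonal_in", 0) / 60.0, 1.0)
    # Penalize devices obstructed from the user's current viewing position.
    if device.get("obstructed", False):
        score -= 1.0
    return score

candidates = [
    {"name": "projector 112", "resolution_px": 1280 * 720, "diagonal_in": 80, "obstructed": False},
    {"name": "television 106", "resolution_px": 1920 * 1080, "diagonal_in": 46, "obstructed": False},
]
movie = {"kind": "video", "resolution_px": 1920 * 1080}
best = max(candidates, key=lambda device: suitability_score(device, movie))
print(best["name"])  # the device the instruction module 142 would then target
```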
  • the content being projected itself may include an instruction to utilize one or more of the other devices within the environment 100 .
  • the projector 112 currently projects an eBook “Harry Potter” that the user within the environment 100 is reading.
  • This eBook may include an instruction to play a clip of the movie “Harry Potter” when the user reaches a particular scene.
  • the content may include an instruction to play the Harry Potter theme song upon the user finishing the book.
  • the user may request to transition from projecting of the eBook to playing an audio version of the eBook via the stereo system 110 .
  • the user may be leaving the environment (e.g., going outside for a walk) and, hence, may request that the eBook be sent to the tablet device 108 that communicates with the ARFN 102 , or another content provider that provides the content, via a wide area network (e.g., a public WiFi network, a cellular network, etc.), or the like.
  • the ARFN 102 may monitor current conditions within the environment (e.g., with use of the camera 114 , microphones, etc.) to determine whether to project the content and/or utilize an existing output device. This may include measuring ambient light in the environment 100 , measuring a glare on the screen of the television 106 or other output device, or any other environmental condition.
  • the analysis module 140 may determine to utilize ancillary output devices based on the occurrence of multiple other events and based on multiple other factors. For instance, the ARFN 102 may decide to utilize ancillary output devices—or may refrain from doing so—based on a size of a display, a resolution of a display, a location of a display, current ambient lighting conditions in the environment, potentially environmental obstructions that could obstruct the projection of the content or the user's viewing of the content on an ancillary output device, and/or any other factor or combination of factors.
  • the ARFN 102 may supplement the projected content with supplementary information, such as advertisements, as discussed above.
  • the use of advertisements may offset some of the cost of the content, potentially saving the user money if he purchases the content. Therefore, in some instances the use of additional advertisements—and the potential savings realized therefrom—may push the balance in favor of projecting the content rather than displaying it on the ancillary output device.
  • the instruction module 142 may send an instruction to the particular device via the interfaces 116 , instructing the device to begin outputting certain content.
  • In instances where the ARFN 102 and the devices (e.g., the television 106 , the tablet device 108 , the stereo system 110 , etc.) communicate directly with one another, the instruction module 142 may send the instruction over this direct connection.
  • the instruction module 142 may send the instruction to a third party device, such as to a local set-top box that controls operation of the television 106 , to a remote satellite provider that controls operation of the television 106 , and the like.
  • the instruction module 142 includes the content along with the request. For instance, when transitioning from projecting a movie to displaying the movie on the television 106 , the module 142 may stream the content from the content datastore 130 directly to the television 106 .
  • the ARFN 102 may utilize an auxiliary input that allows the ARFN 102 to cause output of content on the device, such as the television 106 .
  • the ARFN 102 could send content via a wireless HDMI connection or via an RF HD broadcast. In the latter instances, the ARFN 102 may broadcast the content at a power level that complies with regulatory requirements.
  • the module 142 may refrain from sending the content to the ancillary output device, such as the television 106 , but may instead send an instruction to the device or the third party device for the output device to obtain the content.
  • the module 142 may operate the ancillary content device to obtain the content. For instance, the module 142 may turn on the television 106 and change the input channel to the channel displaying the appropriate content (e.g., the correct movie, show, etc.).
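The "operate the ancillary device to obtain the content" path described above might look like the following sketch: power the television on and tune it to the channel carrying the identified content. The infrared helper, input selection, and channel guide are hypothetical stand-ins.

```python
from typing import Optional

# Illustrative channel listing mapping identified content to a tuner channel.
CHANNEL_GUIDE = {"example sporting event": 206}

def send_ir_command(command: str, value: Optional[int] = None) -> None:
    """Hypothetical wrapper around the ARFN's infrared emitter."""
    pass

def tune_television_to(content_title: str) -> bool:
    """Power the television on and tune it to the channel carrying the content."""
    channel = CHANNEL_GUIDE.get(content_title)
    if channel is None:
        return False  # not in the guide; stream the content from the datastore instead
    send_ir_command("power_on")
    send_ir_command("set_input", value=1)  # e.g., select the tuner/set-top input
    for digit in str(channel):
        send_ir_command("digit", value=int(digit))
    return True
```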
  • FIG. 1 illustrates that the ARFN 102 may include or have access to the ancillary device adjustment module 132 .
  • the module 132 may function to automatically adjust devices within the environment similar to how a technician may adjust such devices. For instance, upon sending content to the television 106 , the module 132 may adjust settings of the television 106 , such as brightness, contrast, volume, and the like. By doing so, the ARFN 102 is able to optimize settings associated with the devices in the environment 100 , as well as potentially tailor these settings based on the content being output and preferences of the users consuming the content.
  • the ARFN 102 may act to supplement the operation of these devices 106 , 108 , and 110 .
  • the ARFN 102 may utilize the projector 112 to project content in lieu of or along with one of these devices within the environment 100 .
  • the ARFN 102 may instruct the projector 112 to project content previously output by one of the other devices 106 , 108 , or 110 .
  • the ARFN 102 may also instruct the device previously outputting the content to cease doing so.
  • the ARFN 102 may instruct the projector 112 to project content that is related to the content being output by one of the other devices 106 , 108 , or 110 , as discussed below.
  • the ARFN 102 may identify the devices in any of the ways described above.
  • the camera 114 may scan the environment to identify the devices
  • the user within the environment 100 may explicitly identify the devices to the ARFN 102
  • the devices themselves may send their identities to the ARFN 102 , and the like.
  • the memory 122 of the ARFN 102 may store a content identification module 144 that functions to identify the content being output by one or more of the devices identified in the datastore 126 .
  • the content identification module 144 may identify this content in any number of ways. For instance, in one example, the camera 114 may scan content output devices that display visual content, such as the television 106 , the tablet computing device 108 , and the like. In doing so, the camera 114 may capture images that these devices display and may provide these images to the content identification module 144 .
  • the module 144 may compare these images to images associated with known content to find a match and identify the content. For instance, the module 144 may send the received images of the displayed content to a web service or other entity that may match the content to known content. Alternatively, the module 144 may send the images (e.g., in the form of video) to a group of one or more human users that may identify the content.
  • the camera 114 may scan the environment 100 to identify visual indications that can be mapped to the content being output. For instance, the camera 114 may capture an image of a set-top box that controls the television 106 , where the set-top box visually indicates a television channel that this user in the environment is currently watching. The content identification module 144 may then map this channel to identified content, such as with reference to a channel listing of television content. To provide another example, the camera 114 may capture images of a receiver of the stereo system 110 , where the receiver visually indicates a name of a song or artist that the stereo system currently plays. The content identification module 144 may then identify the content with use of these images.
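One simple way to realize the image-matching step described above is to fingerprint captured frames and compare them against fingerprints of known content. The average-hash approach and the reference table below are illustrative assumptions; as noted above, a deployed system might instead hand the frames to a web service or to human reviewers.

```python
def average_hash(gray_frame):
    """gray_frame: 2D list of 0-255 grayscale values, already downscaled (e.g. 8x8)."""
    pixels = [p for row in gray_frame for p in row]
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

KNOWN_CONTENT = {
    # "title": precomputed hash of a representative frame (illustrative values only)
    "example car race broadcast": tuple([1, 0] * 32),
}

def identify_frame(gray_frame, max_distance=10):
    """Return the best-matching known title, or None if nothing is close enough."""
    h = average_hash(gray_frame)
    best_title, best_dist = None, max_distance + 1
    for title, ref in KNOWN_CONTENT.items():
        d = hamming(h, ref)
        if d < best_dist:
            best_title, best_dist = title, d
    return best_title if best_dist <= max_distance else None
```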
  • a microphone of the ARFN may capture audio content and provide this to the module 144 .
  • the content identification module 144 may then attempt to match this audio content to known audio content to make the identification.
  • the camera 114 may capture images of a remote control being used by a user within the environment to control the output device, such as the television 106 .
  • the ARFN 102 may use these captured images to identify the output device itself, as well as the content being output. For instance, using the television 106 as an example, the camera 114 may capture the sequence of buttons that the user pushes in order to deduce the channel that the television 106 is currently displaying. Furthermore, the ARFN 102 may use a collection of this information in unison to deduce the identity of the content being output.
  • the output device (e.g., the television 106 ), a device coupled to the output device (e.g., the set-top box), or the user within the environment may provide this identification to the ARFN 102 .
  • the content output module 128 may selectively instruct the projector 112 to project the content being output by the output device (e.g., the television 106 ) or content that is related to the content being output by the output device. For instance, the content output module 128 may determine one or more characteristics (e.g., display characteristics) of the content output device and may determine, based on these characteristics, if the viewing experience of the user would be enhanced by projecting all or part of the content via the projector 112 . That is, the module 128 may determine whether the projector 112 is better suited to output the content and, if so, may instruct the projector 112 to project the content (in whole or in part).
  • the content output module 128 may reference a characteristic of the content to determine whether the projector is better suited to output the content. For instance, if the television is displaying a static image, the content output module 128 may determine that it is more cost effective and equally suitable in terms of quality to project this static image. Of course, the converse may be true in other instances.
  • the content output module 128 may identify whether the identified content can be obtained in a more cost-effective manner after identifying the content. For instance, if the user is watching a particular movie, the content output module 128 may identify multiple content providers that provide the same or similar movies, as well as the cost charged by each provider. The ARFN 102 may then output a less expensive version of the movie in response, or may suggest to the user to consider the less expensive content provider when obtaining movies in the future (e.g., in the event that the user has already paid for the movie that is currently being displayed).
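The cost comparison described above could be as simple as the sketch below, which checks each known provider's price for the identified title and surfaces the cheapest option. The provider names and prices are made up for illustration.

```python
def cheapest_provider(offers: dict):
    """offers: provider name -> price in dollars, or None if the title is unavailable."""
    available = {provider: price for provider, price in offers.items() if price is not None}
    if not available:
        return None
    provider = min(available, key=available.get)
    return provider, available[provider]

# Illustrative offers for the identified movie.
offers = {"Provider A": 3.99, "Provider B": 2.99, "Provider C": None}
print(cheapest_provider(offers))  # ('Provider B', 2.99)
```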
  • the content output module 128 may identify a user within the environment 100 with use of images received by the camera 114 . Conversely, the user may provide an indication of his presence within the environment 100 . In either instance, the content output module 128 may identify and reference a preference of the particular user when deciding whether to project the content. For instance, different users may set up different preferences regarding when to switch the output of content to the projector 112 . In each of these instances, the content output module 128 may allow the content output device within the environment 100 (e.g., the television 106 ) to continue outputting the content, or the module 128 may instruct the device to cease outputting the content. For instance, the content output module 128 may turn off the television 106 (e.g., via an infrared or other wireless signal).
  • the content output module 128 may instruct the projector 112 to project content that is related to the content being output by the output device—instead of or in addition to projecting the content previously output by the content output device. For instance, in response to the content identification module 144 identifying the content, the content output module 128 may retrieve related content via the interfaces 116 . In some instances, the content output module 128 may request and retrieve content from a content provider that is different than the content provider that provides the identified content (e.g., the content being displayed by the television 106 ). For instance, if the television 106 displays a sporting event, the content output module 128 may request and retrieve content from a website that stores statistics or other types of information related to the sporting event. The content output module 128 may then instruct the projector 112 to project this supplemental information for the purpose of enhancing a viewing experience of the user within the environment 100 .
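Retrieving related content once the displayed content has been identified might look like the following sketch. The stats-service URL and response shape are hypothetical; a real deployment would query whichever content provider the ARFN 102 is configured to use.

```python
import json
import urllib.parse
import urllib.request

def fetch_supplemental_stats(event_name: str) -> dict:
    """Query a (hypothetical) stats service for information related to the event."""
    url = "https://stats.example.com/events?q=" + urllib.parse.quote(event_name)
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

# Usage (requires the hypothetical service to exist):
# stats = fetch_supplemental_stats("example car race")
# The content output module 128 could then hand `stats` to the projector 112.
```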
  • the related content may comprise one or more advertisements. These advertisements may or may not be associated with items displayed or otherwise output on the content output device. For instance, if the ARFN 102 determines that the stereo system 110 is playing a song by a certain artist, the ARFN 102 may instruct the projector 112 to project an advertisement to purchase the song or the album, or may project another advertisement associated with the song that is currently playing. In another example, the ARFN 102 may determine that the television 106 is playing a certain movie or show. In response, the ARFN 102 may project, output audibly, or otherwise output an advertisement that is associated with the content being output. This may include an advertisement to purchase or obtain the movie or show itself, to purchase or obtain items in the movie or show, or an advertisement for any other type of item (e.g., a product or service) that is associated with the displayed content.
  • the ARFN 102 may utilize the other content output devices in the environment for outputting these types of advertisements. For instance, when the projector 112 projects certain content, the ARFN 102 may provide an instruction to one or more of the devices 106 , 108 , and 110 to output an advertisement that is related to the projected content. For instance, if the projector 112 projects a sporting event, the ARFN 102 may instruct the television to display an advertisement for purchasing tickets to future games, for sports memorabilia associated with the teams currently playing in the sporting event, or the like.
  • FIG. 2 depicts the environment 100 when the ARFN 102 projects content onto the projection surface 104 and then transitions the display of the content to the example television 106 .
  • the ARFN 102 projects content onto the projection surface 104 .
  • the projection surface 104 comprises a portion of the wall within the environment 100 .
  • the ARFN 102 decides to transition the projecting of the content to another device within the environment 100 , and namely to the television 106 .
  • the ARFN 102 may have received a request from the user, may have received a request from the content, may have identified the presence of the television 106 , or may have decided to transition the content for any other reason.
  • the ARFN 102 may have determined that the user's viewing experience will likely be enhanced by presenting the content on the television 106 rather than via the projector 112 .
  • FIG. 2 illustrates that the ARFN 102 instructs the television 106 to begin displaying the previously-projected content.
  • the television 106 may have been configured to communicate with the ARFN 102 via an application 204 , or the ARFN 102 may simply communicate with the television 106 via means that the television 106 provides off the shelf.
  • the ARFN 102 may communicate with the television 106 via infrared signals and may provide the content to the television 106 via an HDMI or another input channel that the television 106 provides natively.
  • the ARFN 102 has turned on the television 106 and has begun causing display of the content.
  • the ARFN 102 may stream the content directly to the television 106 , the ARFN 102 may configure the television 106 to play the content (e.g., by tuning the television 106 to an appropriate channel), or the ARFN 102 may instruct a third party device (e.g., associated with a satellite provider, etc.) to play the content on the television 106 .
  • the ARFN 102 has successfully transitioned from projecting the content to displaying the content on the television 106 , which may provide better display characteristics that enhance the user's viewing of the content.
  • FIG. 3 depicts another example of the ARFN 102 utilizing existing output devices within the environment 100 .
  • the ARFN 102 again projects content onto the projection surface 104 .
  • the ARFN 102 determines to utilize the stereo system 110 within the environment 100 for the purpose of providing better audio output for the content that the user is currently consuming.
  • the ARFN 102 instructs the stereo system 110 to output the content associated with the projected content.
  • the stereo system 110 may store the application 204 that allows the stereo system to communicate with the ARFN 102 .
  • the ARFN 102 may communicate with the stereo system 110 using components that are native to the stereo.
  • the stereo system 110 outputs the audio content associated with the projected content.
  • the ARFN 102 is able to supplement the projected content using the stereo system 110 residing within the environment 100 .
  • FIG. 4 depicts an example of the ARFN 102 supplementing the operation of existing output devices within the environment 100 .
  • the user watches content on the television 106 .
  • the ARFN 102 may identify the content that the television is currently displaying.
  • the ARFN 102 may identify the content in a variety of ways as described above.
  • the camera 114 may capture images of the content and may compare this to known content.
  • the camera 114 may read a set-top box coupled to the television 106 that displays the channel that the user currently views, or may learn of the channel that the user is watching by communicating directly with the set-top box or the television 106 .
  • the user himself may inform the ARFN 102 of the content that the television 106 currently displays.
  • the ARFN 102 may utilize the interfaces 116 to retrieve content that is associated with and supplemental to the content that the television 106 currently displays.
  • the ARFN 102 may communicate with a content provider 404 , which may comprise a website or any other entity that stores information related to the currently broadcast content.
  • the ARFN projects the supplemental content onto a projection surface 406 using the projector 112 .
  • the television 106 displays a car race and, in response, the ARFN 102 retrieves and projects information regarding the cars and the drivers of the cars that the television 106 currently displays.
  • the user is able to learn information about the cars and the drivers in addition to the information being broadcast on the television 106 .
  • the supplemental content may comprise advertisements that are associated with the content on the television 106 .
  • the ARFN 102 may identify that one of the cars is sponsored by “Clorox®”.
  • the ARFN 102 may retrieve and project an advertisement for Clorox, a competitor of Clorox, and/or any other associated product or service.
  • FIG. 5 shows an illustrative schematic 500 of the ARFN 102 and selected components.
  • the ARFN 102 is configured to scan at least a portion of an environment 502 and the objects therein, as well as project content, as described above.
  • a chassis 504 is configured to hold the components of the ARFN 102 , such as the projector 112 .
  • the projector 112 is configured to generate images, such as visible light images perceptible to the user, visible light images imperceptible to the user, images with non-visible light, or a combination thereof.
  • This projector 112 may comprise a digital micromirror device (DMD), liquid crystal on silicon display (LCOS), liquid crystal display, 3LCD, and so forth configured to generate an image and project it onto a surface within the environment 502 .
  • the projector may be configured to project the content illustrated in FIGS. 1-4 .
  • the projector 112 has a projector field of view 506 which describes a particular solid angle.
  • the projector field of view 506 may vary according to changes in the configuration of the projector. For example, the projector field of view 506 may narrow upon application of an optical zoom to the projector.
  • the ARFN 102 may include a plurality of projectors 112 .
  • a camera 114 may also reside within the chassis 504 .
  • the camera 114 is configured to image the scene in visible light wavelengths, non-visible light wavelengths, or both.
  • the camera 114 has a camera field of view 508 , which describes a particular solid angle.
  • the camera field of view 508 may vary according to changes in the configuration of the camera 114 . For example, an optical zoom of the camera may narrow the camera field of view 508 .
  • the ARFN 102 may include a plurality of cameras 114 .
  • the chassis 504 may be mounted with a fixed orientation, or be coupled via an actuator to a fixture such that the chassis 504 may move.
  • Actuators may include piezoelectric actuators, motors, linear actuators, and other devices configured to displace or move the chassis 504 or components therein such as the projector 112 and/or the camera 114 .
  • the actuator may comprise a pan motor 510 , a tilt motor 512 , and so forth.
  • the pan motor 510 is configured to rotate the chassis 504 in a yawing motion, while the tilt motor 512 is configured to change the pitch of the chassis 504 . By panning and/or tilting the chassis 504 , different views of the scene may be acquired.
  • One or more microphones 514 may also reside within the chassis 504 , or elsewhere within the environment 502 . These microphones 514 may be used to acquire input from the user, for echolocation, location determination of a sound, or to otherwise aid in the characterization of and receipt of input from the environment 502 .
  • the user may make a particular noise such as a tap on a wall or snap of the fingers which are pre-designated as inputs within the augmented reality environment.
  • the user may alternatively use voice commands.
  • Such audio inputs may be located within the environment 502 using time-of-arrival differences among the microphones and used to summon an active zone within the augmented reality environment. Of course, these audio inputs may be located within the environment 502 using multiple different techniques.
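A rough sketch of the time-of-arrival approach mentioned above: grid-search the room for the point whose predicted arrival-time differences across the microphones 514 best match the measured ones. The 2-D room, microphone positions, and step size are illustrative simplifications of the 3-D problem.

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second

def predicted_tdoas(point, mics):
    """Arrival-time differences (relative to mic 0) for a source at `point`."""
    times = [math.dist(point, mic) / SPEED_OF_SOUND for mic in mics]
    return [t - times[0] for t in times[1:]]

def locate_source(measured_tdoas, mics, room=(5.0, 4.0), step=0.05):
    """Brute-force search for the point whose predicted TDOAs best match the measurements."""
    best_point, best_err = None, float("inf")
    x = 0.0
    while x <= room[0]:
        y = 0.0
        while y <= room[1]:
            pred = predicted_tdoas((x, y), mics)
            err = sum((p - m) ** 2 for p, m in zip(pred, measured_tdoas))
            if err < best_err:
                best_point, best_err = (x, y), err
            y += step
        x += step
    return best_point

# Illustrative setup: microphones placed at the corners of a 5 m x 4 m room.
mics = [(0.0, 0.0), (5.0, 0.0), (0.0, 4.0), (5.0, 4.0)]
snap_location = locate_source(predicted_tdoas((2.0, 1.5), mics), mics)
```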
  • One or more speakers 516 may also be present to provide for audible output.
  • the speakers 516 may be used to provide output from a text-to-speech module or to playback pre-recorded audio.
  • a transducer 518 may reside within the ARFN 102 , or elsewhere within the environment 502 .
  • the transducer may be configured to detect and/or generate inaudible signals, such as infrasound or ultrasound. These inaudible signals may be used to provide for signaling between ancillary devices and the ARFN 102 .
  • inaudible signals such as infrasound or ultrasound.
  • any other type of technique may be employed to provide for signaling between ancillary devices and the ARFN 102 .
  • a ranging system 520 may also reside in the ARFN 102 .
  • the ranging system 520 is configured to provide distance information from the ARFN 102 to a scanned object or set of objects.
  • the ranging system 520 may comprise radar, light detection and ranging (LIDAR), ultrasonic ranging, stereoscopic ranging, and so forth.
  • the transducer 518 , the microphones 514 , the speakers 516 , or a combination thereof may be configured to use echolocation or echo-ranging to determine distance and spatial characteristics of the environment and objects therein.
  • the computing device 118 resides within the chassis 504 . However, in other implementations all or a portion of the computing device 118 may reside in another location and coupled to the ARFN 102 . This coupling may occur via wire, fiber optic cable, wirelessly, or a combination thereof. Furthermore, additional resources external to the ARFN 102 may be accessed, such as resources in another ARFN 102 accessible via a local area network, cloud resources accessible via a wide area network connection, or a combination thereof.
  • This illustration also depicts a projector/camera linear offset designated “O”.
  • This is a linear distance between the projector 112 and the camera 114 . Placement of the projector 112 and the camera 114 at distance “O” from one another aids in the recovery of structured light data from the scene.
  • the known projector/camera linear offset “O” may also be used to calculate distances, dimensioning objects such as ancillary devices, and otherwise aid in the characterization of objects within the environment 502 .
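The role of the projector/camera linear offset "O" can be illustrated with a simple triangulation sketch: given the baseline and the camera's focal length, the observed shift (disparity) of a projected feature yields the distance to the surface it landed on. The numeric values below are illustrative only.

```python
def depth_from_disparity(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Distance (meters) to the surface a projected feature landed on."""
    if disparity_px <= 0:
        raise ValueError("feature not displaced; cannot triangulate")
    return baseline_m * focal_px / disparity_px

# Example: O = 0.20 m baseline, 1000 px focal length, 80 px observed shift.
distance = depth_from_disparity(0.20, 1000.0, 80.0)  # 2.5 m
```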
  • the relative angle and size of the projector field of view 506 and camera field of view 508 may vary. Also, the angle of the projector 112 and the camera 114 relative to the chassis 504 may vary.
  • the components of the ARFN 102 may be distributed in one or more locations within the environment 502 .
  • microphones 514 and speakers 516 may be distributed throughout the environment 502 .
  • the projector 112 and the camera 114 may also be located in separate chassis 504 .
  • the ARFN 102 may also include discrete portable signaling devices used by users to issue inputs. For example, these may be acoustic clickers (audible or ultrasonic), electronic signaling devices such as infrared emitters, radio transmitters, and so forth.
  • FIGS. 6-7 are illustrative processes 600 and 700 of the ARFN 102 determining the presence of an output device and, in response, utilizing the output device to output content for the purpose of enhancing a user's experience. While the following description may describe the ARFN 102 as implementing these processes, other types of devices may implement a portion or all of these processes in other implementations.
  • FIGS. 8-10 are illustrative processes 800 , 900 , and 1000 of the ARFN 102 complementing the operation of existing output devices in an environment.
  • the processes described herein may be implemented by the architectures described herein, or by other architectures. These processes are illustrated as a collection of blocks in a logical flow graph. Some of the blocks represent operations that can be implemented in hardware, software, or a combination thereof.
  • the blocks represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations.
  • computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types.
  • the order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order or in parallel to implement the processes.
  • the process 600 includes, at 602 , instructing a first display device to project content in an environment (a brief code sketch of this flow appears after this list).
  • the computing device 118 of the ARFN 102 may instruct the projector 112 to project a movie, an eBook, or any other content.
  • the ARFN 102 may identify a second display device in the environment.
  • the ARFN 102 may identify the presence of the television 106 , the tablet computing device 108 , a mobile phone, an electronic book reader device, or any other computing device.
  • the ARFN 102 may identify the second display device in any number of ways. For instance, at 604 ( 1 ), the ARFN 102 may scan the environment to locate the second display device. This may include scanning the environment with a camera to uniquely identify the second display device, with reference to a bar code of the device, a brand name of the device, dimensions of the device, or any other information that may be used to uniquely identify the second display device. The ARFN 102 may also scan the environment to watch for wireless signals (e.g., infrared signals, etc.) sent to or received from the ancillary devices, as described in further detail at 604 ( 4 ).
  • the ARFN 102 may receive an indication of the second display device from a user. For instance, a user within the environment may provide an identification of the second display device to the ARFN 102 .
  • the ARFN may additionally or alternatively receive an indication of the second display device from the second device itself.
  • the second display device may include the application 204 that is configured to send the identity of the second display device to the ARFN 102 .
  • the ARFN 102 may identify the second display device, in whole or in part, by monitoring wireless signals sent to the second display device. For instance, the ARFN 102 may monitor infrared signals sent to the second display device from a remote control used to operate the device, or may monitor any other type of wireless signal. Then, the ARFN 102 may identify the second display device by mapping the identified signal to a make and model of the device. Finally, the ARFN 102 may, at 604 ( 5 ), identify the second display device by sending wireless signals (e.g., infrared signals, etc.) to the second display device and identifying which signal elicits a response from the device. For instance, the ARFN 102 may send infrared signals to the second display device and, upon identifying a signal that elicits a response, may map this signal to a make and model of the device.
  • the second display device may be identified at 604 using any other suitable technique or combination of techniques.
  • the ARFN 102 determines whether or not the second display device is better suited to display the content that is currently being projected. For instance, the ARFN 102 may reference a display characteristic of the second display device to make this determination. This display characteristic may include a resolution of the second display device, a size of the second display device, a location of the second display device within the environment, or any other characteristic. Of course, the ARFN 102 may make this decision with reference to any other factor or combination of factors, including a preference of a user within the environment, ambient lighting conditions in the environment, potential obstructions in the environment, and the like.
  • the ARFN 102 may instruct the second display device to display some or all of the content in response to determining that the second display device is better suited. For instance, at 608 ( 1 ), the ARFN 102 may send the content being projected, or content that is supplemental to the projected content, to the second display device for display. Conversely, at 608 ( 2 ), the ARFN 102 may send an instruction to display the content or the supplemental content to a third party device, such as a local set-top box, a remote satellite provider, or the like.
  • FIG. 7 illustrates another process 700 for utilizing output devices within an environment to enhance a user's consumption of content.
  • the ARFN 102 projects content with a first display device within an environment. For instance, the ARFN 102 may project content via the projector 112 described above.
  • the ARFN 102 determines that a second, different display device is in the environment and is available for displaying content. The ARFN 102 may make this determination in any of the ways described above, or otherwise.
  • the ARFN 102 causes display of content on the second display device at least partly in response to the determining. This content may be the projected content, content that is supplemental to the projected content, or any other content.
  • the ARFN 102 may cause display of the content on the second display device directly in response to the determining, while in other instances this may also occur at least partly in response to receiving an instruction from a user or device. For instance, a user may request (e.g., via a gesture, a voice command, etc.) to display the content via the ARFN 102 .
  • FIG. 8 illustrates an example process 800 of the ARFN 102 supplementing operation of other content output devices in an environment.
  • the ARFN 102 may scan an environment with a camera to identify a display device within the environment that is displaying digital content.
  • the device may comprise a television, a personal computer, a mobile phone, or any other device capable of displaying content.
  • the ARFN 102 may identify the digital content with reference to images scanned by the camera (e.g., by comparing the images to known content, by identifying a channel or other identifier of the content, etc.).
  • the ARFN 102 retrieves digital content that is related to the identified digital content. Finally, at 808 , the ARFN 102 may project, onto a display medium within the environment and via a projector, the digital content that is related to the identified digital content.
  • FIG. 9 illustrates the process 900 , which includes the ARFN 102 identifying content being output in an environment by a first content output device at 902 .
  • the ARFN 102 retrieves content that is related to the content being output by the content output device.
  • the ARFN 102 projects the related content by a second, different content output device in the environment.
  • FIG. 10 illustrates another process 1000 for supplementing the operation of existing display devices in an environment.
  • the ARFN 102 identifies content being displayed in an environment by a first display device.
  • the ARFN 102 determines whether to project the content using a second, different display device in the environment or whether to allow the first display device to continue displaying the content.
  • the ARFN 102 projects the content using the second display device at least partly in response to determining to project the content rather than simply allowing the first display device to continue displaying the content.
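To make the flow of process 600 concrete, the following Python sketch mirrors steps 602 through 608 described above. It is illustrative only: the arfn object, its projector, the identify_second_display and measure_ambient_lux helpers, and the suitability heuristic are all assumptions rather than anything the patent prescribes.

from dataclasses import dataclass

@dataclass
class DisplayCharacteristics:
    resolution: tuple        # (width, height) in pixels
    diagonal_inches: float

def is_better_suited(candidate, projector, ambient_lux):
    # Rough heuristic: prefer the candidate display when it offers more
    # pixels, or when ambient light would wash out a projected image.
    more_pixels = (candidate.resolution[0] * candidate.resolution[1] >
                   projector.resolution[0] * projector.resolution[1])
    bright_room = ambient_lux > 300
    return more_pixels or bright_room

def run_process_600(arfn, content):
    arfn.projector.project(content)              # 602: project with the first display device
    second = arfn.identify_second_display()      # 604: scan, user input, device report, or signals
    if second and is_better_suited(second.characteristics,
                                   arfn.projector.characteristics,
                                   arfn.measure_ambient_lux()):   # 606: compare the devices
        arfn.projector.stop()
        second.display(content)                  # 608: hand off some or all of the content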

Abstract

Techniques for enabling an augmented reality system to utilize existing output devices in an environment are described herein. An augmented reality system may include one or more augmented reality functional nodes (ARFNs) that are configured to project content onto different non-powered and/or powered display mediums. In addition to projecting content in this manner, the ARFN may also utilize existing content output devices within the environment to further enhance a user's experience while consuming the content. For instance, the ARFN may identify that a television exists within the environment and, in response, the ARFN may instruct the television to display certain content. For instance, if the ARFN is projecting a particular movie, the ARFN may stream or otherwise provide an instruction to the television to begin displaying the movie if the television would provide a better viewing experience than the projector.

Description

RELATED APPLICATION
This application is related to U.S. patent application Ser. No. 12/977,924, filed on Dec. 23, 2010 and entitled “Characterization of a Scene with Structured Light,” which is herein incorporated by reference in its entirety.
BACKGROUND
Augmented reality environments enable interaction among users and virtual or computer-generated objects. Within an augmented reality environment, interactions may include electronic input, verbal input, physical input relating to the manipulation of physical objects, and so forth. While augmented reality environments may provide custom display devices for creating the environment, these environments do not utilize existing infrastructure to complement these custom devices.
BRIEF DESCRIPTION OF THE DRAWINGS
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical components or features.
FIG. 1 depicts an example environment that includes an augmented reality functional node (ARFN) and multiple other output devices. Within this environment, the ARFN may identify the presence of the other output devices and cause these devices to output content for the purpose of enhancing a user's consumption of the content.
FIG. 2 depicts the environment of FIG. 1 when the ARFN initially projects content onto a display medium, before transitioning the display of the content to a television that the ARFN has identified as residing within the environment.
FIG. 3 depicts the environment of FIG. 1 when the ARFN again projects content onto a display medium. In this example, however, the ARFN has identified the presence of a stereo system and, therefore, sends audio content for output by the stereo system to supplement the user's consumption of the projected content.
FIG. 4 depicts the environment of FIG. 1 when the user is viewing the television described above. Here, the ARFN detects the content that the user is viewing on the television and, in response, begins projecting supplemental content on a projection surface.
FIG. 5 shows an example augmented reality functional node and selected components.
FIGS. 6-7 are illustrative processes of the ARFN determining the presence of an output device and, in response, utilizing the output device to output content for the purpose of enhancing a user's experience.
FIGS. 8-10 are illustrative processes of the ARFN complementing the operation of existing output devices in the environment of FIG. 1 for the purpose of enhancing a user's experience.
DETAILED DESCRIPTION
An augmented reality system may be configured to interact with objects within a scene and generate an augmented reality environment. The augmented reality environment allows for virtual objects and information to merge and interact with real-world objects, and vice versa.
In order to create this augmented reality environment, the environment may include one or more augmented reality functional nodes (ARFNs) that are configured to project content onto different non-powered and/or powered display mediums. For instance, an ARFN may include a projector that projects still images, video, and the like onto walls, tables, prescribed stationary or mobile projection screens, and the like. In addition, the ARFN may include a camera to locate and track a particular display medium for continuously projecting the content onto the medium, even as the medium moves. In some instances, the environment includes multiple ARFNs that hand off the projection of content between one another as the medium moves between different zones of the environment. In other instances, meanwhile, the ARFNs described herein may be free from any output devices, and may solely control other components in the environment to create the augmented reality environment.
While the ARFN may project content within the environment as discussed above, the ARFN may also interactively control other components within the environment (e.g., other content output devices) to further enhance a user's experience while consuming the content. To provide an example, the ARFN may identify that a flat-screen television exists within the same environment as the ARFN (e.g., within a same room). In response, the ARFN may instruct the television to display certain content. For instance, if the ARFN is projecting a particular movie, this node may determine that a user viewing the movie would have a better viewing experience if the movie were displayed on the television. As such, the ARFN may stream or otherwise provide an instruction to the television to begin displaying the movie. By doing so, the user is now able to view the previously projected content on a display device (here, the television) that is better suited for outputting the content.
To provide another example, envision that the ARFN projects an electronic book (eBook), such as one of the books of the “Harry Potter” series by J. K. Rowling, for a user within the environment. Envision also that the ARFN again identifies the presence of the television as described above. In response, the ARFN may instruct the television to display a trailer for the particular Harry Potter movie that is based on the eBook that the ARFN currently projects. In response, the television may begin displaying the trailer and, as such, the user is able to view the trailer that is associated with the eBook that the user currently reads.
In yet another example, envision that the ARFN identifies a stereo system that resides within the environment. In the example above, the ARFN may instruct the stereo system to output a song or other audio associated with the eBook that the ARFN currently projects. As such, the user may enjoy the soundtrack to “Harry Potter” while reading the projected Harry Potter eBook.
Furthermore, the ARFN may utilize multiple display devices in some instances. For instance, the ARFN may instruct the television to display the trailer for “Harry Potter” while instructing the stereo system to output the audio associated with this trailer. While a few example output devices have been discussed, it is to be appreciated that the ARFN may utilize any number of other output devices capable of outputting content that the ARFN projects, content that is supplemental to the projected content, or any other form of content.
In addition, the ARFN may identify and utilize other devices within the environment for the benefit of the user within the environment. For instance, envision that a camera of the ARFN identifies when the user wakes up in the morning. Having learned the user's routine of making coffee shortly after waking up, the ARFN may send an instruction to the coffee maker to begin its brewing process, even before the user has made his way to the kitchen. In another example, the ARFN may identify (e.g., via the camera) when the user returns home from work in the evening. In response, the ARFN may instruct the oven to preheat to a certain temperature in anticipation of the user using the oven to cook dinner. Again, while a few examples have been described, the ARFN may work in unison with any device within the environment that is capable of communicating with the ARFN directly or indirectly, as described below.
In addition or as an alternative to utilizing existing output devices in an environment to complement operation of the ARFN, the ARFN may itself complement the operation of the existing output devices. For instance, the ARFN may identify existing output devices using any of the techniques discussed briefly above. Furthermore, the ARFN may identify any content that these output devices currently output. For instance, a camera of the ARFN may capture images of a sporting event or a movie that a television outputs and, with these images, the ARFN may identify the content.
In response to identifying the content, the ARFN may either take over the display of the content when appropriate, or may project related content. For instance, the ARFN may determine whether it should project the sporting event being displayed by the television in the above example. The ARFN may make this determination with reference to characteristics of the content, characteristics of the television, characteristics of the projector, preferences of a user in the room, in response to receiving an explicit instruction, or based on any other factor or combination of factors. After making this determination, the ARFN may begin projecting the content and may, in some instances, instruct the television to cease the display of the content. In some instances, the ARFN may receive the content from the television, from a content provider that was providing the content to the television, or from another content provider.
In another example, meanwhile, the ARFN may project content that is related to the output content. For instance, at least partly in response to determining that the television is displaying a particular sporting event, the ARFN may retrieve and project content that is related (e.g., supplemental) to the event. For instance, the ARFN may navigate to a website to retrieve and project statistics associated with the teams that the television currently displays. For instance, the ARFN may identify players that are being televised on the television and that are on a fantasy sports team of a user viewing the sporting event. The ARFN may, in response, retrieve statistics of these players during the game being televised, during the season, or the like. These statistics may include the number of fantasy points that these player(s) have acquired, or any other type of statistic. Additionally or alternatively, the ARFN may begin playing audio associated with the sporting event or may supplement the display of the event in any other way.
The detailed discussion below begins with a section entitled “Example Augmented Reality Environment,” which describes one non-limiting environment that may implement the described techniques for utilizing existing output devices within an environment. A section entitled “Example Output Device Utilization” follows and describes several examples where an ARFN from FIG. 1 may use identified devices to output previously-projected content or to supplement the projected content. Next, a section entitled “Example Augmented Reality Functional Node (ARFN)” provides additional details regarding an example ARFN that the previously-described environment may implement. A section entitled “Example Processes” follows, before a brief conclusion ends the discussion. This brief introduction, including section titles and corresponding summaries, is provided for the reader's convenience and is not intended to limit the scope of the claims, nor the sections that follow.
Example Augmented Reality Environment
FIG. 1 shows an illustrative augmented reality environment 100 that includes at least one augmented reality functional node (ARFN) 102. In this illustration, multiple ARFNs 102 are positioned in the corners of the ceiling of the room, although in other implementations the ARFNs 102 may be positioned in other locations within the environment 100. Further, while FIG. 1 illustrates four ARFNs 102, other embodiments may include more or fewer nodes.
As described briefly above, the ARFN 102 may function to project content onto different display mediums within the environment 100, as well as identify other output devices within the environment to utilize for outputting this content or to otherwise supplement the projected content. FIG. 1, for instance, illustrates that the ARFN 102 may project any sort of visual content (e.g., eBooks, videos, images, etc.) onto a projection area 104. While the projection area 104 here comprises a portion of the wall, the projection area 104 may comprise the illustrated table, chair, floor, or any other illustrated or non-illustrated surface within the environment 100. In some instances, the illustrated user may carry a prescribed powered or non-powered display medium that the ARFN 102 tracks and projects content onto.
In this example, the environment 100 further includes a television 106, a tablet computing device 108, and a stereo system 110. As described in detail below, the ARFN 102 may identify each of these output devices and may utilize one or more of these devices for outputting the previously-projected content, for outputting content that is supplemental to the projected content, and/or to output any other form of content. While FIG. 1 illustrates the television 106, the tablet 108, and the stereo system 110, in other embodiments the ARFN 102 may identify and utilize a mobile phone, a laptop computer, a desktop computer, an electronic book reader device, a personal digital assistant (PDA), a portable music player, and/or any other computing, display, or audio device configured to output content visually, audibly, tactilely, or in any other manner.
As illustrated, the ARFN 102 includes a projector 112, a camera 114, one or more interfaces 116, and a computing device 118. Some or all of the components within the ARFN 102 may couple to one another in a wired manner, in a wireless manner, or in any other way. Furthermore, while FIG. 1 illustrates the components as residing adjacent to one another, some or all of the components may reside remote from one another in some instances.
Turning to these components, the projector 112 functions to project content onto a projection surface 104, as described above. The camera 114, meanwhile, may be used to track a mobile projection surface, to identify the devices within the environment, to identify user behavior, and the like, as described below.
The interfaces 116 may enable the components within the ARFN 102 to communicate with one another, and/or may enable the ARFN 102 to communicate with other entities within and outside of the environment 100. For instance, the interfaces 116 may include wired or wireless interfaces that allow the ARFN to communicate with the devices 106, 108, and 110 over a local area network (LAN), a wide area network (WAN), or over any other sort of network. The ARFN 102 may communicate directly with these devices (e.g., wired or wirelessly), or may communicate with these devices via third party devices, such as set-top boxes, content providers that communicate with the devices, and the like. Furthermore, the interfaces 116 may allow the ARFN 102 to receive content from local or remote content providers for the purpose of projecting or otherwise outputting this content, or for sending the content to other devices in the environment 100.
As illustrated, the ARFN may include or otherwise couple to the computing device 118. This computing device 118 may reside within a housing of the ARFN 102, or may reside at another location. In either instance, the computing device 118 comprises one or more processors 120 and memory 122. The memory 122 may include computer-readable storage media (“CRSM”), which includes any available physical media accessible by a computing device to implement the instructions stored thereon. CRSM may include, but is not limited to, random access memory (“RAM”), read-only memory (“ROM”), electrically erasable programmable read-only memory (“EEPROM”), flash memory or other memory technology, compact disk read-only memory (“CD-ROM”), digital versatile disks (“DVD”) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computing device.
The memory 122 may store several modules such as instructions, datastores, and so forth, each of which is configured to execute on the processors 120. While FIG. 1 illustrates a few example modules, it is to be appreciated that the computing device 118 may include other components found in traditional computing devices, such as an operating system, system buses, and the like. Furthermore, while FIG. 1 illustrates the memory 122 as storing the modules and other components, this data may be stored on other local or remote storage devices that are accessible to the computing device 118 via a local network, a wide area network, or the like. Similarly, the processors 120 may reside locally to one another, or may be remote from one another to form a distributed system.
As illustrated, the memory 122 may store or otherwise have access to an ancillary device identification module 124, a datastore 126 storing indications of one or more identified devices 126(1), . . . , 126(N), a content output module 128, a content datastore 130, and an ancillary device adjustment module 132.
The ancillary device identification module 124 functions to identify devices within the environment 100 that may be available for outputting content in lieu of, or in addition to, the ARFN 102. For instance, in this example, the ancillary device identification module 124 may function to identify the television 106, the tablet device 108, and the stereo system 110. Furthermore, in some instances, the module 124 may identify devices that reside outside of the environment 100.
The module 124 may identify these devices in any number of ways. The scanning module 134, for instance, may work in combination with the camera 114 to scan the environment 100 for the devices. The camera 114 may seek unique identifiers associated with the devices and may pass these identifiers back to the scanning module 134 for identification of a device. For instance, one or more of the devices within the environment 100 may include a unique bar code, serial number, brand/model name, and/or any other identifier that the camera may read and pass back to the scanning module 134.
In response to receiving a unique identifier, the scanning module 134 may attempt to determine the identity of the device and, in some instances, one or more characteristics of the device. For instance, if the camera reads and returns a unique bar code on the television 106, the scanning module may map this bar code to the make and model of the television. With this information, the ancillary device module 124 may determine one or more characteristics of the television 106, such as a resolution of the display of the television, a size of the display, a type of the display (e.g., LCD, etc.) and the like.
In addition, the camera 114 may also provide an indication of the television's location within the room. The ancillary device identification module 124 may then store this information in association with the device in the datastore 126. That is, the module 124 may store the determined characteristics of the television 106 in the datastore 126, thereby allowing the ARFN to later reference information associated with the television 106 when determining whether to utilize another device for outputting content.
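A minimal sketch of this bar-code path follows; the lookup table, field names, and identify_from_scan helper are hypothetical, and in practice the mapping from identifiers to device characteristics could just as easily be a remote service.

KNOWN_DEVICES = {
    "0123456789012": {                        # bar code read by the camera 114
        "make": "ExampleCo", "model": "TV-55X", "type": "television",
        "diagonal_inches": 55, "resolution": (1920, 1080), "panel": "LCD",
    },
}

def identify_from_scan(identifier, location):
    record = KNOWN_DEVICES.get(identifier)
    if record is None:
        return None                           # fall back to other identification techniques
    # Attach where the camera observed the device so the record stored in the
    # identified-devices datastore 126 also captures its location in the room.
    return dict(record, location=location)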
In another example, the camera 114 may read a brand name on a device, such as the television 106, and may pass this brand name back to the scanning module 134. In addition, the camera 114 and the scanning module 134 may together estimate dimensions of the television 106 with reference to objects of known size within the room, and/or utilizing any of the structured light techniques described in the related application incorporated by reference above. The camera 114 and the scanning module 134 may also identify any other salient features that may help in identifying the television 106, such as the color of the television, the layout of the controls, and the like. Using the brand name, the estimated dimensions, and/or other aggregated information, the scanning module 134 may determine or deduce the identity of the television 106.
In some instances, the scanning module 134 may make this determination by comparing the aggregated information with information accessible via the interfaces 116. For instance, the scanning module 134 may access, via the interfaces 116, a website associated with the identified brand name to compare the estimated dimensions and the like against models described on the site. The module 134 may then identify a particular model based on this comparison. Furthermore, and as discussed above, the module 134 may store this identification in the datastore 126 for later access by the ARFN 102.
In addition or in the alternative, the ancillary device identification module 124 may identify the devices within the environment 100 with use of the signal monitoring module 136. The signal monitoring module 136 may attempt to monitor or listen for signals sent to or from devices within the environment. For instance, if a device is controllable via a remote control, the module 136 may monitor signals sent between the remote control and the corresponding device to help identify the device.
In one example, the illustrated user may control the television 106 via a remote control that sends infrared or another type of wireless signal to the television 106 for operating the television 106. The signal monitoring module 136 may monitor these wireless signals and may use these unique signals to deduce the identity of the television 106. In some instances, the ancillary device identification module 124 may utilize the monitored signals, the images captured by the camera 114, and/or other information to best deduce the most likely identity of the device, such as the television 106.
Further, the ancillary device identification module 124 may utilize the signal transmission module 138 to identify the devices within the environment 100. The signal transmission module 138 may send signals (e.g., infrared or other wireless signals) to the device in an attempt to elicit a response from the device. When a device does respond, the module 138 may determine the identity of the device based at least in part on which of the transmitted signals successfully elicited the response.
For instance, the signal transmission module 138 may send multiple different infrared signals to the television 106 in an attempt to turn on the television 106 when it is currently off, or to otherwise control the television in an identifiable manner. For instance, the module 138 may reference a manual of known signals used by different brands and/or models to determine which signal the television 106 responds to. Each of these signals may be sent at a different frequency or may be encoded uniquely. Upon successfully turning on the television 106 with use of a transmitted infrared signal, the module 138 may map this signal back to a particular make and/or model of device. The module 138 may then store this device identification in the datastore 126, as discussed above.
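The probing behavior of the signal transmission module 138 might be sketched as follows; the code table and the ir_emitter and camera interfaces are invented for illustration and are not part of the patent.

CANDIDATE_POWER_CODES = {
    "brand_a_model_1": b"\x20\xDF\x10\xEF",   # hypothetical infrared power-on codes
    "brand_b_model_2": b"\x04\xFB\x08\xF7",
}

def probe_with_ir(ir_emitter, camera, codes=CANDIDATE_POWER_CODES):
    for device_id, code in codes.items():
        ir_emitter.send(code)                          # attempt to turn the device on
        if camera.detect_state_change(timeout_s=2.0):  # e.g., the screen lit up
            return device_id                           # the responsive code implies a make/model
    return None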
While a few example modules have been illustrated, it is to be appreciated that the ancillary device identification module 124 may identify the devices in any other number of ways. For instance, a user of the ARFN 102 may explicitly identify the devices, or the devices may include applications for sending their identifications to the ARFN 102. For instance, a user may download an application to each of her output devices in the environment, with the application functioning to determine the identity of the corresponding device and send this information to the ARFN 102 for storage in the datastore 126. FIGS. 2-3 illustrate and describe such an application in further detail.
After identifying the devices that are within the environment 100 and available for use, the content output module 128 may utilize these devices to output content when appropriate. As illustrated, the content output module 128 may access content stored in the content datastore 130 for outputting on the projector 112 and/or on one of the devices within the environment 100 identified in the datastore 126. This content may be persistently stored in the content datastore 130, or the content may comprise data that is being streamed or otherwise provided “on demand” to the ARFN 102. The content may comprise eBooks, videos, songs, images, pictures, games, productivity applications, or any other type of content that the illustrated user may consume and/or interact with.
In addition or in the alternative to outputting this content via the projector 112, the content output module 128 may utilize one or more of the devices identified in the datastore 126. As such, the module 128 stores or otherwise has access to an analysis module 140 and an instruction module 142. The analysis module 140 functions to determine when it is appropriate to utilize one of the output devices identified in the datastore 126, while the instruction module 142 functions to instruct the devices to output content in response to such a determination.
The analysis module 140 may determine to utilize one of the other output devices in any number of ways. For instance, the user within the environment may explicitly instruct the ARFN 102 to utilize one of these devices. For example, envision that the ARFN 102 is projecting a movie onto a wall of the environment 100, and that the user requests to switch the output to the television 106 and the stereo system 110. In response, the content output module 128 may cease the projecting of the content and the instruction module 142 may instruct the television 106 and the stereo system 110 to begin outputting the visual and audio components of the movie, respectively. In response, the television 106 may begin displaying the movie and the speakers may begin outputting the audio content of the movie at a location where the projecting of the movie left off. In some instances, the user may provide this instruction via a gesture received by the camera 114, via a physical control (e.g., a remote control) used to operate the ARFN 102, via a sound command (e.g., a voice command from a user) received by a microphone of the ARFN 102, or in any other suitable way.
In yet another instance, the analysis module 140 may determine to utilize an output device other than the projector 112 with reference to the content and/or with reference to characteristics of the available devices themselves. For instance, when the ARFN 102 identifies the presence of the television 106, the ARFN 102 may determine one or more display characteristics (e.g., size, resolution, etc.) of the television 106, as discussed above. When projecting a movie or in response to receiving a request to project a movie, the analysis module may compare these characteristics (e.g., the display characteristics) to characteristics of the projector 112 and potentially to other devices in the environment 100. Based on this comparison, the analysis module 140 may determine which device or combination of devices is best suited for outputting the content. In the movie example, the ARFN may determine that the television 106 and the stereo system 110 together are best suited to output the movie and, in response, the instruction module 142 may instruct these devices to do so.
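One way to express this comparison is as a simple scoring function over the characteristics stored in the datastore 126, as in the sketch below. The weights are arbitrary placeholders; the analysis module 140 could apply any ranking scheme.

def score_device(device, content, ambient_lux, user_prefs):
    score = device["resolution"][0] * device["resolution"][1] / 1e6    # megapixels
    score += device["diagonal_inches"] / 10.0
    if device["type"] == "projector" and ambient_lux > 300:
        score -= 5.0                                # projection suffers in a bright room
    if content.get("kind") == "movie" and device.get("has_surround_audio"):
        score += 2.0
    return score + user_prefs.get(device["model"], 0.0)    # per-user bias

def best_suited(devices, content, ambient_lux, user_prefs):
    return max(devices, key=lambda d: score_device(d, content, ambient_lux, user_prefs))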
In still another example, the content being projected itself may include an instruction to utilize one or more of the other devices within the environment 100. For instance, envision that the projector 112 currently projects an eBook “Harry Potter” that the user within the environment 100 is reading. This eBook may include an instruction to play a clip of the movie “Harry Potter” when the user reaches a particular scene. Or, the content may include an instruction to play the Harry Potter theme song upon the user finishing the book. In yet another example, the user may request to transition from projecting of the eBook to playing an audio version of the eBook via the stereo system 110. Or, the user may be leaving the environment (e.g., going outside for a walk) and, hence, may request that the eBook be sent to the tablet device 108 that communicates with the ARFN 102, or another content provider that provides the content, via a wide area network (e.g., a public WiFi network, a cellular network, etc.), or the like.
In still other instances, the ARFN 102 may monitor current conditions within the environment (e.g., with use of the camera 114, microphones, etc.) to determine whether to project the content and/or utilize an existing output device. This may include measuring ambient light in the environment 100, measuring a glare on the screen of the television 106 or other output device, or any other environmental condition.
While a few examples have been described, the analysis module 140 may determine to utilize ancillary output devices based on the occurrence of multiple other events and based on multiple other factors. For instance, the ARFN 102 may decide to utilize ancillary output devices—or may refrain from doing so—based on a size of a display, a resolution of a display, a location of a display, current ambient lighting conditions in the environment, potentially environmental obstructions that could obstruct the projection of the content or the user's viewing of the content on an ancillary output device, and/or any other factor or combination of factors.
Further, in some embodiments the ARFN 102 may supplement the projected content with supplementary information, such as advertisements, as discussed above. The use of advertisements may offset some of the cost of the content, potentially saving the user money if he purchases the content. Therefore, in some instances the use of additional advertisements—and the potential savings realized therefrom—may push the balance in favor of projecting the content rather than displaying it on the ancillary output device.
In each instance, the instruction module 142 may send an instruction to the particular device via the interfaces 116, instructing the device to begin outputting certain content. In some instances, the ARFN 102 and the devices (e.g., the television 106, the tablet device 108, the stereo system 110, etc.) may directly couple to one another in a wired or wireless manner. In this instance, the instruction module 142 may send the instruction over this direct connection. In other instances, meanwhile, the instruction module 142 may send the instruction to a third party device, such as to a local set-top box that controls operation of the television 106, to a remote satellite provider that controls operation of the television 106, and the like.
In some instances, the instruction module 142 includes the content along with the request. For instance, when transitioning from projecting a movie to displaying the movie on the television 106, the module 142 may stream the content from the content datastore 130 directly to the television 106. For example, the ARFN 102 may utilize an auxiliary input that allows the ARFN 102 to cause output of content on the device, such as the television 106. In one specific example, the ARFN 102 could send content via a wireless HDMI connection or via an RF HD broadcast. In the latter instances, the ARFN 102 may broadcast the content at a power level that complies with regulatory requirements.
In other instances, meanwhile, the module 142 may refrain from sending the content to the ancillary output device, such as the television 106, but may instead send an instruction to the device or the third party device for the output device to obtain the content. In one example, the module 142 may operate the ancillary output device to obtain the content. For instance, the module 142 may turn on the television 106 and change the input channel to the channel displaying the appropriate content (e.g., the correct movie, show, etc.).
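The two delivery paths just described, sending the content directly versus instructing the device or a third party to obtain it, might be sketched as follows; the device and third_party interfaces are hypothetical.

def hand_off(device, content, third_party=None):
    if device.supports_direct_input():           # e.g., wireless HDMI or an RF HD broadcast
        device.power_on()
        device.select_input("ARFN")
        device.stream(content)                   # stream from the content datastore 130
    elif third_party is not None:                # e.g., a set-top box or satellite provider
        third_party.request_playback(device_id=device.id, content_id=content.id)
    else:
        device.power_on()                        # operate the device to obtain the content itself
        device.tune_to(content.channel)          # e.g., via infrared channel commands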
Finally, FIG. 1 illustrates that the ARFN 102 may include or have access to the ancillary device adjustment module 132. The module 132 may function to automatically adjust devices within the environment similar to how a technician may adjust such devices. For instance, upon sending content to the television 106, the module 132 may adjust settings of the television 106, such as brightness, contrast, volume, and the like. By doing so, the ARFN 102 is able to optimize settings associated with the devices in the environment 100, as well as potentially tailor these settings based on the content being output and preferences of the users consuming the content.
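A small sketch of that adjustment step, with invented setting profiles and a hypothetical device.set interface, could look like this:

SETTING_PROFILES = {
    "movie":  {"brightness": 45, "contrast": 60, "volume": 30},
    "sports": {"brightness": 55, "contrast": 70, "volume": 40},
}

def tune_device(device, content_kind, user_overrides=None):
    settings = dict(SETTING_PROFILES.get(content_kind, {}))
    settings.update(user_overrides or {})        # user preferences win over the defaults
    for name, value in settings.items():
        device.set(name, value)                  # e.g., via infrared codes or a network API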
Furthermore, in addition to the devices 106, 108, and 110 supplementing the operation of the ARFN 102, the ARFN 102 may act to supplement the operation of these devices 106, 108, and 110. For instance, the ARFN 102 may utilize the projector 112 to project content in lieu of or along with one of these devices within the environment 100. In one example, the ARFN 102 may instruct the projector 112 to project content previously output by one of the other devices 106, 108, or 110. As such, the ARFN 102 may also instruct the device previously outputting the content to cease doing so. Additionally or alternatively, the ARFN 102 may instruct the projector 112 to project content that is related to the content being output by one of the other devices 106, 108, or 110, as discussed below.
To complement the operation of the devices 106, 108, and/or 110 in this manner, the ARFN 102 may identify the devices in any of the ways described above. For instance, the camera 114 may scan the environment to identify the devices, the user within the environment 100 may explicitly identify the devices to the ARFN 102, the devices themselves may send their identities to the ARFN 102, and the like.
Furthermore, the memory 122 of the ARFN 102 may store a content identification module 144 that functions to identify the content being output by one or more of the devices identified in the datastore 126. The content identification module 144 may identify this content in any number of ways. For instance, in one example, the camera 114 may scan content output devices that display visual content, such as the television 106, the tablet computing device 108, and the like. In doing so, the camera 114 may capture images that these devices display and may provide these images to the content identification module 144.
In response to receiving these images, the module 144 may compare these images to images associated with known content to find a match and identify the content. For instance, the module 144 may send the received images of the displayed content to a web service or other entity that may match the content to known content. Alternatively, the module 144 may send the images (e.g., in the form of video) to a group of one or more human users that may identify the content.
In another example, the camera 114 may scan the environment 100 to identify visual indications that can be mapped to the content being output. For instance, the camera 114 may capture an image of a set-top box that controls the television 106, where the set-top box visually indicates a television channel that this user in the environment is currently watching. The content identification module 144 may then map this channel to identified content, such as with reference to a channel listing of television content. To provide another example, the camera 114 may capture images of a receiver of the stereo system 110, where the receiver visually indicates a name of a song or artist that the stereo system currently plays. The content identification module 144 may then identify the content with use of these images.
In another example, a microphone of the ARFN (illustrated below with reference to FIG. 5) may capture audio content and provide this to the module 144. The content identification module 144 may then attempt to match this audio content to known audio content to make the identification. In yet another example, the camera 114 may capture images of a remote control being used by a user within the environment to control the output device, such as the television 106. The ARFN 102 may use these captured images to identify the output device itself, as well as the content being output. For instance, using the television 106 as an example, the camera 114 may capture the sequence of buttons that the user pushes in order to deduce the channel that the television 106 is currently displaying. Furthermore, the ARFN 102 may use a collection of this information in unison to deduce the identity of the content being output.
In still other instances, the output device (e.g., the television 106) or a device coupled to the output device (e.g., the set-top box) may inform the ARFN 102 directly of the content being output. Additionally or alternatively, the user within the environment may provide this identification to the ARFN 102.
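Because each of these identification signals may be individually weak, one plausible approach is to combine them and keep the answer that the most signals agree on, as in the sketch below. The image_matcher, audio_matcher, and channel_guide services are assumptions, not components the patent defines.

def identify_content(frames=None, audio_clip=None, observed_channel=None,
                     image_matcher=None, audio_matcher=None, channel_guide=None):
    guesses = []
    if frames and image_matcher:
        guesses.append(image_matcher.match(frames))            # compare frames to known content
    if audio_clip and audio_matcher:
        guesses.append(audio_matcher.fingerprint(audio_clip))  # match against known audio
    if observed_channel and channel_guide:
        guesses.append(channel_guide.now_playing(observed_channel))
    guesses = [g for g in guesses if g is not None]
    # Prefer the identification that the independent signals agree on most often.
    return max(set(guesses), key=guesses.count) if guesses else None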
Regardless of how the content identification module 144 identifies the content, the content output module 128 may selectively instruct the projector 112 to project the content being output by the output device (e.g., the television 106) or content that is related to the content being output by the output device. For instance, the content output module 128 may determine one or more characteristics (e.g., display characteristics) of the content output device and may determine, based on these characteristics, if the viewing experience of the user would be enhanced by projecting all or part of the content via the projector 112. That is, the module 128 may determine whether the projector 112 is better suited to output the content and, if so, may instruct the projector 112 to project the content (in whole or in part).
In other instances, the content output module 128 may reference a characteristic of the content to determine whether the projector is better suited to output the content. For instance, if the television is displaying a static image, the content output module 128 may determine that it is more cost effective and equally suitable in terms of quality to project this static image. Of course, the converse may be true in other instances. Furthermore, without regard to the type of the content, the content output module 128 may identify whether the identified content can be obtained in a more cost-effective manner after identifying the content. For instance, if the user is watching a particular movie, the content output module 128 may identify multiple content providers that provide the same or similar movies, as well as the cost charged by each provider. The ARFN 102 may then output a less expensive version of the movie in response, or may suggest to the user to consider the less expensive content provider when obtaining movies in the future (e.g., in the event that the user has already paid for the movie that is currently being displayed).
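The cost comparison could be as simple as the following sketch, where the provider objects and their price_for query are hypothetical:

def cheapest_offer(content_id, providers):
    offers = []
    for provider in providers:
        price = provider.price_for(content_id)    # None if the provider lacks the title
        if price is not None:
            offers.append((price, provider.name))
    return min(offers) if offers else None        # (price, provider_name), or None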
In still other instances, the content output module 128 may identify a user within the environment 100 with use of images received by the camera 114. Conversely, the user may provide an indication of his presence within the environment 100. In either instance, the content output module 128 may identify and reference a preference of the particular user when deciding whether to project the content. For instance, different users may set up different preferences regarding when to switch the output of content to the projector 112. In each of these instances, the content output module 128 may allow the content output device within the environment 100 (e.g., the television 106) to continue outputting the content, or the module 128 may instruct the device to cease outputting the content. For instance, the content output module 128 may turn off the television 106 (e.g., via an infrared or other wireless signal).
In some instances, the content output module 128 may instruct the projector 112 to project content that is related to the content being output by the output device—instead of or in addition to projecting the content previously output by the content output device. For instance, in response to the content identification module 144 identifying the content, the content output module 128 may retrieve related content via the interfaces 116. In some instances, the content output module 128 may request and retrieve content from a content provider that is different than the content provider that provides the identified content (e.g., the content being displayed by the television 106). For instance, if the television 106 displays a sporting event, the content output module 128 may request and retrieve content from a website that stores statistics or other types of information related to the sporting event. The content output module 128 may then instruct the projector 112 to project this supplemental information for the purpose of enhancing a viewing experience of the user within the environment 100.
In other instances, meanwhile, the related content may comprise one or more advertisements. These advertisements may or may not be associated with items displayed or otherwise output on the content output device. For instance, if the ARFN 102 determines that the stereo system 110 is playing a song by a certain artist, the ARFN 102 may instruct the projector 112 to project an advertisement to purchase the song or the album, or may project another advertisement associated with the song that is currently playing. In another example, the ARFN 102 may determine that the television 106 is playing a certain movie or show. In response, the ARFN 102 may project, output audibly, or otherwise output an advertisement that is associated with the content being output. This may include an advertisement to purchase or obtain the movie or show itself, to purchase or obtain items in the movie or show, or an advertisement for any other type of item (e.g., a product or service) that is associated with the displayed content.
Conversely, the ARFN 102 may utilize the other content output devices in the environment for outputting these types of advertisements. For instance, when the projector 112 projects certain content, the ARFN 102 may provide an instruction to one or more of the devices 106, 108, and 110 to output an advertisement that is related to the projected content. For instance, if the projector 112 projects a sporting event, the ARFN 102 may instruct the television to display an advertisement for purchasing tickets to future games, for sports memorabilia associated with the teams currently playing in the sporting event, or the like.
Example Output Device Utilization
FIG. 2 depicts the environment 100 when the ARFN 102 projects content onto the projection surface 104 and then transitions the display of the content to the example television 106.
As illustrated, at 202(1), the ARFN 102 projects content onto the projection surface 104. Here, the projection surface 104 comprises a portion of the wall within the environment 100. During this projection, the ARFN 102 decides to transition the projecting of the content to another device within the environment 100, and namely to the television 106. For instance, the ARFN 102 may have received a request from the user, may have received a request from the content, may have identified the presence of the television 106, or may have decided to transition the content for any other reason. In one example, the ARFN 102 may have determined that the user's viewing experience will likely be enhanced by presenting the content on the television 106 rather than via the projector 112.
Between 202(1) and 202(2), FIG. 2 illustrates that the ARFN 102 instructs the television 106 to begin displaying the previously-projected content. As illustrated, the television 106 may have been configured to communicate with the ARFN 102 via an application 204, or the ARFN 102 may simply communicate with the television 106 via means that the television 106 provides off the shelf. For instance, the ARFN 102 may communicate with the television 106 via infrared signals and may provide the content to the television 106 via an HDMI or another input channel that the television 106 provides natively.
At 202(2), the ARFN 102 has turned on the television 106 and has begun causing display of the content. The ARFN 102 may stream the content directly to the television 106, the ARFN 102 may configure the television 106 to play the content (e.g., by tuning the television 106 to an appropriate channel), or the ARFN 102 may instruct a third party device (e.g., associated with a satellite provider, etc.) to play the content on the television 106. In each of these instances, the ARFN 102 has successfully transitioned from projecting the content to displaying the content on the television 106, which may provide better display characteristics that enhance the user's viewing of the content.
FIG. 3 depicts another example of the ARFN 102 utilizing existing output devices within the environment 100. At 302(1), the ARFN 102 again projects content onto the projection surface 104. Thereafter, the ARFN 102 determines to utilize the stereo system 110 within the environment 100 for the purpose of providing better audio output for the content that the user is currently consuming. As such, between 302(1) and 302(2), the ARFN 102 instructs the stereo system 110 to output the audio content associated with the projected content. Similar to the television 106, the stereo system 110 may store the application 204 that allows the stereo system to communicate with the ARFN 102. In other instances, however, the ARFN 102 may communicate with the stereo system 110 using components that are native to the stereo.
At 302(2), the stereo system 110 outputs the audio content associated with the projected content. As such, the ARFN 102 is able to supplement the projected content using the stereo system 110 residing within the environment 100.
FIG. 4 depicts an example of the ARFN 102 supplementing the operation of existing output devices within the environment 100. Here, at 402(1), the user watches content on the television 106. In response, the ARFN 102 may identify the content that the television is currently displaying. The ARFN 102 may identify the content in a variety of ways as described above. For instance, the camera 114 may capture images of the content and may compare these images to known content. Conversely, the camera 114 may read a set-top box coupled to the television 106 that displays the channel that the user currently views, or may learn of the channel that the user is watching by communicating directly with the set-top box or the television 106. In yet another example, the user himself may inform the ARFN 102 of the content that the television 106 currently displays.
In each of these instances, between 402(1) and 402(2), the ARFN 102 may utilize the interfaces 116 to retrieve content that is associated with and supplemental to the content that the television 106 currently displays. For instance, the ARFN 102 may communicate with a content provider 404, which may comprise a website or any other entity that stores information related to the currently broadcast content.
At 402(2), the ARFN projects the supplemental content onto a projection surface 406 using the projector 112. In this example, the television 106 displays a car race and, in response, the ARFN 102 retrieves and projects information regarding the cars and the drivers of the cars that the television 106 currently displays. As such, the user is able to learn information about the cars and the drivers in addition to the information being broadcast on the television 106. Furthermore, and as discussed above, the supplemental content may comprise advertisements that are associated with the content on the television 106. For instance, the ARFN 102 may identify that one of the cars is sponsored by “Clorox®”. In response, the ARFN 102 may retrieve and project an advertisement for Clorox, a competitor of Clorox, and/or any other associated product or service.
Example Augmented Reality Functional Node (ARFN)
FIG. 5 shows an illustrative schematic 500 of the ARFN 102 and selected components. The ARFN 102 is configured to scan at least a portion of an environment 502 and the objects therein, as well as project content, as described above.
A chassis 504 is configured to hold the components of the ARFN 102, such as the projector 112. The projector 112 is configured to generate images, such as visible light images perceptible to the user, visible light images imperceptible to the user, images with non-visible light, or a combination thereof. This projector 112 may comprise a digital micromirror device (DMD), liquid crystal on silicon display (LCOS), liquid crystal display, 3LCD, and so forth configured to generate an image and project it onto a surface within the environment 502. For example, as described herein, the projector may be configured to project the content illustrated in FIGS. 1-4.
The projector 112 has a projector field of view 506 which describes a particular solid angle. The projector field of view 506 may vary according to changes in the configuration of the projector. For example, the projector field of view 506 may narrow upon application of an optical zoom to the projector. In some implementations, the ARFN 102 may include a plurality of projectors 112.
A camera 114 may also reside within the chassis 504. The camera 114 is configured to image the scene in visible light wavelengths, non-visible light wavelengths, or both. The camera 114 has a camera field of view 508, which describes a particular solid angle. The camera field of view 508 may vary according to changes in the configuration of the camera 114. For example, an optical zoom of the camera may narrow the camera field of view 508. In some implementations, the ARFN 102 may include a plurality of cameras 114.
The chassis 504 may be mounted with a fixed orientation, or be coupled via an actuator to a fixture such that the chassis 504 may move. Actuators may include piezoelectric actuators, motors, linear actuators, and other devices configured to displace or move the chassis 504 or components therein such as the projector 112 and/or the camera 114. The actuator may comprise a pan motor 510, a tilt motor 512, and so forth. The pan motor 510 is configured to rotate the chassis 504 in a yawing motion, while the tilt motor 512 is configured to change the pitch of the chassis 504. By panning and/or tilting the chassis 504, different views of the scene may be acquired.
One or more microphones 514 may also reside within the chassis 504, or elsewhere within the environment 502. These microphones 514 may be used to acquire input from the user, for echolocation, location determination of a sound, or to otherwise aid in the characterization of and receipt of input from the environment 502. For example, the user may make a particular noise, such as a tap on a wall or a snap of the fingers, that is pre-designated as an input within the augmented reality environment. The user may alternatively use voice commands. Such audio inputs may be located within the environment 502 using time-of-arrival differences among the microphones and used to summon an active zone within the augmented reality environment. Of course, these audio inputs may be located within the environment 502 using multiple different techniques.
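As one concrete illustration of locating such an audio input from time-of-arrival differences, the following Python sketch performs a coarse grid search over a room for the source position whose predicted pairwise arrival-time differences best match the measured ones. This is a minimal example under assumed microphone geometry, not a description of any particular implementation.

# Minimal time-difference-of-arrival localization sketch; geometry is assumed.
import itertools

SPEED_OF_SOUND = 343.0  # meters per second

def locate_sound(mic_positions, arrival_times, room=(5.0, 5.0), step=0.05):
    """Return the (x, y) grid point whose predicted pairwise arrival-time
    differences best match the measured differences."""
    pairs = list(itertools.combinations(range(len(mic_positions)), 2))

    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    best, best_err = None, float("inf")
    x = 0.0
    while x <= room[0]:
        y = 0.0
        while y <= room[1]:
            err = 0.0
            for i, j in pairs:
                predicted = (dist((x, y), mic_positions[i]) -
                             dist((x, y), mic_positions[j])) / SPEED_OF_SOUND
                measured = arrival_times[i] - arrival_times[j]
                err += (predicted - measured) ** 2
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best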
One or more speakers 516 may also be present to provide for audible output. For example, the speakers 516 may be used to provide output from a text-to-speech module or to playback pre-recorded audio.
A transducer 518 may reside within the ARFN 102, or elsewhere within the environment 502. The transducer may be configured to detect and/or generate inaudible signals, such as infrasound or ultrasound. These inaudible signals may be used to provide for signaling between ancillary devices and the ARFN 102. Of course, any other type of technique may be employed to provide for signaling between ancillary devices and the ARFN 102.
A ranging system 520 may also reside in the ARFN 102. The ranging system 520 is configured to provide distance information from the ARFN 102 to a scanned object or set of objects. The ranging system 520 may comprise radar, light detection and ranging (LIDAR), ultrasonic ranging, stereoscopic ranging, and so forth. In some implementations the transducer 518, the microphones 514, the speakers 516, or a combination thereof may be configured to use echolocation or echo-ranging to determine distance and spatial characteristics of the environment and objects therein.
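Where echo-ranging is used, the underlying distance computation is straightforward: the round-trip time of an emitted pulse maps to distance through the speed of sound. The snippet below is a simple arithmetic illustration rather than a description of the ranging system 520 itself.

# Echo-ranging arithmetic: distance = speed of sound x round-trip time / 2.
SPEED_OF_SOUND = 343.0  # meters per second at room temperature

def echo_range(round_trip_seconds):
    """Distance, in meters, to the reflecting object."""
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

# A pulse returning after 20 ms indicates an object roughly 3.4 m away.
assert abs(echo_range(0.020) - 3.43) < 0.01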
In this illustration, the computing device 118 resides within the chassis 504. However, in other implementations all or a portion of the computing device 118 may reside in another location and coupled to the ARFN 102. This coupling may occur via wire, fiber optic cable, wirelessly, or a combination thereof. Furthermore, additional resources external to the ARFN 102 may be accessed, such as resources in another ARFN 102 accessible via a local area network, cloud resources accessible via a wide area network connection, or a combination thereof.
This illustration also depicts a projector/camera linear offset designated “O”. This is a linear distance between the projector 112 and the camera 114. Placement of the projector 112 and the camera 114 at distance “O” from one another aids in the recovery of structured light data from the scene. The known projector/camera linear offset “O” may also be used to calculate distances, dimension objects such as ancillary devices, and otherwise aid in the characterization of objects within the environment 502. In other implementations the relative angle and size of the projector field of view 506 and camera field of view 508 may vary. Also, the angle of the projector 112 and the camera 114 relative to the chassis 504 may vary.
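Treating the projector and camera as a stereo pair with baseline “O”, depth can be recovered from the pixel disparity of a projected feature by standard triangulation. The short sketch below illustrates this relationship with assumed, illustrative values; it is not the specific calibration used by the ARFN 102.

# Depth from disparity for a projector/camera pair separated by baseline O.
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Distance, in meters, to the surface where a projected feature is observed."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / disparity_px

# Example: a 0.20 m offset, a 1000-pixel focal length, and an 80-pixel
# disparity place the surface about 2.5 m from the ARFN.
print(depth_from_disparity(0.20, 1000.0, 80.0))  # -> 2.5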
In other implementations the components of the ARFN 102 may be distributed in one or more locations within the environment 502. For instance, microphones 514 and speakers 516 may be distributed throughout the environment 502. The projector 112 and the camera 114 may also be located in separate chassis 504. The ARFN 102 may also include discrete portable signaling devices used by users to issue inputs. For example, these may be acoustic clickers (audible or ultrasonic), electronic signaling devices such as infrared emitters, radio transmitters, and so forth.
Example Processes
FIGS. 6-7 are illustrative processes 600 and 700 of the ARFN 102 determining the presence of an output device and, in response, utilizing the output device to output content for the purpose of enhancing a user's experience. While the following description may describe the ARFN 102 as implementing these processes, other types of devices may implement a portion or all of these processes in other implementations. FIGS. 8-10 are illustrative processes 800, 900, and 1000 of the ARFN 102 complementing the operation of existing output devices in an environment. The processes described herein may be implemented by the architectures described herein, or by other architectures. These processes are illustrated as a collection of blocks in a logical flow graph. Some of the blocks represent operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order or in parallel to implement the processes.
The process 600 includes, at 602, instructing a first display device to project content in an environment. For instance, the computing device 118 of the ARFN 102 may instruct the projector 112 to project a movie, an eBook, or any other content. At 604, the ARFN 102 may identify a second display device in the environment. For instance, the ARFN 102 may identify the presence of the television 106, the tablet computing device 108, a mobile phone, an electronic book reader device, or any other computing device.
The ARFN 102 may identify the second display device in any number of ways. For instance, at 604(1), the ARFN 102 may scan the environment to locate the second display device. This may include scanning the environment with a camera to uniquely identify the second display device, with reference to a bar code of the device, a brand name of the device, dimensions of the device, or any other information that may be used to uniquely identify the second display device. The ARFN 102 may also scan the environment to watch for wireless signals (e.g., infrared signals, etc.) sent to or received from the ancillary devices, as described in further detail at 604(4).
Additionally or alternatively, at 604(2), the ARFN 102 may receive an indication of the second display device from a user. For instance, a user within the environment may provide an identification of the second display device to the ARFN 102. At 604(3), the ARFN may additionally or alternatively receive an indication of the second display device from the second device itself. For instance, the second display device may include the application 204 that is configured to send the identity of the second display device to the ARFN 102.
At 604(4), meanwhile, the ARFN 102 may identify the second display device, in whole or in part, by monitoring wireless signals sent to the second display device. For instance, the ARFN 102 may monitor infrared signals sent to the second display device from a remote control used to operate the device, or may monitor any other type of wireless signal. Then, the ARFN 102 may identify the second display device by mapping the identified signal to a make and model of the device. Finally, the ARFN 102 may, at 604(5), identify the second display device by sending wireless signals (e.g., infrared signals, etc.) to the second display device and identifying which signal elicits a response from the device. For instance, the ARFN 102 may send infrared signals to the second display device and, upon identifying a signal that elicits a response, may map this signal to a make and model of the device.
While a few examples have been described, it is to be appreciated that the second display device may be identified at 604 using any other suitable technique or combination of techniques.
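The probe-and-map technique of 604(5) can be sketched as a simple loop: emit candidate wireless codes, observe which code elicits a response, and map that code to a make and model. The code table, the stubbed response, and the helper names below are hypothetical and serve only to illustrate the idea.

# Hypothetical probe-and-map identification of a display device.
IR_CODE_TABLE = {
    "0x20DF10EF": ("BrandA", "Model-42-LCD"),
    "0xE0E040BF": ("BrandB", "Model-55-LED"),
}

def send_ir_code(code):
    """Emit the code toward the device and report whether a response was
    observed (for example, the camera sees the screen turn on)."""
    return code == "0xE0E040BF"  # stubbed response for the example

def identify_display_by_probe():
    for code, make_model in IR_CODE_TABLE.items():
        if send_ir_code(code):
            return make_model
    return None

print(identify_display_by_probe())  # -> ('BrandB', 'Model-55-LED')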
At 606, the ARFN 102 determines whether or not the second display device is better suited to display the content that is currently being projected. For instance, the ARFN 102 may reference a display characteristic of the second display device to make this determination. This display characteristic may include a resolution of the second display device, a size of the second display device, a location of the second display device within the environment, or any other characteristic. Of course, the ARFN 102 may make this decision with reference to any other factor or combination of factors, including a preference of a user within the environment, ambient lighting conditions in the environment, potential obstructions in the environment, and the like.
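One way to express the determination at 606 is as a simple scoring of display characteristics such as resolution, size, distance to the user, obstruction, and ambient light. The weights and fields in the following sketch are assumptions chosen for illustration; the comparison could equally rest on any other factor or combination of factors noted above.

# Illustrative suitability scoring for choosing between displays; weights are assumed.
from dataclasses import dataclass

@dataclass
class DisplayCharacteristics:
    resolution_px: int          # total pixel count
    diagonal_in: float          # physical display size
    distance_to_user_m: float   # derived from the device's location
    is_projection: bool
    obstructed: bool = False

def suitability(d, ambient_lux):
    if d.obstructed:
        return 0.0
    score = d.resolution_px / 1e6 + d.diagonal_in / 10.0 - d.distance_to_user_m
    if d.is_projection and ambient_lux > 300:
        score -= 2.0  # projected imagery washes out in bright rooms
    return score

projection = DisplayCharacteristics(2_073_600, 60.0, 4.0, is_projection=True)
television = DisplayCharacteristics(2_073_600, 46.0, 2.0, is_projection=False)
use_television = suitability(television, 450) > suitability(projection, 450)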
At 608, the ARFN 102 may instruct the second display device to display some or all of the content in response to determining that the second display device is better suited. For instance, at 608(1), the ARFN 102 may send the content being projected, or content that is supplemental to the projected content, to the second display device for display. Alternatively, at 608(2), the ARFN 102 may send, to a third party device such as a local set-top box or a remote satellite provider, an instruction to display the content or the supplemental content.
FIG. 7 illustrates another process 700 for utilizing output devices within an environment to enhance a user's consumption of content. At 702, the ARFN 102 projects content with a first display device within an environment. For instance, the ARFN 102 may project content via the projector 112 described above. At 704, the ARFN 102 determines that a second, different display device is in the environment and is available for displaying content. The ARFN 102 may make this determination in any of the ways described above, or otherwise. At 706, the ARFN 102 causes display of content on the second display device at least partly in response to the determining. This content may be the projected content, content that is supplemental to the projected content, or any other content. In some instances, the ARFN 102 may cause display of the content on the second display device directly in response to the determining, while in other instances this may also occur at least partly in response to receiving an instruction from a user or device. For instance, a user may request (e.g., via a gesture, a voice command, etc.) to display the content via the ARFN 102.
FIG. 8, meanwhile, illustrates an example process 800 of the ARFN 102 supplementing operation of other content output devices in an environment. At 802, the ARFN 102 may scan an environment with a camera to identify a display device within the environment that is displaying digital content. The device may comprise a television, a personal computer, a mobile phone, or any other device capable of displaying content. At least partly in response to identifying a display device displaying digital content, at 804 the ARFN 102 may identify the digital content with reference to images scanned by the camera (e.g., by comparing the images to known content, by identifying a channel or other identifier of the content, etc.).
At 806, the ARFN 102 retrieves digital content that is related to the identified digital content. Finally, at 808, the ARFN 102 may project, onto a display medium within the environment and via a projector, the digital content that is related to the identified digital content.
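One possible way to perform the identification at 804 is to reduce each captured frame to a small fingerprint and compare it against fingerprints of known content. The average-hash approach below is a hedged, self-contained stand-in for whatever image-matching, channel-detection, or user-supplied identification is actually employed; the catalog entries are synthetic.

# Toy average-hash matching of a captured frame against known content.
def average_hash(gray_pixels):
    """gray_pixels: flat list of brightness values for a small thumbnail."""
    mean = sum(gray_pixels) / len(gray_pixels)
    return [1 if p > mean else 0 for p in gray_pixels]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

KNOWN_CONTENT = {
    "car_race_broadcast": average_hash([10, 200, 190, 30, 220, 15, 25, 210]),
    "news_program":       average_hash([120, 125, 118, 130, 122, 127, 119, 124]),
}

def identify(frame_pixels, max_distance=2):
    fingerprint = average_hash(frame_pixels)
    best = min(KNOWN_CONTENT, key=lambda k: hamming(fingerprint, KNOWN_CONTENT[k]))
    return best if hamming(fingerprint, KNOWN_CONTENT[best]) <= max_distance else None

print(identify([12, 205, 188, 28, 225, 18, 22, 205]))  # -> car_race_broadcast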
FIG. 9 illustrates the process 900, which includes the ARFN 102 identifying content being output in an environment by a first content output device at 902. At 904, the ARFN 102 retrieves content that is related to the content being output by the content output device. At 906, the ARFN 102 projects the related content by a second, different content output device in the environment.
Finally, FIG. 10 illustrates another process 1000 for supplementing the operation of existing display devices in an environment. At 1002, the ARFN 102 identifies content being displayed in an environment by a first display device. At 1004, the ARFN 102 determines whether to project the content using a second, different display device in the environment or whether to allow the first display device to continue displaying the content. Finally, at 1006, the ARFN 102 projects the content using the second display device at least partly in response to determining to project the content rather than simply allowing the first display device to continue displaying the content.
CONCLUSION
Although the subject matter has been described in language specific to structural features, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features described. Rather, the specific features are disclosed as illustrative forms of implementing the claims.

Claims (40)

What is claimed is:
1. One or more non-transitory computer-readable media storing computer-executable instructions that, when executed, cause one or more processors to perform acts comprising:
instructing a first display device to project digital content for viewing within an environment, the first display device having a first display characteristic;
causing a camera to capture imagery of at least a portion of the environment;
identifying a second display device located within the environment based at least in part on the imagery captured by the camera;
selecting, based at least in part on the first display characteristic, the second, different, display device having a second display characteristic, wherein the first display characteristic includes at least a first location of the first display device within the environment and the second display characteristic includes at least a second location of the second display device within the environment; and
instructing, based at least in part on a comparison between the first display characteristic and the second display characteristic, (i) the second display device to display the digital content and (ii) the first display device to discontinue projecting.
2. One or more non-transitory computer-readable media as recited in claim 1, wherein the first display device comprises a projector that projects the digital content onto a non-powered display medium.
3. One or more non-transitory computer-readable media as recited in claim 1, wherein the second display device comprises a television, a mobile phone, a tablet computer, a laptop computer, a desktop computer, an electronic book reader device, a portable media player, or a personal digital assistant (PDA).
4. One or more non-transitory computer-readable media as recited in claim 1, wherein the causing the camera to capture the imagery further comprises causing the camera to scan the environment to capture the imagery.
5. One or more non-transitory computer-readable media as recited in claim 1, wherein the identifying the second display device further comprises receiving an indication that the second display device resides within the environment.
6. One or more non-transitory computer-readable media as recited in claim 1, wherein the identifying the second display device further comprises receiving an indication from the second display device that the second display device resides within the environment.
7. One or more non-transitory computer-readable media as recited in claim 1, wherein the identifying the second display device further comprises monitoring wireless signals sent from or received at the second display device.
8. One or more non-transitory computer-readable media as recited in claim 1, wherein:
the second display device is controllable via a remote control configured to send infrared signals to the second display device; and
the identifying the second display device further comprises monitoring the infrared signals sent to the second display device via the remote control and identifying the second display device with reference to the infrared signals.
9. One or more non-transitory computer-readable media as recited in claim 1, wherein the identifying the second display device further comprises:
sending multiple different wireless signals to the second display device;
determining which wireless signal of the multiple wireless signals results in a response from the second display device; and
identifying the second display device with reference to the wireless signal that results in the response from the second display device.
10. One or more non-transitory computer-readable media as recited in claim 1, wherein the display characteristic further includes at least one of a type of a display of the second display device, a size of the display of the second display device, or a resolution of the display of the second display device.
11. One or more non-transitory computer-readable media as recited in claim 1, wherein the instructing the second display device to display the digital content comprises sending the digital content to the second display device via a wired or wireless connection or sending an instruction to a third party device that causes the second display device to display the digital content.
12. A method comprising:
under control of one or more computing systems configured with specific executable instructions,
projecting content using a first display device residing in an environment;
determining that a second, different display device also resides within the environment and is available for displaying content, the second display device located at a second location that is different than a first location associated with the first display device;
determining that the second display device at the second location has an absence of physical obstructions that physically obstruct display of the content on the second display device or that physically obstruct a viewing of the content on the second display device;
selecting the second display device based at least in part on the second location of the second display device, a determined availability of the second display device, and the absence of the physical obstructions;
causing display of content on the second display device within the environment; and
terminating the projecting by the first display device based at least in part on the causing the display of the content on the second display device.
13. A method as recited in claim 12, wherein the first display device comprises a projector configured to project content onto a non-powered display medium and wherein the second display device comprises a display device other than a projector.
14. A method as recited in claim 12, wherein the selecting is further based at least in part on determined physical obstructions that physically obstruct display of the content on the first display device or that physically obstruct a viewing of the content on the first display device.
15. A method as recited in claim 14, further comprising:
causing a camera to capture imagery of the environment; and
analyzing the imagery to determine the physical obstructions that physically obstruct display of the content on the first display device or that physically obstruct a viewing of the content on the first display device.
16. A method as recited in claim 12, wherein the content displayed by the second display device comprises at least the projected content.
17. A method as recited in claim 12, further comprising receiving a request within the environment to display at least a portion of the projected content on the second display device, and wherein the causing the display of the content on the second display device also occurs at least partly in response to the receiving of the request.
18. A method as recited in claim 12, wherein the content displayed by the second display device comprises at least a portion of the projected content, and further comprising:
determining that the second display device is better suited for displaying the projected content than the first display device; and
wherein the causing the display of the content on the second display device also occurs at least partly in response to the determining that the second display device is better suited for displaying the projected content.
19. A method as recited in claim 12, further comprising interpreting a gesture from within the environment, wherein the determining that the second display device also resides within the environment and is available for displaying content is based at least in part on the interpreting the gesture, and wherein the gesture identifies at least a location of the second display device.
20. A method as recited in claim 12, wherein the determining that the second display device also resides within the environment and is available for displaying content comprises scanning the environment with a camera to identify the second display device.
21. A method as recited in claim 12, wherein the determining that the second display device also resides within the environment and is available for displaying content comprises receiving an indication that the second display device resides within the environment.
22. A method as recited in claim 12, wherein the determining that the second display device also resides within the environment and is available for displaying content comprises receiving an indication from the second display device that the second display device resides within the environment.
23. A method as recited in claim 12, wherein the determining that the second display device also resides within the environment and is available for displaying content comprises (i) monitoring wireless signals sent from or received at the second display device, or (ii) sending wireless signals to the second display device.
24. A system comprising:
one or more processors;
memory;
a projector, coupled to the one or more processors and configured to project content within an environment at a first location;
a display device configured to display the content within the environment;
an ancillary device identification module, stored in or accessible by the memory and executable on the one or more processors, to identify the display device within the environment and configured to display the content in lieu of the projector projecting the content in response to determining that a second location of the display device is at least one of physically unobstructed or closer to a line of sight of a user than the first location associated with the content projected by the projector; and
a content output module, stored in or accessible by the memory and executable on the one or more processors, to send, at least partly in response to the identifying, a first instruction to the display device to display the content and to send a second instruction to the projector to terminate projecting.
25. A system as recited in claim 24, wherein the display device is configured to output visual content and audible content.
26. A system as recited in claim 24, wherein the content output module is further executable on the one or more processors to provide the content being projected by the projector or displayed by the display device at least partly in response to identifying the display device within the environment.
27. A system as recited in claim 24, further comprising an ancillary device adjustment module, stored in or accessible by the memory and executable on the one or more processors, to adjust, remotely, one or more output settings of the display device.
28. A system as recited in claim 27, wherein the one or more output settings comprise a resolution of the display device, a volume of the display device, a contrast of the display device, or a brightness of the display device.
29. A system as recited in claim 27, wherein the ancillary device adjustment module adjusts the one or more output settings with reference to content being output by the display device.
30. A system as recited in claim 24, wherein:
the ancillary device identification module is further executable to identify multiple output devices within the environment that are configured to output the content being projected by the projector; and
the content output module is further executable to send, at least partly in response to the identifying, respective instructions to the multiple output devices to output the content being projected by the projector.
31. A system as recited in claim 24, wherein the ancillary device identification module is further configured to identify an audio output device also within the environment and configured to emit audio of the content, and wherein the content output module is further configured to send, at least partly in response to the identifying the audio output device, a third instruction to the audio output device to emit the audio of the content.
32. A system as recited in claim 31, wherein the content output module is further configured to send, at least partly in response to the identifying the audio output device, a fourth instruction to the projector to terminate emitting of the audio of the content.
33. A system as recited in claim 31, further comprising the audio output device.
34. A system as recited in claim 31, wherein the audio output device includes wireless speakers.
35. A system as recited in claim 24, wherein the display device comprises a television, a mobile phone, a tablet computer, a laptop computer, a desktop computer, an electronic book reader device, a portable media player, or a personal digital assistant (PDA).
36. A system as recited in claim 24, wherein the projector includes a first display characteristic and the display device includes a second display characteristic, and wherein the content output module determines, based on a comparison between the first display characteristic and the second display characteristic, to transition the digital content from the projector to the display device.
37. A method comprising:
causing a first display device to display content in an environment, wherein the content includes imagery and audio;
causing an original source to emit the audio associated with the imagery;
determining that a second, different display device also resides within the environment and is available for displaying the content;
interpreting a gesture from within the environment, wherein the determining that the second display device also resides within the environment and is available for displaying content is based at least in part on the interpreting the gesture, and wherein the gesture identifies at least a location of the second display device;
causing the second display device to display the content within the environment at least partly in response to the determining; and
causing the second display device to emit the audio of the content.
38. A method as recited in claim 37, further comprising causing an audio device that is different than the projector to emit audio of the content in the environment.
39. A method as recited in claim 37, further comprising continuing, for at least the period of time, to emit the audio of the content from an original source simultaneous with the causing the second display device to display the content.
40. A method as recited in claim 37, wherein the causing a first display device to display content includes causing a projector to project content onto a surface in the environment.
US12/982,457 2010-12-30 2010-12-30 Utilizing content output devices in an augmented reality environment Expired - Fee Related US9508194B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/982,457 US9508194B1 (en) 2010-12-30 2010-12-30 Utilizing content output devices in an augmented reality environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/982,457 US9508194B1 (en) 2010-12-30 2010-12-30 Utilizing content output devices in an augmented reality environment

Publications (1)

Publication Number Publication Date
US9508194B1 true US9508194B1 (en) 2016-11-29

Family

ID=57351976

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/982,457 Expired - Fee Related US9508194B1 (en) 2010-12-30 2010-12-30 Utilizing content output devices in an augmented reality environment

Country Status (1)

Country Link
US (1) US9508194B1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160188123A1 (en) * 2014-12-25 2016-06-30 Panasonic Intellectual Property Management Co., Ltd. Projection device
US20160205432A1 (en) * 2010-09-20 2016-07-14 Echostar Technologies L.L.C. Methods of displaying an electronic program guide
US20170330036A1 (en) * 2015-01-29 2017-11-16 Aurasma Limited Provide augmented reality content
US20180115803A1 (en) * 2016-10-25 2018-04-26 Alphonso Inc. System and method for detecting unknown tv commercials from a live tv stream
US20180191990A1 (en) * 2015-09-02 2018-07-05 Bandai Namco Entertainment Inc. Projection system
US10108718B2 (en) 2016-11-02 2018-10-23 Alphonso Inc. System and method for detecting repeating content, including commercials, in a video data stream
US10346474B1 (en) 2018-03-30 2019-07-09 Alphonso Inc. System and method for detecting repeating content, including commercials, in a video data stream using audio-based and video-based automated content recognition
WO2021211265A1 (en) * 2020-04-17 2021-10-21 Apple Inc. Multi-device continuity for use with extended reality systems
US20210367997A1 (en) * 2013-12-10 2021-11-25 Google Llc Providing content to co-located devices with enhanced presentation characteristics
US11539798B2 (en) * 2019-06-28 2022-12-27 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
US20230156280A1 (en) * 2021-11-18 2023-05-18 Synamedia Limited Systems, Devices, and Methods for Selecting TV User Interface Transitions

Citations (130)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3835245A (en) 1971-08-07 1974-09-10 Image Analysing Computers Ltd Information modification in image analysis systems employing line scanning
US3840699A (en) 1972-05-25 1974-10-08 W Bowerman Television system for enhancing and tracking an object
US4112463A (en) 1976-03-31 1978-09-05 Robert Bosch Gmbh System for detecting a motion in the monitoring area of two or more television cameras
US5704836A (en) 1995-03-23 1998-01-06 Perception Systems, Inc. Motion-based command generation technology
US5946209A (en) 1995-02-02 1999-08-31 Hubbell Incorporated Motion sensing system with adaptive timing for controlling lighting fixtures
US6059576A (en) 1997-11-21 2000-05-09 Brann; Theodore L. Training and safety device, system and method to aid in proper movement during physical activity
US6098091A (en) 1996-12-30 2000-08-01 Intel Corporation Method and system including a central computer that assigns tasks to idle workstations using availability schedules and computational capabilities
US20010049713A1 (en) 1998-02-26 2001-12-06 Sun Microsystems Inc. Method and apparatus for dynamic distributed computing over a network
US20010056574A1 (en) 2000-06-26 2001-12-27 Richards Angus Duncan VTV system
US20020001044A1 (en) 2000-06-29 2002-01-03 Villamide Jesus Mendez Projection apparatus and method of image projection
US20020070278A1 (en) 2000-12-11 2002-06-13 Hung Patrick Siu-Ying Method and apparatus for scanning electronic barcodes
US20020168069A1 (en) 2001-02-28 2002-11-14 Babak Tehranchi Copy protection for digital motion picture image data
US6503195B1 (en) 1999-05-24 2003-01-07 University Of North Carolina At Chapel Hill Methods and systems for real-time structured light depth extraction and endoscope using real-time structured light depth extraction
US20030156306A1 (en) 2001-12-17 2003-08-21 Dai Nippon Printing Co., Ltd. Computer-generated hologram fabrication process, and hologram-recorded medium
US6618076B1 (en) 1999-12-23 2003-09-09 Justsystem Corporation Method and apparatus for calibrating projector-camera system
US6690618B2 (en) 2001-04-03 2004-02-10 Canesta, Inc. Method and apparatus for approximating a source position of a sound-causing event for determining an input used in operating an electronic device
US20040046736A1 (en) 1997-08-22 2004-03-11 Pryor Timothy R. Novel man machine interfaces and applications
US20040114581A1 (en) * 2002-12-16 2004-06-17 Hans Mathieu Claude Voice-over-IP communicator
US6760045B1 (en) 2000-02-22 2004-07-06 Gateway, Inc. Simultaneous projected presentation of client browser display
US6789903B2 (en) 2003-02-18 2004-09-14 Imatte, Inc. Generating an inhibit signal by pattern displacement
US20040190716A1 (en) 2003-03-27 2004-09-30 Eastman Kodak Company Projector with enhanced security camcorder defeat
US6803928B2 (en) 2000-06-06 2004-10-12 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Extended virtual table: an optical extension for table-like projection systems
US20040201823A1 (en) 2003-04-11 2004-10-14 Ramesh Raskar Context aware projector
US6811267B1 (en) 2003-06-09 2004-11-02 Hewlett-Packard Development Company, L.P. Display system with nonvisible data projection
US20050081164A1 (en) 2003-08-28 2005-04-14 Tatsuya Hama Information processing apparatus, information processing method, information processing program and storage medium containing information processing program
US20050099432A1 (en) * 2003-11-12 2005-05-12 International Business Machines Corporation Multi-value hidden object properties in a presentation graphics application
US20050110964A1 (en) 2002-05-28 2005-05-26 Matthew Bell Interactive video window display system
US20050128196A1 (en) 2003-10-08 2005-06-16 Popescu Voicu S. System and method for three dimensional modeling
US20050254683A1 (en) 2000-04-24 2005-11-17 Schumann Robert W Visual copyright protection
US20050264555A1 (en) 2004-05-28 2005-12-01 Zhou Zhi Y Interactive system and method
US20050276444A1 (en) 2004-05-28 2005-12-15 Zhou Zhi Y Interactive system and method
US20050289590A1 (en) 2004-05-28 2005-12-29 Cheok Adrian D Marketing platform
US20050288078A1 (en) 2004-05-28 2005-12-29 Cheok Adrian D Game
US20060028400A1 (en) 2004-08-03 2006-02-09 Silverbrook Research Pty Ltd Head mounted display with wave front modulator
US20060041926A1 (en) 2004-04-30 2006-02-23 Vulcan Inc. Voice control of multimedia content
US20060080408A1 (en) 2004-04-30 2006-04-13 Vulcan Inc. Smart home control of electronic devices
US7046214B2 (en) 2003-12-17 2006-05-16 Information Decision Technologies, Llc Method and system for accomplishing a scalable, multi-user, extended range, distributed, augmented reality environment
US20060152803A1 (en) 2005-01-11 2006-07-13 Provitola Anthony I Enhancement of depth perception
US20060170880A1 (en) 2002-12-04 2006-08-03 Barco Control Rooms Gmbh Brightness and colour control of a projection appliance
US20060218503A1 (en) 2005-03-22 2006-09-28 Microsoft Corporation Operating system program launch menu search
US7134756B2 (en) 2004-05-04 2006-11-14 Microsoft Corporation Selectable projector and imaging modes of display table
US20060262140A1 (en) 2005-05-18 2006-11-23 Kujawa Gregory A Method and apparatus to facilitate visual augmentation of perceived reality
US20070005747A1 (en) 2005-06-30 2007-01-04 Batni Ramachendra P Control server employment of offer message from resource server to determine whether to add indication of the resource server to resource server mapping table
US7168813B2 (en) 2004-06-17 2007-01-30 Microsoft Corporation Mediacube
US20070024644A1 (en) 2005-04-15 2007-02-01 Herman Bailey Interactive augmented reality system
US20070239211A1 (en) 2006-03-31 2007-10-11 Andras Lorincz Embedded neural prosthesis
US20070260669A1 (en) 2002-06-20 2007-11-08 Steven Neiman Method for dividing computations
US7315241B1 (en) 2004-12-01 2008-01-01 Hrl Laboratories, Llc Enhanced perception lighting
US20080094588A1 (en) 2006-10-06 2008-04-24 Cole James R Projector/camera system
US20080151195A1 (en) 2006-12-21 2008-06-26 Texas Instruments Incorporated Apparatus and Method for Increasing Compensation Sequence Storage Density in a Projection Visual Display System
US20080174735A1 (en) 2007-01-23 2008-07-24 Emiscape, Inc. Projection Display with Holographic Screen
US20080180640A1 (en) 2007-01-29 2008-07-31 Seiko Epson Corporation Projector
US20080186255A1 (en) 2006-12-07 2008-08-07 Cohen Philip R Systems and methods for data annotation, recordation, and communication
US7418392B1 (en) 2003-09-25 2008-08-26 Sensory, Inc. System and method for controlling the operation of a device by voice commands
US20080229318A1 (en) 2007-03-16 2008-09-18 Carsten Franke Multi-objective allocation of computational jobs in client-server or hosting environments
US20080273754A1 (en) 2007-05-04 2008-11-06 Leviton Manufacturing Co., Inc. Apparatus and method for defining an area of interest for image sensing
US20080301175A1 (en) 2007-05-31 2008-12-04 Michael Applebaum Distributed system for monitoring information events
US20080320482A1 (en) 2007-06-20 2008-12-25 Dawson Christopher J Management of grid computing resources based on service level requirements
US20090066805A1 (en) 2007-08-27 2009-03-12 Sanyo Electric Co., Ltd. Video camera
US20090073034A1 (en) 2007-05-19 2009-03-19 Ching-Fang Lin 4D GIS virtual reality for controlling, monitoring and prediction of manned/unmanned system
US7538764B2 (en) 2001-01-05 2009-05-26 Interuniversitair Micro-Elektronica Centrum (Imec) System and method to obtain surface structures of multi-dimensional objects, and to represent those surface structures for animation, transmission and display
US20090184888A1 (en) * 2008-01-18 2009-07-23 Jyi-Yuan Chen Display control system and method thereof
WO2009112585A1 (en) 2008-03-14 2009-09-17 Alcatel Lucent Method for implementing rich video on mobile terminals
US20090323097A1 (en) 2008-06-26 2009-12-31 Canon Kabushiki Kaisha Information processing apparatus, control method of image processing system and computer program thereof
US20100011637A1 (en) 2008-07-15 2010-01-21 Yudong Zhang Displaying device and method thereof
US20100026479A1 (en) 2007-05-24 2010-02-04 Bao Tran Wireless occupancy and day-light sensing
US20100039568A1 (en) 2007-03-06 2010-02-18 Emil Tchoukaleysky Digital cinema anti-camcording method and apparatus based on image frame post-sampling
US20100060723A1 (en) 2006-11-08 2010-03-11 Nec Corporation Display system
US20100066676A1 (en) 2006-02-08 2010-03-18 Oblong Industries, Inc. Gestural Control of Autonomous and Semi-Autonomous Systems
US7720683B1 (en) 2003-06-13 2010-05-18 Sensory, Inc. Method and apparatus of specifying and performing speech recognition operations
US7743348B2 (en) 2004-06-30 2010-06-22 Microsoft Corporation Using physical objects to adjust attributes of an interactive display application
US20100164990A1 (en) 2005-08-15 2010-07-01 Koninklijke Philips Electronics, N.V. System, apparatus, and method for augmented reality glasses for end-user programming
US20100199232A1 (en) * 2009-02-03 2010-08-05 Massachusetts Institute Of Technology Wearable Gestural Interface
US20100207872A1 (en) 2009-02-17 2010-08-19 Pixar Imgagin Inc. Optical displacement detecting device and operating method thereof
US20100228632A1 (en) 2009-03-03 2010-09-09 Rodriguez Tony F Narrowcasting From Public Displays, and Related Methods
US20100240455A1 (en) * 2007-11-09 2010-09-23 Wms Gaming, Inc. Presenting secondary content for a wagering game
US20100257252A1 (en) 2009-04-01 2010-10-07 Microsoft Corporation Augmented Reality Cloud Computing
US20100253541A1 (en) 2009-04-02 2010-10-07 Gm Global Technology Operations, Inc. Traffic infrastructure indicator on head-up display
US20100281095A1 (en) 2009-04-21 2010-11-04 Wehner Camille B Mobile grid computing
US20100284055A1 (en) 2007-10-19 2010-11-11 Qualcomm Mems Technologies, Inc. Display with integrated photovoltaic device
US20100322469A1 (en) 2009-05-21 2010-12-23 Sharma Ravi K Combined Watermarking and Fingerprinting
US20110010222A1 (en) 2009-07-08 2011-01-13 International Business Machines Corporation Point-in-time based energy saving recommendations
US20110012925A1 (en) 2009-07-20 2011-01-20 Igrs Engineering Lab. Ltd. Image marking method and apparatus
US20110050885A1 (en) 2009-08-25 2011-03-03 Microsoft Corporation Depth-sensitive imaging via polarization-state mapping
US20110061100A1 (en) 2009-09-10 2011-03-10 Nokia Corporation Method and apparatus for controlling access
US7911444B2 (en) 2005-08-31 2011-03-22 Microsoft Corporation Input method for surface of interactive display
US20110072047A1 (en) 2009-09-21 2011-03-24 Microsoft Corporation Interest Learning from an Image Collection for Advertising
US7925996B2 (en) 2004-11-18 2011-04-12 Microsoft Corporation Method and system for providing multiple input connecting user interface
US20110087731A1 (en) 2009-10-08 2011-04-14 Laura Wong Systems and methods to process a request received at an application program interface
US20110093094A1 (en) 2006-01-13 2011-04-21 Rahul Goyal In-Wall Occupancy Sensor with RF Control
US20110098056A1 (en) 2009-10-28 2011-04-28 Rhoads Geoffrey B Intuitive computing methods and systems
US7949148B2 (en) 2006-01-23 2011-05-24 Digimarc Corporation Object processing employing movement
US20110134204A1 (en) 2007-12-05 2011-06-09 Florida Gulf Coast University System and methods for facilitating collaboration of a group
US20110154350A1 (en) 2009-12-18 2011-06-23 International Business Machines Corporation Automated cloud workload management in a map-reduce environment
US20110161912A1 (en) 2009-12-30 2011-06-30 Qualzoom, Inc. System for creation and distribution of software applications usable on multiple mobile device platforms
US20110164163A1 (en) 2010-01-05 2011-07-07 Apple Inc. Synchronized, interactive augmented reality displays for multifunction devices
US20110178942A1 (en) 2010-01-18 2011-07-21 Isight Partners, Inc. Targeted Security Implementation Through Security Loss Forecasting
WO2011088053A2 (en) 2010-01-18 2011-07-21 Apple Inc. Intelligent automated assistant
US7991220B2 (en) 2004-09-01 2011-08-02 Sony Computer Entertainment Inc. Augmented reality game system using identification information to display a virtual object in association with a position of a real object
US20110197147A1 (en) 2010-02-11 2011-08-11 Apple Inc. Projected display shared workspaces
US20110216090A1 (en) 2010-03-03 2011-09-08 Gwangju Institute Of Science And Technology Real-time interactive augmented reality system and method and recording medium storing program for implementing the method
WO2011115623A1 (en) 2010-03-18 2011-09-22 Hewlett-Packard Development Company, L.P. Interacting with a device
US20110238751A1 (en) 2010-03-26 2011-09-29 Nokia Corporation Method and apparatus for ad-hoc peer-to-peer augmented reality environment
US20110249197A1 (en) 2010-04-07 2011-10-13 Microvision, Inc. Wavelength Combining Apparatus, System and Method
US20110289308A1 (en) 2010-05-18 2011-11-24 Sobko Andrey V Team security for portable information devices
US20120009874A1 (en) 2010-07-09 2012-01-12 Nokia Corporation Allowed spectrum information distribution system
US8107736B2 (en) 2008-07-10 2012-01-31 Novell, Inc. System and method for device mapping based on images and reference points
US8159739B2 (en) 2005-01-10 2012-04-17 Au Optronics Corporation Display apparatus
US20120120296A1 (en) 2010-11-17 2012-05-17 Verizon Patent And Licensing, Inc. Methods and Systems for Dynamically Presenting Enhanced Content During a Presentation of a Media Content Instance
US20120124245A1 (en) 2010-11-17 2012-05-17 Flextronics Id, Llc Universal remote control with automated setup
US20120130513A1 (en) 2010-11-18 2012-05-24 Verizon Patent And Licensing Inc. Smart home device management
US20120127320A1 (en) 2009-07-31 2012-05-24 Tibor Balogh Method And Apparatus For Displaying 3D Images
US8199966B2 (en) 2008-05-14 2012-06-12 International Business Machines Corporation System and method for providing contemporaneous product information with animated virtual representations
US8253746B2 (en) 2009-05-01 2012-08-28 Microsoft Corporation Determine intended motions
US8255829B1 (en) 2009-08-19 2012-08-28 Sprint Communications Company L.P. Determining system level information technology security requirements
US20120223885A1 (en) 2011-03-02 2012-09-06 Microsoft Corporation Immersive display experience
US8285256B2 (en) 2008-07-28 2012-10-09 Embarq Holdings Company, Llc System and method for projecting information from a wireless device
US8284205B2 (en) 2007-10-24 2012-10-09 Apple Inc. Methods and apparatuses for load balancing between multiple processing units
US8307388B2 (en) 2006-09-07 2012-11-06 Porto Vinci Ltd. LLC Automatic adjustment of devices in a home entertainment system
US8308304B2 (en) 2008-06-17 2012-11-13 The Invention Science Fund I, Llc Systems associated with receiving and transmitting information related to projection
US20120306878A1 (en) 2008-02-29 2012-12-06 Microsoft Corporation Modeling and Rendering of Heterogeneous Translucent Materals Using The Diffusion Equation
US8356254B2 (en) 2006-10-25 2013-01-15 International Business Machines Corporation System and method for interacting with a display
US8382295B1 (en) 2010-06-30 2013-02-26 Amazon Technologies, Inc. Optical assembly for electronic devices
US8408720B2 (en) * 2009-04-10 2013-04-02 Funai Electric Co., Ltd. Image display apparatus, image display method, and recording medium having image display program stored therein
US20130235354A1 (en) 2010-04-28 2013-09-12 Lemoptix Sa Micro-projection device with antis-peckle imaging mode
US20130300637A1 (en) 2010-10-04 2013-11-14 G Dirk Smits System and method for 3-d projection and enhancements for interactivity
US8591039B2 (en) 2008-10-28 2013-11-26 Smart Technologies Ulc Image projection methods and interactive input/projection systems employing the same
US20140125649A1 (en) 2000-08-22 2014-05-08 Bruce Carlin Network repository of digitalized 3D object models, and networked generation of photorealistic images based upon these models
US8723787B2 (en) * 2008-06-17 2014-05-13 The Invention Science Fund I, Llc Methods and systems related to an image capture projection surface
US8743145B1 (en) 2010-08-26 2014-06-03 Amazon Technologies, Inc. Visual overlay for augmenting reality

Patent Citations (133)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3835245A (en) 1971-08-07 1974-09-10 Image Analysing Computers Ltd Information modification in image analysis systems employing line scanning
US3840699A (en) 1972-05-25 1974-10-08 W Bowerman Television system for enhancing and tracking an object
US4112463A (en) 1976-03-31 1978-09-05 Robert Bosch Gmbh System for detecting a motion in the monitoring area of two or more television cameras
US5946209A (en) 1995-02-02 1999-08-31 Hubbell Incorporated Motion sensing system with adaptive timing for controlling lighting fixtures
US5704836A (en) 1995-03-23 1998-01-06 Perception Systems, Inc. Motion-based command generation technology
US6098091A (en) 1996-12-30 2000-08-01 Intel Corporation Method and system including a central computer that assigns tasks to idle workstations using availability schedules and computational capabilities
US20040046736A1 (en) 1997-08-22 2004-03-11 Pryor Timothy R. Novel man machine interfaces and applications
US6059576A (en) 1997-11-21 2000-05-09 Brann; Theodore L. Training and safety device, system and method to aid in proper movement during physical activity
US20010049713A1 (en) 1998-02-26 2001-12-06 Sun Microsystems Inc. Method and apparatus for dynamic distributed computing over a network
US6503195B1 (en) 1999-05-24 2003-01-07 University Of North Carolina At Chapel Hill Methods and systems for real-time structured light depth extraction and endoscope using real-time structured light depth extraction
US6618076B1 (en) 1999-12-23 2003-09-09 Justsystem Corporation Method and apparatus for calibrating projector-camera system
US6760045B1 (en) 2000-02-22 2004-07-06 Gateway, Inc. Simultaneous projected presentation of client browser display
US20050254683A1 (en) 2000-04-24 2005-11-17 Schumann Robert W Visual copyright protection
US6803928B2 (en) 2000-06-06 2004-10-12 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Extended virtual table: an optical extension for table-like projection systems
US20010056574A1 (en) 2000-06-26 2001-12-27 Richards Angus Duncan VTV system
US20020001044A1 (en) 2000-06-29 2002-01-03 Villamide Jesus Mendez Projection apparatus and method of image projection
US20140125649A1 (en) 2000-08-22 2014-05-08 Bruce Carlin Network repository of digitalized 3D object models, and networked generation of photorealistic images based upon these models
US20020070278A1 (en) 2000-12-11 2002-06-13 Hung Patrick Siu-Ying Method and apparatus for scanning electronic barcodes
US7538764B2 (en) 2001-01-05 2009-05-26 Interuniversitair Micro-Elektronica Centrum (Imec) System and method to obtain surface structures of multi-dimensional objects, and to represent those surface structures for animation, transmission and display
US20020168069A1 (en) 2001-02-28 2002-11-14 Babak Tehranchi Copy protection for digital motion picture image data
US6690618B2 (en) 2001-04-03 2004-02-10 Canesta, Inc. Method and apparatus for approximating a source position of a sound-causing event for determining an input used in operating an electronic device
US20030156306A1 (en) 2001-12-17 2003-08-21 Dai Nippon Printing Co., Ltd. Computer-generated hologram fabrication process, and hologram-recorded medium
US20050110964A1 (en) 2002-05-28 2005-05-26 Matthew Bell Interactive video window display system
US20070260669A1 (en) 2002-06-20 2007-11-08 Steven Neiman Method for dividing computations
US20060170880A1 (en) 2002-12-04 2006-08-03 Barco Control Rooms Gmbh Brightness and colour control of a projection appliance
US20040114581A1 (en) * 2002-12-16 2004-06-17 Hans Mathieu Claude Voice-over-IP communicator
US6789903B2 (en) 2003-02-18 2004-09-14 Imatte, Inc. Generating an inhibit signal by pattern displacement
US20040190716A1 (en) 2003-03-27 2004-09-30 Eastman Kodak Company Projector with enhanced security camcorder defeat
US20040201823A1 (en) 2003-04-11 2004-10-14 Ramesh Raskar Context aware projector
US6811267B1 (en) 2003-06-09 2004-11-02 Hewlett-Packard Development Company, L.P. Display system with nonvisible data projection
US7720683B1 (en) 2003-06-13 2010-05-18 Sensory, Inc. Method and apparatus of specifying and performing speech recognition operations
US20050081164A1 (en) 2003-08-28 2005-04-14 Tatsuya Hama Information processing apparatus, information processing method, information processing program and storage medium containing information processing program
US7774204B2 (en) 2003-09-25 2010-08-10 Sensory, Inc. System and method for controlling the operation of a device by voice commands
US7418392B1 (en) 2003-09-25 2008-08-26 Sensory, Inc. System and method for controlling the operation of a device by voice commands
US20050128196A1 (en) 2003-10-08 2005-06-16 Popescu Voicu S. System and method for three dimensional modeling
US20050099432A1 (en) * 2003-11-12 2005-05-12 International Business Machines Corporation Multi-value hidden object properties in a presentation graphics application
US7046214B2 (en) 2003-12-17 2006-05-16 Information Decision Technologies, Llc Method and system for accomplishing a scalable, multi-user, extended range, distributed, augmented reality environment
US20060080408A1 (en) 2004-04-30 2006-04-13 Vulcan Inc. Smart home control of electronic devices
US20060041926A1 (en) 2004-04-30 2006-02-23 Vulcan Inc. Voice control of multimedia content
US7134756B2 (en) 2004-05-04 2006-11-14 Microsoft Corporation Selectable projector and imaging modes of display table
US20050264555A1 (en) 2004-05-28 2005-12-01 Zhou Zhi Y Interactive system and method
US20050276444A1 (en) 2004-05-28 2005-12-15 Zhou Zhi Y Interactive system and method
US20050289590A1 (en) 2004-05-28 2005-12-29 Cheok Adrian D Marketing platform
US20050288078A1 (en) 2004-05-28 2005-12-29 Cheok Adrian D Game
US7168813B2 (en) 2004-06-17 2007-01-30 Microsoft Corporation Mediacube
US7743348B2 (en) 2004-06-30 2010-06-22 Microsoft Corporation Using physical objects to adjust attributes of an interactive display application
US20060028400A1 (en) 2004-08-03 2006-02-09 Silverbrook Research Pty Ltd Head mounted display with wave front modulator
US7991220B2 (en) 2004-09-01 2011-08-02 Sony Computer Entertainment Inc. Augmented reality game system using identification information to display a virtual object in association with a position of a real object
US7925996B2 (en) 2004-11-18 2011-04-12 Microsoft Corporation Method and system for providing multiple input connecting user interface
US7315241B1 (en) 2004-12-01 2008-01-01 Hrl Laboratories, Llc Enhanced perception lighting
US8159739B2 (en) 2005-01-10 2012-04-17 Au Optronics Corporation Display apparatus
US20060152803A1 (en) 2005-01-11 2006-07-13 Provitola Anthony I Enhancement of depth perception
US20060218503A1 (en) 2005-03-22 2006-09-28 Microsoft Corporation Operating system program launch menu search
US20070024644A1 (en) 2005-04-15 2007-02-01 Herman Bailey Interactive augmented reality system
US20060262140A1 (en) 2005-05-18 2006-11-23 Kujawa Gregory A Method and apparatus to facilitate visual augmentation of perceived reality
US20070005747A1 (en) 2005-06-30 2007-01-04 Batni Ramachendra P Control server employment of offer message from resource server to determine whether to add indication of the resource server to resource server mapping table
US20100164990A1 (en) 2005-08-15 2010-07-01 Koninklijke Philips Electronics, N.V. System, apparatus, and method for augmented reality glasses for end-user programming
US7911444B2 (en) 2005-08-31 2011-03-22 Microsoft Corporation Input method for surface of interactive display
US20110093094A1 (en) 2006-01-13 2011-04-21 Rahul Goyal In-Wall Occupancy Sensor with RF Control
US7949148B2 (en) 2006-01-23 2011-05-24 Digimarc Corporation Object processing employing movement
US20100066676A1 (en) 2006-02-08 2010-03-18 Oblong Industries, Inc. Gestural Control of Autonomous and Semi-Autonomous Systems
US20070239211A1 (en) 2006-03-31 2007-10-11 Andras Lorincz Embedded neural prosthesis
US8307388B2 (en) 2006-09-07 2012-11-06 Porto Vinci Ltd. LLC Automatic adjustment of devices in a home entertainment system
US20080094588A1 (en) 2006-10-06 2008-04-24 Cole James R Projector/camera system
US8356254B2 (en) 2006-10-25 2013-01-15 International Business Machines Corporation System and method for interacting with a display
US20100060723A1 (en) 2006-11-08 2010-03-11 Nec Corporation Display system
US20080186255A1 (en) 2006-12-07 2008-08-07 Cohen Philip R Systems and methods for data annotation, recordation, and communication
US20080151195A1 (en) 2006-12-21 2008-06-26 Texas Instruments Incorporated Apparatus and Method for Increasing Compensation Sequence Storage Density in a Projection Visual Display System
US20080174735A1 (en) 2007-01-23 2008-07-24 Emiscape, Inc. Projection Display with Holographic Screen
US20080180640A1 (en) 2007-01-29 2008-07-31 Seiko Epson Corporation Projector
US20100039568A1 (en) 2007-03-06 2010-02-18 Emil Tchoukaleysky Digital cinema anti-camcording method and apparatus based on image frame post-sampling
US20080229318A1 (en) 2007-03-16 2008-09-18 Carsten Franke Multi-objective allocation of computational jobs in client-server or hosting environments
US20080273754A1 (en) 2007-05-04 2008-11-06 Leviton Manufacturing Co., Inc. Apparatus and method for defining an area of interest for image sensing
US20090073034A1 (en) 2007-05-19 2009-03-19 Ching-Fang Lin 4D GIS virtual reality for controlling, monitoring and prediction of manned/unmanned system
US20100026479A1 (en) 2007-05-24 2010-02-04 Bao Tran Wireless occupancy and day-light sensing
US20080301175A1 (en) 2007-05-31 2008-12-04 Michael Applebaum Distributed system for monitoring information events
US20080320482A1 (en) 2007-06-20 2008-12-25 Dawson Christopher J Management of grid computing resources based on service level requirements
US20090066805A1 (en) 2007-08-27 2009-03-12 Sanyo Electric Co., Ltd. Video camera
US20100284055A1 (en) 2007-10-19 2010-11-11 Qualcomm Mems Technologies, Inc. Display with integrated photovoltaic device
US8284205B2 (en) 2007-10-24 2012-10-09 Apple Inc. Methods and apparatuses for load balancing between multiple processing units
US20100240455A1 (en) * 2007-11-09 2010-09-23 Wms Gaming, Inc. Presenting secondary content for a wagering game
US20110134204A1 (en) 2007-12-05 2011-06-09 Florida Gulf Coast University System and methods for facilitating collaboration of a group
US20090184888A1 (en) * 2008-01-18 2009-07-23 Jyi-Yuan Chen Display control system and method thereof
US20120306878A1 (en) 2008-02-29 2012-12-06 Microsoft Corporation Modeling and Rendering of Heterogeneous Translucent Materials Using The Diffusion Equation
WO2009112585A1 (en) 2008-03-14 2009-09-17 Alcatel Lucent Method for implementing rich video on mobile terminals
US20110096844A1 (en) 2008-03-14 2011-04-28 Olivier Poupel Method for implementing rich video on mobile terminals
US8199966B2 (en) 2008-05-14 2012-06-12 International Business Machines Corporation System and method for providing contemporaneous product information with animated virtual representations
US8308304B2 (en) 2008-06-17 2012-11-13 The Invention Science Fund I, Llc Systems associated with receiving and transmitting information related to projection
US8723787B2 (en) * 2008-06-17 2014-05-13 The Invention Science Fund I, Llc Methods and systems related to an image capture projection surface
US20090323097A1 (en) 2008-06-26 2009-12-31 Canon Kabushiki Kaisha Information processing apparatus, control method of image processing system and computer program thereof
US8107736B2 (en) 2008-07-10 2012-01-31 Novell, Inc. System and method for device mapping based on images and reference points
US20100011637A1 (en) 2008-07-15 2010-01-21 Yudong Zhang Displaying device and method thereof
US8285256B2 (en) 2008-07-28 2012-10-09 Embarq Holdings Company, Llc System and method for projecting information from a wireless device
US8591039B2 (en) 2008-10-28 2013-11-26 Smart Technologies Ulc Image projection methods and interactive input/projection systems employing the same
US20100199232A1 (en) * 2009-02-03 2010-08-05 Massachusetts Institute Of Technology Wearable Gestural Interface
US20100207872A1 (en) 2009-02-17 2010-08-19 Pixart Imaging Inc. Optical displacement detecting device and operating method thereof
US20100228632A1 (en) 2009-03-03 2010-09-09 Rodriguez Tony F Narrowcasting From Public Displays, and Related Methods
US20100257252A1 (en) 2009-04-01 2010-10-07 Microsoft Corporation Augmented Reality Cloud Computing
US20100253541A1 (en) 2009-04-02 2010-10-07 Gm Global Technology Operations, Inc. Traffic infrastructure indicator on head-up display
US8408720B2 (en) * 2009-04-10 2013-04-02 Funai Electric Co., Ltd. Image display apparatus, image display method, and recording medium having image display program stored therein
US20100281095A1 (en) 2009-04-21 2010-11-04 Wehner Camille B Mobile grid computing
US8253746B2 (en) 2009-05-01 2012-08-28 Microsoft Corporation Determine intended motions
US20100322469A1 (en) 2009-05-21 2010-12-23 Sharma Ravi K Combined Watermarking and Fingerprinting
US20110010222A1 (en) 2009-07-08 2011-01-13 International Business Machines Corporation Point-in-time based energy saving recommendations
US20110012925A1 (en) 2009-07-20 2011-01-20 Igrs Engineering Lab. Ltd. Image marking method and apparatus
US20120127320A1 (en) 2009-07-31 2012-05-24 Tibor Balogh Method And Apparatus For Displaying 3D Images
US8255829B1 (en) 2009-08-19 2012-08-28 Sprint Communications Company L.P. Determining system level information technology security requirements
US8264536B2 (en) 2009-08-25 2012-09-11 Microsoft Corporation Depth-sensitive imaging via polarization-state mapping
US20110050885A1 (en) 2009-08-25 2011-03-03 Microsoft Corporation Depth-sensitive imaging via polarization-state mapping
US20110061100A1 (en) 2009-09-10 2011-03-10 Nokia Corporation Method and apparatus for controlling access
US20110072047A1 (en) 2009-09-21 2011-03-24 Microsoft Corporation Interest Learning from an Image Collection for Advertising
US20110087731A1 (en) 2009-10-08 2011-04-14 Laura Wong Systems and methods to process a request received at an application program interface
US20110098056A1 (en) 2009-10-28 2011-04-28 Rhoads Geoffrey B Intuitive computing methods and systems
US20110154350A1 (en) 2009-12-18 2011-06-23 International Business Machines Corporation Automated cloud workload management in a map-reduce environment
US20110161912A1 (en) 2009-12-30 2011-06-30 Qualzoom, Inc. System for creation and distribution of software applications usable on multiple mobile device platforms
US20110164163A1 (en) 2010-01-05 2011-07-07 Apple Inc. Synchronized, interactive augmented reality displays for multifunction devices
US20110178942A1 (en) 2010-01-18 2011-07-21 Isight Partners, Inc. Targeted Security Implementation Through Security Loss Forecasting
WO2011088053A2 (en) 2010-01-18 2011-07-21 Apple Inc. Intelligent automated assistant
US20110197147A1 (en) 2010-02-11 2011-08-11 Apple Inc. Projected display shared workspaces
US20110216090A1 (en) 2010-03-03 2011-09-08 Gwangju Institute Of Science And Technology Real-time interactive augmented reality system and method and recording medium storing program for implementing the method
WO2011115623A1 (en) 2010-03-18 2011-09-22 Hewlett-Packard Development Company, L.P. Interacting with a device
US20110238751A1 (en) 2010-03-26 2011-09-29 Nokia Corporation Method and apparatus for ad-hoc peer-to-peer augmented reality environment
US20110249197A1 (en) 2010-04-07 2011-10-13 Microvision, Inc. Wavelength Combining Apparatus, System and Method
US20130235354A1 (en) 2010-04-28 2013-09-12 Lemoptix Sa Micro-projection device with anti-speckle imaging mode
US20110289308A1 (en) 2010-05-18 2011-11-24 Sobko Andrey V Team security for portable information devices
US8382295B1 (en) 2010-06-30 2013-02-26 Amazon Technologies, Inc. Optical assembly for electronic devices
US20120009874A1 (en) 2010-07-09 2012-01-12 Nokia Corporation Allowed spectrum information distribution system
US8743145B1 (en) 2010-08-26 2014-06-03 Amazon Technologies, Inc. Visual overlay for augmenting reality
US20130300637A1 (en) 2010-10-04 2013-11-14 G Dirk Smits System and method for 3-d projection and enhancements for interactivity
US20120120296A1 (en) 2010-11-17 2012-05-17 Verizon Patent And Licensing, Inc. Methods and Systems for Dynamically Presenting Enhanced Content During a Presentation of a Media Content Instance
US20120124245A1 (en) 2010-11-17 2012-05-17 Flextronics Id, Llc Universal remote control with automated setup
US20120130513A1 (en) 2010-11-18 2012-05-24 Verizon Patent And Licensing Inc. Smart home device management
US20120223885A1 (en) 2011-03-02 2012-09-06 Microsoft Corporation Immersive display experience

Non-Patent Citations (28)

* Cited by examiner, † Cited by third party
Title
Final Office Action for U.S. Appl. No. 13/236,294, mailed on Mar. 13, 2014, Christopher Coley, "Optical Interference Mitigation", 14 pages.
Foscam User Manual, Model: FI9821W, retrieved at &lt;&lt;http://foscam.us/downloads/FI9821W%20user%20manual.pdf&gt;&gt;, May 2010, pp. 45-46 (71 pages).
Office Action for U.S. Appl. No. 12/975,175, mailed on Apr. 10, 2014, William Spencer Worley III, "Designation of Zones of Interest Within an Augmented Reality Environment", 33 pages.
Office Action for U.S. Appl. No. 12/975,175, mailed on Oct. 1, 2014, William Spencer Worley III, "Designation of Zones of Interest Within an Augmented Reality Environment", 36 pages.
Office action for U.S. Appl. No. 12/977,760, mailed on Jun. 4, 2013, Worley III et al., "Generation and Modulation of Non-Visible Structured Light", 12 pages.
Office action for U.S. Appl. No. 12/977,760, mailed on Oct. 15, 2012, Worley III et al., "Generation and Modulation of Non-Visible Structured Light", 13 pages.
Office Action for U.S. Appl. No. 12/977,760, mailed on Oct. 16, 2014, William Spencer Worley III, "Generation and Modulation of Non-Visible Structured Light", 11 pages.
Office action for U.S. Appl. No. 12/977,924, mailed on Nov. 15, 2013, Coley, et al., "Characterization of a Scene With Structured Light", 9 pages.
Office Action for U.S. Appl. No. 12/977,949, mailed on Jan. 22, 2014, William Spencer Worley III, "Powered Augmented Reality Projection Accessory Display Device", 11 pages.
Office Action for U.S. Appl. No. 12/977,992, mailed on Apr. 4, 2014, William Spencer Worley III, "Unpowered Augmented Reality Projection Accessory Display Device", 6 pages.
Office action for U.S. Appl. No. 12/978,800, mailed on Aug. 27, 2015, Worley III et al., "Integrated Augmented Reality Environment", 51 pages.
Office Action for U.S. Appl. No. 12/978,800, mailed on Dec. 2, 2014, William Spencer Worley III, "Integrated Augmented Reality Environment", 46 pages.
Office action for U.S. Appl. No. 12/978,800, mailed on Jan. 5, 2016 Worley III et al., "Integrated Augmented Reality Environment", 62 pages.
Office Action for U.S. Appl. No. 12/978,800, mailed on Jun. 17, 2014, Worley III, "Integrated Augmented Reality Environment", 40 pages.
Office Action for U.S. Appl. No. 12/978,800, mailed on Oct. 25, 2013, William Spencer Worley III, "Integrated Augmented Reality Environment", 36 pages.
Office action for U.S. Appl. No. 12/982,519, mailed on Aug. 14, 2014, Worley III, "Complementing Operation of Display Devices in an Augmented Reality Environment", 12 pages.
Office action for U.S. Appl. No. 12/982,519, mailed on Aug. 29, 2013, Worley III, "Complementing Operation of Display Devices in an Augmented Reality Environment", 12 pages.
Office action for U.S. Appl. No. 12/982,519, mailed on Dec. 31, 2015 Worley III, "Complementing Operation of Display Devices in an Augmented Reality Environment", 26 pages.
Office Action for U.S. Appl. No. 12/982,519, mailed on Feb. 12, 2014, William Spencer Worley III, "Complementing Operation of Display Devices in an Augmented Reality Environment", 12 pages.
Office action for U.S. Appl. No. 12/982,519, mailed on Feb. 7, 2013, Worley III, "Complementing Operation of Display Devices in an Augmented Reality Environment", 13 pages.
Office Action for U.S. Appl. No. 12/982,519, mailed on Jul. 23, 2015, Worley III, "Complementing Operation of Display Devices in an Augmented Reality Environment", 18 pages.
Office Action for U.S. Appl. No. 12/982,519, mailed on Mar. 5, 2015, William Spencer Worley III, "Complementing Operation of Display Devices in an Augmented Reality Environment", 13 pages.
Office Action for U.S. Appl. No. 13/236,294, mailed on Nov. 7, 2013, Christopher Coley, "Optical Interference Mitigation", 12 pages.
Office Action for U.S. Appl. No. 13/236,294, mailed on Oct. 22, 2014, Christopher Coley, "Optical Interference Mitigation", 20 pages.
Office action for U.S. Appl. No. 14/498,590 mailed on Oct. 17, 2015, Worley III et al., "Powered Augmented Reality Projection Accessory Display Device", 7 pages.
Pinhanez, "The Everywhere Displays Projector: A Device to Create Ubiquitous Graphical Interfaces", IBM Thomas Watson Research Center, Ubicomp 2001, 18 pages.
Sneath, "The Bumper List of Windows 7 Secrets", retrieved on Aug. 21, 2013, at http://blogs.msdn.com/b/tims/archive/2009/01/12/ the bumper-list-of-windows-7-secrets.aspx., 2009, 13 pages.

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160205432A1 (en) * 2010-09-20 2016-07-14 Echostar Technologies L.L.C. Methods of displaying an electronic program guide
US9762949B2 (en) * 2010-09-20 2017-09-12 Echostar Technologies L.L.C. Methods of displaying an electronic program guide
US11711418B2 (en) * 2013-12-10 2023-07-25 Google Llc Providing content to co-located devices with enhanced presentation characteristics
US20210367997A1 (en) * 2013-12-10 2021-11-25 Google Llc Providing content to co-located devices with enhanced presentation characteristics
US10122976B2 (en) * 2014-12-25 2018-11-06 Panasonic Intellectual Property Management Co., Ltd. Projection device for controlling a position of an image projected on a projection surface
US20160188123A1 (en) * 2014-12-25 2016-06-30 Panasonic Intellectual Property Management Co., Ltd. Projection device
US20170330036A1 (en) * 2015-01-29 2017-11-16 Aurasma Limited Provide augmented reality content
US20180191990A1 (en) * 2015-09-02 2018-07-05 Bandai Namco Entertainment Inc. Projection system
US20180115803A1 (en) * 2016-10-25 2018-04-26 Alphonso Inc. System and method for detecting unknown tv commercials from a live tv stream
US10136185B2 (en) * 2016-10-25 2018-11-20 Alphonso Inc. System and method for detecting unknown TV commercials from a live TV stream
US10805681B2 (en) * 2016-10-25 2020-10-13 Alphonso Inc. System and method for detecting unknown TV commercials from a live TV stream
WO2018081033A1 (en) * 2016-10-25 2018-05-03 Alphonso Inc. System and method for detecting unknown tv commercials from a live tv stream
US10614137B2 (en) 2016-11-02 2020-04-07 Alphonso Inc. System and method for detecting repeating content, including commercials, in a video data stream
US10108718B2 (en) 2016-11-02 2018-10-23 Alphonso Inc. System and method for detecting repeating content, including commercials, in a video data stream
US10346474B1 (en) 2018-03-30 2019-07-09 Alphonso Inc. System and method for detecting repeating content, including commercials, in a video data stream using audio-based and video-based automated content recognition
US11539798B2 (en) * 2019-06-28 2022-12-27 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
WO2021211265A1 (en) * 2020-04-17 2021-10-21 Apple Inc. Multi-device continuity for use with extended reality systems
US20230156280A1 (en) * 2021-11-18 2023-05-18 Synamedia Limited Systems, Devices, and Methods for Selecting TV User Interface Transitions

Similar Documents

Publication Publication Date Title
US9508194B1 (en) Utilizing content output devices in an augmented reality environment
US9607315B1 (en) Complementing operation of display devices in an augmented reality environment
US10469891B2 (en) Playing multimedia content on multiple devices
US10013857B2 (en) Using haptic technologies to provide enhanced media experiences
US9581962B1 (en) Methods and systems for generating and using simulated 3D images
CN102346898A (en) Automatic customized advertisement generation system
EP1635613A2 (en) Audio-visual system and tuning method therefor
WO2016009864A1 (en) Information processing device, display device, information processing method, program, and information processing system
US9723293B1 (en) Identifying projection surfaces in augmented reality environments
US10296281B2 (en) Handheld multi vantage point player
US20150221334A1 (en) Audio capture for multi point image capture systems
US10156898B2 (en) Multi vantage point player with wearable display
CN106647821A (en) Indoor projection following control method and system
US10664225B2 (en) Multi vantage point audio player
US20150304724A1 (en) Multi vantage point player
JP6487596B1 (en) Easy-to-use karaoke equipment switching device
JP5941760B2 (en) Display control device, display control method, display system, program, and recording medium
KR102159816B1 (en) Apparatus and method for playing back tangible multimedia contents
KR102136463B1 (en) Smart projector and method for controlling thereof
Miller My TV for Seniors
US11928381B2 (en) Display device and operating method thereof
US20220300236A1 (en) Display device and operating method thereof
US20140125763A1 (en) 3d led output device and process for emitting 3d content output for large screen applications and which is visible without 3d glasses
US20210150657A1 (en) Metadata watermarking for 'nested spectating'

Legal Events

Date Code Title Description
AS Assignment

Owner name: RAWLES LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:WORLEY III, WILLIAM SPENCER;REEL/FRAME:025712/0775

Effective date: 20110107

AS Assignment

Owner name: AMAZON TECHNOLOGIES, INC., WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:RAWLES LLC;REEL/FRAME:037103/0084

Effective date: 20151106

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20201129