US20170048667A1 - Gaze-directed content delivery - Google Patents
- Publication number
- US20170048667A1 (application US 15/184,712)
- Authority
- US
- United States
- Prior art keywords
- geo
- pane
- content
- location
- gaze
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/021—Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/025—Services making use of location information using location based information parameters
- H04W4/026—Services making use of location information using location based information parameters using orientation information, e.g. compass
Definitions
- Geo-location technologies such as GPS (Global Positioning System) may be used by smart phones and other GPS-equipped devices to obtain content on locations of interest to a user of that device based on the user's geographic location.
- images selected by the user may be analyzed for their content in order to determine their identity.
- image analysis may be computationally demanding and may remain prone to error. These challenges may hinder the development of content delivery based on geo-location technology.
- FIGS. 1A, 1B and 1C show three examples of geo-pane geometries according to embodiments
- FIG. 2 is a block diagram of an example illustrating the connections among a location aware device, a geo-pane, and a cloud computing infrastructure according to an embodiment
- FIG. 3 illustrates an example of a geo-pane in conjunction with a gaze vector according to an embodiment
- FIG. 4 is a flowchart of an example of a method of providing content based on user gaze according to an embodiment
- FIG. 5 illustrates an example of an embodiment employing a 2D geo-pane
- FIGS. 6A and 6B show examples of unidirectional and bidirectional 2D geo-panes, respectively, according to embodiments
- FIGS. 7A and 7B show examples of embodiments using inbound only and outbound only 3D geo-panes, respectively.
- FIGS. 8A and 8B illustrate examples of an embodiment employing geo-fencing.
- FIG. 9 is a block diagram of an example of a logic architecture according to an embodiment
- FIG. 10 is a block diagram of an example of a processor according to an embodiment.
- FIG. 11 is a block diagram of an example of a system according to an embodiment.
- a location aware device may be any device that has the capability of making use of geo-location technology to locate the device on a map or in a coordinate system. Examples of such technologies include those based on Global Positioning Systems (GPSs), such as are now in widespread use in smart phones; radar; sonar; indoor GPS systems; near-field communication (NFC); cellular tower triangulation systems; Wi-Fi and Wi-Fi triangulation systems; Radio Frequency Identification systems (RFID); laser positioning systems; and Bluetooth systems, which are used primarily in localized settings.
- a location aware device may be any device that is aware of its location with respect to a coordinate system. It may have the ability to transmit its location or data determinative of its location to another device or to a system, which may be local to it or more distant, such as in the cloud. That coordinate system may be defined in some local space, such as a room or building, or it may cover wide swaths of the planet, such as GPS. It may be based on any known mathematical system, including Cartesian coordinates, cylindrical coordinates, spherical coordinates etc.
- the coordinates are latitude, longitude, and altitude. While most of the embodiments here are described in terms of GPS, it is understood that embodiments may utilize these other systems or any other mathematically acceptable coordinate system.
- GPS is the most widely used of these systems. Although initially developed for the military and made available for civilian use only with hobbled capabilities, GPS is now available with a geographic resolution that is generally accurate within several meters and improving. GPS capabilities are now tightly woven into smart phones, and provide latitude and longitude measurements by which users of these devices locate themselves on city maps. Less well known may be that GPS systems may also provide altitude information. Hence, a GPS is an example of a geo-location system that is capable of locating the user by latitude, longitude, and altitude. Also, GPS accuracy may be enhanced by use of supplemental systems such as various ground-based augmented GPS systems. One such system, the Nationwide Differential GPS System (NDGPS), offers accuracy to within 10 cm.
- Altitude measurements may be augmented by use of barometric pressure sensors and other devices available for measuring altitude.
- altitude may be determined via triangulation devices and by using Wi-Fi access points, which have been used to determine the particular floor of a multi-story building that a user may be in.
- GPS location may also form the origin of a locally defined three dimensional (3D) coordinate system.
- smart phones may now be equipped with an accelerometer, gyroscope and a magnetometer, and with the sensor data provided by these components, the direction in which the device is oriented may be determined using basic vector mechanics and, for example, well known techniques employed in smart phone design.
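As a rough illustration of the vector mechanics mentioned above, the following sketch derives pitch, roll, and a tilt-compensated heading from raw accelerometer and magnetometer samples. The axis convention and function names are assumptions for illustration, not taken from the patent; real handsets differ in sign conventions.

```python
import math

def orientation_from_sensors(ax, ay, az, mx, my, mz):
    """Estimate pitch and roll (radians) and heading (degrees) from raw
    accelerometer (m/s^2) and magnetometer readings.

    Assumes a common handset axis convention (x right, y forward, z up);
    signs may need adjusting for a particular device.
    """
    # Gravity direction gives the tilt of the device.
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)

    # Tilt-compensate the magnetometer before taking the compass heading.
    xh = (mx * math.cos(pitch)
          + my * math.sin(roll) * math.sin(pitch)
          + mz * math.cos(roll) * math.sin(pitch))
    yh = my * math.cos(roll) - mz * math.sin(roll)
    heading = (math.degrees(math.atan2(-yh, xh)) + 360.0) % 360.0
    return pitch, roll, heading
```

For a device lying flat and facing magnetic north, the sketch returns zero pitch, roll, and heading, which is the baseline from which a gaze direction can then be built.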
- the embodiments disclosed herein make use of geo-location technology to provide the user of a location aware device pointing in a particular direction with content associated with that direction, as shall be explained further below.
- geo-pane may refer to a two dimensional pane of space and the virtual frame, known as a “geo-frame,” that bounds it.
- a geo-pane may be defined by its coordinates, and these coordinates may be taken at whatever location on the geo-frame is most appropriate.
- One simple example of a two dimensional (2D) geo-pane is shown in FIG. 1A, in which the geo-pane is a rectangle having four corners 10 a, 10 b, 10 c, 10 d.
- the geo-pane may be defined by the coordinates of its four corners, or at the centers of each of its four sides.
- the geo-frame may be a square, rectangular or any other sort of polygon, a circle or an ellipse. Where the illustrated geo-pane is a polygon, it may be most convenient to define it in terms of the coordinates of its vertices. In the case of a circular geo-pane, the geo-pane may be defined by indicating its center and radius. Since the geo-pane is bound by its frame, anything that is contained within the boundaries of the geo-frame may be associated with that geo-pane.
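Since a geo-pane is defined by geographic coordinates but treated geometrically as a plane, one practical step (a sketch under our own naming, not part of the patent) is mapping each corner's latitude, longitude, and altitude into local east/north/up meters around a reference point, using a flat-earth approximation that is adequate over geo-pane-sized distances:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def to_local_enu(lat, lon, alt, ref_lat, ref_lon, ref_alt):
    """Map (lat, lon) in degrees and alt in meters to east/north/up
    meters relative to a reference point (flat-earth approximation)."""
    east = math.radians(lon - ref_lon) * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
    north = math.radians(lat - ref_lat) * EARTH_RADIUS_M
    up = alt - ref_alt
    return east, north, up
```

Once the corners are in a common Cartesian frame, intersection and containment tests against the geo-frame reduce to ordinary vector geometry.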
- a geo-pane may also be a three dimensional volume of space, such as a cube 12 ( FIG. 1B ) having vertices 12 a,b,c,d,e,f, as well as a box, right cylinder, polyhedron, sphere, hemisphere 14 ( FIG. 1C ), or any portion thereof that may be used to define a volume of space.
- 2D geo-panes that determine two dimensional regions
- 3D geo-panes that enclose volumes.
- a geo-fence may be a virtual fence in which GPS or other positioning system is used to define the boundaries of some physical space. It may be two dimensional, in that it is defined in terms of ground coordinates only, or it may be bounded above and below and be three dimensional. A geo-fence may be of any bounded shape. A geo-fence may be used to define a region of space that contains a 2D or 3D geo-pane, and it may be associated with a 2D or a 3D geo-pane whether or not the geo-pane is located inside the geo-fence.
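Determining whether a device is inside a 2D geo-fence is a standard point-in-polygon test. The sketch below (our own illustrative implementation, not the patent's) uses ray casting over the fence's boundary vertices, which may be expressed in local ENU meters or any consistent planar coordinates:

```python
def inside_geofence(x, y, boundary):
    """Ray-casting test: is planar point (x, y) inside the polygon
    given as an ordered list of (x, y) vertices?"""
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

An odd number of boundary crossings means the point is inside; this works for any bounded polygonal fence shape, as the passage above allows.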
- Geo-panes may be defined by location aware devices.
- the 2D geo-pane of FIG. 1A may be defined by placing a location aware device at its four corners and obtaining the coordinates there in GPS or other system.
- a 3D geo-pane may be defined by placing the location aware device at its vertices or at points in space sufficient to bound some region of interest.
- a 3D geo-pane may be defined in terms of a single 2D geo-pane that has been mathematically thickened to provide depth. The located points may correspond to something physical, like the perimeter of a statue or the corners of a wall, or they may simply be points in space of interest to someone.
- a geo-pane may be regarded as property, having an owner to whom it is registered in a database. Alternatively, it may be dedicated to the public, or subject to an open license for use by others. Ownership of the geo-pane may be separate from ownership of any real object within its coordinates, or they may be bundled together.
- a location aware device 210 forwards the geo-coordinates 270 that define the geo-pane to a geo-pane registration server 202 in the cloud 200 (e.g., a cloud computing infrastructure), where it may be registered to an owner.
- Any content that the owner may wish to associate with that geo-pane may be stored in a content server 204 for gaze-based access by a device 250 , which forwards various sensor data 275 to the cloud and which may receive content 280 in return, as is explained in greater detail below.
- a geo-fence may contain one or more geo-panes, either 2D or 3D, or be associated with a geo-pane that it does not contain. As with geo-panes, a geo-fence may be registered in the cloud to an owner.
- the creator/owner of the geo-pane may also provide downloadable content to be associated with that particular geo-pane.
- the content may take any form that may be made use of by the accessing location aware device, including text, video, music, narration, images, graphics, software, firmware, email, web pages, applications, e-services, voice, data, etc. It may be offered free of charge, or for a fee.
- a location aware device may, using an internal/embedded accelerometer, gyroscope, magnetometer, etc., also establish its orientation in space. That is to say, taking the location of the device as an origin, the direction of orientation may be defined.
- the location aware device is a pair of smart glasses 250 equipped with a camera 252 such that in use its orientation will be more or less coincident with the gaze direction of the wearer.
- the gaze direction may be expressed mathematically in terms of a gaze ray 260 having its origin 264 at the GPS location of the device, or as a gaze unit vector 261 pointing along that ray, or as a set of direction cosines, depending on the particular mathematical formalism one may wish to employ. Whatever the formalism employed, a location aware device may “know” both its location and its orientation.
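To make the formalism concrete, the gaze unit vector can be computed from a compass bearing and a pitch angle (names and conventions are ours, chosen for illustration). In local east/north/up coordinates:

```python
import math

def gaze_unit_vector(bearing_deg, pitch_deg):
    """Convert a compass bearing (degrees clockwise from north) and a
    pitch (degrees above the horizon) into a unit direction vector in
    local east/north/up coordinates."""
    b = math.radians(bearing_deg)
    p = math.radians(pitch_deg)
    east = math.cos(p) * math.sin(b)
    north = math.cos(p) * math.cos(b)
    up = math.sin(p)
    return east, north, up
```

The components of this unit vector are exactly the direction cosines mentioned above, and anchoring its tail at the device's GPS location gives the gaze ray.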
- device orientation will normally be determined in terms appropriate to the physical nature of the location aware device in question.
- the device is a camera or other optical device
- one natural way of defining the orientation is to look to the orientation of the optical path as seen through the view finder.
- so-called “smart glasses” have been developed (such as those marketed by Google®). It is natural to define the orientation of such a device as the direction of the user's gaze when wearing them and looking at some location or object of interest.
- Other devices where there may be some preferred direction of orientation (which for lexicographical compactness is called “gaze” herein regardless of the device in question) are smart watches, wearables, helmet mounted devices, and smart phones where the user chooses to define some orientation.
- a wand might be adapted to include GPS, and then by pointing the wand in a specific direction one defines a gaze for the wand.
- a game controller may be designed to have a gaze.
- the operation of one embodiment is further addressed by way of reference to the flow chart 300 in FIG. 4 .
- the method 300 may be implemented as a module in a set of logic instructions stored in a machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), in fixed-functionality hardware logic using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof.
- computer program code to carry out operations shown in method 300 may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
- the user initiates a request for information on something the user sees. This may or may not correspond to a known geo-pane for which content exists. Typically, the user will initiate a request because the context of the user's location suggests that such information may be available through his or her directed gaze.
- the GPS coordinates as well as the orientation of gaze are forwarded to the cloud, where a database (which may be resident in the geo-pane registration server 202 of FIG. 2 ) and/or additional computational or relational software determines at block 330 whether the device's GPS coordinates are inside a known geo-fence.
- the system may calculate in block 340 whether the gaze points to a geo-pane associated with the geo-fenced area. If it does, then in block 350 the content associated with that geo-pane may be downloaded to the user's device.
- any geo-panes may be identified as being in line with the user's gaze
- certain distance rules may be brought into play at block 370 .
- the user may be free to define for the system the effective range of his gaze. For example, he may only be interested in geo-panes that are within five meters of his position. Or he may select a greater distance, out to his effective horizon or beyond.
- the owner of the geo-pane may have set up the registration of his geo-pane with a set of rules so that content is delivered only to those devices that are within a certain predetermined distance of the location of the geo-pane.
- the distance may be undefined by either party, and subject to default system rules in the cloud or on the location aware device. If the distance rules are met, then the content is delivered to the device at block 350. Otherwise, the process again terminates until the user moves some distance or manually requests a new search for content.
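The distance rules of block 370 can be sketched as follows: compute the ground distance between device and geo-pane (here with the standard haversine formula), then require it to fall within both the user-selected range and any range registered by the geo-pane owner. Function names and the rule structure are our own illustrative assumptions:

```python
import math

def ground_distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in meters between two
    latitude/longitude points given in degrees."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def distance_rules_met(distance_m, user_max_m=None, owner_max_m=None):
    """Content is delivered only if the device lies within both the
    range the user selected and any range the owner registered; a
    missing rule (None) places no constraint."""
    if user_max_m is not None and distance_m > user_max_m:
        return False
    if owner_max_m is not None and distance_m > owner_max_m:
        return False
    return True
```

With both limits left undefined, every candidate geo-pane passes, which corresponds to the default-rules case described above.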
- the flow chart by-passes blocks 330 and 340 .
- the device GPS coordinates and gaze direction may be obtained at block 320 , and it may be determined at block 360 whether the gaze direction intercepts a geo-pane. If it does and if any defined distance limitations are met, then the content associated with the geo-pane may be downloaded to the device at block 350 as before.
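The block 360 test of whether the gaze direction intercepts a geo-pane is, geometrically, a ray/rectangle intersection. A minimal sketch in local Cartesian coordinates (our own formulation; the pane is given by one corner and two edge vectors):

```python
def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def gaze_hits_pane(origin, direction, corner, edge_u, edge_v):
    """Does the gaze ray (origin + t*direction, t > 0) pass through the
    rectangular geo-pane spanned by edge_u and edge_v from one corner?
    All points are local Cartesian coordinates (e.g., ENU meters)."""
    normal = _cross(edge_u, edge_v)
    denom = _dot(direction, normal)
    if abs(denom) < 1e-9:          # gaze parallel to the pane
        return False
    t = _dot(_sub(corner, origin), normal) / denom
    if t <= 0:                     # pane lies behind the viewer
        return False
    hit = (origin[0] + t * direction[0],
           origin[1] + t * direction[1],
           origin[2] + t * direction[2])
    rel = _sub(hit, corner)
    s = _dot(rel, edge_u) / _dot(edge_u, edge_u)
    r = _dot(rel, edge_v) / _dot(edge_v, edge_v)
    return 0.0 <= s <= 1.0 and 0.0 <= r <= 1.0
```

The same routine serves the geo-fenced path (block 340) once candidate panes have been narrowed down; only the candidate set differs.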
- the illustrated arrangement may be, insofar as the user is concerned, simplicity itself. Merely by gazing at an object, she may obtain additional content concerning it.
- where the user is wearing smart glasses, simply looking at an object suffices to provide additional information and content associated with that object.
- the owner or registrant of the geo-pane may be given broad authority to determine the circumstances under which content will be provided. Hence, according to another rule that may optionally be used, content is delivered only to paying customers or to those who have met some other standards defined by the owner of the geo-pane.
- FIG. 5 illustrates an embodiment and its use in a museum-like setting.
- a user 400 stands in front of a painting 430 mounted to a wall 410 .
- She is holding a location aware smart phone 440 having an optical system that defines a gaze vector 450 .
- the gaze vector 450 is pointed in the direction of the painting 430 .
- the painting 430 is located within a geo-pane 420 that is generally aligned with the wall 410 to which the painting has been mounted.
- the illustrated geo-pane 420 is defined by the coordinates of its corners 420 a, 420 b, 420 c, 420 d, which contain information regarding the latitude, longitude, and altitude of the corner or some other coordinate information that may be related to the GPS coordinates of the device held by the woman.
- the smart phone may transmit its GPS coordinates and its orientation, i.e., the gaze vector 450 , to the cloud.
- the gaze vector 450 is determined in terms of a device pitch 460 and a device bearing 470 , although any known approach for determining an orientation may be employed.
- servers may determine if there is a geo-pane within some defined distance of the smart phone. That distance may be set by the user using their smart phone, or by the owner of the geo-pane in question. If the smart phone 440 is located within the defined range of the geo-pane, and if the gaze vector 450 lies along a line that intersects the geo-pane, then the cloud may download content of interest to the smart phone. For example, in a museum setting, by pointing at a painting, the user could receive information on its history or other works by that same artist.
- Geo-panes may be unidirectional, as shown in FIG. 6A , or bidirectional as is shown in FIG. 6B .
- In the unidirectional case ( FIG. 6A ), only users on one side of the geo-pane—in this instance, the woman on the right—receive content about it.
- In a bidirectional geo-pane ( FIG. 6B ), users on either side of the geo-pane may receive the same content. Alternatively, each side may receive its own unique content, which may be of value in perspective-specific displays. If the user is wearing smart glasses, this content may further vary with the user's gaze direction as the user turns his head.
- One way of providing such selective directionality to the geo-pane is to note its frame orientation so that the cloud knows when the user is standing in front of it versus behind it. Such an approach may be useful in, for example, the museum setting in which wall-mounted paintings are viewed. While the person in the room with the painting would want to see that content, another person in the adjoining room (still within range of the painting) would not.
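The frame-orientation approach above can be sketched with two dot products: register an outward "front" normal with the pane, then serve content only when the user stands on the front side and gazes back against the normal. The function and parameter names are illustrative assumptions:

```python
def _dot3(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def unidirectional_pane_active(user_pos, gaze, pane_center, front_normal):
    """A unidirectional geo-pane serves content only to users standing
    on its registered 'front' side (the side front_normal points
    toward) whose gaze is directed back toward the pane."""
    offset = (user_pos[0] - pane_center[0],
              user_pos[1] - pane_center[1],
              user_pos[2] - pane_center[2])
    on_front_side = _dot3(offset, front_normal) > 0
    gazing_at_pane = _dot3(gaze, front_normal) < 0
    return on_front_side and gazing_at_pane
```

A person in the adjoining room sits on the wrong side of the normal, so both conditions fail for them even if the pane is within range, which matches the museum example above.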
- 3D stand-alone geo-panes, such as are shown in FIGS. 7A and 7B, have volume.
- a statue 700 has been geo-paned, i.e., enclosed in a box-shaped 3D geo-pane 710 so that users on any side looking in (here users 720 , 721 , 722 ) may see content or information relevant to the statue no matter which side they are on.
- the illustrated approach is an example of an inbound only geo-pane, in which content is only displayed if an individual is looking into the geo-pane. Someone standing on the statue platform and looking out would not see this content.
- some sides may be active and provide content while others are not.
- In FIG. 7B there is shown an outbound-looking 3D geo-pane 730 in the shape of a capped cylinder. Users 723 , 724 , and 725 standing inside the volume of the geo-pane and looking out may access whatever content may be associated with an outbound gaze (which may or may not be the same for all users inside the geo-paned area), but those on the outside looking in do not see any of that content.
- Another embodiment combines geo-panes with geo-fences.
- paintings hanging on display in a museum as in FIGS. 8A and 8B .
- the two people may both be staring at the same wall and at the same distance from it, but as there is a different painting on each side they do not want to see any content concerning the unseen painting in the adjacent room.
- this embodiment includes 3D geo-fences, which are volumes with which 2D and 3D geo-panes may be associated, whether or not they are actually inside the boundaries of the geo-fence.
- room 1 has a geo-fence 1
- room 2 has a geo-fence 2 .
- Any objects of interest in each room may be associated with a particular geo-pane, and the geo-fences are associated with these geo-panes in a database.
- any user standing in room 1 will also be inside geo-fence 1 , and she will have access only to the data associated with the geo-panes that have been associated with the geo-fence in that room.
- the user in the adjacent room would be inside geo-fence 2 and have access to the different geo-pane content associated with that geo-fence in that room.
- a user's gaze may intersect geo-panes in adjoining rooms but so long as he is in the correct geo-fenced area, he will have access to the correct content.
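The fence-gating logic described above reduces to a set intersection: of all geo-panes the gaze intersects (possibly through walls into adjoining rooms), keep only those the database associates with the geo-fence the user occupies. A minimal sketch, with illustrative names of our own:

```python
def resolve_content(user_fence_id, fence_to_panes, panes_hit_by_gaze):
    """Return only the geo-panes that are both intersected by the
    user's gaze and associated with the geo-fence the user stands in.

    fence_to_panes maps a fence id to the pane ids registered to it;
    panes_hit_by_gaze is the set of pane ids the gaze ray intersects.
    """
    allowed = set(fence_to_panes.get(user_fence_id, []))
    return sorted(allowed & set(panes_hit_by_gaze))
```

A user whose gaze crosses into the adjacent room still receives only the content registered to the fence of the room they are standing in.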
- wearable smart glasses or other augmented reality device one may obtain such additional information about the painting he is gazing at as the owner of the museum may wish to provide.
- the spatial resolution of this system may be affected by the particular geo-location technology employed, and one system may prove more useful in a given context than another.
- One may also combine systems, using GPS up to the limits of its resolution, and employing other systems with higher resolutions to augment it.
- the present embodiments are not limited to any one system for geo-location.
- the geo-panes are static, i.e., they may be fixed in space.
- geo-panes may be mobile, as may be geo-fences, provided the locating coordinates are updated as they move about.
- a 3D geo-pane may be automatically created around the user with public profile information as the content. This geo-pane would move with the person and thus in at least this sense could be considered to be mobile.
- a location aware device (e.g., Google Glasses®) would be able to gaze upon the other user within the geo-pane and swiftly download information from the cloud concerning them, such as name, job title, etc.
- Such an approach may be much simpler than capturing an image of a person, and then forwarding it to a service for image analysis and identification. Also, it provides the owner of the geo-pane with control over his content, and he may limit access to devices having the proper permissions.
- creation of geo-panes and geo-fences may be a simple matter of identifying, for example, the vertices, corners, edges, centers, or center faces of areas of interest and transmitting those coordinates to a server or other store of data where it may be associated with content.
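One plausible shape for the registration message a location aware device might transmit to the server is sketched below. The field names, URL, and JSON framing are illustrative assumptions; the patent specifies only that bounding coordinates are transmitted and associated with content:

```python
import json

def geo_pane_registration(owner, vertices, content_url, bidirectional=False):
    """Build a registration payload for a geo-pane: its bounding
    coordinates plus the content to associate with them.

    vertices is a list of (lat, lon, alt) tuples captured by placing
    the device at the corners or vertices of the region of interest.
    """
    return json.dumps({
        "owner": owner,
        "vertices": [{"lat": lat, "lon": lon, "alt": alt}
                     for (lat, lon, alt) in vertices],
        "bidirectional": bidirectional,
        "content_url": content_url,
    })
```

The server can then store this record against the owner, as in the registration flow of FIG. 2.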
- gaze may refer to the direction a location aware device is pointed, or it may refer to a direction vector of unit magnitude (e.g., one meter, if MKS is used) having its origin (i.e., its tail end) at the coordinates of the device.
- gaze vector may refer to a ray beginning at the coordinates of the location aware device and stretching out to infinity.
- Communication with the cloud may be conducted via one or more radios capable of transmitting and receiving signals using various suitable wireless communications techniques. Such techniques may involve communications across one or more wireless networks.
- Exemplary wireless networks include (but are not limited to) wireless local area networks (WLANs), wireless personal area networks (WPANs), wireless metropolitan area network (WMANs), cellular networks, and satellite networks.
- a location aware device may include computers that are arranged to be worn by a person, such as a wrist computer, finger computer, ring computer, eyeglass computer, belt-clip computer, arm-band computer, shoe computers, clothing computers, and other wearable computers.
- a location aware device may be implemented as a smart phone capable of executing computer applications, as well as voice communications and/or data communications.
- the logic architecture 904 may generally implement one or more aspects of the method 300 ( FIG. 4 ) already discussed above.
- the logic architecture 904 may include a sensor processing module 905 that comprises a geo-location module 906 and a gaze-orientation module 907 .
- a geo-pane identification module 910 and a geo-fence identification module 912 .
- a location aware device 915 such as a smart phone, may include a number of sensors 922 of use in determining the geo-location of the location aware device 915 .
- These sensors may include a gyroscope 924 , an accelerometer 926 , a magnetometer 928 , and an air pressure sensor 930 .
- Data provided by these sensors 922 may be utilized within sensor processing module 905 by the geo-location module 906 and gaze orientation module 907 to determine the geo-location and the gaze orientation of the device.
- Geo-pane identification module 910 utilizes this information to identify any geo-panes to associate with the direction of gaze, as discussed previously. Where geo-fences are employed, the geo-fence identification module uses the geo-location of the device to determine if there are any geo-fences to take into account.
- the location aware device 915 may include a content request module 925 that requests content associated with geo-panes identified in the logic architecture 904 . If suitable content is identified, it is downloaded from the content server 930 to the location aware device 915 .
- FIG. 10 illustrates a processor core 2000 according to one embodiment.
- the processor core 2000 may be the core for any type of processor, such as a micro-processor, an embedded processor, a digital signal processor (DSP), a network processor, or other device to execute code. Although only one processor core 2000 is illustrated in FIG. 10 , a processing element may alternatively include more than one of the processor core 2000 illustrated in FIG. 10 .
- the processor core 2000 may be a single-threaded core or, for at least one embodiment, the processor core 2000 may be multithreaded in that it may include more than one hardware thread context (or “logical processor”) per core.
- FIG. 10 also illustrates a memory 2700 coupled to the processor core 2000 .
- the memory 2700 may be any of a wide variety of memories (including various layers of memory hierarchy) as are known or otherwise available to those of skill in the art.
- the memory 2700 may include one or more code 2130 instruction(s) to be executed by the processor core 2000 , wherein the code 2130 may implement the method ( FIG. 4 ), already discussed.
- the processor core 2000 follows a program sequence of instructions indicated by the code 2130 . Each instruction may enter a front end portion 2100 and be processed by one or more decoders 2200 .
- the decoder 2200 may generate as its output a micro operation such as a fixed width micro operation in a predefined format, or may generate other instructions, microinstructions, or control signals which reflect the original code instruction.
- the illustrated front end 2100 also includes register renaming logic 2250 and scheduling logic 2300 , which generally allocate resources and queue the operation corresponding to the convert instruction for execution.
- the processor core 2000 is shown including execution logic 2500 having a set of execution units 2550 - 1 through 2550 -N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function.
- the illustrated execution logic 2500 performs the operations specified by code instructions.
- back end logic 2600 retires the instructions of the code 2130 .
- the processor core 2000 allows out of order execution but requires in order retirement of instructions.
- Retirement logic 2650 may take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like). In this manner, the processor core 2000 is transformed during execution of the code 2130 , at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 2250 , and any registers (not shown) modified by the execution logic 2500 .
- a processing element may include other elements on chip with the processor core 2000 .
- a processing element may include memory control logic along with the processor core 2000 .
- the processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic.
- the processing element may also include one or more caches.
- FIG. 11 shows a block diagram of a system 1000 in accordance with an embodiment. Shown in FIG. 11 is a multiprocessor system 1000 that includes a first processing element 1070 and a second processing element 1080 . While two processing elements 1070 and 1080 are shown, it is to be understood that an embodiment of the system 1000 may also include only one such processing element.
- the system 1000 is illustrated as a point-to-point interconnect system, wherein the first processing element 1070 and the second processing element 1080 are coupled via a point-to-point interconnect 1050 . It should be understood that any or all of the interconnects illustrated in FIG. 11 may be implemented as a multi-drop bus rather than point-to-point interconnect.
- each of processing elements 1070 and 1080 may be multicore processors, including first and second processor cores (i.e., processor cores 1074 a and 1074 b and processor cores 1084 a and 1084 b ).
- Such cores 1074 a , 1074 b , 1084 a , 1084 b may be configured to execute instruction code in a manner similar to that discussed above in connection with FIG. 10 .
- Each processing element 1070 , 1080 may include at least one shared cache 1896 a , 1896 b .
- the shared cache 1896 a , 1896 b may store data (e.g., instructions) that are utilized by one or more components of the processor, such as the cores 1074 a , 1074 b and 1084 a , 1084 b , respectively.
- the shared cache 1896 a , 1896 b may locally cache data stored in a memory 1032 , 1034 for faster access by components of the processor.
- the shared cache 1896 a , 1896 b may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.
- processing elements 1070 , 1080 may be present in a given processor.
- processing elements 1070 , 1080 may be an element other than a processor, such as an accelerator or a field programmable gate array.
- additional processing element(s) may include additional processor(s) that are the same as a first processor 1070, additional processor(s) that are heterogeneous or asymmetric to the first processor 1070, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processing element.
- there can be a variety of differences between the processing elements 1070, 1080 in terms of a spectrum of metrics of merit, including architectural, microarchitectural, thermal, and power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processing elements 1070, 1080.
- the various processing elements 1070 , 1080 may reside in the same die package.
- the first processing element 1070 may further include memory controller logic (MC) 1072 and point-to-point (P-P) interfaces 1076 and 1078 .
- the second processing element 1080 may include a MC 1082 and P-P interfaces 1086 and 1088 .
- MC's 1072 and 1082 couple the processors to respective memories, namely a memory 1032 and a memory 1034, which may be portions of main memory locally attached to the respective processors. While the MC's 1072 and 1082 are illustrated as integrated into the processing elements 1070, 1080, for alternative embodiments the MC logic may be discrete logic outside the processing elements 1070, 1080 rather than integrated therein.
- the first processing element 1070 and the second processing element 1080 may be coupled to an I/O subsystem 1090 via P-P interconnects 1076 and 1086, respectively.
- the I/O subsystem 1090 includes P-P interfaces 1094 and 1098 .
- I/O subsystem 1090 includes an interface 1092 to couple I/O subsystem 1090 with a high performance graphics engine 1038 .
- bus 1049 may be used to couple the graphics engine 1038 to the I/O subsystem 1090 .
- a point-to-point interconnect may couple these components.
- I/O subsystem 1090 may be coupled to a first bus 1016 via an interface 1096 .
- the first bus 1016 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the embodiments is not so limited.
- various I/O devices 1014 may be coupled to the first bus 1016 , along with a bus bridge 1018 which may couple the first bus 1016 to a second bus 1020 .
- the second bus 1020 may be a low pin count (LPC) bus.
- Various devices may be coupled to the second bus 1020 including, for example, a keyboard/mouse 1012 , network controllers/communication device(s) 1026 (which may in turn be in communication with a computer network), and a data storage unit 1019 such as a disk drive or other mass storage device which may include code 1030 , in one embodiment.
- the code 1030 may include instructions for performing embodiments of one or more of the methods described above.
- the illustrated code 1030 may implement the method already discussed with respect to FIG. 4 or any embodiment herein, and may be similar to the code 2130 ( FIG. 10 ), already discussed.
- an audio I/O 1024 may be coupled to second bus 1020 .
- a system may implement a multi-drop bus or another such communication topology.
- the elements of FIG. 11 may alternatively be partitioned using more or fewer integrated chips than shown in FIG. 11 .
- Example 1 may include a system to provide content to a device comprising a gaze orientation module to determine a gaze direction based on sensor-based information, and a geo-pane identification module to identify regions of space and associate content with said regions of space based on the gaze direction.
- Example 2 may include the system of Example 1, further comprising a geo-location module to determine device coordinates within a coordinate system, wherein content is associated with said regions of space based on the device coordinates.
- Example 3 may include the system of Example 1, wherein at least one of the modules is integrated into a device that is wearable about a portion of the human body.
- Example 4 may include the system of Example 1, wherein content is associated with geo-panes.
- Example 5 may include the system of Example 1, wherein said content and data defining geo-panes are registered to an owner.
- Example 6 may include the system of any of Examples 1-5, further comprising a device for use in game play.
- Example 7 may include the system of any of Examples 1 or 3-6, further comprising a content server, a geo-fence identification module to identify fenced regions of space, and a geo-location module to determine device coordinates within a coordinate system, wherein content is associated with said regions of space based on the device coordinates.
- Example 8 may include the system of any of Examples 1-6, further comprising a mobile device, wherein a user of the mobile device may select content based on an orientation of said device.
- Example 9 may include a method to provide content to a device, comprising obtaining spatial coordinates and an orientation of a device, said orientation defining a gaze direction, identifying a geo-pane based on the spatial coordinates and the gaze direction, and providing content to the device that is relevant to the geo-pane.
- Example 10 may include the method of Example 9, wherein the geo-pane is three dimensional.
- Example 11 may include the method of Example 9, further including determining whether the spatial coordinates of the device lie within a geo-fence, and determining whether there are any geo-panes associated with said geo-fence, wherein providing the content to the device includes providing content concerning the geo-pane associated with the geo-fence to which the device is pointing.
- Example 12 may include the method of any of Examples 9-11, wherein the spatial coordinates are obtained through a combination of geo-location systems, at least one of which is local to the device.
- Example 13 may include the method of Example 7, further including registering ownership of a geo-pane with a registrar.
- Example 14 may include the method of Examples 9 or 11, further including establishing a rule set to determine whether content is to be made available.
- Example 15 may include the method of Examples 7 or 9, wherein the rule set comprises rules that control content delivery to devices located within a predetermined range of the geo-pane.
- Example 16 may include the method of Example 9, wherein the geo-pane is mobile.
- Example 17 may include the method of Example 9, wherein the content that is relevant to a geo-pane varies with a gaze direction of a user.
- Example 18 may include the method of Examples 9 or 16, wherein the geo-panes are utilized in game play.
- Example 19 may include the method of Example 9, wherein content is associated with geo-panes.
- Example 20 may include at least one computer readable storage medium comprising a set of instructions which, when executed by one or more servers, cause the server to obtain spatial coordinates and an orientation of a device, said orientation defining a gaze direction, identify a geo-pane based on the spatial coordinates and the gaze direction, and provide content to the device that is relevant to the geo-pane.
- Example 21 may include the at least one computer readable storage medium of Example 20, wherein the instructions, when executed, cause the server to obtain spatial coordinates through a combination of geo-location systems, at least one of which is local to the device.
- Example 22 may include the at least one computer readable storage medium of Example 20, wherein the instructions, when executed, cause the server to restrict content delivery to devices located within a predetermined range of the geo-pane.
- Example 23 may include the at least one computer readable storage medium of any of Examples 20-22, wherein the instructions, when executed, cause a server to vary delivery of content with the gaze direction.
- Example 24 may include the at least one computer readable storage medium of Example 20, wherein the instructions, when executed, cause the server to determine whether the spatial coordinates of the device lie within a geo-fence, and determine whether there are any geo-panes associated with said geo-fence, wherein providing the content to the device includes providing content concerning a geo-pane associated with a geo-fence based on gaze direction.
- Example 25 may include a mobile device comprising one or more sensors, a geo-location module to use sensor-based information from the one or more sensors to determine the location of the mobile device, an orientation module to use information from the one or more sensors to determine an orientation of the mobile device, and a content request module to request the delivery of content to the mobile device based at least partly on the orientation of the device and its location.
- Example 26 may include the mobile device of Example 25, wherein the sensor-based information is provided by at least one sensor selected from a group consisting of a gyroscope, magnetometer, barometer, pressure sensor, and accelerometer.
- Example 27 may include the mobile device of Example 25, wherein the sensor-based information can at least partially define coordinates of a geo-pane.
- Example 28 may include at least one computer readable storage medium comprising a set of instructions which, when executed by a mobile device, cause the mobile device to determine the location of the mobile device, determine an orientation of the mobile device, and request the delivery of content to the mobile device based at least partly on the orientation of the device and its location.
- Example 29 may include the at least one computer readable storage medium of Example 28, wherein the mobile device comprises sensors.
- Example 30 may include a method of providing content to a mobile device, comprising obtaining spatial coordinates and orientation of a device, said orientation defining a gaze direction, determining whether the gaze direction points to a geo-pane, and providing content to the device that is relevant to the geo-pane.
- Example 31 may include the method of Example 30, wherein the spatial coordinates are obtained through a combination of geo-location systems, including global positioning system coordinates.
- Example 32 may include the method of Examples 30 or 31, further including providing for registration of a geo-pane, and associating content with the geo-pane.
- Example 33 may include the method of Example 30, further including establishing a rule set to determine whether content is to be made available.
- Example 34 may include the method of Example 33, wherein the rules comprise a set of permissions.
- Example 35 may include the method of Example 30, wherein the geo-pane is mobile.
- Example 36 may include the method of Examples 30 or 35, wherein the content that is associated with a geo-pane varies with the orientation of a direction of gaze of a user.
- Example 37 may include the method of Example 30, wherein the geo-fence is mobile.
- Example 38 may include the method of any of Examples 30, 35, or 37, wherein the geo-panes are utilized in game play.
- Example 39 may include the method of Example 38, wherein the game play includes using geo-panes associated with the layout of a city.
- Example 40 may include a mobile device comprising one or more sensors, means for using sensor-based information from the one or more sensors to determine the location of the mobile device, means for using information from the one or more sensors to determine an orientation of the mobile device, and means for requesting the delivery of content to the mobile device based at least partly on the orientation of the device and its location.
- Example 41 may include the mobile device of Example 40, further comprising means for controlling delivery of content.
- Example 42 may include a system for providing location specific content, comprising a device for mapping the boundaries of a region, memory for receiving numerical information defining the boundaries of the region, content associated with the region, a location aware device having means to determine a direction of orientation of the location aware device and its location, and means to relate the orientation and the location of the location aware device to said content.
- Example 43 may include the system of Example 42, further comprising access rules.
- Example 44 may include a system to provide content to a device, comprising means for locating the device in space, means for determining the orientation of the device, means for storing content and relating it to a location in space, and means for providing content to the device in dependence upon its location and orientation.
- Example 45 may include the system of Example 44, comprising means for controlling access to content.
- Example 46 may include a method of providing content, comprising obtaining the Global Positioning System (GPS) coordinates of a device, determining a device orientation from at least one of accelerometer, magnetometer, and gyroscope data obtained from the device, said device orientation defining a gaze direction, accessing a database containing the GPS coordinates of at least one geo-pane and rules for permitting access to data associated with the geo-pane, determining whether the gaze direction points to a geo-pane, and providing content to the device that is relevant to the geo-pane provided the rules are met.
- Example 47 may include the method of Example 46, wherein the rules include limitations on the distance between the geo-pane and the device.
- Example 48 may include the method of Example 46, further comprising game play.
- Various embodiments may be implemented using hardware elements, software elements, or a combination of both.
- Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), logic gates, registers, semiconductor devices, chips, microchips, chipsets, and so forth.
- Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.
Abstract
Methods and systems may provide for associating content in a cloud computing infrastructure with geo-panes and sending it to a location aware device based on user gaze. The content may be restricted by rules, including distance from user to the geo-pane. The geo-panes may be static or mobile in space.
Description
- This application is a continuation of U.S. patent application Ser. No. 14/229,561 filed on Mar. 28, 2014.
- Geo-location technologies such as GPS (Global Positioning System) may be used by smart phones and other GPS-equipped devices to obtain content on locations of interest to a user of that device based on the user's geographic location. In some applications, images selected by the user may be analyzed for their content in order to determine their identity. Such image analysis, however, may be computationally demanding and may remain prone to error. These challenges may hinder the development of content delivery based on geo-location technology.
- The various advantages of the embodiments will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:
- FIGS. 1A, 1B and 1C show three examples of geo-pane geometries according to embodiments;
- FIG. 2 is a block diagram of an example illustrating the connections among a location aware device, a geo-pane, and a cloud computing infrastructure according to an embodiment;
- FIG. 3 illustrates an example of a geo-pane in conjunction with a gaze vector according to an embodiment;
- FIG. 4 is a flowchart of an example of a method of providing content based on user gaze according to an embodiment;
- FIG. 5 illustrates an example of an embodiment employing a 2D geo-pane;
- FIGS. 6A and 6B show examples of unidirectional and bidirectional 2D geo-panes, respectively, according to embodiments;
- FIGS. 7A and 7B show examples of embodiments using inbound only and outbound only 3D geo-panes, respectively;
- FIGS. 8A and 8B illustrate examples of an embodiment employing geo-fencing;
- FIG. 9 is a block diagram of an example of a logic architecture according to an embodiment;
- FIG. 10 is a block diagram of an example of a processor according to an embodiment; and
- FIG. 11 is a block diagram of an example of a system according to an embodiment.
- Various geo-location technologies exist to enable one to determine the location of a location aware device. A location aware device may be any device that has the capability of making use of geo-location technology to locate the device on a map or in a coordinate system. Examples of such technologies include those based on Global Positioning Systems (GPSs), such as are now in widespread use in smart phones; radar; sonar; indoor GPS systems; near-field communication (NFC); cellular tower triangulation systems; Wi-Fi and Wi-Fi triangulation systems; Radio Frequency Identification systems (RFID); laser positioning systems; and Bluetooth systems, which are used primarily in localized settings.
- These technologies may be employed by location aware devices. Examples of devices with such capabilities include tablets, notebook computers, smart phones, smart glasses, image capture devices, mobile internet devices (MIDs), game consoles, media players, etc. More broadly, a location aware device may be any device that is aware of its location with respect to a coordinate system. It may have the ability to transmit its location or data determinative of its location to another device or to a system, which may be local to it or more distant, such as in the cloud. That coordinate system may be defined in some local space, such as a room or building, or it may cover wide swaths of the planet, such as GPS. It may be based on any known mathematical system, including Cartesian coordinates, cylindrical coordinates, spherical coordinates, etc. With respect to GPS, which is amenable for use with embodiments disclosed herein, the coordinates are latitude, longitude, and altitude. While most of the embodiments here are described in terms of GPS, it is understood that embodiments may utilize these other systems or any other mathematically acceptable coordinate system.
- At present, GPS is the most widely used of these systems. Although initially developed for the military and made available for civilian use only with hobbled capabilities, GPS is now available with a geographic resolution that is generally accurate within several meters and improving. GPS capabilities are now tightly woven into smart phones, and provide latitude and longitude measurements by which users of these devices locate themselves on city maps. Less well known may be that GPS systems may also provide altitude information. Hence, a GPS is an example of a geo-location system that is capable of locating the user by latitude, longitude, and altitude. Also, GPS accuracy may be enhanced by use of supplemental systems such as various ground-based augmented GPS systems. One such system, Nationwide Differential GPS System (NDGPS), offers accuracy to within 10 cm. Altitude measurements may be augmented by use of barometric pressure sensors and other devices available for measuring altitude. Alternatively, altitude may be determined via triangulation devices and by using Wi-Fi access points, which have been used to determine the particular floor of a multi-story building that a user may be in.
- GPS location may also form the origin of a locally defined three dimensional (3D) coordinate system.
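A locally defined 3D frame of this kind can be sketched as a flat-earth East-North-Up (ENU) conversion about a GPS origin. The following is an illustrative assumption rather than anything specified by the embodiments; the function name is hypothetical, and it uses a spherical-earth approximation that is adequate over the short ranges at which a geo-pane would typically be viewed.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius; spherical approximation

def geodetic_to_enu(lat, lon, alt, ref_lat, ref_lon, ref_alt):
    """Map a (lat, lon, alt) point to East-North-Up meters relative to a
    reference origin. Adequate over short ranges; a production system
    would use an ellipsoidal model rather than a spherical one."""
    d_lat = math.radians(lat - ref_lat)
    d_lon = math.radians(lon - ref_lon)
    east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
    north = d_lat * EARTH_RADIUS_M
    up = alt - ref_alt
    return (east, north, up)
```

With such a frame in place, geo-pane corners and gaze rays can be treated with ordinary vector mechanics rather than spherical geometry.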
- Many location aware devices may now be equipped with hardware that enables them to determine their orientation as well as position. For example, smart phones may now be equipped with an accelerometer, gyroscope and a magnetometer, and with the sensor data provided by these components, the direction in which the device is oriented may be determined using basic vector mechanics and for example, well known techniques employed in smart phone design.
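The basic vector mechanics mentioned above can be made concrete with a short sketch that converts a compass bearing (from the magnetometer) and a pitch angle (from the accelerometer/gyroscope) into a unit gaze vector in an East-North-Up frame. The function name and angle conventions are illustrative assumptions, not part of the embodiments.

```python
import math

def gaze_unit_vector(bearing_deg, pitch_deg):
    """Unit gaze vector in an East-North-Up frame.
    bearing_deg: compass heading (0 = north, 90 = east), from the magnetometer.
    pitch_deg: elevation above the horizontal, from the accelerometer/gyroscope."""
    b = math.radians(bearing_deg)
    p = math.radians(pitch_deg)
    return (math.cos(p) * math.sin(b),   # east
            math.cos(p) * math.cos(b),   # north
            math.sin(p))                 # up
```

For example, a device held level and pointed due north yields the vector (0, 1, 0); the three components of this vector are exactly the direction cosines of the gaze.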
- The embodiments disclosed herein make use of geo-location technology to provide the user of a location aware device pointing in a particular direction with content associated with that direction, as shall be explained further below.
- As used herein, the term “geo-pane” may refer to a two dimensional pane of space and the virtual frame, known as a “geo-frame,” that bounds it. A geo-pane may be defined by its coordinates, and these coordinates may be taken at whatever location on the geo-frame is most appropriate. One simple example of a two dimensional (2D) geo-pane is shown in
FIG. 1A, in which the geo-pane is a rectangle having four corners.
- A geo-pane may also be a three-dimensional volume of space, such as a cube 12 (
FIG. 1B ) having vertices 12 a,b,c,d,e,f, as well as a box, right cylinder, polyhedron, sphere, hemisphere 14 (FIG. 1C ), or any portion thereof that may be used to define a volume of space. Hence, there are 2D geo-panes that determine two dimensional regions and 3D geo-panes that enclose volumes. - A geo-fence may be a virtual fence in which GPS or other positioning system is used to define the boundaries of some physical space. It may be two dimensional, in that it is defined in terms of ground coordinates only, or it may be bounded above and below and be three dimensional. A geo-fence may be of any bounded shape. A geo-fence may be used to define a region of space that contains a 2D or 3D geo-pane, and it may be associated with a 2D or a 3D geo-pane whether or not the geo-pane is located inside the geo-fence.
- Geo-panes may be defined by location aware devices. For example, the 2D geo-pane of
FIG. 1A may be defined by placing a location aware device at its four corners and obtaining the coordinates there in GPS or other system. Similarly, a 3D geo-pane may be defined by placing the location aware device at its vertices or at points in space sufficient to bound some region of interest. According to another embodiment, a 3D geo-pane may be defined in terms of a single 2D geo-pane that has been mathematically thickened to provide depth. The located points may correspond to something physical, like the perimeter of a statue or the corners of a wall, or they may simply be points in space of interest to someone. - Once defined, a geo-pane may be regarded as property, having an owner to whom it is registered in a database. Alternatively, it may be dedicated to the public, or subject to an open license for use by others. Ownership of the geo-pane may be separate from ownership of any real object within its coordinates, or they may be bundled together.
- The process of defining and using geo-panes according to one embodiment is illustrated at a top-level form in
FIG. 2 . A locationaware device 210 forwards the geo-coordinates 270 that define the geo-pane to a geo-pane registration server 202 in the cloud 200 (e.g., a cloud computing infrastructure), where it may be registered to an owner. Also provided to the cloud, and by any device capable of accessing thecloud 200, such as a smart phone or a computer, may be such other identifying information as the owner of the geo-pane may regard as useful for subsequent operation, including billing information and any such other information as may be provided. Any content that the owner may wish to associate with that geo-pane may be stored in acontent server 204 for gaze-based access by adevice 250, which forwardsvarious sensor data 275 to the cloud and which may receivecontent 280 in return, as is explained in greater detail below. - This same technique may also be used to define geo-fences. A geo-fence may contain one or more geo-panes, either 2D or 3D, or be associated with a geo-pane that it does not contain. As with geo-panes, a geo-fence may be registered in the cloud to an owner.
- Within the
cloud 200, the creator/owner of the geo-pane may also provide downloadable content to be associated with that particular geo-pane. The content may take any form that may be made use of by the accessing location aware device, including text, video, music, narration, images, graphics, software, firmware, email, web pages, applications, e-services, voice, data, etc. It may be offered free of charge, or for a fee. - As noted above, in addition to obtaining its location in space, a location aware device may, using an internal/embedded accelerometer, gyroscope, magnetometer, etc., also establish its orientation in space. That is to say, taking the location of the device as an origin, the direction of orientation may be defined. In the example of
FIG. 3, the location aware device is a pair of smart glasses 250 equipped with a camera 252 such that in use its orientation will be more or less coincident with the gaze direction of the wearer. The gaze direction may be expressed mathematically in terms of a gaze ray 260 having its origin 264 at the GPS location of the device, or as a gaze unit vector 261 pointing along that ray, or as a set of direction cosines, depending on the particular mathematical formalism one may wish to employ. Whatever the formalism employed, a location aware device may “know” both its location and its orientation.
- The operation of one embodiment is further addressed by way of reference to the
flow chart 300 in FIG.4. Themethod 300 may be implemented as a module in set of logic instructions stored in a machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), in fixed-functionality hardware logic using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof. For example, computer program code to carry out operations shown inmethod 300 may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. - At
start processing block 310, the user initiates a request for information on something the user sees. This may or may not correspond to a known geo-pane for which content exists. Typically, the user will initiate a request because the context of the user's location suggests that such information may be available through his or her directed gaze. At illustrated block 320, the GPS coordinates as well as the orientation of gaze are forwarded to the cloud, where a database (which may be resident in the geo-pane registration server 202 of FIG. 2) and/or additional computational or relational software determines at block 330 whether the user's GPS coordinates are inside a known geo-fence.
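The block 330 determination of whether the device's coordinates lie inside a known geo-fence is, for a 2D geo-fence, a standard point-in-polygon test. A minimal ray-casting sketch, assuming the fence vertices have already been projected into planar ground coordinates (the function name is hypothetical):

```python
def inside_geofence(point, fence):
    """Ray-casting point-in-polygon test over planar ground coordinates.
    `fence` is a list of (x, y) vertices; `point` is an (x, y) pair."""
    x, y = point
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        # Count crossings of a horizontal ray cast from `point` to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```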
block 340 whether the gaze points to a geo-pane associated with the area inside the geo-fenced area. If it does, then inblock 350 the content associated with that geo-pane may be downloaded to the user's device. - Should it be determined in illustrated
block 330 that the user is not inside a geo-fenced area, or, inillustrated block 340, that he is but that there is no content available within it to match his gaze, then atblock 360 it may be determined whether the user's gaze is pointing to any geo-pane, no matter the distance. If it is not, then there is nothing to download and the process may terminate atblock 380 until it is invoked again, either by the user manually invoking it or automatically through the movement of the user some predefined distance. - On the other hand, if it is determined in
block 360 that any geo-panes may be identified as being in line with the user's gaze, certain distance rules may be brought into play atblock 370. The user may be free to define for the system the effective range of his gaze. For example, he may only be interested in geo-panes that are within five meters of his position. Or he may select a greater distance, out to his effective horizon or beyond. Alternatively, the owner of the geo-pane may have set up the registration of his geo-pane with a set of rules so that content is delivered only to those devices that are within a certain predetermined distance of the location of the geo-pane. Alternatively, the distance may be undefined by either party, and subject to rules determined by default to system rules in the cloud or the location aware device. If the distance rules are met, then the content is delivered to the device atblock 350. Otherwise, the process again terminates until the user moves some distance or manually requests a new search for content. - While the previous embodiment employs geo-fencing, according to another embodiment geo-fencing is not used. In this embodiment, the flow chart by-
passes blocks block 320, and it may be determined atblock 360 whether the gaze direction intercepts a geo-pane. If it does and if any defined distance limitations are met, then the content associated with the geo-pane may be downloaded to the device atblock 350 as before. - The illustrated arrangement may be, insofar as the user is concerned, simplicity itself Merely by gazing at an object, she may obtain additional content concerning it. When the user is wearing smart glasses, simply looking at an object suffices to provide additional information and content associated with that object.
- The owner or registrant of the geo-pane may be given broad authority to determine the circumstances under which content will be provided. Hence, according to another rule that may optionally be used, content is delivered only to paying customers or to those who have met some other standards defined by the owner of the geo-pane.
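Owner-defined standards such as "paying customers only" can likewise be sketched as a rule set the cloud evaluates before delivery. The two standards modeled below (payment status and an allow-list) and the requester field names are assumptions for illustration:

```python
def owner_rule_set(paying_only=False, allowed_users=None):
    """Build a delivery predicate from owner-defined standards. The rules
    and requester field names here are illustrative, not from the patent."""
    def allows(requester):
        if paying_only and not requester.get("is_paying", False):
            return False  # content reserved for paying customers
        if allowed_users is not None and requester.get("user_id") not in allowed_users:
            return False  # requester not on the owner's allow-list
        return True
    return allows

gallery_rules = owner_rule_set(paying_only=True)
print(gallery_rules({"user_id": "u1", "is_paying": True}))   # True
print(gallery_rules({"user_id": "u2", "is_paying": False}))  # False
```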
- FIG. 5 illustrates an embodiment and its use in a museum-like setting. A user 400 stands in front of a painting 430 mounted to a wall 410. She is holding a location aware smart phone 440 having an optical system that defines a gaze vector 450. In this instance, the gaze vector 450 is pointed in the direction of the painting 430. The painting 430 is located within a geo-pane 420 that is generally aligned with the wall 410 to which the painting has been mounted. The illustrated geo-pane 420 is defined by the coordinates of its corners. The smart phone 440 may transmit its location, along with the gaze vector 450, to the cloud. In this embodiment, the gaze vector 450 is determined in terms of a device pitch 460 and a device bearing 470, although any known approach for determining an orientation may be employed. - In the cloud, servers may determine if there is a geo-pane within some defined distance of the smart phone. That distance may be set by the user using their smart phone, or by the owner of the geo-pane in question. If the
smart phone 440 is located within the defined range of the geo-pane, and if the gaze vector 450 lies along a line that intersects the geo-pane, then the cloud may download content of interest to the smart phone. For example, in a museum setting, by pointing at a painting, the user could receive information on its history or other works by that same artist. - Geo-panes may be unidirectional, as shown in
FIG. 6A, or bidirectional, as shown in FIG. 6B. In the unidirectional case (FIG. 6A), only users on one side of the geo-pane—in this instance, the woman on the right—receive content about it. In a bidirectional geo-pane (FIG. 6B), users on either side of the geo-pane may receive the same content. Alternatively, each side may receive its own unique content, which may be of value in perspective-specific displays. If the user is wearing smart glasses, this content may further vary with the user's gaze direction as the user turns his head. One way of providing such selective directionality to the geo-pane is to note its frame orientation so that the cloud knows when the user is standing in front of it versus behind it. Such an approach may be useful in, for example, the museum setting in which wall-mounted paintings are viewed. While the person in the room with the painting would want to see that content, another person in the adjoining room (still within range of the painting) would not. - 3D stand-alone geo-panes, such as are shown in FIGS. 7A and 7B, have volume. In the embodiment of
FIG. 7A, a statue 700 has been geo-paned, i.e., located in a box-shaped 3D geo-pane 710, so that users on any side looking in (here users 720, 721, 722) may see content or information relevant to the statue no matter which side they are on. The illustrated approach is an example of an inbound-only geo-pane, in which content is only displayed if an individual is looking into the geo-pane. Someone standing on the statue platform and looking out would not see this content. In still other embodiments, some sides may be active and provide content while others are not. - In the embodiment in
FIG. 7B there is shown an outbound-looking 3D geo-pane 730 in the shape of a capped cylinder, where users inside the geo-pane looking outward may receive content. - Another embodiment combines geo-panes with geo-fences. Again consider the case of paintings hanging on display in a museum, as in
FIGS. 8A and 8B. Two people standing in adjoining rooms face a common wall 820, on either side of which hang different paintings, in addition to other paintings that may be in each room. The two people may both be staring at the same wall and at the same distance from it, but as there is a different painting on each side, neither wants to see content concerning the unseen painting in the adjacent room. To provide each person with content specific to what his eyes actually see and not of what lies beyond on the other side of the wall, this embodiment includes 3D geo-fences, which are volumes with which 2D and 3D geo-panes may be associated, whether or not the geo-panes are actually inside the boundaries of the geo-fence. In this example, room 1 has a geo-fence 1 and room 2 has a geo-fence 2. Any objects of interest in each room may be associated with a particular geo-pane, and the geo-fences are associated with these geo-panes in a database. Hence, any user standing in room 1 will also be inside geo-fence 1, and she will have access only to the data associated with the geo-panes that have been associated with the geo-fence in that room. Similarly, the user in the adjacent room will be inside geo-fence 2 and have access to the different geo-pane content associated with that geo-fence. A user's gaze may intersect geo-panes in adjoining rooms, but so long as he is in the correct geo-fenced area, he will have access to the correct content. Using wearable smart glasses or another augmented reality device, a user may obtain whatever additional information about the painting he is gazing at the owner of the museum may wish to provide. - The spatial resolution of this system may be affected by the particular geo-location technology employed, and one system may prove more useful in a given context than another. One may also combine systems, using GPS up to the limits of its resolution and employing other systems with higher resolutions to augment it.
The present embodiments are not limited to any one system for geo-location.
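The FIG. 5 geometry can be sketched in a few lines: a direction vector built from the device pitch 460 and bearing 470, intersected with a wall-aligned rectangular geo-pane. The East-North-Up frame and the bearing convention below are assumptions, not something the patent specifies:

```python
import math

def gaze_vector(pitch_deg, bearing_deg):
    """Turn device pitch 460 and compass bearing 470 into a direction
    vector in an East-North-Up frame. Convention assumed here: bearing
    0 = North, 90 = East; pitch 0 = horizontal, positive = upward."""
    p, b = math.radians(pitch_deg), math.radians(bearing_deg)
    return (math.cos(p) * math.sin(b),   # east
            math.cos(p) * math.cos(b),   # north
            math.sin(p))                 # up

def hits_vertical_pane(origin, direction, pane_x, y_range, z_range):
    """Intersect the gaze ray with a wall-aligned rectangular geo-pane
    lying in the plane x = pane_x. True only if the hit point falls
    inside the pane's rectangle and in front of the viewer."""
    dx = direction[0]
    if abs(dx) < 1e-9:
        return False                    # gaze parallel to the wall plane
    t = (pane_x - origin[0]) / dx
    if t <= 0:
        return False                    # pane is behind the viewer
    y = origin[1] + t * direction[1]
    z = origin[2] + t * direction[2]
    return y_range[0] <= y <= y_range[1] and z_range[0] <= z <= z_range[1]

# A viewer 3 m east of a pane spanning y in [-1, 1] m, z in [0.5, 2.5] m,
# looking due west (bearing 270) and tilted 10 degrees upward:
d = gaze_vector(10, 270)
print(hits_vertical_pane((3, 0, 1.5), d, pane_x=0.0,
                         y_range=(-1, 1), z_range=(0.5, 2.5)))  # True
```

The sign of t distinguishes front from behind, which is also the basis of the unidirectional behavior of FIG. 6A.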
- In the preceding embodiments, the geo-panes are static, i.e., they may be fixed in space. However, geo-panes may be mobile, as may be geo-fences, provided the locating coordinates are updated as they move about. For example, using device location information in the possession of a user, a 3D geo-pane may be automatically created around the user with public profile information as the content. This geo-pane would move with the person and thus, in at least this sense, could be considered to be mobile. Thus, one may create personalized geo-panes that accompany individuals as they move about. Then another user provided with a location aware device according to the embodiments here (e.g., Google Glass®) would be able to gaze upon the other user within the geo-pane and swiftly download information from the cloud concerning them, such as name, job title, etc. Such an approach may be much simpler than capturing an image of a person and then forwarding it to a service for image analysis and identification. Also, it provides the owner of the geo-pane with control over his content, and he may limit access to devices having the proper permissions.
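A personalized, mobile geo-pane of the kind just described might be modeled as a record that tracks its owner's reported position and gates its profile content on permissions. The class layout, field names, and permission scheme are illustrative assumptions:

```python
class PersonalGeoPane:
    """A 3D geo-pane that follows its owner: a box centred on the owner's
    reported position, carrying public-profile content. Field names and
    the permission scheme are illustrative assumptions."""

    def __init__(self, owner, content, half_size=(0.5, 0.5, 1.0),
                 allowed=None):
        self.owner = owner
        self.content = content
        self.half = half_size
        self.allowed = allowed          # None = public
        self.center = (0.0, 0.0, 0.0)

    def update_location(self, pos):
        """Called as the owner's device reports movement, keeping the
        pane's locating coordinates current."""
        self.center = pos

    def content_for(self, requester):
        """Owner retains control: only permitted requesters get content."""
        if self.allowed is not None and requester not in self.allowed:
            return None
        return self.content

pane = PersonalGeoPane("alice", {"name": "Alice", "title": "Engineer"},
                       allowed={"bob"})
pane.update_location((12.0, 7.5, 0.0))
print(pane.content_for("bob"))      # {'name': 'Alice', 'title': 'Engineer'}
print(pane.content_for("mallory"))  # None
```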
- By using a geo-location system with a spatial resolution sufficient to the task at hand, creating geo-panes and geo-fences may be a simple matter of identifying, for example, the vertices, corners, edges, centers, or center faces of areas of interest and transmitting those coordinates to a server or other data store, where they may be associated with content.
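That registration step might look like the following sketch, where a pane record is nothing more than its corner coordinates plus the associated content; the store layout and identifier scheme are assumptions:

```python
def register_geo_pane(store, owner, corners, content):
    """Create a geo-pane record from its corner coordinates and associate
    content with it. 'store' stands in for the cloud-side database; the
    record layout is an illustrative assumption."""
    pane_id = f"pane-{len(store) + 1}"
    store[pane_id] = {"owner": owner, "corners": corners, "content": content}
    return pane_id

db = {}
pid = register_geo_pane(
    db, "museum",
    corners=[(0, 0, 0), (4, 0, 0), (4, 0, 3), (0, 0, 3)],  # wall-aligned rectangle
    content="History of the painting and other works by the artist")
print(pid)               # pane-1
print(db[pid]["owner"])  # museum
```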
- As used herein, gaze may refer to the direction a location aware device is pointed, or it may refer to a direction vector of unit magnitude (e.g., one meter, if MKS is used) having its origin (i.e., its tail end) at the coordinates of the device. Alternatively, the term “gaze vector” may refer to a ray beginning at the coordinates of the location aware device and stretching out to infinity. Each may convey the same idea, that a direction is specified that points from the device (which is the user if he is wearing smart glasses) to something of interest to him. It is well within the knowledge of engineers familiar with vector mechanics to work with any or all of these formulations.
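The equivalence of these formulations is easy to see in code: the unit vector is just the ray's direction, and the ray is the unit vector swept over all non-negative parameters. A small sketch, with conventions assumed rather than taken from the patent:

```python
import math

def as_unit_vector(direction):
    """Gaze as a unit-magnitude direction vector, tail at the device."""
    n = math.sqrt(sum(c * c for c in direction))
    return tuple(c / n for c in direction)

def gaze_ray(origin, direction):
    """Gaze as a ray: a function t -> origin + t * unit_direction, t >= 0,
    stretching out as far as desired."""
    u = as_unit_vector(direction)
    return lambda t: tuple(o + t * c for o, c in zip(origin, u))

# Both formulations specify the same direction from the device:
ray = gaze_ray((2.0, 0.0, 1.5), (0.0, 3.0, 4.0))
print(as_unit_vector((0.0, 3.0, 4.0)))  # (0.0, 0.6, 0.8)
print(ray(10.0))                        # (2.0, 6.0, 9.5)
```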
- Communication with the cloud may be conducted via one or more radios capable of transmitting and receiving signals using various suitable wireless communications techniques. Such techniques may involve communications across one or more wireless networks. Exemplary wireless networks include (but are not limited to) wireless local area networks (WLANs), wireless personal area networks (WPANs), wireless metropolitan area networks (WMANs), cellular networks, and satellite networks.
- Further examples of a location aware device may include computers that are arranged to be worn by a person, such as a wrist computer, finger computer, ring computer, eyeglass computer, belt-clip computer, arm-band computer, shoe computer, clothing computer, and other wearable computers. In embodiments, for example, a location aware device may be implemented as a smart phone capable of executing computer applications, as well as voice communications and/or data communications.
- Turning now to
FIG. 9, a logic architecture 904 is shown, wherein the logic architecture may generally implement one or more aspects of the method 300 (FIG. 4) already discussed above. In the illustrated example, the logic architecture 904 may include a sensor processing module 905 that comprises a geo-location module 906 and a gaze orientation module 907. Also provided are a geo-pane identification module 910 and a geo-fence identification module 912. A location aware device 915, such as a smart phone, may include a number of sensors 922 of use in determining the geo-location of the location aware device 915. These sensors may include a gyroscope 924, an accelerometer 926, a magnetometer 928, and an air pressure sensor 930. Data provided by these sensors 922 may be utilized within the sensor processing module 905 by the geo-location module 906 and gaze orientation module 907 to determine the geo-location and the gaze orientation of the device. The geo-pane identification module 910 utilizes this information to identify any geo-panes to associate with the direction of gaze, as discussed previously. Where geo-fences are employed, the geo-fence identification module 912 uses the geo-location of the device to determine if there are any geo-fences to take into account. - The location
aware device 915 may include a content request module 925 that requests content associated with geo-panes identified in the logic architecture 904. If suitable content is identified, it is downloaded from the content server 930 to the location aware device 915. -
FIG. 10 illustrates a processor core 2000 according to one embodiment. The processor core 2000 may be the core for any type of processor, such as a micro-processor, an embedded processor, a digital signal processor (DSP), a network processor, or other device to execute code. Although only one processor core 2000 is illustrated in FIG. 10, a processing element may alternatively include more than one of the processor core 2000 illustrated in FIG. 10. The processor core 2000 may be a single-threaded core or, for at least one embodiment, the processor core 2000 may be multithreaded in that it may include more than one hardware thread context (or “logical processor”) per core. -
FIG. 10 also illustrates a memory 2700 coupled to the processor core 2000. The memory 2700 may be any of a wide variety of memories (including various layers of memory hierarchy) as are known or otherwise available to those of skill in the art. The memory 2700 may include one or more code 2130 instruction(s) to be executed by the processor core 2000, wherein the code 2130 may implement the method (FIG. 4), already discussed. The processor core 2000 follows a program sequence of instructions indicated by the code 2130. Each instruction may enter a front end portion 2100 and be processed by one or more decoders 2200. The decoder 2200 may generate as its output a micro operation such as a fixed width micro operation in a predefined format, or may generate other instructions, microinstructions, or control signals which reflect the original code instruction. The illustrated front end 2100 also includes register renaming logic 2250 and scheduling logic 2300, which generally allocate resources and queue operations corresponding to the code instructions for execution. - The
processor core 2000 is shown including execution logic 2500 having a set of execution units 2550-1 through 2550-N. Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. The illustrated execution logic 2500 performs the operations specified by code instructions. - After completion of execution of the operations specified by the code instructions,
back end logic 2600 retires the instructions of the code 2130. In one embodiment, the processor core 2000 allows out of order execution but requires in order retirement of instructions. Retirement logic 2650 may take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like). In this manner, the processor core 2000 is transformed during execution of the code 2130, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 2250, and any registers (not shown) modified by the execution logic 2500. - Although not illustrated in
FIG. 10, a processing element may include other elements on chip with the processor core 2000. For example, a processing element may include memory control logic along with the processor core 2000. The processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches. - Referring now to
FIG. 11, shown is a block diagram of a system 1000 in accordance with an embodiment. Shown in FIG. 11 is a multiprocessor system 1000 that includes a first processing element 1070 and a second processing element 1080. While two processing elements 1070, 1080 are shown, it is to be understood that the system 1000 may also include only one such processing element. - The
system 1000 is illustrated as a point-to-point interconnect system, wherein the first processing element 1070 and the second processing element 1080 are coupled via a point-to-point interconnect 1050. It should be understood that any or all of the interconnects illustrated in FIG. 11 may be implemented as a multi-drop bus rather than a point-to-point interconnect. - As shown in
FIG. 11, each of processing elements 1070 and 1080 may be a multicore processor, including first and second processor cores (i.e., processor cores 1074a and 1074b and processor cores 1084a and 1084b). Such cores may be configured to execute instruction code in a manner similar to that discussed above in connection with FIG. 10. - Each
processing element 1070, 1080 may include at least one shared cache. The shared cache may store data (e.g., instructions) that are utilized by one or more components of the processor, such as the cores; for example, the shared cache may locally cache data stored in a memory for faster access by components of the processor. - While shown with only two
processing elements 1070, 1080, it is to be understood that the scope of the embodiments is not so limited; in other embodiments, one or more additional processing elements may be present. For example, additional processing element(s) may include additional processor(s) that are the same as a first processor 1070, additional processor(s) that are heterogeneous or asymmetric to the first processor 1070, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processing element. There can be a variety of differences between the processing elements 1070, 1080 in terms of a spectrum of metrics of merit, including architectural, microarchitectural, thermal, and power consumption characteristics. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the various processing elements 1070, 1080. - The
first processing element 1070 may further include memory controller logic (MC) 1072 and point-to-point (P-P) interfaces 1076 and 1078. Similarly, the second processing element 1080 may include an MC 1082 and P-P interfaces of its own, including P-P interface 1086. As shown in FIG. 11, MC's 1072 and 1082 couple the processors to respective memories, namely a memory 1032 and a memory 1034, which may be portions of main memory locally attached to the respective processors. While the MC 1072, 1082 is illustrated as integrated into the processing elements 1070, 1080, for alternative embodiments the MC logic may be discrete logic outside the processing elements 1070, 1080 rather than integrated therein. - The
first processing element 1070 and the second processing element 1080 may be coupled to an I/O subsystem 1090 via P-P interconnects 1076 and 1086, respectively. As shown in FIG. 11, the I/O subsystem 1090 includes corresponding P-P interfaces, and the I/O subsystem 1090 includes an interface 1092 to couple the I/O subsystem 1090 with a high performance graphics engine 1038. In one embodiment, a bus 1049 may be used to couple the graphics engine 1038 to the I/O subsystem 1090. Alternatively, a point-to-point interconnect may couple these components. - In turn, the I/O subsystem 1090 may be coupled to a first bus 1016 via an interface 1096. In one embodiment, the first bus 1016 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the embodiments is not so limited. - As shown in
FIG. 11, various I/O devices 1014 (e.g., cameras, sensors) may be coupled to the first bus 1016, along with a bus bridge 1018 which may couple the first bus 1016 to a second bus 1020. In one embodiment, the second bus 1020 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1020 including, for example, a keyboard/mouse 1012, network controllers/communication device(s) 1026 (which may in turn be in communication with a computer network), and a data storage unit 1019 such as a disk drive or other mass storage device which may include code 1030, in one embodiment. The code 1030 may include instructions for performing embodiments of one or more of the methods described above. Thus, the illustrated code 1030 may implement the method already discussed with respect to FIG. 4 or any embodiment herein, and may be similar to the code 2130 (FIG. 10), already discussed. Further, an audio I/O 1024 may be coupled to the second bus 1020. - Note that other embodiments are contemplated. For example, instead of the point-to-point architecture of
FIG. 11, a system may implement a multi-drop bus or another such communication topology. Also, the elements of FIG. 11 may alternatively be partitioned using more or fewer integrated chips than shown in FIG. 11. - Example 1 may include a system to provide content to a device comprising a gaze orientation module to determine a gaze direction based on sensor-based information, and a geo-pane identification module to identify regions of space and associate content with said regions of space based on the gaze direction.
- Example 2 may include the system of Example 1, further comprising a geo-location module to determine device coordinates within a coordinate system, wherein content is associated with said regions of space based on the device coordinates.
- Example 3 may include the system of Example 1, wherein at least one of the modules is integrated into a device that is wearable about a portion of the human body.
- Example 4 may include the system of Example 1, wherein content is associated with geo-panes.
- Example 5 may include the system of Example 1, wherein said content and data defining geo-panes are registered to an owner.
- Example 6 may include the system of any of Examples 1-5, further comprising a device for use in game play.
- Example 7 may include the system of any of Examples 1 or 3-6, further comprising a content server, a geo-fence identification module to identify fenced regions of space, and a geo-location module to determine device coordinates within a coordinate system, wherein content is associated with said regions of space based on the device coordinates.
- Example 8 may include the system of any of Examples 1-6, further comprising a mobile device, wherein a user of the mobile device may select content based on an orientation of said device.
- Example 9 may include a method to provide content to a device, comprising obtaining spatial coordinates and an orientation of a device, said orientation defining a gaze direction, identifying a geo-pane based on the spatial coordinates and the gaze direction, and providing content to the device that is relevant to the geo-pane.
- Example 10 may include the method of Example 9, wherein the geo-pane is three dimensional.
- Example 11 may include the method of Example 9, further including determining whether the spatial coordinates of the device lie within a geo-fence, and determining whether there are any geo-panes associated with said geo-fence, wherein providing the content to the device includes providing content concerning the geo-pane associated with the geo-fence to which the device is pointing.
- Example 12 may include the method of any of Examples 9-11, wherein the spatial coordinates are obtained through a combination of geo-location systems, at least one of which is local to the device.
- Example 13 may include the method of Example 9, further including registering ownership of a geo-pane with a registrar.
- Example 14 may include the method of Examples 9 or 11, further including establishing a rule set to determine whether content is to be made available.
- Example 15 may include the method of Example 14, wherein the rule set comprises rules that control content delivery to devices located within a predetermined range of the geo-pane.
- Example 16 may include the method of Example 9, wherein the geo-pane is mobile.
- Example 17 may include the method of Example 9, wherein the content that is relevant to a geo-pane varies with a gaze direction of a user.
- Example 18 may include the method of Examples 9 or 16, wherein the geo-panes are utilized in game play.
- Example 19 may include the method of Example 9, wherein content is associated with geo-panes.
- Example 20 may include at least one computer readable storage medium comprising a set of instructions which, when executed by one or more servers, cause the server to obtain spatial coordinates and an orientation of a device, said orientation defining a gaze direction, identify a geo-pane based on the spatial coordinates and the gaze direction, and provide content to the device that is relevant to the geo-pane.
- Example 21 may include the at least one computer readable storage medium of Example 20, wherein the instructions, when executed, cause the server to obtain spatial coordinates through a combination of geo-location systems, at least one of which is local to the device.
- Example 22 may include the at least one computer readable storage medium of Example 20, wherein the instructions, when executed, cause the server to restrict content delivery to devices located within a predetermined range of the geo-pane.
- Example 23 may include the at least one computer readable storage medium of any of Examples 20-22, wherein the instructions, when executed, cause a server to vary delivery of content with the gaze direction.
- Example 24 may include the at least one computer readable storage medium of Example 20, wherein the instructions, when executed, cause the server to determine whether the spatial coordinates of the device lie within a geo-fence, and determine whether there are any geo-panes associated with said geo-fence, wherein providing the content to the device includes providing content concerning a geo-pane associated with a geo-fence based on gaze direction.
- Example 25 may include a mobile device comprising one or more sensors, a geo-location module to use sensor-based information from the one or more sensors to determine the location of the mobile device, an orientation module to use information from the one or more sensors to determine an orientation of the mobile device, and a content request module to request the delivery of content to the mobile device based at least partly on the orientation of the device and its location.
- Example 26 may include the mobile device of Example 25, wherein the sensor-based information is provided by at least one sensor selected from a group consisting of a gyroscope, magnetometer, barometer, pressure sensor, and accelerometer.
- Example 27 may include the mobile device of Example 25, wherein the sensor-based information can at least partially define coordinates of a geo-pane.
- Example 28 may include at least one computer readable storage medium comprising a set of instructions which, when executed by a mobile device, cause the mobile device to determine the location of the mobile device, determine an orientation of the mobile device, and request the delivery of content to the mobile device based at least partly on the orientation of the device and its location.
- Example 29 may include the at least one computer readable storage medium of Example 28, wherein the mobile device comprises sensors.
- Example 30 may include a method of providing content to a mobile device, comprising obtaining spatial coordinates and orientation of a device, said orientation defining a gaze direction, determining whether the gaze direction points to a geo-pane, and providing content to the device that is relevant to the geo-pane.
- Example 31 may include the method of Example 30, wherein the spatial coordinates are obtained through a combination of geo-location systems, including global positioning system coordinates.
- Example 32 may include the method of Examples 30 or 31, further including providing for registration of a geo-pane, and associating content with the geo-pane.
- Example 33 may include the method of Example 30, further including establishing a rule set to determine whether content is to be made available.
- Example 34 may include the method of Example 33, wherein the rules comprise a set of permissions.
- Example 35 may include the method of Example 30, wherein the geo-pane is mobile.
- Example 36 may include the method of Examples 30 or 35, wherein the content that is associated with a geo-pane varies with the orientation of a direction of gaze of a user.
- Example 37 may include the method of Example 30, wherein the geo-fence is mobile.
- Example 38 may include the method of any of Examples 30, 35, or 37, wherein the geo-panes are utilized in game play.
- Example 39 may include the method of Example 38, wherein the game play includes using geo-panes associated with the layout of a city.
- Example 40 may include a mobile device comprising one or more sensors, means for using sensor-based information from the one or more sensors to determine the location of the mobile device, means for using information from the one or more sensors to determine an orientation of the mobile device, and means for requesting the delivery of content to the mobile device based at least partly on the orientation of the device and its location.
- Example 41 may include the mobile device of Example 40, further comprising means for controlling delivery of content.
- Example 42 may include a system for providing location specific content, comprising a device for mapping the boundaries of a region, memory for receiving numerical information defining the boundaries of the region, content associated with the region, a location aware device having means to determine a direction of orientation of the location aware device and its location, and means to relate the orientation and the location of the location aware device to said content.
- Example 43 may include the system of Example 42, further comprising access rules.
- Example 44 may include a system to provide content to a device, comprising means for locating the device in space, means for determining the orientation of the device, means for storing content and relating it to a location in space, and means for providing content to the device in dependence upon its location and orientation.
- Example 45 may include the system of Example 44, comprising means for controlling access to content.
- Example 46 may include a method of providing content, comprising obtaining the Global Positioning System (GPS) coordinates of a device, determining a device orientation from at least one of accelerometer, magnetometer, and gyroscope data obtained from the device, said device orientation defining a gaze direction, accessing a database containing the GPS coordinates of at least one geo-pane and rules for permitting access to data associated with the geo-pane, determining whether the gaze direction points to a geo-pane, and providing content to the device that is relevant to the geo-pane provided the rules are met.
- Example 47 may include the method of Example 46, wherein the rules include limitations on the distance between the geo-pane and the device.
- Example 48 may include the method of Example 46, further comprising game play.
- Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), logic gates, registers, semiconductor devices, chips, microchips, chipsets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.
- Arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within purview of one skilled in the art. The descriptions provided are to be regarded as illustrative instead of limiting.
- Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments may be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.
Claims (1)
1. A system to provide content to a device comprising:
a gaze orientation module to determine a gaze direction based on sensor-based information; and
a geo-pane identification module to identify regions of space and associate content with said regions of space based on the gaze direction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/184,712 US20170048667A1 (en) | 2014-03-28 | 2016-06-16 | Gaze-directed content delivery |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/229,561 US9398408B2 (en) | 2014-03-28 | 2014-03-28 | Gaze-directed content delivery |
US15/184,712 US20170048667A1 (en) | 2014-03-28 | 2016-06-16 | Gaze-directed content delivery |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/229,561 Continuation US9398408B2 (en) | 2014-03-28 | 2014-03-28 | Gaze-directed content delivery |
Publications (1)
Publication Number | Publication Date |
---|---|
US20170048667A1 true US20170048667A1 (en) | 2017-02-16 |
Family
ID=54192317
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/229,561 Expired - Fee Related US9398408B2 (en) | 2014-03-28 | 2014-03-28 | Gaze-directed content delivery |
US15/184,712 Abandoned US20170048667A1 (en) | 2014-03-28 | 2016-06-16 | Gaze-directed content delivery |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/229,561 Expired - Fee Related US9398408B2 (en) | 2014-03-28 | 2014-03-28 | Gaze-directed content delivery |
Country Status (1)
Country | Link |
---|---|
US (2) | US9398408B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10311616B2 (en) | 2017-06-30 | 2019-06-04 | Intel Corporation | Methods and apparatus to define augmented content regions for augmented reality systems |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10536799B2 (en) * | 2014-12-11 | 2020-01-14 | Taiwan Semiconductor Manufacturing Co., Ltd. | Intelligent geo-fencing with tracked and fenced objects |
EP3217379A1 (en) | 2016-03-10 | 2017-09-13 | Nokia Technologies Oy | Avatar-enforced spatial boundary condition |
US20180184252A1 (en) * | 2016-12-22 | 2018-06-28 | Yen Hsiang Chew | Technologies for delivering content to a mobile compute device |
US10643485B2 (en) | 2017-03-30 | 2020-05-05 | International Business Machines Corporation | Gaze based classroom notes generator |
CN108535873B (en) * | 2018-04-10 | 2021-03-19 | Yangzhou University | Intelligent VR glasses based on physiological and emotional characteristics |
EP3553629B1 (en) | 2018-04-12 | 2024-04-10 | Nokia Technologies Oy | Rendering a message within volumetric data
JP7424121B2 (en) * | 2020-03-10 | 2024-01-30 | 富士フイルムビジネスイノベーション株式会社 | Information processing device and program |
CN111556328A (en) * | 2020-04-17 | 2020-08-18 | 北京达佳互联信息技术有限公司 | Program acquisition method and device for live broadcast room, electronic equipment and storage medium |
US11832145B2 (en) * | 2021-02-19 | 2023-11-28 | Dumas Holdings LLC | Methods and systems for location-based features using partition mapping |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7096030B2 (en) | 2002-06-28 | 2006-08-22 | Nokia Corporation | System and method for initiating location-dependent applications on mobile devices |
US7917154B2 (en) | 2006-11-01 | 2011-03-29 | Yahoo! Inc. | Determining mobile content for a social network based on location and time |
US8744478B2 (en) | 2008-02-20 | 2014-06-03 | Qualcomm Incorporated | Method and apparatus for executing location dependent application in a mobile handset |
KR101735606B1 (en) | 2010-07-21 | 2017-05-15 | 엘지전자 주식회사 | Mobile terminal and operation control method thereof |
BR112014023539B1 (en) | 2012-03-24 | 2022-10-04 | Intel Corporation | WIRELESS MOBILE DEVICE AND LOCATION-BASED APPLICATION RECOMMENDATIONS GENERATION METHOD |
US9020537B2 (en) * | 2012-06-28 | 2015-04-28 | Experience Proximity, Inc. | Systems and methods for associating virtual content relative to real-world locales |
US9958939B2 (en) * | 2013-10-31 | 2018-05-01 | Sync-Think, Inc. | System and method for dynamic content delivery based on gaze analytics |
- 2014-03-28: US application US14/229,561, patent US9398408B2, not active (Expired - Fee Related)
- 2016-06-16: US application US15/184,712, publication US20170048667A1, not active (Abandoned)
Also Published As
Publication number | Publication date |
---|---|
US20150281887A1 (en) | 2015-10-01 |
US9398408B2 (en) | 2016-07-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9398408B2 (en) | Gaze-directed content delivery | |
EP3629290B1 (en) | Localization for mobile devices | |
US8745090B2 (en) | System and method for exploring 3D scenes by pointing at a reference object | |
KR102635705B1 (en) | Interfaces for organizing and sharing destination locations | |
US9728007B2 (en) | Mobile device, server arrangement and method for augmented reality applications | |
US11343323B2 (en) | Augmented reality objects registry | |
US11977553B2 (en) | Surfacing augmented reality objects | |
US20150022555A1 (en) | Optimization of Label Placements in Street Level Images | |
KR20230079157A (en) | Augmented Reality Content Creators for Identifying Geolocations | |
KR20230076843A (en) | Augmented reality content creators for browsing destinations | |
KR20230076849A (en) | Augmented reality content creator for destination activities | |
US11985135B2 (en) | Stated age filter | |
US9488489B2 (en) | Personalized mapping with photo tours | |
KR20150077607A (en) | Dinosaur Heritage Experience Service System Using Augmented Reality and Method therefor | |
WO2019003182A1 (en) | System and method for matching a service provider to a service requestor | |
US11812335B2 (en) | Position service to determine relative position to map features | |
KR101132512B1 (en) | System for generating 3D mobile augmented reality based on the user | |
CN110320496B (en) | Indoor positioning method and device | |
US10311616B2 (en) | Methods and apparatus to define augmented content regions for augmented reality systems | |
US11935203B2 (en) | Rotational navigation system in augmented reality environment | |
US20240236106A1 (en) | Stated age filter | |
US9196151B2 (en) | Encoding location-based reminders |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |