WO2014035367A1 - Generating augmented reality exemplars - Google Patents

Generating augmented reality exemplars

Info

Publication number
WO2014035367A1
Authority
WO
WIPO (PCT)
Prior art keywords
clusters
augmentations
properties
augmented reality
rendering
Application number
PCT/US2012/052505
Other languages
French (fr)
Inventor
Mark Malamud
Royce Levien
Original Assignee
Empire Technology Development Llc
Application filed by Empire Technology Development Llc filed Critical Empire Technology Development Llc
Priority to JP2015529763A (JP5980432B2)
Priority to EP12883675.6A (EP2888876A4)
Priority to PCT/US2012/052505 (WO2014035367A1)
Priority to US13/879,594 (US9607436B2)
Priority to KR1020157007902A (KR101780034B1)
Publication of WO2014035367A1
Priority to US15/469,933 (US20170263055A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35: Clustering; Classification
    • G06F16/355: Class or cluster creation or modification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40: Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43: Querying
    • G06F16/438: Presentation of query results
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/20: Scenes; Scene-specific elements in augmented reality scenes

Definitions

  • Augmented reality (AR) focuses on combining real world and computer-generated data, including computer graphics objects blended into real video and images in real time for display to an end-user.
  • The spread of personal electronic devices such as smartphones and accessibility to data networks and services via the Internet and other networks have enabled access and use of an increasing number of AR applications.
  • As AR applications and services become increasingly popular, the number of augmentations available in any given context will skyrocket. These augmentations may be visual, auditory, and haptic, and some augmentations may span different modalities. Whether the augmentations are for a particular place and time, a particular object or collection of objects, or for a person or collection of people, the number of augmentations can overwhelm a user's ability to process them.
  • The user can be overwhelmed when a large number of augmentations are displayed on an augmented reality device, impeding the user's ability to meaningfully and easily review desired augmentations.
  • For example, a user walking through Times Square in New York can be bombarded by several million augmentations from businesses, government organizations, social groups, and end-users (e.g. virtual billboards, restaurant reviews, business placards, artwork, travel directions, messages, graffiti, etc.).
  • Similarly, a user walking through a city park can see tens of thousands of augmented reality avatars jostling for space on the grass.
  • In another example, a user leafing through a copy of Moby Dick may be unable to read a page scribbled over with annotations from thousands of others who have read the book.
  • Systems, methods, and computer-readable media are disclosed for clustering and rendering of augmentations into one or more operational "exemplars" or clusters that represent collections of augmentations.
  • An augmented reality system can receive a context associated with a user or a user's device.
  • The context may include physical and virtual information about the user's environment, such as the user's location, time of day, the user's personal preferences, the augmented reality services to which the user is subscribed, an image or object the user is pointing at or selecting, etc.
  • The system can be associated with the user's device or with a service to which the user is subscribed.
  • The augmentation system can determine and retrieve augmentations based on the context. Further, the augmentation system can automatically group the retrieved augmentations into clusters, determine rendering formats for each cluster, remove the grouped augmentations from previously rendered augmentations, and render the clusters as exemplars to the user.
  • Grouping augmentations into clusters and determining rendering formats can be based on the augmentations and the context.
  • The system can analyze the augmentations and the context and determine a conceptual clustering algorithm.
  • The conceptual clustering algorithm can group the augmentations into clusters and associate the clusters with a concept describing properties of the grouped augmentations.
  • The rendering formats of the clusters can be derived from the associated concepts.
  • The rendering formats can exhibit several aspects of the clusters, such as appearance, behavior, and interactivity of the grouped augmentations. As such, when the clusters are rendered to a user as exemplars, the exemplars can provide descriptive, rich, informative, and meaningful conceptual summaries of the grouped augmentations.
  • For example, instead of displaying ten thousand augmented reality avatars crowded into a city park, the avatars may be grouped into just ten exemplar avatars.
  • Each exemplar avatar can be dressed in a flag of a different nation and can be "standing in" for a much larger set of avatars from the indicated nation.
  • Rather than being overwhelmed with ten thousand avatars, a user may see the ten exemplar avatars and decide to communicate with one of the exemplar avatars.
  • Clustering large numbers of augmentations into smaller sets of exemplars maintains the richness and meaning of an augmented reality environment along contextually or user-determined axes while reducing the sensorial and cognitive load on the user (a minimal sketch of this grouping step follows below).
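To make the grouping step concrete, the following is a minimal Python sketch, not taken from the patent, of collapsing avatar augmentations into one exemplar per cluster. The class names (Avatar, Exemplar), the helper group_into_exemplars, and the choice of nationality as the clustering axis are illustrative assumptions.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Avatar:
    """A single avatar augmentation with a nationality property."""
    user_id: str
    nation: str

@dataclass
class Exemplar:
    """One rendered stand-in for a cluster of augmentations."""
    concept: str   # concept description of the cluster
    members: list  # the grouped augmentations the exemplar represents

def group_into_exemplars(avatars):
    """Group avatars by nation and return one exemplar per cluster."""
    clusters = defaultdict(list)
    for avatar in avatars:
        clusters[avatar.nation].append(avatar)
    # Each exemplar is "dressed in the flag" of its nation and stands in
    # for the full set of avatars from that nation.
    return [Exemplar(concept=f"avatar dressed in the flag of {nation}",
                     members=members)
            for nation, members in clusters.items()]

# Ten thousand avatars from ten nations collapse into ten exemplars.
nations = ["France", "Germany", "USA", "Japan", "Brazil",
           "India", "Kenya", "Italy", "Spain", "Chile"]
avatars = [Avatar(f"user{i}", nation)
           for i, nation in enumerate(nations * 1000)]
exemplars = group_into_exemplars(avatars)
print(len(avatars), "avatars ->", len(exemplars), "exemplars")
```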
  • FIG. 1 depicts a block diagram illustrating an example computing device with which various embodiments of the present disclosure may be implemented.
  • FIG. 2 depicts an example network environment in which various embodiments of the present disclosure may be implemented.
  • FIG. 3 depicts an illustrative embodiment of an augmented reality system.
  • FIG. 4 depicts an example augmented reality display.
  • FIG. 5 depicts example augmentations displayed on a computing device.
  • FIG. 6 depicts an example grouping of augmentations into clusters.
  • FIG. 7 depicts an example rendering format of an exemplar.
  • FIG. 8 depicts an example of an augmented reality view before and after augmentations are clustered and the resulting exemplars are rendered.
  • FIG. 9 depicts an example operational procedure for grouping augmentations and rendering the resulting exemplars.
  • FIG. 1 depicts a block diagram illustrating an example computing device 100 with which various embodiments of the present disclosure may be implemented.
  • In a very basic configuration 102, computing device 100 typically includes one or more processors 104 and a system memory 106.
  • A memory bus 108 may be used for communicating between processor 104 and system memory 106.
  • Depending on the desired configuration, processor 104 may be of any type including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof.
  • Processor 104 may include one or more levels of caching, such as a level one cache 110 and a level two cache 112, a processor core 114, and registers 116.
  • An example processor core 114 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof.
  • An example memory controller 118 may also be used with processor 104, or in some implementations memory controller 118 may be an internal part of processor 104.
  • System memory 106 may be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof.
  • System memory 106 may include an operating system 120, one or more applications 122, and program data 124.
  • Application 122 may include an augmented reality process 126 that is arranged to perform functions as described herein including those described with respect to operations described in FIGs. 3-9.
  • Program data 124 may include augmentation data 128 that may be useful for operation with augmented reality grouping and rendering techniques as is described herein.
  • Application 122 may be arranged to operate with program data 124 on operating system 120 such that augmentations can be grouped into clusters which are then rendered as exemplars using a conceptual format.
  • This described basic configuration 102 is illustrated in FIG. 1 by those components within the inner dashed line.
  • Computing device 100 may have additional features or functionality, and additional interfaces to facilitate communications between basic configuration 102 and any required devices and interfaces.
  • A bus/interface controller 130 may be used to facilitate communications between basic configuration 102 and one or more data storage devices 132.
  • Data storage devices 132 may be removable storage devices 136, non-removable storage devices 138, or a combination thereof.
  • Examples of removable storage and non-removable storage devices include magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD), and tape drives to name a few.
  • Example computer storage media may include volatile and nonvolatile, removable and nonremovable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • System memory 106, removable storage devices 136 and non-removable storage devices 138 are examples of computer storage media.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by computing device 100. Any such computer storage media may be part of computing device 100.
  • Computing device 100 may also include an interface bus 140 for facilitating communication from various interface devices (e.g., output devices 142, peripheral interfaces 144, and communication devices 146) to basic configuration 102 via bus/interface controller 130.
  • Example output devices 142 include a graphics processing unit 148 and an audio processing unit 150, which may be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 152.
  • Example peripheral interfaces 144 include a serial interface controller 154 or a parallel interface controller 156, which may be configured to communicate with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (e.g., printer, scanner, etc.) via one or more I/O ports 158.
  • An example communication device 146 includes a network controller 160, which may be arranged to facilitate communications with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
  • The network communication link may be one example of communication media.
  • Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media.
  • A "modulated data signal" may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • Communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) and other wireless media.
  • The term computer readable media as used herein may include both storage media and communication media.
  • Computing device 100 may be implemented as a portion of a small-form factor portable (or mobile) electronic device such as a cell phone, a personal data assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application specific device, or a hybrid device that includes any of the above functions.
  • Computing device 100 may also be implemented as a personal computer including both laptop computer and non-laptop computer configurations.
  • FIG. 2 depicts an example network environment in which various embodiments of the present disclosure may be implemented.
  • FIG. 2 illustrates an example computing arrangement 200 comprised of computing devices 210 each of which may be adapted to provide augmented reality applications as described herein.
  • The computing devices 210 may comprise, for example, any of a desktop computer 210a, a laptop computer 210b, a phone 210c, a tablet computing device 210d, a personal digital assistant (PDA) 210e, and a mobile phone 210f, each of which may be adapted to process and display augmented reality data to a user.
  • Each of the devices 210 may be adapted to communicate using a communications network 250.
  • The communications network 250 may be any type of network that is suitable for providing communications between the computing devices 210 and any servers 220 accessed by the computing devices 210.
  • The communications network 250 may comprise a combination of discrete networks which may use different technologies.
  • For example, the communications network 250 may comprise local area networks (LANs), wide area networks (WANs), cellular networks, or combinations thereof.
  • The communications network 250 may be wireless, wireline, or a combination thereof.
  • The communications network 250 may comprise the Internet and may additionally comprise any networks adapted to communicate with the Internet.
  • The communications network 250 may also comprise a wireless telephony network that is adapted to communicate video, audio, and other data between the computing devices 210 and the servers 220.
  • Augmentation data can be processed by an augmented reality device, such as any of the computing devices 210.
  • The augmented reality device can be coupled to an analysis engine or an augmentation service hosted on a computing device, such as the server 220.
  • The augmented reality device 210 may be directed, for example, by a user to activate an augmented reality application.
  • The augmented reality device 210 may determine or be associated with a user's context, which may include information associated with physical and virtual environments of the user, such as the user's location, time of day, the user's personal preferences, the augmented reality services to which the user is subscribed, an image or object the user is pointing at or selecting, etc.
  • The augmented reality device 210 can communicate with the server 220 over the communications network 250.
  • The server 220 can comprise a repository of augmentation data and can be adapted to provide augmentation services.
  • For example, the server 220 can include a library of clustering and rendering models and algorithms adapted to perform real-time clustering and rendering of augmentations.
  • The augmented reality device 210 can query the server 220 to determine and receive augmentations based on the user's context.
  • The server 220 can transmit augmentations and corresponding rendering formats to the augmented reality device 210, which can render the received augmentations to the user.
  • Alternatively, the server 220 can render the augmentations and transmit the rendered augmentations to the augmented reality device 210.
  • Augmentation data can also be stored on the augmented reality device 210.
  • In that case, grouping and rendering the augmentation data can be processed locally on the augmented reality device 210, eliminating the need for the augmented reality device to query the server 220.
  • Additionally, the augmented reality device 210 can be in communication with another computing device 210 to exchange augmentation data and services.
  • For example, the tablet 210d can be adapted to provide an interface to a user and to provide the user's context to the desktop 210a.
  • In turn, the desktop 210a can be adapted to provide augmentation services to the user via the interface tablet 210d.
  • FIG. 3 depicts an illustrative embodiment of an augmented reality system 300.
  • A scene 310 may be viewed and captured by the augmented reality device 210.
  • For example, the augmented reality device 210 can integrate an image or a video capture device.
  • Additionally or alternatively, the augmented reality device 210 can be adapted to retrieve an image of the scene 310.
  • The image can be retrieved from data stored locally on the augmented reality device 210 or externally on another device 210 or the server 220 in communication with the augmented reality device 210.
  • The scene 310 may be associated with a set of scene coordinates (x, y, z). Based on the image of the scene 310 and/or the user's context, augmentations 320 may be determined and retrieved 315.
  • The augmentations 320 can comprise virtual representations of the scene 310 and of objects or persons associated with the scene 310.
  • The augmentations 320 may comprise other images, metadata, information, or descriptions related to the scene 310.
  • The augmentations 320 may also be associated with a set of coordinates (x, y, z).
  • The image of the scene 310 can be merged 325 with the augmentations 320 to generate a virtual image of the scene 310.
  • The virtual image can be rendered 330 and displayed to the user.
  • The generation of the virtual image may be performed with a standard computer graphics system internal or external to the augmented reality device 210.
  • The graphics system may align the image of the scene 310 and the augmentations 320 based on the associated coordinates (x, y, z). Further, the graphics system may use real world information about the imaging of the scene 310 so that the virtual image can be correctly rendered (a simple alignment sketch follows below).
  • The determination 315 of the augmentations 320, the merging 325 and aligning of the image and the augmentations 320 to create the virtual image, and the rendering 330 of the virtual image can be accomplished locally on the augmented reality device 210, externally on another device 210 or the server 220 in communication with the augmented reality device 210, or can be distributed between the augmented reality device 210, the other devices 210, and the server 220.
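A minimal sketch of the alignment step: each augmentation carries scene coordinates (x, y, z), and a projection maps those coordinates into the 2D image so the overlay lands on the right object. The pinhole camera model, parameter values, and names here are assumptions for illustration; the patent only requires that the graphics system align the two coordinate sets.

```python
def project_to_screen(point, focal_length=800.0, cx=640.0, cy=360.0):
    """Project a scene point (x, y, z) to pixel coordinates using a
    simple pinhole camera model (assumed for illustration)."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera; nothing to draw
    u = cx + focal_length * x / z
    v = cy + focal_length * y / z
    return (u, v)

# Each augmentation is anchored at scene coordinates and drawn at the
# corresponding pixel location in the merged virtual image.
augmentations = [
    {"label": "title 410", "xyz": (0.5, -0.2, 4.0)},
    {"label": "description 420", "xyz": (0.5, 0.1, 4.0)},
]
for aug in augmentations:
    uv = project_to_screen(aug["xyz"])
    if uv is not None:
        print(f"draw {aug['label']!r} at pixel {uv}")
```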
  • FIG. 4 depicts an example augmented reality display. Augmentations of the scene 310 of FIG. 3 can be displayed on the augmented reality device 210 of FIG. 2.
  • The augmentations can, for example, comprise a title 410 of an object contained in the scene and a text description 420 about the object.
  • The augmentations may be overlaid or merged with the image such that the real image and the augmentations may be combined in a single virtual image and presented to the user.
  • FIG. 5 depicts example augmentations displayed on a computing device.
  • FIG. 5 illustrates a virtual image 510 displayed on the augmented reality device 210 of FIG. 2.
  • The virtual image 510 can comprise an image of a scene, such as a panoramic view of Paris or a portion thereof, merged with augmentations 520a-d describing objects or monuments in the scene.
  • The augmentations 520a-d can comprise descriptive titles and comments created by tourists about the objects.
  • A comment can comprise a rating in the form of a one-to-five-star scale and a feedback text field.
  • A user 530 can select and expand any number of the augmentations 520a-d.
  • For example, the user 530 can shake the augmented reality device 210, mouse over, single-click, double-tap, or motion over the augmentations 520a-520d to retrieve additional information about the objects contained in the augmentations 520a-d.
  • Additional information can be retrieved from data stored locally or externally to the augmented reality device 210.
  • The additional information may comprise images taken by tourists, a list of nearby attractions, a list of restaurants with menus, prices, advertisements, etc.
  • In a world where augmented reality has become commonplace, it would be useful to have a way to organize this increasing sensorial and cognitive data.
  • As augmented reality applications and services become increasingly popular, the number of augmentations available in any given context will skyrocket. These augmentations may be visual, auditory, and haptic, and some augmentations may span modalities. Whether the augmentations are for a particular place and time, a particular object or collection of objects, or for a person or collection of people, the number of augmentations can overwhelm a user's ability to process them.
  • FIGs. 6-9 present embodiments of alternative or additional techniques to filtering that can maintain the richness and meaning of an augmented reality environment along contextually or user-determined axes while reducing the sensorial and cognitive load on the user.
  • These techniques may comprise automatic grouping of augmentations into clusters and rendering of the clusters based on conceptual formats representative of the augmentations grouped therein.
  • FIG. 6 depicts an example grouping of augmentations into clusters.
  • An illustrative example of grouping augmentations into clusters includes a user 610 walking through a park, such as Central Park in New York City, while using the augmented reality device 210.
  • The augmented reality device 210 can render or display a virtual image 650, such as a map of the park overlaid or merged with augmentations.
  • The user 610 can use the augmented reality device 210 to navigate around the virtual image 650, which in this example represents the augmented reality park.
  • Real people can also be walking through the park and can have augmentations in the form of avatars.
  • Virtual visitor augmentations 630a-n in the virtual image 650 can be associated with virtual people.
  • For example, people from around the world having augmentations in the form of avatars can also be visiting the park virtually at the same time as the user 610.
  • The virtual image 650 may comprise many other augmentations and types thereof, which are not represented in FIG. 6 for the sake of clarity.
  • Some or all of the augmentations 620-630 can be grouped into clusters, and the clusters can be rendered as exemplars 622, 632, 640.
  • The augmentations 620-630 can be initially displayed on the augmented reality device 210.
  • The augmented reality device 210 can, locally or through another computing device, group the real visitor augmentations 620 into a real visitor cluster and the virtual visitor augmentations 630 into a virtual visitor cluster. In turn, the two generated clusters can be grouped into a higher layer visitor cluster.
  • The augmented reality device 210 can display the generated clusters as real visitor exemplar 622, virtual visitor exemplar 632, and visitor exemplar 640. Further, the augmented reality device 210 can remove the grouped augmentations from the initially displayed augmented reality output, i.e. the output comprising all the real visitor augmentations 620 and the virtual visitor augmentations 630 previously displayed, and display the ungrouped augmentations alongside the exemplars 622, 632, 640.
  • As such, the augmentations 620-630 can be automatically grouped and presented to the user 610 in a simplified augmented reality presentation merged in the virtual image 650.
  • The user 610 can in turn access, communicate with, and expand the exemplars 622, 632, 640.
  • The clustering can be multi-layered and can comprise classes of clusters with hierarchical structures.
  • For example, the visitor cluster can comprise the real visitor cluster and the virtual visitor cluster.
  • The real visitor cluster can group the real visitor augmentations 620a-m, and the virtual visitor cluster can group the virtual visitor augmentations 630a-n.
  • FIG. 6 only depicts two clustering layers (i.e., the visitor cluster as a first layer, and the real and virtual visitor clusters as a second layer).
  • However, embodiments are not limited to the exemplified layers. Additional or different clustering layers and sub-layers can be defined based on the augmentations, the user's context, and other factors.
  • Grouping augmentations into clusters can comprise analyzing properties of the augmentations available in a given user's context, generating one or more classes of clusters with possible hierarchical category structures based on the analyzed properties, associating each class with a concept description, and using the concept description to group the augmentations into the clusters within the appropriate classes.
  • Clustering algorithms can be used to generate the clusters and group the augmentations therein.
  • Example clustering algorithms can include conceptual clustering algorithms, such as COBWEB and ITERATE.
  • The conceptual clustering algorithms can comprise a machine learning paradigm for unsupervised classification of data that can be adapted to generate concept descriptions and hierarchical category structures associated with classes.
  • The conceptual clustering algorithms can consider properties exhibited by or inherent to the augmentations and other information available to the algorithm, such as the user's context, to generate concepts, classes, and clusters.
  • Other clustering algorithms can include the MICROSOFT CLUSTERING ALGORITHM available with SQL SERVER 2008 and the Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) algorithm.
  • The clustering algorithms can be stored in a library. Based on the user's context and the augmentations, one or more clustering algorithms can be retrieved from the library and applied to the augmentations (a sketch of such a library follows below).
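One plausible shape for such a library is sketched below in Python: clustering algorithms are registered under categories (here "avatars" and "activities", mirroring the examples that follow) and retrieved by category. The registry design, function names, and augmentation fields are illustrative assumptions; the COBWEB and ITERATE algorithms themselves are not reproduced.

```python
# A hypothetical registry of clustering algorithms, keyed by the
# category of augmentations each algorithm applies to.
CLUSTERING_LIBRARY = {"avatars": [], "activities": []}

def register(category):
    """Decorator that files a clustering algorithm under a category."""
    def wrapper(fn):
        CLUSTERING_LIBRARY[category].append(fn)
        return fn
    return wrapper

@register("avatars")
def language_clustering(augmentations):
    """Group avatar augmentations by the language they speak."""
    clusters = {}
    for aug in augmentations:
        clusters.setdefault(aug["language"], []).append(aug)
    return clusters

@register("activities")
def venue_clustering(augmentations):
    """Group venue augmentations by cuisine (one of many possible axes)."""
    clusters = {}
    for aug in augmentations:
        clusters.setdefault(aug["cuisine"], []).append(aug)
    return clusters

def retrieve_algorithms(category):
    """Search the library and return only the algorithms in the
    determined category; other algorithms need not be retrieved."""
    return CLUSTERING_LIBRARY.get(category, [])

print([fn.__name__ for fn in retrieve_algorithms("avatars")])
```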
  • For example, the library can comprise an avatar clustering algorithm adapted to group avatar-like augmentations based on properties of the avatars, such as their language skills.
  • The library can also comprise a facial recognition clustering algorithm that can group avatars by analyzing common facial traits found in images of people associated with the avatars.
  • The facial recognition clustering algorithm can further generate a description of the analyzed common facial traits.
  • Additionally, the library can comprise a third conceptual clustering algorithm for grouping augmentations that represent venues by analyzing their locations, distance of a user to the locations, time of the day, nearby attractions, ratings, recommendations, feedback from other users, facts, menus, prices, activities, cuisines, required attire, etc.
  • The stored algorithms can be categorized.
  • For example, the avatar and facial recognition algorithms can be categorized as applicable to grouping avatars, whereas the venue clustering algorithm can be categorized as applicable to grouping activities.
  • Embodiments are not limited to the exemplified conceptual clustering algorithms. Additional algorithms and categories can be defined and stored in the library.
  • The augmentations and the user's context can be used to determine an appropriate category of clustering algorithms to be applied to the augmentations.
  • The library can be searched, and the appropriate clustering algorithms can be retrieved.
  • For example, a determined category can comprise avatar grouping algorithms.
  • In that case, the library can be searched for an algorithm within that category, and the avatar and facial recognition clustering algorithms can be retrieved. Other clustering algorithms need not be retrieved.
  • The retrieved clustering algorithms can be applied to analyze properties of the augmentations.
  • The analysis can comprise comparing the properties to criteria from the user's context. Based on the properties, the user's context, and the comparison, the clustering algorithms can generate classes of clusters with hierarchical structures and concept descriptions associated with the generated classes.
  • The concept descriptions can be used to group the augmentations into the clusters within the appropriate classes.
  • The grouping can comprise adding an augmentation to a generated cluster based on a comparison between the properties of the augmentation and the concept description of the class associated with the generated cluster.
  • For example, the avatar clustering algorithm can be applied to generate two classes of clusters.
  • The corresponding concept descriptions can be avatars that can speak French and avatars that can speak only other languages.
  • The facial recognition clustering algorithm can then be applied to create two other classes of clusters within the class of French-speaking avatars.
  • The additional concept descriptions can be avatars that can speak French and have a moustache and avatars that can speak French and do not have a moustache.
  • As a result, a hierarchical structure of cluster classes associated with concept descriptions can be created.
  • A first hierarchy can comprise a class that groups avatars based on language skills, and a second hierarchy can comprise a class that groups avatars based on facial hair traits.
  • A total of three clusters can be created: one for avatars that cannot speak French, one for avatars that can speak French but do not have a moustache, and one for avatars that can speak French and have a moustache.
  • The augmentations can be grouped into the clusters based on matches, such as language skills and facial hair traits, between the augmentation properties and the concept descriptions.
  • The concept descriptions can be further used to generate rendering formats as described herein below.
  • For example, the language skills concept description can be used to render a cluster as an avatar holding a French flag, and the facial hair concept description can be used to add a moustache to the rendered avatar. A sketch of this two-level grouping follows below.
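The French-flag example above can be sketched as two successive concept splits. This is a hedged Python illustration rather than an actual COBWEB or ITERATE run; the helper and field names are hypothetical.

```python
def split(augmentations, predicate, concept_true, concept_false):
    """Split augmentations into two clusters, each labeled with the
    concept description that its members satisfy."""
    yes = [a for a in augmentations if predicate(a)]
    no = [a for a in augmentations if not predicate(a)]
    return {concept_true: yes, concept_false: no}

avatars = [
    {"name": "a1", "languages": {"French"}, "moustache": True},
    {"name": "a2", "languages": {"French"}, "moustache": False},
    {"name": "a3", "languages": {"German"}, "moustache": True},
]

# First hierarchy: language skills (the avatar clustering algorithm).
clusters = split(avatars,
                 lambda a: "French" in a["languages"],
                 "speaks French", "speaks only other languages")

# Second hierarchy: facial hair traits (the facial recognition
# algorithm), applied only within the class of French-speaking avatars.
french = clusters.pop("speaks French")
clusters.update(split(french,
                      lambda a: a["moustache"],
                      "speaks French, has a moustache",
                      "speaks French, no moustache"))

# Three clusters in total, as in the example above; each cluster's
# concept description later drives its rendering format (a French
# flag, with or without a moustache).
for concept, members in clusters.items():
    print(concept, "->", [m["name"] for m in members])
```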
  • FIG. 7 depicts an example rendering format of an exemplar.
  • Augmentation data can comprise augmentations and clusters of augmentations.
  • The augmentation data can be rendered as exemplars.
  • An augmented reality device, such as any of the devices 210 of FIG. 2, can be used to render the augmentation data.
  • For example, the augmented reality device can comprise graphics processing units or a computing system adapted to process and render graphics data.
  • Alternatively, an external computing device, such as the server 220 of FIG. 2, can receive, process, and render the augmentation data and transmit the rendered data to the augmented reality device for display to a user 730.
  • In another embodiment, the external device can send rendering instructions, format, or information to the augmented reality device, which in turn can render the augmentation data based on the received information.
  • An exemplar can be a conceptual representation used to render a cluster.
  • The conceptual representation can comprise sensorial representations, such as visual, auditory, and haptic representations, of a concept description associated with the cluster.
  • The concept description can reflect properties of augmentations grouped into the cluster.
  • The exemplar can be created ad hoc in response to properties of the augmentation data and can comprise semantics and presentation rules of such data.
  • Additionally, each exemplar can be represented in relation to other exemplars.
  • The exemplar can provide a perceptual summary of the cluster's properties and content.
  • As such, the exemplar can provide a means to deal with an overabundance of augmentations through intelligent indirection, reducing a user's sensorial and cognitive overload while maintaining the richness and meaning of the original augmentations.
  • For example, classes of clusters can comprise a class for virtual billboards, a class for restaurant reviews, and a class for avatars.
  • Each of the classes can provide its own rules for presentation.
  • Within the class of virtual billboards, the billboards can be grouped into a business-related cluster (e.g. an advertisement for a sale at a nearby sporting goods store) and a public service cluster (e.g. a high crime area warning).
  • Each exemplar can express not only distinct characteristics (e.g. appearance, behavior, and interactivity) of the cluster, but also common characteristics shared with other clusters within the same class.
  • For example, the business-related cluster can be rendered as an exemplar comprising a 3D model of a billboard, which can be a common characteristic of the clusters within the class of virtual billboards.
  • In addition, the exemplar can comprise a "For Sale" sign across the 3D model, which can be a distinct characteristic of the business-related cluster.
  • Rendering the clusters can comprise determining renderers or rendering formats for the clusters and rendering the clusters as exemplars based on the determined formats. Similar rendering techniques can also be used to render the augmentations as exemplars.
  • The exemplar can comprise a 2D or 3D object providing a conceptual representation of the cluster.
  • For example, the exemplar can comprise a 3D avatar customized to exhibit properties of the augmentation.
  • The object can comprise a male avatar having gray hair, wearing a suit, and carrying a briefcase.
  • For clusters of visitors grouped by nationality, the corresponding exemplars can be avatars holding flags of the different nations.
  • Properties of the augmentations can be reflected at the different cluster hierarchical layers. For example, a top-level cluster of park visitors can comprise lower-level clusters of the park visitors grouped by nationalities.
  • The top-level cluster can be rendered as an avatar sitting on a park bench next to a globe of the world, whereas the lower-level clusters can be rendered as avatars carrying flags of the different nations.
  • Determining a renderer or a rendering format for a cluster can be accomplished without input from an end-user. The format can be automatically determined based on the augmentation data, the cluster, other clusters, and the user's context.
  • For example, the format can be derived from a concept description of the class of clusters.
  • As an illustration, augmentations of visitors to a park can be represented as avatars.
  • The augmentations can be grouped in a multi-tier cluster hierarchy based on the activities of the avatars.
  • First tier clusters can be associated with the concept of an international baseball game between the avatars.
  • Second tier clusters can be associated with the concept of baseball teams from different nations.
  • Third tier clusters can be associated with the concept of active and substitute baseball players. Aspects of the concept descriptions can be used in determining the formats.
  • For example, the first tier cluster can be rendered as an avatar wearing a baseball hat and holding a trophy.
  • The second tier clusters can be represented by avatars wearing national jerseys.
  • The third tier clusters can be displayed as avatars carrying a bat or sitting on a bench (a sketch of deriving formats from tiered concepts follows below).
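The following sketch shows one way a rendering format could be derived from tiered concept descriptions without user input: format attributes accumulate down the hierarchy, so a lower-tier cluster inherits aspects of its parents' concepts. The mapping table and function names are illustrative assumptions, not the patent's method.

```python
# Hypothetical mapping from concept descriptions to format attributes.
FORMAT_RULES = {
    "international baseball game": {"hat": "baseball hat"},
    "national team":               {"jersey": "national jersey"},
    "active player":               {"prop": "bat"},
    "substitute player":           {"pose": "sitting on a bench"},
}

def derive_format(concept_path):
    """Accumulate format attributes along the tier hierarchy, from the
    top-level concept down to the cluster's own concept."""
    fmt = {}
    for concept in concept_path:
        fmt.update(FORMAT_RULES.get(concept, {}))
    return fmt

# A third-tier cluster: game -> national team -> substitute players.
print(derive_format(["international baseball game",
                     "national team",
                     "substitute player"]))
# {'hat': 'baseball hat', 'jersey': 'national jersey',
#  'pose': 'sitting on a bench'}
```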
  • A similar conceptual analysis can be applied to the grouped augmentations to derive a cluster concept based on common properties of the grouped augmentations.
  • The common properties can comprise a certain range of shared characteristics.
  • The format can be derived from the cluster concept and can be adapted to exhibit several aspects, such as behavior, appearance, and interactivity, of the cluster or the grouped augmentations.
  • For example, the cluster concept can be international avatars.
  • In that case, a cluster grouping visitors from Germany can be formatted as an avatar holding a German flag.
  • In other words, an analysis of the grouped augmentations (e.g. the German visitors) can be used to derive the format of the cluster (e.g. the avatar holding a German flag).
  • The format of a cluster can also be determined relative to the formats of other clusters. Multiple clusters can be comparatively analyzed. For example, ratios of cluster characteristics (e.g. number of augmentations, data size in bytes, etc.) may be calculated to derive relative cluster sizes. The relative cluster sizes can be used to update the formats of the compared clusters.
  • To continue the previous example, the USA and German clusters can be compared. The comparison may reveal that the USA cluster has twice as many visitors as the German cluster.
  • Accordingly, the avatar holding the USA flag can be displayed on a graphical user interface with a size twice as large as the avatar holding the German flag, as in the sketch below.
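The relative-size comparison reduces to simple ratios. A minimal sketch, assuming display size scales linearly with member count (the scaling rule and names are assumptions):

```python
def relative_display_sizes(clusters, base_size=1.0):
    """Scale each cluster's exemplar relative to the smallest cluster,
    using member count as the compared characteristic."""
    smallest = min(len(members) for members in clusters.values())
    return {name: base_size * len(members) / smallest
            for name, members in clusters.items()}

clusters = {"USA": ["avatar"] * 200, "Germany": ["avatar"] * 100}
print(relative_display_sizes(clusters))
# {'USA': 2.0, 'Germany': 1.0} -> the USA avatar is drawn twice as large.
```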
  • The format can be refined by also considering the user's context.
  • The user's context can be retrieved and analyzed to determine properties that can be applied to the format. For example, avatars holding flags can represent clusters of park visitors from different countries. The avatars can be further updated to reflect some or all aspects of the user's context.
  • For instance, a list of the user's friends may be retrieved. As such, a cluster comprising an augmentation associated with a friend from the list can be updated.
  • The update can comprise adding the words "A Friend Is Here" to the avatar.
  • More generally, the rendering format of a cluster as an exemplar can be adjusted based on a wide range of factors and goals.
  • For example, the appearance of an exemplar may be adjusted to provide an aggregation of collapsed classes of attributes.
  • The appearance may comprise look, sound, and feel attributes exhibited by the cluster or the augmentation data grouped therein.
  • For example, a format can use the height of the exemplar to indicate the number of avatars grouped into the cluster. Further, the format can use clothing and accessories of the exemplar to represent a nationality.
  • As shown in FIG. 7, a French cluster 710 representing a small number of French tourists may be formatted to appear short, wearing a beret, and carrying a baguette, whereas an American cluster 720 representing a large number of American businessmen may be formatted to appear tall, slim, and dressed in a suit.
  • The augmentation data may also exhibit behaviors responsive to the aggregated classes of attributes.
  • The behaviors can be analyzed to derive the format. For example, if most of the American avatars in the cluster are very active (e.g. moving around the park), the American cluster 722 may be updated or formatted to represent a jogging avatar. Similarly, if most of the French avatars were relaxing in the park (e.g. having a conversation), the French cluster 712 may be updated or formatted to represent a more sedentary avatar, such as one sitting on a park bench.
  • The augmentation data may further exhibit interactivity or interactions between the augmentations, the clusters, and/or the user 730.
  • The interactions can be analyzed to derive or refine the formats.
  • For example, the user 730 can be fluent in English and eager to learn French.
  • An interaction between the user 730 and the American cluster 722 can comprise an exchange of text messages in English with all the American avatars comprised in the American cluster 722.
  • An interaction between the user 730 and a French avatar contained in the French cluster 712 can comprise launching a translation application or program and translating text messages exchanged between the user 730 and the French avatar.
  • A user can access and expand a cluster and interact with one or more augmentations grouped therein.
  • For example, a user interested in playing a game of chess can access a cluster representing avatars with the same interest, select one of the avatars, and start a chess game with the selected avatar.
  • Likewise, a user can interact with the cluster or the class of clusters.
  • Interactions received from the grouped augmentations contained in the cluster can be presented to the user in a raw format or can be rendered according to the clustering and rendering techniques described herein.
  • For example, a user may be interested in improving his or her cooking skills and may interact with a cluster representing cooks from around the world. The user may ask the cluster for a recipe, and some of the cooks may respond to the request. The responses can be presented to the user in the received order. Additionally or alternatively, the responses can be grouped and rendered as exemplars representing clusters of cooks known to the user, special diet recipes, and recipes provided by restaurants.
  • The determination of the rendering format can be independent of input from the user and can instead be based on the augmentation data.
  • The format can also take advantage of properties derived from the user's context. In other words, the user need not specify the format of the augmentations or clusters because the exemplars can be derived automatically.
  • FIG. 8 depicts an example of an augmented reality view before and after augmentations are clustered and the resulting exemplars are rendered.
  • For example, a smartphone 820, such as a GOOGLE ANDROID phone, a MICROSOFT WINDOWS phone, or an APPLE IPHONE, connected to the Internet can execute an application for an augmented reality exemplar as described herein.
  • A user 850 may be standing on a busy street corner in Washington D.C. around lunchtime, looking for something to eat, surrounded by restaurants but not knowing which one to choose.
  • The user 850 may launch a restaurant finder application on the smartphone 820 and pan up and down the street using the smartphone 820.
  • In response, the user 850 can receive and view an augmented reality version 800 of the street.
  • Without exemplars, the user 850 may see a large number, maybe in the thousands, of restaurant review annotations 802A-N over the entire screen.
  • An annotation 802 may include other people's ratings and comments about each restaurant.
  • The large number of annotations 802A-N and their overlapping presentations can prevent the user 850 from even seeing the restaurants and reading many ratings and comments.
  • Instead, the user 850 can run the augmented reality exemplar application on the smartphone 820, and in a few seconds, all the annotations 802A-N can start to cluster together into exemplars 812A-M.
  • The total number "M" of exemplars 812A-M can be substantially smaller than the total number "N" of annotations 802A-N.
  • Also, the representation of an exemplar 812 can be much simpler than the representation of an annotation 802.
  • An exemplar 812 for a restaurant review can look like a simple one-to-five-star flag with color coding. As such, the street scene can be easier to understand immediately. Using this type of exemplar, the restaurants can have black-colored ratings floating above them.
  • The augmented reality exemplar application may display an exemplar 812A with the maximum rating of five stars over the restaurant nearest to the user 850.
  • However, the black stars of the exemplar 812A may not be quite as dark as the black stars rendered above some of the other nearby restaurants.
  • That is, the stars of the exemplar 812A may be displayed with a quite pale color.
  • The color shade can reflect the number of augmentations clustered together to generate the exemplar 812A.
  • Darker displayed stars can indicate a larger number of people rating a restaurant.
  • Thus, the exemplar 812A with the pale color can indicate that the nearest restaurant with the five stars may not have had many people rate it.
  • In that case, the user 850 may not think that he or she should trust a rating, even a five-star rating, from an establishment with so few reviews.
  • The color itself can provide additional cognitive information to the user 850.
  • For example, black-colored stars can be ratings by people that the user 850 does not know, whereas brightly-colored stars can be ratings by friends of the user 850.
  • The augmented reality exemplar application may display all black-colored stars for the nearest restaurant and brightly-colored stars above some of the other restaurants nearby. These more colorful stars can indicate ratings from friends of the user 850. So, the user 850 may turn the smartphone 820 away from the black five-star restaurant to find a place that has the most colorful stars.
  • The augmented reality exemplar application may then display to the user 850 a bright green five-star restaurant located across the street. The user 850 may perceive that a lot of his or her friends positively enjoy the located restaurant and may decide to pocket the smartphone and rush across the street for lunch. A sketch of this star-flag coloring follows below.
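Read as two independent visual channels, the star-flag coding above might be derived as follows: darkness encodes the number of clustered reviews, and hue encodes whether friends contributed. The scales, field names, and thresholds are assumptions for illustration.

```python
def exemplar_star_style(ratings, friend_ids, max_reviews=500):
    """Derive a star-flag style for a restaurant review exemplar.

    Darkness grows with the number of clustered ratings; the hue is
    bright when friends contributed, black otherwise.
    """
    stars = round(sum(r["stars"] for r in ratings) / len(ratings))
    darkness = min(1.0, len(ratings) / max_reviews)  # 0 = pale, 1 = dark
    from_friends = any(r["user"] in friend_ids for r in ratings)
    hue = "bright green" if from_friends else "black"
    return {"stars": stars, "darkness": darkness, "hue": hue}

ratings = [{"user": "u1", "stars": 5}, {"user": "u2", "stars": 5}]
print(exemplar_star_style(ratings, friend_ids={"u9"}))
# Five stars but very pale: only two reviews, none from friends.
```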
  • FIG. 9 depicts an example operational procedure for grouping augmentations and rendering the resulting exemplars, including operations 900, 902, 904, 906, 908, 910, 912, 914, 916, and 918.
  • Operation 900 starts the operational procedure, where an augmented reality application or service may be activated on an augmented reality device, such as any of the devices 210 of FIG. 2. Operation 900 may be followed by operation 902. Operation 902 (Receive a user's context) illustrates receiving a context associated with the user or the user's device. Operation 902 may be followed by operation 904.
  • Operation 904 (Determine augmentations) illustrates determining augmentations based on the received user's context.
  • For example, the augmented reality device can access or connect to a repository of augmentations and use the user's context to retrieve the appropriate augmentations.
  • The augmented reality device may or may not render these augmentations to the user and may proceed to generate clusters and exemplars comprising some or all of the retrieved augmentations.
  • Operation 904 may be followed by operation 906.
  • Operation 906 (Analyze the retrieved augmentations) illustrates analyzing the retrieved augmentations. Based on the received user's context, the retrieved augmentations, and other factors, one or more conceptual clustering algorithms can be retrieved from a library and applied to some or all of the retrieved augmentations. Operation 906 may be followed by operation 908. Operation 908 (Generate one or more classes of clusters) illustrates generating one or more classes of clusters. The classes of clusters can comprise hierarchical category structures based on the analyzed properties of the augmentations. Operation 908 may be followed by operation 910. Operation 910 (Associate clusters with concept descriptions) illustrates associating each class with a concept description. The concept descriptions may be derived from the analyzed properties. Operation 910 may be followed by operation 912.
  • Operation 912 (Group the analyzed augmentations into the clusters) illustrates grouping the analyzed augmentations into clusters within the appropriate classes. The grouping can be based on the concept descriptions. Operation 912 may be followed by operation 914.
  • Operation 914 illustrates removing the grouped augmentations from previously rendered augmentations.
  • Operation 914 can be optional and can depend on whether the available augmentations under operation 904 are initially rendered to the user. Operation 914 may be followed by operation 916.
  • Operation 916 (Determine rendering formats for the clusters) illustrates determining rendering formats for the generated clusters. For each cluster, operation 916 can determine a renderer or a format based on the associated concept description, properties of the grouped augmentations, the user's context, or formats of other clusters. Operation 916 can be independent of input from the user. Operation 916 may be followed by operation 918.
  • Operation 918 (Render the clusters as exemplars) illustrates rendering the clusters as exemplars.
  • The rendering can be based on the determined formats and can comprise conceptual representations of the grouped augmentations. A skeleton of the full procedure follows below.
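Putting the numbered operations together, the procedure can be summarized as a straight-line pipeline. This skeleton is a hedged sketch, not the patent's implementation: the injected callables stand in for the repository, algorithm library, and renderer described above, and all names are illustrative.

```python
def run_exemplar_pipeline(context, retrieve, select_algorithms, render):
    """Skeleton mirroring operations 902-918 of FIG. 9."""
    augmentations = retrieve(context)                            # 902-904
    clusters = {}
    for algorithm in select_algorithms(context, augmentations):  # 906
        # 908-912: generate classes, attach concept descriptions,
        # and group the augmentations under them.
        for concept, members in algorithm(augmentations).items():
            clusters.setdefault(concept, []).extend(members)
    grouped = {m for members in clusters.values() for m in members}
    ungrouped = [a for a in augmentations if a not in grouped]    # 914
    exemplars = [render(concept, members)                         # 916-918
                 for concept, members in clusters.items()]
    return exemplars, ungrouped

# Minimal usage with toy callables:
exemplars, rest = run_exemplar_pipeline(
    context={"location": "park"},
    retrieve=lambda ctx: ["fr_avatar1", "fr_avatar2", "de_avatar1"],
    select_algorithms=lambda ctx, augs: [
        lambda augs: {
            "French speakers": [a for a in augs if a.startswith("fr")],
            "German speakers": [a for a in augs if a.startswith("de")],
        }],
    render=lambda concept, members: f"exemplar[{concept}] x{len(members)}",
)
print(exemplars, rest)
```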
  • Any of the operations, processes, etc. described herein can be implemented as computer-readable instructions stored on a computer-readable medium.
  • The computer-readable instructions can be executed by a processor of a mobile unit, a network element, and/or any other computing device.
  • If speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
  • Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a CD, a DVD, a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
  • A typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities).
  • A typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
  • Any two components so associated can also be viewed as being "operably connected", or "operably coupled", to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being "operably couplable" to each other to achieve the desired functionality.
  • Specific examples of operably couplable include, but are not limited to, physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
  • A range includes each individual member.
  • Thus, for example, a group having 1-3 cells refers to groups having 1, 2, or 3 cells.
  • Similarly, a group having 1-5 cells refers to groups having 1, 2, 3, 4, or 5 cells, and so forth.

Abstract

Technologies are generally described for automatic clustering and rendering of augmentations into one or more operational exemplars in an augmented reality environment. In some examples, based on a user's context, augmentations can be retrieved, analyzed, and grouped into clusters. Exemplars can be used to render the clusters as conceptual representations of the grouped augmentations. An exemplar's rendering format can be derived from the grouped augmentations, the user's context, or formats of other exemplars. Techniques for grouping the augmentations into clusters and rendering these clusters as exemplars to a user can enhance the richness and meaning of an augmented reality environment along contextually or user-determined axes while reducing the sensorial and cognitive load on the user.

Description

GENERATING AUGMENTED REALITY EXEMPLARS
BACKGROUND
[0001] Augmented reality (AR) focuses on combining real world and computer-generated data, including computer graphics objects blended into real video and images in real time for display to an end-user. The spread of personal electronic devices such as smartphones and accessibility to data networks and services via the Internet and other networks have enabled access and use of an increasing number of AR applications.
[0002] As AR applications and services become increasingly popular, the number of augmentations available in any given context will skyrocket. These augmentations may be visual, auditory, and haptic, and some augmentations may span different modalities. Whether the augmentations are for a particular place and time, a particular object or collection of objects, or for a person or collection of people, the number of augmentations can overwhelm a user's ability to process them.
[0003] The user can be overwhelmed when a large number of augmentations are displayed on an augmented reality device, impeding the user's ability to meaningfully and easily review desired augmentations. For example, a user walking through Times Square in New York can be bombarded by several million augmentations from businesses, government organizations, social groups, and end-users (e.g. virtual billboards, restaurant reviews, business placards, artwork, travel directions, messages, graffiti, etc.). Similarly, a user walking through a city park can see tens of thousands of augmented reality avatars jostling for space on the grass. In another example, a user leafing through a copy of Moby Dick may be unable to read a page scribbled over with annotations from thousands of others who have read the book.
SUMMARY
[0004] In various embodiments, systems, methods, and computer-readable media are disclosed for clustering and rendering of augmentations into one or more operational
"exemplars" or clusters that represent collections of augmentations.
[0005] In one embodiment, an augmented reality system can receive a context associated with a user or a user's device. The context may include physical and virtual information about the user's environment, such as the user's location, time of day, the user's personal preferences, the augmented reality services to which the user is subscribed, an image or object the user is pointing at or selecting, etc. The system can be associated with the user's device or with a service to which the user is subscribed.
[0006] In various embodiments, the augmentation system can determine and retrieve augmentations based on the context. Further, the augmentation system can automatically group the retrieved augmentations into clusters, determine rendering formats for each cluster, remove the grouped augmentations from previously rendered augmentations, and render the clusters as exemplars to the user.
[0007] In an embodiment, grouping augmentations into clusters and determining rendering formats can be based on the augmentations and the context. For example, the system can analyze the augmentations and the context and determine a conceptual clustering algorithm. The conceptual clustering algorithm can group the augmentations into clusters and associate the clusters with a concept describing properties of the grouped augmentations. The rendering formats of the clusters can be derived from the associated concepts. In a further embodiment, the rendering formats can exhibit several aspects of the clusters, such as appearance, behavior, and interactivity of the grouped augmentations. As such, when the clusters are rendered to a user as exemplars, the exemplars can provide descriptive, rich, informative, and meaningful conceptual summaries of the grouped augmentations.
[0008] For example, instead of displaying ten thousand augmented reality avatars crowded into a city park where the avatars represent users from countries around the globe, the avatars may be grouped into just ten exemplar avatars. Each exemplar avatar can be dressed in a flag of a different nation and can be "standing in" for a much larger set of avatars from the indicated nation. Thus, rather than being overwhelmed with ten thousand avatars, a user may see the ten exemplar avatars and decide to communicate with one of the exemplar avatars.
[0009] Clustering large numbers of augmentations into smaller sets of exemplars maintains the richness and meaning of an augmented reality environment along contextually or user-determined axes while reducing the sensorial and cognitive load on the user.
[0010] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE FIGURES
[0011] The foregoing and other features of this disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the
accompanying drawings, in which:
[0012] FIG. 1 depicts a block diagram illustrating an example computing device with which various embodiments of the present disclosure may be implemented.
[0013] FIG. 2 depicts an example network environment in which various embodiments of the present disclosure may be implemented.
[0014] FIG. 3 depicts an illustrative embodiment of an augmented reality system.
[0015] FIG. 4 depicts an example augmented reality display.
[0016] FIG. 5 depicts example augmentations displayed on a computing device.
[0017] FIG. 6 depicts an example grouping of augmentations into clusters.
[0018] FIG. 7 depicts an example rendering format of an exemplar.
[0019] FIG. 8 depicts an example of an augmented reality view before and after augmentations are clustered and the resulting exemplars are rendered.
[0020] FIG. 9 depicts an example operational procedure for grouping augmentations and rendering the resulting exemplars.
DETAILED DESCRIPTION
[0021] In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the Figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are explicitly contemplated herein.
[0022] This disclosure is generally drawn, inter alia, to methods, apparatus, systems, devices, and computer program products related to augmented reality. Briefly stated, technologies are generally described for a system for processing augmented reality data, including automatically grouping a number of augmentations into clusters referred to as exemplars and rendering the exemplars in descriptive formats.
[0023] FIG. 1 depicts a block diagram illustrating an example computing device 100 with which various embodiments of the present disclosure may be implemented. In a very basic configuration 102, computing device 100 typically includes one or more processors 104 and a system memory 106. A memory bus 108 may be used for communicating between processor 104 and system memory 106.
[0024] Depending on the desired configuration, processor 104 may be of any type including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. Processor 104 may include one or more levels of caching, such as a level one cache 110 and a level two cache 112, a processor core 114, and registers 116. An example processor core 114 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof. An example memory controller 118 may also be used with processor 104, or in some implementations memory controller 118 may be an internal part of processor 104.
[0025] Depending on the desired configuration, system memory 106 may be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.) or any combination thereof. System memory 106 may include an operating system 120, one or more applications 122, and program data 124. Application 122 may include an augmented reality process 126 that is arranged to perform functions as described herein including those described with respect to operations described in FIGs. 3-9. Program data 124 may include augmentation data 128 that may be useful for operation with augmented reality grouping and rendering techniques as is described herein. In some embodiments, application 122 may be arranged to operate with program data 124 on operating system 120 such that augmentations can be grouped into clusters which are then rendered as exemplars using a conceptual format. This described basic configuration 102 is illustrated in FIG. 1 by those components within the inner dashed line.
[0026] Computing device 100 may have additional features or functionality, and additional interfaces to facilitate communications between basic configuration 102 and any required devices and interfaces. For example, a bus/interface controller 130 may be used to facilitate communications between basic configuration 102 and one or more data storage devices
132 via a storage interface bus 134. Data storage devices 132 may be removable storage devices
136, non-removable storage devices 138, or a combination thereof. Examples of removable storage and non-removable storage devices include magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD), and tape drives to name a few. Example computer storage media may include volatile and nonvolatile, removable and nonremovable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
[0027] System memory 106, removable storage devices 136 and non-removable storage devices 138 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by computing device 100. Any such computer storage media may be part of computing device 100.
[0028] Computing device 100 may also include an interface bus 140 for facilitating communication from various interface devices (e.g., output devices 142, peripheral interfaces 144, and communication devices 146) to basic configuration 102 via bus/interface controller 130. Example output devices 142 include a graphics processing unit 148 and an audio processing unit 150, which may be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 152. Example peripheral interfaces 144 include a serial interface controller 154 or a parallel interface controller 156, which may be configured to communicate with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (e.g., printer, scanner, etc.) via one or more I/O ports 158. An example communication device 146 includes a network controller 160, which may be arranged to facilitate communications with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
[0029] The network communication link may be one example of communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) and other wireless media. The term computer readable media as used herein may include both storage media and communication media.
[0030] Computing device 100 may be implemented as a portion of a small-form factor portable (or mobile) electronic device such as a cell phone, a personal data assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application specific device, or a hybrid device that includes any of the above functions.
Computing device 100 may also be implemented as a personal computer including both laptop computer and non-laptop computer configurations.
[0031] FIG. 2 depicts an example network environment in which various embodiments of the present disclosure may be implemented. In particular, FIG. 2 illustrates an example computing arrangement 200 comprised of computing devices 210, each of which may be adapted to provide augmented reality applications as described herein. The computing devices 210 may comprise, for example, any of a desktop computer 210a, a laptop computer 210b, a phone 210c, a tablet computing device 210d, a personal digital assistant (PDA) 210e, and a mobile phone 210f, each of which may be adapted to process and display augmented reality data to a user.
[0032] Each of the devices 210 may be adapted to communicate using a
communications network 250. The communications network 250 may be any type of network that is suitable for providing communications between the computing devices 210 and any servers 220 accessed by the computing devices 210. The communications network 250 may comprise a combination of discrete networks which may use different technologies. For example, the communications network 250 may comprise local area networks (LANs), wide area networks (WANs), cellular networks, or combinations thereof. The communications network 250 may comprise wireless networks, wireline networks, or a combination thereof. In an example embodiment, the communications network 250 may comprise the Internet and may additionally comprise any networks adapted to communicate with the Internet. The communications network 250 may comprise a wireless telephony network that is adapted to communicate video, audio, and other data between the computing devices 210 and the servers 220.
[0033] In an embodiment, augmentation data can be processed by an augmented reality device, such as any of the computing devices 210. The augmented reality device can be coupled to an analysis engine or an augmentation service hosted on a computing device, such as the server 220.
[0034] In an example scenario, the augmented reality device 210 may be directed, for example, by a user to activate an augmented reality application. The augmented reality device 210 may determine or be associated with a user's context, which may include information associated with physical and virtual environments of the user, such as the user's location, time of day, the user's personal preferences, the augmented reality services to which the user is subscribed, an image or object the user is pointing at or selecting, etc.
[0035] The augmented reality device 210 can communicate with the server 220 over the communications network 250. The server 220 can comprise a repository of augmentation data and can be adapted to provide augmentation services. For example, the server 220 can include a library of clustering and rendering models and algorithms adapted to perform real-time clustering and rendering of augmentations. The augmented reality device 210 can query the server 220 to determine and receive augmentations based on the user's context. In one scenario, the server 220 can transmit augmentations and corresponding rendering formats to the augmented reality device 210 which can render the received augmentations to the user. In an alternative scenario, the server 220 can render the augmentations and transmit the rendered augmentations to the augmented reality device 210.
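For illustration only, the following Python sketch shows one way the device-side query described in paragraph [0035] might be implemented. The endpoint URL, payload fields, and response shape are assumptions introduced here for illustration and are not part of the disclosure.

```python
# Non-limiting sketch of the device-to-server augmentation query.
# The endpoint and payload fields are illustrative assumptions.
import json
import urllib.request

def fetch_augmentations(context, server_url="https://augmentations.example/query"):
    """POST the user's context to the augmentation service and return
    the augmentations (and, optionally, their rendering formats)."""
    payload = json.dumps({
        "location": context.get("location"),          # e.g. (lat, lon)
        "time_of_day": context.get("time_of_day"),
        "preferences": context.get("preferences"),
        "subscriptions": context.get("subscriptions"),
    }).encode("utf-8")
    request = urllib.request.Request(
        server_url, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```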
[0036] In yet another scenario, augmentation data can be stored on the augmented reality device 210. As such, grouping and rendering the augmentation data can be processed locally on the augmented reality device 210, eliminating the need for the augmented reality device to query the server 220. In a further scenario, the augmented reality device 210 can be in communication with another computing device 210 to exchange augmentation data and services. For example, the tablet 210d can be adapted to provide an interface to a user and to provide the user's context to the desktop 210a. In turn, the desktop 210a can be adapted to provide augmentation services to the user via the interface tablet 210d.
[0037] FIG. 3 depicts an illustrative embodiment of an augmented reality system 300.
A scene 310 may be viewed and captured by the augmented reality device 210. For example, the augmented reality device 210 can integrate an image or a video capture device. Alternatively or additionally, the augmented reality device 210 can be adapted to retrieve an image of the scene
310 based on the user's context. The image can be retrieved from data stored locally on the augmented reality device 210 or externally on another device 210 or the server 220 in
communication with the augmented reality device 210 as shown in FIG. 2. The scene 310 may be associated with a set of scene coordinates (X,Y,Z). Based on the image of the scene 310 and/or the user's context, augmentations 320 may be determined and retrieved 315. The augmentations 320 can comprise virtual representations of the scene 310 and of objects or persons associated with the scene 310. For example, the augmentations 320 may comprise other images, metadata, information, or descriptions related to the scene 310. The augmentations 320 may also be associated with a set of coordinates (X,Y,Z). The image of the scene 310 can be merged 325 with the augmentations 320 to generate a virtual image of the scene 310. The virtual image can be rendered 330 and displayed to the user. The generation of the virtual image may be performed with a standard computer graphics system internal or external to the augmented reality device 210. The graphics system may align the image of the scene 310 and the augmentations 320 based on the associated coordinates (X,Y,Z). Further, the graphics system may use real world information about the imaging of the scene 310 so that the virtual image can be correctly rendered. The determination 315 of the augmentations 320, the merging 325 and aligning of the image and the augmentations 320 to create the virtual image, and the rendering 330 of the virtual image can be accomplished locally on the augmented reality device 210, externally on another device 210 or the server 220 in communication with the augmented reality device 210, or can be distributed between the augmented reality device 210, the other devices 210, and the server 220.
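As a non-limiting illustration of the alignment and merge steps of FIG. 3, the following Python sketch projects scene-space augmentations onto the image plane. The pinhole camera model, focal length, and data shapes are assumptions introduced here for demonstration only.

```python
# Sketch of aligning (X,Y,Z) augmentations with a captured scene.
from dataclasses import dataclass

@dataclass
class Augmentation:
    label: str
    x: float  # scene coordinates (X, Y, Z)
    y: float
    z: float

def project_to_screen(aug, focal_length=800.0, cx=640.0, cy=360.0):
    """Pinhole projection of a scene-space augmentation onto the
    image plane so it can be overlaid on the captured scene."""
    if aug.z <= 0:
        return None  # behind the camera; nothing to draw
    u = cx + focal_length * aug.x / aug.z
    v = cy + focal_length * aug.y / aug.z
    return (u, v)

def merge(scene_image, augmentations):
    """Pair each visible augmentation with its screen position; a
    real system would rasterize these into the video frame."""
    overlays = []
    for aug in augmentations:
        pos = project_to_screen(aug)
        if pos is not None:
            overlays.append((aug.label, pos))
    return scene_image, overlays
```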
[0038] FIG. 4 depicts an example augmented reality display. Augmentations of the scene 310 of FIG. 3 can be displayed on the augmented reality device 210 of FIG. 2. The augmentations can, for example, comprise a title 410 of an object contained in the scene and a text description 420 about the object. The augmentations may be overlaid or merged with the image such that the real image and the augmentations may be combined in a single virtual image and presented to the user.
[0039] FIG. 5 depicts example augmentations displayed on a computing device. In particular, FIG. 5 illustrates a virtual image 510 displayed on the augmented reality device 210 of FIG. 2. The virtual image 510 can comprise an image of a scene, such as a panoramic view of Paris or a portion thereof, merged with augmentations 520a-d describing objects or monuments in the scene. The augmentations 520a-d can comprise descriptive titles and comments created by tourists about the objects. A comment can comprise a rating in the form of a one-to-five star scale and a feedback text field. A user 530 can select and expand any number of the augmentations 520a-d. For example, the user 530 can shake the augmented reality device 210, mouse over, single-click, double-tap, or motion over the augmentations 520a-d to retrieve additional information about the objects contained in the augmentations 520a-d. Additional information can be retrieved from data stored locally or externally to the augmented reality device 210. For example, the additional information may comprise images taken by tourists, a list of attractions nearby, a list of restaurants with menus, prices, advertisements, etc.
[0040] In a world where augmented reality has become commonplace, it would be useful to have a way to organize this increasing sensorial and cognitive data. As augmented reality applications and services become increasingly popular, the number of augmentations available in any given context will skyrocket. These augmentations may be visual, auditory, and haptic, and some augmentations may span modalities. Whether the augmentations are for a particular place and time, a particular object or collection of objects, or for a person or collection of people, the number of augmentations can overwhelm a user's ability to process them.
[0041] One solution to the problem of "too many augmentations" is to allow the end-user to selectively hide or show them. Such techniques are typically referred to as "filtering." For example, an end-user can set up a filter to remove from sight all the advertisements overlaid on a scene, show only a professor's notes on a copy of Moby Dick, or turn off all audio commentaries during a performance of Swan Lake. However, the filtering techniques may require the end-user to know in advance when and what specific augmentations he or she wants to have available. FIGs. 6-9 present embodiments of alternative or additional techniques to filtering that can maintain the richness and meaning of an augmented reality environment along contextually or user-determined axes while reducing the sensorial and cognitive load on the user. These techniques may comprise automatic grouping of augmentations into clusters and rendering of the clusters based on conceptual formats representative of the augmentations grouped therein.
[0042] FIG. 6 depicts an example grouping of augmentations into clusters. An illustrative example of grouping augmentations into clusters includes a user 610 walking through a park, such as Central Park in New York City, while using the augmented reality device 210. The augmented reality device 210 can render or display a virtual image 650, such as a map of the park overlaid or merged with augmentations. The user 610 can use the augmented reality device 210 to navigate around the virtual image 650, which in this example represents the augmented reality park. There can be a number of real visitor augmentations 620a-m in the virtual image 650 associated with real people. For example, real people can also be walking through the park, and can have augmentations in the form of avatars. Additionally, there can be a large number of virtual visitor augmentations 630a-n in the virtual image 650 associated with virtual people. For example, people from around the world having augmentations in the form of avatars can also be visiting the park virtually at the same time as the user 610. Additionally, the virtual image 650 may comprise many other augmentations and types thereof, which are not represented in FIG. 6 for the sake of clarity.
[0043] To avoid overwhelming the user 610 with the large number of augmentations 620-630, some or all of the augmentations 620-630 can be grouped into clusters and the clusters can be rendered as exemplars 622, 632, 640.
[0044] In an example, the augmentations 620-630 can be initially displayed on the augmented reality device 210. The augmented reality device 210 can, locally or through another computing device, group the real visitor augmentations 620 into a real visitor cluster and the virtual visitor augmentations 630 into a virtual visitor cluster. In turn, the two generated clusters can be grouped into a higher layer visitor cluster. The augmented reality device 210 can display the generated clusters as real visitor exemplar 622, virtual visitor exemplar 632, and virtual exemplar 640. Further, the augmented reality device 210 can remove the grouped augmentations from the initially displayed augmented reality output, i.e. the output comprising all the real visitor augmentations 620 and the virtual visitor augmentations 630 previously displayed, and display the ungrouped augmentations alongside the exemplars 622, 632, 640.
[0045] As such, the augmentations 620-630 can be automatically grouped and presented to the user 610 in a simplified augmented reality presentation merged in the virtual image 650. The user 610 can in turn access, communicate with, and expand the exemplars 622, 632, 640. The clustering can be multi-layered and can comprise classes of clusters with hierarchical structures. In this example, the visitor cluster can comprise the real visitor cluster and the virtual visitor cluster. In turn, the real visitor cluster can group the real visitor augmentations 620a-m and the virtual visitor cluster can group the virtual visitor augmentations 630a-n. For the sake of clarity, FIG. 6 only depicts two clustering layers (i.e. the visitor cluster as a first layer, and the real and virtual visitor clusters as a second layer). However, embodiments are not limited to the exemplified layers. Additional or different clustering layers and sub-layers can be defined based on the augmentations, the user's context, and other factors.
[0046] Grouping augmentations into clusters can comprise analyzing properties of the augmentations available in a given user's context, generating one or more classes of clusters with possible hierarchical category structures based on the analyzed properties, associating each class with a concept description, and using the concept description to group the augmentations into the clusters within the appropriate classes.
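For illustration only, the following Python sketch shows one simple realization of the grouping steps of paragraph [0046]: analyze properties, form classes, attach concept descriptions, and assign augmentations. The property names and the concept-description format are assumptions introduced here and are not part of the disclosure.

```python
# Non-limiting sketch of grouping augmentations into described clusters.
from collections import defaultdict

def group_into_clusters(augmentations, key_properties):
    """Group augmentations that share the same values for the key
    properties; the shared values become the concept description."""
    clusters = defaultdict(list)
    for aug in augmentations:
        concept = tuple(aug.get(p) for p in key_properties)
        clusters[concept].append(aug)
    # Turn each property tuple into a readable concept description,
    # e.g. "language=fr & moustache=True".
    return {
        " & ".join(f"{p}={v}" for p, v in zip(key_properties, concept)): members
        for concept, members in clusters.items()
    }
```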
[0047] In an embodiment, clustering algorithms can be used to generate the clusters and group the augmentations therein. Example clustering algorithms can include conceptual clustering algorithms, such as COBWEB and ITERATE. The conceptual clustering algorithms can comprise a machine learning paradigm for unsupervised classification of data that can be adapted to generate concept descriptions and hierarchical category structures associated with classes. For example, the conceptual clustering algorithms can consider properties exhibited by or inherent to the augmentations and other information available to the algorithm, such as the user's context, to generate concepts, classes, and clusters. Other clustering algorithms can include the MICROSOFT CLUSTERING ALGORITHM available with SQL SERVER 2008 and the Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) algorithm.
[0048] In an embodiment, the clustering algorithms can be stored in a library. Based on the user's context and the augmentations, one or more clustering algorithms can be retrieved from the library and applied to the augmentations. For example, the library can comprise an avatar clustering algorithm adapted to group avatar-like augmentations based on the
augmentation properties such as social status (e.g. single, married, divorced, in a relationship, etc.), gender, age, activity (e.g. on vacation, running to a meeting, etc.), profession, hobbies, location, spoken languages, personal message, friends, etc. The library can also comprise a facial recognition clustering algorithm that can group avatars by analyzing common facial traits found in images of people associated with the avatars. The facial recognition clustering algorithm can further generate a description of the analyzed common facial traits. In addition, the library can comprise a third conceptual clustering algorithm for grouping augmentations that represent venues by analyzing their locations, distance of a user to the locations, time of the day, nearby attractions, ratings, recommendations, feedback from other users, facts, menus, prices, activities, cuisines, required attire, etc. The stored algorithms can be categorized. For example, the avatar and the facial recognition algorithms can be categorized as applicable to grouping avatars, whereas the venue clustering algorithm can be categorized as applicable to grouping activities. Embodiments are not limited to the exemplified conceptual clustering algorithms. Additional algorithms and categories can be defined and stored in the library.
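A minimal Python sketch of such a categorized library follows. The registry keys, category names, and the property-based algorithm factory are illustrative assumptions, not the disclosed library.

```python
# Non-limiting sketch of a categorized clustering-algorithm library.
from collections import defaultdict

def cluster_by(prop):
    """Build a clustering algorithm that groups augmentations by one
    property (e.g. spoken language, facial trait, venue cuisine)."""
    def algorithm(augmentations, context):
        groups = defaultdict(list)
        for aug in augmentations:
            groups[aug.get(prop, "unknown")].append(aug)
        return dict(groups)
    return algorithm

CLUSTERING_LIBRARY = {
    "avatars": {
        "language": cluster_by("language"),
        "facial_traits": cluster_by("moustache"),
    },
    "activities": {
        "venue_cuisine": cluster_by("cuisine"),
    },
}

def retrieve_algorithms(category):
    """Look up every algorithm registered for a category, as when the
    context indicates an interest in grouping avatars."""
    return list(CLUSTERING_LIBRARY.get(category, {}).items())
```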
[0049] The augmentations and the user's context can be used to determine an appropriate category of clustering algorithms that can be applied to the augmentations. The library can be searched, and the appropriate clustering algorithms can be retrieved. For example, when the augmentations represent avatars and the user's context indicates an interest in communicating with people who can speak French and have a moustache, a determined category can comprise avatar grouping algorithms. As such, the library can be searched for an algorithm within that category, and the avatar and the facial recognition clustering algorithms can be retrieved. Other clustering algorithms need not be retrieved.
[0050] The retrieved clustering algorithms can be applied to analyze properties of the augmentations. The analysis can comprise comparing the properties to criteria from the user's context. Based on the properties, the user's context, and the comparison, the clustering algorithms can generate classes of clusters with hierarchical structures and concept descriptions associated with the generated classes. The concept descriptions can be used to group the augmentations into the clusters within the appropriate classes. The grouping can comprise adding an augmentation to a generated cluster based on a comparison between the properties of the augmentation and the concept description of the class associated with the generated cluster. Continuing with the avatar example, the avatar clustering algorithm can be applied to generate two classes of clusters. The corresponding concept descriptions can be avatars that can speak French and avatars that can speak only other languages. The facial recognition clustering algorithm can be applied to create two other classes of clusters within the class of French speaking avatars. The additional concept descriptions can be avatars that can speak French and have a moustache and avatars that can speak French and do not have a moustache. As such, a hierarchical structure of cluster classes associated with concept descriptions can be created. In this example, a first hierarchy can comprise a class that groups avatars based on language skills, and a second hierarchy can comprise a class that groups avatars based on facial hair traits. Thus, a total of three clusters can be created: one for avatars that cannot speak French, one for avatars that can speak French but do not have a moustache, and one for avatars that can speak French and have a moustache. The augmentations can be grouped into the clusters based on matches, such as language skills and facial hair traits, between the augmentation properties and the concept descriptions.
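A non-limiting worked sketch of the avatar example of paragraph [0050] follows: a first pass splits on language, and a second pass refines only the French speakers on facial hair, yielding the three described clusters. The data fields and avatar records are hypothetical.

```python
# Worked sketch: two-pass hierarchical clustering of avatars.
avatars = [
    {"name": "a1", "language": "fr", "moustache": True},
    {"name": "a2", "language": "fr", "moustache": False},
    {"name": "a3", "language": "de", "moustache": True},
]

french = [a for a in avatars if a["language"] == "fr"]
other_languages = [a for a in avatars if a["language"] != "fr"]

clusters = {
    "speaks French, moustache": [a for a in french if a["moustache"]],
    "speaks French, no moustache": [a for a in french if not a["moustache"]],
    "does not speak French": other_languages,
}
# Each key doubles as the concept description that is later used to
# derive the rendering format (e.g. an avatar holding a French flag).
```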
[0051] The concept descriptions can be further used to generate rendering formats as described herein below. For example, the language skills concept description can be used to render a cluster as an avatar holding a French flag, while the facial hair concept description can be used to add a moustache to the rendered avatar.
[0052] FIG. 7 depicts an example rendering format of an exemplar. Augmentation data can comprise augmentations and clusters of augmentations. The augmentation data can be rendered as exemplars. An augmented reality device, such as any of the devices 210 of FIG. 2, can be used to render the augmentation data. In an embodiment, the augmented reality device can comprise graphics processing units or a computing system adapted to process and render graphics data. In another embodiment, an external computing device, such as server 220 of FIG. 2, can receive, process, and render the augmentation data and transmit the rendered data to the augmented reality device for display to a user 730. In yet another embodiment, the external device can send rendering instructions, format, or information to the augmented reality device which in turn can render the augmentation data based on the received information.
[0053] An exemplar can be a conceptual representation used to render a cluster. The conceptual representation can comprise sensorial representations such as visual, auditory, and haptic representations, of a concept description associated with the cluster. The concept description can reflect properties of augmentations grouped into the cluster. Additionally, the exemplar can be created ad-hoc in response to properties of the augmentation data and can comprise semantics and presentation rules of such data. Further, each exemplar can be represented in relation to other exemplars. As such, the exemplar can provide a perceptual summary of the cluster's properties and content. In other words, the exemplar can provide a means to deal with an overabundance of augmentations through intelligent indirection, reducing a user's sensorial and cognitive overload while maintaining the richness and meaning of the original augmentations.
[0054] For example, in an augmented reality street scene that contains billboards, restaurants, and avatars, three different classes of exemplars can be created: a class for virtual billboards, a class for restaurant reviews, and a class for avatars. Each of the classes can provide its own rules for presentation. Considering the class of virtual billboards, the billboards can be grouped into a business-related cluster (e.g. an advertisement for a sale at a nearby sporting goods store) and a public service cluster (e.g. a high crime area warning). Each exemplar can express not only distinct characteristics (e.g. appearance, behavior, and interactivity) of the cluster, but also common characteristics shared with other clusters within the same class. As such, the business-related cluster can be rendered as an exemplar comprising a 3D model of a billboard, which can be a common characteristic of the clusters within the class of virtual billboards. Additionally, the exemplar can comprise a "For Sale" sign across the 3D model, which can be a distinct characteristic of the business-related cluster.
[0055] Rendering the clusters can comprise determining renderers or rendering formats for the clusters and rendering the clusters as exemplars based on the determined formats. Similar rendering techniques can also be used to render the augmentations as exemplars. As described herein above, the exemplar can comprise a 2D or 3D object providing a conceptual
representation of its content to a user. For example, when an augmentation is associated with an avatar, the exemplar can comprise a 3D avatar customized to exhibit properties of the augmentation. As such, if the augmentation represents a middle-aged businessman, the object can comprise a male avatar having gray hair, wearing a suit, and carrying a briefcase. Similarly, when augmentations in the form of avatars visiting a park are grouped into clusters based on the avatar nationalities, the corresponding exemplars can be avatars holding flags of the different nations. Further, properties of the augmentations can be reflected at the different cluster hierarchical layers. For example, a top-level cluster of park visitors can comprise lower-level clusters of the park visitors grouped by nationalities. The top-level cluster can be rendered as an avatar sitting on a park bench next to a globe of the world, whereas the lower-level clusters can be rendered as avatars carrying flags of the different nations.
[0056] Determining a renderer or a rendering format for a cluster can be accomplished without input from an end-user. The format can be automatically determined based on the augmentation data, the cluster, other clusters, and the user's context.
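By way of a non-limiting sketch, a rendering format might be derived from a concept description without end-user input as follows. The format fields and the concept strings (mirroring the hypothetical avatar example above) are illustrative assumptions.

```python
# Sketch of mapping a concept description to exemplar display attributes.
def rendering_format(concept, cluster_size):
    """Translate a cluster's concept description into display
    attributes for the exemplar that stands in for the cluster."""
    fmt = {"model": "avatar_3d", "props": [], "scale": 1.0}
    if "speaks French" in concept:
        fmt["props"].append("french_flag")
    if "moustache" in concept and "no moustache" not in concept:
        fmt["props"].append("moustache")
    # Larger clusters can render larger, hinting at how many
    # augmentations the exemplar stands in for.
    fmt["scale"] += 0.1 * cluster_size
    return fmt
```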
[0057] In an embodiment, the format can be derived from a concept description of the class of clusters. For example, augmentations of visitors to a park can be represented as avatars. The augmentations can be grouped in a multi-tier cluster hierarchy based on the activities of the avatars. First tier clusters can be associated with the concept of an international baseball game between the avatars. Second tier clusters can be associated with the concept of baseball teams from different nations. Third tier clusters can be associated with the concept of active and substitute baseball players. Aspects of the concept descriptions can be used in determining the formats. As such, the first tier cluster can be rendered as an avatar wearing a baseball hat and holding a trophy. The second tier clusters can be represented by avatars wearing national jerseys. The third tier clusters can be displayed as avatars carrying a bat or sitting on a bench.
[0058] In a further embodiment, a similar conceptual analysis can be applied to the grouped augmentations to derive a cluster concept based on common properties of the grouped augmentations. The common properties can comprise a certain range of shared characteristics. The format can be derived from the cluster concept and can be adapted to exhibit several aspects, such as behavior, appearance, and interactivity, of the cluster or the grouped augmentations. For example, when a cluster groups augmentations representing visitors to a park from different nations, the cluster concept can be international avatars. As such, a cluster grouping visitors from Germany can be formatted as an avatar holding a German flag. Additionally, an analysis of the grouped augmentations (e.g. the German visitors) may reveal that most augmentations are visiting the park to take pictures of objects therein. Based on this analysis, the format of the cluster (e.g. the avatar holding a German flag) can be further updated to incorporate a camera.
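For illustration, the common-property analysis of paragraph [0058] might look like the following sketch, which folds a majority-shared activity into the cluster's format. The 50% threshold and the field names are assumptions introduced here.

```python
# Non-limiting sketch of deriving a cluster concept from shared properties.
from collections import Counter

def dominant_activity(members, threshold=0.5):
    """Return an activity shared by a majority of the grouped
    augmentations, e.g. 'taking_pictures' for the German visitors."""
    counts = Counter(m.get("activity") for m in members if m.get("activity"))
    if not counts:
        return None
    activity, count = counts.most_common(1)[0]
    return activity if count / len(members) > threshold else None

def refine_format(fmt, members):
    """Fold a dominant shared activity into the cluster's format."""
    if dominant_activity(members) == "taking_pictures":
        fmt["props"].append("camera")  # the flag-holding avatar gains a camera
    return fmt
```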
[0059] Similarly, the format of a cluster can be determined relative to the formats of other clusters. Multiple clusters can be comparatively analyzed. For example, ratios of cluster characteristics (e.g. number of augmentations, data size in bytes, etc.) may be calculated to derive relative cluster sizes. The relative cluster sizes can be used to update the formats of the compared clusters. In the previous example, if a cluster of USA visitors is rendered with an avatar holding a USA flag, the USA and German clusters can be compared. The comparison may reveal that the USA cluster has twice as many visitors as the German cluster. As such, the avatar holding the USA flag can be displayed on a graphical user interface with a size twice as large as the avatar holding the German flag.
[0060] In a further embodiment, the format can be refined by also considering the user's context. The user's context can be retrieved and analyzed to determine properties that can be applied to the format. For example, avatars holding flags can represent clusters of park visitors from different nations. The avatars can be further updated to reflect some or all aspects of the user's context. When analyzing the user's context, a list of friends may be retrieved. As such, a cluster comprising an augmentation associated with a friend from the list can be updated. The update can comprise adding the words "A Friend Is Here" to the avatar.
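A minimal sketch of the comparative sizing of paragraph [0059] and the context-based refinement of paragraph [0060] follows; the linear scaling rule and the friend-list check are illustrative assumptions.

```python
# Non-limiting sketch of relative sizing and context refinement.
def relative_scales(clusters):
    """Scale each exemplar in proportion to its cluster size, so a
    cluster with twice the visitors renders twice as large."""
    largest = max(len(members) for members in clusters.values())
    return {name: len(members) / largest for name, members in clusters.items()}

def annotate_friends(fmt, members, friends):
    """Refine a format from the user's context by flagging clusters
    that contain a friend of the user."""
    if any(m.get("name") in friends for m in members):
        fmt["badge"] = "A Friend Is Here"
    return fmt
```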
[0061] Referring to FIG. 7, the rendering format of a cluster as an exemplar can be adjusted based on a wide range of factors and goals. In an embodiment, appearance of an exemplar may be adjusted to provide an aggregation of collapsed classes of attributes. The appearance may comprise look, sound, and feel attributes exhibited by the cluster or the augmentation data grouped therein. For example, in an embodiment in which augmentations are represented as avatars, a format can use the height of the exemplar to indicate the number of grouped avatars into the cluster. Further, the format can use clothing and accessories of the exemplar to represent a nationality. Considering the example of visitors to a park, a French cluster 710 representing a small number of French tourists may be formatted to appear short, wearing a beret, and carrying a baguette, while an American cluster 720 representing a large number of American businessmen may be formatted to appear tall, slim, and dressed in a suit.
[0062] The augmentation data may also exhibit behaviors responsive to the aggregated classes of attributes. The behaviors can be analyzed to derive the format. For example, if most of the American avatars in the cluster are very active (e.g. moving around the park), the American cluster 722 may be updated or formatted to represent a jogging avatar. Similarly, if most of the French avatars are relaxing in the park (e.g. having a conversation), the French cluster 712 may be updated or formatted to represent a more sedentary avatar, such as one sitting on a park bench.
[0063] Responsive to the aggregated classes of attributes, the augmentation data may further exhibit interactivity or interactions between the augmentations, the clusters, and/or the user 730. The interactions can be analyzed to derive or refine the formats. For example, the user 730 can be fluent in English and eager to learn French. An interaction between the user 730 and the American cluster 722 can comprise an exchange of text messages in English with all the American avatars comprised in the American cluster 722. An interaction between the user 730 and a French avatar contained in the French cluster 712 can comprise launching a translation application or program and translating text messages exchanged between the user 730 and the French avatar.
[0064] In an embodiment, a user can access and expand the cluster and interact with one or more augmentations grouped therein. For example, a user interested in playing a game of chess can access a cluster representing avatars with the same interest, select one of the avatars, and start a chess game with the selected avatar. In a further embodiment, a user can interact with the cluster or the class of clusters. In such an embodiment, interactions received from the grouped augmentations contained in the cluster can be presented to the user in a raw format or can be rendered according to the clustering and rendering techniques described herein. For example, a user may be interested in improving his or her cooking skills and may interact with a cluster representing cooks from around the world. The user may ask the cluster for a recipe and some of the cooks may respond to the request. The responses can be presented to the user in the received order. Additionally or alternatively, the responses can be grouped and rendered as exemplars representing clusters of cooks known to the user, special diet recipes, and recipes provided by restaurants.
[0065] The determination of the rendering format can be independent of input from the user and can be instead based on the augmentation data. The format can also take advantage of properties derived from the user's context. In other words, the user need not specify the format of the augmentations or clusters because the exemplars can be derived automatically.
[0066] FIG. 8 depicts an example of an augmented reality view before and after augmentations are clustered and the resulting exemplars are rendered. In an embodiment, a smartphone 820, such as a GOOGLE ANDROID phone, a MICROSOFT WINDOWS phone, or an APPLE IPHONE, connected to the Internet can execute an application for an augmented reality exemplar as described herein. For example, a user 850 may be standing on a busy street corner in Washington D.C. around lunchtime, looking for something to eat, surrounded by restaurants but unsure which one to choose. The user 850 may launch a restaurant finder application on the smartphone 820 and pan up and down the street using the smartphone 820. On the screen of the smartphone 820, the user 850 can receive and view an augmented reality version 800 of the street. At first, the user 850 may see a large number, maybe in the thousands, of restaurant review annotations 802A-N over the entire screen. An annotation 802 may include other people's ratings and comments about each restaurant. The large number of annotations 802A-N and their overlapping presentations can prevent the user 850 from even seeing the restaurants and reading many ratings and comments.
[0067] Luckily, the user 850 can run the augmented reality exemplar application on the smartphone 820, and in a few seconds all the annotations 802A-N can start to cluster together into exemplars 812A-M. The total number "M" of exemplars 812A-M can be substantially smaller than the total number "N" of annotations 802A-N. Further, the representation of an exemplar 812 can be much simpler than the representation of an annotation 802. An exemplar 812 for a restaurant review can look like a simple one-to-five star flag with a color coding. As such, the street scene can be easier to understand immediately. Using this type of exemplar, the restaurants can have black-colored ratings floating above them. The augmented reality exemplar application may display an exemplar 812A with the maximum rating of five stars over the restaurant nearest to the user 850. However, the black stars may not be quite as dark as the black stars rendered above some of the other nearby restaurants. In fact, the stars of the exemplar 812A may be displayed in a quite pale color.
[0068] The color shade can reflect the number of augmentations clustered together to generate the exemplar 812A. In other words, darker displayed stars can indicate a larger number of people rating a restaurant. As such, the exemplar 812A with the pale color can indicate that the nearest restaurant with the five stars may not have had many people rate it. The user 850 may not think that he or she should trust a rating, even a five star rating, from an establishment with so few reviews.
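For illustration, the shading rule described in paragraphs [0067]-[0068] might be sketched as follows; the averaging of stars and the saturation mapping (fully dark at 50 or more reviews) are assumptions introduced here.

```python
# Non-limiting sketch: collapse a cluster of reviews into a star flag
# whose shade encodes how many reviews back it.
def star_exemplar(ratings):
    """Average the star ratings and map the review count to a 0..1
    darkness, so a five-star rating from few reviewers renders pale."""
    stars = round(sum(r["stars"] for r in ratings) / len(ratings))
    darkness = min(len(ratings) / 50.0, 1.0)
    return {"stars": stars, "darkness": darkness}

# A five-star restaurant with only 3 reviews yields pale stars:
# star_exemplar([{"stars": 5}] * 3) -> {"stars": 5, "darkness": 0.06}
```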
[0069] Further, the color can provide additional cognitive information to the user 850. For example, black-colored stars can be ratings by people that the user 850 does not know, whereas brightly-colored stars can be ratings by friends of the user 850. Continuing with the restaurant example, the augmented reality exemplar application may display all black-colored stars for the nearest restaurant and brightly-colored stars above some of the other restaurants nearby. These more colorful stars can indicate ratings from friends of the user 850. So, the user 850 may turn the smartphone 820 away from the black five-star restaurant to find a place that has the most colorful stars. The augmented reality exemplar application may display to the user 850 a bright green five-star restaurant located across the street. The user 850 may perceive that a lot of his or her friends positively enjoy the located restaurant and may decide to pocket the smartphone and rush across the street for lunch.
[0070] FIG. 9 depicts an example operational procedure for grouping augmentations and rendering the resulting exemplars including operations 900, 902, 904, 906, 908, 910, 912,
914, 916, and 918. Operation 900 starts the operational procedure, where an augmented reality application or service may be activated on an augmented reality device, such as any of the devices 210 of FIG. 2. Operation 900 may be followed by operation 902. Operation 902
(Receive a user's context) illustrates receiving a user's context. The user's context may include information about the user's physical and virtual environments. Operation 902 may be followed by operation 904. Operation 904 (Retrieve augmentations) illustrates determining augmentations based on the received user's context. The augmented reality device can access or connect to a repository of augmentations and use the user's context to retrieve the appropriate augmentations. The augmented reality device may or may not render these augmentations to the user and may proceed to generate clusters and exemplars comprising some or all of the retrieved
augmentations. Operation 904 may be followed by operation 906.
[0071] Operation 906 (Analyze the retrieved augmentations) illustrates analyzing the retrieved augmentations. Based on the received user's context, the retrieved augmentations, and other factors, one or more conceptual clustering algorithms can be retrieved from a library and applied to some or all of the retrieved augmentations. Operation 906 may be followed by operation 908. Operation 908 (Generate one or more classes of clusters) illustrates generating one or more classes of clusters. The classes of clusters can comprise hierarchical category structures based on the analyzed properties of the augmentations. Operation 908 may be followed by operation 910. Operation 910 (Associate clusters with concept descriptions) illustrates associating each class with a concept description. The concept descriptions may be derived from the analyzed properties. Operation 910 may be followed by operation 912.
Operation 912 (Group the analyzed augmentations into the clusters) illustrates grouping the analyzed augmentations into clusters within the appropriate classes. The grouping can be based on the concept descriptions. Operation 912 may be followed by operation 914.
[0072] Operation 914 (Remove the grouped augmentations from previously rendered augmentations) illustrates removing the grouped augmentations from previously rendered augmentations. Operation 914 can be optional and can depend on whether the available augmentations under operation 904 are initially rendered to the user. Operation 914 may be followed by operation 916. Operation 916 (Determine rendering formats for the clusters) illustrates determining rendering formats for the generated clusters. For each cluster, operation 916 can determine a renderer or a format based on the associated concept description, properties of the grouped augmentations, the user's context, or formats of other clusters. Operation 916 can be independent of input from the user. Operation 916 may be followed by operation 918.
Operation 918 (Render the clusters as exemplars) illustrates rendering the clusters as exemplars. The rendering can be based on the determined formats and can comprise conceptual
representations of the clusters.
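To tie the procedure together, the following non-limiting Python sketch wires operations 902-918 into one pipeline. The repository interface, the single grouping key, and the format strings are assumptions introduced here for illustration.

```python
# End-to-end sketch of the operational procedure of FIG. 9.
from collections import defaultdict

def run_exemplar_pipeline(context, repository, group_key="nationality"):
    augmentations = repository.retrieve(context)              # 902-904
    clusters = defaultdict(list)                              # 906-908
    for aug in augmentations:
        if group_key in aug:                                  # 910-912
            clusters[aug[group_key]].append(aug)
    grouped = {id(a) for members in clusters.values() for a in members}
    ungrouped = [a for a in augmentations if id(a) not in grouped]  # 914
    exemplars = [                                             # 916-918
        {"concept": key, "count": len(members),
         "format": f"avatar_with_{key}_flag"}
        for key, members in clusters.items()
    ]
    return ungrouped, exemplars
```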
[0073] One skilled in the art will appreciate that, for this and other processes and methods disclosed herein, the functions performed in the processes and methods may be implemented in differing order. Furthermore, the outlined steps and operations are only provided as examples, and some of the steps and operations may be optional, combined into fewer steps and operations, or expanded into additional steps and operations without detracting from the essence of the disclosed embodiments.
[0074] The present disclosure is not to be limited in terms of the particular
embodiments described in this application, which are intended as illustrations of various aspects. Many modifications and variations can be made without departing from its spirit and scope, as will be apparent to those skilled in the art. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims. The present disclosure is to be limited only by the terms of the appended claims, along with the full scope of equivalents to which such claims are entitled. It is to be understood that this disclosure is not limited to particular methods, reagents, compounds, compositions or biological systems, which can, of course, vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting.
[0075] In an illustrative embodiment, any of the operations, processes, etc. described herein can be implemented as computer-readable instructions stored on a computer-readable medium. The computer-readable instructions can be executed by a processor of a mobile unit, a network element, and/or any other computing device.
[0076] There is little distinction left between hardware and software implementations of aspects of systems; the use of hardware or software is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. There are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
[0077] The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of skill in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a CD, a DVD, a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
[0078] Those skilled in the art will recognize that it is common within the art to describe devices and/or processes in the fashion set forth herein, and thereafter use engineering practices to integrate such described devices and/or processes into data processing systems. That is, at least a portion of the devices and/or processes described herein can be integrated into a data processing system via a reasonable amount of experimentation. Those having skill in the art will recognize that a typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities). A typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
[0079] The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely examples, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being "operably connected", or "operably coupled", to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being "operably couplable", to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
[0080] With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
[0081] It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., " a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., " a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" will be understood to include the possibilities of "A" or "B" or "A and B."
[0082] In addition, where features or aspects of the disclosure are described in terms of Markush groups, those skilled in the art will recognize that the disclosure is also thereby described in terms of any individual member or subgroup of members of the Markush group.
[0083] As will be understood by one skilled in the art, for any and all purposes, such as in terms of providing a written description, all ranges disclosed herein also encompass any and all possible subranges and combinations of subranges thereof. Any listed range can be easily recognized as sufficiently describing and enabling the same range being broken down into at least equal halves, thirds, quarters, fifths, tenths, etc. As a non-limiting example, each range discussed herein can be readily broken down into a lower third, middle third and upper third, etc. As will also be understood by one skilled in the art, all language such as "up to," "at least," and the like includes the number recited and refers to ranges which can be subsequently broken down into subranges as discussed above. Finally, as will be understood by one skilled in the art, a range includes each individual member. Thus, for example, a group having 1-3 cells refers to groups having 1, 2, or 3 cells. Similarly, a group having 1-5 cells refers to groups having 1, 2, 3, 4, or 5 cells, and so forth.

[0084] From the foregoing, it will be appreciated that various embodiments of the present disclosure have been described herein for purposes of illustration, and that various modifications may be made without departing from the scope and spirit of the present disclosure. Accordingly, the various embodiments disclosed herein are not intended to be limiting, with the true scope and spirit being indicated by the following claims.

Claims

1. A system for displaying augmented data, comprising:
a processor; and
a memory communicatively coupled to the processor, the memory bearing processor instructions that, when executed by the processor, cause the system to at least:
determine augmentation data based on a context associated with a user device;
generate clusters and group the augmentation data into the generated clusters; and
determine rendering formats for the generated clusters, wherein the rendering formats are indicative of conceptual representations of the generated clusters.
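As a reading aid only, the pipeline recited in claim 1 (determine augmentations from context, group them into generated clusters, attach concept-level rendering formats) can be sketched in a few lines of Python. Every name below, from `Cluster` to the dictionary fields, is hypothetical rather than taken from the specification.

```python
# Minimal sketch of the claim 1 pipeline; all names and fields are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Cluster:
    description: str                         # concept description for the cluster
    augmentations: list = field(default_factory=list)
    rendering_format: Optional[dict] = None  # conceptual-representation hints

def process_context(context: dict, repository: list) -> list:
    # 1. Determine augmentation data based on the user device's context.
    relevant = [a for a in repository if a["place"] == context["place"]]
    # 2. Generate clusters and group the augmentation data into them.
    clusters: dict = {}
    for a in relevant:
        clusters.setdefault(a["kind"], Cluster(description=a["kind"]))
        clusters[a["kind"]].augmentations.append(a)
    # 3. Determine a rendering format indicative of each cluster's concept,
    #    e.g. one exemplar marker plus a count instead of N separate markers.
    for c in clusters.values():
        c.rendering_format = {"exemplar": c.description,
                              "count": len(c.augmentations)}
    return list(clusters.values())
```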
2. The system of claim 1 further comprising instructions that, when executed, cause the system to remove portions of previously transmitted augmentation data to define a subset of the previously transmitted augmentation data, and transmit the subset, rendering formats for the subset, the generated clusters, and the rendering formats for the generated clusters.
3. The system of claim 1, wherein determining the augmentation data comprises retrieving the augmentation data from a repository, and wherein the context comprises information about physical and virtual environments associated with the user device.
4. The system of claim 1, wherein generating the clusters comprises:
analyzing properties of the augmentation data;
generating classes of clusters based on the analyzed properties;
associating the generated classes with descriptions; and
grouping the augmentation data into the clusters within the classes based on the descriptions.
5. The system of claim 4, wherein the analyzing properties of the augmentation data comprises determining one or more clustering algorithms from a library of algorithms and applying the one or more clustering algorithms to the augmentation data to derive the properties and to generate the descriptions to be associated with the classes, and wherein determining the one or more clustering algorithms is based on the context and the augmentation data.
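The selection of clustering algorithms from a library in claim 5, driven by both the context and the augmentation data, resembles a dispatch table. The following sketch is one plausible reading; the algorithms, keys, and threshold are invented for illustration.

```python
# Hypothetical algorithm library; claim 5 does not name concrete algorithms.
from collections import defaultdict

def cluster_by_place(augmentations):
    """Group augmentations by the place they annotate; the key doubles as
    the concept description associated with the class (claim 4)."""
    classes = defaultdict(list)
    for a in augmentations:
        classes[a["place"]].append(a)
    return dict(classes)

def cluster_by_modality(augmentations):
    """Group augmentations by sensory modality (visual/auditory/haptic)."""
    classes = defaultdict(list)
    for a in augmentations:
        classes[a["modality"]].append(a)
    return dict(classes)

ALGORITHM_LIBRARY = {"stationary": cluster_by_place,
                     "walking": cluster_by_modality}

def select_and_apply(context, augmentations):
    # The selection depends on the context and on the data itself: here a
    # walking user gets modality-based classes, a stationary one place-based.
    key = "walking" if context.get("speed_mps", 0) > 0.5 else "stationary"
    return ALGORITHM_LIBRARY[key](augmentations)
```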
6. The system of claim 1, wherein determining the rendering formats for the generated clusters comprises deriving properties for the clusters based on the grouped augmentation data and using the derived properties to create the conceptual representations of the generated clusters.
7. The system of claim 6, wherein the conceptual representations of the generated clusters comprise visual, auditory, and haptic representations of the properties exhibited by the clusters.
8. The system of claim 6, wherein the deriving the properties comprises analyzing the grouped augmentation data to determine appearance, behavior, and interactivity properties of the grouped augmentation data.
9. The system of claim 8, wherein the appearance properties comprise look, sound, and feel characteristics common to the augmentation data.
10. The system of claim 8, wherein the behavior properties comprise a behavior common to the augmentation data.
11. The system of claim 8, wherein the interactivity properties comprise an activity common to the augmentation data.
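Claims 6 through 11 factor a cluster's derived properties into appearance (look, sound, and feel), behavior, and interactivity, each taken as what the grouped augmentations have in common. A hypothetical sketch of such a derivation, with invented field names:

```python
# Hypothetical property derivation; only the appearance/behavior/interactivity
# split comes from the claims, the data shape is assumed.
from dataclasses import dataclass

@dataclass
class ClusterProperties:
    appearance: dict    # look/sound/feel characteristics common to the group
    behaviors: set      # behaviors common to the grouped augmentations
    activities: set     # interactivity common to the grouped augmentations

def derive_properties(grouped: list) -> ClusterProperties:
    """Intersect per-augmentation traits to keep only what the cluster shares."""
    behaviors = set.intersection(*(set(a["behaviors"]) for a in grouped))
    activities = set.intersection(*(set(a["activities"]) for a in grouped))
    appearance = {
        key: grouped[0][key] if all(a[key] == grouped[0][key] for a in grouped) else None
        for key in ("look", "sound", "feel")
    }
    return ClusterProperties(appearance, behaviors, activities)
```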
12. A method for rendering augmented reality data on a computing device, comprising:
sending, by the computing device to an augmented reality service, a context of the computing device;
receiving, from the augmented reality service, data representing clusters comprising one or more groupings of augmented reality data; and
rendering the clusters.
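The device-side flow of claim 12 is a plain request/response exchange: send the context, receive cluster data, render it. The sketch below uses only the Python standard library; the endpoint URL and JSON shape are assumptions, since the claim specifies no wire format.

```python
# Hypothetical client flow for claim 12; endpoint and payload are invented.
import json
import urllib.request

def fetch_and_render(context: dict, render) -> None:
    request = urllib.request.Request(
        "https://ar-service.example.com/clusters",   # hypothetical endpoint
        data=json.dumps(context).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        clusters = json.load(response)               # groupings of AR data
    for cluster in clusters:
        render(cluster)  # device-specific visual/auditory/haptic rendering
```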
13. The method of claim 12 further comprising:
receiving, by the computing device, additional augmented reality data omitted from the received clusters; and
rendering the received clusters and the additional augmented reality data.
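Claims 2 and 13 together describe a differential exchange in which the service resends only augmentation data the device does not already hold. A hypothetical sketch of that pruning step, with invented identifiers:

```python
# Hypothetical delta computation for claims 2 and 13; the "id" field is assumed.
def prune_previously_sent(augmentations: list, sent_ids: set) -> list:
    """Keep only augmentations the device has not already received."""
    return [a for a in augmentations if a["id"] not in sent_ids]

subset = prune_previously_sent(
    [{"id": 1, "label": "cafe"}, {"id": 2, "label": "museum"}],
    sent_ids={1},
)
assert subset == [{"id": 2, "label": "museum"}]
```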
14. The method of claim 12, wherein the clusters are generated by determining a plurality of clustering algorithms based on the sent context and the augmented reality data, and applying the plurality of clustering algorithms to the augmented reality data.
15. The method of claim 12, further comprising determining rendering formats for the generated clusters, wherein the rendering formats are indicative of conceptual representations of the generated clusters, and wherein the conceptual representations comprise sensory representations reflecting properties exhibited by the generated clusters.
16. The method of claim 15, wherein the sensory representations comprise visual, auditory, and haptic representations of the properties exhibited by the generated clusters.
17. The method of claim 15, wherein the properties exhibited by the generated clusters are derived from appearance, behavior, and interactivity of the augmented reality data grouped into the generated clusters.
18. A computer readable storage medium having stored thereon instructions that, when executed by one or more processors, cause the one or more processors to perform:
determining augmentations in response to identifying a context of a user device;
grouping the augmentations into clusters;
deriving rendering information for the clusters, the rendering information providing conceptual representations of the clusters; and
rendering the clusters based on the rendering information.
19. The computer readable storage medium of claim 18, wherein the grouping the augmentations into clusters comprises:
determining a clustering algorithm based on at least the augmentations;
applying the clustering algorithm to the augmentations to analyze properties of the augmentations;
generating classes of clusters with hierarchical category structures based on the analyzed properties;
tagging each generated class of clusters with a concept description based on the analyzed properties; and
using the concept description to sort the augmentations into the clusters within the classes.
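Claim 19 reads as a conceptual-clustering recipe: analyze properties, build classes with a hierarchical category structure, tag each class with a concept description, and sort augmentations by that description. One hypothetical two-level hierarchy, for illustration only:

```python
# Hypothetical two-level hierarchy (modality -> place) standing in for the
# claim's "classes of clusters with hierarchical category structures".
def conceptual_cluster(augmentations: list) -> dict:
    hierarchy: dict = {}
    for a in augmentations:
        by_place = hierarchy.setdefault(a["modality"], {})
        by_place.setdefault(a["place"], []).append(a)
    # Tag each leaf class with a concept description built from its path,
    # then use that description as the key the augmentations are sorted under.
    tagged = {}
    for modality, by_place in hierarchy.items():
        for place, members in by_place.items():
            tagged[f"{modality} augmentations near {place}"] = members
    return tagged
```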
20. The computer readable storage medium of claim 18, wherein the deriving rendering information for the clusters comprises deriving properties of the clusters, wherein the derived properties reflect at least behavior, appearance, and interaction of the augmentations grouped into the clusters, and wherein the conceptual representations comprise visual, auditory, and haptic information for rendering the clusters according to the derived properties.
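The last step of claim 20 maps the derived properties onto visual, auditory, and haptic rendering information. A sketch under the assumption that the derived properties arrive as a plain dictionary; the default strings are placeholders, not anything the claims prescribe:

```python
# Hypothetical mapping from derived cluster properties to rendering hints.
def rendering_info(derived: dict) -> dict:
    return {
        "visual": derived.get("look") or "generic exemplar badge",
        "auditory": derived.get("sound") or "soft notification tone",
        "haptic": "pulse" if "tap" in derived.get("activities", ()) else "none",
    }

# Example: a cluster of tappable zombie augmentations.
print(rendering_info({"look": "zombie sprite", "activities": {"tap"}}))
# -> {'visual': 'zombie sprite', 'auditory': 'soft notification tone', 'haptic': 'pulse'}
```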

Priority Applications (6)

Application Number Priority Date Filing Date Title
JP2015529763A JP5980432B2 (en) 2012-08-27 2012-08-27 Augmented reality sample generation
EP12883675.6A EP2888876A4 (en) 2012-08-27 2012-08-27 Generating augmented reality exemplars
PCT/US2012/052505 WO2014035367A1 (en) 2012-08-27 2012-08-27 Generating augmented reality exemplars
US13/879,594 US9607436B2 (en) 2012-08-27 2012-08-27 Generating augmented reality exemplars
KR1020157007902A KR101780034B1 (en) 2012-08-27 2012-08-27 Generating augmented reality exemplars
US15/469,933 US20170263055A1 (en) 2012-08-27 2017-03-27 Generating augmented reality exemplars

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2012/052505 WO2014035367A1 (en) 2012-08-27 2012-08-27 Generating augmented reality exemplars

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US13/879,594 A-371-Of-International US9607436B2 (en) 2012-08-27 2012-08-27 Generating augmented reality exemplars
US15/469,933 Continuation US20170263055A1 (en) 2012-08-27 2017-03-27 Generating augmented reality exemplars

Publications (1)

Publication Number Publication Date
WO2014035367A1 (en) 2014-03-06

Family

ID=50184012

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/052505 WO2014035367A1 (en) 2012-08-27 2012-08-27 Generating augmented reality exemplars

Country Status (5)

Country Link
US (2) US9607436B2 (en)
EP (1) EP2888876A4 (en)
JP (1) JP5980432B2 (en)
KR (1) KR101780034B1 (en)
WO (1) WO2014035367A1 (en)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101844395B1 (en) 2012-08-24 2018-04-02 엠파이어 테크놀로지 디벨롭먼트 엘엘씨 Virtual reality applications
US10210273B2 (en) * 2012-08-31 2019-02-19 Hewlett-Packard Development Company, L.P. Active regions of an image with accessible links
US20140168264A1 (en) 2012-12-19 2014-06-19 Lockheed Martin Corporation System, method and computer program product for real-time alignment of an augmented reality device
CN103886198B (en) * 2014-03-17 2016-12-07 腾讯科技(深圳)有限公司 Method, terminal, server and the system that a kind of data process
US20150317057A1 (en) * 2014-05-02 2015-11-05 Electronics And Telecommunications Research Institute Navigation apparatus for providing social network service (sns) service based on augmented reality, metadata processor, and metadata processing method in augmented reality navigation system
US9277180B2 (en) * 2014-06-30 2016-03-01 International Business Machines Corporation Dynamic facial feature substitution for video conferencing
US9204098B1 (en) 2014-06-30 2015-12-01 International Business Machines Corporation Dynamic character substitution for web conferencing based on sentiment
WO2016053228A1 (en) * 2014-09-29 2016-04-07 Aurasma Limited Targeting campaign in augmented reality
US20160112479A1 (en) * 2014-10-16 2016-04-21 Wipro Limited System and method for distributed augmented reality
US10565560B2 (en) * 2014-11-12 2020-02-18 Successfactors, Inc. Alternative people charting for organizational charts
US10042832B1 (en) 2015-01-16 2018-08-07 Google Llc Systems and methods for stacking marginal annotations
JP6344311B2 (en) * 2015-05-26 2018-06-20 ソニー株式会社 Display device, information processing system, and control method
US10169917B2 (en) 2015-08-20 2019-01-01 Microsoft Technology Licensing, Llc Augmented reality
US10235808B2 (en) 2015-08-20 2019-03-19 Microsoft Technology Licensing, Llc Communication system
US9779327B2 (en) * 2015-08-21 2017-10-03 International Business Machines Corporation Cognitive traits avatar for similarity matching
US10762132B2 (en) * 2015-10-29 2020-09-01 Pixured, Inc. System for referring to and/or embedding posts, videos or digital media within another post, video, digital data or digital media within 2D, 3D, 360 degree or spherical applications whereby to reach convergence or grouping
US10762429B2 (en) * 2016-05-18 2020-09-01 Microsoft Technology Licensing, Llc Emotional/cognitive state presentation
US10154191B2 (en) 2016-05-18 2018-12-11 Microsoft Technology Licensing, Llc Emotional/cognitive state-triggered recording
KR101865875B1 (en) * 2016-06-21 2018-07-11 한양대학교 에리카산학협력단 Augmented reality providing mehtod using social network service
KR101870407B1 (en) * 2016-06-21 2018-06-28 한양대학교 에리카산학협력단 Augmented reality providing system using social network service
US10002454B2 (en) 2016-09-16 2018-06-19 International Business Machines Corporation Reactive overlays of multiple representations using augmented reality
JP6808419B2 (en) * 2016-09-26 2021-01-06 キヤノン株式会社 Image processing system and its control method
WO2018118657A1 (en) 2016-12-21 2018-06-28 Pcms Holdings, Inc. Systems and methods for selecting spheres of relevance for presenting augmented reality information
CN110268448B (en) 2017-02-20 2023-11-24 交互数字Vc控股公司 Dynamically presenting augmented reality information to reduce peak cognitive demands
US10379606B2 (en) * 2017-03-30 2019-08-13 Microsoft Technology Licensing, Llc Hologram anchor prioritization
US20180300916A1 (en) * 2017-04-14 2018-10-18 Facebook, Inc. Prompting creation of a networking system communication with augmented reality elements in a camera viewfinder display
US10621417B2 (en) * 2017-04-16 2020-04-14 Facebook, Inc. Systems and methods for generating content
US10373390B2 (en) * 2017-11-17 2019-08-06 Metatellus Oü Augmented reality based social platform
US10846532B2 (en) * 2018-02-27 2020-11-24 Motorola Solutions, Inc. Method and apparatus for identifying individuals using an augmented-reality application
US10818093B2 (en) 2018-05-25 2020-10-27 Tiff's Treats Holdings, Inc. Apparatus, method, and system for presentation of multimedia content including augmented reality content
US10984600B2 (en) 2018-05-25 2021-04-20 Tiff's Treats Holdings, Inc. Apparatus, method, and system for presentation of multimedia content including augmented reality content
US11030811B2 (en) 2018-10-15 2021-06-08 Orbit Technology Corporation Augmented reality enabled layout system and method
GB201817061D0 (en) 2018-10-19 2018-12-05 Sintef Tto As Manufacturing assistance system
US11321411B1 (en) * 2018-12-28 2022-05-03 Meta Platforms, Inc. Systems and methods for providing content
US11048391B2 (en) * 2019-01-03 2021-06-29 International Business Machines Corporation Method, system and computer program for copy and paste operations
US11024089B2 (en) 2019-05-31 2021-06-01 Wormhole Labs, Inc. Machine learning curated virtualized personal space
US11886767B2 (en) 2022-06-17 2024-01-30 T-Mobile Usa, Inc. Enable interaction between a user and an agent of a 5G wireless telecommunication network using augmented reality glasses

Family Cites Families (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6037936A (en) 1993-09-10 2000-03-14 Criticom Corp. Computer vision system with a graphic user interface and remote camera control
US5874966A (en) 1995-10-30 1999-02-23 International Business Machines Corporation Customizable graphical user interface that automatically identifies major objects in a user-selected digitized color image and permits data to be associated with the major objects
US5835094A (en) 1996-12-31 1998-11-10 Compaq Computer Corporation Three-dimensional computer environment
US6252597B1 (en) 1997-02-14 2001-06-26 Netscape Communications Corporation Scalable user interface for graphically representing hierarchical data
US7073129B1 (en) 1998-12-18 2006-07-04 Tangis Corporation Automated selection of appropriate information based on a computer user's context
US8341553B2 (en) 2000-02-17 2012-12-25 George William Reed Selection interface systems and methods
US7076503B2 (en) * 2001-03-09 2006-07-11 Microsoft Corporation Managing media objects in a database
JP3871904B2 (en) * 2001-06-07 2007-01-24 日立ソフトウエアエンジニアリング株式会社 How to display a dendrogram
US7653212B2 (en) 2006-05-19 2010-01-26 Universal Electronics Inc. System and method for using image data in connection with configuring a universal controlling device
US7071842B1 (en) 2002-06-27 2006-07-04 Earthcomber, Llc System and method for locating and notifying a user of a person, place or thing having attributes matching the user's stated preferences
US20050039133A1 (en) 2003-08-11 2005-02-17 Trevor Wells Controlling a presentation of digital content
US7313574B2 (en) 2003-10-02 2007-12-25 Nokia Corporation Method for clustering and querying media items
US7565139B2 (en) 2004-02-20 2009-07-21 Google Inc. Image-based search engine for mobile phones with camera
US20050289158A1 (en) 2004-06-25 2005-12-29 Jochen Weiss Identifier attributes for product data stored in an electronic database
WO2006036442A2 (en) 2004-08-31 2006-04-06 Gopalakrishnan Kumar Method and system for providing information services relevant to visual imagery
US8370769B2 (en) 2005-06-10 2013-02-05 T-Mobile Usa, Inc. Variable path management of user contacts
US7728869B2 (en) 2005-06-14 2010-06-01 Lg Electronics Inc. Matching camera-photographed image with map data in portable terminal and travel route guidance method
US20070050468A1 (en) 2005-08-09 2007-03-01 Comverse, Ltd. Reality context menu (RCM)
US7836065B2 (en) 2005-11-01 2010-11-16 Sap Ag Searching multiple repositories in a digital information system
US7725077B2 (en) 2006-03-24 2010-05-25 The Invention Science Fund 1, Llc Wireless device with an aggregate user interface for controlling other devices
EP1840511B1 (en) 2006-03-31 2016-03-02 BlackBerry Limited Methods and apparatus for retrieving and displaying map-related data for visually displayed maps of mobile communication devices
JP5029874B2 (en) * 2006-12-28 2012-09-19 富士通株式会社 Information processing apparatus, information processing method, and information processing program
US7752207B2 (en) 2007-05-01 2010-07-06 Oracle International Corporation Crawlable applications
US8290513B2 (en) 2007-06-28 2012-10-16 Apple Inc. Location-based services
US8281240B2 (en) * 2007-08-23 2012-10-02 International Business Machines Corporation Avatar aggregation in a virtual universe
US8180396B2 (en) 2007-10-18 2012-05-15 Yahoo! Inc. User augmented reality for camera-enabled mobile devices
US9503562B2 (en) 2008-03-19 2016-11-22 Universal Electronics Inc. System and method for appliance control via a personal communication or entertainment device
US20090262084A1 (en) 2008-04-18 2009-10-22 Shuttle Inc. Display control system providing synchronous video information
US9870130B2 (en) 2008-05-13 2018-01-16 Apple Inc. Pushing a user interface to a remote device
US9311115B2 (en) 2008-05-13 2016-04-12 Apple Inc. Pushing a graphical user interface to a remote device with display rules provided by the remote device
US8711176B2 (en) 2008-05-22 2014-04-29 Yahoo! Inc. Virtual billboards
US20090322671A1 (en) 2008-06-04 2009-12-31 Cybernet Systems Corporation Touch screen augmented reality system and method
US8260320B2 (en) 2008-11-13 2012-09-04 Apple Inc. Location specific content
US9342231B2 (en) 2008-12-29 2016-05-17 Apple Inc. Remote control of a presentation
US7870496B1 (en) 2009-01-29 2011-01-11 Jahanzeb Ahmed Sherwani System using touchscreen user interface of a mobile device to remotely control a host computer
US20130124311A1 (en) 2009-03-23 2013-05-16 Sujai Sivanandan System and Method for Dynamic Integration of Advertisements in a Virtual Environment
EP2472447A4 (en) 2009-05-18 2014-08-20 Takatoshi Yanase Knowledge base system, logical operation method, program, and storage medium
US8352465B1 (en) * 2009-09-03 2013-01-08 Google Inc. Grouping of image search results
KR101595762B1 (en) 2009-11-10 2016-02-22 삼성전자주식회사 Method for controlling remote of portable terminal and system for the same
US8850342B2 (en) * 2009-12-02 2014-09-30 International Business Machines Corporation Splitting avatars in a virtual world
KR20110118421A (en) 2010-04-23 2011-10-31 엘지전자 주식회사 Augmented remote controller, augmented remote controller controlling method and the system for the same
KR101657565B1 (en) 2010-04-21 2016-09-19 엘지전자 주식회사 Augmented Remote Controller and Method of Operating the Same
US20110161875A1 (en) 2009-12-29 2011-06-30 Nokia Corporation Method and apparatus for decluttering a mapping display
US8725706B2 (en) 2010-03-26 2014-05-13 Nokia Corporation Method and apparatus for multi-item searching
US8990702B2 (en) 2010-09-30 2015-03-24 Yahoo! Inc. System and method for controlling a networked display
US9021354B2 (en) 2010-04-09 2015-04-28 Apple Inc. Context sensitive remote device
US20110316845A1 (en) 2010-06-25 2011-12-29 Palo Alto Research Center Incorporated Spatial association between virtual and augmented reality
KR101329882B1 (en) * 2010-08-12 2013-11-15 주식회사 팬택 Apparatus and Method for Displaying Augmented Reality Window
US9710554B2 (en) * 2010-09-23 2017-07-18 Nokia Technologies Oy Methods, apparatuses and computer program products for grouping content in augmented reality
US20120212405A1 (en) 2010-10-07 2012-08-23 Benjamin Zeis Newhouse System and method for presenting virtual and augmented reality scenes to a user
JP5257437B2 (en) 2010-10-20 2013-08-07 コニカミノルタビジネステクノロジーズ株式会社 Method for operating portable terminal and processing device
KR101357260B1 (en) * 2010-10-22 2014-02-03 주식회사 팬택 Apparatus and Method for Providing Augmented Reality User Interface
US8698843B2 (en) 2010-11-02 2014-04-15 Google Inc. Range of focus in an augmented reality application
US20120173979A1 (en) 2010-12-31 2012-07-05 Openpeak Inc. Remote control system and method with enhanced user interface
US20120182205A1 (en) 2011-01-18 2012-07-19 Schlumberger Technology Corporation Context driven heads-up display for efficient window interaction
US8929591B2 (en) 2011-03-08 2015-01-06 Bank Of America Corporation Providing information associated with an identified representation of an object
KR101829063B1 (en) * 2011-04-29 2018-02-14 삼성전자주식회사 Method for displaying marker in map service
US9727132B2 (en) 2011-07-01 2017-08-08 Microsoft Technology Licensing, Llc Multi-visor: managing applications in augmented reality environments
TWI452527B (en) 2011-07-06 2014-09-11 Univ Nat Chiao Tung Method and system for application program execution based on augmented reality and cloud computing
US9058331B2 (en) * 2011-07-27 2015-06-16 Ricoh Co., Ltd. Generating a conversation in a social network based on visual search results
KR101343609B1 (en) 2011-08-24 2014-02-07 주식회사 팬택 Apparatus and Method for Automatically recommending Application using Augmented Reality Data
JP5987299B2 (en) 2011-11-16 2016-09-07 ソニー株式会社 Display control apparatus, display control method, and program
US20130155108A1 (en) 2011-12-15 2013-06-20 Mitchell Williams Augmented Reality User Interaction Methods, Computing Devices, And Articles Of Manufacture
US10001918B2 (en) 2012-11-21 2018-06-19 Algotec Systems Ltd. Method and system for providing a specialized computer input device
US10025486B2 (en) 2013-03-15 2018-07-17 Elwha Llc Cross-reality select, drag, and drop for augmented reality systems

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090167787A1 (en) * 2007-12-28 2009-07-02 Microsoft Corporation Augmented reality and filtering
US20090177484A1 (en) * 2008-01-06 2009-07-09 Marc Eliot Davis System and method for message clustering
US20120102050A1 (en) * 2009-07-01 2012-04-26 Simon James Button Systems And Methods For Determining Information And Knowledge Relevancy, Relevent Knowledge Discovery And Interactions, And Knowledge Creation
US20110310120A1 (en) * 2010-06-17 2011-12-22 Microsoft Corporation Techniques to present location information for social networks using augmented reality
WO2012033768A2 (en) 2010-09-07 2012-03-15 Qualcomm Incorporated Efficient information presentation for augmented reality
US20120113138A1 (en) * 2010-11-04 2012-05-10 Nokia Corporation Method and apparatus for annotating point of interest information
US20120113223A1 (en) * 2010-11-05 2012-05-10 Microsoft Corporation User Interaction in Augmented Reality
US20120162254A1 (en) * 2010-12-22 2012-06-28 Anderson Glen J Object mapping techniques for mobile augmented reality applications

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2888876A4

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9092818B2 (en) 2013-01-31 2015-07-28 Wal-Mart Stores, Inc. Method and system for answering a query from a consumer in a retail store
WO2015187897A1 (en) * 2014-06-06 2015-12-10 Microsoft Technology Licensing, Llc Augmented data view
CN106462567A (en) * 2014-06-06 2017-02-22 微软技术许可有限责任公司 Augmented data view
EP3465620B1 (en) * 2016-05-31 2023-08-16 Microsoft Technology Licensing, LLC Shared experience with contextual augmentation

Also Published As

Publication number Publication date
EP2888876A4 (en) 2016-04-20
US20170263055A1 (en) 2017-09-14
US9607436B2 (en) 2017-03-28
EP2888876A1 (en) 2015-07-01
JP5980432B2 (en) 2016-08-31
KR101780034B1 (en) 2017-09-19
US20140204119A1 (en) 2014-07-24
KR20150046313A (en) 2015-04-29
JP2015534154A (en) 2015-11-26

Similar Documents

Publication Publication Date Title
US9607436B2 (en) Generating augmented reality exemplars
JP6706647B2 (en) Method and apparatus for recognition and matching of objects represented in images
US11443460B2 (en) Dynamic mask application
CN109952610B (en) Selective identification and ordering of image modifiers
JP5843207B2 (en) Intuitive computing method and system
US10032303B2 (en) Scrolling 3D presentation of images
US11676378B2 (en) Providing travel-based augmented reality content with a captured image
KR20120127655A (en) Intuitive computing methods and systems
US11769500B2 (en) Augmented reality-based translation of speech in association with travel
US20230300292A1 (en) Providing shared augmented reality environments within video calls
US20230091214A1 (en) Augmented reality items based on scan
WO2021252232A1 (en) Message interface expansion system
US20230368444A1 (en) Rendering customized video call interfaces during a video call
Grubert Mobile Augmented Reality for Information Surfaces
CN116781853A (en) Providing a shared augmented reality environment in a video call

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase (Ref document number: 13879594; Country of ref document: US)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 12883675; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2012883675; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2015529763; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 20157007902; Country of ref document: KR; Kind code of ref document: A)