CN116097314A - Content item management in an augmented reality environment


Info

Publication number
CN116097314A
Authority
CN
China
Prior art keywords
virtual container
enhancement
artificial reality
contextual
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180053806.5A
Other languages
Chinese (zh)
Inventor
米哈尔·赫拉瓦克
巴雷特·福克斯
梅林·邓
托德·哈里斯
格雷戈里·阿尔特
亚历克斯·马科利娜
海登·舍恩
亚瑟·兹维金库
詹姆斯·蒂切诺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Meta Platforms Technologies LLC
Original Assignee
Meta Platforms Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Meta Platforms Technologies LLC
Publication of CN116097314A
Legal status: Pending

Classifications

    • G06F 9/451 — Arrangements for program control; execution arrangements for user interfaces
    • G06T 19/006 — Manipulating 3D models or images for computer graphics; mixed reality
    • G06F 3/011 — Interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 — Eye tracking input arrangements
    • G06F 3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06T 19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06V 10/945 — Image or video understanding; user interactive design; environments; toolboxes
    • G06V 20/20 — Scene-specific elements in augmented reality scenes
    • G06V 40/19 — Eye characteristics, e.g. of the iris; sensors therefor
    • G06T 2200/24 — Indexing scheme for image data processing or generation involving graphical user interfaces [GUIs]
    • G06T 2219/2004 — Indexing scheme for editing of 3D models; aligning objects, relative positioning of parts
    • G06T 2219/2016 — Indexing scheme for editing of 3D models; rotation, translation, scaling

Abstract

Aspects of the present disclosure relate to providing an artificial reality environment with augments and surfaces. An "augment" is a virtual container in 3D space that can hold presentation data, context, and logic. The artificial reality system can use augments as basic building blocks for displaying 2D and 3D models in the artificial reality environment. For example, augments can represent people, places, and things in the artificial reality environment and can respond to context such as the current display mode, the time of day, the type of surface the augment is on, relationships with other augments, and so forth. Augments can be located on a "surface" having a layout and properties that cause the augments on that surface to be displayed in different ways. Augments and other objects (real or virtual) can also interact, where these interactions can be controlled by rules for the objects that are evaluated based on information from a shell program.

Description

Content item management in an augmented reality environment
Cross Reference to Related Applications
This application is related to U.S. Patent Application Ser. No. 17/008,478, entitled "ARTIFICIAL REALITY AUGMENTS AND SURFACES," filed on August 31, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to new artificial reality control and interaction modalities that use augments as basic objects.
Background
Interactions with a computing system are typically based on a set of core concepts that define how users can interact with that computing system. For example, early operating systems provided a textual interface for interacting with a file directory. This was later built upon with "windowing" systems, in which levels of the file directory and executing applications were shown in multiple windows, each window allocated a portion of a 2D display populated with content selected for that window (e.g., all files from the same level of the directory, a graphical user interface generated by an application, operating system menus or controls, etc.). As computing form factors shrank and integrated hardware capabilities grew (e.g., cameras, GPS, wireless antennas, etc.), the core concepts evolved again toward "applications," each encapsulating a set of computing functionality.
Existing artificial reality (XR) systems provide models, such as 3D virtual objects and 2D panels, with which a user can interact in 3D space. Existing XR systems have generally supported these models by extending the "application" core computing concept. For example, a user may instantiate a model by activating an application, instructing the application to create the model, and then using the model as an interface back to the application. Such approaches generally require recreating in virtual space the kinds of interactions traditionally performed with a mobile device, and they require the application to keep executing for the model to remain in the artificial reality environment.
Disclosure of Invention
According to the present invention, there is provided a method of generating a virtual container for an artificial reality environment, the method comprising: receiving a request for a virtual container, wherein the request is associated with a manifest specifying one or more parameters of the virtual container; creating a data structure for the virtual container by performing a first portion of an initialization procedure for the data structure, the creating comprising setting one or more attributes based on the parameters specified in the manifest, the virtual container comprising a plurality of context-responsive display modes and context-responsive logic; providing, in response to the request, a handle to the data structure, the handle enabling positioning of the virtual container in the artificial reality environment; and executing a second portion of the initialization procedure, wherein the second portion of the initialization procedure is executed while the handle is used to position the virtual container in the artificial reality environment.
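Purely as an illustrative, non-limiting sketch of this two-phase initialization (the class and function names below, e.g., ShellStub and VirtualContainerHandle, are assumptions of this description and are not defined in the present disclosure), the flow might be organized as follows in Python:

    import threading
    import uuid

    class VirtualContainerHandle:
        """Hypothetical handle the requestor uses to position the new container."""
        def __init__(self, container, shell):
            self._container = container
            self._shell = shell

        def set_position(self, x, y, z):
            # Using the handle to position the container triggers the second
            # portion of initialization, which runs while positioning proceeds.
            self._shell.finish_initialization(self._container)
            self._container["position"] = (x, y, z)

    class ShellStub:
        """Hypothetical artificial reality shell program."""
        def create_virtual_container(self, manifest):
            # First portion of initialization: build the data structure and set
            # attributes from the parameters specified in the manifest.
            container = {
                "id": uuid.uuid4().hex,
                "display_modes": manifest.get("display_modes", {}),
                "position": None,
                "registered_factors": [],
                "fully_initialized": False,
            }
            return VirtualContainerHandle(container, self)

        def finish_initialization(self, container):
            # Second portion: e.g., register the container for the contextual
            # factors its display-mode conditions depend on.
            if not container["fully_initialized"]:
                container["fully_initialized"] = True
                threading.Thread(
                    target=self._register_contextual_factors, args=(container,)
                ).start()

        def _register_contextual_factors(self, container):
            container["registered_factors"] = sorted(
                {cond["factor"]
                 for mode in container["display_modes"].values()
                 for cond in mode.get("conditions", [])})

    # Example use: request the container, then position it with the handle.
    shell = ShellStub()
    handle = shell.create_virtual_container(
        {"display_modes": {"poster": {"conditions": [{"factor": "surface_type"}]}}})
    handle.set_position(0.0, 1.5, -2.0)

In this sketch, the handle is returned after only the first portion of initialization has completed, and the remaining registration work runs concurrently with the first use of the handle.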
Optionally, after the initialization procedure, the virtual container receives one or more contextual factors.
Optionally, the virtual container enables one of the plurality of context-responsive display modes in response to evaluating a corresponding condition that uses the one or more contextual factors; or, optionally, the virtual container invokes at least a portion of the context-responsive logic in response to evaluating a corresponding condition that uses the one or more contextual factors.
Optionally, the handle further enables the addition of content items to the virtual container.
Optionally, one or more content items are added to the virtual container using the handle while the second portion of the initialization procedure is executed.
The second portion of the initialization procedure may optionally include registering the virtual container to receive contextual factors.
The second portion of the initialization procedure may alternatively or additionally include identifying the contextual factors that the virtual container should receive, by determining which contextual factors are used in conditions corresponding to one or more of the plurality of context-responsive display modes or to portions of the context-responsive logic, and registering the virtual container to receive the identified contextual factors.
Optionally, the parameters of the virtual container specified in the manifest include at least one of the plurality of context-responsive display modes, and optionally one or more of: a virtual container type, a container shape, a spatial orientation, a location in the artificial reality environment, a type of location at which the virtual container may be placed, or any combination thereof. For example, a manifest-specified display mode may include one or more contextual factor values that must hold for that display mode to be enabled, such as: the type of surface on which the virtual container may be placed in that display mode; whether the virtual container is movable; a volume for audio presentation; a name, ID, or virtual container type; an owner of the virtual container; an initial position of the virtual container; and so on.
Optionally, parameters are specified for the virtual container for different display modes or contexts. Optionally, the specified parameters may include the contextual factors of which the virtual container needs to be informed in order to enable display modes or invoke logic.
Optionally, default and/or inherited properties and/or logic may be specified for the virtual container. For example, all virtual containers may be specified to have certain display modes (e.g., audio-only, minimized, interactive, etc.) that correspond to modes of the artificial reality system. Optionally, the display modes specified in the manifest may extend these default display modes, optionally allowing the default display modes to set constraints on the extended display modes. Examples of the contextual factors that the conditions of the various display modes may specify include surface features, relationships to other real or virtual objects, movement characteristics, lighting, time, date, user interactions, detected sounds, artificial reality system modes, and the like. Examples of predefined virtual container types include person, 2D object, 3D object, sticker, event, or free-form.
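For illustration only, a manifest of the kind described above might resemble the following Python structure; the field names are hypothetical and do not represent a schema defined in the present disclosure:

    # Hypothetical manifest for a "photo" virtual container; field names are
    # illustrative only and are not defined by the present disclosure.
    photo_manifest = {
        "name": "vacation_photo",
        "type": "2d_object",             # one of the predefined container types
        "owner": "user_123",
        "initial_position": "gaze",      # e.g., place based on the user's gaze
        "display_modes": {
            "gallery": {
                # Contextual factor values that must hold to enable this mode
                "conditions": {"surface_type": "vertical"},
                "size": (0.4, 0.3, 0.01),
                "movable": True,
            },
            "minimized": {
                # Extends a default mode supplied by the artificial reality
                # system for its minimized/do-not-disturb state.
                "extends": "system_minimized",
                "audio_volume": 0.0,
            },
        },
        # Contextual factors the container needs to be informed of
        "contextual_factors": ["surface_type", "system_mode"],
    }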
Optionally, the virtual container type may correspond to a type of node in a social graph.
Optionally, a particular type of virtual container may have default values for that type, such as default properties, logic, and/or automatically added content items. For example, a "post" virtual container may have a predefined visual structure, controls, and content automatically added based on the node in the social graph that corresponds to the post.
Optionally, the artificial reality system provides the virtual container as a data volume having the attributes specified in the manifest for placement by a user in the artificial reality environment.
Optionally, the virtual container is a data volume in three-dimensional space that may be filled with content and/or output presentation data. Optionally, the virtual container may hold data, respond to the current virtual container context, and/or have logic.
Optionally, the request may be generated as a result of a user performing an interaction with a content item located in another previously created virtual container and associated with another manifest, wherein optionally the interaction is predefined as signaling the creation of a new virtual container based on the content item.
Optionally, the request may be further associated with an indication of the gaze direction of the user.
The virtual container may optionally be placed in the artificial reality environment initially based on the gaze direction of the user.
Optionally, the enabling of the one of the plurality of context-responsive display modes causes the virtual container to be set to a maximum size and moved to a particular location corresponding to the current mode of the artificial reality system.
Optionally, the virtual container receives contextual factors specifying a value of a current mode of the artificial reality system.
Optionally, one of the plurality of context-responsive display modes corresponds to a condition that evaluates to true when the value of the current mode of the artificial reality system is provided.
Optionally, the virtual container enables the one of the plurality of context responsive display modes in response to evaluating the corresponding condition as true.
Optionally, one or more of the plurality of context-responsive display modes are extended context-responsive display modes that are extensions of other ones of the plurality of context-responsive display modes.
Optionally, each of the one or more extended context response display modes references another one of the plurality of context response display modes.
Optionally, the condition for enabling a particular extended context response display mode is that the condition associated with the context response display mode referenced by the particular extended context response display mode evaluates to true.
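The relationship between an extended display mode and the mode it references could, for example, be evaluated as in the following minimal Python sketch (the DisplayMode structure is an assumption of this description, not a definition from the present disclosure):

    from dataclasses import dataclass
    from typing import Callable, Dict, Optional

    @dataclass
    class DisplayMode:
        name: str
        # Condition over contextual factors; None means "always enabled".
        condition: Optional[Callable[[Dict[str, object]], bool]] = None
        extends: Optional[str] = None  # name of the referenced display mode

    def mode_enabled(mode: DisplayMode, modes: Dict[str, DisplayMode],
                     context: Dict[str, object]) -> bool:
        """An extended mode is enabled when the referenced mode's condition holds."""
        if mode.extends is not None:
            return mode_enabled(modes[mode.extends], modes, context)
        return mode.condition is None or mode.condition(context)

    modes = {
        "minimized": DisplayMode(
            "minimized", condition=lambda c: c.get("system_mode") == "do_not_disturb"),
        "minimized_badge": DisplayMode("minimized_badge", extends="minimized"),
    }
    print(mode_enabled(modes["minimized_badge"], modes,
                       {"system_mode": "do_not_disturb"}))  # True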
Optionally, at least one of the plurality of context-responsive display modes is added to the data structure by creating the data structure for one of the predefined types of virtual containers specified in the request.
Optionally, each data structure of the predefined type is configured to include the at least one of the plurality of context responsive display modes.
Optionally, the data structure is created for a virtual container of some type specified in the request.
Optionally, default values, type-based values, and/or other values beyond those in the manifest may be set in the virtual container data structure.
Optionally, the method further comprises automatically adding one or more content items to the virtual container of the specified type based on rules for adding content items to the virtual container.
According to the present invention there is further provided a computing system for generating a virtual container for an artificial reality environment, the system comprising: one or more processors; and one or more memories storing instructions that, when executed by the one or more processors, cause the computing system to perform a process comprising: receiving a request for a virtual container, wherein the request is associated with a manifest specifying one or more parameters of the virtual container; creating a data structure for the virtual container by performing a first portion of an initialization procedure for the data structure based on the manifest; providing, in response to the request, a handle to the data structure to enable the addition of content items to the virtual container; and executing a second portion of the initialization procedure, wherein the second portion of the initialization procedure is executed while the handle is used to add one or more content items to the virtual container.
Optionally, the initializing procedure includes setting an attribute in the data structure based on the manifest.
Optionally, the attributes include at least: a context-responsive display mode; and one or both of a container shape and a type of location at which the virtual container may be placed.
Optionally, the virtual container is of a predefined type for which the manifest includes corresponding default values. For example, the virtual container may be of a person, 2D media, post, event, or 3D model type, and the manifest may include default layouts and content corresponding to these types.
Optionally, the request is generated as a result of a user performing an interaction with a content item located in another, previously created virtual container that is associated with another manifest. Optionally, the interaction is predefined as signaling the creation of a new virtual container based on the content item.
Optionally, the virtual container receives contextual factors specifying a value of a current mode of the artificial reality system. Optionally, the context response display mode included in the data structure corresponds to a condition that evaluates to true when the value of the current mode of the artificial reality system is provided. Optionally, the virtual container enables the context response display mode in response to evaluating the corresponding condition as true.
Optionally, the data structure is created for a virtual container of a type specified in the request, and optionally the process further comprises automatically adding one or more content items to the virtual container based on rules for adding content items to the virtual container of the specified type.
Optionally, the system includes at least one mediator configured to mediate resources between the computing system hardware and specialized components. Optionally, the computing system hardware may include the one or more processors and the one or more memories.
Optionally, the mediator may comprise: an operating system; a service; a driver; a basic input/output system (BIOS); a controller circuit; and/or other hardware and/or software systems.
Optionally, the specialized components may include software and/or hardware configured to perform operations for creating and managing virtual containers and surfaces in an artificial reality environment. The specialized components may optionally include at least one of: a virtual container creator; a context tracker; contextual factor registrations; a surface creator; and/or interface components and APIs. Optionally, the interface components and/or APIs may be used to provide a user interface, to transfer data, and/or to control the specialized components.
Optionally, the specialized component may be located in a computing system distributed across multiple computing devices. Optionally, the computing system may be an interface to a server-based application executing one or more of the specialized components.
The specialized components may be logical or other non-physical divisions of functionality and/or may be submodules or blocks of code of one or more applications.
Optionally, the virtual container creator may receive a new virtual container request associated with a manifest that the virtual container creator uses to create a new virtual container data structure. Optionally, the virtual container data structure includes attributes and functions ("logic").
Optionally, the virtual container data structure may include attributes such as a virtual container ID, a virtual container location, a virtual container type, a current display mode, an ID of a parent virtual container, and the like. Optionally, the virtual container data structure functions may include functions for adding and removing content, setting the location, setting the display mode, and the like. Other attributes and functions may be included in the virtual container data structure and may be defined differently. Optionally, the virtual container data structure may include imperatively defined functions. Optionally, the virtual container data structure may include declarative logic.
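As one hypothetical rendering of such a data structure (attribute and method names are illustrative assumptions, not definitions from the present disclosure), a virtual container might be sketched in Python as:

    from dataclasses import dataclass, field
    from typing import Any, Callable, Dict, List, Optional, Tuple

    @dataclass
    class VirtualContainer:
        """Illustrative data structure; attribute and method names are assumptions."""
        container_id: str
        container_type: str = "free_form"
        location: Optional[Tuple[float, float, float]] = None
        current_display_mode: Optional[str] = None
        parent_id: Optional[str] = None
        content: List[Any] = field(default_factory=list)
        # Declarative logic: (condition over contextual factors, action on self)
        logic: List[Tuple[Callable[[Dict[str, Any]], bool],
                          Callable[["VirtualContainer"], None]]] = field(default_factory=list)

        # Imperatively defined functions ("logic") of the data structure
        def add_content(self, item: Any) -> None:
            self.content.append(item)

        def remove_content(self, item: Any) -> None:
            self.content.remove(item)

        def set_location(self, location: Tuple[float, float, float]) -> None:
            self.location = location

        def set_display_mode(self, mode: str) -> None:
            self.current_display_mode = mode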
Providing the user with a handle to the virtual container data structure allows the user to manipulate/interact with the virtual container, e.g., by filling the virtual container with content items, placing the virtual container in an artificial reality environment, setting parameters, etc.
The user is provided with a handle to the virtual container data structure before the virtual container is fully initialized (i.e., after only the first portion of the initialization procedure). This permits the user to perform initial virtual container manipulations (e.g., filling and/or placement) before the virtual container is fully formed, while the artificial reality system simultaneously completes virtual container initialization. For example, while the user manipulates the virtual container, the virtual container creator may complete initialization by, for example, registering the virtual container to receive the contextual factors required by the virtual container's logic and/or display modes, so that it can be determined whether to execute the logic or enable the display modes.
Optionally, the virtual container request is sent to a shell program of the artificial reality system along with a virtual container manifest. Optionally, the shell creates a virtual container based on the manifest and returns a handle to the user. Optionally, the user may then write content to the virtual container (including, for example, writing different versions for different display properties specified in the manifest) before the virtual container is fully formed or fully initialized. For example, when the handle is provided, the virtual container may still have default or type-specific content to be written into it by the shell of the artificial reality system, may not yet have been registered for contextual factor notifications, or may have other portions of the initialization procedure that have not yet completed. These additional portions of the virtual container initialization may be completed while the user uses the provided handle. This provides the advantage that placement and content filling of the virtual container can be performed concurrently with the initialization procedure, providing significant efficiency gains.
Optionally, the shell maintains control over where virtual containers may be displayed and, optionally, which display properties are invoked for the virtual containers, according to the properties specified in the manifest. For example, the manifest may specify allowable placements and display properties; e.g., if the artificial reality system enters a do-not-disturb mode, all virtual containers may enter a minimized mode and move out of the center of the user's field of view. Such a system provides the advantage that a virtual container may be instantiated by the shell program or by another virtual container, but may then continue to exist independently.
Optionally, the process may register the virtual container for contextual factor notifications, which may include assigning an identifier of the virtual container to each contextual factor on the list of contextual factors for which the virtual container is registering. The contextual factors registered for the virtual container may be selected based on the context specified in the manifest, or may be selected based on other parameters set for the virtual container. For example, the process may analyze the display modes defined for the virtual container to determine which contextual factors must be checked to determine whether each display mode is enabled, and may register the virtual container to be notified of changes in any of these contextual factors.
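A minimal sketch of this registration step, assuming a simple mapping from contextual factors to interested container IDs (the representation is illustrative, not prescribed by the present disclosure), might look like:

    def register_for_contextual_factors(container_id, display_modes, registry):
        """Assign the container's ID to each contextual factor its display-mode
        conditions depend on, so the shell can later notify it of changes.

        `display_modes` maps mode names to the set of contextual factors used in
        the condition for that mode (an assumed representation)."""
        needed = set()
        for factors_used in display_modes.values():
            needed.update(factors_used)
        for factor in needed:
            registry.setdefault(factor, set()).add(container_id)
        return needed

    registry = {}
    register_for_contextual_factors(
        "augment_42",
        {"poster": {"surface_type"}, "night": {"surface_type", "time_of_day"}},
        registry)
    # registry == {"surface_type": {"augment_42"}, "time_of_day": {"augment_42"}}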
Optionally, additional initialization of the virtual container is performed. For example, the shell program may add content items of the specified virtual container type to the virtual container.
Optionally, the process executes a virtual container placement procedure. Optionally, an initial virtual container placement is set before the handle is provided to the user, and the user may then update the placement. The placement procedure optionally includes setting a default location or a location specified in the request, such as a location based on the user's area of focus, a location relative to the requesting entity (e.g., the same surface to which the requesting virtual container is attached), or a surface defined for the hand or face of the user associated with the request.
The placement procedure optionally includes keeping the virtual container invisible until the user selects a location. Optionally, where the user will manually select the placement location, the artificial reality system highlights the locations that are valid for the current display mode of the virtual container. For example, if the current display mode of the virtual container specifies that the virtual container must be placed on a vertical surface, surfaces established on walls in the artificial reality environment may be highlighted as visual indicators so that the user knows where the virtual container can be placed.
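For example, filtering the candidate surfaces to highlight against the placement constraint of the current display mode could be sketched as follows (field names are assumptions):

    def highlight_valid_surfaces(surfaces, display_mode):
        """Return the surfaces to highlight for manual placement, i.e., those
        matching the surface type required by the container's current display
        mode. Field names are illustrative."""
        required = display_mode.get("surface_type")   # e.g., "vertical"
        return [s for s in surfaces
                if required is None or s["type"] == required]

    surfaces = [
        {"name": "living_room_wall", "type": "vertical"},
        {"name": "coffee_table", "type": "horizontal"},
    ]
    print(highlight_valid_surfaces(surfaces, {"surface_type": "vertical"}))
    # -> only the wall surface would be highlighted as a placement target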
According to the present invention there is further provided a computer-readable storage medium storing instructions that, when executed by a computing system, cause the computing system to perform a process for providing contextual factors to virtual containers in an artificial reality environment, the process comprising: identifying a change to, or establishment of, one or more contextual factors; identifying one or more virtual containers registered to receive notifications of the change to, or establishment of, the one or more contextual factors; and, in response to identifying the registered one or more virtual containers, providing a notification of the one or more contextual factors to the registered one or more virtual containers; wherein providing the notification causes at least one of the one or more virtual containers to invoke corresponding logic or enable a corresponding display mode.
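By way of illustration, the notification process described above might be sketched as follows in Python, assuming the registry produced during initialization and per-container display-mode conditions (all structures here are hypothetical):

    def notify_contextual_factor_change(factor, value, registry, containers):
        """Deliver a changed contextual factor to the virtual containers that
        registered for it; each container may then enable a display mode or
        invoke logic. Structures are assumed for illustration."""
        for container_id in registry.get(factor, set()):
            container = containers[container_id]
            container.setdefault("context", {})[factor] = value
            # Enable any display mode whose condition now evaluates to true.
            for mode_name, condition in container["display_modes"].items():
                if condition(container["context"]):
                    container["current_display_mode"] = mode_name

    containers = {
        "augment_42": {
            "display_modes": {
                "minimized": lambda ctx: ctx.get("system_mode") == "do_not_disturb",
            },
            "current_display_mode": "interactive",
        }
    }
    registry = {"system_mode": {"augment_42"}}
    notify_contextual_factor_change("system_mode", "do_not_disturb",
                                    registry, containers)
    print(containers["augment_42"]["current_display_mode"])  # "minimized"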
The disclosed techniques may include or be implemented in conjunction with an artificial reality system as described herein. As used herein, "artificial reality," "extra reality," or "XR" refers to any of VR, AR, MR, or any combination or hybrid thereof.
The systems and methods described herein provide several technical advantages over existing XR systems, which apply application-centric rules for rendering and interacting with virtual objects. Those artificial reality systems provide limited functionality, merely mimicking the traditional user experience in which "applications" and/or "windows" govern object presentation, functionality, placement, and interaction. By relying on application structures designed for desktop and mobile device interactions, these existing systems shift the user experience away from a focus on people and objects. For example, by requiring the user to pull up a window or application interface to create and modify an object, they break the user's perception that the virtual object is real.
Furthermore, existing systems that rely on a central application to control all of the 3D objects it creates waste processing resources, because they constantly execute aspects of the application that are unnecessary for maintaining the objects. This can be particularly wasteful when some of the objects are inactive. Relying on the originating application to maintain an object can also cause the object to disappear if the application is closed, reducing the flexibility and usability of the system.
In addition, when an application controls an object, the object can only react to contextual factors that are known to the application. However, to maintain security, many systems may not grant the application access to contextual factors, because there is no guarantee of limits on how the application would share those contextual factors.
As a second level of abstraction between the operating system and the object, the application can also be computationally expensive, requiring coordination to provide contextual factors to the application so that the application can then deliver them to the object.
In addition, existing systems do not provide an appropriate method to organize objects placed in an artificial reality environment and control which objects may interact with each other.
The artificial reality systems and processes described herein overcome these problems associated with conventional artificial reality systems by using virtual containers (also referred to as "augments") as basic objects that are separate from the entities that create them. In particular, the artificial reality systems and processes described herein remove the application-controlled interaction layer, allowing more realistic interactions in an artificial reality environment by letting users treat virtual objects more like real-world objects.
In addition, these artificial reality systems and processes conserve processing resources by enabling augments to exist independently, without having to keep the applications that created them executing. They are also expected to increase usability and flexibility by allowing individual virtual containers/augments to persist when the entity that created them is closed.
Moreover, these artificial reality systems and processes are expected to provide greater security by directly controlling which augments receive contextual factors, while also reducing the overhead of coordinating contextual factor distribution by removing additional coordination layers. They are also expected to provide greater usability through surface-based organization and greater security through controls on how information is disseminated among surface groupings.
In addition to providing these benefits in terms of usability, flexibility, security, and conserving processing resources, the artificial reality systems and processes described herein are rooted in computerized artificial reality systems, providing a new core concept specifically designed for object control and interaction in an artificial reality environment.
Furthermore, while the artificial reality systems and processes described herein provide a user experience in which virtual objects can be interacted with in a manner similar to real objects, the disclosed systems and processes are implemented with specialized data structures and interaction rules that are analogous neither to traditional computing interactions nor to interactions with real objects.
Drawings
Examples of the invention will now be described with reference to the accompanying drawings, in which:
FIG. 1 is a block diagram illustrating an overview of devices on which some embodiments of the present technology may operate.
FIG. 2A is a line drawing illustrating a virtual reality headset that may be used in some implementations of the present technology.
FIG. 2B is a line drawing illustrating a mixed reality headset that may be used in some implementations of the present technology.
FIG. 3 is a block diagram illustrating an overview of an environment in which some embodiments of the present technology may operate.
FIG. 4 is a block diagram illustrating components that may be used, in some embodiments, in a system employing the disclosed technology.
FIG. 5 is a block diagram illustrating an example augment data structure used in some embodiments of the present technology.
FIG. 6A is a flow diagram illustrating a process, used in some embodiments of the present technology, for a shell program to respond to a request for a new augment.
FIG. 6B is a flow diagram illustrating a process, used in some embodiments of the present technology, for submitting a request for a new augment to an artificial reality system shell program.
FIG. 7 is a flow diagram illustrating a process, used in some embodiments of the present technology, for enabling augments to respond to context by providing contextual factors to the relevant augments.
FIG. 8 is a flow diagram illustrating a process, used in some embodiments of the present technology, for augment interactions with a virtual surface.
FIG. 9 is a flow diagram illustrating a process, used in some embodiments of the present technology, for augment interactions with other augments.
FIG. 10 is a conceptual diagram illustrating an example of augments in an artificial reality space controlled by an artificial reality system.
FIG. 11 is a conceptual diagram continuing the example of augments in an artificial reality space, in which the artificial reality system identifies a virtual surface.
FIG. 12 is a conceptual diagram continuing the example of augments in an artificial reality space, in which the artificial reality system receives a request for a new photo augment with an initial placement based on the user's gaze.
FIG. 13 is a conceptual diagram continuing the example of augments in an artificial reality space, in which a new, empty augment (initially placed on a wall) is provided along with additional placement visual affordances for surface layouts.
FIG. 14 is a conceptual diagram continuing the example of augments in an artificial reality space, in which the new augment has been filled and moved to a second wall according to an available surface layout.
FIG. 15 is a conceptual diagram continuing the example of augments in an artificial reality space, in which the augment is moved to a horizontal surface and selects a different display mode in response to the corresponding placement contextual factors.
FIG. 16 is a conceptual diagram continuing the example of augments in an artificial reality space, in which the augment selects a different display mode in response to being moved back to a vertical wall surface with a second augment.
FIG. 17 is a conceptual diagram continuing the example of augments in an artificial reality space, in which the augment and the second augment select different display modes in response to receiving social-graph contextual factors for which they registered with the artificial reality system.
FIGS. 18 and 19 are conceptual diagrams continuing the example of augments in an artificial reality space, in which the artificial reality system creates a new augment in response to the user pulling a generatable element out of an existing augment.
The technology introduced herein may be better understood by referring to the following detailed description in conjunction with the accompanying drawings in which like reference numerals identify identical or functionally similar elements.
Detailed Description
Aspects of the present disclosure relate to an artificial reality system that provides an artificial reality environment with augments and surfaces. An "augment" (also referred to herein as a "virtual container") is a 2D or 3D volume of data in an artificial reality environment that can contain presentation data, context, and logic. An artificial reality system can use augments as the basic building block for displaying 2D and 3D content in the artificial reality environment. For example, augments can represent people, places, and things in an artificial reality environment and can respond to context such as the current display mode, the date or time of day, the type of surface the augment is on, relationships to other augments, and so forth. A controller in the artificial reality system (sometimes referred to as a "shell") can control how artificial reality environment information is surfaced to the user, which interactions can be performed, and which interactions are provided to applications. Augments can live on "surfaces" with contextual properties and layouts that cause the augments to be presented or to act in different ways. Augments and other objects (real or virtual) can also interact with each other, where these interactions can be mediated by the shell and controlled by rules in the augments that are evaluated based on contextual information from the shell.
An augment can be created by requesting the augment from the artificial reality system shell program, where the request provides a manifest specifying initial properties of the augment. The manifest may specify parameters such as the augment's name, the augment's type, the augment's display properties (size, orientation, location, eligible location types, etc.) in different display modes or contexts, contextual factors the augment needs in order to enable display modes or invoke logic, and so on. The artificial reality system can provide the augment as a volume of data, with the properties specified in the manifest, for the requestor to place in the artificial reality environment and to write presentation data into. Additional details on creating augments are provided below with reference to FIGS. 6A and 6B.
The enhancement "presentation data" may include any content that may be output by the enhancement, including visual presentation data, auditory presentation data, tactile presentation data, and the like. In some embodiments, the presentation data may be "real-time" such that it matches the external data by pointing to the external data or as a copy of the external data that is updated periodically. Presentation data may also be shared so that changes to external data by another user or system may be propagated to the enhanced output. For example, the enhancements may display real-time services and data upon accepting interaction from a user or other enhancements. As a more specific example, a user may select a photo to share on a social media platform to add the photo as presentation data to an enhancement positioned on the user's wall. The owner of the decal may modify the photograph and the modified version may be presented in the enhancement. Additional real-time social media content associated with the photograph may also be included in the enhanced presentation data, such as an indication of "praise" or comments for the photograph. The owner of the photograph may also change the access rights so that the photograph is no longer displayed in the enhancement.
An augment can track current context based on contextual factors signaled to the augment by the artificial reality system. The context can include various contextual factors such as the current mode of the artificial reality system (e.g., interactive mode, minimized mode, audio-only mode, etc.), other objects (real or virtual) in the artificial reality environment or within a threshold distance of the augment, characteristics of the current user, social graph elements related to the current user and/or to objects in the artificial reality environment, artificial reality environment conditions (e.g., time, date, lighting, temperature, weather, geographic mapping data), surface properties, movement characteristics of the augment or other objects, sounds, user commands, and so on. As used herein, an "object" can be a real or virtual object and can be inanimate or animate (e.g., a user). Contextual factors can be identified by the artificial reality system and signaled to the relevant augments. Some contextual factors (e.g., the current mode of the artificial reality system) may be provided to all augments automatically. Other contextual factors may be registered for delivery to particular augments (e.g., via the manifest at creation time or through subsequent contextual factor registration calls). Augments can have variables that hold the contextual factors for which the augment has logic. All augments may inherit some of these variables from a base augment class, some may be defined in extensions of that class (e.g., for various pre-established augment types), and some may be added to individual augments when the augment is created (e.g., via the manifest) or through later declarations. In some cases, certain contextual factors may be tracked by the artificial reality system and checked by the augment, without the artificial reality system having to push the data to individual augments. For example, the artificial reality system may maintain a global time/date variable that augments can access, without the artificial reality system continually pushing the variable's value to the augments.
An augment's logic (defined declaratively or imperatively) can cause the augment to change its presentation data or attributes, or to perform other actions, in response to contextual factors. As with the variables holding contextual factors, an augment's logic can be specified in the base class, in an extension of the base class for a particular augment type, or individually for the augment (e.g., in the manifest). For example, all augments may be defined with logic to redraw themselves for different display modes, which provide differently sized or shaped data volumes for the augment to write into. As another example, all augments of the "person" type may have logic to provide notifications of posts or incoming messages from that person. As yet another example, a particular augment may be configured with logic responsive to an area_type contextual factor for which the augment has registered to receive updates, where the augment responds to that contextual factor having an "external" value by checking whether the time contextual factor indicates a time between 6:00 a.m. and 7:00 p.m., and if so, switching to a darker display mode.
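The area_type/time example above could be expressed as a condition/action pair roughly as follows; this is a sketch under assumed structures, not logic defined by the present disclosure:

    import datetime

    def outside_daytime_rule(context):
        """Condition from the example: the registered area_type factor has the
        "external" value and the time factor falls between 6:00 a.m. and 7:00 p.m."""
        if context.get("area_type") != "external":
            return False
        t = context.get("time", datetime.datetime.now().time())
        return datetime.time(6, 0) <= t <= datetime.time(19, 0)

    def switch_to_darker_mode(augment):
        augment["display_mode"] = "darker"

    # Pairing the condition with its action as one piece of augment logic.
    augment = {"display_mode": "standard",
               "logic": [(outside_daytime_rule, switch_to_darker_mode)]}
    context = {"area_type": "external", "time": datetime.time(14, 30)}
    for condition, action in augment["logic"]:
        if condition(context):
            action(augment)
    print(augment["display_mode"])  # "darker"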
Additional details regarding augment structure (e.g., presentation data, attributes, and functionality) are provided below with reference to FIGS. 5, 6A, and 6B. Additional details regarding providing contextual factors to augments and causing augments to invoke corresponding display modes and/or logic are provided below with reference to FIGS. 5 and 7.
In some embodiments, an augment exists independently of the augment that created it. Thus, when a parent augment is closed, the child augment is not necessarily closed. However, such hierarchical functionality can be implemented, for example, where a child augment registers to receive contextual factors for the state of its parent augment and has logic to close itself upon receiving a signal that the parent augment has closed.
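Such opt-in hierarchical behavior could be sketched as a small piece of child-augment logic, for example (names are illustrative):

    def make_parent_closed_rule(parent_id):
        """Logic a child augment can register so it closes itself when it is
        notified that its parent augment has closed (names are illustrative)."""
        def rule(context, augment):
            if context.get(f"augment_state:{parent_id}") == "closed":
                augment["closed"] = True
        return rule

    child = {"closed": False}
    rule = make_parent_closed_rule("parent_7")
    rule({"augment_state:parent_7": "closed"}, child)
    print(child["closed"])  # True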
An augment can be positioned in an artificial reality environment by attaching it to a surface. A "surface" can be a point, a 2D area, or a 3D volume to which one or more augments can be attached. A surface can be world-locked or can be located relative to the user or another object. A surface can be defined by a shape, a position, and in some cases an orientation. In some embodiments, a surface can be of a specified type, such as a point, a wall (e.g., a vertical 2D area), a floor or counter (e.g., a horizontal 2D area), a face, a volume, and the like. Surfaces can be created in various ways, e.g., as synthetic surfaces, semantic surfaces, or geometric surfaces.
Synthetic surfaces can be generated without using object recognition or room mapping. Examples of synthetic surfaces include bubbles (e.g., body-locked surfaces that stay positioned relative to the user as the user moves through the artificial reality environment, regardless of real-world objects); surfaces attached to a device (e.g., the artificial reality system may include a controller, external processing component, etc. whose location is periodically updated to the artificial reality system, allowing a surface to be placed relative to that device); and floating surfaces (e.g., world-locked surfaces whose positions are specified relative to the position of the artificial reality system but are adjusted to appear fixed as movement of the artificial reality system is detected, so that no understanding of the physical world, other than the artificial reality system's own movements, is needed to position them).
Semantic surfaces can be positioned based on an identified (real or virtual) object, such as a face, hand, chair, refrigerator, table, etc. Semantic surfaces can be world-locked, adjusting how they are displayed in the field of view so that they keep a constant positioning relative to the identified object. A semantic surface can be shaped to fit the identified object or can have another surface shape that is positioned relative to the identified object.
Geometric surfaces can map to structures in the world (e.g., a portion of a wall or floor) or can specify a single point in space. While in some cases a geometric surface may also be a kind of semantic surface, in other cases a geometric surface can exist without ongoing object recognition, since it is unlikely to be repositioned. For example, portions of walls can be mapped using a simultaneous localization and mapping ("SLAM") system. Such surfaces can then be used by the same or another artificial reality system by determining the artificial reality system's position in the map, without having to actively determine the locations of other objects. Examples of geometric surfaces include points, 2D areas (e.g., portions of floors, counters, walls, doors, windows, etc.), or volumes positioned relative to structures (e.g., rectangular solids, spheres, etc., positioned relative to a floor, wall, room interior, etc.).
In various embodiments, surfaces can be created manually, semi-automatically, or automatically. Manual surface creation lets a user explicitly define a surface, e.g., by tracing a portion of a wall, placing a hand or controller on a flat surface, indicating a center point and radius for the surface, and the like. Automatic surface creation can include identifying objects that are of a particular type (e.g., face, table) or that have particular properties (e.g., a flat surface of at least a threshold size, a 2D surface with which the user has interacted at least a threshold amount, etc.). In some implementations, automatic surface creation can be aided by a machine learning model trained to identify surfaces (e.g., using manually identified surfaces or user corrections of automatically identified surfaces as training data). Semi-automatic surface creation can include automatically detecting a surface that is then suggested to the user for confirmation and/or modification.
In some implementations, a surface can have a specified layout that controls the possible placement locations of augments added to the surface. The layout assigned to a surface can be selected by the user or applied automatically (e.g., based on a mapping of surface characteristics, such as size and shape, to layouts). A layout can be static (specifying particular locations in the layout where augments can be placed) or dynamic (slots for augments adjust according to the size, number, type, etc. of the augments placed on the surface). Examples of layouts include: a list layout, in which augments are evenly spaced along a horizontal line; a stack layout, in which augments are evenly spaced along a vertical line; a grid layout, which places augments in a defined grid (which may be dynamic, e.g., specifying x, y, and/or z counts for the grid based on the number of augments on the surface); and a free-form layout, in which augments remain wherever they were originally placed.
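As a rough illustration of how slots could be computed for these layouts (the geometry and function name are assumptions, not definitions from the present disclosure):

    import math

    def layout_slots(layout, count, origin=(0.0, 0.0, 0.0), spacing=0.5):
        """Compute illustrative slot positions for `count` augments on a surface.
        Layout names follow the description above; the geometry is an assumption."""
        ox, oy, oz = origin
        if layout == "list":      # evenly spaced along a horizontal line
            return [(ox + i * spacing, oy, oz) for i in range(count)]
        if layout == "stack":     # evenly spaced along a vertical line
            return [(ox, oy + i * spacing, oz) for i in range(count)]
        if layout == "grid":      # dynamic grid sized from the augment count
            cols = max(1, math.ceil(math.sqrt(count)))
            return [(ox + (i % cols) * spacing, oy + (i // cols) * spacing, oz)
                    for i in range(count)]
        return []                 # free-form: augments keep their placed positions

    print(layout_slots("grid", 5))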
Once a surface has been created, augments can be added to it. In some cases, an augment can be attached to a surface automatically, e.g., by creating the new augment on the same surface as the augment that requested its creation. In other cases, an augment can have logic or a display mode (e.g., responsive to various contextual factors) that specifies the surface or surface type to which the augment should attach. In still other cases, an augment can be attached to a surface manually, e.g., by the user selecting the augment and indicating to the artificial reality system that it should be attached to the surface. Augments can be placed manually into particular slots in the surface's layout, or can be placed onto the surface and have the surface assign them to layout slots (e.g., selecting the next slot in the order defined for the layout's slots, selecting a slot based on where the augment was placed on the surface, selecting the slot best suited to the augment, combining or resizing slots to accommodate the augment, etc.). When a particular augment is attached to a surface, that augment or other augments on the surface can be provided with corresponding contextual factors, such as properties of the surface (e.g., type, orientation, shape, size), a count of or details about other augments on the surface, the layout position assigned to the augment, and so on. Additional details regarding surface creation, layout configuration, and adding augments to a surface are provided below with reference to FIG. 8.
In some embodiments, augments can interact with each other, e.g., by having defined logic that takes as parameters contextual factors defined by other augments' attributes. An augment can register with the artificial reality system to receive contextual factors specifying another augment's attributes (e.g., location, size, content defined by metadata tags, etc.). In various embodiments, an augment can control which other augments have access to its various attributes. In some cases, for an augment to register to receive another augment's attributes, there must be a specific relationship between the augments, such as one augment being a prototype of the other, the augments being attached to the same surface, or an explicit user interaction associating the augments (e.g., dragging one augment onto the other). In some embodiments, augments register with the artificial reality system to receive other augments' attributes, so that the artificial reality system mediates attribute sharing and provides the attributes by identifying changes in those contextual factors that the augments have permitted to be shared. In other embodiments, such registration can be made between the augments themselves, allowing augments to push or pull attributes to or from each other. Augments can have defined logic for reacting to other augments' attributes. In some implementations, this logic can be defined for a particular augment or for an augment type. For example, all "person" augments may be defined to perform a particular action when placed near another person augment for which the social graph defines a "friend" relationship between the represented people. Additional details regarding interactions between augments are provided below with reference to FIG. 9.
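For example, type-level logic for "person" augments reacting to a nearby friend's augment, with attribute sharing mediated by the shell, might be sketched as follows; the social-graph representation and the reaction are illustrative assumptions:

    def on_nearby_augment(augment, other_attrs, social_graph):
        """Example type-level logic for "person" augments: when the shell shares
        another person augment's attributes (here, the person it represents and
        its location), react if the social graph links the two people as friends.
        The social graph lookup and the reaction are illustrative assumptions."""
        if other_attrs.get("type") != "person":
            return
        pair = frozenset({augment["person"], other_attrs["person"]})
        if pair in social_graph.get("friend_edges", set()):
            augment["presentation"] = "show_shared_memories"

    social_graph = {"friend_edges": {frozenset({"alice", "bob"})}}
    alice_augment = {"type": "person", "person": "alice", "presentation": "idle"}
    on_nearby_augment(alice_augment,
                      {"type": "person", "person": "bob", "location": (1.0, 0, 0)},
                      social_graph)
    print(alice_augment["presentation"])  # "show_shared_memories"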
Embodiments of the disclosed technology may include or be implemented in conjunction with an artificial reality system. Artificial reality or extra reality (XR) is a form of reality that has been adjusted in some manner before presentation to a user, which may include, for example, virtual reality (VR), augmented reality (AR), mixed reality (MR), hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured content (e.g., real-world photographs). The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect for the viewer). Additionally, in some embodiments, artificial reality may be associated with applications, products, accessories, services, or some combination thereof, e.g., used to create content in an artificial reality and/or used in an artificial reality (e.g., to perform activities in it). An artificial reality system that provides artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, a "cave" environment or other projection system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
As used herein, "virtual reality" or "VR" refers to an immersive experience in which visual input of a user is controlled by a computing system. "augmented reality" or "AR" refers to a system in which a user views an image of the real world after the image has been passed through a computing system. For example, a tablet with a camera on the back may capture an image of the real world, which is then displayed on a screen on the opposite side of the tablet from the camera. The tablet may process and adjust or "enhance" the image as it passes through the system (e.g., by adding virtual objects). "Mixed reality" or "MR" refers to a system in which light rays entering a user's eye consist in part of light rays generated by a computing system and reflected in part by objects in the real world. For example, the MR headset may be shaped as a pair of glasses with a pass-through display that allows light from the real world to pass through a waveguide that simultaneously emits light from a projector in the MR headset, allowing the MR headset to present a virtual object that is intermixed with real objects that are visible to the user. As used herein, "artificial reality," "super reality," or "XR" refers to any one of VR, AR, MR, or any combination or mixture thereof.
Several embodiments are discussed in more detail below with reference to the accompanying drawings. FIG. 1 is a block diagram illustrating an overview of devices on which some embodiments of the disclosed technology can operate. The devices can comprise hardware components of a computing system 100 that provides an artificial reality environment with augments and surfaces. In various implementations, computing system 100 can include a single computing device 103 or multiple computing devices (e.g., computing device 101, computing device 102, and computing device 103) that communicate over wired or wireless channels to distribute processing and share input data. In some implementations, computing system 100 can include a standalone headset capable of providing a computer-created or augmented experience for a user without the need for external processing or sensors. In other implementations, computing system 100 can include multiple computing devices, such as a headset and a core processing component (e.g., a console, mobile device, or server system), where some processing operations are performed on the headset and others are offloaded to the core processing component. Example headsets are described below with reference to FIGS. 2A and 2B. In some implementations, position and environment data can be gathered only by sensors incorporated in the headset device, while in other implementations one or more of the non-headset computing devices can include sensor components that can track environment or position data.
The computing system 100 may include one or more processors 110 (e.g., a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Holographic Processing Unit (HPU), etc.). Processor 110 may be a single processing unit, or multiple processing units located in a single device or distributed across multiple devices (e.g., distributed across two or more of computing devices 101-103).
The computing system 100 may include one or more input devices 120 that provide input to the processor 110 to inform the processor of actions. These actions may be mediated by a hardware controller that interprets the signals received from the input device and communicates information to the processor 110 using a communication protocol. Each input device 120 may include, for example, a mouse, keyboard, touch screen, touch pad, wearable input device (e.g., a haptic glove, bracelet, ring, earring, necklace, watch, etc.), camera (or other light-based input device such as an infrared sensor), microphone, or other user input device.
For example, the processor 110 may be coupled to other hardware devices using an internal or external bus, such as a PCI bus, a SCSI bus, or a wireless connection, etc. The processor 110 may be in communication with a hardware controller for the device (e.g., for the display 130). Display 130 may be used to display text and graphics. In some implementations, the display 130 includes the input device as part of the display, such as when the input device is a touch screen or is equipped with an eye direction monitoring system. In some implementations, the display is separate from the input device. Examples of display devices are: an LCD display screen; an LED display screen; projection, holographic, or augmented reality displays (e.g., heads-up display devices or head-mounted devices); etc. Other I/O devices 140 may also be coupled to the processor, such as a network chip or card, video chip or card, audio chip or card, USB, firewire or other external devices, cameras, printers, speakers, CD-ROM drives, DVD drives, disk drives, etc.
Computing system 100 may include communication devices capable of communicating wirelessly or by wire with other local computing devices or network nodes. The communication device may communicate with another device or server over a network using, for example, the TCP/IP protocol. Computing system 100 may utilize communication devices to distribute operations among multiple network devices.
The processor 110 may access a memory 150, which may be included on one of a plurality of computing devices of the computing system 100 or may be distributed across a plurality of computing devices of the computing system 100 or other external devices. The memory includes one or more hardware devices for volatile and nonvolatile storage devices, and may include read-only and writable memory. For example, the memory may contain one or more of the following: random Access Memory (RAM), various caches, CPU registers, read-only memory (ROM), and writable nonvolatile memory such as flash memory, hard disk drive, floppy disk, CD, DVD, magnetic storage, tape drive, etc. The memory is not a propagated signal separate from the underlying hardware; thus, the memory is non-transitory. Memory 150 may include a program memory 160 that stores programs and software, such as an operating system 162, an artificial reality system 164, and other application programs 166. Memory 150 may also include a data store 170, which may include enhanced data structures, surface data structures, enhanced contextual factor registration, artificial reality environment information, other enhanced and/or surface support data, social graph data, configuration data, settings, user options or preferences, etc., which may be provided to program memory 160 or any element of computing system 100.
Some embodiments may operate in conjunction with many other computing systems, environments, or configurations. Examples of computing systems, environments, and/or configurations that may be suitable for use with the technology include, but are not limited to, XR headsets, personal computers, server computers, hand-held or laptop devices, cellular telephones, wearable electronics, gaming machines, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Fig. 2A is a line drawing of a virtual reality Head Mounted Display (HMD) 200 according to some embodiments. HMD 200 includes a front rigid body 205 and a band 210. The front rigid body 205 contains one or more electronic display elements of an electronic display 245, an Inertial Motion Unit (IMU) 215, one or more positioning sensors 220, locators 225, and one or more computing units 230. The positioning sensors 220, IMU 215, and computing units 230 may be located inside the HMD 200 and may not be visible to the user. In various implementations, the IMU 215, the positioning sensors 220, and the locators 225 may track movement and position of the HMD 200 in the real world and in the virtual environment in three degrees of freedom (3DoF) or six degrees of freedom (6DoF). For example, the locators 225 may emit infrared light beams that create light points on real objects surrounding the HMD 200. One or more cameras (not shown) integrated with HMD 200 may detect the light points. The computing units 230 in the HMD 200 may use the detected light points to infer the position and movement of the HMD 200 and to identify the shape and position of the real objects surrounding the HMD 200.
The electronic display 245 may be integrated with the front rigid body 205 and may provide image light to the user according to the instructions of the computing unit 230. In various embodiments, electronic display 245 may be a single electronic display or multiple electronic displays (e.g., one display per user's eye). Examples of electronic display 245 include: a Liquid Crystal Display (LCD), an Organic Light Emitting Diode (OLED) display, an active matrix organic light emitting diode display (AMOLED), a display containing one or more quantum dot light emitting diode (QOLED) sub-pixels, a projector unit (e.g., micro LED, laser, etc.), some other display, or some combination thereof.
In some embodiments, HMD 200 may be coupled to a core processing component, such as a Personal Computer (PC) (not shown), and/or to one or more external sensors (not shown). The external sensors may monitor the HMD 200 (e.g., via light emitted from the HMD 200), and the PC may use this monitoring, along with output from the IMU 215 and the positioning sensors 220, to determine the position and movement of the HMD 200.
In some implementations, the HMD 200 may communicate with one or more other external devices, such as a controller (not shown) that a user may hold in one or both hands. The controller may have its own IMU unit, positioning sensor and/or may emit further light spots. The HMD 200 or an external sensor may track these controller light points. The computing unit 230 or core processing component in the HMD 200 may use this tracking, along with IMU and positioning output, to monitor the user's hand positioning and movement. The controller may also include various buttons that a user may actuate to provide input and interact with the virtual object. In various implementations, HMD 200 may also include additional subsystems, such as an eye tracking unit, an audio system, various network components, and so forth. In some embodiments, instead of or in addition to the controller, one or more cameras included in the HMD 200 or one or more cameras external to the HMD 200 may monitor the position and pose of the user's hand to determine gestures and other hand and body movements.
Fig. 2B is a line drawing of a mixed reality HMD system 250 including a mixed reality HMD 252 and a core processing component 254. The mixed reality HMD 252 and the core processing component 254 may communicate via a wireless connection (e.g., a 60 GHz link) as indicated by link 256. In other implementations, the mixed reality system 250 includes only a head-mounted device without an external computing device, or includes other wired or wireless connections between the mixed reality HMD 252 and the core processing component 254. The mixed reality HMD 252 includes a see-through display (pass-through display) 258 and a frame 260. The frame 260 may house various electronic components (not shown), such as light projectors (e.g., lasers, LEDs, etc.), cameras, eye tracking sensors, MEMS components, networking components, and the like.
The projector may be coupled to the see-through display 258, for example, via an optical element, to display media content to a user. The optical element may include one or more waveguide assemblies, reflectors, lenses, mirrors, collimators, gratings, etc. for directing light from the projector to the user's eye. Image data may be transmitted from the core processing component 254 to the HMD 252 via the link 256. A controller in HMD 252 may convert image data into light pulses from a projector, which may be transmitted as output light to a user's eye via an optical element. The output light may be mixed with light passing through the display 258, allowing the output light to present virtual objects that appear as if they were present in the real world.
Similar to HMD 200, HMD system 250 may also include motion and position tracking units, cameras, light sources, etc., which allow HMD system 250 to, e.g., track itself in 3DoF or 6DoF, track portions of the user's body (e.g., hands, feet, head, or other body parts), map virtual objects so that they appear stationary as the HMD 252 moves, and have virtual objects react to gestures and other real-world objects.
Fig. 3 is a block diagram illustrating an overview of an environment 300 in which some embodiments of the disclosed technology may operate. Environment 300 may include one or more client computing devices 305A-D, examples of which may include computing system 100. In some implementations, some of the plurality of client computing devices (e.g., client computing device 305B) may be HMD 200 or HMD system 250. Client computing device 305 may use a logical connection to one or more remote computers (e.g., server computing devices) over network 330 to operate in a networked environment.
In some embodiments, server 310 may be an edge server that receives client requests and coordinates the implementation of those requests through other servers, such as servers 320A-C. Server computing devices 310 and 320 may include a computing system, such as computing system 100. Although each server computing device 310 and 320 is logically shown as a single server, each server computing device may be a distributed computing environment, including multiple computing devices located at the same physical location or geographically disparate physical locations.
Client computing device 305 and server computing devices 310 and 320 may each act as servers or clients to other server/client devices. Server 310 may be connected to database 315. Servers 320A-C may each be connected to a corresponding database 325A-C. As discussed above, each server 310 or 320 may correspond to a set of servers, and each of these servers may share a database or may have their own database. Although databases 315 and 325 are logically shown as a single unit, databases 315 and 325 may each be a distributed computing environment that includes multiple computing devices that may be located within their corresponding servers or may be located in the same physical location or geographically different physical locations.
The network 330 may be a Local Area Network (LAN), a Wide Area Network (WAN), a mesh network, a hybrid network, or another wired or wireless network. The network 330 may be the internet or some other public or private network. The client computing device 305 may connect to the network 330 through a network interface, such as through wired or wireless communication. Although the connections between server 310 and servers 320 are shown as separate connections, these connections may be any type of local area network, wide area network, wired network, or wireless network, including network 330 or a separate public or private network.
In some embodiments, one or more of servers 310 and/or 320 may be used as part of a social network. The social network may maintain a social graph and provide aspects of the social graph to an artificial reality system, which may perform various actions based on the social graph. The social graph may include a set of nodes (representing social networking system objects, also referred to as social objects) interconnected by edges (representing interactions, activities, or dependencies). A social networking system object may be a social networking system user, a non-personal entity, a content item, a group, a social networking system page, a location, an application, a topic, a concept representation, or another social networking system object, such as a movie, a band, a book, or the like. The content item may be any digital data, such as text, images, audio, video, links, web pages, minutiae (e.g., indicia provided from a client device, such as emotion indicators, status text snippets, location indicators, etc.), or other multimedia. In various embodiments, the content item may be a social network item or part of a social network item, such as a post, a like, a mention, a news item, an event, a share, a comment, a message, another notification, and the like. In the context of a social graph, topics and concepts comprise nodes representing any person, place, thing, or idea.
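Purely for illustration, the following sketch shows one way such a graph of nodes and edges could be represented in memory; the class names, fields, and edge types are hypothetical and are not drawn from the embodiments described above.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Node:
    """A social object: a user, page, content item, location, concept, etc."""
    node_id: int
    node_type: str                      # e.g., "user", "post", "location"
    properties: dict = field(default_factory=dict)


@dataclass
class Edge:
    """An interaction, activity, or relationship between two nodes."""
    source: int
    target: int
    edge_type: str                      # e.g., "friend", "like", "check_in"


class SocialGraph:
    def __init__(self):
        self.nodes = {}                 # node_id -> Node
        self.edges = []                 # list of Edge

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def connect(self, source: int, target: int, edge_type: str) -> None:
        self.edges.append(Edge(source, target, edge_type))

    def neighbors(self, node_id: int, edge_type: Optional[str] = None) -> list:
        """Return nodes connected to node_id, optionally filtered by edge type."""
        out = []
        for e in self.edges:
            if e.source == node_id and (edge_type is None or e.edge_type == edge_type):
                out.append(self.nodes[e.target])
        return out
```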
The social networking system may enable a user to enter and display, in the user's profile, information related to the user's interests, age/date of birth, location (e.g., longitude/latitude, country, region, city, etc.), education information, life stage, relationship status, name, models of devices typically used, languages identified as ones the user is familiar with, occupation, contact information, or other demographic or biographical information. In various embodiments, any such information may be represented by a node or an edge between nodes in the social graph. The social networking system may enable users to upload or create pictures, videos, documents, songs, 3D objects, or other content items, and may enable users to create and schedule events. In various embodiments, content items may be represented by a node or an edge between nodes in the social graph.
The social networking system may enable users to upload or create content items, interact with content items or other users, express an interest or opinion, or perform other actions. The social networking system may provide various ways to interact with non-user objects within the social networking system. In various embodiments, actions may be represented by a node or an edge between nodes in the social graph. For example, a user may form or join a group, or become a fan of a page or entity within the social networking system. In addition, a user may create, download, view, upload, link to, tag, edit, or play a social networking system object. A user may also interact with social networking system objects outside of the context of the social networking system. For example, an article on a news website may have a "like" button that the user can click. In each of these examples, the interaction between the user and the object may be represented by an edge in the social graph connecting the user's node with the object's node. As another example, a user may "check in" to a particular location using location detection functionality (e.g., a GPS receiver on a mobile device), and an edge may connect the user's node with the location's node in the social graph.
The social networking system may provide various communication channels to users. For example, the social networking system may enable users to email, instant message, communicate in an artificial reality environment, or text/SMS message one or more other users, and so forth. The social networking system may enable a user to post a message to the user's own wall or profile or to another user's wall or profile, or to interact with virtual objects created by another user or present in that user's artificial reality environment. The social networking system may enable users to post messages to group pages or fan pages. The social networking system may enable users to comment on images, wall posts, or other content items created or uploaded by the user or by another user. The social networking system may allow users to interact (via their personalized avatars) with objects or other avatars in an artificial reality environment, and so forth. In some embodiments, a user may post a status message to the user's profile indicating current events, state of mind, thoughts, feelings, activities, or any other present-time-relevant communication. The social networking system may enable users to communicate both within and outside of the social networking system. For example, a first user may send a message to a second user within the social networking system, send an email through the social networking system, send an email external to but originating from the social networking system, send an instant message within the social networking system, send an instant message external to but originating from the social networking system, provide voice or video messaging between users, or provide a virtual environment in which users can communicate and interact via avatars or other digital representations of themselves. Further, the first user may comment on the profile page of the second user, or may comment on objects associated with the second user, such as content items uploaded by the second user.
The social networking system enables users to associate themselves with and establish connections with other users of the social networking system. When two users (e.g., social graph nodes) explicitly establish a social connection in the social networking system, they become "friends" (or "connections") within the context of the social networking system. For example, a friend request from "John Doe" to "Jane Smith" that is accepted by "Jane Smith" is a social connection. The social connection may be an edge in the social graph. Being friends, or being within a threshold number of friend edges on the social graph, may allow users access to more information about each other than would otherwise be available to unconnected users. For example, being friends may allow a user to view another user's profile, to see another user's friends, or to view another user's photos. Likewise, becoming friends within the social networking system may allow a user greater access to communicate with another user, such as by email (internal and external to the social networking system), instant message, text message, phone, or any other communication interface. Being friends may allow a user access to view, comment on, download, endorse, or otherwise interact with content items uploaded by another user. Establishing connections, accessing user information, communicating, and interacting within the context of the social networking system may be represented by an edge between the nodes representing two social networking system users.
In addition to explicitly establishing connections in the social networking system, users with common characteristics may be considered connected (e.g., a soft or implicit connection) for the purposes of determining a social context used to determine the topic of communications. In some embodiments, users who belong to a common network are considered connected. For example, users who attend a common school, work for a common company, or belong to a common social networking system group may be considered connected. In some embodiments, users with common biographical characteristics are considered connected. For example, the geographic region users were born in or live in, the users' ages, the users' genders, and the users' relationship statuses may be used to determine whether users are connected. In some embodiments, users with common interests are considered connected. For example, users' movie preferences, music preferences, political views, religious views, or any other interests may be used to determine whether users are connected. In some embodiments, users who have taken a common action within the social networking system are considered connected. For example, users who endorse or recommend a common object, who comment on a common content item, or who RSVP to a common event may be considered connected. The social networking system may utilize the social graph to determine users who are connected with or similar to a particular user in order to determine or evaluate the social context between the users. The social networking system may utilize such social context and common attributes to facilitate content distribution systems and content caching systems to predictively select content items for caching in caching devices associated with specific social networking accounts.
Fig. 4 is a block diagram illustrating components 400 that may be used in a system employing the disclosed technology in some embodiments. The components 400 may be included in one device of the computing system 100 or may be distributed across multiple devices of the computing system 100. The components 400 include hardware 410, mediator 420, and specialized components 430. As discussed above, a system implementing the disclosed technology may use various hardware, including a processing unit 412, working memory 414, input and output devices 416 (e.g., cameras, displays, IMU units, network connections, etc.), and storage memory 418. In various implementations, the storage memory 418 may be one or more of a local device, an interface to a remote storage device, or a combination thereof. For example, the storage memory 418 may be one or more hard disk drives or flash drives accessible via a system bus, or may be a cloud storage provider (e.g., located in storage 315 or 325) or other network storage device accessible via one or more communication networks. In various implementations, the components 400 may be implemented in a client computing device, such as the client computing device 305, or on a server computing device, such as the server computing device 310 or 320.
Mediator 420 may include components that mediate resources between hardware 410 and specialized components 430. For example, mediator 420 may include an operating system, services, drivers, basic Input Output System (BIOS), controller circuitry, or other hardware or software systems.
Specialized components 430 may include software or hardware configured to perform operations for creating and managing enhancements and surfaces in an artificial reality environment. Specialized components 430 may include an enhancement creator 432, a context tracker 434, a contextual factor registry 438, a surface creator 440, and components and APIs (e.g., interfaces) that may be used to provide user interfaces, transfer data, and control the specialized components. In some implementations, the components 400 may be located in a computing system distributed across multiple computing devices, or may be an interface to a server-based application executing one or more of the specialized components 430. Although depicted as separate components, the specialized components 430 may be logical or other non-physical differentiations of functions and/or may be sub-modules or blocks of code of one or more applications.
The enhancement creator 432 may receive a new enhancement request associated with a manifest, which the enhancement creator may use to create a new enhancement data structure. Additional details regarding the enhancement data structure are provided below with reference to Fig. 5. The enhancement creator 432 may return a handle to the enhancement data structure to the requestor, allowing the requestor to populate the enhancement with content items, place the enhancement in the artificial reality environment, set parameters, and so on. In some embodiments, the handle to the enhancement data structure may be provided to the requestor before the enhancement is fully initialized, allowing the requestor to perform initial enhancement manipulations (e.g., filling and/or placement) while the artificial reality system completes initialization of the enhancement. For example, while the requestor manipulates the enhancement, the enhancement creator 432 may complete initialization, such as by registering the enhancement to receive the contextual factors needed by the enhancement's logic and/or display modes to determine whether to execute the logic or enable the display modes. Additional details regarding the artificial reality system creating an enhancement and allowing the requestor to manipulate it are provided below with reference to Figs. 6A and 6B.
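As an illustrative sketch of this creation flow (all names are hypothetical and this is not the shell program's actual API), the creator below returns a handle immediately and completes initialization on a background thread, so the requestor can populate and place the enhancement while initialization finishes.

```python
import threading
from dataclasses import dataclass, field
from itertools import count


@dataclass
class Enhancement:
    enhancement_id: int
    manifest: dict
    content: list = field(default_factory=list)
    initialized: bool = False

    def add_content(self, item) -> None:
        # The requestor may start populating the enhancement even though
        # initialization has not finished yet.
        self.content.append(item)


class EnhancementCreator:
    """Builds enhancement data structures from manifests."""

    _ids = count(1)

    def create(self, manifest: dict) -> Enhancement:
        enhancement = Enhancement(next(self._ids), manifest)
        # Hand the handle back right away and finish initialization (e.g.,
        # contextual-factor registration, default content) in the background.
        threading.Thread(
            target=self._finish_initialization, args=(enhancement,), daemon=True
        ).start()
        return enhancement

    def _finish_initialization(self, enhancement: Enhancement) -> None:
        # Placeholder for registering the enhancement for the contextual
        # factors its display modes depend on and adding type-specific content.
        enhancement.initialized = True


# Usage: the requestor can fill the enhancement immediately after creation.
creator = EnhancementCreator()
handle = creator.create({"name": "photo", "type": "2D media", "display_modes": []})
handle.add_content("photo_content_reference")
```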
Context tracker 434 may track a set of factors that define a context in an artificial reality environment. Examples of such contextual factors include the current mode of the artificial reality system, lighting, enhanced positioning in the artificial reality environment, positioning of real world objects in the artificial reality environment, time, date, user interactions, detected sounds, and the like. In some embodiments, context tracker 434 may identify when a contextual factor is set or a certain threshold amount of change occurs, and may provide those contextual factors to the relevant enhancements.
In some cases, the context tracker 434 provides those contextual factors to an enhancement based on the enhancement having registered to receive them, e.g., during creation of the enhancement or later through a registration call from the enhancement to the artificial reality system. Additional details regarding enhancement creation, including registering for contextual factor signals and providing contextual factor notifications to an enhancement, are provided below with reference to Figs. 6A and 7. In other cases, an enhancement may be selected to receive contextual factors in response to the enhancement being placed on a surface. For example, the enhancement may be provided with attributes of the surface or attributes of other enhancements or real-world objects on the surface. Additional details regarding providing contextual factors to enhancements based on surface relationships are provided below with reference to Fig. 8. In still other cases, relationships between enhancements may be specified, where events related to those relationships may cause one or more of the enhancements to receive contextual factors. For example, a relationship may be established when one enhancement is moved to touch another enhancement, when one enhancement has logic or a display mode conditioned on properties of another enhancement, or in response to a command issued for multiple enhancements. Additional details regarding providing contextual factors to enhancements based on events related to specified relationships between enhancements are provided below with reference to Fig. 9. In any of these cases where an enhancement is provided with contextual factors or other attributes, the enhancement may use the values of the contextual factors to evaluate conditions that invoke logic or enable display modes. For example, when the contextual factor for the artificial reality system state indicates an "audio only mode," a display mode may have a condition that evaluates to true, causing the enhancement to enter a display mode in which the enhancement displays no content and only outputs audio presentation data. Additional details regarding invoking logic and/or display modes based on received contextual factors are provided below with reference to block 708 of Fig. 7, block 808 of Fig. 8, and block 906 of Fig. 9.
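A minimal sketch of how an enhancement might evaluate a display-mode condition against contextual factor values it receives is shown below; the condition encoding, operator names, and dictionaries are illustrative assumptions, not the actual implementation.

```python
# A display mode's condition is stored as (factor, operator, value) clauses
# that must all hold for the mode to be enabled.
OPERATORS = {
    "EQUAL_TO": lambda a, b: a == b,
    "LESS_THAN": lambda a, b: a < b,
    "GREATER_THAN": lambda a, b: a > b,
}


def condition_holds(condition: list, factors: dict) -> bool:
    """AND together the clauses; a missing factor makes the condition false."""
    for factor, operator, expected in condition:
        if factor not in factors:
            return False
        if not OPERATORS[operator](factors[factor], expected):
            return False
    return True


def on_contextual_factors(enhancement: dict, factors: dict) -> None:
    """Called when the context tracker delivers registered factor values."""
    for mode in enhancement["display_modes"]:
        if condition_holds(mode["condition"], factors):
            enhancement["current_display_mode"] = mode["name"]
            break


# Example: an "audio only" system state puts the enhancement in a mode that
# outputs no visual content.
enhancement = {
    "display_modes": [
        {"name": "audio_only", "condition": [("system_mode", "EQUAL_TO", "audio_only")]},
        {"name": "interactive", "condition": [("system_mode", "EQUAL_TO", "interactive")]},
    ],
    "current_display_mode": None,
}
on_contextual_factors(enhancement, {"system_mode": "audio_only"})
assert enhancement["current_display_mode"] == "audio_only"
```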
The contextual factor registry 438 may store a mapping between enhancements and contextual factors specifying which enhancements receive the contextual factor signals when the contextual factors are set or when the contextual factors change. As discussed above, these mappings may be set based on registration of enhancements, enhancements placed on surfaces, or identification of relationships between enhancements. Additional details regarding registration enhancements to receive contextual factors are provided below with reference to block 606 of FIG. 6A, block 806 of FIG. 8, and block 902 of FIG. 9.
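For illustration, such a registry can be modeled as a mapping from each contextual factor to the set of enhancement identifiers registered for it; the class below is a hypothetical sketch, not the described system's code.

```python
from collections import defaultdict


class ContextualFactorRegistry:
    """Maps each contextual factor to the enhancements that should be
    notified when that factor is set or changes."""

    def __init__(self):
        self._registrations = defaultdict(set)

    def register(self, factor: str, enhancement_id: int) -> None:
        self._registrations[factor].add(enhancement_id)

    def unregister(self, factor: str, enhancement_id: int) -> None:
        self._registrations[factor].discard(enhancement_id)

    def registered_enhancements(self, factor: str) -> set:
        return set(self._registrations[factor])


# Registrations may come from the manifest, from placement on a surface,
# or from a declared relationship between enhancements.
registry = ContextualFactorRegistry()
registry.register("system_mode", 17)
registry.register("lighting", 17)
assert registry.registered_enhancements("system_mode") == {17}
```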
The surface creator 440 may create a surface for use in an artificial reality environment. In various embodiments, the surface may be (i) synthetic (automatically generated by an artificial reality system without regard to the environment, e.g., not world-locked), (ii) semantic (detected by a machine learning identifier, e.g., hand, face, table, or other specific object, etc.), or (iii) geometric (geometric shapes identified in the environment, e.g., floors, walls, etc.). Thus, the surface creator 440 may define a surface positioned relative to the artificial reality system, may identify a surface geometry or object type specified for creating the surface, or may create the surface relative to user input (e.g., in response to a user performing an over-the-air click, outlining the surface with gestures, placing a hand, controller, or other peripheral device on the surface, defining a plane in the air with the user's hand as the surface, attaching the surface to an object associated with a detected user interaction, etc.). Additional details regarding surface creation are provided below with reference to fig. 8.
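The following sketch illustrates the three surface varieties as a simple data model; the enum values, fields, and example anchors are assumptions made for illustration only.

```python
from dataclasses import dataclass
from enum import Enum


class SurfaceKind(Enum):
    SYNTHETIC = "synthetic"   # generated without regard to the environment
    SEMANTIC = "semantic"     # attached to a recognized object (hand, table, ...)
    GEOMETRIC = "geometric"   # attached to detected geometry (wall, floor, ...)


@dataclass
class Surface:
    kind: SurfaceKind
    orientation: str              # e.g., "vertical" or "horizontal"
    anchor: str                   # what the surface is positioned relative to


def create_surface(kind: SurfaceKind, orientation: str, anchor: str) -> Surface:
    """Create a surface on which enhancements can later be placed."""
    return Surface(kind, orientation, anchor)


# Examples corresponding to the three surface varieties described above.
hud = create_surface(SurfaceKind.SYNTHETIC, "vertical", "headset")       # not world-locked
tabletop = create_surface(SurfaceKind.SEMANTIC, "horizontal", "table")   # ML-detected object
wall = create_surface(SurfaceKind.GEOMETRIC, "vertical", "wall_plane")   # detected geometry
```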
Fig. 5 is a block diagram illustrating an example of an enhancement data structure 500. An enhancement is a volume of data that can be filled with content, exists in 3D space, and can output presentation data. An enhancement may hold data, respond to the current enhancement context, and have logic. An enhancement is created by sending a request, with a manifest for the enhancement, to an artificial reality system shell program. The manifest may specify basic properties of the enhancement, such as the types of surfaces or locations where the enhancement may be placed and how the enhancement will display itself in different modes, as well as an initial location, size, orientation, and the like. In some implementations, an enhancement may be of a particular predefined type (e.g., person, 2D media, post, event, or 3D model), and the manifest will have default values (e.g., layout, content) for that type. The shell creates the enhancement based on the manifest and returns a handle for the enhancement to the requestor. The requestor may then write content into the enhancement (including writing different versions for different display attributes specified in the manifest). Based on the allowed positioning and display attributes specified in the manifest, the shell may maintain control over where enhancements can be displayed and which display attributes can be invoked. For example, if the artificial reality system enters a do-not-disturb mode, all enhancements may be placed in a minimized mode and moved out of the center of the user's view. An enhancement may be instantiated by the shell or by another enhancement, but may then exist independently of its requestor.
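By way of example, a manifest of the kind described above might be expressed as a simple structured value such as the following; every key and value here is hypothetical and merely illustrates the sorts of properties a manifest may carry.

```python
# All keys and values are hypothetical; a real manifest would follow whatever
# schema the artificial reality system shell defines.
example_manifest = {
    "name": "vacation_photo",
    "type": "2D media",                 # one of the predefined enhancement types
    "initial_location": {"surface": "wall_plane", "offset": (0.0, 0.2, 0.0)},
    "display_modes": [
        {
            "name": "interactive",
            "condition": [("system_mode", "EQUAL_TO", "interactive")],
            "allowed_surfaces": ["vertical", "horizontal"],
            "shape": {"width": 0.4, "height": 0.3, "depth": 0.02},
            "movable": True,
        },
        {
            "name": "minimized",
            "condition": [("system_mode", "EQUAL_TO", "minimized")],
            "shape": {"width": 0.05, "height": 0.05, "depth": 0.01},
            "movable": False,
        },
    ],
}
```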
The enhancement data structure 500 includes attributes 502 and functions ("logic") 504. Each of the items listed in the enhancement attributes 502 and functions 504 is an example of an item that may be included in an enhancement data structure; in various embodiments, more, fewer, or other attributes and functions may be included in an enhancement data structure. In addition, an enhancement data structure may contain attributes and logic defined in manners other than those shown in Fig. 5. For example, instead of, or in addition to, imperatively defined functions, the enhancement logic may include declarative logic.
In example 500, the attributes 502 include an enhancement ID 506, an enhancement name 508, an enhancement type 510, a parent_enhancement 512, display_modes 514, a current location 524, a current_display_mode 526, current_dimensions 528, and an owner 530.
When an enhancement is created, the enhancement ID 506 may be set to the next available ID. The enhancement name 508, enhancement type 510, and display_modes 514 may be specified in the manifest for the enhancement provided in the initial request for the enhancement. The enhancement name 508 may be set as a string. The enhancement type 510 may be set to one of a predefined set of available types, such as person, 2D media, post, event, 3D model, freeform, and the like. In some embodiments, the enhancement type may control the class of the enhancement, such that the enhancement contains the attributes and/or logic defined for that enhancement type. In some cases, these attributes may be set automatically by selecting data corresponding to the type and parameters set in the enhancement's manifest. For example, an enhancement manifest may specify the enhancement type "person" and the person ID 488923. Upon receiving this request, the artificial reality system may create an instance of the person enhancement class, with preset display_modes, by executing a constructor that pulls data relevant to the node with person ID 488923 from the social graph, such as an avatar or profile picture, and default UI elements (e.g., an instant messaging control). The resulting person enhancement may also contain predefined person logic, such as functionality to send an IM (instant message) to the person in the enhancement when the instant messaging UI control is activated, or to tag the person from the enhancement in a post when the enhancement is positioned to touch an enhancement of the post type.
The enhancement's display_modes 514 may include display modes defined in the enhancement's manifest, default display modes, or display modes defined for the enhancement's type. A display mode may specify a condition that activates the display mode. In some cases, the condition may specify values that certain contextual factors must have, e.g., using logical operators such as AND, OR, NOT, EQUAL_TO, LESS_THAN, GREATER_THAN, and the like. A display mode may also specify features that a surface must have for the enhancement to be added to that surface in that display mode, such as whether the surface is vertical or horizontal, or the type of object the surface is associated with (e.g., table, hand, face, etc.). A display mode may specify the shape (e.g., outline and size) of the enhancement while it is in that display mode. A display mode may contain a content variable in which the presentation data for the enhancement while in that mode is stored. For example, each content item added to an enhancement may specify in which display mode (or set of display modes) the content item is to be output as presentation data, and how the content item is to be displayed in those modes, e.g., offset from the enhancement's origin, orientation, size, shape, volume, etc. A display mode may also specify whether the enhancement is movable while in that display mode. A display mode may contain many other attributes, not shown here, defining how the enhancement is output when the condition for the display mode is true.
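To illustrate the per-display-mode content variable, the sketch below stores, for each content item, the display modes it appears in and its layout in each mode, and resolves the presentation data for the current mode; the names and layout fields are illustrative assumptions.

```python
# Each content item records which display modes it appears in and how it is
# presented in each (offset from the enhancement's origin, size, and so on).
content_items = [
    {"ref": "photo.png", "modes": {"interactive": {"offset": (0, 0, 0), "scale": 1.0},
                                   "minimized": {"offset": (0, 0, 0), "scale": 0.1}}},
    {"ref": "caption.txt", "modes": {"interactive": {"offset": (0, -0.15, 0), "scale": 1.0}}},
]


def presentation_data(items: list, current_mode: str) -> list:
    """Return only the content (with its per-mode layout) output in this mode."""
    output = []
    for item in items:
        layout = item["modes"].get(current_mode)
        if layout is not None:
            output.append({"ref": item["ref"], **layout})
    return output


# In "minimized" mode only the photo is output, at a tenth of its size.
assert presentation_data(content_items, "minimized") == [
    {"ref": "photo.png", "offset": (0, 0, 0), "scale": 0.1}
]
```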
In some embodiments, all enhancements may have a default set of display modes, such as display modes corresponding to artificial reality system modes (e.g., an audio-only mode, a minimized or "glint" mode in which the enhancement is reduced to at most a maximum size, an active mode in which enhancements are moved to the side of the user's field of view, or an interactive mode in which enhancements may use their full size and be positioned anywhere in the artificial reality environment). These display modes may be inherited from the base enhancement class. In some embodiments, particular types of enhancements may have display modes defined for those types, such as display modes inherited from one of the enhancement type classes that extend the base enhancement class. Additional display modes may be provided in the manifest supplied when an enhancement is requested. In some implementations, an enhancement type's display modes may extend the default enhancement display modes. In some cases, the display modes in the manifest may extend the default enhancement display modes or the enhancement type display modes. A display mode that extends another may set additional condition factors for when the extending display mode occurs and/or set additional features configuring the enhancement's output. For example, interactive display mode 516 is enabled when the artificial reality system mode contextual factor indicates an "interactive" display mode; this mode sets the enhancement as placeable on vertical or horizontal surfaces and sets the shape of the enhancement. Vertical display mode 520 extends interactive display mode 516, meaning that a precondition for enabling vertical display mode 520 is that the condition of interactive display mode 516 is also true. When the vertical display mode is enabled, the display properties of the interactive mode are enabled, while the vertical display mode further limits the enhancement to vertical surfaces only and sets another shape for the enhancement (not exceeding the shape defined by interactive display mode 516). As shown in this example, a display mode that is extended by another display mode may set constraints on display mode parameters (e.g., the shape object, the surfaces the enhancement can be on, or whether the enhancement is movable) that cannot be changed or exceeded by the extending display mode.
In some embodiments, when creating an enhancement, the artificial reality system may examine the contextual factors used in each display mode to determine which contextual factor changes the enhancement should be registered to receive. Additional details regarding registering enhancements for contextual factors are provided below with reference to Fig. 6A.
The enhancement ID 506 may be set to the next available ID when the enhancement is created. The parent_enhancement 512 may be set to the ID of the element that requested creation of the new enhancement. The current_display_mode 526 may indicate which of the display modes 514 the enhancement is currently using. This may initially be set to a default display mode, to a particular (e.g., the first) display mode provided in the enhancement's manifest, or to whichever display mode matches the current contextual factors. The current_dimensions 528 may be set based on the shape specified in the current display mode. The owner 530 may initially be set to the entity that requested the enhancement or to an entity specified in the enhancement request. In some embodiments, enhancement ownership may later be changed by changing the one or more entities indicated by the owner 530 variable, and ownership may confer certain rights (e.g., setting certain enhancement attributes and/or invoking certain enhancement logic). In some embodiments, other permissions may be set for enhancement attributes and/or functions, specifying which entities have read/write/execute rights to them. In some embodiments, the artificial reality system maintains a hierarchy of enhancements based on which enhancements created other enhancements and/or which enhancements are owners of other enhancements, where the root of the hierarchy is the shell program. When an enhancement that is the owner of another enhancement is closed, ownership of the owned enhancement may pass to the owner of the closed enhancement.
The current location 524 may initially be set to an initial location set in the manifest, to a default location (e.g., attached to a surface defined by the requesting user's hand, allowing the user to further place the enhancement), or to a location indicated when the new enhancement request was made (e.g., a location that was the user's point of focus at the time or that relates to the location from which the enhancement was requested); or the location may initially be unset (such that the enhancement is hidden) until the requestor sets a location. The current location of an enhancement may change, depending on whether the enhancement's current display mode allows the enhancement to be repositioned ("movable").
In example 500, function 504 includes a function 532 for adding content to an enhancement, a function 534 for deleting content from an enhancement, a function 536 for setting a location of an enhancement, a function 540 for setting a current display mode, and a function 542 for updating an owner of an enhancement.
In example 500, the add content (addContent) function 532 takes a generatable manifest parameter. When this parameter is provided, the content added to the enhancement is operable to generate a new enhancement; e.g., a user touching the enhancement and pulling out the generatable content item initiates a request for a new enhancement. The manifest set in the generatable manifest parameter for the generatable content item is then used in the request for the new enhancement. In some embodiments, where a content item can generate a new enhancement, the content item may be given a particular visual affordance to indicate to the user that it can be selected to create a new enhancement (e.g., a particular highlight, coloring, animation, etc.). As with the attributes 502, enhancements may have other logic elements (not shown) for setting or retrieving enhancement attributes or otherwise causing enhancement actions. Also similar to the attributes 502, enhancement functions or other logic may be specified in the enhancement's manifest, in the enhancement class for the enhancement's type, or in the base enhancement class.
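A hypothetical sketch of such an add-content function with a generatable-manifest parameter is shown below; the function names, the stand-in for the shell's request call, and the affordance handling are assumptions for illustration.

```python
from typing import Optional


def add_content(enhancement: dict, content_ref: str,
                generatable_manifest: Optional[dict] = None) -> None:
    """Add a content item; when a generatable manifest is supplied, the item
    can later be pulled out of the enhancement to request a new enhancement."""
    item = {"ref": content_ref, "generatable_manifest": generatable_manifest}
    if generatable_manifest is not None:
        # A visual affordance (highlight, animation, etc.) signals that the
        # item can be selected to create a new enhancement.
        item["affordance"] = "highlight"
    enhancement["content"].append(item)


def on_item_pulled_out(shell_request_enhancement, item: dict):
    """When the user grabs a generatable item, its stored manifest is used
    in the request for the new enhancement."""
    if item.get("generatable_manifest") is not None:
        return shell_request_enhancement(item["generatable_manifest"])
    return None


# Example use, with a stand-in callable in place of the shell's request call.
enhancement = {"content": []}
add_content(enhancement, "photo.png", generatable_manifest={"type": "2D media"})
new_handle = on_item_pulled_out(lambda manifest: {"manifest": manifest},
                                enhancement["content"][0])
```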
Those skilled in the art will appreciate that the components illustrated in Figs. 1-4 described above, and in each of the flow diagrams discussed below, may be altered in a variety of ways. For example, the order of the logic may be rearranged, sub-steps may be performed in parallel, illustrated logic may be omitted, other logic may be included, and so forth. In some embodiments, one or more of the components described above may execute one or more of the processes described below.
Fig. 6A is a flow chart illustrating a process 600 used by an artificial reality system shell program, in some embodiments of the present technology, to respond to a request for a new enhancement. In various embodiments, process 600 may be performed by the artificial reality system shell program in response to an enhancement request (e.g., generated with process 650 of Fig. 6B). For example, the enhancement request can be generated from a user interaction with a UI control for creating an enhancement (e.g., selecting a class of enhancement from a launcher menu) or with an existing enhancement (e.g., pulling a generatable element out of an existing enhancement). As another example, the request may be made in response to logic executed by an existing enhancement (e.g., the enhancement has logic to generate a manifest and request an enhancement when a certain context occurs). In some embodiments, the enhancement may set attributes and/or logic in the manifest, or there may be a predefined manifest that the enhancement can retrieve (e.g., a particular enhancement type may have a predefined manifest).
At block 602, the process 600 may receive a request for a new enhancement associated with a manifest. As discussed above with respect to Fig. 5, the manifest may specify properties and/or logic of the new enhancement, such as display modes (e.g., one or more contextual factor values that must occur to enable a display mode; the types, locations, shapes, or orientations of surfaces on which the enhancement can be placed; whether the enhancement is movable; a volume for audio presentation data; etc.), a name, ID, and type for the enhancement, an owner of the enhancement, an initial location for the enhancement, and so on. In some implementations, default or inherited properties and/or logic may also be specified for the enhancement, such as having all enhancements contain certain display modes (e.g., audio only, minimized, interactive, etc.) corresponding to artificial reality system modes. In some implementations, the display modes in the manifest may extend these default display modes, which allows the default display modes to set constraints for the extending display modes. Examples of the contextual factors that the conditions of the various display modes may specify include characteristics of surfaces, relationships to other real or virtual objects, movement characteristics, lighting, time, date, user interactions, detected sounds, artificial reality system modes, and the like. Examples of predefined enhancement types may include person, 2D object, 3D object, post, event, or freeform. In some embodiments, an enhancement type corresponds to a type of node in the social graph. In some implementations, a particular type of enhancement may have default values for that type, such as default properties, logic, and/or automatically added content items (e.g., a post enhancement may have predefined visual structures, controls, and automatically added content based on the node in the social graph corresponding to the post).
At block 604, process 600 may generate an enhancement using values and/or logic set based on the manifest associated with the new enhancement request. In some implementations, this may include creating an enhanced data structure similar to data structure 500 (fig. 5). In some embodiments, default, type-based values or other values separate from the manifest may also be set in the enhanced data structure.
At block 606, the process 600 may return a handle to the generated enhancement in response to the request. This handle may enable the requestor to begin populating the enhancement and/or placing the enhancement in the artificial reality environment. Notably, this allows the requestor to begin filling and/or placing the enhancement before it is fully formed. For example, when the handle is provided, the enhancement may still have default or type-specific content being written into it by the artificial reality system shell program, may not yet be registered for contextual factor notifications, or may have other initialization procedures not yet completed. These additional portions of the initialization may be completed while the requestor uses the provided enhancement handle. Thus, enhancement placement and content population may be performed concurrently with the initialization procedures, providing significant efficiency gains.
At block 608, the process 600 may register the enhancement for contextual factor notifications. Registering the enhancement for contextual factor notifications may include assigning the enhancement's identifier to each contextual factor, on a list of contextual factors, for which the enhancement is being registered. Additional details regarding notifying enhancements of changes in contextual factors are provided below with reference to Fig. 7. The contextual factors for which the enhancement is registered may be selected based on the context specified in the manifest (or in other parameters set for the enhancement). For example, process 600 may analyze the display modes defined for the enhancement to determine which contextual factors must be checked to determine whether each display mode is enabled, and the process may register the enhancement to be informed of any changes in those contextual factors. Additional initialization of the enhancement may also be performed at block 608, such as the shell adding content items specified for the enhancement's type to the enhancement.
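For illustration, the registration step can be sketched as collecting every contextual factor named in the manifest's display-mode conditions and registering the enhancement's identifier for each; the condition encoding below is a simplifying assumption.

```python
def contextual_factors_for(manifest: dict) -> set:
    """Collect every contextual factor named in any display-mode condition,
    i.e., the factor changes the enhancement must be registered to receive."""
    factors = set()
    for mode in manifest.get("display_modes", []):
        for factor, _operator, _value in mode.get("condition", []):
            factors.add(factor)
    return factors


def register_enhancement(registry, enhancement_id: int, manifest: dict) -> None:
    # registry is any object exposing register(factor, enhancement_id).
    for factor in contextual_factors_for(manifest):
        registry.register(factor, enhancement_id)


manifest = {"display_modes": [
    {"name": "interactive", "condition": [("system_mode", "EQUAL_TO", "interactive")]},
    {"name": "dark", "condition": [("lighting", "LESS_THAN", 10)]},
]}
assert contextual_factors_for(manifest) == {"system_mode", "lighting"}
```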
At block 610, the process 600 may execute a placement procedure for the enhancement. In some embodiments, block 610 may be performed before block 606, e.g., to set an initial enhancement placement, which the requestor may update, before the handle is provided to the requestor. The placement procedure may include setting a default location or setting a location specified in the request received at block 602 (e.g., based on the location of the user's area of focus, or relative to the requesting entity, such as on the same surface as the surface to which the requesting enhancement is attached or on a surface defined for the user's hand or face associated with the request). In some implementations, the placement procedure may include keeping the enhancement invisible until the requestor selects a location. In some embodiments where the user is involved in manually selecting a placement location, the artificial reality system may highlight locations that are valid for the enhancement's current display mode. For example, if the current display mode of the enhancement specifies that the enhancement must be placed on a vertical surface, surfaces established on walls in the artificial reality environment may be highlighted as a visual affordance so that the user knows where she can place the enhancement. After the placement procedure of block 610 (or after the contextual factor registration of block 608 in cases where block 610 is performed earlier), process 600 may end.
Fig. 6B is a flow chart illustrating a process 650 for submitting a request for a new enhancement to the artificial reality system shell program in some embodiments of the present technology. Process 650 may be performed by the shell or by an existing enhancement, either of which may monitor for enhancement creation events based on user interactions or contextual factors. At block 652, process 650 may identify an enhancement creation event. For example, a user may perform an interaction such as activating a shell program control (e.g., activating an interface for enhancement selection and identifying one or more enhancements to instantiate) or selecting a content item, defined in an existing enhancement, that is able to generate a new enhancement (e.g., a content item may be given a visual affordance when selecting it can cause a new enhancement to be generated, such as when the content item is associated with a manifest). As a more specific example, the content item may be a photo in a social media post enhancement, which may generate a new enhancement when a user performs a grab gesture on the photo and pulls it out of the post enhancement.
An existing enhancement may also execute logic, based on the current context, to generate a new enhancement request. For example, a 3D model enhancement located on a cooktop surface may depict a clock whose display is updated based on a global time variable. The clock enhancement may be registered with the artificial reality system to receive a contextual factor notification for an object when the user moves the clock enhancement into contact with that object. When such a contextual factor is provided to the clock enhancement, the clock enhancement may invoke internal logic to determine that the other object is labeled as food and request that a new 3D model timer enhancement be placed above the object. Upon receiving the handle for the timer enhancement, the clock enhancement may populate the timer enhancement with a countdown timer model whose amount of time is set based on a mapping of food types to cooking times.
At block 654, the process 650 may obtain a manifest corresponding to the enhancement creation event identified at block 652. In some embodiments, the manifest may be a predefined manifest specified for the enhancement type corresponding to the enhancement creation event. In other embodiments, the enhancement creation event may be associated with an existing manifest or with logic for generating a manifest. For example, a content item added to an enhancement may be associated with a manifest to use when that content item is selected to generate a new enhancement. As another example, the logic of an enhancement may have various manifest templates, one of which may be selected and populated based on user actions and/or other contextual factors.
At block 656, process 650 may send a request for a new enhancement, associated with the manifest obtained at block 654, to the shell program of the artificial reality system. The shell of the artificial reality system may respond to this request using process 600 (Fig. 6A) by returning a handle to the new enhancement, which is received at block 658.
Although, as discussed above, blocks of the processes described herein may be omitted or rearranged in various embodiments, block 660 is shown in dashed lines to indicate that block 660 may not be performed in some cases. For example, block 660 may not be performed where a location is provided with the new enhancement request for the shell program to set the enhancement location, where the enhancement is not movable, where the shell program of the artificial reality system facilitates positioning of the enhancement via process 600, and the like. When block 660 is performed, at block 660, process 650 may place the new enhancement (e.g., set its position and/or orientation) in the artificial reality environment. In some embodiments, the enhancement may initially be attached to the user's hand, placed in a default position, placed in a position relative to the requesting enhancement, etc. The requesting enhancement and/or the user may invoke functionality of the new enhancement to place the new enhancement in the artificial reality environment, for example, where the user performs a gesture to move the enhancement (causing a function of the enhancement that sets its location to be invoked).
Additionally, once the process 650 receives the enhancement's handle, it may begin adding content to the enhancement. In some embodiments, the enhancement may be pre-populated with some content, e.g., content added by the shell program of the artificial reality system. In some implementations, the process 650 can add content to the enhancement by calling a function of the enhancement (accessible via the handle), passing information such as a reference to the content, a location within the enhancement where the content should be displayed, which display modes the content is presented in, whether the content can generate a new enhancement, and so on. In some cases, adding content to an enhancement may include adding new logic to the enhancement and/or registering the enhancement to receive additional contextual factor notifications. In some embodiments, the shell of the artificial reality system may provide the enhancement's handle before the enhancement is fully initialized, allowing placement of the enhancement and/or filling of content to be performed before the shell of the artificial reality system completes creation of the enhancement. After adding the enhancement content and/or placing the enhancement, process 650 may end.
FIG. 7 is a flow chart illustrating a process 700 for enabling enhancements to respond to a context by providing context factors to relevant enhancements in some embodiments of the present technology. The process 700 may be performed by portions of an artificial reality system that receive artificial reality environment and sensor data (e.g., SLAM data, data from IMUs or other sensors connected to the artificial reality system, camera data), and systems that analyze and classify such data (e.g., gesture recognizers, object identification systems, environment mapping systems, surface identification systems, etc.). The artificial reality system may thus perform process 700 in response to periodic environmental factor checks or based on triggers occurring on certain sensors, data analysis, or artificial reality environmental events.
At block 702, the process 700 may identify a change in (or establishment of) one or more contextual factors. In some implementations, this may include determining that the change in a contextual factor's value is above a threshold established for that contextual factor. For example, movement of an object may be treated as a contextual factor change when the object is determined to have moved at least one half inch. As another example, an audio contextual factor change may be identified when the audio volume changes by more than 15 decibels.
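A minimal sketch of such threshold-based change detection follows; the factor names and threshold values simply mirror the examples above and are otherwise hypothetical.

```python
# Per-factor thresholds: a change is only reported once it exceeds the
# threshold established for that contextual factor.
THRESHOLDS = {"object_position_m": 0.0127, "audio_level_db": 15.0}   # ~0.5 inch, 15 dB


def significant_changes(previous: dict, current: dict) -> dict:
    """Return the contextual factors whose change exceeds their threshold."""
    changed = {}
    for factor, value in current.items():
        threshold = THRESHOLDS.get(factor, 0.0)
        if factor not in previous or abs(value - previous[factor]) >= threshold:
            changed[factor] = value
    return changed


# A 10 dB shift is below the 15 dB threshold; a 20 dB shift is reported.
assert significant_changes({"audio_level_db": 40.0}, {"audio_level_db": 50.0}) == {}
assert significant_changes({"audio_level_db": 40.0}, {"audio_level_db": 60.0}) == {"audio_level_db": 60.0}
```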
At block 704, process 700 may identify augmentations registered for notifications of the changed contextual factors. This may be determined based on a mapping of contextual factors to existing augmentations, such as a mapping created by performing iterations of block 608 (FIG. 6A). At block 706, the process 700 may notify the augmentations identified at block 704 of the contextual factor changes, identified at block 702, for which those augmentations have registered to receive signals.
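A minimal sketch of blocks 702-706, assuming hypothetical names such as ContextDispatcher and on_context_factor: a contextual factor change is detected against a per-factor threshold, the augmentations registered for that factor are looked up, and each is notified.

```python
from collections import defaultdict

class ContextDispatcher:
    def __init__(self):
        self.thresholds = {"object_moved_inches": 0.5, "audio_volume_db": 15.0}
        self.last_values = {}
        self.registry = defaultdict(list)   # factor name -> registered augmentations

    def register(self, factor, augment):
        self.registry[factor].append(augment)

    def update(self, factor, value):
        previous = self.last_values.get(factor)
        changed = previous is None or \
            abs(value - previous) >= self.thresholds.get(factor, 0.0)
        if changed:
            self.last_values[factor] = value
            for augment in self.registry[factor]:         # block 704: find registrants
                augment.on_context_factor(factor, value)  # block 706: notify them
```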
In some embodiments, providing those notifications to the augmentations may cause the augmentations to invoke corresponding logic. For example, an augmentation may determine whether the contextual factors provided to it match a conditional statement defined for a display mode and whether the provided values cause that conditional statement to evaluate to true, thereby enabling the corresponding display mode. This may include applying the contextual factors to an ordered set of display modes. For example, this may include finding the deepest enabled display mode among hierarchical display modes that extend from one another, or traversing the hierarchical display modes to enable each successive display mode whose conditional statement evaluates to true and whose parent display mode (the mode it extends) has been enabled. In some embodiments, providing the contextual factors may also cause the augmentation to invoke logic, such as executing a function that is mapped to a conditional statement or that takes as a parameter a contextual factor having a known value.
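The hierarchical display mode selection described above could be sketched as follows (DisplayMode, condition, and the parent-before-child ordering are illustrative assumptions): the deepest mode whose condition evaluates to true, and whose parent mode is enabled, is selected.

```python
class DisplayMode:
    def __init__(self, name, condition, parent=None):
        self.name = name
        self.condition = condition      # callable taking the contextual factors
        self.parent = parent            # the mode this mode extends, if any

def select_display_mode(modes, factors):
    enabled = set()
    selected = None
    for mode in modes:                  # assumed ordered parent-before-child
        parent_ok = mode.parent is None or mode.parent.name in enabled
        if parent_ok and mode.condition(factors):
            enabled.add(mode.name)
            selected = mode             # deepest enabled mode seen so far
    return selected

wall = DisplayMode("wall_hanging", lambda f: f.get("surface") == "vertical")
framed = DisplayMode("framed_large", lambda f: f.get("slot_width", 0) > 0.5, parent=wall)
print(select_display_mode([wall, framed], {"surface": "vertical", "slot_width": 0.8}).name)
```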
Although, as discussed above, in various embodiments, blocks of the processes described herein may be deleted or rearranged, block 708 is shown in dashed lines to indicate that block 708 may not be performed in some cases. For example, block 708 may not be performed where the shell of the artificial reality system does not perform a separate step to ensure that augmentations meet certain constraints (e.g., constraints applied for a particular mode). Block 708 also need not be used if the artificial reality system shell has created the augmentations such that every augmentation must enable a matching display mode upon receipt of contextual factors, and has ensured that the augmentations contain required logic and attributes that cannot be altered (e.g., display modes for a particular artificial reality system mode, where an extending display mode may only add to the required display mode without overriding its constrained display attributes). In other cases, however, block 708 may be performed, for example, where an augmentation's owner may provide a display mode that can be activated but does not conform to system constraints. At block 708, process 700 may set augmentation properties to force the desired display change. This may include disabling an augmentation, causing an augmentation to switch to a different display mode, or not allowing an augmentation to output a portion of its current presentation data (e.g., changing the augmentation's shape or clipping output outside a boundary region). For example, an artificial reality system may switch from an "interactive mode," in which an augmentation may use its entire volume in any allowed location, to an "active user mode," in which the center of the user's field of view is kept clear and virtual objects move to a parking surface at one side of the user's field of view. In this mode, the artificial reality system can crop any augmentation that shows content beyond the maximum augmentation size defined for the mode and can ensure that all augmentation locations are set to the parking surface. After block 708 (or after block 706 if block 708 is not performed), process 700 may end.
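One possible sketch of a shell-side constraint pass such as block 708, assuming an "active user mode" with a maximum augmentation size and a parking surface; the Augment dataclass and size limit are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Augment:
    size: tuple
    position: tuple = (0.0, 0.0, 0.0)

MAX_SIZE_ACTIVE_USER = (0.3, 0.3, 0.05)   # assumed per-mode limit, in meters

def enforce_active_user_mode(augments, parked_slots):
    """Clip oversized augmentations and park them at the side of the field of view."""
    for augment, slot in zip(augments, parked_slots):
        augment.size = tuple(min(dim, cap) for dim, cap
                             in zip(augment.size, MAX_SIZE_ACTIVE_USER))
        augment.position = slot            # keep the center of the view clear

augments = [Augment(size=(0.5, 0.4, 0.02)), Augment(size=(0.2, 0.2, 0.02))]
enforce_active_user_mode(augments, parked_slots=[(0.8, 1.2, -1.0), (0.8, 0.9, -1.0)])
```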
A surface is an area in a 3D artificial reality environment to which augmentations can be attached. A surface may be a planar space (e.g., a wall, a table, etc.), an area or volume around an object (e.g., a user's face, a monitor, a book, etc.), or an area or volume in space (e.g., a point, a plane floating in space, or a volume fixed somewhere). A surface may be defined by the user with various gestures, or may be defined automatically by the system (e.g., upon identifying certain specific objects or objects with which the user has interacted). For example, a surface may be an area ("bubble") around the user, an identified flat surface, or a volume. In some embodiments, a surface may be determined both automatically and by user action, such as the system identifying the space and the user updating or modifying the surface properties. Some attributes, such as size, orientation, shape, or meta-tags (based on object recognition), may be specified automatically for a surface, while other attributes may be defined by a user, e.g., from a predefined set of attribute categories.
An augmentation may be added to a surface, for example, by placing the augmentation on or near the surface. A surface may include a layout that controls the arrangement of augmentations attached to the surface. The surface layout may be user-selected or automated (e.g., based on surface size, shape, or other surface features, and/or based on the number, size, or type of augmentations placed on the surface). In some cases, surfaces may nest, with one surface added to another. When an augmentation is placed on a surface (or when an augmentation is generated on a surface, e.g., when an application is opened while the user is focused on the surface), attributes of the surface may be provided to the augmentation, which may use these attributes to configure its display or actions. The application that creates an augmentation may define rules for how the augmentation displays itself or how it functions in different surface contexts.
FIG. 8 is a flow chart illustrating a process 800 used in some embodiments of the present technology for augmentation interaction with a virtual surface. In some implementations, process 800 may be performed when an augmentation is associated with a surface, such as when the augmentation is initially created on the surface or is later placed on the surface by user interaction, by direction of another augmentation, by execution of the augmentation's logic, or by a change in the augmentation's display mode that allows the augmentation to be added to a particular surface.
At block 802, the process 800 may receive an identification of a surface. This may be an existing surface or a surface created to accommodate the new augmentation. In various embodiments, the artificial reality system may have created one or more surfaces. In various embodiments, a surface may be (i) synthetic (automatically generated by the artificial reality system without regard to the environment, e.g., not world-locked), (ii) semantic (detected by a machine learning identifier, e.g., a hand, face, table, or other specific object), or (iii) geometric (a geometric shape identified in the environment, e.g., a floor, wall, etc.). Thus, an artificial reality system may create a surface by defining a surface positioned relative to the artificial reality system, by identifying an artificial reality environment surface geometry or an object type specified for creating a surface, or in response to a user defining a surface (e.g., by performing an air tap, outlining the surface with gestures, placing a hand, controller, or other peripheral device on the surface, defining a plane in the air with the user's hands as the surface, interacting with an object of a given type (e.g., a handle on a bicycle) to define the surface, etc.).
At block 804, process 800 may determine surface attributes and/or an associated context. A surface may have attributes such as the types of objects that may be placed on it, its shape, its location in the artificial reality environment, a list of augmentations on the surface, meta-tags (e.g., machine learning tags such as tags identifying real-world objects on the surface, surface types, etc.), a layout, or other features. The surface layout may be selected by a user or automatically, for example, based on the surface size; the surface shape; the number, size, or type of augmentations placed on the surface; etc. In some embodiments, the layout may be dynamic, e.g., the first augmentation added to the surface is placed in the middle; adding a second augmentation moves the first so the layout is two side-by-side elements; adding a third moves the first two so the layout is three equally spaced side-by-side elements; and so on. Exemplary dynamic layouts include: a list, e.g., a horizontal arrangement of augmentations evenly spaced from one another; a stack, e.g., a vertical arrangement of augmentations evenly spaced from one another; a grid, e.g., a 2D or 3D grid for augmentations, with x, y (and z) counts specified according to the number of augmentations on the surface; and free-form, where no layout slots (in which augmentations are positioned) are provided. In some embodiments, one or more surface attributes may be set to default values, or to default values selected based on a mapping of specified surface features (e.g., orientation, object type, shape) to other surface features (e.g., layout, meta-tags, etc.).
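The dynamic layouts listed above might compute slot positions along these lines (a rough sketch; layout_slots and the spacing value are assumptions, not the disclosed layout engine).

```python
import math

def layout_slots(kind, count, spacing=0.3):
    """Return slot offsets (x, y) on the surface plane for `count` augmentations."""
    if kind == "list":                      # horizontal row, evenly spaced
        return [(i * spacing, 0.0) for i in range(count)]
    if kind == "stack":                     # vertical column, evenly spaced
        return [(0.0, i * spacing) for i in range(count)]
    if kind == "grid":                      # square-ish 2D grid sized to the count
        cols = max(1, math.ceil(math.sqrt(count)))
        return [((i % cols) * spacing, (i // cols) * spacing) for i in range(count)]
    return [None] * count                   # free-form: no slots, keep placement

print(layout_slots("grid", 5))  # e.g. recomputed each time an augmentation is added
```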
At block 806, the process may identify one or more augmentations associated with the surface. These may be augmentations attached to the surface or augmentations within a threshold distance of the surface. In some embodiments, an augmentation may be associated with the surface in response to a user placing the augmentation on the surface (or performing an interaction that connects the augmentation to the surface, e.g., based on the user's attention, voice commands, etc.). In other cases, the augmentation may be generated on the surface, such as by another augmentation on the same surface. In yet other cases, an augmentation's logic or display mode may cause the augmentation to attach to a particular type of surface or to the closest surface of a particular type.
In some embodiments, surfaces may be a type of augmentation in the artificial reality environment, and some surfaces may be added to other surfaces, allowing surfaces to nest within one another. In some embodiments, a surface may have logic that automatically populates the surface with augmentations, e.g., based on the surface type, parameters, known user information, etc. For example, a refrigerator surface may automatically populate itself with recipes because it has a "food" meta-tag, and the recipes may be based on a history of the types of food the user has "liked," as identified in a social graph.
At block 808, process 800 may provide the attributes and/or contextual factors determined at block 804 to the associated augmentations identified at block 806, so that the identified augmentations may invoke corresponding logic or display modes. For example, the surface may indicate a position for an augmentation, such as an open slot in the surface layout, into which the augmentation may move itself by setting its position attributes. As another example, attributes of the surface, e.g., whether it is a vertical or horizontal surface, may be indicated to an augmentation so that the augmentation selects a corresponding display mode. As yet another example, other objects associated with the surface may be indicated to an augmentation, allowing the augmentation to invoke logic corresponding to those objects or to the types assigned to those objects. As a more specific example, when a social media "post" augmentation is placed on a surface, the "post" augmentation may be informed that "person" augmentations are also located on the surface, which may invoke logic defined for the post augmentation to tag those people in the post when they have an augmentation assigned to the same surface. After providing the determined attributes and/or contextual factors to the identified augmentations at block 808, process 800 may end.
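A hedged sketch of block 808: the surface passes its attributes and an open slot to an attached augmentation, which uses them to pick a display mode and set its position. PhotoAugment and on_surface_attributes are assumed names, echoing the vertical/horizontal example in FIG. 15.

```python
class PhotoAugment:
    def __init__(self):
        self.display_mode = "wall_hanging"
        self.position = None

    def on_surface_attributes(self, attributes, slot):
        # choose a display mode matching the surface orientation
        self.display_mode = ("standing_frame"
                             if attributes["orientation"] == "horizontal"
                             else "wall_hanging")
        self.position = slot            # move into the offered open slot

surface_attributes = {"orientation": "horizontal", "meta_tags": ["table"]}
augment = PhotoAugment()
augment.on_surface_attributes(surface_attributes, slot=(0.3, 0.0))
print(augment.display_mode, augment.position)
```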
In some embodiments, augmentations may interact with other objects (real objects and/or other augmentations) or events. For example, a "cooking" surface may have an egg timer augmentation that automatically starts counting when the augmentation is dragged onto a real-world pot of water. An augmentation may have logic and/or display modes defined by the augmentation creator, and/or logic and/or display modes defined for the augmentation's type, that may be triggered when certain parameters or contextual factors are provided to the augmentation. FIG. 9 is a flow chart illustrating a process 900 used in some embodiments of the present technology for augmentation interactions with other objects. Process 900 may be performed in response to various events, such as the creation of an augmentation, the addition of an augmentation to a surface, a user-specified interaction between augmentations, or another event forming a relationship between an augmentation and another object.
In some implementations, augmentations may receive parameters related to other augmentations, such as location, type, shape, content items written into the other augmentations, handles for invoking logic in the other augmentations, and so forth. At block 902, the process 900 may register a relationship for sharing such parameters between two or more augmentations.
In some cases, security and privacy policies may limit which parameters of one augmentation may be exposed to another augmentation, or may specify when such sharing may occur. For example, access may be provided only A) to features of augmentations generated by the same parent, B) in response to user actions indicating interactions between augmentations (e.g., touching multiple augmentations or dragging something out of one augmentation and into another), and/or C) when the augmentations are assigned to the same surface. In various embodiments, the type of relationship may control which features may be shared between augmentations. For example, a user action that brings two augmentations into contact may cause a wide range of augmentation properties to be shared, while two augmentations located on the same surface may share only location and object type properties.
In some cases, an augmentation may register with a shell program of the artificial reality system to receive information about other augmentations or parameters from other augmentations. In other cases, the shell may determine which such attributes should be provided to the augmentation (e.g., based on an analysis of the manifest provided to the shell to create the augmentation, to determine which parameters the augmentation's logic and/or display modes need to access). In some implementations, the shell may include security and privacy policies that control which of the attributes the augmentation requests are actually provided to it. For example, an augmentation may register to receive the lighting state, the user's location, and the identification of objects within a threshold distance of it. The shell program may determine whether the augmentation is allowed to receive such information and, if so, complete the registration so that the parameters are provided when they are set or changed. In some embodiments, the shell of the artificial reality system may have one or more global parameters, and augmentations may pull the values of these global parameters, such as a list of surfaces, the current mode of the artificial reality system, the current time or date, etc.
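A sketch of the shell-side gating described above, under assumed names (ALLOWED_BY_RELATIONSHIP, register_for_parameters): the relationship type determines which requested parameters an augmentation may register for, and registration is completed only for the allowed subset.

```python
ALLOWED_BY_RELATIONSHIP = {
    "same_parent": {"position", "object_type", "content", "lighting_state"},
    "same_surface": {"position", "object_type"},
    "none": set(),
}

def register_for_parameters(dispatcher, augment, requested, relationship):
    """Complete registration only for parameters the policy allows."""
    allowed = ALLOWED_BY_RELATIONSHIP.get(relationship, set())
    granted = [p for p in requested if p in allowed]
    for parameter in granted:
        dispatcher.register(parameter, augment)   # notified when set or changed
    return granted

# e.g. an augmentation asking for more than the policy allows receives a subset:
# register_for_parameters(dispatcher, augment,
#                         ["position", "user_location"], "same_surface")
```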
At block 904, the process 900 may identify events related to one or more augmentation parameters or contextual factors, based on the relationships identified at block 902. For example, an event may be identified when a contextual factor an augmentation has registered to receive is set or its value changes; when one augmentation contacts another augmentation (or comes within a threshold distance of a nearby augmentation); when a new augmentation is created; or upon any other event indicating that an augmentation may respond to another object.
At block 906, the process 900 may provide one or more augmentation parameters or contextual factors to the one or more augmentations to which the event identified at block 904 relates. This may allow the receiving augmentations to invoke logic corresponding to the received parameters or contextual factors and/or to enable display modes corresponding to them. As discussed above, an augmentation creator may define logic or display modes for an augmentation that are invoked or enabled upon receipt of a contextual factor or other information that relates to the logic or display mode or that makes its condition evaluate to true. In some embodiments, such logic or display modes may be inherited, for example, when an augmentation is created as an instance of an augmentation type (class) with predefined logic or display modes (e.g., a person augmentation class that extends a base augmentation class may have predefined logic for interacting with the depicted person's social media profile when certain events occur). After the parameters or contextual factors have been provided to the augmentations, process 900 may end.
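Type-level inheritance of predefined logic might look like the following sketch (Augment, PersonAugment, and on_event are illustrative assumptions): a person augmentation class extends a base class and reacts to a contact event with behavior defined for its type.

```python
class Augment:
    def __init__(self, name):
        self.name = name

    def on_event(self, event, factors):
        return None                           # base type: no predefined reaction

class PersonAugment(Augment):
    def __init__(self, name, person_id):
        super().__init__(name)
        self.person_id = person_id

    def on_event(self, event, factors):
        if event == "augment_contact":        # predefined logic for this type
            return f"open_profile:{self.person_id}"
        return super().on_event(event, factors)

print(PersonAugment("friend", person_id="u123").on_event("augment_contact", {}))
```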
FIG. 10 is a conceptual diagram illustrating an example 1000 of artificial reality system-controlled augmentations in an artificial reality space. The artificial reality environment shown in example 1000 includes a room 1002 having walls 1004 and 1006 and a floor 1008. The artificial reality environment in example 1000 is a mixed reality environment, including a real-world table 1010 and a real-world football 1014, as well as virtual objects: a dog 1016, an apple 1012 on the table 1010, and a photo 1018. In example 1000, attributes may be assigned to both real and virtual objects, as shown by the bounding boxes around objects 1012-1018, indicating that they have been identified (e.g., assigned a location, shape, etc.) by the artificial reality system, which maintains corresponding data structures.
FIG. 11 is a conceptual diagram continuing example 1000 of augmentations in an artificial reality space, in which the artificial reality system identifies virtual surfaces. In example 1000, the artificial reality system identifies geometric surfaces by automatically locating flat surfaces of at least a given size. In this way, the artificial reality system has automatically identified surfaces 1102 and 1104. The artificial reality system also identifies a surface on the floor, but the user instructs the system to divide it into surfaces 1110 and 1112. The user also causes (not shown) the artificial reality system to create a surface 1108 by placing her hand on it. Existing real and virtual objects located on these surfaces are automatically added to those surfaces.
FIG. 12 is a conceptual diagram continuing example 1000 of augmentations in an artificial reality space, in which the artificial reality system receives a request for a new photo augmentation with initial placement based on the user's gaze. In response to a user command (e.g., activating a UI element, not shown), the artificial reality system attaches a virtual tablet 1204 to the user's hand 1202, allowing the user to make various selections. In this example, the user has selected a picture option 1206 from the tablet 1204 and selected (not shown) a picture to add to the artificial reality environment. Based on this selection, the tablet 1204 (itself an augmentation) creates a manifest for a picture augmentation and sends a request to the shell program of the artificial reality system (e.g., by performing the process of FIG. 6B).
In FIG. 13, the artificial reality system creates an augmentation 1302 (e.g., by performing the process of FIG. 6A) in response to the new augmentation request. The artificial reality system also tracks the user's gaze direction 1208 (FIG. 12) by monitoring head position using IMU sensors and modeling the user's eye position using cameras. Based on the user's monitored gaze direction 1208, the system automatically identifies the surface 1104 as the surface to which to add the new picture augmentation 1302, in slot 1324 (one of the slots 1304-1326 of the layout of surface 1104). The shell program of the artificial reality system provides a new, empty augmentation 1302 that is initially placed in slot 1324. The handle of the augmentation 1302 is provided before the augmentation 1302 is populated with content, allowing the user to make additional placement selections while the augmentation requester writes content (the selected picture) into the augmentation 1302. Additionally, while the empty augmentation 1302 is initially placed in slot 1324, the artificial reality system has identified additional surfaces and layout slots where the augmentation may be placed and has provided visual indicators (1304-1350) to the user, indicating which locations are available for placing the augmentation. In example 1000, these slots are identified because the manifest for augmentation 1302 indicates that the augmentation can be placed on any flat vertical surface or on any flat horizontal surface that does not have the designation "floor." In FIG. 14, the new augmentation 1302 has been populated with the picture 1402 the user selected in FIG. 12, and has also been moved to slot 1344, which was selected by the user's gaze 1208 resting on slot 1344 for a threshold amount of time (e.g., three seconds).
FIG. 15 is a conceptual diagram continuing example 1000 of augmentations in an artificial reality space, in which augmentation 1302 is moved to a horizontal surface and a different display mode is selected in response to corresponding placement contextual factors. The user previously placed the augmentation 1302 on a surface on the wall 1004 (FIG. 14). The user may further move the augmentation, for example, by using the user's gaze, by performing a gesture (e.g., "dragging" the augmentation to a new location), by using voice commands, etc., to indicate a new surface. In example 1000, the user has chosen to move augmentation 1302 to surface 1108. In response to the augmentation's movement, the artificial reality system provides surface details to the augmentation 1302 (e.g., by invoking process 800 of FIG. 8). The augmentation 1302 has multiple display modes. While the augmentation 1302 was located on the vertical surface 1102, a first display mode was enabled in which the augmentation is shaped as a wall hanging. A second display mode has an alternate condition that is true when the augmentation 1302 is located on a horizontal surface (e.g., surface 1108). By enabling this second display mode in response to being placed on horizontal surface 1108, augmentation 1302 reconfigures itself as a standing picture frame. When the augmentation 1302 is placed on surface 1108, surface 1108 may identify a slot for the augmentation. In this example, surface 1108 has a layout that includes slots 1346, 1348, and 1350. The object 1012 is already in slot 1350, and slot 1348 is too small for the augmentation 1302 in its current frame shape, so surface 1108 selects slot 1346 for augmentation 1302 and informs augmentation 1302 of its location on surface 1108, allowing augmentation 1302 to set its location in the artificial reality environment to be in slot 1346.
FIG. 16 is a conceptual diagram continuing example 1000 of augmentations in an artificial reality space, in which the augmentation 1302 returns to the first display mode in response to the user moving the augmentation 1302 back to the vertical wall surface 1102. The user also selects a second picture 1604 of a user to add to the artificial reality environment and places the resulting picture augmentation 1602 in slot 1336 on surface 1102. Selection and placement of augmentation 1602 are performed in a manner similar to the selection and placement of augmentation 1302 (FIGS. 12-14), by selecting the same picture option 1206 from the virtual tablet 1204 (FIG. 12). In example 1000, sharing of information between augmentations created by the same requestor is allowed. In addition, the request from the picture option 1206 contains a manifest that specifies logic for augmentation interactions in certain cases. One such function specifies that if two pictures are placed within a threshold distance of each other and a social graph indicates that the two pictures depict users who are married to each other, the augmentations attach themselves together to display as a single augmentation with a heart-shaped border. To determine when to execute such logic, the augmentations register themselves with the artificial reality system to receive information about the placement and content of other picture augmentations placed on the same surface (e.g., using one or more of the processes 700-900 of FIGS. 7-9). In this example, when the augmentations 1602 and 1302 are placed adjacent to each other on the surface 1102, the artificial reality system determines that the augmentations have registered to learn about surrounding augmentations created by the same requestor and provides each augmentation with these contextual factors, providing the identities of the users depicted in pictures 1402 and 1604 and access to the social graph, allowing the augmentations to determine the type of relationship between the depicted users.
FIG. 17 is a conceptual diagram continuing example 1000 of augmentations in an artificial reality space, in which augmentation 1302 and augmentation 1602 respond to the received contextual factors. Logic executed by the augmentations 1302 and 1602 causes the augmentation 1602 to move itself to share slot 1338 with the augmentation 1302 and causes each augmentation to change its display mode so that each displays half of a wall-mounted picture 1702 depicting users 1604 and 1402 and half of a surrounding heart 1704.
FIGS. 18 and 19 are conceptual diagrams continuing example 1000 of augmentations in an artificial reality space (focused on surface 1102), in which the artificial reality system creates a new augmentation 1902 in response to the user pulling a generatable content item 1402 from an existing augmentation 1702. In example 1000, a parent augmentation may have content written into it that a user may select to create a new child augmentation, for example by performing a gesture that pulls the content from the parent augmentation. In some implementations, content written to an augmentation is associated with a manifest (or with a process for creating a manifest in response to selection of the content item). This may be a manifest created by the parent augmentation, a predefined manifest selected for the type of augmentation chosen by the parent augmentation (e.g., a default manifest for a "person" augmentation, providing a person identifier), or a default generic manifest. In some implementations, such content that can generate other augmentations may be provided in the parent augmentation along with a visual indicator signaling that it can be pulled from the parent augmentation. When a user pulls the content, the associated manifest may be sent to the shell program of the artificial reality system, which returns a child augmentation; the parent augmentation may then write content into, or set attributes of, the child augmentation (e.g., by providing the attributes associated with the pulled content).
In FIG. 18, the combined augmentation 1702 has been selected, causing visual indicators to be displayed on the content items that can be selected to generate other augmentations, here illustrated as content items drawn with dashed lines; other indicators are possible, such as color changes, hue changes, shading changes, added animations, added icons, etc. The user's hand 1202 grasps the generatable content item 1402 and pulls it from the augmentation 1702. This causes the augmentation 1302 (which is the right half of the augmentation 1702, as described above) to request a new augmentation, providing a default manifest for a person augmentation that specifies a person identifier associated with the content item 1402. In this example, pulling a content item from a parent augmentation does not delete the content item from the parent augmentation. In other embodiments, an augmentation may be configured to delete a content item when it is pulled. The artificial reality system receives the new augmentation request with the manifest and creates a new augmentation. In this example, creating the new person augmentation includes retrieving, from a social media site, a profile picture content item 1906 associated with the person identifier specified in the manifest. The person augmentation 1902 is also populated with a message content item 1904, which, when activated, displays a messaging interface for communicating with the depicted user. Based on where the user's hand 1202 performs a release gesture after pulling the content item from the augmentation 1702, the augmentation 1902 is placed in the artificial reality environment (in this case, in slot 1340 of surface 1102). As the user interacts with the artificial reality system, the user may continue to perform actions in the artificial reality environment, such as creating additional augmentations, causing augmentations to interact, defining surfaces, placing augmentations on surfaces, and so forth.
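A minimal sketch of the pull-to-spawn flow of FIGS. 18 and 19, with assumed names (pull_content_item, shell.create_augment, DEFAULT_PERSON_MANIFEST): a generatable content item carries or selects a manifest, the manifest is sent to the shell, and the returned child augmentation is placed at the release gesture and populated by the parent.

```python
DEFAULT_PERSON_MANIFEST = {"augment_type": "person", "movable": True}

def pull_content_item(shell, parent, item, release_position):
    """Spawn a child augmentation from a generatable content item in `parent`."""
    if not item.get("can_spawn"):
        return None
    manifest = dict(item.get("manifest", DEFAULT_PERSON_MANIFEST))
    manifest["person_id"] = item.get("person_id")     # identifier tied to the item
    child = shell.create_augment(manifest)            # new-augmentation request
    child.set_position(release_position)              # placed at the release gesture
    child.write_content("profile_picture")            # e.g. fetched for person_id
    child.write_content("message_button")
    return child
```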
Reference in the specification to "an embodiment" (e.g., "some embodiments," "various embodiments," "one embodiment," "an embodiment," etc.) means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Furthermore, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.
As used herein, above a threshold means that the value of the item under comparison is above another specified value, the item under comparison is among some specified number of items having a maximum value, or the item under comparison has a value within a specified top percentage value. As used herein, below a threshold means that the value of the item under comparison is below a specified other value, the item under comparison is among some specified number of items having a minimum value, or the item under comparison has a value within a specified bottom percentage value. As used herein, being within a threshold means that the value of the item under comparison is between two specified other values, the item under comparison is among an intermediate specified number of items, or the item under comparison has a value within an intermediate specified percentage range. Relative terms such as high or unimportant when not otherwise defined may be understood as assigning a value and determining how the value will be compared to an established threshold. For example, the phrase "selecting a quick connection" may be understood to mean selecting a connection having a value above a threshold assigned corresponding to its connection speed.
As used herein, the word "or" refers to any possible permutation of a set of items. For example, the phrase "A, B, or C" refers to at least one of A, B, C, or any combination thereof, such as any of: A; B; C; A and B; A and C; B and C; A, B, and C; or multiples of any item, such as A and A; B, B, and C; A, A, B, C, and C; etc.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Specific examples and implementations have been described herein for purposes of illustration, but various modifications may be made without deviating from the scope of the examples and implementations. The specific features and acts described above are disclosed as example forms of implementing the claims. Accordingly, the examples and embodiments are not limited except as by the appended claims.
Any of the patents, patent applications, and other references mentioned above are incorporated herein by reference. Aspects can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further embodiments. If a statement or subject matter in a document incorporated by reference conflicts with a statement or subject matter of the present application, the present application shall govern.

Claims (15)

1. A method of generating a virtual container for an artificial reality environment, the method comprising:
receiving a request for a virtual container, wherein the request is associated with an inventory specifying one or more parameters of the virtual container;
creating a data structure for the virtual container by executing a first portion of an initialization procedure for the data structure, wherein creating the data structure comprises setting one or more attributes based on the parameters specified in the manifest, and wherein the virtual container comprises a plurality of contextual response display modes and contextual response logic;
providing a handle to the data structure in response to the request, the handle enabling positioning of the virtual container in the artificial reality environment; and
executing a second portion of the initialization procedure, wherein the second portion of the initialization procedure is executed when the handle is used to locate the virtual container in the artificial reality environment.
2. The method according to claim 1,
wherein, after the initialization procedure, the virtual container receives one or more contextual factors; and
wherein:
the virtual container enables one of the plurality of contextual response display modes in response to evaluating a corresponding condition employing the one or more contextual factors; or
the virtual container invokes at least a portion of the contextual response logic in response to evaluating a corresponding condition employing the one or more contextual factors.
3. The method according to claim 1,
wherein the handle further enables addition of a content item to the virtual container;
wherein one or more content items are added to the virtual container using the handle while the second portion of the initialization procedure is executed; and
wherein the second portion of the initialization procedure comprises:
registering the virtual container to receive contextual factors;
and/or
identifying contextual factors that should be received by the virtual container by determining contextual factors used in conditions corresponding to one or more of the plurality of contextual response display modes or to portions of the contextual response logic; and
registering the virtual container to receive the identified contextual factors.
4. The method of claim 1, wherein the parameters of the virtual container specified in the manifest include at least:
one of the plurality of contextual response display modes; and
one or more of the following: a virtual container type, a container shape, a spatial orientation, a location in the artificial reality environment, a location type consistent with placement of the virtual container, or any combination thereof.
5. The method of claim 1, wherein the request is generated as a result of a user performing an interaction with a content item located in another previously created virtual container and associated with another manifest, wherein the interaction is predefined as signaling creation of a new virtual container based on the content item.
6. The method according to claim 1,
wherein the request is further associated with an indication of a gaze direction of the user; and
wherein the virtual container is initially placed in the artificial reality environment based on the gaze direction of the user.
7. The method according to claim 1,
wherein the virtual container receives contextual factors specifying values of a current mode of the artificial reality system;
wherein one of the plurality of contextual response display modes corresponds to a condition that evaluates to true when the value of the current mode of the artificial reality system is provided;
wherein the virtual container enables the one of the plurality of contextual response display modes in response to evaluating the corresponding condition as true; and
wherein the enabling of the one of the plurality of contextual response display modes causes the virtual container to be set to a maximum size and to move to a particular location corresponding to the current mode of the artificial reality system; and/or
wherein at least one of the plurality of contextual response display modes is added to the data structure by creating the data structure for a virtual container of a predefined type specified in the request, wherein each data structure of the predefined type is configured to include the at least one of the plurality of contextual response display modes.
8. The method according to claim 1,
wherein one or more extended contextual response display modes are extensions of other contextual response display modes of the plurality of contextual response display modes;
wherein each of the one or more extended contextual response display modes references another one of the plurality of contextual response display modes; and
wherein the condition for enabling a particular extended contextual response display mode is that the condition associated with the contextual response display mode referenced by the particular extended contextual response display mode evaluates to true.
9. The method according to claim 1,
wherein the data structure is created for a type of virtual container specified in the request; and
wherein the method further comprises automatically adding one or more content items to the virtual container of the specified type based on predefined rules for adding content items to the virtual container.
10. A computing system that generates a virtual container for an artificial reality environment, the system comprising:
one or more processors; and
one or more memories storing instructions that, when executed by the one or more processors, cause the computing system to perform a process comprising:
receiving a request for a virtual container, wherein the request is associated with an inventory specifying one or more parameters of the virtual container;
creating a data structure for the virtual container by executing a first portion of an initialization program for the data structure based on the manifest;
providing a handle of the data structure to enable addition of a content item to the virtual container in response to the request; and
executing a second portion of the initialization program, wherein the second portion of the initialization program is executed when the handle is used to add one or more content items to the virtual container.
11. The computing system of claim 10,
wherein the virtual container receives one or more contextual factors; and
wherein:
the virtual container enables a contextual response display mode in response to evaluating a corresponding condition employing the one or more contextual factors; or
the virtual container invokes contextual response logic in response to evaluating a corresponding condition employing the one or more contextual factors.
12. The computing system of claim 10,
wherein the initialization program comprises setting attributes in the data structure based on the manifest, including at least:
a contextual response display mode; and
one or both of a container shape and a location type consistent with placement of the virtual container; and/or
wherein the data structure is created for a type of virtual container specified in the request; and
wherein the process further comprises automatically adding one or more content items to the virtual container of the specified type based on rules for adding the content items to the virtual container.
13. The computing system of claim 10, wherein the request is generated as a result of a user performing an interaction with a content item located in another previously created virtual container and associated with another manifest, wherein the interaction is predefined as signaling creation of a new virtual container based on the content item.
14. The computing system of claim 10,
wherein the virtual container receives contextual factors specifying values of a current mode of the artificial reality system;
wherein a contextual response display mode included in the data structure corresponds to a condition that evaluates to true when the value of the current mode of the artificial reality system is provided; and
wherein the virtual container enables the contextual response display mode in response to evaluating the corresponding condition as true.
15. A computer-readable storage medium storing instructions that, when executed by a computing system, cause the computing system to perform a process for providing contextual factors to a virtual container in an artificial reality environment, the process comprising:
identifying a change or establishment of one or more contextual factors;
identifying one or more virtual containers registered to receive notifications of changes or establishment of the one or more contextual factors; and
providing a notification of the one or more contextual factors to the registered one or more virtual containers in response to identifying the registered one or more virtual containers;
wherein providing the notification causes at least one of the one or more virtual containers to invoke corresponding logic or enable a corresponding display mode.
CN202180053806.5A 2020-08-31 2021-07-31 Content item management in an augmented reality environment Pending CN116097314A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US17/008,478 2020-08-31
US17/008,478 US11176755B1 (en) 2020-08-31 2020-08-31 Artificial reality augments and surfaces
PCT/US2021/044098 WO2022046358A1 (en) 2020-08-31 2021-07-31 Managing content items in augmented reality environment

Publications (1)

Publication Number Publication Date
CN116097314A true CN116097314A (en) 2023-05-09

Family

ID=77431429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180053806.5A Pending CN116097314A (en) 2020-08-31 2021-07-31 Content item management in an augmented reality environment

Country Status (6)

Country Link
US (3) US11176755B1 (en)
EP (1) EP4205088A1 (en)
JP (1) JP2023539796A (en)
KR (1) KR20230058641A (en)
CN (1) CN116097314A (en)
WO (1) WO2022046358A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210158400A (en) * 2019-06-19 2021-12-30 엘지전자 주식회사 Signaling of information indicating a transform kernel set in image coding
US11176755B1 (en) * 2020-08-31 2021-11-16 Facebook Technologies, Llc Artificial reality augments and surfaces
US11227445B1 (en) 2020-08-31 2022-01-18 Facebook Technologies, Llc Artificial reality augments and surfaces
US11113893B1 (en) 2020-11-17 2021-09-07 Facebook Technologies, Llc Artificial reality environment with glints displayed by an extra reality device
US11409405B1 (en) 2020-12-22 2022-08-09 Facebook Technologies, Llc Augment orchestration in an artificial reality environment
US11790648B2 (en) * 2021-02-25 2023-10-17 MFTB Holdco, Inc. Automated usability assessment of buildings using visual data of captured in-room images
US11762952B2 (en) 2021-06-28 2023-09-19 Meta Platforms Technologies, Llc Artificial reality application lifecycle
US20230031871A1 (en) * 2021-07-29 2023-02-02 Meta Platforms Technologies, Llc User interface to select field of view of a camera in a smart glass
US11798247B2 (en) 2021-10-27 2023-10-24 Meta Platforms Technologies, Llc Virtual object structures and interrelationships
US11748944B2 (en) 2021-10-27 2023-09-05 Meta Platforms Technologies, Llc Virtual object structures and interrelationships
US20230138204A1 (en) * 2021-11-02 2023-05-04 International Business Machines Corporation Augmented reality object interaction and notification
US20230367611A1 (en) * 2022-05-10 2023-11-16 Meta Platforms Technologies, Llc World-Controlled and Application-Controlled Augments in an Artificial-Reality Environment
US11947862B1 (en) 2022-12-30 2024-04-02 Meta Platforms Technologies, Llc Streaming native application content to artificial reality devices

Family Cites Families (182)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US770149A (en) 1903-12-03 1904-09-13 James Franklin Bailey Turpentine-still.
US3055404A (en) 1960-05-18 1962-09-25 Ralph F Anderson Dispensing valve
US3117274A (en) 1960-07-29 1964-01-07 Ibm Power supply with protective circuits
US3081177A (en) 1962-01-25 1963-03-12 J Bird Moyer Co Inc Dental impression compositions
US3292089A (en) 1962-12-31 1966-12-13 Gen Electric Uhf converter circuit arrangement
US3530252A (en) 1966-11-16 1970-09-22 Communications Satellite Corp Acquisition technique for time division multiple access satellite communication system
US3477368A (en) 1967-10-24 1969-11-11 Itt Printing apparatus employing magnetic transfer band in which image impressions can be made
US3558759A (en) 1968-05-06 1971-01-26 Union Oil Co Polymerization method
US3726233A (en) 1971-04-05 1973-04-10 Fmc Corp Rear trolley dog for power and free push through transfer
JPS5255838Y2 (en) 1971-10-21 1977-12-16
US3947351A (en) 1971-12-22 1976-03-30 Asahi Glass Company, Ltd. Acid diffusion-dialysis process utilizing anion exchange membrane of 4-55 micron thickness
DE4223043C2 (en) 1992-07-14 1994-05-05 Werner & Pfleiderer Extruder with throttle punch
US5842175A (en) 1995-04-28 1998-11-24 Therassist Software, Inc. Therapy system
US6842175B1 (en) 1999-04-22 2005-01-11 Fraunhofer Usa, Inc. Tools for interacting with virtual environments
US7650575B2 (en) 2003-03-27 2010-01-19 Microsoft Corporation Rich drag drop user interface
US8726233B1 (en) 2005-06-20 2014-05-13 The Mathworks, Inc. System and method of using an active link in a state programming environment to locate an element
US7701439B2 (en) 2006-07-13 2010-04-20 Northrop Grumman Corporation Gesture recognition simulation system and method
KR100783552B1 (en) 2006-10-11 2007-12-07 삼성전자주식회사 Input control method and device for mobile phone
US8341184B2 (en) 2008-05-07 2012-12-25 Smooth Productions Inc. Communications network system and service provider
JP5620134B2 (en) 2009-03-30 2014-11-05 アバイア インク. A system and method for managing trust relationships in a communication session using a graphical display.
US9477368B1 (en) 2009-03-31 2016-10-25 Google Inc. System and method of indicating the distance or the surface of an image of a geographical object
US8473862B1 (en) 2009-05-21 2013-06-25 Perceptive Pixel Inc. Organizational tools on a multi-touch display device
US20100306716A1 (en) 2009-05-29 2010-12-02 Microsoft Corporation Extending standard gestures
US20120188279A1 (en) 2009-09-29 2012-07-26 Kent Demaine Multi-Sensor Proximity-Based Immersion System and Method
US9981193B2 (en) 2009-10-27 2018-05-29 Harmonix Music Systems, Inc. Movement based recognition and evaluation
US9507418B2 (en) 2010-01-21 2016-11-29 Tobii Ab Eye tracker based contextual action
US8593402B2 (en) 2010-04-30 2013-11-26 Verizon Patent And Licensing Inc. Spatial-input-based cursor projection systems and methods
US8335991B2 (en) 2010-06-11 2012-12-18 Microsoft Corporation Secure application interoperation via user interface gestures
US9134800B2 (en) 2010-07-20 2015-09-15 Panasonic Intellectual Property Corporation Of America Gesture input device and gesture input method
US9213890B2 (en) 2010-09-17 2015-12-15 Sony Corporation Gesture recognition system for TV control
US8497838B2 (en) 2011-02-16 2013-07-30 Microsoft Corporation Push actuation of interface controls
US8811719B2 (en) 2011-04-29 2014-08-19 Microsoft Corporation Inferring spatial object descriptions from spatial gestures
JP2012243007A (en) 2011-05-18 2012-12-10 Toshiba Corp Image display device and image area selection method using the same
US11048333B2 (en) 2011-06-23 2021-06-29 Intel Corporation System and method for close-range movement tracking
US8558759B1 (en) 2011-07-08 2013-10-15 Google Inc. Hand gestures to signify what is important
US9117274B2 (en) 2011-08-01 2015-08-25 Fuji Xerox Co., Ltd. System and method for interactive markerless paper documents in 3D space with mobile cameras and projectors
US9292089B1 (en) 2011-08-24 2016-03-22 Amazon Technologies, Inc. Gestural object selection
KR101343609B1 (en) 2011-08-24 2014-02-07 주식회사 팬택 Apparatus and Method for Automatically recommending Application using Augmented Reality Data
WO2013028908A1 (en) 2011-08-24 2013-02-28 Microsoft Corporation Touch and social cues as inputs into a computer
JP5718197B2 (en) 2011-09-14 2015-05-13 株式会社バンダイナムコゲームス Program and game device
US8947351B1 (en) 2011-09-27 2015-02-03 Amazon Technologies, Inc. Point of view determinations for finger tracking
JP5581292B2 (en) 2011-09-30 2014-08-27 楽天株式会社 SEARCH DEVICE, SEARCH METHOD, RECORDING MEDIUM, AND PROGRAM
US9081177B2 (en) 2011-10-07 2015-07-14 Google Inc. Wearable computer with nearby object response
EP2595111A1 (en) 2011-11-07 2013-05-22 Gface GmbH Computer Implemented Method of Displaying Contact Nodes in an Online Social Network, Computer Systems and Computer Readable Medium Thereof
JP6121647B2 (en) 2011-11-11 2017-04-26 ソニー株式会社 Information processing apparatus, information processing method, and program
US20130125066A1 (en) 2011-11-14 2013-05-16 Microsoft Corporation Adaptive Area Cursor
EP2602703B1 (en) 2011-12-09 2018-02-07 LG Electronics Inc. -1- Mobile terminal and controlling method thereof
GB2511973A (en) 2011-12-27 2014-09-17 Hewlett Packard Development Co User interface device
US20150220150A1 (en) 2012-02-14 2015-08-06 Google Inc. Virtual touch user interface system and methods
US9477303B2 (en) 2012-04-09 2016-10-25 Intel Corporation System and method for combining three-dimensional tracking with a three-dimensional display for a user interface
US9055404B2 (en) 2012-05-21 2015-06-09 Nokia Technologies Oy Apparatus and method for detecting proximate devices
JP6360050B2 (en) 2012-07-13 2018-07-18 ソフトキネティック ソフトウェア Method and system for simultaneous human-computer gesture-based interaction using unique noteworthy points on the hand
KR101969318B1 (en) 2012-11-05 2019-04-17 삼성전자주식회사 Display apparatus and control method thereof
US9575562B2 (en) 2012-11-05 2017-02-21 Synaptics Incorporated User interface systems and methods for managing multiple regions
US20140149901A1 (en) 2012-11-28 2014-05-29 Motorola Mobility Llc Gesture Input to Group and Control Items
JP6159323B2 (en) 2013-01-31 2017-07-05 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Information processing method and information processing apparatus
US20140225922A1 (en) 2013-02-11 2014-08-14 Rocco A. Sbardella System and method for an augmented reality software application
US10220303B1 (en) 2013-03-15 2019-03-05 Harmonix Music Systems, Inc. Gesture-based music game
US9245388B2 (en) 2013-05-13 2016-01-26 Microsoft Technology Licensing, Llc Interactions of virtual objects with surfaces
US10262462B2 (en) 2014-04-18 2019-04-16 Magic Leap, Inc. Systems and methods for augmented and virtual reality
US9129430B2 (en) 2013-06-25 2015-09-08 Microsoft Technology Licensing, Llc Indicating out-of-view augmented reality images
US9639987B2 (en) * 2013-06-27 2017-05-02 Canon Information And Imaging Solutions, Inc. Devices, systems, and methods for generating proxy models for an enhanced scene
EP3019913A4 (en) 2013-07-10 2017-03-08 Real View Imaging Ltd. Three dimensional user interface
US9665259B2 (en) 2013-07-12 2017-05-30 Microsoft Technology Licensing, Llc Interactive digital displays
US9448689B2 (en) 2013-08-30 2016-09-20 Paypal, Inc. Wearable user device enhanced display system
US10083627B2 (en) 2013-11-05 2018-09-25 Lincoln Global, Inc. Virtual reality and real welding training system and method
JP6090140B2 (en) 2013-12-11 2017-03-08 ソニー株式会社 Information processing apparatus, information processing method, and program
US10126822B2 (en) 2013-12-16 2018-11-13 Leap Motion, Inc. User-defined virtual interaction space and manipulation of virtual configuration
EP2887322B1 (en) * 2013-12-18 2020-02-12 Microsoft Technology Licensing, LLC Mixed reality holographic object development
US9622322B2 (en) 2013-12-23 2017-04-11 Sharp Laboratories Of America, Inc. Task light based system and gesture control
US9311718B2 (en) 2014-01-23 2016-04-12 Microsoft Technology Licensing, Llc Automated content scrolling
KR102184402B1 (en) 2014-03-06 2020-11-30 엘지전자 주식회사 glass-type mobile terminal
US10203762B2 (en) 2014-03-11 2019-02-12 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US20150261659A1 (en) 2014-03-12 2015-09-17 Bjoern BADER Usability testing of applications by assessing gesture inputs
US9959675B2 (en) 2014-06-09 2018-05-01 Microsoft Technology Licensing, Llc Layout design using locally satisfiable proposals
US10852838B2 (en) 2014-06-14 2020-12-01 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
AU2015274283B2 (en) 2014-06-14 2020-09-10 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US10416760B2 (en) 2014-07-25 2019-09-17 Microsoft Technology Licensing, Llc Gaze-based object placement within a virtual reality environment
KR101453815B1 (en) 2014-08-01 2014-10-22 스타십벤딩머신 주식회사 Device and method for providing user interface which recognizes a user's motion considering the user's viewpoint
US9508195B2 (en) 2014-09-03 2016-11-29 Microsoft Technology Licensing, Llc Management of content in a 3D holographic environment
US9734634B1 (en) 2014-09-26 2017-08-15 A9.Com, Inc. Augmented reality product preview
KR20160046150A (en) 2014-10-20 2016-04-28 삼성전자주식회사 Apparatus and method for drawing and solving a figure content
US10185402B2 (en) 2014-11-27 2019-01-22 Erghis Technologies Ab Method and system for gesture based control device
US10088971B2 (en) 2014-12-10 2018-10-02 Microsoft Technology Licensing, Llc Natural user interface camera calibration
WO2016099556A1 (en) 2014-12-19 2016-06-23 Hewlett-Packard Development Company, Lp 3d visualization
US9754416B2 (en) 2014-12-23 2017-09-05 Intel Corporation Systems and methods for contextually augmented video creation and sharing
US20160378291A1 (en) 2015-06-26 2016-12-29 Haworth, Inc. Object group processing and selection gestures for grouping objects in a collaboration system
US10799792B2 (en) 2015-07-23 2020-10-13 At&T Intellectual Property I, L.P. Coordinating multiple virtual environments
US10101803B2 (en) 2015-08-26 2018-10-16 Google Llc Dynamic switching and merging of head, gesture and touch input in virtual reality
US9947140B2 (en) 2015-09-15 2018-04-17 Sartorius Stedim Biotech Gmbh Connection method, visualization system and computer program product
NZ741866A (en) 2015-10-20 2019-07-26 Magic Leap Inc Selecting virtual objects in a three-dimensional space
US10248284B2 (en) 2015-11-16 2019-04-02 Atheer, Inc. Method and apparatus for interface control with prompt and feedback
US9857881B2 (en) 2015-12-31 2018-01-02 Microsoft Technology Licensing, Llc Electrical device for hand gestures detection
US20170242675A1 (en) 2016-01-15 2017-08-24 Rakesh Deshmukh System and method for recommendation and smart installation of applications on a computing device
US10446009B2 (en) * 2016-02-22 2019-10-15 Microsoft Technology Licensing, Llc Contextual notification engine
US10802695B2 (en) 2016-03-23 2020-10-13 Youar Inc. Augmented reality for the internet of things
US10665019B2 (en) 2016-03-24 2020-05-26 Qualcomm Incorporated Spatial relationships for integration of visual images of physical environment into virtual reality
EP3436863A4 (en) 2016-03-31 2019-11-27 Magic Leap, Inc. Interactions with 3d virtual objects using poses and multiple-dof controllers
US10852835B2 (en) 2016-04-15 2020-12-01 Board Of Regents, The University Of Texas System Systems, apparatuses and methods for controlling prosthetic devices by gestures and other modalities
US9992628B2 (en) 2016-04-21 2018-06-05 Microsoft Technology Licensing, Llc Map downloading based on user's future location
US10852913B2 (en) 2016-06-21 2020-12-01 Samsung Electronics Co., Ltd. Remote hover touch system and method
US20170372225A1 (en) 2016-06-28 2017-12-28 Microsoft Technology Licensing, Llc Targeting content to underperforming users in clusters
US20190258318A1 (en) 2016-06-28 2019-08-22 Huawei Technologies Co., Ltd. Terminal for controlling electronic device and processing method thereof
US10473935B1 (en) 2016-08-10 2019-11-12 Meta View, Inc. Systems and methods to provide views of virtual content in an interactive space
US11269480B2 (en) 2016-08-23 2022-03-08 Reavire, Inc. Controlling objects using virtual rays
US10536691B2 (en) 2016-10-04 2020-01-14 Facebook, Inc. Controls and interfaces for user interactions in virtual spaces
US10809808B2 (en) 2016-10-14 2020-10-20 Intel Corporation Gesture-controlled virtual reality systems and methods of controlling the same
CN111610858B (en) 2016-10-26 2023-09-19 Advanced New Technologies Co., Ltd. Interaction method and device based on virtual reality
US20180189647A1 (en) 2016-12-29 2018-07-05 Google, Inc. Machine-learned virtual sensor model for multiple sensors
US10621773B2 (en) 2016-12-30 2020-04-14 Google Llc Rendering content in a 3D environment
US20180300557A1 (en) * 2017-04-18 2018-10-18 Amazon Technologies, Inc. Object analysis in live video content
EP4220258A1 (en) 2017-04-19 2023-08-02 Magic Leap, Inc. Multimodal task execution and text editing for a wearable system
US10417827B2 (en) 2017-05-04 2019-09-17 Microsoft Technology Licensing, Llc Syndication of direct and indirect interactions in a computer-mediated reality environment
US10650544B2 (en) 2017-06-09 2020-05-12 Sony Interactive Entertainment Inc. Optimized shadows in a foveated rendering system
JP2019008351A (en) 2017-06-20 2019-01-17 Sony Corporation Information processing apparatus, information processing method and recording medium
US20190005724A1 (en) * 2017-06-30 2019-01-03 Microsoft Technology Licensing, Llc Presenting augmented reality display data in physical presentation environments
US10740804B2 (en) 2017-07-28 2020-08-11 Magical Technologies, Llc Systems, methods and apparatuses of seamless integration of augmented, alternate, virtual, and/or mixed realities with physical realities for enhancement of web, mobile and/or other digital experiences
US10521944B2 (en) 2017-08-16 2019-12-31 Microsoft Technology Licensing, Llc Repositioning user perspectives in virtual reality environments
EP3467707B1 (en) 2017-10-07 2024-03-13 Tata Consultancy Services Limited System and method for deep learning based hand gesture recognition in first person view
WO2019091943A1 (en) 2017-11-07 2019-05-16 Koninklijke Philips N.V. Augmented reality drag and drop of objects
US10671238B2 (en) 2017-11-17 2020-06-02 Adobe Inc. Position-dependent modification of descriptive content in a virtual reality environment
US11138259B2 (en) 2017-11-28 2021-10-05 Muso.Ai Inc. Obtaining details regarding an image based on search intent and determining royalty distributions of musical projects
US11164380B2 (en) 2017-12-05 2021-11-02 Samsung Electronics Co., Ltd. System and method for transition boundaries and distance responsive interfaces in augmented and virtual reality
US10963144B2 (en) 2017-12-07 2021-03-30 Microsoft Technology Licensing, Llc Graphically organizing content in a user interface to a software application
CN114332332B (en) 2017-12-22 2023-08-18 Magic Leap, Inc. Method and device for generating a three-dimensional reconstruction of a surface in a scene
US11024086B2 (en) * 2017-12-22 2021-06-01 Magic Leap, Inc. Methods and system for managing and displaying virtual content in a mixed reality system
US10739861B2 (en) 2018-01-10 2020-08-11 Facebook Technologies, Llc Long distance interaction with artificial reality objects using a near eye display interface
US20190213792A1 (en) 2018-01-11 2019-07-11 Microsoft Technology Licensing, Llc Providing Body-Anchored Mixed-Reality Experiences
US11567627B2 (en) 2018-01-30 2023-01-31 Magic Leap, Inc. Eclipse cursor for virtual content in mixed reality displays
US10540941B2 (en) 2018-01-30 2020-01-21 Magic Leap, Inc. Eclipse cursor for mixed reality displays
US10657716B2 (en) 2018-03-07 2020-05-19 California Institute Of Technology Collaborative augmented reality system
US10540821B2 (en) 2018-03-09 2020-01-21 Staples, Inc. Dynamic item placement using 3-dimensional optimization of space
US10916065B2 (en) 2018-05-04 2021-02-09 Facebook Technologies, Llc Prevention of user interface occlusion in a virtual reality environment
US10757109B2 (en) 2018-05-10 2020-08-25 Rovi Guides, Inc. Systems and methods for connecting a public device to a private device with pre-installed content management applications
US11875012B2 (en) 2018-05-25 2024-01-16 Ultrahaptics IP Two Limited Throwable interface for augmented reality and virtual reality environments
US10698206B2 (en) 2018-05-31 2020-06-30 Renault Innovation Silicon Valley Three dimensional augmented reality involving a vehicle
US10948993B2 (en) 2018-06-07 2021-03-16 Facebook, Inc. Picture-taking within virtual reality
WO2019236344A1 (en) 2018-06-07 2019-12-12 Magic Leap, Inc. Augmented reality scrollbar
US10747302B2 (en) 2018-06-08 2020-08-18 Facebook Technologies, Llc Artificial reality interaction plane
US10748342B2 (en) 2018-06-19 2020-08-18 Google Llc Interaction system for augmented reality objects
US20180350144A1 (en) 2018-07-27 2018-12-06 Yogesh Rathod Generating, recording, simulating, displaying and sharing user related real world activities, actions, events, participations, transactions, status, experience, expressions, scenes, sharing, interactions with entities and associated plurality types of data in virtual world
US10909762B2 (en) 2018-08-24 2021-02-02 Microsoft Technology Licensing, Llc Gestures for facilitating interaction with pages in a mixed reality environment
US10902678B2 (en) 2018-09-06 2021-01-26 Curious Company, LLC Display of hidden information
CN110908741A (en) 2018-09-14 2020-03-24 Alibaba Group Holding Limited Application performance management display method and device
US10732725B2 (en) 2018-09-25 2020-08-04 XRSpace CO., LTD. Method and apparatus of interactive display based on gesture recognition
US10942577B2 (en) 2018-09-26 2021-03-09 Rockwell Automation Technologies, Inc. Augmented reality interaction techniques
US11010974B2 (en) 2019-01-04 2021-05-18 Vungle, Inc. Augmented reality in-application advertisements
US11107265B2 (en) 2019-01-11 2021-08-31 Microsoft Technology Licensing, Llc Holographic palm raycasting for targeting virtual objects
US11294472B2 (en) 2019-01-11 2022-04-05 Microsoft Technology Licensing, Llc Augmented two-stage hand gesture input
US11397463B2 (en) 2019-01-12 2022-07-26 Microsoft Technology Licensing, Llc Discrete and continuous gestures for enabling hand rays
US20200285761A1 (en) 2019-03-07 2020-09-10 Lookout, Inc. Security policy manager to configure permissions on computing devices
US10994201B2 (en) 2019-03-21 2021-05-04 Wormhole Labs, Inc. Methods of applying virtual world elements into augmented reality
CN113508361A (en) 2019-05-06 2021-10-15 Apple Inc. Apparatus, method and computer-readable medium for presenting computer-generated reality files
US11287947B2 (en) * 2019-05-15 2022-03-29 Microsoft Technology Licensing, Llc Contextual input in a three-dimensional environment
US10939034B2 (en) 2019-07-08 2021-03-02 Varjo Technologies Oy Imaging system and method for producing images via gaze-based control
US11017231B2 (en) 2019-07-10 2021-05-25 Microsoft Technology Licensing, Llc Semantically tagged virtual and physical objects
US11227446B2 (en) 2019-09-27 2022-01-18 Apple Inc. Systems, methods, and graphical user interfaces for modeling, measuring, and drawing using augmented reality
US11126320B1 (en) 2019-12-11 2021-09-21 Amazon Technologies, Inc. User interfaces for browsing objects in virtual reality environments
KR20210078852A (en) 2019-12-19 2021-06-29 LG Electronics Inc. XR device and method for controlling the same
US11727650B2 (en) 2020-03-17 2023-08-15 Apple Inc. Systems, methods, and graphical user interfaces for displaying and manipulating virtual objects in augmented reality environments
US11593997B2 (en) 2020-03-31 2023-02-28 Snap Inc. Context based augmented reality communication
EP3926441B1 (en) 2020-06-15 2024-02-21 Nokia Technologies Oy Output of virtual content
US11227445B1 (en) 2020-08-31 2022-01-18 Facebook Technologies, Llc Artificial reality augments and surfaces
US11176755B1 (en) * 2020-08-31 2021-11-16 Facebook Technologies, Llc Artificial reality augments and surfaces
US11178376B1 (en) 2020-09-04 2021-11-16 Facebook Technologies, Llc Metering for display modes in artificial reality
JP2023541275A (en) 2020-09-11 2023-09-29 Apple Inc. Method for interacting with objects in an environment
CN116719413A (en) 2020-09-11 2023-09-08 Apple Inc. Method for manipulating objects in an environment
US20220091722A1 (en) 2020-09-23 2022-03-24 Apple Inc. Devices, methods, and graphical user interfaces for interacting with three-dimensional environments
EP4200690A2 (en) 2020-09-25 2023-06-28 Apple Inc. Methods for manipulating objects in an environment
CN116324703A (en) 2020-09-25 2023-06-23 Apple Inc. Method for interacting with virtual controls and/or affordances for moving virtual objects in a virtual environment
US20220100265A1 (en) 2020-09-30 2022-03-31 Qualcomm Incorporated Dynamic configuration of user interface layouts and inputs for extended reality systems
US11238664B1 (en) 2020-11-05 2022-02-01 Qualcomm Incorporated Recommendations for extended reality systems
US11017609B1 (en) 2020-11-24 2021-05-25 Horizon Group USA, INC System and method for generating augmented reality objects
JP2024506630A (en) 2021-02-08 2024-02-14 Sightful Computers Ltd. Extended reality for productivity
JP2022147265A (en) 2021-03-23 2022-10-06 Colopl, Inc. Program, method, and information processing device
US11762952B2 (en) 2021-06-28 2023-09-19 Meta Platforms Technologies, Llc Artificial reality application lifecycle
US20220261088A1 (en) 2021-09-01 2022-08-18 Facebook Technologies, Llc Artificial reality platforms and controls
US11798247B2 (en) 2021-10-27 2023-10-24 Meta Platforms Technologies, Llc Virtual object structures and interrelationships
US11748944B2 (en) 2021-10-27 2023-09-05 Meta Platforms Technologies, Llc Virtual object structures and interrelationships
US20230134355A1 (en) 2021-10-29 2023-05-04 Meta Platforms Technologies, Llc Efficient Processing For Artificial Reality
US20220406021A1 (en) 2021-12-13 2022-12-22 Meta Platforms Technologies, Llc Virtual Reality Experiences and Mechanics
US20230260233A1 (en) 2022-02-14 2023-08-17 Meta Platforms Technologies, Llc Coordination of Interactions of Virtual Objects

Also Published As

Publication number Publication date
WO2022046358A1 (en) 2022-03-03
JP2023539796A (en) 2023-09-20
US11847753B2 (en) 2023-12-19
KR20230058641A (en) 2023-05-03
US11176755B1 (en) 2021-11-16
US20230162453A1 (en) 2023-05-25
EP4205088A1 (en) 2023-07-05
US11651573B2 (en) 2023-05-16
US20220122329A1 (en) 2022-04-21

Similar Documents

Publication Title
US11847753B2 (en) Artificial reality augments and surfaces
US11769304B2 (en) Artificial reality augments and surfaces
US11875162B2 (en) Computer-generated reality platform for generating computer-generated reality environments
US20210191523A1 (en) Artificial reality notification triggers
US11762952B2 (en) Artificial reality application lifecycle
US20230092103A1 (en) Content linking for artificial reality environments
US11935208B2 (en) Virtual object structures and interrelationships
US11748944B2 (en) Virtual object structures and interrelationships
US20230260233A1 (en) Coordination of Interactions of Virtual Objects
US11636655B2 (en) Artificial reality environment with glints displayed by an extra reality device
US11893674B2 (en) Interactive avatars in artificial reality
US20240126406A1 (en) Augment Orchestration in an Artificial Reality Environment
US20230196766A1 (en) Artificial Reality Applications Through Virtual Object Definitions and Invocation
US20230419618A1 (en) Virtual Personal Interface for Control and Travel Between Virtual Worlds
JP7476292B2 (en) Method and system for managing and displaying virtual content in a mixed reality system

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination