CN113544633A - Virtual space, mixed reality space, and combined mixed reality space for improved interaction and collaboration - Google Patents

Virtual space, mixed reality space, and combined mixed reality space for improved interaction and collaboration

Info

Publication number
CN113544633A
Authority
CN
China
Prior art keywords
user
virtual
users
items
item
Prior art date
Legal status
Pending
Application number
CN201980093248.8A
Other languages
Chinese (zh)
Inventor
焦阿基诺·诺里斯
潘亚·因弗欣
詹姆斯·艾伦·布思
萨尔塔克·雷
阿莱西亚·马拉
Current Assignee
Meta Platforms Technologies LLC
Original Assignee
Facebook Technologies LLC
Priority date
Filing date
Publication date
Priority claimed from US 16/234,128 (US 11,024,074 B2)
Priority claimed from US 16/234,013 (US 10,921,878 B2)
Priority claimed from US 16/233,846 (US 2020/0210137 A1)
Application filed by Facebook Technologies LLC
Publication of CN113544633A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 - Eye tracking input arrangements
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F 3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/147 - Digital output to display device using display panels
    • G06F 3/1423 - Digital output to display device controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 3/16 - Sound input; Sound output
    • G06F 3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/01 - Social networking
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/003 - Navigation within 3D models or images
    • G06T 19/006 - Mixed reality
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/001 - Control arrangements or circuits using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G 3/003 - Control arrangements or circuits using specific devices, to produce spatial visual effects
    • G06F 2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 - Indexing scheme relating to G06F3/01
    • G06F 2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G09G 2354/00 - Aspects of interface with display user

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Primary Health Care (AREA)
  • Tourism & Hospitality (AREA)
  • Computing Systems (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)
  • Position Input By Displaying (AREA)

Abstract

In one embodiment, a method comprises: displaying first virtual content to a first user in a virtual area, the virtual area including one or more second virtual content; inferring an intent of the first user to interact with the first virtual content based on one or more of a first user action or contextual information; and adjusting one or more configurations associated with one or more of the second virtual content based on the inference of the first user's intent to interact with the first virtual content.

Description

Virtual space, mixed reality space, and combined mixed reality space for improved interaction and collaboration
Technical Field
The present disclosure relates generally to artificial reality environments, including virtual reality environments and mixed virtual reality environments.
Background
Artificial reality is a form of reality that has been adjusted in some way before being presented to the user, and may include, for example, Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), hybrid reality, or some combination and/or derivative thereof. The artificial reality content may include fully generated content or generated content combined with captured content (e.g., real-world photos). The artificial reality content may include video, audio, haptic feedback, or some combination thereof, and any of them may be presented in a single channel or multiple channels (e.g., stereoscopic video that produces a three-dimensional effect to a viewer). The artificial reality may be associated with an application, product, accessory, service, or some combination thereof, for example, to create content in the artificial reality and/or to be used in the artificial reality (e.g., to perform an activity in the artificial reality). An artificial reality system that provides artificial reality content may be implemented on a variety of platforms, including a Head Mounted Display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
Summary of the specific embodiments
Certain embodiments described herein relate to a system and method for modifying a VR environment using contextual information and user intent to create an isolated experience for a user. The system may first determine whether the user wants to focus on interacting with a particular object based on the user's movements, interactions with the object, and eye movements. Once the system determines that the user wants to interact with a particular object, the system may then modify the environment to maximize the user's experience with the particular object.
Certain embodiments described herein relate to creating a system for merging reality between various user locations to create a joint VR space using each user's free space. The system may first determine that the free space of a particular user is large enough to accommodate the joint VR space. The system can then map or retrieve the free space of a particular user and coordinate with other users who also want to participate in the joint VR space by determining and creating a framework of the joint VR space that accommodates the free space limitations of each user and maximizes the overlap between the free spaces of the users.
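The overlap computation at the heart of such a joint VR space can be illustrated with a short sketch. The sketch below is illustrative only and assumes each user's free space has already been mapped to an axis-aligned rectangle centred on that user's play area; the FreeSpace type, the joint_space function, and the 4 m² minimum footprint are hypothetical names and values, not part of the disclosure.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FreeSpace:
    width: float   # metres of unobstructed floor left/right of the user's origin
    depth: float   # metres of unobstructed floor in front of/behind the user's origin

MIN_AREA = 4.0  # hypothetical minimum joint-space footprint, in square metres

def joint_space(spaces: List[FreeSpace]) -> Optional[FreeSpace]:
    """Return the largest footprint that fits inside every user's free space.

    With origin-centred rectangles, the maximal overlap is simply the
    per-axis minimum across all participants.
    """
    width = min(s.width for s in spaces)
    depth = min(s.depth for s in spaces)
    if width * depth < MIN_AREA:
        return None  # at least one room is too small to host the joint space
    return FreeSpace(width, depth)

# Example: three users with differently sized rooms share a 2.5 m x 2.0 m space.
print(joint_space([FreeSpace(3.0, 2.0), FreeSpace(2.5, 4.0), FreeSpace(5.0, 2.2)]))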
Certain embodiments described herein relate to synchronizing content and objects from real life with content and objects in a digital/VR environment to enhance user interaction, communication, and collaboration with other users (e.g., for purposes of project collaboration). The system may first determine what objects within the user's real life environment the user may want to use to collaborate with other users. The system may then copy and present the real-life objects in real-time within the UI of the VR shared space so that other users in the VR shared space can view and interact with the objects.
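As a rough illustration of how such synchronization might be structured, the sketch below keeps a virtual copy of a shared real-world item up to date for remote viewers. It is a minimal sketch only; scan_item and render are placeholders for whatever capture and rendering pipeline the artificial reality system actually provides.

import copy

class SharedItemMirror:
    """Keep a virtual copy of a real-world item in sync for remote viewers."""

    def __init__(self, item_id, scan_item, render):
        self.item_id = item_id
        self.scan_item = scan_item   # returns the item's current state/geometry
        self.render = render         # pushes the virtual copy to one viewer
        self.virtual_copy = None

    def share_with(self, viewers):
        # Generate the first virtual copy and display it to every viewer.
        self.virtual_copy = copy.deepcopy(self.scan_item(self.item_id))
        for viewer in viewers:
            self.render(viewer, self.virtual_copy)

    def on_item_changed(self, viewers):
        # Called whenever the real item changes; the virtual copy is refreshed
        # so every viewer in the VR shared space sees the same change.
        self.virtual_copy = copy.deepcopy(self.scan_item(self.item_id))
        for viewer in viewers:
            self.render(viewer, self.virtual_copy)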
Embodiments of the invention may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some way before being presented to a user, and may include, for example, Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), hybrid reality, or some combination and/or derivative thereof. The artificial reality content may include fully generated content or generated content combined with captured content (e.g., real-world photos). The artificial reality content may include video, audio, haptic feedback, or some combination thereof, and any of which may be presented in a single channel or in multiple channels (e.g., stereoscopic video that produces a three-dimensional effect to a viewer). Additionally, in some embodiments, the artificial reality may be associated with an application, product, accessory, service, or some combination thereof, for example, to create content in the artificial reality and/or to be used in the artificial reality (e.g., to perform an activity in the artificial reality). An artificial reality system that provides artificial reality content may be implemented on a variety of platforms, including a Head Mounted Display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
The embodiments disclosed herein are merely examples, and the scope of the present disclosure is not limited to them. Particular embodiments may include all, some, or none of the components, elements, features, functions, operations, or steps of the embodiments disclosed herein. Embodiments in accordance with the present invention are specifically disclosed in the accompanying claims directed to methods, storage media, systems, and computer program products, wherein any feature referred to in one claim category (e.g., method) may also be claimed in another claim category (e.g., system). The dependencies or back-references in the appended claims are chosen for formal reasons only. However, any subject matter resulting from an intentional back-reference to any previous claim (in particular multiple dependencies) may also be claimed, such that any combination of a claim and its features is disclosed and may be claimed regardless of the dependency selected in the appended claims. The subject matter which can be claimed comprises not only the combination of features as set forth in the appended claims, but also any other combination of features in the claims, wherein each feature mentioned in the claims can be combined with any other feature or combination of other features in the claims. Furthermore, any of the embodiments and features described or depicted herein may be claimed in a separate claim and/or in any combination with any of the embodiments or features described or depicted herein or in any combination with any of the features of the appended claims.
In an embodiment, a method may include, by a computing system:
displaying first virtual content to a first user in a virtual area, the virtual area including one or more second virtual content;
inferring an intent of the first user to interact with the first virtual content based on one or more of the first user action or the contextual information; and
adjusting one or more configurations associated with one or more of the second virtual content based on the inference of the first user's intent to interact with the first virtual content.
The first user action may include one or more of:
the user's eye movement focused on the first virtual content,
a verbal request by a first user of the first device,
a user input associated with the first virtual content, or
a user input associated with one or more of the second virtual content.
The contextual information may include one or more of the following:
location information associated with the first user,
movement information associated with the first user,
time information associated with the first user,
a preset action associated with the first virtual content,
a content type associated with the first virtual content, or
a service type associated with the first virtual content.
The time information associated with the first user may include a predetermined period of time during which the user is not active.
Inferring the intent of the first user may be based at least in part on a perspective of a hypothetical user, which may be based at least in part on one or more users of an associated social network.
The hypothetical user may be based at least in part on:
each user of the social network, or
one or more subsets of users of the social network.
Adjusting one or more configurations associated with the second virtual content may include one or more of:
adjusting one or more visual attributes of one or more of the second virtual content,
adjusting one or more audio attributes of one or more of the second virtual content, or
adjusting one or more social network attributes of one or more of the second virtual content.
The adjustment to the visual attribute or the audio attribute of one or more of the second virtual content may be determined based at least in part on a content type associated with the second virtual content or a service type associated with the second virtual content.
Adjusting the social network attribute may include temporarily limiting or removing all notifications from the social network that are associated with the second virtual content.
The virtual area may reside in a virtual reality environment, and the first user may be a virtual user in the virtual reality environment.
In embodiments, one or more computer-readable non-transitory storage media may embody software that, when executed, is operable to perform a method according to any embodiment herein, or is operable to:
displaying first virtual content to a first user in a virtual area, the virtual area may include one or more second virtual content;
inferring an intent of the first user to interact with the first virtual content based on one or more of the first user action or the contextual information; and
adjusting one or more configurations associated with one or more of the second virtual content based on the inference of the first user's intent to interact with the first virtual content.
The first user action may include one or more of:
the user's eye movement focused on the first virtual content,
a verbal request by a first user of the first device,
a user input associated with the first virtual content, or
a user input associated with one or more of the second virtual content.
The contextual information may include one or more of the following:
location information associated with the first user,
movement information associated with the first user,
time information associated with the first user,
one or more preset actions associated with the first virtual content,
a content type associated with the first virtual content, or
a service type associated with the first virtual content.
Inferring the intent of the first user may be based at least in part on a perspective of a hypothetical user, which may be based at least in part on one or more users of an associated social network.
Adjusting one or more configurations associated with the second virtual content may include one or more of:
adjusting one or more visual attributes of one or more of the second virtual content,
adjusting one or more audio attributes of one or more of the second virtual content, or
adjusting one or more social network attributes of one or more of the second virtual content.
In an embodiment, a system may include: one or more processors; and a memory coupled to the processor, the memory comprising instructions executable by the processor, the processor being operable when executing the instructions to perform a method or being operable to:
displaying first virtual content to a first user in a virtual area, the virtual area including one or more second virtual content;
inferring an intent of the first user to interact with the first virtual content based on one or more of the first user action or the contextual information; and
adjusting one or more configurations associated with one or more of the second virtual content based on the inference of the first user's intent to interact with the first virtual content.
The first user action may include one or more of:
the user's eye movement focused on the first virtual content,
a verbal request by a first user of the first device,
a user input associated with the first virtual content, or
a user input associated with one or more of the second virtual content.
The contextual information may include one or more of the following:
location information associated with the first user,
movement information associated with the first user,
time information associated with the first user,
one or more preset actions associated with the first virtual content,
a content type associated with the first virtual content, or
a service type associated with the first virtual content.
Inferring the intent of the first user may be based at least in part on a perspective of a hypothetical user, which may be based at least in part on one or more users of an associated social network.
Adjusting one or more configurations associated with the second virtual content may include one or more of:
adjusting one or more visual attributes of one or more of the second virtual content,
adjusting one or more audio attributes of one or more of the second virtual content, or
adjusting one or more social network attributes of one or more of the second virtual content.
In an embodiment, a method, particularly a method according to any embodiment herein, may include:
receiving a request from a first user to create a joint virtual space for use with one or more second users;
determining a first zone in a first room based at least in part on a spatial limitation associated with the first room and a location of one or more items in the first room, the first room associated with a first user;
retrieving information associated with one or more second rooms for each of the second users;
creating a joint virtual space based on the first region of the first room and information associated with each of the second rooms; and
providing access to the joint virtual space to each of the first user and the one or more second users.
In an embodiment, a method may include, prior to retrieving information associated with one or more second rooms:
it is determined whether a first area in the first room is equal to or greater than a predetermined minimum area.
The first zone may be determined by calculating a maximum free space associated with the first room after evaluating the spatial limit and the location of the one or more items in the first room.
The retrieved information associated with the second rooms may include at least:
a spatial limit associated with each of the second rooms for each of the second users, and
a location of one or more items in each of the second rooms for each of the second users.
In an embodiment, a method may include:
determining a second zone for each of the one or more second rooms based at least in part on the spatial constraints and the location of the one or more items,
wherein the second zone is determined by calculating a maximum free space associated with each of the one or more second rooms after evaluating the spatial constraints and the location of the one or more items in each of the one or more second rooms.
The joint virtual space may be created by determining a maximum overlap between a maximum free space associated with the first room and a maximum free space associated with each of the one or more second rooms.
Providing access to the joint virtual space may include notifying each of the first user and the one or more second users that the joint virtual space is available for use.
Notifying each of the first user and the one or more second users may include sending, to each of the first user and the one or more second users, an instruction to generate a portal object that allows each of the first user and the one or more second users to virtually access the joint virtual space.
Generating the portal object may include:
sending an instruction to the first user to draw a virtual doorway within the first zone in the first room, the virtual doorway allowing the first user to virtually access the joint virtual space; and
sending, to each of the second users, an instruction to draw a virtual doorway in each of the second rooms that allows each of the second users to virtually access the joint virtual space.
The joint virtual space may reside in a virtual reality environment, and each of the first user and the one or more second users may be virtual users in the virtual reality environment.
In an embodiment, one or more computer-readable non-transitory storage media, in particular media according to any embodiment herein, may embody software operable when executed to perform a method or operable to:
receiving a request from a first user to create a joint virtual space for use with one or more second users;
determining a first zone in a first room based at least in part on a spatial limitation associated with the first room and a location of one or more items in the first room, the first room associated with a first user;
retrieving information associated with one or more second rooms for each of the second users;
creating a joint virtual space based on the first region of the first room and information associated with each of the second rooms; and
providing access to the joint virtual space to each of the first user and the one or more second users.
The first zone may be determined by calculating a maximum free space associated with the first room after evaluating the spatial limit and the location of the one or more items in the first room.
The retrieved information associated with the second rooms may include at least:
a spatial limit associated with each of the second rooms for each of the second users, and
a location of one or more items in each of the second rooms for each of the second users.
In an embodiment, the software may further be operable when executed to:
determining a second zone for each of the one or more second rooms based at least in part on the spatial constraints and the location of the one or more items,
wherein the second zone is determined by calculating a maximum free space associated with each of the one or more second rooms after evaluating the spatial constraints and the location of the one or more items in each of the one or more second rooms.
The joint virtual space may be created by determining a maximum overlap between a maximum free space associated with the first room and a maximum free space associated with each of the one or more second rooms.
In an embodiment, a system, in particular a system according to any embodiment herein, may comprise: one or more processors; and a memory coupled to the processor, the memory comprising instructions executable by the processor, the processor being operable when executing the instructions to perform a method or being operable to:
receiving a request from a first user to create a joint virtual space for use with one or more second users;
determining a first zone in a first room based at least in part on a spatial limitation associated with the first room and a location of one or more items in the first room, the first room associated with a first user;
retrieving information associated with one or more second rooms for each of the second users;
creating a joint virtual space based on the first region of the first room and information associated with each of the second rooms; and
providing access to the joint virtual space to each of the first user and the one or more second users.
The first zone may be determined by calculating a maximum free space associated with the first room after evaluating the spatial limit and the location of the one or more items in the first room.
The retrieved information associated with the second rooms may include at least:
a spatial limit associated with each of the second rooms for each of the second users, and
a location of one or more items in each of the second rooms for each of the second users.
In an embodiment, the system may further be operable to:
determining a second zone for each of the one or more second rooms based at least in part on the spatial constraints and the location of the one or more items,
wherein the second zone is determined by calculating a maximum free space associated with each of the one or more second rooms after evaluating the spatial constraints and the location of the one or more items in each of the one or more second rooms.
The joint virtual space may be created by determining a maximum overlap between a maximum free space associated with the first room and a maximum free space associated with each of the one or more second rooms.
In an embodiment, a method, particularly a method according to any embodiment herein, may include:
receiving a request to share a display of a first interactive item with one or more users;
generating a first virtual item as a copy of the first interactive item; and
displaying the first virtual item to a subset of the one or more users in a virtual reality environment,
wherein, if a change to the first interactive item is received, the display of the first virtual item in the virtual reality environment is updated to reflect the same change made to the first interactive item.
The request to share the display of the first interactive item may be from a first user of the one or more users that is currently interacting with the first interactive item.
The request to share the display of the first interactive item may come from one or more second users, the one or more second users being virtual users associated with the virtual reality environment.
In an embodiment, a method may include, prior to receiving a request to share a display of a first interactive item,
determining one or more interactive items in an environment; and
sending a list of the interactive items to the first user for selection of at least one of the interactive items for display in the virtual reality environment.
The subset of one or more users may include virtual users in a virtual reality environment.
The first interactive item may be located in a real-world environment.
In an embodiment, a method may include:
accessing a location of a first interactive item relative to one or more other items surrounding the first interactive item in a real-world environment;
generating one or more second virtual items as copies of the one or more other items; and
displaying the first virtual item and the one or more second virtual items in the virtual reality environment based on the position of the first interactive item relative to the one or more other items in the real-world environment.
In an embodiment, a method may include:
accessing an orientation of a first interactive item in a real-world environment; and
displaying the first virtual item in the virtual reality environment based on the orientation of the first interactive item in the real-world environment.
In an embodiment, a method may include:
receiving comments associated with the first virtual item from one or more users of the subset of users; and
sending the comments for display to a first user of the one or more users who is currently interacting with the first interactive item in the real-world environment.
The comments may include one or more of an audio comment, a video comment, or a written comment.
In an embodiment, one or more computer-readable non-transitory storage media, in particular media according to any embodiment herein, may embody software operable when executed to perform a method or operable to:
receiving a request to share a display of a first interactive item with one or more users;
generating a first virtual item as a copy of the first interactive item; and
displaying a first virtual item to a subset of one or more users in a virtual reality environment;
wherein if a change to the first interactive item is received, the display of the first virtual item in the virtual reality environment is updated to include the same change as the first interactive item.
When executed, the software may be operable to, prior to receiving a request to share the display of the first interactive item:
determining one or more interactive items in an environment; and
sending a list of the interactive items to the first user for selection of at least one of the interactive items for display in the virtual reality environment.
The first interactive item may be located in a real-world environment.
The software may be operable when executed to:
accessing a location of a first interactive item relative to one or more other items surrounding the first interactive item in a real-world environment;
generating one or more second virtual items as copies of the one or more other items; and
displaying the first virtual item and the one or more second virtual items in the virtual reality environment based on the position of the first interactive item relative to the one or more other items in the real-world environment.
When executed, the software may be operable to:
accessing an orientation of a first interactive item in a real-world environment; and
displaying the first virtual item in the virtual reality environment based on the orientation of the first interactive item in the real-world environment.
In an embodiment, a system, in particular a system according to any embodiment herein, may comprise: one or more processors; and a memory coupled to the processor, the memory comprising instructions executable by the processor, the processor when executing the instructions being operable to perform a method or being operable to:
receiving a request to share a display of a first interactive item with one or more users;
generating a first virtual item as a copy of the first interactive item; and
displaying a first virtual item to a subset of one or more users in a virtual reality environment;
wherein if a change to the first interactive item is received, the display of the first virtual item in the virtual reality environment is updated to include the same change as the first interactive item.
When executing the instructions, the processor may be operable to, prior to receiving the request to share the display of the first interactive item:
determining one or more interactive items in an environment; and
sending a list of the interactive items to the first user for selection of at least one of the interactive items for display in the virtual reality environment.
The first interactive item may be located in a real-world environment.
When executing the instructions, the processor may be operable to:
accessing a location of a first interactive item relative to one or more other items surrounding the first interactive item in a real-world environment;
generating one or more second virtual items as copies of the one or more other items; and
displaying the first virtual item and the one or more second virtual items in the virtual reality environment based on the position of the first interactive item relative to the one or more other items in the real-world environment.
When executing the instructions, the processor may be operable to:
accessing an orientation of a first interactive item in a real-world environment; and
displaying the first virtual item in the virtual reality environment based on the orientation of the first interactive item in the real-world environment.
In an embodiment, one or more computer-readable non-transitory storage media may embody software that is operable when executed to perform a method according to or within any of the embodiments described above.
In an embodiment, a system may include: one or more processors; and at least one memory coupled to the processor and comprising instructions executable by the processor, the processor being operable when executing the instructions to perform a method according to or within any of the embodiments described above.
In an embodiment, a computer program product, preferably comprising a computer-readable non-transitory storage medium, which when executed on a data processing system may be operable to perform a method according to or in any of the embodiments described above.
Brief Description of Drawings
Fig. 1 illustrates an example artificial reality system.
Fig. 2 shows an example 3D eye tracking system.
FIG. 3 illustrates an example artificial reality space.
Fig. 4 illustrates an example VR environment.
Fig. 5 illustrates another example of a VR environment.
Fig. 6 illustrates an example method for updating a VR environment based on user intent and contextual information.
FIG. 7 illustrates an example first user-specified space for merging with other users into a joint VR space.
Fig. 8 illustrates an example second user-specified space for merging with other users into a joint VR space.
Fig. 9 illustrates an example merged VR environment.
FIGS. 10A and 10B illustrate an example of creating a portal object for accessing a merged VR environment.
FIG. 11 illustrates an example method for specifying a space to merge an artificial reality system with other users.
FIG. 12 illustrates a first user specifying an object for sharing with other users in an artificial reality system.
FIG. 13 illustrates a second user specifying an object for sharing with other users in an artificial reality system.
FIGS. 14A and 14B illustrate various environments for various users to view shared objects.
FIG. 15 illustrates an example method for specifying various objects and sharing those objects with other users in an artificial reality system.
FIG. 16 illustrates an example network environment associated with a social networking system.
FIG. 17 illustrates an example social graph.
FIG. 18 illustrates an example computer system.
Description of example embodiments
Overview of Artificial reality
Fig. 1 illustrates an example artificial reality system 100. In a particular embodiment, the artificial reality system 100 may include a headset 104, a controller 106, and a computing system 108. The user 102 may wear the headset 104, which may display visual artificial reality content to the user 102. The headset 104 may include an audio device that may provide audio artificial reality content to the user 102. The headset 104 may include one or more cameras that may capture images and video of the environment. The headset 104 may include an eye tracking system for determining the vergence distance of the user 102. The vergence distance may be the distance from the user's eyes to an object (e.g., a real-world object or a virtual object in a virtual space) on which the user's eyes converge. The headset 104 may be referred to as a head-mounted display (HMD). The controller 106 may include a touch pad and one or more buttons. The controller 106 may receive input from the user 102 and relay the input to the computing system 108. The controller 106 may also provide haptic feedback to the user 102. The computing system 108 may be connected to the headset 104 and the controller 106 by a cable or a wireless connection. The computing system 108 may control the headset 104 and the controller 106 to provide artificial reality content to the user 102 and to receive input from the user 102. The computing system 108 may be a standalone host computer system, an on-board computer system integrated with the headset 104, a mobile device, or any other hardware platform capable of providing artificial reality content to the user 102 and receiving input from the user 102.
In particular embodiments, the artificial reality system may include an eye tracking system for tracking the user's eyes in real time. The eye tracking system may be a 3D eye tracking system that tracks the user's eye movements (e.g., gaze direction, gaze angle, vergence) and determines where the user is looking (e.g., vergence distance or gaze point). Fig. 2 shows an example 3D eye tracking system 200. The 3D eye tracking system 200 may track eye movement in three dimensions to determine the vergence distance or gaze point of the user. The vergence distance of the user may be the distance from the user's eyes to the point at which the user's eyes converge. The gaze point of the user may be the point at which the user is gazing. The eye tracking system 200 may include a lens 210, a plurality of infrared light sources (e.g., 212A-H), a hot mirror 220, and an infrared camera 240. The light sources 212A-H may be infrared light-emitting diodes (LEDs) mounted on the lens 210. The hot mirror 220 may be a dichroic filter that reflects infrared light while allowing visible light to pass through. Infrared light (e.g., 214) emitted by one or more of the light sources 212A-H may reach the eye 250 and be reflected off the eye 250. The reflected light 216 may be further reflected by the hot mirror 220 and reach the infrared camera 240. The camera 240 may be an infrared camera that captures images of the eye 250 using the reflected infrared light. The eye tracking system 200 may capture images of both eyes (e.g., pupils) of the user and process the images using computer vision techniques. The eye tracking system 200 may measure the angle of both eyes and use the geometric relationship to determine the vergence distance and gaze point of the user. For example, the 3D eye tracking system 200 may measure the user's eye angle with an accuracy of 1 degree. Visible light 232 from the display screen 230 may pass through the hot mirror 220 and the lens 210 to the eye 250, allowing the user to see the content rendered by the display screen 230. In particular embodiments, the 3D eye tracking system 200 may capture eye images using ambient light 260 from the environment. The ambient light 260 may reach the eye 250 and be reflected off the eye 250. The reflected light may pass through the lens 210 and reach the hot mirror 220 and the camera 240. The camera 240 may capture an image of the eye 250 based on the ambient light reflected off the eye 250. In particular embodiments, the 3D eye tracking system may use a hybrid approach that utilizes both the light sources (e.g., 212A-H) and the ambient light 260 to capture eye images and track eye movement.
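The geometric relationship mentioned above can be illustrated with a simple symmetric-fixation calculation. This is a sketch under the assumption that each eye's gaze angle is measured from its straight-ahead direction, positive toward the nose; it is not the system's actual algorithm, and the function name and example values are illustrative.

import math

def vergence_distance(ipd_m, left_angle_deg, right_angle_deg):
    """Estimate how far in front of the eyes the two gaze rays converge.

    With an interpupillary distance ipd_m, the left and right gaze rays each
    cover part of the IPD horizontally by the time they meet, so the depth
    follows from simple trigonometry:
        d = ipd / (tan(theta_left) + tan(theta_right))
    """
    left = math.radians(left_angle_deg)
    right = math.radians(right_angle_deg)
    return ipd_m / (math.tan(left) + math.tan(right))

# Example: 63 mm IPD, both eyes rotated 1.8 degrees inward -> roughly 1 m away.
print(round(vergence_distance(0.063, 1.8, 1.8), 2))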
In particular embodiments, eye tracking may be performed using a machine learning (ML) based approach. The headset system may capture a sequence of images of the eyes of the user wearing the headset (e.g., using a camera of the 3D eye tracking system), and process the images and output vergence information using a machine learning (ML) algorithm. For example, the machine learning (ML) algorithm may include an inference model to determine the user's vergence distance and gaze point. In particular embodiments, the headset system may use a hybrid approach that combines 3D eye tracking and ML-based eye tracking.
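A hedged sketch of the ML-based and hybrid approaches is shown below. The inference_model argument is a placeholder for whatever trained network the headset system ships, assumed here to map an eye-image sequence to a vergence estimate, and the 50/50 blend weight is an illustrative assumption rather than a disclosed value.

from typing import Optional

def estimate_vergence_ml(eye_images, inference_model):
    """Feed a short sequence of eye-camera frames to a placeholder inference model.

    The model is assumed to return a dict with a vergence distance (metres)
    and a 3D gaze point; the key names are hypothetical.
    """
    prediction = inference_model(eye_images)
    return prediction["vergence_distance_m"], prediction["gaze_point_xyz"]

def fused_vergence(geometric_m: Optional[float], ml_m: Optional[float],
                   ml_weight: float = 0.5) -> Optional[float]:
    """Hybrid estimate: blend the 3D eye-tracking result with the ML prediction.

    Either source may be missing (e.g., the geometric estimate can fail when
    an eye is occluded), in which case the other is used on its own.
    """
    if geometric_m is None:
        return ml_m
    if ml_m is None:
        return geometric_m
    return (1.0 - ml_weight) * geometric_m + ml_weight * ml_m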
In particular embodiments, the artificial reality system may use a combination of methods to determine the user's vergence distance and gaze point. These methods may include, for example, but are not limited to, eye-tracking-based methods (e.g., 3D eye tracking, ML-based eye tracking), body-based methods (e.g., head position/movement, hand position/movement, body position/movement), and content-based methods (e.g., Z-buffer, face/object recognition, developer-provided information). U.S. patent application No. 16/132,153, entitled "Vergence Determination" and filed on September 14, 2018, which is incorporated by reference for purposes of illustration only and not limitation, discloses examples of using different combinations of methods to determine the vergence distance or gaze point.
Fig. 3 illustrates an example artificial reality space 300. In particular embodiments, artificial reality space 300 may include a first user 302, a second user 304, and a third user 306. In particular embodiments, the artificial reality space 300 may include a virtual reality scene rendered by a headset in a virtual space and in a field of view of each of the first, second, and third users 302, 304, and 306 wearing the headsets 308, 310, and 312, respectively. As discussed in more detail below, the first user 302, the second user 304, and the third user 306 may access the artificial reality space 300 to meet and cooperate with each other in addition to interacting with one or more objects or items located in the artificial reality space 300.
Triggers
In particular embodiments, the VR environment may be changed or updated based on an analysis of the user's movement in the VR environment and/or interactions with one or more items in the VR environment to better accommodate the user's needs or to provide a better user experience. Certain embodiments described herein are directed to modifying a VR environment using contextual information and the user's intent in order to create an isolated artificial reality experience for the user. As described in more detail below, the system may first determine whether the user wants to focus on interacting with a particular object based on the user's movements, interactions with the object, and eye movements, in addition to various social-networking information (discussed below). Once the system determines that the user wants to interact with a particular object, the system may then modify the environment to maximize the user's experience with the particular object.
In particular embodiments, the system may infer a user's intent to focus on interacting with a particular object based on the user's movement or location. Fig. 4 illustrates an example VR environment 400. As shown in fig. 4, VR environment 400 includes screen 410, speaker 412, lights 414, alarm 416, radio 418, and sofa 420. Further, fig. 5 shows another example of a VR environment 500. As shown in fig. 5, VR environment 500 includes a sofa 510, a bookshelf 512, a coffee table 514, a pendant light 516, a floor lamp 518, a window covered by a shade 520, and a radio 522. In particular embodiments, the artificial reality system may infer the user's intent based on the contextual information and one or more "triggers" in the VR environment, which may include determining that the user's movement and/or location is within a predetermined distance of, or at a predetermined location relative to, one or more objects (e.g., the virtual screen, the bookshelf) in the VR environment. These triggers may include instructions associated with one or more objects in the VR environment that are triggered when certain conditions associated with the user are met (e.g., the user is at a predetermined location or within a predetermined distance of the one or more objects, the user moves in a way related to the one or more objects, etc.).
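One plausible way to represent such triggers is sketched below: a condition evaluated against the user's current state, plus a list of environment adjustments to run when the condition holds. The Trigger class, the 2-metre threshold, and the example actions are hypothetical and only illustrate the idea of instructions attached to objects in the VR environment.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Trigger:
    """Instructions attached to a virtual object, fired when a condition holds."""
    condition: Callable[[Dict], bool]                 # receives the current user state
    actions: List[Callable[[], None]] = field(default_factory=list)

    def evaluate(self, user_state: Dict) -> None:
        if self.condition(user_state):
            for action in self.actions:
                action()

def within(distance_m, object_position, user_position):
    # 2D distance check on the room's floor plane.
    dx = object_position[0] - user_position[0]
    dz = object_position[1] - user_position[1]
    return (dx * dx + dz * dz) ** 0.5 <= distance_m

# Hypothetical trigger: a user within 2 m of the virtual screen dims the lights
# and turns off the radio to create the isolated viewing experience.
screen_trigger = Trigger(
    condition=lambda state: within(2.0, state["screen_pos"], state["user_pos"]),
    actions=[lambda: print("dim lights 414"), lambda: print("turn off radio 418")],
)
screen_trigger.evaluate({"screen_pos": (0.0, 0.0), "user_pos": (0.5, 1.0)})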
As an example, as shown in fig. 4, if the artificial reality system determines that the user wants to view a program on screen 410 (e.g., a virtual television screen, a virtual projection screen, etc.) based on the user's movement toward a location near the location of screen 410 in the VR environment (e.g., by sitting on the sofa 420 in front of screen 410, by standing beside screen 410, etc.), the system may determine that the user wants to focus their attention on viewing the program on screen 410 and minimize all other distractions. In this way, the system can modify the VR environment around the user to create an isolated experience for the user, thereby maximizing the user's experience. By way of example and not limitation, the system may determine that, in order to isolate the user's experience of watching the program on screen 410, it should remove any and all interference by dimming or turning off the lights 414. In addition, the system may remove objects that could become distractions, for example by turning off the alarm 416, the radio 418, and other related objects. Further, the system may temporarily limit interaction with other users (e.g., users of a social network, as discussed in more detail below).
As another example, as shown in fig. 5, if the artificial reality system determines that the user wants to read a book from the bookshelf 512 based on the user's movement toward a location proximate to the location of the bookshelf 512 in the VR environment (e.g., by standing in front of the bookshelf 512, by sitting on the sofa 510, etc.), the system may determine that the user wants to focus their attention on reading the book from the bookshelf 512 and minimize all other distractions. In this way, the system can modify the VR environment around the user to create an isolated experience for the user, thereby maximizing that experience. By way of example and not limitation, the system may remove any and all interference by turning on the pendant light 516 and turning off the floor lamp 518 to reduce unnecessary light in the room. In addition, the system may turn on the radio 522 to provide quiet music. Further, the system may remove objects that are distracting and temporarily limit interaction with other users.
In particular embodiments, the artificial reality system may infer a user's intent to focus on interacting with a particular object based on the user's interaction with the object. In particular embodiments, the system may infer the user's intent based on the contextual information and one or more triggers in the VR environment, including determining a level of interaction of the user with one or more objects (e.g., the virtual screen, the bookshelf) in the VR environment. The triggers can include instructions associated with one or more objects in the VR environment that are triggered when a particular condition associated with the user's interaction with the one or more objects is satisfied. As an example, as shown in fig. 4, if the artificial reality system determines that the user wants to view a program on screen 410 based on the user's interaction with screen 410 in the VR environment (e.g., by using a controller to control the program on screen 410, by verbally requesting that screen 410 turn on or play the program, etc.) or with the sofa 420 (e.g., by sitting on the sofa 420 in front of screen 410), the system may determine that the user wants to focus their attention on viewing the program on screen 410 and minimize all other distractions (e.g., using the methods of adjusting visual, audio, and/or social network attributes discussed above). As another example, as shown in fig. 5, if the artificial reality system determines that the user wants to read a book from the bookshelf 512 based on the user's interaction with a book on the bookshelf 512 in the VR environment (e.g., by removing the book from the bookshelf 512) or with the sofa 510 (e.g., by sitting on the sofa 510), the system may determine that the user wants to focus their attention on reading the book from the bookshelf 512 and minimize all other distractions (e.g., using the methods discussed above).
In particular embodiments, the artificial reality system may infer the user's intent to focus on interacting with a particular object based on tracking the user's eye movements (e.g., using the methods described above with respect to fig. 2). In particular embodiments, the system may infer the user's intent based on the contextual information and one or more triggers in the VR environment, including determining eye movements of the user related to one or more objects (e.g., the virtual screen, the bookshelf) in the VR environment. These triggers may include instructions associated with one or more objects in the VR environment that are triggered when certain conditions associated with tracking the user's eye movement are met. Further, the inference of user intent may be determined based on tracking of the user's eye movement in conjunction with a temporal component (e.g., a predetermined period of time during which the user takes no other action). As an example, as shown in fig. 4, if the artificial reality system tracks the user's eye movement and determines that the user has been looking at screen 410 in the VR environment for a predetermined amount of time (e.g., by determining that the user's eyes are focused on point 422 on screen 410), the system may determine that the user wants to focus their attention on watching the program on screen 410 and minimize all other distractions (e.g., using the methods of adjusting visual attributes, audio attributes, and/or social network attributes discussed above). The determination may be made by further determining that a predetermined period of time has elapsed during which the user takes no other action. As another example, as shown in fig. 5, if the artificial reality system tracks the user's eye movement and determines that the user wants to read a book from the bookshelf 512 in the VR environment (e.g., by determining that the user's eyes are focused on point 524 on the bookshelf 512), the system may determine that the user wants to focus their attention on reading the book from the bookshelf 512 and minimize all other distractions (e.g., using the methods discussed above). For both examples, the determination may be made by further accessing time information associated with the user (e.g., determining that a predetermined period of time during which the user takes no other action has elapsed).
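The eye-movement trigger combined with a temporal component could be approximated by a dwell-time check like the following sketch; the 3-second threshold and the class name are illustrative assumptions, not values disclosed by the patent.

import time

class GazeDwellDetector:
    """Infer focus intent when the eyes stay on one object for a set period."""

    def __init__(self, dwell_seconds: float = 3.0):
        self.dwell_seconds = dwell_seconds
        self.current_target = None
        self.since = None

    def update(self, gaze_target, now=None):
        """Call with the object currently under the user's gaze (or None)."""
        now = time.monotonic() if now is None else now
        if gaze_target != self.current_target:
            # Gaze moved to a new object: restart the dwell timer.
            self.current_target, self.since = gaze_target, now
            return None
        if gaze_target is not None and now - self.since >= self.dwell_seconds:
            return gaze_target  # the user intends to focus on this object
        return None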
In particular embodiments, modifications to the VR environment may be encoded into or associated with various objects based on the object type of the particular object with which the user wants to interact (e.g., screen 410, a book from bookshelf 512, etc.), based on the type of activity the user wants to engage in (e.g., watching a program, reading a book, etc.), or based on the type of service associated with the particular object (e.g., screen 410 is associated with a video application, books on bookshelf 512 are associated with a reading application). Further, the modifications to the VR environment may be user-specified or provided as user input, or may be based on preset settings (e.g., factory settings) that can be changed or updated by the user.
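Encoding the modifications per content or service type might look like the following sketch, where the preset tables loosely mirror the screen 410 and bookshelf 512 examples above; the keys and values are illustrative defaults that a user could override, not settings prescribed by the disclosure.

# Illustrative preset adjustments keyed by the content or service type of the
# object the user intends to focus on.
ISOLATION_PRESETS = {
    "video": {"lights": "off", "radio": "off", "alarm": "off", "notifications": "muted"},
    "reading": {"pendant_light": "on", "floor_lamp": "off", "radio": "quiet music", "notifications": "muted"},
}

def apply_isolation(content_type: str, environment: dict) -> dict:
    """Merge the preset for the focused content type into the environment state."""
    adjusted = dict(environment)
    adjusted.update(ISOLATION_PRESETS.get(content_type, {}))
    return adjusted

# Example: the user focuses on a program, so the lights and radio are turned off.
print(apply_isolation("video", {"lights": "on", "radio": "on"}))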
In particular embodiments, the artificial reality system may infer a user's intent to focus on interacting with a particular object based on one or more items of information stored by or available to the social networking system (discussed in more detail below). Examples of information items stored by a social networking system may include social graph information associated with a target user (i.e., a user interacting with one or more objects). Examples of information items available to a social-networking system may include information items accessible by the social-networking system and stored on one or more client systems, one or more third-party systems, one or more networks, or any combination thereof. In particular embodiments, the information items from which intent may be inferred may include social graph information (e.g., nodes and edges, affinities (affinities), and degrees of separation), content objects, posts, text data, location information, media, user profile information, time information, and privacy settings. In particular embodiments, one or more information items may belong to multiple categories. For example, one or more information items may be classified as social graph information, posts, and media. Alternatively, in particular embodiments, one or more information items may belong to only one category.
In particular embodiments, the artificial reality system may infer a user's intent to focus on interacting with a particular object based on one or more perspectives of one or more users of the social network. For example, the inferred intent may be based on a perspective of a hypothetical user that is based on one or more users of the social network. In particular embodiments, the hypothetical user may be based on every user of the social network. By way of example and not by way of limitation, by standing near screen 410 shown in fig. 4, interacting with screen 410, and/or viewing screen 410 (e.g., determined based on tracking eye movements), it may be inferred that a hypothetical user based on every user of the social network would have an intent to view the program. As another example, as shown in fig. 5, it may be inferred that a hypothetical user based on every user of the social network would have an intent to read the book by interacting with a book on bookshelf 512, looking at a book on bookshelf 512, or sitting in sofa 510 next to bookshelf 512.
As another example, in particular embodiments, the inferred intent may be based on a perspective of a hypothetical user that is based on a subset of users of the social network. In particular embodiments, the subset of users may be determined by any suitable means, including but not limited to one or more numerical limits, one or more temporal limits, one or more location-based limits, one or more degrees of separation, one or more membership (affiliation) coefficients between the target user and the users comprising the subset of users, one or more commonalities between the target user and the users comprising the subset of users, or any combination thereof. In particular embodiments, commonalities may include any feature or characteristic shared between the target user and the users comprising the subset of users, including, but not limited to, location, age, religion or religious beliefs, education, political affiliations or political beliefs, or common interests (e.g., interests in food, books, movies, or music). For example, it may be inferred that a hypothetical user based on a sample set of one hundred users of the social networking system who share a common interest with the target user, are within two degrees of separation of the target user, and consistently stand near, interact with, and/or view (e.g., as determined based on tracking eye movement) screen 410 would have an intent to view a program on screen 410. As another example, it may be inferred that a hypothetical user based on a sample set of twenty-five users who are within three years of age of the target user, are within one degree of separation of the target user, are currently reading a book of a similar type as the target user, and consistently interact with the book on bookshelf 512, look at the book on bookshelf 512, or sit in sofa 510 beside bookshelf 512 would have an intent to read the book on bookshelf 512.
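A minimal sketch of building such a hypothetical user from a bounded, similar subset of users is shown below. The Profile fields, the two-degree and one-hundred-user limits, and hypothetical_intent are illustrative assumptions, not the disclosed algorithm.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Profile:
    user_id: str
    age: int
    interests: Set[str] = field(default_factory=set)
    degrees_from_target: int = 0
    intended_action: str = ""   # what this user did in a comparable situation

def hypothetical_intent(target: Profile, candidates: List[Profile],
                        max_degrees: int = 2, max_users: int = 100) -> str:
    """Infer intent from the majority behaviour of a bounded subset of similar users."""
    subset = [c for c in candidates
              if c.degrees_from_target <= max_degrees
              and (c.interests & target.interests)][:max_users]
    if not subset:
        return ""
    # The hypothetical user's intent is the most common intent within the subset.
    actions = [c.intended_action for c in subset]
    return max(set(actions), key=actions.count)
```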
Fig. 6 illustrates an example method 600 of updating a VR environment based on a user's intent and contextual information. The method may begin at step 610, where the artificial reality system may display first virtual content to the first user in a virtual area, the virtual area including one or more second virtual content. At step 620, the artificial reality system may infer an intent of the first user to interact with the first virtual content based on one or more of the first user action or the contextual information. At step 630, the artificial reality system may adjust one or more configurations associated with one or more of the second virtual content based on the inference of the first user's intent to interact with the first virtual content. In particular embodiments, the first user action may include one or more of a user eye movement focused on the first virtual content, a verbal request of the first user, a user input associated with the first virtual content, or a user input associated with one or more of the second virtual content. In particular embodiments, the contextual information may include one or more of location information associated with the first user, movement information associated with the first user, time information associated with the first user, a preset action associated with the first virtual content, a content type associated with the first virtual content, or a service type associated with the first virtual content. As an example, the time information associated with the first user may include a predetermined period of time during which the user is not active.
In particular embodiments, inferring the intent of the first user may be based, at least in part, on a perspective of a hypothetical user that is based, at least in part, on one or more users of the associated social network. As an example, the hypothetical user may be based at least in part on every user of the social network, or on one or more subsets of users of the social network. In particular embodiments, adjusting the one or more configurations associated with the second virtual content includes adjusting one or more visual attributes of one or more of the second virtual content, adjusting one or more audio attributes of one or more of the second virtual content, or adjusting one or more social network attributes of one or more of the second virtual content. As an example, the adjustment to the visual attribute or the audio attribute of one or more of the second virtual content may be determined based at least in part on a content type associated with the second virtual content or a service type associated with the second virtual content. As another example, the adjustment of the social network attribute may include temporarily limiting or removing all notifications from the social network that are associated with the second virtual content. In particular embodiments, the virtual area may reside in a virtual reality environment, and the first user may be a virtual user in the virtual reality environment.
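The adjustment step (step 630) can be sketched as follows, assuming a simplified content model. VirtualContent, its opacity/volume/notifications fields, and adjust_configurations are names assumed for illustration only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VirtualContent:
    content_id: str
    opacity: float = 1.0          # 0.0 fully hidden, 1.0 fully visible
    volume: float = 1.0           # 0.0 muted, 1.0 full volume
    notifications_enabled: bool = True

def adjust_configurations(focused_id: str, contents: List[VirtualContent],
                          dim_to: float = 0.2, mute: bool = True) -> None:
    """Dim, mute, and silence everything except the content the user is focused on."""
    for c in contents:
        if c.content_id == focused_id:
            continue
        c.opacity = min(c.opacity, dim_to)
        if mute:
            c.volume = 0.0
        c.notifications_enabled = False   # temporarily remove associated social-network notifications
```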
Particular embodiments may repeat one or more steps of the method of fig. 6 where appropriate. Although this disclosure describes and illustrates particular steps of the method of fig. 6 as occurring in a particular order, this disclosure contemplates any suitable steps of the method of fig. 6 occurring in any suitable order. Further, although this disclosure describes and illustrates an example method for updating a VR environment based on user intent and context information that includes particular steps of the method of fig. 6, this disclosure contemplates any suitable method for updating a VR environment based on user intent and context information that includes any suitable steps, which may include all, some, or none of the steps of the method of fig. 6, where appropriate. Moreover, although this disclosure describes and illustrates particular components, devices, or systems performing particular steps of the method of fig. 6, this disclosure contemplates any suitable combination of any suitable components, devices, or systems performing any suitable steps of the method of fig. 6.
Portals
In particular embodiments, one or more users may want to create a federated VR space that the users can access and use to interact with each other. Certain embodiments described herein are directed to an artificial reality system that merges reality across the locations of various users to create a joint VR space using each user's free space. The system may first determine that the free space of a particular user is large enough to accommodate the joint VR space. The system can then map or retrieve the free space of a particular user and coordinate with other users who also want to participate in the joint VR space by determining and creating a framework of the joint VR space that accommodates the free space constraints of each user and maximizes the overlap between the free spaces of the users. In particular embodiments, once the joint VR space is created, the system will allow the user to participate in the joint VR space by, for example, drawing a doorway (e.g., a "portal") that can be used to enter the joint VR space. Once a user enters the federated VR space, they can interact and collaborate with other users in the space. In particular embodiments, a guardian box may be used to limit the movement of users of the federated VR space by blocking out areas that one or more users may not have access to because those areas exceed the users' available free space.
In particular embodiments, to generate the joint VR space, the artificial reality system may first determine whether the user has sufficient free space to accommodate the joint VR space. As an example, the artificial reality system may determine whether the user has a minimum amount of free space by requesting the user to scan the space using a system controller (e.g., controller 106) to make measurements. Fig. 7 illustrates an example first user-specified space 700 for merging into a joint VR space with other users. As shown in fig. 7, the space 700 includes various items that may block free space in the room 710, including a table 712, a chair 714, a bed 716, and a wardrobe 718. In the middle of the room 710 is a blank area that can be used to create a joint VR space. The user 720 may use a controller 722 (e.g., controller 106) to scan the room 710 to measure the blank areas. By way of example, the user 720 may draw the blank region using the controller 722 by delineating the boundaries of the blank region using straight lines (e.g., as shown in fig. 7), lines following the outlines of objects, other suitable methods, or any combination thereof. As another example, the artificial reality system may request from the user a measurement value that may be manually input by the user. By accessing measurements taken by the controller or input by the user, the artificial reality system can calculate the area of free space in the room 710. If the system determines that the area is greater than the predetermined minimum area, the system will allow the user to continue to designate the region for the joint VR space. On the other hand, if the system determines that the area is insufficient because it is less than the predetermined minimum area required to participate in the joint VR space, the system will output an error message to the user and notify the user that a larger area is needed.
In certain embodiments, once the system determines that the area is greater than a predetermined minimum area required to participate in the joint VR space, the system may ask the user 720 to scan the room 710 using the controller 722 to map out the blank areas in the room. As shown in fig. 7, the user 720 may use the controller 722 to designate a region 724 (shown in phantom) as the region to be used for the joint VR space. As an example, the system may ask user 720 to specify the largest possible area available for the joint VR space, so that the maximum amount of space can be used to determine overlap between the spaces of the various users. As another example, the system may require the user 720 to specify only the areas that the user wants the joint VR space to cover, while excluding other areas from evaluation (e.g., corridor areas, walkway areas, etc.). In particular embodiments, such a scan of the room 710 may only be needed when the user 720 first requests to participate in the joint VR space. Once the system has mapped the blank area in the room, the system may store this information for future use. In particular embodiments, before asking user 720 to draw a blank area in a room, the system may first check the stored information for a previously stored map of the room. If the system finds a previously stored room map, the system may first ask the user 720 if he/she wants to use the previously stored room map, and/or, before making further measurements, the system may ask the user 720 whether the room configuration has changed since the previously stored room map was created.
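A hedged sketch of the free-space check follows: the controller trace yields a polygon of floor-plane boundary points, the shoelace formula gives its area, and the result is compared against a minimum. MIN_AREA_M2, the point format, and the function names are illustrative assumptions.

```python
from typing import List, Tuple

MIN_AREA_M2 = 4.0   # assumed minimum floor area required to join the joint VR space

def polygon_area(points: List[Tuple[float, float]]) -> float:
    """Shoelace formula over the (x, y) floor-plane coordinates of the traced boundary."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def free_space_ok(traced_boundary: List[Tuple[float, float]]) -> bool:
    """True if the user's traced blank region is large enough for the joint VR space."""
    return polygon_area(traced_boundary) >= MIN_AREA_M2
```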
In particular embodiments, after the artificial reality system determines that the user 720 wants to create the joint VR space with one or more other users and determines that the user 720 has sufficient free space in the room 710, the artificial reality system may then determine one or more other users participating in the joint VR space. As an example, the artificial reality system may ask the user 720 which other users the user 720 wants to invite to participate in the federated VR space. As another example, the artificial reality system may send the user 720 a list of other users to select from. As yet another example, the artificial reality system may maintain a list of other users who have indicated an interest in joining the joint VR space to interact with the other users. The other users may be users from the user's social network as determined by social graph information (e.g., nodes and edges, affinities, and degrees of separation), as discussed in more detail below. In particular embodiments, other users may be determined by any suitable means, including but not limited to one or more temporal limits (e.g., other users using an artificial reality system during the same time period as user 720), one or more location-based limits (e.g., other users within a geographic distance or within a particular geographic area), one or more degrees of separation, one or more membership coefficients between user 720 and other users, one or more commonalities between user 720 and other users, or any combination thereof.
In particular embodiments, after the artificial reality system determines one or more other users participating in the joint VR space, the system may retrieve information about the spatial limitations of the free space of each of the other users. For example, the system may request and receive spatial constraint information from each other user. If space restriction information is not available, the system may request the system of the other user to determine space restriction information (similar to the method described above). Fig. 8 illustrates an example second user-specified space 800 for merging with other users into a joint VR space. As shown in FIG. 8, the space 800 includes various items that may block free space in a room 810, including a television cabinet 812, a bookshelf 814, and a sofa 816. In the middle of the room 810 is a blank area that can be used to create a joint VR space. The user 818 can scan the room 810 using the controller 820 (e.g., the controller 106) to measure the blank areas. By way of example, the user 818 can draw the blank region using the controller 820 to delineate the boundaries of the blank region using a straight line (e.g., as shown in fig. 8), a line following the outline of the object, other suitable methods, or any combination thereof. Once the system determines that the area is greater than a predetermined minimum area required to participate in the joint VR space, the system may ask the user 818 to scan the room 810 using the controller 820 to map out the blank areas in the room. As shown in fig. 8, the user 818 can use the controller 820 to designate a region 822 (shown in dashed lines) as the region to be used for the joint VR space.
Upon receiving the space constraint information for all other users who want to participate in the joint VR space, the artificial reality system can determine and create a framework for the joint VR space that accommodates the free space constraints of each user and maximizes the overlap between the free spaces of the users. Fig. 9 illustrates an example merged VR environment 900. As shown in fig. 9, a merged VR environment 900 is created by maximizing the overlap between the free spaces of the user 720 and the user 818. As shown using solid lines in fig. 9, the free space of the room 710 of the user 720 is blocked by various items in the room 710, including a table 712, a chair 714, a bed 716, and a wardrobe 718. Further, as shown using the lighter dashed lines in fig. 9, the free space of the room 810 of the user 818 is blocked by various items in the room 810, including the television cabinet 812, the bookshelf 814, and the sofa 816. As shown in fig. 9, the room 710 has a larger area than the room 810 and also has a larger free space region, so the merged VR environment 900 is limited by the free space region of the room 810. In determining the maximum overlap region 910 between the rooms 710 and 810, the artificial reality system may determine a maximum overlap between a maximum free space associated with the room 710 and a maximum free space associated with the room 810. This maximum overlap region 910 is then used as a joint VR space that users 720 and 818 can use together to interact with each other and with other users. In particular embodiments, the maximum overlap region used as the joint VR space may be a square, rectangle, circle, polygon, or other suitable region that maximizes the available free space.
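An illustrative sketch of deriving the joint VR space as the overlap of each user's free space is shown below, under the simplifying assumption that every free-space region is reduced to an axis-aligned rectangle anchored at a shared origin. Rect, joint_vr_space, and the example dimensions are assumptions for the example, not the disclosed method.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rect:
    width: float    # meters along x
    depth: float    # meters along y

def joint_vr_space(free_spaces: List[Rect]) -> Optional[Rect]:
    """The largest rectangle that fits inside every user's free space."""
    if not free_spaces:
        return None
    width = min(r.width for r in free_spaces)
    depth = min(r.depth for r in free_spaces)
    if width <= 0 or depth <= 0:
        return None
    return Rect(width=width, depth=depth)

# Example: if room 710 offers 4.0 m x 3.5 m of free space and room 810 only 3.0 m x 3.0 m,
# the joint space is limited to 3.0 m x 3.0 m by the smaller room.
shared = joint_vr_space([Rect(4.0, 3.5), Rect(3.0, 3.0)])
```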
In particular embodiments, the maximum overlap region used as a joint VR space may include certain regions that are blocked off so that the user does not touch or move close to particular areas in the user's room (e.g., blocking the region around the furnace even if the furnace is located in a free space region). As an example, a guardian box displayed in the merged VR environment may be used to block these regions. The guardian box may be a visual cue that tells the user in the merged VR environment that the area is restricted from access.
In particular embodiments, once the joint VR space is created, the artificial reality system may notify users 720 and 818 that the joint VR space is available and accessible for interacting with each other. By way of example, notifying users 720 and 818 may include sending instructions to users 720 and 818 to generate a portal object that allows users 720 and 818 to virtually access the federated virtual space. Fig. 10A and 10B illustrate an example of creating a portal object in a space 1000 to access a merged VR environment. In particular embodiments, generating the portal object may include the artificial reality system sending instructions to users 720 and 818 to draw a virtual doorway within the area in their respective rooms 710 and 810 that allows each of users 720 and 818 to virtually access the joint virtual space. As shown in FIG. 10A, the space 1000 includes various items that may block free space in the room 1010, including a television cabinet 1012, a bookshelf 1014, and a sofa 1016. In the middle of the room 1010 is a blank area 1018 that is used to create the joint VR space. Once the artificial reality system notifies the user 1020 that the joint VR space is available, the artificial reality system may send an instruction to the user 1020 to generate a portal object to access the joint VR space. As an example, as shown in fig. 10A, the artificial reality system may send an instruction to the user 1020 to generate a portal object 1022 (e.g., a door) to access the federated VR space. The portal object 1022 may be generated by the user 1020 using the controller 1024 (e.g., controller 106) by outlining the portal object 1022, selecting the portal object 1022 from a selection list, or any other suitable means. Once the portal object 1022 is generated, as shown in fig. 10B, the user 1020 can interact with the portal object 1022 (e.g., open a door and pass through a doorway 1026) to access the federated VR space.
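The portal-drawing step can be pictured with the minimal, self-contained sketch below. BlankRect, Doorway, draw_portal, and the 1 m x 2 m doorway size are illustrative assumptions and are not taken from the disclosed system.

```python
from dataclasses import dataclass

@dataclass
class BlankRect:
    x0: float   # corners of the mapped blank area on the floor plane, in meters
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

@dataclass
class Doorway:
    x: float
    y: float
    width: float = 1.0
    height: float = 2.0
    open: bool = False

def draw_portal(x: float, y: float, area: BlankRect) -> Doorway:
    """Create the doorway where the user traced it, provided it lies inside the blank area."""
    if not area.contains(x, y):
        raise ValueError("portal must be placed inside the mapped free space")
    return Doorway(x=x, y=y)

# Example: user 1020 traces a doorway near the middle of region 1018.
door = draw_portal(2.0, 1.5, BlankRect(0.0, 0.0, 4.0, 3.0))
door.open = True   # opening the door provides access to the joint VR space
```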
FIG. 11 illustrates an example method for specifying a space to merge an artificial reality system with other users. The method may begin at step 1110, where an artificial reality system may receive a request from a first user to create a federated virtual space for use with one or more second users. At step 1120, the artificial reality system may determine a first region in a first room associated with a first user based at least in part on spatial limitations associated with the first room and a location of one or more items in the first room. At step 1130, the artificial reality system may retrieve information associated with one or more second rooms for each of the second users. At step 1140, the artificial reality system may create a joint virtual space based on the first region of the first room and information associated with each of the second rooms. At step 1150, the artificial reality system may provide each of the one or more second users and the first user access to the federated virtual space.
In particular embodiments, the artificial reality system may determine whether a first region in the first room is equal to or greater than a predetermined minimum area prior to retrieving information associated with the one or more second rooms. In particular embodiments, the first zone may be determined by calculating a maximum free space associated with the first room after evaluating the spatial constraints and the location of the one or more items in the first room. The retrieved information associated with the second rooms may include at least the space limitations associated with each of the second rooms for each of the second users and the locations of the one or more items in each of the second rooms for each of the second users. The artificial reality system may determine a second region of each of the one or more second rooms based at least in part on the spatial limit and the location of the one or more items, wherein the second region is determined by calculating a maximum free space associated with each of the one or more second rooms after evaluating the spatial limit and the location of the one or more items in each of the one or more second rooms. The joint virtual space may be created by determining a maximum overlap between a maximum free space associated with the first room and a maximum free space associated with each of the one or more second rooms.
In particular embodiments, providing access to the federated virtual space includes notifying each of the first user and the one or more second users that the federated virtual space is available for use. Notifying each of the first user and the one or more second users may include sending, to each of the first user and the one or more second users, an instruction to generate a portal object that allows each of the first user and the one or more second users to virtually access the federated virtual space. In particular embodiments, generating the portal object may include sending instructions to the first user to draw a virtual doorway within the first region in the first room that allows the first user to virtually access the joint virtual space, and sending instructions to each of the second users to draw a virtual doorway in each of the second rooms that allows each of the second users to virtually access the joint virtual space. In particular embodiments, the joint virtual space may reside in a virtual reality environment, and each of the first user and the one or more second users may be virtual users in the virtual reality environment.
Particular embodiments may repeat one or more steps of the method of fig. 11 where appropriate. Although this disclosure describes and illustrates particular steps of the method of fig. 11 as occurring in a particular order, this disclosure contemplates any suitable steps of the method of fig. 11 occurring in any suitable order. Further, although this disclosure describes and illustrates an example method for specifying a space for merging an artificial reality system with other users that includes particular steps of the method of fig. 11, this disclosure contemplates any suitable method for specifying a space for merging an artificial reality system with other users that includes any suitable steps, which may include all, some, or none of the steps of the method of fig. 11, where appropriate. Moreover, although this disclosure describes and illustrates particular components, devices, or systems performing particular steps of the method of fig. 11, this disclosure contemplates any suitable combination of any suitable components, devices, or systems performing any suitable steps of the method of fig. 11.
Shared spaces
In particular embodiments, one or more users may want to collaborate with each other using content that all users can view and dynamically interact with. Certain embodiments described herein are directed to an artificial reality system that synchronizes content and objects from real life with content and objects in a digital/VR environment to enhance user interaction, communication, and collaboration (e.g., for purposes of collaborating on projects) with other users. The system may first determine what objects within the user's real life environment the user may want to use to collaborate with other users. The system may then copy and present the real-life objects in real-time within the UI of the VR shared space so that other users in the VR shared space can view and interact with the objects.
In particular embodiments, the artificial reality system may first determine whether a user is interacting with objects that are available for communication and collaboration with other users. As an example, the system may detect that an object with which the user is interacting may be viewed and may be shared with other users, or has been designated as an object that may be viewed and may be shared with other users. Alternatively, the user may request that the system share the object with other users by presenting images, videos, real-time views, or any other suitable presentation of the object to the other users. FIG. 12 illustrates a first user specifying an object for sharing with other users in an artificial reality system. As shown in FIG. 12, a user 1210 is interacting with a separate screen 1212 using a device 1214. By way of example, the screen 1212 may be a screen that is not connected to the virtual reality system (e.g., a blackboard, a whiteboard, etc.). As another example, screen 1212 may alternatively be a screen connected to a virtual reality system where any information written on the screen is to be copied, saved, and accessible via the system. Further, as an example, the device 1214 may be a physical device (e.g., a writing device, a pointer device, etc.) that allows the user 1210 to interact with the screen 1212 in the real world. As another example, the device 1214 may be an electronic device (e.g., an electronic pointer device) that allows the user 1210 to interact with the screen 1212 in the real world or an electronic device (e.g., the controller 106) that allows the user 1210 to interact with the screen 1212 in the virtual world.
In particular embodiments, as shown in fig. 12, the area around user 1210 also contains a bookshelf 1216, a bed 1218, and a desk 1220. Of these objects in the room, the artificial reality system may determine that bookshelf 1216 and bed 1218 are not objects to be designated as objects for sharing information with other users in the artificial reality system, while desk 1220 may be designated as an object for sharing information with other users in the artificial reality system. Further, the user 1210 can designate the screen 1212 as an object for sharing information with other users. In particular embodiments, the artificial reality system may access the environment surrounding the user 1210 to determine the location, position, and/or orientation of the bookshelf 1216, bed 1218, and desk 1220, and include these objects in the virtual reality display along with the screen 1212. This may provide the user with a more realistic scene and context with respect to the location of the screen 1212, so that the user is not just looking at a screen floating in 3D space.
FIG. 13 illustrates a second user specifying an object for sharing with other users in an artificial reality system. As shown in fig. 13, a user 1310 is interacting with a screen 1312 located on a wall of a room. As an example, the screen 1312 may be a physical screen located on a wall. Alternatively, the screen 1312 may be a projection screen that is positioned on a wall and projected from a component of the artificial reality system or other projection system. Further, similar to the example above, screen 1312 may be a screen that is not connected to the virtual reality system (e.g., in the case of a physical screen), or may be a screen that is connected to the virtual reality system where any information written on the screen is to be copied, saved, and accessible via the system (e.g., in the case of a projection screen). The user 1310 may interact with the screen 1312 using the device 1314, which device 1314 may be a physical device that allows the user 1310 to interact with the screen 1312 in the real world, or an electronic device (e.g., an electronic pointer device) that allows the user 1310 to interact with the screen 1312 in the real world or an electronic device (e.g., the controller 106) that allows the user 1310 to interact with the screen 1312 in the virtual world.
In certain embodiments, as shown in FIG. 13, the area around user 1310 also contains a bookshelf 1316, a television cabinet 1318, and a sofa 1320. Of these objects in the room, the artificial reality system may determine that bookshelf 1316 and sofa 1320 are not objects to be designated as objects for sharing information with other users in the artificial reality system, while television cabinet 1318 (with its associated television) may be designated as an object for sharing information with other users in the artificial reality system. Further, user 1310 may designate screen 1312 as an object for sharing information with other users and/or for receiving shared information from other users. In certain embodiments, the artificial reality system may access the environment surrounding user 1310 to determine the location, position, and/or orientation of bookshelf 1316, television cabinet 1318, and sofa 1320, and include these objects in the virtual reality display along with screen 1312. As discussed above, this may provide the user with a more realistic scene and context with respect to the location of the screen 1312, so that the user is not just viewing a screen floating in 3D space.
In particular embodiments, once the artificial reality system determines that the user is interacting with objects available for communication and collaboration with other users, the artificial reality system may send a query to the user to determine whether the user wants to share the display of interactive objects with one or more other users. Alternatively, the artificial reality system may receive a request from a user to share a display of an interactive object with one or more other users. In both cases, the artificial reality system may ask the user which other users the user wants to invite to participate in the sharing of the display of the interactive object. As an example, the artificial reality system may send the user a list of other users to select. As another example, the artificial reality system may maintain a list of other users who have indicated an interest in participating in the sharing of the display of the interactive object. The other users may be users from the user's social network as determined by social graph information (e.g., nodes and edges, affinities, and degrees of separation), as discussed in more detail below. In particular embodiments, the other users may be determined by any suitable means, including but not limited to one or more temporal limits (e.g., other users using an artificial reality system during the same time period as the user), one or more location-based limits (e.g., other users within a geographic distance or within a particular geographic area), one or more degrees of separation, one or more membership coefficients between the user and other users, one or more commonalities between the user and other users, or any combination thereof.
Fig. 14A and 14B illustrate example environments 1400, 1450 in which various users view shared objects. In particular embodiments, a user may write on a real-life whiteboard, and an artificial reality system may synchronize the writing on the whiteboard with a virtual whiteboard, so that other users in virtual reality (e.g., in a VR shared space similar to the VR shared space described above) may view and interact with the user and the content on the whiteboard. As shown in fig. 14A, user 1410 may be a user who is interacting with a real-life whiteboard and using the whiteboard to write and display content to collaborate with other users 1414 and 1416. The other users 1414 and 1416 may be virtual users that use headsets 1418 and 1420, respectively, to view content displayed on virtual whiteboard 1412, which is a copy of the real-life whiteboard. Users 1414 and 1416 may access the displayed whiteboard 1412 via the VR shared space (as discussed above). In particular embodiments, to display the whiteboard in virtual reality, the artificial reality system may generate a virtual whiteboard 1412 (e.g., a virtual item) as a copy of the real-life whiteboard, and then display the virtual whiteboard 1412 to users 1414 and 1416 in a virtual reality environment (e.g., a VR shared space). Further, the artificial reality system can also create a copy of the user 1410, or use an avatar associated with the user 1410, to display in the virtual reality environment with the virtual whiteboard 1412. In certain embodiments, if a change is made to the real-life whiteboard (e.g., the user 1410 writes more content on the whiteboard), the artificial reality system will update the virtual whiteboard 1412 to include the same changes as the real-life whiteboard. This allows content and objects from real life to be synchronized with their representative versions in the virtual reality environment to create a mixed reality environment that enhances interaction, communication, and collaboration between users. As an example, users 1414 and 1416 may provide comments associated with content on virtual whiteboard 1412, which may be sent to user 1410 separately or made visible to all users viewing virtual whiteboard 1412.
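A hedged sketch of this synchronization is given below: each stroke captured on the physical board is re-applied to every viewer's virtual copy. The Stroke, VirtualWhiteboard, and WhiteboardSync names and the stroke-based update model are assumptions introduced for the example, not the disclosed implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Stroke:
    points: List[Tuple[float, float]]   # (x, y) positions on the board surface
    color: str = "black"

@dataclass
class VirtualWhiteboard:
    strokes: List[Stroke] = field(default_factory=list)
    comments: List[str] = field(default_factory=list)

    def apply(self, stroke: Stroke) -> None:
        """Mirror a change made on the real-life whiteboard."""
        self.strokes.append(stroke)

class WhiteboardSync:
    """Keeps every viewer's virtual whiteboard in step with the physical one."""
    def __init__(self) -> None:
        self.replicas: List[VirtualWhiteboard] = []

    def register_viewer(self) -> VirtualWhiteboard:
        board = VirtualWhiteboard()
        self.replicas.append(board)
        return board

    def on_real_stroke(self, stroke: Stroke) -> None:
        # Called whenever the presenter writes on the physical board.
        for board in self.replicas:
            board.apply(stroke)
```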
In certain embodiments, as shown in fig. 14B, when a user writes on a real-life whiteboard and the artificial reality system synchronizes the writing on the whiteboard with the virtual whiteboard so that other users in virtual reality (e.g., in a VR shared space similar to the VR shared space described above) can view and interact with the content on the whiteboard, only a copy of the real-life whiteboard is displayed in virtual reality (without displaying user 1410). In that case, users 1414 and 1416, and an additional new user 1452, can only view virtual whiteboard 1412 through headsets 1418, 1420, and 1454, respectively, and can only interact with content on virtual whiteboard 1412. Similar to the embodiments discussed above, if a change is made to the real-life whiteboard (e.g., the user 1410 writes more content on the whiteboard), the artificial reality system will update the virtual whiteboard 1412 to include the same changes as the real-life whiteboard. This allows content and objects from real life to be synchronized with their representative versions in the virtual reality environment to create a mixed reality environment that enhances interaction, communication, and collaboration between users. As an example, users 1414, 1416, and 1452 can provide comments associated with content on virtual whiteboard 1412, which can be sent to user 1410 separately or made visible to all users viewing virtual whiteboard 1412.
In particular embodiments, as another example, a user may have a question about certain content on their computer screen, and may request that the system synchronize the computer screen with a virtual computer screen in order to invite another user in the VR shared space to help solve the question or answer the question about the content on the computer screen. Similar to the case discussed above, the artificial reality system may receive a request from a first user to share a display of a computer screen with one or more second users. As an example, this may be due to a computer problem that a first user encounters on a computer, and the fastest way to solve the problem may be to ask another user to visually evaluate the computer (rather than interpreting the problem by telephone). As another example, this may be due to a first user wanting to share content on a computer screen with another user (e.g., to collaborate on a project). In this way, the artificial reality system may generate a copy of the computer screen for display in the virtual environment and then allow the second user to view the virtual computer screen in the virtual environment. If the first user makes changes to what is displayed on the computer screen, the virtual computer screen will be updated to display the changes. Similar to the embodiments discussed above, this allows content and objects from real life to be synchronized with their representative versions in the virtual reality environment to create a mixed reality environment that enhances interaction, communication, and collaboration between users.
FIG. 15 illustrates an example method for specifying various objects and sharing those objects with other users in an artificial reality system. The method may begin at step 1510 where the artificial reality system may receive a request to share a display of a first interactive item with one or more users. At step 1520, the artificial reality system may generate the first virtual item as a copy of the first interactive item. At step 1530, the artificial reality system may display the first virtual item to a subset of the one or more users in the virtual reality environment, wherein if a change made to the first interactive item is received, the display of the first virtual item in the virtual reality environment is updated to include the same change as the first interactive item. In particular embodiments, the request to share the display of the first interactive item may be from a first user of the one or more users that is currently interacting with the first interactive item. In particular embodiments, the request to share the display of the first interactive item may come from one or more second users, the one or more second users being virtual users associated with the virtual reality environment. In particular embodiments, prior to receiving a request to share a display of a first interactive item, the artificial reality system may determine one or more interactive items in the environment and send a list of the interactive items to the first user for selection of at least one of the interactive items for display in the virtual reality environment. In particular embodiments, the subset of one or more users may include virtual users in a virtual reality environment.
In a particular embodiment, the first interactive item may be located in a real-world environment. The artificial reality system may access a location of the first interactive item relative to one or more other items surrounding the first interactive item in the real-world environment, generate one or more second virtual items as copies of the one or more other items, and display the first virtual item and the one or more second virtual items in the virtual reality environment based on the location of the first interactive item relative to the one or more other items in the real-world environment. Further, the artificial reality system may access an orientation of the first interactive item in the real-world environment and display the first virtual item in the virtual reality environment based on the orientation of the first interactive item in the real-world environment. In particular embodiments, the artificial reality system may receive comments associated with the first virtual item from one or more users of the subset of users and send the comments to be displayed to a first user of the one or more users that is currently interacting with the first interactive item in the real-world environment. The commentary may include one or more of an audio commentary, a video commentary, or a written commentary.
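The relative-placement step described above can be pictured with the following sketch, which lays out virtual copies of the surrounding items around the shared item using their real-world offsets so the virtual screen is not floating in empty 3D space. Pose, Item, and place_in_vr are names assumed for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Pose:
    x: float
    y: float
    z: float
    yaw_degrees: float = 0.0   # orientation of the item about the vertical axis

@dataclass
class Item:
    item_id: str
    pose: Pose                 # pose measured in the real-world room

def place_in_vr(shared: Item, surroundings: List[Item]) -> Dict[str, Pose]:
    """Lay out the virtual scene relative to the shared item's real-world pose."""
    layout = {shared.item_id: Pose(0.0, 0.0, 0.0, 0.0)}   # shared item placed at the VR origin
    for other in surroundings:
        layout[other.item_id] = Pose(
            other.pose.x - shared.pose.x,
            other.pose.y - shared.pose.y,
            other.pose.z - shared.pose.z,
            other.pose.yaw_degrees - shared.pose.yaw_degrees,
        )
    return layout
```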
Particular embodiments may repeat one or more steps of the method of fig. 15 where appropriate. Although this disclosure describes and illustrates particular steps of the method of fig. 15 as occurring in a particular order, this disclosure contemplates any suitable steps of the method of fig. 15 occurring in any suitable order. Moreover, although this disclosure describes and illustrates example methods for specifying various objects and sharing these objects with other users in an artificial reality system that include particular steps of the method of fig. 15, this disclosure contemplates any suitable method for specifying various objects and sharing these objects with other users in an artificial reality system that includes any suitable steps, which may include all, some, or none of the steps of the method of fig. 15, where appropriate. Moreover, although this disclosure describes and illustrates particular components, devices, or systems performing particular steps of the method of fig. 15, this disclosure contemplates any suitable combination of any suitable components, devices, or systems performing any suitable steps of the method of fig. 15.
Overview of the System
FIG. 16 illustrates an example network environment 1600 associated with a social networking system. Network environment 1600 includes a user 1601, a client system 1630, a social networking system 1660, and a third-party system 1670 connected to each other through a network 1610. Although fig. 16 illustrates a particular arrangement of the user 1601, the client system 1630, the social networking system 1660, the third-party system 1670, and the network 1610, this disclosure contemplates any suitable arrangement of the user 1601, the client system 1630, the social networking system 1660, the third-party system 1670, and the network 1610. By way of example and not limitation, two or more of client system 1630, social-networking system 1660, and third-party system 1670 may be directly connected to each other, bypassing network 1610. As another example, two or more of client system 1630, social-networking system 1660, and third-party system 1670 may be physically or logically co-located with each other, in whole or in part. Moreover, although fig. 16 illustrates a particular number of users 1601, client systems 1630, social networking systems 1660, third-party systems 1670, and networks 1610, the present disclosure contemplates any suitable number of users 1601, client systems 1630, social networking systems 1660, third-party systems 1670, and networks 1610. By way of example and not limitation, network environment 1600 may include a plurality of users 1601, client systems 1630, social networking systems 1660, third-party systems 1670, and networks 1610.
In particular embodiments, user 1601 may be an individual (a human user), an entity (e.g., a company, business, or third-party application), or a group (e.g., a group of individuals or entities) that interacts or communicates with or through social-networking system 1660. In particular embodiments, social-networking system 1660 may be a network-addressable computing system that hosts an online social network. Social-networking system 1660 may generate, store, receive, and send social-networking data (e.g., user-profile data, concept-profile data, social-graph information, or other suitable data related to an online social network). Social-networking system 1660 may be accessed by other components of network environment 1600, either directly or via network 1610. In particular embodiments, the social networking system 1660 may include an authorization server (or other suitable component) that allows the user 1601 to opt-in or opt-out of having their actions logged by the social networking system 1660 or shared with other systems (e.g., third party systems 1670), for example, by setting appropriate privacy settings. The privacy settings of the user may determine what information associated with the user may be recorded, how information associated with the user may be recorded, when information associated with the user may be recorded, who may record information associated with the user, with whom information associated with the user may be shared, and for what purposes information associated with the user may be recorded or shared. The authorization server may be used to enforce one or more privacy settings of the users of the social-networking system 1660 by blocking, data hashing, anonymization, or other suitable techniques, as desired. In particular embodiments, the third party system 1670 may be a network addressable computing system. Third party system 1670 may be accessed by other components of network environment 1600, either directly or via network 1610. In particular embodiments, one or more users 1601 may use one or more client systems 1630 to access, send data to, and receive data from social-networking system 1660 or third-party system 1670. Client system 1630 may access social-networking system 1660 or third-party system 1670 directly, via network 1610, or via a third-party system. By way of example and not limitation, client system 1630 may access third-party system 1670 via social-networking system 1660. Client system 1630 may be any suitable computing device, such as a personal computer, laptop computer, cellular telephone, smartphone, tablet computer, or augmented/virtual reality device.
The present disclosure contemplates any suitable network 1610. By way of example and not limitation, one or more portions of network 1610 may include an ad hoc network, an intranet, an extranet, a Virtual Private Network (VPN), a Local Area Network (LAN), a wireless LAN (WLAN), a Wide Area Network (WAN), a wireless WAN (WWAN), a Metropolitan Area Network (MAN), a portion of the internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. The network 1610 may include one or more networks 1610.
Link 1650 may connect client system 1630, social-networking system 1660, and third-party system 1670 to communication network 1610 or to each other. This disclosure contemplates any suitable link 1650. In particular embodiments, one or more links 1650 include one or more wired links such as, for example, Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS), wireless links such as, for example, Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX), or optical links such as, for example, Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH). In particular embodiments, one or more links 1650 each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the internet, a portion of the PSTN, a cellular technology-based network, a satellite communication technology-based network, another link 1650, or a combination of two or more such links 1650. Link 1650 need not necessarily be the same throughout network environment 1600. The one or more first links 1650 may differ in one or more respects from the one or more second links 1650.
Social graph
Fig. 17 shows an example social graph 1700. In particular embodiments, social-networking system 1660 may store one or more social graphs 1700 in one or more data stores. In particular embodiments, the social graph 1700 may include a plurality of nodes, which may include a plurality of user nodes 1702 or a plurality of concept nodes 1704, and a plurality of edges 1706 connecting the nodes. For purposes of teaching, the example social graph 1700 shown in fig. 17 is shown in a two-dimensional visual map representation. In particular embodiments, social-networking system 1660, client system 1630, or third-party system 1670 may access social graph 1700 and related social graph information for suitable applications. The nodes and edges of the social graph 1700 may be stored as data objects, for example, in a data store (e.g., a social graph database). Such a data store may include one or more searchable or queryable indexes of nodes or edges of the social graph 1700.
In particular embodiments, user node 1702 may correspond to a user of social-networking system 1660. By way of example and not limitation, a user may be an individual (human user), an entity (e.g., an enterprise, company, or third-party application), or a group (e.g., of individuals or entities) that interacts or communicates with social-networking system 1660 or through social-networking system 1660. In particular embodiments, when a user registers for an account with social-networking system 1660, social-networking system 1660 may create a user node 1702 corresponding to the user and store the user node 1702 in one or more data stores. The users and user nodes 1702 described herein may refer to registered users and user nodes 1702 associated with registered users, where appropriate. Additionally or alternatively, the users and user nodes 1702 described herein may refer to users that are not registered with social-networking system 1660, where appropriate. In particular embodiments, user nodes 1702 may be associated with information provided by a user or information collected by various systems, including social-networking system 1660. By way of example and not by way of limitation, a user may provide his or her name, profile picture, contact information, date of birth, gender, marital status, family status, occupation, educational background, preferences, interests, or other demographic information. In particular embodiments, user node 1702 may be associated with one or more data objects that correspond to information associated with a user. In particular embodiments, user nodes 1702 may correspond to one or more web pages.
In particular embodiments, concept node 1704 may correspond to a concept. By way of example and not by way of limitation, concepts may correspond to locations (such as, for example, movie theaters, restaurants, landmarks, or cities); a website (such as, for example, a website associated with social networking system 1660 or a third-party website associated with a web application server); an entity (such as, for example, an individual, a business, a group, a sports team, or a celebrity); resources (such as, for example, audio files, video files, digital photographs, text files, structured documents, or applications) that may be located within social-networking system 1660 or on an external server (e.g., a web application server); real estate or intellectual property (such as, for example, sculptures, paintings, movies, games, songs, ideas, photographs, or written works); playing a game; moving; an idea or theory; another suitable concept; or two or more such concepts. Concept nodes 1704 may be associated with information for concepts provided by users or information collected by various systems, including social-networking system 1660. By way of example, and not by way of limitation, information for a concept may include a name or title; one or more images (e.g., of the cover of a book); location (e.g., address or geographic location); a website (which may be associated with a URL); contact information (e.g., a phone number or an email address); other suitable conceptual information; or any suitable combination of such information. In particular embodiments, concept node 1704 may be associated with one or more data objects that correspond to information associated with concept node 1704. In particular embodiments, concept node 1704 may correspond to one or more web pages.
In particular embodiments, the nodes in the social graph 1700 may represent or be represented by web pages (which may be referred to as "profile pages"). The profile page may be hosted by the social-networking system 1660 or accessible to the social-networking system 1660. The profile page may also be hosted on a third-party website associated with the third-party system 1670. By way of example and not limitation, a profile page corresponding to a particular external web page may be the particular external web page and the profile page may correspond to the particular concept node 1704. The profile page may be viewable by all or a selected subset of the other users. By way of example and not by way of limitation, user nodes 1702 may have respective user profile pages where a respective user may add content, make statements, or otherwise express himself or herself. As another example and not by way of limitation, concept nodes 1704 may have corresponding concept profile pages where one or more users may add content, make statements, or express themselves, particularly with respect to concepts corresponding to concept nodes 1704.
In particular embodiments, concept node 1704 may represent a third party webpage or resource hosted by a third party system 1670. The third party webpage or resource may include content representing an action or activity, selectable icons or other interactable objects (which may be implemented, for example, in JavaScript, AJAX, or PHP code), and other elements. By way of example and not limitation, third-party web pages may include selectable icons (e.g., "like," "check-in," "eat," "recommend"), or other suitable actions or activities. A user viewing the third-party webpage may perform an action by selecting one of the icons (e.g., "check-in"), causing client system 1630 to send a message to social-networking system 1660 indicating the user's action. In response to the message, social-networking system 1660 may create an edge (e.g., a check-in type edge) between user node 1702 corresponding to the user and concept node 1704 corresponding to the third-party webpage or resource, and store the edge 1706 in one or more data stores.
In particular embodiments, a pair of nodes in the social graph 1700 may be connected to each other by one or more edges 1706. An edge 1706 connecting a pair of nodes may represent a relationship between the pair of nodes. In particular embodiments, an edge 1706 may include or represent one or more data objects or attributes that correspond to a relationship between a pair of nodes. By way of example and not by way of limitation, the first user may indicate that the second user is a "friend" of the first user. In response to the indication, social-networking system 1660 may send a "friend request" to the second user. If the second user confirms the "friend request," the social-networking system 1660 may create an edge 1706 in the social graph 1700 that connects the user node 1702 of the first user to the user node 1702 of the second user and store the edge 1706 as social-graph information in one or more data stores 1664. In the example of fig. 17, the social graph 1700 includes an edge 1706 indicating a friendship between the user nodes 1702 of the user "A" and the user "B", and an edge indicating a friendship between the user nodes 1702 of the user "C" and the user "B". Although this disclosure describes or illustrates a particular edge 1706 of a particular attribute connecting particular user nodes 1702, this disclosure contemplates any suitable edge 1706 of any suitable attribute connecting user nodes 1702. By way of example and not limitation, the edge 1706 may represent a friendship, family relationship, business or employment relationship, fan relationship (including, e.g., liking, etc.), follower relationship, visitor relationship (including, e.g., visit, view, check-in, share, etc.), subscriber relationship, superior/subordinate relationship, reciprocal relationship, non-reciprocal relationship, another suitable type of relationship, or two or more such relationships. Further, while this disclosure generally describes nodes as being connected, this disclosure also describes users or concepts as being connected. Herein, references to connected users or concepts may refer to nodes corresponding to those users or concepts connected by one or more edges 1706 in the social graph 1700, where appropriate.
In particular embodiments, an edge 1706 between a user node 1702 and a concept node 1704 may represent a particular action or activity performed by a user associated with the user node 1702 on a concept associated with the concept node 1704. By way of example and not by way of limitation, as shown in FIG. 17, a user may "like", "attend", "play", "listen to", "cook", "work on", or "watch" a concept, each of which may correspond to an edge type or subtype. The concept profile page corresponding to the concept node 1704 may include, for example, a selectable "check-in" icon (e.g., a clickable "check-in" icon) or a selectable "add to favorites" icon. After a user clicks one of these icons, the social-networking system 1660 may create a "favorite" edge or a "check-in" edge in response to the corresponding user action. As another example and not by way of limitation, a user (user "C") may listen to a particular song ("Imagine") using a particular application (SPOTIFY, which is an online music application). In this case, the social-networking system 1660 may create a "listen" edge 1706 and a "use" edge (as shown in FIG. 17) between the user node 1702 corresponding to the user and the concept nodes 1704 corresponding to the song and the application to indicate that the user listened to the song and used the application. In addition, the social-networking system 1660 may create a "play" edge 1706 (as shown in FIG. 17) between the concept nodes 1704 corresponding to the song and the application to indicate that the particular song was played by the particular application. In this case, the "play" edge 1706 corresponds to an action performed by an external application (SPOTIFY) on an external audio file (the song "Imagine"). Although this disclosure describes particular edges 1706 having particular attributes connecting user nodes 1702 and concept nodes 1704, this disclosure contemplates any suitable edges 1706 having any suitable attributes connecting user nodes 1702 and concept nodes 1704. Further, while this disclosure describes edges between a user node 1702 and a concept node 1704 representing a single relationship, this disclosure contemplates edges between a user node 1702 and a concept node 1704 representing one or more relationships. By way of example and not by way of limitation, an edge 1706 may indicate that the user both likes and uses a particular concept. Alternatively, a separate edge 1706 may represent each type of relationship (or each instance of a single relationship) between a user node 1702 and a concept node 1704 (as shown in FIG. 17 between the user node 1702 of user "E" and the concept node 1704 of "SPOTIFY").
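By way of illustration only, the typed-edge structure described above might be sketched roughly as follows; the class, node identifiers, and edge-type strings are hypothetical and are not part of this disclosure.

```python
# Illustrative sketch only: the SocialGraph class and edge-type names are hypothetical.
from collections import defaultdict

class SocialGraph:
    """Stores user/concept nodes and typed edges between them."""

    def __init__(self):
        self.nodes = {}                 # node_id -> {"kind": ..., "name": ...}
        self.edges = defaultdict(set)   # (src, dst) -> set of edge types

    def add_node(self, node_id, kind, name):
        self.nodes[node_id] = {"kind": kind, "name": name}

    def add_edge(self, src, dst, edge_type):
        # An edge between two nodes may carry one or more relationship types.
        self.edges[(src, dst)].add(edge_type)

graph = SocialGraph()
graph.add_node("user_c", kind="user", name='User "C"')
graph.add_node("app_spotify", kind="concept", name="SPOTIFY")
graph.add_node("song_imagine", kind="concept", name='"Imagine"')

# User "C" listened to the song and used the application.
graph.add_edge("user_c", "song_imagine", "listen")
graph.add_edge("user_c", "app_spotify", "use")
# The application played the song (an action by an external application).
graph.add_edge("app_spotify", "song_imagine", "play")
```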
In particular embodiments, the social-networking system 1660 may create an edge 1706 between the user node 1702 and the concept node 1704 in the social graph 1700. By way of example and not limitation, a user viewing the concept profile page (e.g., by using a web browser or dedicated application hosted by the user's client system 1630) may indicate that he or she likes the concepts represented by concept nodes 1704 by clicking or selecting a "like" icon, which may cause the user's client system 1630 to send a message to social-networking system 1660 indicating that the user likes the concepts associated with the concept profile page. In response to the message, the social networking system 1660 may create an edge 1706 between the user node 1702 associated with the user and the concept node 1704, as illustrated by the "like" edge 1706 between the user and the concept node 1704. In particular embodiments, social-networking system 1660 may store the edges 1706 in one or more data stores. In particular embodiments, the edge 1706 may be automatically formed by the social-networking system 1660 in response to a particular user action. By way of example and not by way of limitation, if a first user uploads a picture, watches a movie, or listens to a song, an edge 1706 may be formed between a user node 1702 corresponding to the first user and concept nodes 1704 corresponding to those concepts. Although this disclosure describes forming particular edges 1706 in a particular manner, this disclosure contemplates forming any suitable edges 1706 in any suitable manner.
Social graph affinity and coefficient
In particular embodiments, social-networking system 1660 may determine social-graph affinities (which may be referred to herein as "affinities") of various social-graph entities to each other. The affinity may represent a strength of relationship or a level of interest between particular objects associated with the online social network (e.g., users, concepts, content, actions, advertisements), other objects associated with the online social network, or any suitable combination thereof. Affinity may also be determined with respect to objects associated with a third party system 1670 or other suitable system. An overall affinity to the social graph entity may be established for each user, topic, or content type. The overall affinity may change based on continuous monitoring of actions or relationships associated with the social graph entity. Although this disclosure describes determining a particular affinity in a particular manner, this disclosure contemplates determining any suitable affinity in any suitable manner.
In particular embodiments, social-networking system 1660 may use affinity coefficients (which may be referred to herein as "coefficients") to measure or quantify social graph affinity. The coefficient may represent or quantify a strength of a relationship between particular objects associated with the online social network. The coefficients may also represent a probability or function that measures the predicted probability that a user will perform a particular action based on the user's interest in that action. In this way, future actions of the user may be predicted based on previous actions of the user, where the coefficients may be calculated based at least in part on a history of actions of the user. The coefficients may be used to predict any number of actions that may be within or outside of the online social network. By way of example, and not by way of limitation, such actions may include various types of communications, such as sending messages, posting content, or commenting on content; various types of observation actions, such as accessing or viewing a profile page, media, or other suitable content; various types of consistent information about two or more social graph entities, such as being in the same group, being tagged in the same photograph, being checked-in at the same location, or attending the same event; or other suitable action. Although the present disclosure describes measuring affinity in a particular manner, the present disclosure contemplates measuring affinity in any suitable manner.
In particular embodiments, social-networking system 1660 may calculate the coefficients using various factors. These factors may include, for example, user actions, types of relationships between objects, location information, other suitable factors, or any combination thereof. In particular embodiments, different factors may be weighted differently when calculating the coefficients. The weight of each factor may be static or may change depending on, for example, the user, the type of relationship, the type of action, the location of the user, etc. The ratings of the factors may be combined according to their weights to determine an overall coefficient for the user. By way of example and not by way of limitation, a particular user action may be assigned a rating and a weight, while a relationship associated with the particular user action is assigned a rating and a correlating weight (e.g., so the weights total 100%). To calculate the user's coefficient for a particular object, the rating assigned to the user's actions may comprise, for example, 60% of the total coefficient, while the relationship between the user and the object may comprise 40% of the total coefficient. In particular embodiments, social-networking system 1660 may consider various variables when determining the weights of the various factors used to calculate the coefficients, such as the time since the information was accessed, decay factors, frequency of access, relationship to the information or to the object about which the information was accessed, relationships to social-graph entities connected to the object, short-term or long-term averages of user actions, user feedback, other suitable variables, or any combination thereof. By way of example and not by way of limitation, the coefficients may include a decay factor that attenuates the strength of the signal provided by a particular action over time, such that more recent actions are more relevant when calculating the coefficients. The ratings and weights may be continuously updated based on continuous tracking of the actions on which the coefficients are based. Any type of process or algorithm may be employed to assign, combine, average, etc. the ratings of the factors and the weights assigned to those factors. In particular embodiments, social-networking system 1660 may determine the coefficients using machine-learning algorithms trained on historical actions and past user responses, or on data obtained from users by exposing them to various options and measuring responses. Although this disclosure describes calculating coefficients in a particular manner, this disclosure contemplates calculating coefficients in any suitable manner.
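For illustration only, combining weighted factor ratings with a decay factor, as described above, might look roughly like the following Python sketch; the factor names, weights, and half-life constant are hypothetical example values, not values prescribed by this disclosure.

```python
# Illustrative sketch only: factor names, weights, and the half-life are hypothetical.
import math

def decayed_rating(rating, age_seconds, half_life_seconds=30 * 24 * 3600):
    """Attenuate a rating so that more recent actions contribute more strongly."""
    return rating * math.exp(-math.log(2) * age_seconds / half_life_seconds)

def overall_coefficient(factors, weights):
    """Combine per-factor ratings according to their weights (weights sum to 1.0)."""
    return sum(weights[name] * value for name, value in factors.items())

factors = {
    # Rating derived from the user's actions on the object, about one week old.
    "actions": decayed_rating(rating=0.8, age_seconds=7 * 24 * 3600),
    # Rating derived from the relationship between the user and the object.
    "relationship": 0.5,
}
weights = {"actions": 0.6, "relationship": 0.4}   # e.g., the 60% / 40% split above
print(round(overall_coefficient(factors, weights), 3))
```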
In particular embodiments, social-networking system 1660 may calculate the coefficients based on a user's actions. Social-networking system 1660 may monitor such actions on the online social network, on a third-party system 1670, on other suitable systems, or any combination thereof. Any suitable type of user action may be tracked or monitored. Typical user actions include viewing profile pages, creating or publishing content, interacting with content, tagging or being tagged in images, joining groups, listing and confirming attendance at events, checking in at locations, liking particular pages, creating pages, and performing other tasks that facilitate social action. In particular embodiments, social-networking system 1660 may calculate the coefficients based on a user's actions on particular types of content. The content may be associated with the online social network, a third-party system 1670, or another suitable system. The content may include users, profile pages, posts, news stories, headlines, instant messages, chat room conversations, emails, advertisements, pictures, videos, music, other suitable objects, or any combination thereof. Social-networking system 1660 may analyze a user's actions to determine whether one or more of the actions indicate an affinity for a topic, content, other users, and so forth. By way of example and not by way of limitation, if a user frequently posts content related to "coffee" or variations thereof, social-networking system 1660 may determine that the user has a high coefficient with respect to the concept "coffee". Particular actions or types of actions may be assigned a higher weight and/or rating than other actions, which may affect the overall calculated coefficient. By way of example and not by way of limitation, if a first user sends an email to a second user, the weight or rating of that action may be higher than if the first user simply viewed the user profile page of the second user.
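As a purely illustrative sketch, weighting stronger actions (e.g., sending an email) more heavily than weaker ones (e.g., viewing a profile page) when scoring a topic might look like the following; the action-log format and the specific weights are hypothetical assumptions.

```python
# Illustrative sketch only: per-action weights and the log format are hypothetical.
ACTION_WEIGHTS = {
    "send_email": 1.0,      # a direct communication counts more ...
    "post_content": 0.8,
    "view_profile": 0.2,    # ... than simply viewing a profile page
}

def topic_coefficient(action_log, topic):
    """Sum the weights of a user's actions whose content relates to the topic."""
    score = 0.0
    for action in action_log:
        if topic in action.get("text", "").lower():
            score += ACTION_WEIGHTS.get(action["type"], 0.1)
    return score

log = [
    {"type": "post_content", "text": "Best coffee roast this morning"},
    {"type": "post_content", "text": "Cold-brew coffee experiment"},
    {"type": "view_profile", "text": ""},
]
print(topic_coefficient(log, "coffee"))   # frequent "coffee" posts -> higher score
```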
In particular embodiments, social-networking system 1660 may calculate the coefficients based on the types of relationships between particular objects. Referring to the social graph 1700, when calculating the coefficients, social-networking system 1660 may analyze the number and/or types of edges 1706 connecting particular user nodes 1702 and concept nodes 1704. By way of example and not by way of limitation, user nodes 1702 connected by a spouse-type edge (indicating that two users are married) may be assigned a higher coefficient than user nodes 1702 connected by a friend-type edge. In other words, based on the weights assigned to the actions and relationships of a particular user, it may be determined that the overall affinity for content about the user's spouse is higher than the overall affinity for content about the user's friends. In particular embodiments, a user's relationship to another object may affect the weight and/or rating of the user's actions with respect to calculating the coefficient for that object. By way of example and not by way of limitation, if the user is tagged in a first photo but merely likes a second photo, social-networking system 1660 may determine that the user has a higher coefficient with respect to the first photo than the second photo, because having a tagged-in-type relationship with content may be assigned a higher weight and/or rating than having a like-type relationship with content. In particular embodiments, social-networking system 1660 may calculate the coefficient for a first user based on the relationships one or more second users have with a particular object. In other words, the connections and coefficients that other users have with respect to the object may affect the first user's coefficient for the object. By way of example and not by way of limitation, if a first user is connected to or has a high coefficient for one or more second users, and those second users are connected to or have a high coefficient for a particular object, social-networking system 1660 may determine that the first user should also have a relatively high coefficient for the particular object. In particular embodiments, the coefficients may be based on a degree of separation between particular objects. A lower coefficient may represent a decreasing likelihood that the first user will share an interest in content objects of a user who is only indirectly connected to the first user in the social graph 1700. By way of example and not by way of limitation, social-graph entities that are closer (i.e., fewer degrees of separation away) in the social graph 1700 may have a higher coefficient than entities that are farther apart in the social graph 1700.
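For illustration only, the degree-of-separation effect described above could be approximated with a breadth-first search over friend edges, as in the following hypothetical sketch; the adjacency-list format and the falloff constant are assumptions, not part of this disclosure.

```python
# Illustrative sketch only: the adjacency list and falloff constant are hypothetical.
from collections import deque

def degree_of_separation(adjacency, start, target):
    """Breadth-first search over friend edges; returns hop count or None."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for neighbor in adjacency.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None

def separation_factor(adjacency, user, entity, falloff=0.5):
    """Closer entities in the social graph contribute a higher factor."""
    hops = degree_of_separation(adjacency, user, entity)
    return 0.0 if hops is None else falloff ** max(hops - 1, 0)

friends = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(separation_factor(friends, "A", "B"))   # directly connected -> 1.0
print(separation_factor(friends, "A", "C"))   # friend of a friend -> 0.5
```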
In particular embodiments, social-networking system 1660 may calculate the coefficients based on the location information. Objects that are geographically closer to each other may be considered more relevant or interesting to each other than objects that are further away. In particular embodiments, the coefficient for a user for a particular object may be based on the proximity of the location of the object to the current location associated with the user (or the location of the user's client system 1630). The first user may be more interested in other users or concepts that are closer to the first user. By way of example and not by way of limitation, if a user is one mile from an airport and two miles from a gas station, social-networking system 1660 may determine that the user has a higher coefficient for the airport than for the gas station based on the proximity of the airport to the user.
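As an illustrative sketch only, a location factor that favors nearer objects might be modeled as follows; the specific falloff distance is a hypothetical choice rather than anything specified by this disclosure.

```python
# Illustrative sketch only: the falloff distance is a hypothetical parameter.
def proximity_factor(distance_miles, falloff_miles=5.0):
    """Objects closer to the user's current location contribute a larger factor."""
    return 1.0 / (1.0 + distance_miles / falloff_miles)

print(proximity_factor(1.0))   # airport one mile away     -> larger factor
print(proximity_factor(2.0))   # gas station two miles away -> smaller factor
```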
In particular embodiments, social-networking system 1660 may perform particular actions with respect to a user based on the coefficient information. The coefficients may be used to predict whether the user will perform a particular action based on the user's interest in that action. The coefficients may be used when generating or presenting any type of object to the user, such as advertisements, search results, news feeds, media, messages, notifications, or other suitable objects. The coefficients may also be used to rank and order such objects as appropriate. In this way, social-networking system 1660 may provide information relevant to the user's interests and current environment, increasing the likelihood that the user will find such information of interest. In particular embodiments, social-networking system 1660 may generate content based on the coefficient information. Content objects may be provided or selected based on coefficients specific to the user. By way of example and not by way of limitation, the coefficients may be used to generate media for the user, where the user may be presented with media for which the user has a high overall coefficient with respect to the media object. As another example and not by way of limitation, the coefficients may be used to generate advertisements for the user, where the user may be presented with advertisements for which the user has a high overall coefficient with respect to the advertisement object. In particular embodiments, social-networking system 1660 may generate search results based on the coefficient information. Search results for a particular user may be scored or ranked based on coefficients associated with the search results with respect to the querying user. By way of example and not by way of limitation, search results corresponding to objects with higher coefficients may be ranked higher on a search results page than results corresponding to objects with lower coefficients.
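For illustration only, ordering candidate objects (e.g., search results) by the querying user's coefficients might look like the following hypothetical sketch; the object identifiers are made up for the example.

```python
# Illustrative sketch only: the object identifiers and coefficient values are hypothetical.
def rank_results(results, coefficients):
    """Order candidate objects so higher-coefficient objects appear first."""
    return sorted(results, key=lambda obj_id: coefficients.get(obj_id, 0.0), reverse=True)

coefficients = {"photo_123": 0.9, "page_coffee": 0.4, "ad_456": 0.1}
print(rank_results(["ad_456", "page_coffee", "photo_123"], coefficients))
# -> ['photo_123', 'page_coffee', 'ad_456']
```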
In particular embodiments, social-networking system 1660 may calculate coefficients in response to coefficient requests from particular systems or processes. Any process may request calculated coefficients for a user in order to predict the likely actions that the user may take (or may be their subject) in a given situation. The request may also include a set of weights for use by various factors for calculating the coefficients. The request may come from a process running on an online social network, from a third-party system 1670 (e.g., via an API or other communication channel), or from another suitable system. In response to the request, social-networking system 1660 may calculate the coefficients (or access the coefficient information if it has been previously calculated and stored). In particular embodiments, social-networking system 1660 may measure affinity with respect to particular processes. Different processes (both internal and external to the online social network) may request coefficients for a particular object or set of objects. Social-networking system 1660 may provide a measure of affinity related to a particular process that requested the measure of affinity. In this way, each process receives an affinity metric that is customized for a different context, where the process will use the affinity metric.
In conjunction with social-graph affinity and affinity coefficients, particular embodiments may utilize one or more systems, components, elements, functions, methods, operations, or steps disclosed in U.S. patent application No. 11/503093, filed 11 August 2006, No. 12/977027, filed 22 December 2010, No. 12/978265, filed 23 December 2010, and No. 13/632869, filed 01 October 2012, each of which is incorporated by reference.
Privacy
In particular embodiments, one or more content objects of the online social network may be associated with a privacy setting. The privacy settings (or "access settings") of an object may be stored in any suitable manner, such as, for example, in association with the object, indexed on an authorization server, in another suitable manner, or any combination thereof. The privacy settings of an object may specify how the object (or particular information associated with the object) may be accessed (e.g., viewed or shared) using the online social network. An object may be described as "visible" to a particular user where the privacy settings of the object allow that user to access the object. By way of example and not by way of limitation, a user of the online social network may specify privacy settings for a user profile page that identify a set of users that may access work experience information on the user profile page, thus excluding other users from accessing the information. In particular embodiments, the privacy settings may specify a "blacklist" of users that should not be allowed to access certain information associated with the object. In other words, the blacklist may specify one or more users or entities for which the object is not visible. By way of example and not by way of limitation, a user may specify a set of users who may not access an album associated with the user, thus excluding those users from accessing the album (while certain users who are not within the set of users may still be allowed to access the album). In particular embodiments, privacy settings may be associated with particular social-graph elements. Privacy settings of a social-graph element (e.g., a node or an edge) may specify how the social-graph element, information associated with the social-graph element, or content objects associated with the social-graph element may be accessed using the online social network. By way of example and not by way of limitation, a particular concept node 1704 corresponding to a particular photo may have a privacy setting specifying that the photo may only be accessed by users tagged in the photo and their friends. In particular embodiments, privacy settings may allow users to opt in to or opt out of having their actions recorded by social-networking system 1660 or shared with other systems (e.g., third-party systems 1670). In particular embodiments, the privacy settings associated with an object may specify any suitable granularity of permitted access or denial of access. By way of example and not by way of limitation, access or denial of access may be specified for particular users (e.g., only me, my roommates, and my boss), users within a particular degree of separation (e.g., friends or friends of friends), groups of users (e.g., the gaming club, my family), networks of users (e.g., employees of a particular employer, students or alumni of a particular university), all users ("public"), no users ("private"), users of third-party systems 1670, particular applications (e.g., third-party applications, external websites), other suitable users or entities, or any combination thereof. Although this disclosure describes using particular privacy settings in a particular manner, this disclosure contemplates using any suitable privacy settings in any suitable manner.
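As a purely illustrative sketch (not part of this disclosure), evaluating privacy settings such as a blacklist, an explicit allow list, and a friends-only audience might look roughly like the following; the field names and the is_visible helper are hypothetical.

```python
# Illustrative sketch only: the privacy-setting fields and helper names are hypothetical.
def is_visible(obj, viewer, friends_of):
    """Return True if the object's privacy settings make it visible to the viewer."""
    settings = obj.get("privacy", {})
    if viewer in settings.get("blocked", set()):          # "blacklist" of users
        return False
    audience = settings.get("audience", "public")
    if audience == "public":
        return True
    if audience == "private":
        return viewer == obj["owner"]
    if audience == "friends":
        return viewer == obj["owner"] or viewer in friends_of(obj["owner"])
    return viewer in settings.get("allowed", set())        # explicit allow list

photo = {"owner": "alice", "privacy": {"audience": "friends", "blocked": {"mallory"}}}
friends = {"alice": {"bob"}}
friends_of = lambda user: friends.get(user, set())
print(is_visible(photo, "bob", friends_of))      # True: a friend of the owner
print(is_visible(photo, "mallory", friends_of))  # False: explicitly blocked
```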
In particular embodiments, one or more servers 1662 may be authorization/privacy servers for enforcing privacy settings. In response to a request from a user (or other entity) for a particular object stored in a data store 1664, social-networking system 1660 may send a request for the object to the data store 1664. The request may identify the user associated with the request, and the object may be sent to the user (or the user's client system 1630) only if the authorization server determines that the user is authorized to access the object based on the privacy settings associated with the object. If the requesting user is not authorized to access the object, the authorization server may prevent the requested object from being retrieved from the data store 1664 or may prevent the requested object from being sent to the user. In the context of a search query, an object may be generated as a search result only if the querying user is authorized to access the object. In other words, the object must have a visibility that makes it visible to the querying user. If the object has a visibility that makes it not visible to the user, the object may be excluded from the search results. Although this disclosure describes enforcing privacy settings in a particular manner, this disclosure contemplates enforcing privacy settings in any suitable manner.
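For illustration only, the authorization flow described above, in which objects are withheld from results unless visible to the querying user, might be sketched as follows; the function names, data-store layout, and visibility predicate are hypothetical assumptions.

```python
# Illustrative sketch only: authorize_and_fetch and the store layout are hypothetical.
def authorize_and_fetch(object_id, viewer, data_store, is_visible):
    """Return the object only if the viewer is authorized; otherwise withhold it."""
    obj = data_store.get(object_id)
    if obj is None or not is_visible(obj, viewer):
        return None
    return obj

def filter_search_results(candidate_ids, viewer, data_store, is_visible):
    """Keep only objects the querying user is authorized to see."""
    return [oid for oid in candidate_ids
            if authorize_and_fetch(oid, viewer, data_store, is_visible) is not None]

store = {
    "doc1": {"owner": "alice", "public": True},
    "doc2": {"owner": "alice", "public": False},
}
visible = lambda obj, viewer: obj["public"] or viewer == obj["owner"]
print(filter_search_results(["doc1", "doc2"], "bob", store, visible))   # ['doc1']
```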
System and method
Fig. 18 shows an example computer system 1800. In particular embodiments, one or more computer systems 1800 perform one or more steps of one or more methods described or illustrated herein. In a particular embodiment, one or more computer systems 1800 provide the functionality described or illustrated herein. In particular embodiments, software running on one or more computer systems 1800 performs one or more steps of one or more methods described or illustrated herein or provides functions described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems 1800. Herein, reference to a computer system may include a computing device, and vice versa, where appropriate. Further, references to a computer system may include one or more computer systems, where appropriate.
This disclosure contemplates any suitable number of computer systems 1800. This disclosure contemplates computer system 1800 taking any suitable physical form. By way of example, and not limitation, computer system 1800 may be an embedded computer system, a system on a chip (SOC), a single-board computer system (SBC) (e.g., a Computer On Module (COM) or a System On Module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile phone, a Personal Digital Assistant (PDA), a server, a tablet computer system, or a combination of two or more of these systems. Where appropriate, computer system 1800 may include one or more computer systems 1800; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 1800 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. By way of example, and not limitation, one or more computer systems 1800 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 1800 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.
In a particular embodiment, computer system 1800 includes a processor 1802, memory 1804, storage 1806, input/output (I/O) interfaces 1808, a communication interface 1810, and a bus 1812. Although this disclosure describes and illustrates a particular computer system with a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.
In a particular embodiment, the processor 1802 includes hardware for executing instructions (e.g., those instructions that make up a computer program). By way of example, and not limitation, to execute instructions, processor 1802 may retrieve (or fetch) instructions from internal registers, internal caches, memory 1804, or storage 1806; decode and execute them; and then write the one or more results to an internal register, internal cache, memory 1804, or storage 1806. In particular embodiments, processor 1802 may include one or more internal caches for data, instructions, or addresses. The present disclosure contemplates processor 1802 including any suitable number of any suitable internal caches, where appropriate. By way of example, and not limitation, processor 1802 may include one or more instruction caches, one or more data caches, and one or more Translation Lookaside Buffers (TLBs). The instructions in the instruction cache may be copies of the instructions in memory 1804 or storage 1806, and the instruction cache may accelerate retrieval of those instructions by processor 1802. The data in the data cache may be: copies of data in memory 1804 or storage 1806 on which instructions executing at processor 1802 operate; the results of previous instructions executed at processor 1802, for access by subsequent instructions executing at processor 1802 or for writing to memory 1804 or storage 1806; or other suitable data. The data cache may speed up read or write operations by processor 1802. The TLB may accelerate virtual address translations for processor 1802. In particular embodiments, processor 1802 may include one or more internal registers for data, instructions, or addresses. The present disclosure contemplates processor 1802 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, the processor 1802 may include one or more Arithmetic Logic Units (ALUs); be a multi-core processor; or include one or more processors 1802. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor.
In a particular embodiment, the memory 1804 includes main memory for storing instructions for causing the processor 1802 to execute or data for causing the processor 1802 to operate. By way of example, and not limitation, computer system 1800 may load instructions from storage 1806 or another source (e.g., another computer system 1800) into memory 1804. Processor 1802 may then load instructions from memory 1804 into internal registers or internal caches. To execute instructions, processor 1802 may retrieve instructions from internal registers or internal caches and decode them. During or after execution of the instructions, the processor 1802 may write one or more results (which may be intermediate results or final results) to an internal register or internal cache. Processor 1802 may then write one or more of these results to memory 1804. In a particular embodiment, the processor 1802 executes only instructions in the one or more internal registers or internal caches or in the memory 1804 (instead of the storage 1806 or elsewhere) and operates only on data in the one or more internal registers or internal caches or in the memory 1804 (instead of the storage 1806 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor 1802 to memory 1804. The bus 1812 may include one or more memory buses, as described below. In particular embodiments, one or more Memory Management Units (MMUs) reside between processor 1802 and memory 1804 and facilitate accesses to memory 1804 requested by processor 1802. In a particular embodiment, the memory 1804 includes Random Access Memory (RAM). The RAM may be volatile memory, where appropriate. The RAM may be dynamic RAM (DRAM) or static RAM (SRAM), where appropriate. Further, the RAM may be single-port RAM or multi-port RAM, where appropriate. The present disclosure contemplates any suitable RAM. Memory 1804 may include one or more memories 1804, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory.
In a particular embodiment, the storage 1806 includes mass storage for data or instructions. By way of example, and not limitation, storage 1806 may include a Hard Disk Drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Storage 1806 may include removable or non-removable (or fixed) media, where appropriate. Storage 1806 may be internal or external to computer system 1800, where appropriate. In a particular embodiment, the storage 1806 is non-volatile solid-state memory. In certain embodiments, storage 1806 includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these. The present disclosure contemplates mass storage 1806 in any suitable physical form. The storage 1806 may include one or more storage control units that facilitate communication between the processor 1802 and the storage 1806, where appropriate. Storage 1806 may include one or more storage 1806, where appropriate. Although this disclosure describes and illustrates a particular storage device, this disclosure contemplates any suitable storage device.
In particular embodiments, I/O interfaces 1808 include hardware, software, or both to provide one or more interfaces for communication between computer system 1800 and one or more I/O devices. Computer system 1800 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and the computer system 1800. By way of example, and not limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet computer, touch screen, trackball, video camera, another suitable I/O device, or a combination of two or more of these. The I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 1808 for them. The I/O interface 1808 may include one or more device or software drivers enabling the processor 1802 to drive one or more of these I/O devices, where appropriate. I/O interfaces 1808 can include one or more I/O interfaces 1808, where appropriate. Although this disclosure describes and illustrates particular I/O interfaces, this disclosure contemplates any suitable I/O interfaces.
In particular embodiments, communication interface 1810 includes hardware, software, or both that provide one or more interfaces for communication (e.g., packet-based communication) between computer system 1800 and one or more other computer systems 1800 or one or more networks. By way of example, and not limitation, communication interface 1810 may include a Network Interface Controller (NIC) or a network adapter for communicating with an Ethernet or other wire-based network, or a wireless NIC (WNIC) or a wireless adapter for communicating with a wireless network (e.g., a WI-FI network). The present disclosure contemplates any suitable network and any suitable communication interface 1810 therefor. By way of example, and not limitation, computer system 1800 may communicate with an ad hoc network, a Personal Area Network (PAN), a Local Area Network (LAN), a Wide Area Network (WAN), a Metropolitan Area Network (MAN), or one or more portions of the internet, or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. By way of example, computer system 1800 may communicate with a Wireless PAN (WPAN) (e.g., a Bluetooth WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (e.g., a Global System for Mobile communications (GSM) network), or other suitable wireless network, or a combination of two or more of these. Computer system 1800 may include any suitable communication interface 1810 for any of these networks, where appropriate. Communication interface 1810 may include one or more communication interfaces 1810, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface.
In particular embodiments, bus 1812 includes hardware, software, or both to couple components of computer system 1800 to one another. By way of example, and not limitation, the bus 1812 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Extended Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or any other suitable bus or combination of two or more of these. Bus 1812 may include one or more buses 1812, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect.
Herein, where appropriate, the one or more computer-readable non-transitory storage media may include one or more semiconductor-based or other Integrated Circuits (ICs) (e.g., Field Programmable Gate Arrays (FPGAs) or application-specific ICs (ASICs)), Hard Disk Drives (HDDs), hybrid hard disk drives (HHDs), optical disks, Optical Disk Drives (ODDs), magneto-optical disks, magneto-optical disk drives, floppy disks, Floppy Disk Drives (FDDs), magnetic tape, Solid State Drives (SSDs), RAM drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these. Computer-readable non-transitory storage media may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate.
As used herein, the term "or" is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Thus, herein, "A or B" means "A, B, or both," unless expressly indicated otherwise or indicated otherwise by context. Further, "and" is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Thus, herein, "A and B" means "A and B, jointly or severally," unless expressly indicated otherwise or indicated otherwise by context.
The scope of the present disclosure includes all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of the present disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although the present disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would understand. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system that is suitable for, arranged to, capable of, configured to, implemented, operable to, or operative to perform a particular function includes the apparatus, system, component, whether or not it or that particular function is activated, turned on, or unlocked, provided that the apparatus, system, or component is so adapted, arranged, enabled, configured, implemented, operable, or operative.

Claims (99)

1. A method comprising, by a computing system:
displaying first virtual content to a first user in a virtual area, the virtual area including one or more second virtual content;
inferring an intent of the first user to interact with the first virtual content based on one or more of a first user action or contextual information; and
adjusting one or more configurations associated with one or more of the second virtual content based on the inference of the first user's intent to interact with the first virtual content.
2. The method of claim 1, wherein the first user action comprises one or more of:
a user eye movement focused on the first virtual content,
a verbal request of the first user,
a user input associated with the first virtual content, or
A user input associated with one or more of the second virtual content.
3. The method of claim 1, wherein the context information comprises one or more of:
location information associated with the first user,
movement information associated with the first user,
time information associated with the first user,
a preset action associated with the first virtual content,
a content type associated with the first virtual content, or
A service type associated with the first virtual content.
4. The method of claim 3, wherein the time information associated with the first user comprises a predetermined period of user inactivity.
5. The method of claim 1, wherein inferring the intent of the first user is based at least in part on a perspective of a hypothetical user based at least in part on one or more users of an associated social network.
6. The method of claim 5, wherein the hypothetical user is based, at least in part, on:
each user of the social network, or
One or more subsets of users of the social network.
7. The method of claim 1, wherein adjusting the one or more configurations associated with the second virtual content comprises one or more of:
adjusting one or more visual attributes of one or more of the second virtual content,
adjusting one or more audio properties of one or more of the second virtual content, or
Adjusting one or more social network attributes of one or more of the second virtual content.
8. The method of claim 7, wherein the adjustment to the visual or audio attributes of one or more of the second virtual content is determined based at least in part on a content type associated with the second virtual content or a service type associated with the second virtual content.
9. The method of claim 7, wherein the adjustment to the social network attribute comprises temporarily limiting or removing all notifications from the social network associated with the second virtual content.
10. The method of claim 1, wherein the virtual area resides in a virtual reality environment and the first user is a virtual user in the virtual reality environment.
11. One or more computer-readable non-transitory storage media embodying software that is operable when executed to:
displaying first virtual content to a first user in a virtual area, the virtual area including one or more second virtual content;
inferring an intent of the first user to interact with the first virtual content based on one or more of a first user action or contextual information; and
adjusting one or more configurations associated with one or more of the second virtual content based on the inference of the first user's intent to interact with the first virtual content.
12. The media of claim 11, wherein the first user action comprises one or more of:
a user eye movement focused on the first virtual content,
a verbal request of the first user,
a user input associated with the first virtual content, or
A user input associated with one or more of the second virtual content.
13. The media of claim 11, wherein the contextual information comprises one or more of:
location information associated with the first user,
movement information associated with the first user,
time information associated with the first user,
one or more preset actions associated with the first virtual content,
a content type associated with the first virtual content, or
A service type associated with the first virtual content.
14. The media of claim 11, wherein inferring the intent of the first user is based at least in part on a perspective of a hypothetical user based at least in part on one or more users of an associated social network.
15. The media of claim 11, wherein adjusting the one or more configurations associated with the second virtual content comprises one or more of:
adjusting one or more visual attributes of one or more of the second virtual content,
adjusting one or more audio properties of one or more of the second virtual content, or
Adjusting one or more social network attributes of one or more of the second virtual content.
16. A system, comprising: one or more processors; and a memory coupled to the processor, the memory including instructions executable by the processor, the processor operable when executing the instructions to:
displaying first virtual content to a first user in a virtual area, the virtual area including one or more second virtual content;
inferring an intent of the first user to interact with the first virtual content based on one or more of a first user action or contextual information; and
adjusting one or more configurations associated with one or more of the second virtual content based on the inference of the first user's intent to interact with the first virtual content.
17. The system of claim 16, wherein the first user action comprises one or more of:
a user eye movement focused on the first virtual content,
a verbal request of the first user,
a user input associated with the first virtual content, or
A user input associated with one or more of the second virtual content.
18. The system of claim 16, wherein the contextual information comprises one or more of:
location information associated with the first user,
movement information associated with the first user,
time information associated with the first user,
one or more preset actions associated with the first virtual content,
a content type associated with the first virtual content, or
A service type associated with the first virtual content.
19. The system of claim 16, wherein inferring the intent of the first user is based at least in part on a perspective of a hypothetical user based at least in part on one or more users of an associated social network.
20. The system of claim 16, wherein adjusting the one or more configurations associated with the second virtual content comprises one or more of:
adjusting one or more visual attributes of one or more of the second virtual content,
adjusting one or more audio properties of one or more of the second virtual content, or
Adjusting one or more social network attributes of one or more of the second virtual content.
21. A method comprising, by a computing system:
receiving a request from a first user to create a federated virtual space for use with one or more second users;
determining a first zone in a first room based at least in part on a spatial limitation associated with the first room and a location of one or more items in the first room, the first room associated with the first user;
retrieving information associated with one or more second rooms for each of the second users;
creating the joint virtual space based on the first zone of the first room and the information associated with each of the second rooms; and
providing access to the federated virtual space to each of the one or more second users and the first user.
22. The method of claim 21, further comprising, prior to retrieving information associated with the one or more second rooms:
determining whether the first zone in the first room is equal to or greater than a predetermined minimum area.
23. The method of claim 21, wherein the first zone is determined by calculating a maximum free space associated with the first room after evaluating the space constraint and a location of one or more items in the first room.
24. The method of claim 23, wherein the retrieved information associated with the second room includes at least:
a spatial limitation associated with each of the second rooms for each of the second users, and
A location of one or more items in each of the second rooms for each of the second users.
25. The method of claim 24, further comprising:
determining a second zone for each of the one or more second rooms based at least in part on the spatial restrictions and the location of the one or more items,
wherein the second zone is determined by calculating a maximum free space associated with each of the one or more second rooms after evaluating the space limitations and the location of the one or more items in each of the one or more second rooms.
26. The method of claim 25, wherein the joint virtual space is created by determining a maximum overlap between a maximum free space associated with the first room and a maximum free space associated with each of the one or more second rooms.
27. The method of claim 21, wherein providing access to the federated virtual space comprises notifying each of the first user and the one or more second users that the federated virtual space is available for use.
28. The method of claim 27, wherein notifying each of the first user and the one or more second users comprises sending, to each of the first user and the one or more second users, instructions to generate a portal object that allows each of the first user and the one or more second users to virtually access the federated virtual space.
29. The method of claim 21, wherein generating a portal object comprises:
sending an instruction to the first user to draw within the first zone in the first room a virtual doorway that allows the first user to virtually access the joint virtual space; and
sending instructions to each of the second users to draw a virtual doorway in each of the second rooms that allows each of the second users to virtually access the joint virtual space.
30. The method of claim 21, wherein the joint virtual space resides in a virtual reality environment, and each of the first user and the one or more second users is a virtual user in the virtual reality environment.
31. One or more computer-readable non-transitory storage media embodying software that is operable when executed to:
receiving a request from a first user to create a federated virtual space for use with one or more second users;
determining a first zone in a first room based at least in part on a spatial limitation associated with the first room and a location of one or more items in the first room, the first room associated with the first user;
retrieving information associated with one or more second rooms for each of the second users;
creating the joint virtual space based on the first zone of the first room and the information associated with each of the second rooms; and
providing access to the federated virtual space to each of the first user and the one or more second users.
32. The media of claim 31, wherein the first zone is determined by calculating a maximum free space associated with the first room after evaluating the space constraint and a location of one or more items in the first room.
33. The media of claim 32, wherein the retrieved information associated with the second room comprises at least:
a spatial limitation associated with each of the second rooms for each of the second users, and
A location of one or more items in each of the second rooms for each of the second users.
34. The media of claim 33, further comprising:
determining a second zone for each of the one or more second rooms based at least in part on the spatial restrictions and the location of the one or more items,
wherein the second zone is determined by calculating a maximum free space associated with each of the one or more second rooms after evaluating the space limitations and the location of the one or more items in each of the one or more second rooms.
35. The media of claim 34, wherein the joint virtual space is created by determining a maximum overlap between a maximum free space associated with the first room and a maximum free space associated with each of the one or more second rooms.
36. A system, comprising: one or more processors; and a memory coupled to the processor, the memory including instructions executable by the processor, the processor operable when executing the instructions to:
receiving a request from a first user to create a federated virtual space for use with one or more second users;
determining a first zone in a first room based at least in part on a spatial limitation associated with the first room and a location of one or more items in the first room, the first room associated with the first user;
retrieving information associated with one or more second rooms for each of the second users;
creating the joint virtual space based on the first zone of the first room and the information associated with each of the second rooms; and
providing access to the federated virtual space to each of the first user and the one or more second users.
37. The system of claim 36, wherein the first zone is determined by calculating a maximum free space associated with the first room after evaluating the space constraint and a location of one or more items in the first room.
38. The system of claim 37, wherein the retrieved information associated with the second room includes at least:
a spatial limitation associated with each of the second rooms for each of the second users, and
A location of one or more items in each of the second rooms for each of the second users.
39. The system of claim 38, further comprising:
determining a second zone for each of the one or more second rooms based at least in part on the spatial restrictions and the location of the one or more items,
wherein the second zone is determined by calculating a maximum free space associated with each of the one or more second rooms after evaluating the space limitations and the location of the one or more items in each of the one or more second rooms.
40. The system of claim 39, wherein the joint virtual space is created by determining a maximum overlap between a maximum free space associated with the first room and a maximum free space associated with each of the one or more second rooms.
41. A method comprising, by a computing system:
receiving a request to share a display of a first interactive item with one or more users;
generating a first virtual item as a copy of the first interactive item; and
displaying the first virtual item to a subset of the one or more users in a virtual reality environment,
wherein, if a change made to the first interactive item is received, the display of the first virtual item in the virtual reality environment is updated to include the same change as the first interactive item.
42. The method of claim 41, wherein the request to share the display of the first interactive item is from a first user of the one or more users currently interacting with the first interactive item.
43. The method of claim 41, wherein the request to share the display of the first interactive item is from one or more second users, the one or more second users being virtual users associated with the virtual reality environment.
44. The method of claim 41, further comprising, prior to receiving the request to share the display of the first interactive item:
determining one or more interactive items in an environment; and
sending a list of the interactive items to the first user for selection of at least one of the interactive items for display in the virtual reality environment.
45. The method of claim 41, wherein the subset of the one or more users comprises virtual users in the virtual reality environment.
46. The method of claim 41, wherein the first interactive item is located in a real-world environment.
47. The method of claim 46, further comprising:
accessing a location of the first interactive item relative to one or more other items surrounding the first interactive item in the real-world environment;
generating one or more second virtual items as copies of the one or more other items; and
displaying the first virtual item and the one or more second virtual items in the virtual reality environment based on a location of the first interactive item relative to the one or more other items in the real-world environment.
48. The method of claim 46, further comprising:
accessing an orientation of the first interactive item in the real-world environment; and
displaying the first virtual item in the virtual reality environment based on the orientation of the first interactive item in the real-world environment.
49. The method of claim 41, further comprising:
receiving comments associated with the first virtual item from one or more of the subset of users; and
sending the comment to be displayed to a first user of the one or more users who is currently interacting with the first interactive item in the real-world environment.
50. The method of claim 49, wherein the commentary includes one or more of audio commentary, video commentary, or written commentary.
51. One or more computer-readable non-transitory storage media embodying software that is operable when executed to:
receiving a request to share a display of a first interactive item with one or more users;
generating a first virtual item as a copy of the first interactive item; and
displaying the first virtual item to a subset of the one or more users in a virtual reality environment;
wherein, if a change made to the first interactive item is received, the display of the first virtual item in the virtual reality environment is updated to include the same change as the first interactive item.
52. The media of claim 51, wherein the software is further operable when executed to, prior to receiving the request to share the display of the first interactive item:
determining one or more interactive items in an environment; and
sending a list of the interactive items to the first user for selection of at least one of the interactive items for display in the virtual reality environment.
53. The media of claim 51, wherein the first interactive item is located in a real-world environment.
54. The media of claim 53, wherein the software is further operable when executed to:
accessing a location of the first interactive item relative to one or more other items surrounding the first interactive item in the real-world environment;
generating one or more second virtual items as copies of the one or more other items; and
displaying the first virtual item and the one or more second virtual items in the virtual reality environment based on a location of the first interactive item relative to the one or more other items in the real-world environment.
55. The media of claim 53, wherein the software is further operable when executed to:
accessing an orientation of the first interactive item in the real-world environment; and
displaying the first virtual item in the virtual reality environment based on the orientation of the first interactive item in the real-world environment.
56. A system, comprising: one or more processors; and a memory coupled to the processor, the memory including instructions executable by the processor, the processor operable when executing the instructions to:
receiving a request to share a display of a first interactive item with one or more users;
generating a first virtual item as a copy of the first interactive item; and
displaying the first virtual item to a subset of the one or more users in a virtual reality environment;
wherein, if a change made to the first interactive item is received, the display of the first virtual item in the virtual reality environment is updated to reflect the same change made to the first interactive item.
57. The system of claim 56, wherein the processors are further operable when executing the instructions to, prior to receiving the request to share the display of the first interactive item:
determining one or more interactive items in an environment; and
sending a list of the interactive items to the first user for selection of at least one of the interactive items for display in the virtual reality environment.
58. The system of claim 56, wherein the first interactive item is located in a real-world environment.
59. The system of claim 58, wherein the processor is further operable when executing the instructions to:
accessing a location of the first interactive item relative to one or more other items surrounding the first interactive item in the real-world environment;
generating one or more second virtual items as copies of the one or more other items; and
displaying the first virtual item and the one or more second virtual items in the virtual reality environment based on a location of the first interactive item relative to the one or more other items in the real-world environment.
60. The system of claim 58, wherein the processor is further operable when executing the instructions to:
accessing an orientation of the first interactive item in the real-world environment; and
displaying the first virtual item in the virtual reality environment based on the orientation of the first interactive item in the real-world environment.
61. A method comprising, by a computing system:
displaying first virtual content to a first user in a virtual area, the virtual area including one or more second virtual content;
inferring an intent of the first user to interact with the first virtual content based on one or more of a first user action or contextual information; and
adjusting one or more configurations associated with one or more of the second virtual content based on the inference of the first user's intent to interact with the first virtual content.
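By way of a non-limiting sketch of claim 61, the snippet below infers an intent to interact with the first virtual content from a user action or contextual information and then lowers the salience of the surrounding second virtual content; field names such as gaze_target, opacity, and notifications_muted are assumptions of this sketch.

from typing import Dict, List

def infer_interaction_intent(user_action: Dict, context: Dict) -> bool:
    # Signals drawn from the first user action.
    if user_action.get("gaze_target") == "first_content":        # eye movement focused on it
        return True
    if user_action.get("verbal_request"):                         # spoken request
        return True
    if user_action.get("input_target") == "first_content":        # direct input on it
        return True
    # Signals drawn from contextual information, e.g. a preset action or content type.
    return context.get("preset_action") == "play" or context.get("content_type") == "video"

def adjust_second_content(second_contents: List[Dict]) -> None:
    # Adjust configurations of the other virtual content once intent is inferred.
    for content in second_contents:
        content["opacity"] = 0.25               # visual attribute: dim
        content["volume"] = 0.0                 # audio attribute: mute
        content["notifications_muted"] = True   # social-network attribute: hold notifications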
62. The method of claim 61, wherein the first user action comprises one or more of:
a user eye movement focused on the first virtual content,
a verbal request from the first user,
a user input associated with the first virtual content, or
a user input associated with one or more of the second virtual content.
63. The method of claim 61 or 62, wherein the context information comprises one or more of:
location information associated with the first user,
movement information associated with the first user,
time information associated with the first user,
a preset action associated with the first virtual content,
a content type associated with the first virtual content, or
a service type associated with the first virtual content;
optionally, wherein the time information associated with the first user comprises a predetermined period of user inactivity.
64. The method of any of claims 61-63, wherein inferring the intent of the first user is based at least in part on a perspective of a hypothetical user based at least in part on one or more users of an associated social network;
optionally, wherein the hypothetical user is based at least in part on:
each user of the social network, or
one or more subsets of users of the social network.
65. The method of any of claims 61-64, wherein adjusting the one or more configurations associated with the second virtual content comprises one or more of:
adjusting one or more visual attributes of one or more of the second virtual content,
adjusting one or more audio properties of one or more of the second virtual content, or
adjusting one or more social network attributes of one or more of the second virtual content;
optionally, wherein the adjustment to the visual or audio attributes of one or more of the second virtual content is determined based at least in part on a content type associated with the second virtual content or a service type associated with the second virtual content; and/or
optionally, wherein adjusting the social network attribute comprises temporarily limiting or removing all notifications from the social network associated with the second virtual content.
66. The method of any of claims 61-65, wherein the virtual area resides in a virtual reality environment, and the first user is a virtual user in the virtual reality environment.
67. One or more computer-readable non-transitory storage media embodying software that is operable when executed to perform a method according to any one of claims 61 to 66, or operable to:
displaying first virtual content to a first user in a virtual area, the virtual area including one or more second virtual content;
inferring an intent of the first user to interact with the first virtual content based on one or more of a first user action or contextual information; and
adjusting one or more configurations associated with one or more of the second virtual content based on the inference of the first user's intent to interact with the first virtual content.
68. The media of claim 67, wherein the first user action comprises one or more of:
a user eye movement focused on the first virtual content,
a verbal request from the first user,
a user input associated with the first virtual content, or
a user input associated with one or more of the second virtual content.
69. The media of claim 67 or 68, wherein the contextual information comprises one or more of:
location information associated with the first user,
movement information associated with the first user,
time information associated with the first user,
one or more preset actions associated with the first virtual content,
a content type associated with the first virtual content, or
a service type associated with the first virtual content.
70. The media of any one of claims 67-69, wherein inferring the intent of the first user is based at least in part on a perspective of a hypothetical user based at least in part on one or more users of an associated social network; and/or
wherein adjusting the one or more configurations associated with the second virtual content comprises one or more of:
adjusting one or more visual attributes of one or more of the second virtual content,
adjusting one or more audio properties of one or more of the second virtual content, or
adjusting one or more social network attributes of one or more of the second virtual content.
71. A system, comprising: one or more processors; and a memory coupled to the processor, the memory including instructions executable by the processor, the processor being operable when executing the instructions to perform the method according to any one of claims 61 to 66, or being operable to:
displaying first virtual content to a first user in a virtual area, the virtual area including one or more second virtual content;
inferring an intent of the first user to interact with the first virtual content based on one or more of a first user action or contextual information; and
adjusting one or more configurations associated with one or more of the second virtual content based on the inference of the first user's intent to interact with the first virtual content.
72. The system of claim 71, wherein the first user action comprises one or more of:
a user eye movement focused on the first virtual content,
a verbal request from the first user,
a user input associated with the first virtual content, or
a user input associated with one or more of the second virtual content.
73. The system of claim 71 or 72, wherein the contextual information comprises one or more of:
location information associated with the first user,
movement information associated with the first user,
time information associated with the first user,
one or more preset actions associated with the first virtual content,
a content type associated with the first virtual content, or
a service type associated with the first virtual content.
74. The system of any of claims 71-73, wherein inferring the intent of the first user is based at least in part on a perspective of a hypothetical user based at least in part on one or more users of an associated social network; and/or
wherein adjusting the one or more configurations associated with the second virtual content comprises one or more of:
adjusting one or more visual attributes of one or more of the second virtual content,
adjusting one or more audio properties of one or more of the second virtual content, or
adjusting one or more social network attributes of one or more of the second virtual content.
75. A method, in particular according to claim 61, comprising, by the computing system:
receiving a request from a first user to create a federated virtual space for use with one or more second users;
determining a first zone in a first room based at least in part on a spatial limitation associated with the first room and a location of one or more items in the first room, the first room associated with the first user;
retrieving information associated with one or more second rooms for each of the second users;
creating the joint virtual space based on the first zone of the first room and the information associated with each of the second rooms; and
providing access to the federated virtual space to each of the first user and the one or more second users.
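A minimal, illustrative orchestration of claim 75 might look like the following; the RoomInfo structure, the compute_zone and notify callbacks, and the MIN_ZONE_AREA_M2 threshold (the "predetermined minimum area" of claim 76) are assumptions of this sketch, and the zone and overlap geometry is sketched separately after claim 77 below.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class RoomInfo:
    owner_id: str
    bounds: Tuple[float, float, float, float]   # spatial limitation of the room (floor bounding box)
    item_locations: List[Tuple[float, float, float, float]] = field(default_factory=list)

MIN_ZONE_AREA_M2 = 4.0  # hypothetical predetermined minimum area

def handle_joint_space_request(first_room: RoomInfo,
                               second_rooms: List[RoomInfo],
                               compute_zone: Callable[[RoomInfo], Dict],
                               notify: Callable[[str, Dict], None]) -> Dict:
    # Determine the first zone from the first room's spatial limitation and item locations.
    first_zone = compute_zone(first_room)
    if first_zone["area"] < MIN_ZONE_AREA_M2:
        raise ValueError("the first room does not have enough free space")
    # Retrieve information associated with each second user's room and derive its zone.
    second_zones = [compute_zone(room) for room in second_rooms]
    # Create the joint virtual space from the first zone and the second rooms' information.
    joint_space = {"zones": [first_zone] + second_zones,
                   "participants": [first_room.owner_id] + [r.owner_id for r in second_rooms]}
    # Provide access: notify every participant that the joint virtual space is available.
    for user_id in joint_space["participants"]:
        notify(user_id, {"type": "joint_space_ready", "space": joint_space})
    return joint_space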
76. The method of claim 75, further comprising, prior to retrieving information associated with the one or more second rooms:
determining whether the first zone in the first room is equal to or greater than a predetermined minimum area.
77. The method of claim 75 or 76, wherein the first zone is determined by calculating a maximum free space associated with the first room after evaluating the space constraint and the location of the one or more items in the first room;
optionally, wherein the retrieved information associated with the second room includes at least:
a spatial limitation associated with each of the second rooms for each of the second users, and
a location of one or more items in each of the second rooms for each of the second users;
optionally, the method further comprises:
determining a second zone for each of the one or more second rooms based at least in part on the spatial restrictions and the location of the one or more items,
wherein the second zone is determined by calculating a maximum free space associated with each of the one or more second rooms after evaluating the space limitations and the location of the one or more items in each of the one or more second rooms;
optionally, wherein the joint virtual space is created by determining a maximum overlap between a maximum free space associated with the first room and a maximum free space associated with each of the one or more second rooms.
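The free-space and overlap computation recited in claim 77 can be pictured with the following simplified, non-limiting sketch: each room's free floor space is approximated on a coarse occupancy grid after removing item footprints, and the joint virtual space is the set of cells that are free in every participating room. A real embodiment could instead compute exact maximal free rectangles; the grid resolution and box representation here are assumptions of this sketch.

from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) on the floor plane, in metres

def free_space_grid(room: Box, items: List[Box], cell: float = 0.25) -> List[List[bool]]:
    # Approximate the maximum free space of a room as an occupancy grid (True = free)
    # after evaluating the room's spatial limitation and the locations of its items.
    x0, y0, x1, y1 = room
    cols, rows = int((x1 - x0) / cell), int((y1 - y0) / cell)
    grid = [[True] * cols for _ in range(rows)]
    for ix0, iy0, ix1, iy1 in items:
        for r in range(rows):
            for c in range(cols):
                cx, cy = x0 + (c + 0.5) * cell, y0 + (r + 0.5) * cell
                if ix0 <= cx <= ix1 and iy0 <= cy <= iy1:
                    grid[r][c] = False
    return grid

def joint_free_space(grids: List[List[List[bool]]]) -> List[List[bool]]:
    # Maximum overlap of the per-room free spaces (rooms aligned at a common origin corner):
    # a cell belongs to the joint virtual space only if it is free in every room.
    rows = min(len(g) for g in grids)
    cols = min(len(g[0]) for g in grids)
    return [[all(g[r][c] for g in grids) for c in range(cols)] for r in range(rows)]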
78. The method of any of claims 75-77, wherein providing access to the federated virtual space comprises notifying each of the first user and the one or more second users that the federated virtual space is available for use;
optionally, wherein notifying each of the first user and the one or more second users comprises sending, to each of the first user and the one or more second users, an instruction to generate a portal object that allows each of the first user and the one or more second users to virtually access the federated virtual space.
79. The method of any of claims 75-78, wherein generating a portal object comprises:
sending an instruction to the first user to draw within the first area in the first room a virtual doorway that allows the first user to virtually access the joint virtual space; and
sending instructions to each of the second users to draw a virtual doorway in each of the second rooms that allows each of the second users to virtually access the joint virtual space.
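Claims 78 and 79 can likewise be pictured with a non-limiting sketch in which each participant's device is instructed to draw a virtual doorway (a portal object) inside that participant's own room; send_instruction and the anchor fields are hypothetical placeholders.

from typing import Callable, Dict, List

def instruct_portal_objects(first_user_id: str, first_zone_anchor: Dict,
                            second_users: List[Dict],
                            send_instruction: Callable[[str, Dict], None]) -> None:
    # Instruct the first user's device to draw a doorway within the first zone of the first room.
    send_instruction(first_user_id, {"type": "draw_virtual_doorway",
                                     "anchor": first_zone_anchor,
                                     "target": "joint_virtual_space"})
    # Instruct each second user's device to draw a doorway in that user's own room.
    for user in second_users:
        send_instruction(user["id"], {"type": "draw_virtual_doorway",
                                      "anchor": user["room_anchor"],
                                      "target": "joint_virtual_space"})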
80. The method of any of claims 75-79, wherein the joint virtual space resides in a virtual reality environment, and each of the first user and the one or more second users is a virtual user in the virtual reality environment.
81. One or more computer-readable non-transitory storage media, in particular according to claim 67, embodying software that is operable when executed to perform a method according to any of claims 75 to 80, or operable to:
receiving a request from a first user to create a federated virtual space for use with one or more second users;
determining a first zone in a first room based at least in part on a spatial limitation associated with the first room and a location of one or more items in the first room, the first room associated with the first user;
retrieving information associated with one or more second rooms for each of the second users;
creating the joint virtual space based on the first zone of the first room and the information associated with each of the second rooms; and
providing access to the federated virtual space to each of the first user and the one or more second users.
82. The medium of claim 81, wherein the first zone is determined by calculating a maximum free space associated with the first room after evaluating the space constraint and the location of the one or more items in the first room;
optionally, wherein the retrieved information associated with the second room includes at least:
a spatial limitation associated with each of the second rooms for each of the second users, and
a location of one or more items in each of the second rooms for each of the second users.
83. The medium of claim 82, further comprising:
determining a second zone for each of the one or more second rooms based at least in part on the spatial restrictions and the location of the one or more items,
wherein the second zone is determined by calculating a maximum free space associated with each of the one or more second rooms after evaluating the space limitations and the location of the one or more items in each of the one or more second rooms;
optionally, wherein the joint virtual space is created by determining a maximum overlap between a maximum free space associated with the first room and a maximum free space associated with each of the one or more second rooms.
84. A system, in particular according to claim 71, comprising: one or more processors; and a memory coupled to the processor, the memory including instructions executable by the processor, the processor being operable when executing the instructions to perform a method according to any one of claims 75 to 80, or being operable to:
receiving a request from a first user to create a federated virtual space for use with one or more second users;
determining a first zone in a first room based at least in part on a spatial limitation associated with the first room and a location of one or more items in the first room, the first room associated with the first user;
retrieving information associated with one or more second rooms for each of the second users;
creating the joint virtual space based on the first zone of the first room and the information associated with each of the second rooms; and
providing access to the federated virtual space to each of the first user and the one or more second users.
85. The system of claim 84, wherein the first zone is determined by calculating a maximum free space associated with the first room after evaluating the space constraint and the location of the one or more items in the first room;
optionally, wherein the retrieved information associated with the second room includes at least:
a spatial limitation associated with each of the second rooms for each of the second users, and
a location of one or more items in each of the second rooms for each of the second users.
86. The system of claim 85, further comprising:
determining a second zone for each of the one or more second rooms based at least in part on the spatial restrictions and the location of the one or more items,
wherein the second zone is determined by calculating a maximum free space associated with each of the one or more second rooms after evaluating the space limitations and the location of the one or more items in each of the one or more second rooms;
optionally, wherein the joint virtual space is created by determining a maximum overlap between a maximum free space associated with the first room and a maximum free space associated with each of the one or more second rooms.
87. A method, in particular according to claim 61 or 75, comprising, by a computing system:
receiving a request to share a display of a first interactive item with one or more users;
generating a first virtual item as a copy of the first interactive item; and
displaying the first virtual item to a subset of the one or more users in a virtual reality environment,
wherein, if a change made to the first interactive item is received, the display of the first virtual item in the virtual reality environment is updated to reflect the same change made to the first interactive item.
88. The method of claim 87, wherein the request to share the display of the first interactive item is from a first user of the one or more users currently interacting with the first interactive item.
89. The method of claim 87 or 88, wherein the request to share the display of the first interactive item is from one or more second users, the one or more second users being virtual users associated with the virtual reality environment.
90. The method of any of claims 87-89, further comprising, prior to receiving the request to share the display of the first interactive item,
determining one or more interactive items in an environment; and
sending a list of the interactive items to the first user for selection of at least one of the interactive items for display in the virtual reality environment.
91. The method of any one of claims 87-90, wherein the subset of the one or more users includes virtual users in the virtual reality environment.
92. The method of any one of claims 87 to 91, wherein the first interactive item is located in a real-world environment;
optionally, the method further comprises:
accessing a location of the first interactive item relative to one or more other items surrounding the first interactive item in the real-world environment;
generating one or more second virtual items as copies of the one or more other items; and
displaying the first virtual item and the one or more second virtual items in the virtual reality environment based on a location of the first interactive item relative to the one or more other items in the real-world environment; and/or
optionally, the method further comprises:
accessing an orientation of the first interactive item in the real-world environment; and
displaying the first virtual item in the virtual reality environment based on the orientation of the first interactive item in the real-world environment.
93. The method of any one of claims 87-92, further comprising:
receiving a comment associated with the first virtual item from one or more users of the subset of users; and
sending the comment to be displayed to a first user of the one or more users who is currently interacting with the first interactive item in a real-world environment;
optionally, wherein the comment comprises one or more of an audio comment, a video comment, or a written comment.
94. One or more computer-readable non-transitory storage media, in particular according to claim 67 or 81, embodying software that is operable when executed to perform a method according to any of claims 87 to 93, or operable to:
receiving a request to share a display of a first interactive item with one or more users;
generating a first virtual item as a copy of the first interactive item; and
displaying the first virtual item to a subset of the one or more users in a virtual reality environment;
wherein, if a change made to the first interactive item is received, the display of the first virtual item in the virtual reality environment is updated to reflect the same change made to the first interactive item.
95. The media of claim 94, wherein the software is further operable when executed to, prior to receiving the request to share the display of the first interactive item:
determining one or more interactive items in an environment; and
sending a list of the interactive items to the first user for selection of at least one of the interactive items for display in the virtual reality environment.
96. The medium of claim 94 or 95, wherein the first interactive item is located in a real-world environment;
optionally, wherein the software is further operable when executed to:
accessing a location of the first interactive item relative to one or more other items surrounding the first interactive item in the real-world environment;
generating one or more second virtual items as copies of the one or more other items; and
displaying the first virtual item and the one or more second virtual items in the virtual reality environment based on a location of the first interactive item relative to the one or more other items in the real-world environment; and/or
optionally, wherein the software is further operable when executed to:
accessing an orientation of the first interactive item in the real-world environment; and
displaying the first virtual item in the virtual reality environment based on the orientation of the first interactive item in the real-world environment.
97. A system, in particular according to claim 71 or 84, comprising: one or more processors; and a memory coupled to the processor, the memory including instructions executable by the processor, the processor being operable when executing the instructions to perform a method according to any one of claims 87 to 93, or being operable to:
receiving a request to share a display of a first interactive item with one or more users;
generating a first virtual item as a copy of the first interactive item; and
displaying the first virtual item to a subset of the one or more users in a virtual reality environment;
wherein, if a change made to the first interactive item is received, the display of the first virtual item in the virtual reality environment is updated to reflect the same change made to the first interactive item.
98. The system of claim 97, wherein the processors are further operable when executing the instructions to, prior to receiving the request to share the display of the first interactive item:
determining one or more interactive items in an environment; and
sending a list of the interactive items to the first user for selection of at least one of the interactive items for display in the virtual reality environment.
99. The system of claim 97 or 98, wherein the first interactive item is located in a real-world environment;
optionally, wherein the instructions, when executed, are further operable to:
accessing a location of the first interactive item relative to one or more other items surrounding the first interactive item in the real-world environment;
generating one or more second virtual items as copies of the one or more other items; and
displaying the first virtual item and the one or more second virtual items in the virtual reality environment based on a location of the first interactive item relative to the one or more other items in the real-world environment; and/or
optionally, wherein the instructions, when executed, are further operable to:
accessing an orientation of the first interactive item in the real-world environment; and
displaying the first virtual item in the virtual reality environment based on the orientation of the first interactive item in the real-world environment.
CN201980093248.8A 2018-12-27 2019-02-14 Virtual space, mixed reality space, and combined mixed reality space for improved interaction and collaboration Pending CN113544633A (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US16/234,128 US11024074B2 (en) 2018-12-27 2018-12-27 Virtual spaces, mixed reality spaces, and combined mixed reality spaces for improved interaction and collaboration
US16/234,013 2018-12-27
US16/234,013 US10921878B2 (en) 2018-12-27 2018-12-27 Virtual spaces, mixed reality spaces, and combined mixed reality spaces for improved interaction and collaboration
US16/234,128 2018-12-27
US16/233,846 US20200210137A1 (en) 2018-12-27 2018-12-27 Virtual spaces, mixed reality spaces, and combined mixed reality spaces for improved interaction and collaboration
US16/233,846 2018-12-27
PCT/US2019/017947 WO2020139409A1 (en) 2018-12-27 2019-02-14 Virtual spaces, mixed reality spaces, and combined mixed reality spaces for improved interaction and collaboration

Publications (1)

Publication Number Publication Date
CN113544633A (en) 2021-10-22

Family

ID=71125881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980093248.8A Pending CN113544633A (en) 2018-12-27 2019-02-14 Virtual space, mixed reality space, and combined mixed reality space for improved interaction and collaboration

Country Status (4)

Country Link
JP (1) JP2022521117A (en)
KR (1) KR20210096695A (en)
CN (1) CN113544633A (en)
WO (1) WO2020139409A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116382613A (en) * 2023-06-05 2023-07-04 江西格如灵科技股份有限公司 Whiteboard synchronization method, device, equipment and medium in virtual reality environment

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117170504B (en) * 2023-11-01 2024-01-19 南京维赛客网络科技有限公司 Method, system and storage medium for viewing with person in virtual character interaction scene

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140136999A1 (en) * 2012-11-14 2014-05-15 Rounds Entertainment Ltd. Multi-User Interactive Virtual Environment System and Method
CN108604118A (en) * 2016-03-07 2018-09-28 谷歌有限责任公司 Smart object size adjustment in enhancing/reality environment and arrangement

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2685353C (en) * 2007-03-07 2016-04-26 Ideaflood, Inc. Multi-instance, multi-user animation platforms
US20120142415A1 (en) * 2010-12-03 2012-06-07 Lindsay L Jon Video Show Combining Real Reality and Virtual Reality
US9645394B2 (en) * 2012-06-25 2017-05-09 Microsoft Technology Licensing, Llc Configured virtual environments
US9818225B2 (en) * 2014-09-30 2017-11-14 Sony Interactive Entertainment Inc. Synchronizing multiple head-mounted displays to a unified space and correlating movement of objects in the unified space
US10062208B2 (en) * 2015-04-09 2018-08-28 Cinemoi North America, LLC Systems and methods to provide interactive virtual environments
US11163358B2 (en) * 2016-03-17 2021-11-02 Sony Interactive Entertainment Inc. Spectating virtual (VR) environments associated with VR user interactivity


Also Published As

Publication number Publication date
JP2022521117A (en) 2022-04-06
WO2020139409A1 (en) 2020-07-02
KR20210096695A (en) 2021-08-05

Similar Documents

Publication Publication Date Title
US10921878B2 (en) Virtual spaces, mixed reality spaces, and combined mixed reality spaces for improved interaction and collaboration
US11024074B2 (en) Virtual spaces, mixed reality spaces, and combined mixed reality spaces for improved interaction and collaboration
KR102630902B1 (en) Automated decisions based on descriptive models
US20200210137A1 (en) Virtual spaces, mixed reality spaces, and combined mixed reality spaces for improved interaction and collaboration
US10719989B2 (en) Suggestion of content within augmented-reality environments
CN112639682A (en) Multi-device mapping and collaboration in augmented reality environments
US20180300822A1 (en) Social Context in Augmented Reality
WO2020041652A1 (en) Sharing and presentation of content within augmented-reality environments
AU2017254967A1 (en) Presence granularity with augmented reality
US20180316900A1 (en) Continuous Capture with Augmented Reality
US20220345537A1 (en) Systems and Methods for Providing User Experiences on AR/VR Systems
CN113544633A (en) Virtual space, mixed reality space, and combined mixed reality space for improved interaction and collaboration
US11172189B1 (en) User detection for projection-based augmented reality system
US20240135701A1 (en) High accuracy people identification over time by leveraging re-identification
US11196985B1 (en) Surface adaptation for projection-based augmented reality system
US11006097B1 (en) Modeling for projection-based augmented reality system
US11070792B1 (en) Surface selection for projection-based augmented reality system
CN110908501B (en) Display opacity control in artificial reality to prevent occlusion of field of view
US20220083631A1 (en) Systems and methods for facilitating access to distributed reconstructed 3d maps
CN110908501A (en) Display opacity control to prevent field of view occlusion in artificial reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: California, USA

Applicant after: Meta Platforms Technologies, LLC

Address before: California, USA

Applicant before: Facebook Technologies, LLC