US20220319117A1 - Providing adaptive asynchronous interactions in extended reality environments - Google Patents
- Publication number
- US20220319117A1 (application Ser. No. 17/655,785)
- Authority
- US
- United States
- Prior art keywords
- avatar
- user
- extended reality
- reality environment
- personas
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/024—Multi-user, collaborative environment
Definitions
- the present disclosure relates generally to extended reality (XR) systems, and relates more particularly to devices, non-transitory computer-readable media, and methods for generating and displaying avatars in XR environments that allow for adaptive asynchronous interactions among users.
- Extended reality is an umbrella term that has been used to refer to various different forms of immersive technologies, including virtual reality (VR), augmented reality (AR), mixed reality (MR), cinematic reality (CR), and diminished reality (DR).
- XR technologies allow virtual world (e.g., digital) objects to be brought into “real” (e.g., non-virtual) world environments and real world objects to be brought into virtual environments, e.g., via overlays or other mechanisms.
- XR technologies may have applications in fields including architecture, sports training, medicine, real estate, gaming, television and film, engineering, travel, and others. As such, immersive experiences that rely on XR technologies are growing in popularity.
- a method performed by a processing system including at least one processor includes rendering an extended reality environment, receiving a request from a user endpoint device of a first user in the extended reality environment to place an avatar of the first user in the extended reality environment, placing the avatar of the first user within the extended reality environment, detecting that a second user is attempting to interact with the avatar of the first user, detecting conditions surrounding the avatar of the first user, identifying, based on the conditions, a set of candidate avatar personas for the first user, selecting a first avatar persona from among the set of candidate avatar personas, and rendering the avatar of the first user with the first avatar persona in the extended reality environment.
- In another example, a non-transitory computer-readable medium stores instructions which, when executed by a processing system including at least one processor, cause the processing system to perform operations.
- the operations include rendering an extended reality environment, receiving a request from a user endpoint device of a first user in the extended reality environment to place an avatar of the first user in the extended reality environment, placing the avatar of the first user within the extended reality environment, detecting that a second user is attempting to interact with the avatar of the first user, detecting conditions surrounding the avatar of the first user, identifying, based on the conditions, a set of candidate avatar personas for the first user, selecting a first avatar persona from among the set of candidate avatar personas, and rendering the avatar of the first user with the first avatar persona in the extended reality environment.
- In another example, a device includes a processing system including at least one processor and a computer-readable medium storing instructions which, when executed by the processing system, cause the processing system to perform operations.
- the operations include rendering an extended reality environment, receiving a request from a user endpoint device of a first user in the extended reality environment to place an avatar of the first user in the extended reality environment, placing the avatar of the first user within the extended reality environment, detecting that a second user is attempting to interact with the avatar of the first user, detecting conditions surrounding the avatar of the first user, identifying, based on the conditions, a set of candidate avatar personas for the first user, selecting a first avatar persona from among the set of candidate avatar personas, and rendering the avatar of the first user with the first avatar persona in the extended reality environment.
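The claimed sequence of operations can be sketched as a short processing pipeline. This is a hypothetical illustration only: none of the class or function names below appear in the disclosure, and the condition and interaction detection are stubbed with fixed values.

```python
# Minimal, hypothetical sketch of the claimed processing steps.

class XREnvironment:
    def __init__(self):
        self.avatars = {}    # user -> placed avatar
        self.rendered = {}   # user -> persona currently rendered

    def place_avatar(self, user):
        # Receive a placement request and place the avatar in the environment.
        self.avatars[user] = {"user": user}
        return self.avatars[user]

    def detects_interaction(self, second_user, avatar):
        # Detect that a second user is attempting to interact (stubbed).
        return second_user is not None

    def detect_conditions(self, avatar):
        # Detect conditions surrounding the avatar (stubbed).
        return {"location": "office"}

    def render(self, avatar, persona):
        # Render the avatar with the selected persona.
        self.rendered[avatar["user"]] = persona


def identify_candidates(personas, conditions):
    # Identify candidate personas whose context matches the conditions.
    return [p for p in personas if p["context"] == conditions["location"]]


def handle_interaction(env, first_user, second_user, personas):
    avatar = env.place_avatar(first_user)
    if not env.detects_interaction(second_user, avatar):
        return None
    conditions = env.detect_conditions(avatar)
    candidates = identify_candidates(personas, conditions)
    persona = candidates[0] if candidates else {"name": "default", "context": None}
    env.render(avatar, persona)
    return persona
```

In this sketch the first persona whose context matches is selected; the disclosure leaves the selection criterion open (e.g., user preferences or relationship to the second user could also be weighed).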
- FIG. 1 illustrates an example system in which examples of the present disclosure may operate
- FIG. 2 illustrates a flowchart of an example method for providing adaptive asynchronous interactions in extended reality environments in accordance with the present disclosure
- FIG. 3 depicts a high-level block diagram of a computing device specifically programmed to perform the functions described herein.
- the present disclosure enhances social engagement among users in extended reality (XR) environments by generating and displaying avatars that allow users to interact in an adaptive asynchronous manner with other users.
- XR technologies allow virtual world (e.g., digital) objects to be brought into “real” (e.g., non-virtual) world environments and real world objects to be brought into virtual environments, e.g., via overlays or other mechanisms.
- virtual world object is an avatar, or a digital representation of a person.
- the avatar may be used by the user to explore the XR environment, to interact with objects in the XR environment, and to interact with avatars of other users in the XR environment.
- the avatar functions as a virtual persona for the user within the XR environment.
- Examples of the present disclosure allow users of XR environments to create a plurality of different avatar personas, where each avatar persona may provide an interaction that is representative of a different behavior of the user.
- In one example, the XR system (e.g., an application server rendering the XR environment) may select an avatar persona that is appropriate for the context of the interaction. For instance, if the meeting is a more formal meeting with American users (e.g., a business meeting), then it may be more appropriate for the user's avatar to shake hands with other users' avatars as a form of greeting. If the meeting is a more formal meeting with Japanese users, then it may be more appropriate for the user's avatar to bow to the other users' avatars as a form of greeting.
- locational and situational context may be considered when determining which avatar persona of a plurality of avatar personas created for a given user should be presented to other users when the given user is not actively engaged in the extended reality environment.
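The greeting examples above amount to a context lookup. The table and function below are a hypothetical illustration of that idea; the disclosure describes the behavior, not this data structure.

```python
# Hypothetical mapping from (formality, locale) context to a greeting behavior.
GREETINGS = {
    ("formal", "US"): "shake hands",
    ("formal", "JP"): "bow",
    ("casual", None): "wave",
}

def select_greeting(formality, locale):
    # Prefer an exact (formality, locale) match, falling back to a
    # generic casual greeting when no entry exists for the context.
    return GREETINGS.get((formality, locale), GREETINGS[("casual", None)])
```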
- an interaction that is “asynchronous” is understood to refer to an interaction that is independent of time.
- at least one of the parties (e.g., a first party) participating in the interaction may not be “online” (e.g., logged into or otherwise actively engaged with the XR environment) at the time that the interaction is taking place.
- the interaction may still be able to proceed as a natural exchange that appears to be taking place synchronously, in real time as to one or more other parties, e.g., a second party.
- an “avatar” is understood to refer to a digital representation of a user, where the digital representation may have certain visual features (e.g., an appearance), audible features (e.g., a voice), and the like which may be selected by the user.
- the user may configure the avatar to look and/or sound like the user, or the user may configure the avatar to look and/or sound different from the user.
- the avatar may function as a representation of the user in the XR environment.
- An avatar “persona” is understood to refer to a behavior of the user's avatar, where the behavior may vary depending on location, company, and/or other context of the avatar. According to examples of the present disclosure, multiple different personas may be created for the same avatar. Thus, the avatar may act or behave differently in different situations (while the appearance of the avatar may or may not change depending upon the situation).
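The avatar/persona distinction defined above can be modeled as one avatar (a stable appearance) carrying several personas (behaviors keyed to situations). The following data model is a hypothetical sketch; the disclosure does not define these types.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    context: str                                   # e.g. "business_meeting"
    behaviors: dict = field(default_factory=dict)  # trigger -> action

@dataclass
class Avatar:
    user: str
    appearance: str                                # shared across personas
    personas: list = field(default_factory=list)

    def persona_for(self, context):
        # The same avatar behaves differently in different situations,
        # while its appearance may stay the same.
        for persona in self.personas:
            if persona.context == context:
                return persona
        return None
```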
- FIG. 1 illustrates an example system 100 in which examples of the present disclosure may operate.
- the system 100 may include any one or more types of communication networks, such as a traditional circuit switched network (e.g., a public switched telephone network (PSTN)) or a packet network such as an Internet Protocol (IP) network (e.g., an IP Multimedia Subsystem (IMS) network), an asynchronous transfer mode (ATM) network, a wireless network, a cellular network (e.g., 2G, 3G, and the like), a long term evolution (LTE) network, a 5G network, and the like.
- An IP network is broadly defined as a network that uses Internet Protocol to exchange data packets.
- the system 100 may comprise a network 102 , e.g., a telecommunication service provider network, a core network, or an enterprise network comprising infrastructure for computing and communications services of a business, an educational institution, a governmental service, or other enterprises.
- the network 102 may be in communication with one or more access networks 120 and 122 , and the Internet (not shown).
- network 102 may combine core network components of a cellular network with components of a triple-play service network, where triple-play services include telephone services, Internet or data services, and television services to subscribers.
- network 102 may functionally comprise a fixed mobile convergence (FMC) network, e.g., an IP Multimedia Subsystem (IMS) network.
- network 102 may functionally comprise a telephony network, e.g., an Internet Protocol/Multi-Protocol Label Switching (IP/MPLS) backbone network utilizing Session Initiation Protocol (SIP) for circuit-switched and Voice over Internet Protocol (VoIP) telephony services.
- Network 102 may further comprise a broadcast television network, e.g., a traditional cable provider network or an Internet Protocol Television (IPTV) network, as well as an Internet Service Provider (ISP) network.
- network 102 may include a plurality of television (TV) servers (e.g., a broadcast server, a cable head-end), a plurality of content servers, an advertising server (AS), an interactive TV/video on demand (VoD) server, and so forth.
- the access networks 120 and 122 may comprise broadband optical and/or cable access networks, Local Area Networks (LANs), wireless access networks (e.g., an IEEE 802.11/Wi-Fi network and the like), cellular access networks, Digital Subscriber Line (DSL) networks, public switched telephone network (PSTN) access networks, 3rd-party networks, and the like.
- the operator of network 102 may provide a cable television service, an IPTV service, or any other types of telecommunication service to subscribers via access networks 120 and 122 .
- the access networks 120 and 122 may comprise different types of access networks, may comprise the same type of access network, or some access networks may be the same type of access network and other access networks may be different types of access networks.
- the network 102 may be operated by a telecommunication network service provider.
- the network 102 and the access networks 120 and 122 may be operated by different service providers, the same service provider or a combination thereof, or may be operated by entities having core businesses that are not related to telecommunications services, e.g., corporate, governmental or educational institution LANs, and the like.
- network 102 may include an application server (AS) 104 , which may comprise a computing system or server, such as computing system 300 depicted in FIG. 3 , and may be configured to provide one or more operations or functions in connection with examples of the present disclosure for providing adaptive asynchronous interactions in XR environments.
- the network 102 may also include a database (DB) 106 that is communicatively coupled to the AS 104 .
- the terms “configure,” and “reconfigure” may refer to programming or loading a processing system with computer-readable/computer-executable instructions, code, and/or programs, e.g., in a distributed or non-distributed memory, which when executed by a processor, or processors, of the processing system within a same device or within distributed devices, may cause the processing system to perform various functions.
- Such terms may also encompass providing variables, data values, tables, objects, or other data structures or the like which may cause a processing system executing computer-readable instructions, code, and/or programs to function differently depending upon the values of the variables or other data structures that are provided.
- a “processing system” may comprise a computing device including one or more processors, or cores (e.g., as illustrated in FIG. 3 and discussed below) or multiple computing devices collectively configured to perform various steps, functions, and/or operations in accordance with the present disclosure.
- AS 104 may comprise a centralized network-based server for generating extended reality media.
- the AS 104 may host an application that allows users to create multiple different avatar personas of themselves, where the multiple different avatar personas may be used to interact with other users within an XR environment when the users represented by the avatars are offline (e.g., not logged into or otherwise actively engaged in the XR environment).
- the multiple different avatar personas may be programmed to represent instances of different behaviors of the associated user. For instance, in one example, when a first user is offline and a second user wishes to interact with the first user within the XR environment, the AS 104 may select an avatar persona from among the first user's multiple avatar personas to conduct an asynchronous interaction with the second user.
- the AS 104 may select the avatar persona from among the multiple avatar personas based on the conditions and context of the desired interaction, such as the relationship of the second user to the first user (e.g., family, friends, professional acquaintances, etc.), the conditions of the location within the XR environment where the interaction will take place (e.g., crowded, noisy, etc.), and the like.
- AS 104 may comprise a physical storage device (e.g., a database server) to store sets of different avatar personas for the users of the XR environment, as discussed in greater detail below.
- the AS 104 may store an index, where the index maps each user in the XR environment to a plurality of different avatar personas for that user.
- the index may further map each avatar persona to a set of predefined actions and/or utterances, where each action or utterance in the set of predefined actions and/or utterances may be triggered by different locations, contexts, and/or events.
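The two-level index described above can be illustrated as a nested mapping from user to persona to triggered action. The shape below is a hypothetical sketch; the disclosure does not prescribe a concrete data structure, and all names and values are invented for illustration.

```python
# Hypothetical index shape: user -> persona -> trigger -> predefined
# action or utterance, where a trigger is a (kind, value) pair for a
# location, context, or event.
index = {
    "user_1": {
        "professional": {
            ("event", "greeting"): "utter: Hello, how can I help?",
            ("location", "conference_room"): "action: shake hands",
        },
        "familial": {
            ("event", "greeting"): "action: hug",
        },
    },
}

def lookup_action(index, user, persona, trigger):
    # Resolve a trigger to a predefined action or utterance, if any.
    return index.get(user, {}).get(persona, {}).get(trigger)
```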
- the AS 104 may also store user profiles which may specify user preferences that can be used to filter a set of avatar personas for an asynchronous interaction.
- a user profile may specify, for each user: demographic information (e.g., age, gender, marital status, education, etc.), device information (e.g., whether the user uses a head mounted display, a mobile phone, a tablet computer, or the like to render and display XR media, the types of connections used by the device to access XR media such as cellular or WiFi, etc.), interests (e.g., favorite hobbies, sports teams, music, movies, etc.), usage history with respect to XR media (e.g., types of digital objects that the user has interacted with and/or ignored in the past), and/or preferences (e.g., does not like swearing or violence).
- a user profile may also specify restrictions on the types of interactions that may be rendered for the user. For instance, a parent may configure a child's profile so that interactions which may be considered too violent or too scary (or which may include too much strong language) are prohibited from being rendered.
- profiles may be stored on an opt-in basis, i.e., a user may elect to not have a profile.
- the user profiles may be stored in encrypted form to protect any user information that may be deemed private.
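Profile-based filtering of candidate personas, such as a child's profile that prohibits violent or strong-language interactions, can be sketched as a simple set intersection. The field names (`prohibited_content`, `content_tags`) are hypothetical, not taken from the disclosure.

```python
# Hypothetical sketch: drop any candidate persona that carries a content
# tag prohibited by the viewing user's profile.
def filter_personas(candidates, profile):
    prohibited = set(profile.get("prohibited_content", []))
    return [p for p in candidates
            if not (set(p.get("content_tags", [])) & prohibited)]

child_profile = {"prohibited_content": ["violence", "strong_language"]}
candidates = [
    {"name": "gruff", "content_tags": ["strong_language"]},
    {"name": "friendly", "content_tags": []},
]
```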
- the DB 106 may store the avatar personas, the index, and/or the user profiles, and the AS 104 may retrieve the avatar personas, the index, and/or user profiles from the DB 106 when needed.
- various additional elements of network 102 are omitted from FIG. 1 .
- access network 122 may include an edge server 108 , which may comprise a computing system or server, such as computing system 300 depicted in FIG. 3 , and may be configured to provide one or more operations or functions for providing adaptive asynchronous interactions in XR environments, as described herein.
- An example method 200 for providing adaptive asynchronous interactions in XR environments is illustrated in FIG. 2 and described in greater detail below.
- application server 104 may comprise a network function virtualization infrastructure (NFVI), e.g., one or more devices or servers that are available as host devices to host virtual machines (VMs), containers, or the like comprising virtual network functions (VNFs).
- access networks 120 and 122 may comprise “edge clouds,” which may include a plurality of nodes/host devices, e.g., computing resources comprising processors, e.g., central processing units (CPUs), graphics processing units (GPUs), programmable logic devices (PLDs), such as field programmable gate arrays (FPGAs), or the like, memory, storage, and so forth.
- edge server 108 may be instantiated on one or more servers hosting virtualization platforms for managing one or more virtual machines (VMs), containers, microservices, or the like.
- edge server 108 may comprise a VM, a container, or the like.
- the access network 120 may be in communication with a server 110 .
- access network 122 may be in communication with one or more devices, e.g., a user endpoint device 112 .
- Access networks 120 and 122 may transmit and receive communications between server 110 , user endpoint device 112 , application server (AS) 104 , other components of network 102 , devices reachable via the Internet in general, and so forth.
- user endpoint device 112 may comprise a mobile device, a cellular smart phone, a wearable computing device (e.g., smart glasses, a virtual reality (VR) headset or other types of head mounted display, or the like), a laptop computer, a tablet computer, or the like (broadly an “XR device”).
- user endpoint device 112 may comprise a computing system or device, such as computing system 300 depicted in FIG. 3 , and may be configured to provide one or more operations or functions in connection with examples of the present disclosure for providing adaptive asynchronous interactions in XR environments.
- server 110 may comprise a network-based server for generating XR media.
- server 110 may comprise the same or similar components as those of AS 104 and may provide the same or similar functions.
- Thus, any functions described herein with respect to AS 104 may similarly apply to server 110 , and vice versa.
- server 110 may be a component of an XR system operated by an entity that is not a telecommunications network operator.
- a provider of an XR system may operate server 110 and may also operate edge server 108 in accordance with an arrangement with a telecommunication service provider offering edge computing resources to third-parties.
- a telecommunication network service provider may operate network 102 and access network 122 , and may also provide an XR system via AS 104 and edge server 108 .
- the XR system may comprise an additional service that may be offered to subscribers, e.g., in addition to network access services, telephony services, traditional television services, and so forth.
- an XR system may be provided via AS 104 and edge server 108 .
- a user may engage an application on user endpoint device 112 (e.g., an “XR device”) to establish one or more sessions with the XR system, e.g., a connection to edge server 108 (or a connection to edge server 108 and a connection to AS 104 ).
- the access network 122 may comprise a cellular network (e.g., a 4G network and/or an LTE network, or a portion thereof, such as an evolved Uniform Terrestrial Radio Access Network (eUTRAN), an evolved packet core (EPC) network, etc., a 5G network, etc.).
- the communications between user endpoint device 112 and edge server 108 may involve cellular communication via one or more base stations (e.g., eNodeBs, gNBs, or the like).
- the communications may alternatively or additionally be via a non-cellular wireless communication modality, such as IEEE 802.11/Wi-Fi, or the like.
- access network 122 may comprise a wireless local area network (WLAN) containing at least one wireless access point (AP), e.g., a wireless router.
- user endpoint device 112 may communicate with access network 122 , network 102 , the Internet in general, etc., via a WLAN that interfaces with access network 122 .
- user endpoint device 112 may establish a session with edge server 108 for obtaining an XR media.
- the XR media may insert one or more digital objects into a real-time image stream of a real world scene to generate an XR environment.
- an example XR environment 114 is illustrated in FIG. 1 .
- the XR environment 114 may be viewed by a user through the user endpoint device 112 , e.g., on a display of a head mounted display or mobile phone, or through a set of smart glasses.
- the user endpoint device 112 (or alternatively the AS 104 , edge server 108 , or server 110 ) may detect one or more virtual items and/or avatars with which the user may interact.
- the user may interact with virtual items 116 1 - 116 n (hereinafter individually referred to as a “virtual item 116 ” or collectively referred to as “virtual items 116 ”) and/or avatar 118 .
- Each virtual item 116 and avatar 118 may have a set of predefined actions that the user may take with respect to the virtual item 116 or avatar 118 . Furthermore, in the case of the avatar 118 , the avatar 118 may also have a set of predefined actions or utterances that the avatar 118 may perform or speak when triggered by the user.
- In another example, a virtual item 116 may comprise a digital twin (or XR representation) of a real world object, such as a real tree being mapped to a virtual tree at location 116 1 , where the XR experience can activate or disable itself according to the avatars (or the index of avatars) in the presence of the virtual item.
- For instance, if the virtual item is a tree (e.g., virtual item 116 1 ), the user may be able to climb the tree, pick a piece of fruit from the tree, or the like. If the virtual item is a mailbox (e.g., virtual item 116 2 ), the user may be able to place a piece of mail in the mailbox.
- the avatar 118 may represent a specific user who may currently be offline (e.g., not currently logged into or actively engaged with the XR environment). However, the avatar 118 may be programmed with a predefined set of actions and/or utterances (a persona) that allows the user to interact with the specific user represented by the avatar in an asynchronous manner. For instance, the avatar 118 may be triggered to say something to the user and/or to perform some action upon the user attempting an interaction with the avatar 118 . For instance, if the user waves to the avatar 118 or says hello, the avatar 118 may wave back or say hello to the user.
- the avatar 118 may be further customized to resemble the specific user represented by the avatar and to speak and behave like the specific user. Thus, if the avatar represents a friend of the user, when the user attempts to interact with the avatar 118 , the avatar 118 may greet the user by name or hug the user (or the user's avatar). This more personalized interaction may provide for a more immersive experience.
- system 100 has been simplified. Thus, it should be noted that the system 100 may be implemented in a different form than that which is illustrated in FIG. 1 , or may be expanded by including additional endpoint devices, access networks, network elements, application servers, etc. without altering the scope of the present disclosure.
- system 100 may be altered to omit various elements, substitute elements for devices that perform the same or similar functions, combine elements that are illustrated as separate devices, and/or implement network elements as functions that are spread across several devices that operate collectively as the respective network elements.
- the system 100 may include other network elements (not shown) such as border elements, routers, switches, policy servers, security devices, gateways, a content distribution network (CDN) and the like.
- portions of network 102 , access networks 120 and 122 , and/or Internet may comprise a content distribution network (CDN) having ingest servers, edge servers, and the like for packet-based streaming of video, audio, or other content.
- access networks 120 and/or 122 may each comprise a plurality of different access networks that may interface with network 102 independently or in a chained manner.
- the functions of AS 104 may be similarly provided by server 110 , or may be provided by AS 104 in conjunction with server 110 .
- AS 104 and server 110 may be configured in a load balancing arrangement, or may be configured to provide for backups or redundancies with respect to each other, and so forth.
- FIG. 2 illustrates a flowchart of a method 200 for providing adaptive asynchronous interactions in extended reality environments in accordance with the present disclosure.
- the method 200 provides a method by which a user in an extended reality environment may place an avatar in the extended reality environment in order to interact asynchronously with other users.
- the method 200 may be performed by an XR server that is configured to generate digital overlays that may be superimposed over images of a “real world” environment to produce an extended reality environment, such as the AS 104 or server 110 illustrated in FIG. 1 .
- the method 200 may be performed by another device, such as the processor 302 of the system 300 illustrated in FIG. 3 .
- the method 200 is described as being performed by a processing system.
- the processing system may render an extended reality (XR) environment.
- the XR environment may be an environment that combines images or elements of the real world with digital or “virtual” elements. At least some of the virtual elements may be interactive, such that a user may interact with the virtual elements to trigger some action or event.
- a user may view and interact with the XR environment using any type of device or combination of devices that is capable of displaying and exchanging signals with the XR environment, including a mobile device such as a mobile phone or tablet computer or a wearable device such as a head mounted display.
- the processing system may render the XR environment by generating a digital overlay that is superimposed over a stream of images (e.g., video) of a real world environment.
- a wearable device such as a head mounted display may present the overlay on a display while the user is viewing the real world environment through the head mounted display.
- the processing system may generate an entirely digital environment in which certain elements of a real world environment (e.g., buildings, vehicles, geological features, landmarks, etc.) are recreated in digital form.
- the processing system may receive a request from a user endpoint device of a first user in the extended reality environment to place an avatar of the first user in the extended reality environment.
- the avatar may comprise a digital representation of the first user.
- the avatar may not be simply a static representation of the first user; the avatar may be an interactive digital persona that mimics or emulates the personality and behaviors of the first user.
- the first user may control the avatar.
- the first user may actively control the utterances and behaviors of the avatar in real time (i.e., as the avatar is interacting with the other users).
- the first user may wish to leave the avatar in the XR environment to interact with other users, even when the first user is not logged into or actively present and interacting within the XR environment.
- the first user may select a location within the XR environment and request that the processing system place the avatar in that location so that other users may interact with the avatar.
- the first user may not be capable of actively controlling the utterances and behaviors of the avatar in real time (i.e., as the avatar is interacting with the other users).
- the processing system may place the avatar of the first user within the extended reality environment.
- the processing system may place the avatar of the first user in a location within the XR environment that is chosen by the first user.
- the first user may not specify a location for the avatar placement, or the requested location may be unavailable (e.g., placement of the avatar in the requested location may obstruct other users' access to other interactive virtual elements within the XR environment, or placement of the avatar in the requested location may render the avatar difficult for other users to see).
- the processing system may automatically select a location for the placement of the avatar.
- the processing system may place the avatar as close as possible to a location requested by the first user (e.g., while minimizing any obstruction of other virtual elements by the avatar).
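The automatic placement logic described above — choosing a location as close as possible to the one requested while avoiding obstruction of other virtual elements — might be sketched as follows. The 2D coordinates and the clearance threshold are illustrative assumptions:

```python
import math

def choose_placement(requested, occupied, candidates, min_clearance=1.0):
    """Choose a placement for the avatar as close as possible to the
    requested location while keeping a minimum clearance from other
    virtual elements (clearance value is an illustrative assumption)."""
    def unobstructed(pos):
        return all(math.dist(pos, other) >= min_clearance for other in occupied)

    viable = [c for c in candidates if unobstructed(c)]
    if not viable:
        return None  # no acceptable location; the request may be declined
    return min(viable, key=lambda c: math.dist(c, requested))
```

If another virtual element sits at the requested spot, the nearest candidate that still keeps clearance is chosen instead.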
- a plurality of avatar personas may be associated with the first user, where each avatar persona of the plurality of avatar personas may present an interaction that is representative of a different behavior of the first user.
- a first avatar persona may present a casual social interaction that is suitable for greeting close friends (e.g., high-fiving, using slang language, etc.)
- a second avatar persona may present a more formal social interaction that is more suitable for greeting professional acquaintances or individuals with whom the first user is less socially familiar (e.g., shaking hands, using more formal language, etc.).
- the request from the first user may identify which avatar persona of the plurality of avatar personas the first user wishes to have placed in the XR environment (e.g., as a default avatar persona).
- the avatar persona that is presented to other users to represent the first user may be adapted or changed over time based on changing circumstances and conditions within the XR environment.
- the processing system may select which avatar persona of the plurality of avatar personas to place for the first user (e.g., as a default avatar persona), based on the current conditions within the XR environment at the placement location.
- the conditions may include one or more of: the physical location of the placement location (e.g., coordinates within a coordinate system), the orientation of the placement location, the environmental conditions (e.g., sunny, rainy, noisy, etc.) present at the placement location, the time of day within the XR environment (e.g., morning hours), and/or the social conditions (e.g., behaviors or demeanors of other users) present at the placement location.
- At least some of these conditions may be detected through the use of sensors (e.g., cameras, microphones, and/or the like) which may be used to gather data from the placement location and provide the data to the processing system (or another system) for further analysis.
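One way to realize this condition-based selection of a default avatar persona is to tag each persona with the condition types it suits and pick the best overlap with what the sensors detect. A rough sketch, with all persona names and condition tags assumed for illustration:

```python
def select_default_persona(personas, detected_conditions):
    """Select the persona whose suitable-condition tags overlap most with
    the conditions detected at the placement location.
    `personas` maps persona name -> set of condition tags (all illustrative)."""
    return max(personas, key=lambda name: len(personas[name] & detected_conditions))

personas = {
    "casual": {"sunny", "daytime", "park"},
    "formal": {"office", "quiet", "evening"},
}
# e.g., a sunny park location favors the casual persona as the default
default = select_default_persona(personas, {"sunny", "park"})
```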
- the processing system may detect that a second user is attempting to interact with the avatar of the first user, e.g., via a user endpoint device of the second user.
- the first user may not be logged into or actively present and interacting in the XR environment.
- the avatar of the first user may conduct an interaction with the second user in an asynchronous manner. That is, although the interaction may appear, from the second user's perspective, to be happening in real time (e.g., as if happening with a first user who is logged into or actively present within the XR environment), the utterances and/or behaviors of the first user's avatar may be predefined to at least some extent.
- the processing system may detect the second user attempting to interact with the avatar of the first user when some signal is received from the second user. For instance, the second user, e.g., via the user endpoint device of the second user, may click on or position a pointer over the avatar of the first user, may speak an utterance that indicates that an interaction with the avatar of the first user is desired (e.g., “Hello, Eric” or “Excuse me, sir”), or may move in a direction toward the avatar of the first user while the avatar of the first user is within the direction of the second user's gaze (e.g., as may be monitored by a head mounted display or other external sensors).
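The movement-plus-gaze heuristic for detecting an interaction attempt might be sketched geometrically, assuming 2D positions and direction vectors and an illustrative angular threshold:

```python
import math

def is_interaction_attempt(user_pos, move_dir, gaze_dir, avatar_pos,
                           max_angle_deg=20.0):
    """Heuristic: the second user is attempting an interaction if they are
    moving toward the avatar while the avatar lies within their gaze
    direction (2D vectors; the 20-degree threshold is an assumption)."""
    to_avatar = (avatar_pos[0] - user_pos[0], avatar_pos[1] - user_pos[1])

    def angle_deg(a, b):
        dot = a[0] * b[0] + a[1] * b[1]
        norms = math.hypot(*a) * math.hypot(*b)
        # clamp to guard against floating point drift outside [-1, 1]
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

    return (angle_deg(move_dir, to_avatar) <= max_angle_deg
            and angle_deg(gaze_dir, to_avatar) <= max_angle_deg)
```

In practice the gaze direction might come from a head mounted display and the movement direction from position deltas, as the text suggests.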
- the processing system may detect the conditions surrounding the avatar of the first user.
- the conditions may include one or more of: the physical location of the placement location (e.g., coordinates within a coordinate system), the orientation of the placement location, the environmental conditions (e.g., sunny, rainy, noisy, etc.) present at the placement location, the time of day, and/or the social conditions (e.g., behaviors or demeanors of other users) present at the placement location. At least some of these conditions may be detected through the use of sensors (e.g., cameras, microphones, and/or the like).
- the processing system may identify, based on the conditions determined in step 212 , a set of candidate avatar personas for the avatar of the first user (e.g., to replace the default avatar persona placed in step 208 ).
- the conditions may serve as a filter for determining which avatar personas of the plurality of avatar personas may be suitable or unsuitable for interacting with the second user at the current time.
- One or more machine learning and/or learning from demonstration techniques may be used to determine which types of avatar personas (or more specifically, which types of action, utterances, and/or behaviors) may be suitable or unsuitable given a specific set of conditions.
- Other examples of the conditions may include the social company, i.e., the combination of users and avatars accompanying the first user's avatar and the second user in the XR experience.
- the second user may be accompanied by her children, while the first user's avatar may be accompanied by no other avatars.
- the second user may be alone, but the first user's avatar may be accompanied by other avatars or users, which may trigger a more formal social environment.
- conditions may change as part of the XR experience itself, where both the first user's avatar and the second user are participating in the “introduction” stage of a racing XR experience. However, after a period of conversational time, the XR experience may transition to the “competition” stage, such that the avatar of the first user changes appearance or social mannerisms or detects similar changes from the second user.
- if the conditions are noisy, it may not be appropriate to include, in the set of candidate avatar personas, an avatar persona that whispers or speaks in a very quiet voice.
- the second user appears to be upset (e.g., if audio of the second user appears to show the second user crying, or if the avatar of the second user is frowning), then it may not be appropriate to include an avatar persona in the set of candidate avatar personas that is loud and boisterous.
- the set of candidate avatar personas may only include avatar personas that are dressed in business attire (e.g., suits or business casual attire, as opposed to jeans and t-shirts) and/or that speak and behave in a more formal demeanor.
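The condition-based filtering examples above (noisy conditions excluding a whispering persona, an upset second user excluding a boisterous one) can be sketched as simple rules over persona traits. The trait and condition labels are assumptions for illustration:

```python
def filter_by_conditions(candidates, conditions):
    """Drop candidate personas whose traits conflict with the detected
    conditions; the two rules below mirror the examples in the text
    (trait and condition labels are illustrative)."""
    kept = []
    for persona in candidates:
        traits = persona["traits"]
        if "noisy" in conditions and "whispers" in traits:
            continue  # a whispering persona is unsuitable in a noisy scene
        if "second_user_upset" in conditions and "boisterous" in traits:
            continue  # a loud, boisterous persona is unsuitable here
        kept.append(persona)
    return kept

candidates = [
    {"name": "soft_spoken", "traits": {"whispers"}},
    {"name": "life_of_party", "traits": {"boisterous"}},
    {"name": "businesslike", "traits": {"formal"}},
]
suitable = filter_by_conditions(candidates, {"noisy", "second_user_upset"})
```

A learned model, as the text notes, could replace these hand-written rules while keeping the same filter interface.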
- the processing system may filter the set of candidate avatar personas based on a preference of the second user, to generate a filtered set of candidate avatar personas for the first user.
- a profile for the second user may include the second user's preferences for interacting with other users in the XR environment.
- the second user may be a child whose parents have configured parental control settings to control the child's interactions within the XR environment.
- the second user's preferences may indicate that the second user does not wish to engage in interactions that include swearing or violence.
- any avatar personas in the set of candidate avatar personas which may speak or behave in a manner that is violent or that includes swearing may be filtered (i.e., removed) from the set of candidate avatar personas.
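The preference-based filtering of this step might look like the following sketch, where traits blocked by the second user's profile (e.g., parental control settings) remove personas from the candidate set. All labels are illustrative:

```python
def filter_by_preferences(candidates, blocked_traits):
    """Remove candidate personas whose traits intersect the second user's
    blocked content, e.g., traits blocked by parental control settings
    (trait names are illustrative assumptions)."""
    return [p for p in candidates if not (p["traits"] & blocked_traits)]

candidates = [
    {"name": "brawler", "traits": {"violence"}},
    {"name": "sailor", "traits": {"swearing"}},
    {"name": "friendly", "traits": {"polite"}},
]
filtered = filter_by_preferences(candidates, {"violence", "swearing"})
```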
- the second user's preferences may indicate that the second user is an avid hockey fan.
- an avatar persona in the set of candidate avatar personas that knows hockey statistics and/or is wearing the jersey of a professional hockey team (or is changed dynamically in the XR environment, e.g., showing the avatar taking the action of flipping over a shirt revealing a hockey team logo or taking an action of putting on an outer jacket with the hockey team logo and so on) may be appropriate to keep for consideration in the set of candidate avatar personas, and may even be prioritized or ranked highly relative to other avatar personas in the set of candidate avatar personas.
- the processing system may select a first avatar persona from among the (optionally filtered) set of candidate avatar personas.
- the first avatar persona may be the avatar persona in the set of candidate avatar personas that the processing system determines is most appropriate for the conditions surrounding the avatar of the first user.
- the processing system may rank the set of candidate avatar personas in an order from most appropriate to least appropriate for the conditions. The ranking may take into consideration any metadata associated with the plurality of avatar personas, where the processing system may attempt to match the metadata to metadata or keywords describing the set of conditions. For instance, as discussed above, if the conditions are noisy, then the processing system may rank an avatar persona whose metadata indicates that the avatar persona speaks in a whisper relatively low.
- the processing system may rank an avatar persona whose metadata indicates that the avatar persona is wearing a hockey jersey or can discuss hockey relatively high. In one example, the processing system may select the avatar persona with the highest ranking. In another example, the processing system may detect a pattern of behaviors from the second user that will trigger selection of an avatar persona that is not related to the most relevant interaction. For instance, in a gaming scenario or a malicious hacking scenario, the second user may repeat an interaction with the first user's avatar to discover a weakness in the avatar indexing system or in an avatar index that is valid but is not intended for the second user's current conditions.
- the system may default to a generic or incorrect avatar response to avoid or temporarily suppress anomalous interactions of the second user (much like current password entry systems may block a user after a number of consecutive wrong entries).
- Other behaviors may also be enabled by the system, but the selection stage (e.g., step 218 ) may contain historical conditions from the second user to assist in determining appropriate actions in this situation.
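The ranking and anomaly-suppression behavior described for this selection step might be sketched as below: personas are ordered by metadata overlap with keywords describing the conditions, and a repeated-probe counter triggers a fallback to a generic persona, loosely analogous to a password-lockout policy. The threshold, metadata keywords, and fallback persona are assumptions:

```python
def rank_personas(candidates, condition_keywords):
    """Order candidate personas by overlap between their metadata and
    keywords describing the current conditions (best match first)."""
    return sorted(candidates,
                  key=lambda p: len(p["metadata"] & condition_keywords),
                  reverse=True)

def select_persona(candidates, condition_keywords, repeat_count, max_repeats=3):
    """Select the top-ranked persona, but fall back to a generic persona
    when the second user has repeated the same interaction suspiciously
    often (threshold and fallback are illustrative assumptions)."""
    if repeat_count >= max_repeats:
        return {"name": "generic", "metadata": set()}
    return rank_personas(candidates, condition_keywords)[0]

candidates = [
    {"name": "hockey_fan", "metadata": {"hockey", "sports"}},
    {"name": "soft_spoken", "metadata": {"quiet"}},
]
```

Here a second user who mentions hockey would normally get the hockey-savvy persona, but repeated probing of the avatar would surface only the generic one.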
- the processing system may render the avatar of the first user with the first avatar persona in the extended reality environment. That is, the processing system may insert the avatar of the first user into the XR environment so that the second user may interact with the avatar of the first user in a manner that is consistent with the first avatar persona.
- the avatar of the first user may be rendered in such a manner that the avatar of the first user may replay a set of predefined actions (e.g., high fiving, hand shaking, dancing, etc.) and/or predefined utterances.
- the predefined actions and/or utterances may not necessarily be played in any sort of defined or continuous order.
- the avatar of the first user may have a set of predefined actions that the avatar of the first user can perform and a set of predefined utterances that the avatar of the first user can speak. Any one of these predefined actions or predefined utterances may be selected by the processing system for rendering in response to some sort of trigger.
- the trigger may be an action or utterance of the second user. For instance, if the second user holds out a hand for a high five, then the processing system may control the avatar of the first user to return the high five. If the second user makes a statement about a particular hockey team, then the processing system may control the avatar of the first user to respond with another statement about the particular hockey team or about hockey in general.
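The trigger-driven replay of predefined actions and utterances can be sketched as a lookup from detected triggers to responses; the trigger and response names are illustrative assumptions, not part of the disclosure:

```python
# Illustrative mapping from detected triggers (actions or utterance topics
# of the second user) to predefined responses of the avatar.
TRIGGER_RESPONSES = {
    "high_five_offered": "return_high_five",
    "hockey_statement": "reply_about_hockey",
}

def respond_to_trigger(trigger):
    """Select a predefined action or utterance for the detected trigger,
    defaulting to an idle behavior when nothing matches."""
    return TRIGGER_RESPONSES.get(trigger, "remain_idle")
```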
- the method 200 may thus return to step 210 and may continue to adapt the avatar of the first user (e.g., adapt the persona of the avatar of the first user) in response to the actions and/or utterances of the second user. This interaction may continue until the second user gives some indication that the interaction is over (e.g., the second user may walk away, or say “goodbye,” or “I have to go now.”). Steps of the method 200 may also be repeated for subsequent users who may wish to interact with the avatar of the first user, and different avatar personas for the avatar of the first user may be selected for these subsequent users based on the conditions at the time of the interactions and/or the preferences of the subsequent users, as discussed above.
- examples of the disclosure allow a first user to interact with a second user in an XR environment in an asynchronous manner. That is, the first user may be able to “interact” with the second user even when the first user is not online or actively engaged in the XR environment.
- This is made possible by the use of multiple different avatar personas for the avatar of the first user, where each avatar persona of the multiple different avatar personas may be programmed to perform a set of predefined actions and/or speak a set of predefined utterances that provide an interaction that is representative of a different behavior of the first user.
- a processing system may select an appropriate avatar persona for the first user, from among the multiple different avatar personas, to present to the second user.
- This provides for a more natural and more immersive experience for users, even when some users may be offline.
- the second user may also be offline or not actively engaged in the XR environment in this case.
- two avatars having respective sets of predefined actions and/or utterances may interact with each other in an asynchronous manner.
- lower ranked avatar personas of the avatar of the first user may be subsequently deployed if the second user returns to again interact with the avatar of the first user at some later time. Selecting a different avatar persona for the avatar of the first user may provide variability in the interactions, which may be more engaging for the second user.
- the ability to provide adaptive asynchronous interactions in XR environments may have a number of applications in the XR field. For instance, in one example, a user may take an avatar “selfie” and leave the avatar at a particular location in an XR environment. In a heavily socialized environment (e.g., where memes and sharing may be encouraged), these types of avatars may have predefined patterns of actions that can be replayed or dismissed for the purposes of entertainment or adaptation. The predefined patterns of actions may embody the personality of the user who the avatar represents, thereby allowing the user to personalize the avatar.
- a first user might also identify an avatar of a second user, where the avatar of the second user is performing some sort of action, and may “borrow” or request that the action be applied to the first user's avatar.
- a user may create an avatar to connect with a particular action or location.
- the user may use local objects in a location to capture actions for later replay.
- the user may create an avatar of himself performing a trick on a skateboard, and leave the avatar at a skate park in the XR environment.
- a user's avatar may be created with superhero powers, and the user may point to different locations or objects in the XR environment to define jump points for the avatar.
- a repeatable game scenario may be created for educational or entertainment purposes.
- different avatars could be created to act as different game characters, to perform different game actions (e.g., running back and forth, guarding an object), to provide motivational support or hints, and the like.
- a virtual scavenger hunt could be created within the XR environment. In this case, certain asynchronous interactions with avatars may only be unlocked when a user has completed some other list of actions or interactions first.
- an avatar may be created to function as a virtual tour guide or concierge.
- different actions, utterances, and/or behaviors for the avatar may be triggered at different locations.
- a virtual tour guide in an art museum may be programmed with a first set of utterances (e.g., facts, trivia, answers to frequently asked questions, etc.) that is triggered when the virtual tour guide is within x feet of a first painting and a second set of utterances that is triggered when the virtual tour guide is within x feet of a second painting.
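The location-triggered utterance sets of the tour-guide example might be sketched as a proximity check against known exhibit locations; the coordinates and radius are illustrative assumptions:

```python
import math

def utterances_near(guide_pos, exhibits, radius):
    """Return the utterance set for whichever exhibit the virtual tour
    guide is within `radius` of, or an empty set if none is close enough.
    `exhibits` maps name -> (position, utterance set); values illustrative."""
    for pos, utterances in exhibits.values():
        if math.dist(guide_pos, pos) <= radius:
            return utterances
    return set()

exhibits = {
    "first_painting": ((0.0, 0.0), {"fact_about_first_painting"}),
    "second_painting": ((50.0, 0.0), {"fact_about_second_painting"}),
}
```

As the guide avatar moves through the museum, each proximity match would unlock the utterance set programmed for that painting.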
- a user or business may be able to create a semi-knowledgeable presence with some limited interaction patterns that can be placed at specific locations to assist, direct, or educate other users at those locations.
- learning from demonstration techniques may also be used to help determine suitable or unsuitable behaviors, utterances, and actions for given conditions and contexts.
- these techniques may also be used to learn how a particular user reacts to specific conditions and contexts, which may help to further personalize different avatars for the particular user.
- such techniques could be used to personalize a base or template avatar for the particular user. For instance, a template avatar for “ninja” could be personalized to incorporate physical features, mannerisms, and/or speech patterns of the particular user.
- a user who has placed an avatar for asynchronous interactions in an XR environment may receive notifications when other users interact with the avatar. For instance, the user may receive a summary or replay of the interactions. These notifications may help the user to make decisions which may affect the avatar, such as adapting certain patterns of behavior to new users or contexts. For instance, when interacting asynchronously with specific other users, the user may want the avatar to hug those other users after offering a high five.
- Avatars may also be constructed from historical synchronous interactions involving a user. Such avatars may emulate the user in a way that allows family or friends to interact with the user when the user may no longer be present or may only be capable of interacting in person in limited ways (e.g., as in the case of a user who may have neurodegenerative disease or impairment due to an accident).
- one or more steps of the method 200 may include a storing, displaying and/or outputting step as required for a particular application.
- any data, records, fields, and/or intermediate results discussed in the method can be stored, displayed and/or outputted to another device as required for a particular application.
- operations, steps, or blocks in FIG. 2 that recite a determining operation or involve a decision do not necessarily require that both branches of the determining operation be practiced. In other words, one of the branches of the determining operation can be deemed as an optional step.
- FIG. 3 depicts a high-level block diagram of a computing device specifically programmed to perform the functions described herein.
- any one or more components or devices illustrated in FIG. 1 or described in connection with the method 200 may be implemented as the system 300 .
- a server (such as might be used to perform the method 200 ) could be implemented as illustrated in FIG. 3 .
- the system 300 comprises a hardware processor element 302 , a memory 304 , a module 305 for providing adaptive asynchronous interactions in extended reality environments, and various input/output (I/O) devices 306 .
- the hardware processor 302 may comprise, for example, a microprocessor, a central processing unit (CPU), or the like.
- the memory 304 may comprise, for example, random access memory (RAM), read only memory (ROM), a disk drive, an optical drive, a magnetic drive, and/or a Universal Serial Bus (USB) drive.
- the module 305 for providing adaptive asynchronous interactions in extended reality environments may include circuitry and/or logic for performing special purpose functions relating to the operation of a home gateway or XR server.
- the input/output devices 306 may include, for example, a camera, a video camera, storage devices (including but not limited to, a tape drive, a floppy drive, a hard disk drive or a compact disk drive), a receiver, a transmitter, a speaker, a display, a speech synthesizer, an output port, and a user input device (such as a keyboard, a keypad, a mouse, and the like), or a sensor.
- the computer may employ a plurality of processor elements.
- the computer of this Figure is intended to represent each of those multiple computers.
- one or more hardware processors can be utilized in supporting a virtualized or shared computing environment.
- the virtualized computing environment may support one or more virtual machines representing computers, servers, or other computing devices.
- hardware components such as hardware processors and computer-readable storage devices may be virtualized or logically represented.
- the present disclosure can be implemented in software and/or in a combination of software and hardware, e.g., using application specific integrated circuits (ASIC), a programmable logic array (PLA), including a field-programmable gate array (FPGA), or a state machine deployed on a hardware device, a computer or any other hardware equivalents, e.g., computer readable instructions pertaining to the method(s) discussed above can be used to configure a hardware processor to perform the steps, functions and/or operations of the above disclosed method(s).
- instructions and data for the present module or process 305 for providing adaptive asynchronous interactions in extended reality environments can be loaded into memory 304 and executed by hardware processor element 302 to implement the steps, functions or operations as discussed above in connection with the example method 200 .
- a hardware processor executes instructions to perform “operations,” this could include the hardware processor performing the operations directly and/or facilitating, directing, or cooperating with another hardware device or component (e.g., a co-processor and the like) to perform the operations.
- the processor executing the computer readable or software instructions relating to the above described method(s) can be perceived as a programmed processor or a specialized processor.
- the present module 305 for providing adaptive asynchronous interactions in extended reality environments (including associated data structures) of the present disclosure can be stored on a tangible or physical (broadly non-transitory) computer-readable storage device or medium, e.g., volatile memory, non-volatile memory, ROM memory, RAM memory, magnetic or optical drive, device or diskette and the like.
- the computer-readable storage device may comprise any physical devices that provide the ability to store information such as data and/or instructions to be accessed by a processor or a computing device such as a computer or an application server.
Abstract
In one example, a method performed by a processing system including at least one processor includes rendering an extended reality environment, receiving a request from a user endpoint device of a first user in the extended reality environment to place an avatar of the first user in the extended reality environment, placing the avatar of the first user within the extended reality environment, detecting that a second user is attempting to interact with the avatar of the first user, detecting conditions surrounding the avatar of the first user, identifying, based on the conditions, a set of candidate avatar personas for the first user, selecting a first avatar persona from among the set of candidate avatar personas, and rendering the avatar of the first user with the first avatar persona in the extended reality environment.
Description
- This application is a continuation of U.S. patent application Ser. No. 17/221,732, filed on Apr. 2, 2021, now U.S. Pat. No. 11,282,278, which is herein incorporated by reference in its entirety.
- The present disclosure relates generally to extended reality (XR) systems, and relates more particularly to devices, non-transitory computer-readable media, and methods for generating and displaying avatars in XR environments that allow for adaptive asynchronous interactions among users.
- Extended reality (XR) is an umbrella term that has been used to refer to various different forms of immersive technologies, including virtual reality (VR), augmented reality (AR), mixed reality (MR), cinematic reality (CR), and diminished reality (DR). Generally speaking, XR technologies allow virtual world (e.g., digital) objects to be brought into “real” (e.g., non-virtual) world environments and real world objects to be brought into virtual environments, e.g., via overlays or other mechanisms. XR technologies may have applications in fields including architecture, sports training, medicine, real estate, gaming, television and film, engineering, travel, and others. As such, immersive experiences that rely on XR technologies are growing in popularity.
- In one example, the present disclosure describes a device, computer-readable medium, and method for generating and displaying avatars in XR environments that allow for adaptive asynchronous interactions among users. For instance, in one example, a method performed by a processing system including at least one processor includes rendering an extended reality environment, receiving a request from a user endpoint device of a first user in the extended reality environment to place an avatar of the first user in the extended reality environment, placing the avatar of the first user within the extended reality environment, detecting that a second user is attempting to interact with the avatar of the first user, detecting conditions surrounding the avatar of the first user, identifying, based on the conditions, a set of candidate avatar personas for the first user, selecting a first avatar persona from among the set of candidate avatar personas, and rendering the avatar of the first user with the first avatar persona in the extended reality environment.
- In another example, a non-transitory computer-readable medium stores instructions which, when executed by a processing system, including at least one processor, cause the processing system to perform operations. The operations include rendering an extended reality environment, receiving a request from a user endpoint device of a first user in the extended reality environment to place an avatar of the first user in the extended reality environment, placing the avatar of the first user within the extended reality environment, detecting that a second user is attempting to interact with the avatar of the first user, detecting conditions surrounding the avatar of the first user, identifying, based on the conditions, a set of candidate avatar personas for the first user, selecting a first avatar persona from among the set of candidate avatar personas, and rendering the avatar of the first user with the first avatar persona in the extended reality environment.
- In another example, a device includes a processing system including at least one processor and a computer-readable medium storing instructions which, when executed by the processing system, cause the processing system to perform operations. The operations include rendering an extended reality environment, receiving a request from a user endpoint device of a first user in the extended reality environment to place an avatar of the first user in the extended reality environment, placing the avatar of the first user within the extended reality environment, detecting that a second user is attempting to interact with the avatar of the first user, detecting conditions surrounding the avatar of the first user, identifying, based on the conditions, a set of candidate avatar personas for the first user, selecting a first avatar persona from among the set of candidate avatar personas, and rendering the avatar of the first user with the first avatar persona in the extended reality environment.
- The teachings of the present disclosure can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
- FIG. 1 illustrates an example system in which examples of the present disclosure may operate;
- FIG. 2 illustrates a flowchart of an example method for providing adaptive asynchronous interactions in extended reality environments in accordance with the present disclosure; and
- FIG. 3 depicts a high-level block diagram of a computing device specifically programmed to perform the functions described herein.
- To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.
- In one example, the present disclosure enhances social engagement among users in extended reality (XR) environments by generating and displaying avatars that allow users to interact in an adaptive asynchronous manner with other users. As discussed above, XR technologies allow virtual world (e.g., digital) objects to be brought into “real” (e.g., non-virtual) world environments and real world objects to be brought into virtual environments, e.g., via overlays or other mechanisms. One specific type of virtual world object is an avatar, or a digital representation of a person. The avatar may be used by the user to explore the XR environment, to interact with objects in the XR environment, and to interact with avatars of other users in the XR environment. Thus, the avatar functions as a virtual persona for the user within the XR environment.
- As more and more interactions move to virtual environments including XR environments, there is a growing user demand for the ability to connect with other users via virtual personas that can be accessed at any time, e.g., including when the users who are represented by the virtual personas may not be “online” (e.g., actively logged into or engaged with the application that is rendering the virtual environment). Presently, options for asynchronous user interactions (i.e., interactions in which the two or more interacting users are online at different, potentially non-overlapping times) are limited. For instance, these options include static messages (e.g., email, voice messages, and the like) and more dynamic content (e.g., videos that may reenact an interaction) which have no anchors to particular locations within the XR environment.
- Examples of the present disclosure allow users of XR environments to create a plurality of different avatar personas, where each avatar persona may provide an interaction that is representative of a different behavior of the user. The XR system (e.g., an application server rendering the XR environment) may then select one of a user's many avatar personas for presentation based on different triggers within the XR environment, where the triggers may be location-based or context-based. For instance, if the user's avatar is participating in a casual meeting within the XR environment, then it may be appropriate for the user's avatar to high-five another user's avatar as a form of greeting. However, if the meeting is a more formal meeting with American users (e.g., a business meeting), then it may be more appropriate for the user's avatar to shake hands with other users' avatars as a form of greeting. Similarly, if the meeting is a more formal meeting with Japanese users, then it may be more appropriate for the user's avatar to bow to the other users' avatars as a form of greeting. Thus, locational and situational context may be considered when determining which avatar persona of a plurality of avatar personas created for a given user should be presented to other users when the given user is not actively engaged in the extended reality environment. These and other aspects of the present disclosure are described in greater detail below in connection with the examples of
FIGS. 1-3. - Within the context of the present disclosure, an interaction that is "asynchronous" is understood to refer to an interaction that is independent of time. For instance, in an asynchronous interaction occurring in an XR environment, at least one of the parties (e.g., a first party) participating in the interaction may not be "online" (e.g., logged into or otherwise actively engaged with the XR environment) at the time that the interaction is taking place. However, the interaction may still be able to proceed as a natural exchange that appears, to one or more other parties (e.g., a second party), to be taking place synchronously, in real time.
- Moreover, within the context of the present disclosure, an "avatar" is understood to refer to a digital representation of a user, where the digital representation may have certain visual features (e.g., an appearance), audible features (e.g., a voice), and the like which may be selected by the user. For instance, the user may configure the avatar to look and/or sound like the user, or the user may configure the avatar to look and/or sound different from the user. In either case, the avatar may function as a representation of the user in the XR environment. An avatar "persona" is understood to refer to a behavior of the user's avatar, where the behavior may vary depending on location, company, and/or other context of the avatar. According to examples of the present disclosure, multiple different personas may be created for the same avatar. Thus, the avatar may act or behave differently in different situations (while the appearance of the avatar may or may not change depending upon the situation).
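- The distinction above between an avatar (one digital representation) and its multiple personas (context-dependent behaviors) can be illustrated with a small data model. This is a hypothetical sketch, not the implementation described in the disclosure; all class names, field names, and context labels are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AvatarPersona:
    # A persona is a behavior of the avatar, keyed to a context
    # (e.g., "default", "business_jp").
    name: str
    greeting_action: str
    formality: str  # "casual" or "formal"

@dataclass
class Avatar:
    # The avatar is the user's digital representation; its appearance
    # may stay the same while the active persona changes with context.
    user_id: str
    appearance: str
    personas: dict = field(default_factory=dict)

    def add_persona(self, persona: AvatarPersona) -> None:
        self.personas[persona.name] = persona

    def persona_for(self, context: str) -> AvatarPersona:
        # Fall back to a default persona when no context-specific one exists.
        return self.personas.get(context, self.personas["default"])

avatar = Avatar(user_id="user-1", appearance="default-look")
avatar.add_persona(AvatarPersona("default", "wave", "casual"))
avatar.add_persona(AvatarPersona("business_jp", "bow", "formal"))

print(avatar.persona_for("business_jp").greeting_action)  # bow
print(avatar.persona_for("picnic").greeting_action)       # wave
```

In this sketch the appearance is shared across personas while only the behavior varies, mirroring the parenthetical note above.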
- To further aid in understanding the present disclosure,
FIG. 1 illustrates an example system 100 in which examples of the present disclosure may operate. The system 100 may include any one or more types of communication networks, such as a traditional circuit switched network (e.g., a public switched telephone network (PSTN)) or a packet network such as an Internet Protocol (IP) network (e.g., an IP Multimedia Subsystem (IMS) network), an asynchronous transfer mode (ATM) network, a wireless network, a cellular network (e.g., 2G, 3G, and the like), a long term evolution (LTE) network, a 5G network, and the like related to the current disclosure. It should be noted that an IP network is broadly defined as a network that uses Internet Protocol to exchange data packets. Additional example IP networks include Voice over IP (VoIP) networks, Service over IP (SoIP) networks, and the like. - In one example, the
system 100 may comprise a network 102, e.g., a telecommunication service provider network, a core network, or an enterprise network comprising infrastructure for computing and communications services of a business, an educational institution, a governmental service, or other enterprises. The network 102 may be in communication with one or more access networks 120 and 122. In one example, network 102 may combine core network components of a cellular network with components of a triple play service network, where triple-play services include telephone services, Internet or data services, and television services to subscribers. For example, network 102 may functionally comprise a fixed mobile convergence (FMC) network, e.g., an IP Multimedia Subsystem (IMS) network. In addition, network 102 may functionally comprise a telephony network, e.g., an Internet Protocol/Multi-Protocol Label Switching (IP/MPLS) backbone network utilizing Session Initiation Protocol (SIP) for circuit-switched and Voice over Internet Protocol (VoIP) telephony services. Network 102 may further comprise a broadcast television network, e.g., a traditional cable provider network or an Internet Protocol Television (IPTV) network, as well as an Internet Service Provider (ISP) network. In one example, network 102 may include a plurality of television (TV) servers (e.g., a broadcast server, a cable head-end), a plurality of content servers, an advertising server (AS), an interactive TV/video on demand (VoD) server, and so forth. - In one example, the
access networks 120 and 122 may comprise different types of access networks. For example, the operator of network 102 may provide a cable television service, an IPTV service, or any other types of telecommunication service to subscribers via access networks 120 and 122. In one example, the network 102 may be operated by a telecommunication network service provider. The network 102 and the access networks 120 and 122 may be operated by different service providers, the same service provider, or a combination thereof.
network 102 may include an application server (AS) 104, which may comprise a computing system or server, such ascomputing system 300 depicted inFIG. 3 , and may be configured to provide one or more operations or functions in connection with examples of the present disclosure for anchor caching for extended reality applications. Thenetwork 102 may also include a database (DB) 106 that is communicatively coupled to theAS 104. - It should be noted that as used herein, the terms “configure,” and “reconfigure” may refer to programming or loading a processing system with computer-readable/computer-executable instructions, code, and/or programs, e.g., in a distributed or non-distributed memory, which when executed by a processor, or processors, of the processing system within a same device or within distributed devices, may cause the processing system to perform various functions. Such terms may also encompass providing variables, data values, tables, objects, or other data structures or the like which may cause a processing system executing computer-readable instructions, code, and/or programs to function differently depending upon the values of the variables or other data structures that are provided. As referred to herein a “processing system” may comprise a computing device including one or more processors, or cores (e.g., as illustrated in
FIG. 3 and discussed below) or multiple computing devices collectively configured to perform various steps, functions, and/or operations in accordance with the present disclosure. Thus, although only a single application server (AS) 104 and a single database (DB) 106 are illustrated, it should be noted that any number of servers and databases may be deployed, which may operate in a distributed and/or coordinated manner as a processing system to perform operations in connection with the present disclosure. - In one example, AS 104 may comprise a centralized network-based server for generating extended reality media. For instance, the
AS 104 may host an application that allows users to create multiple different avatar personas of themselves, where the multiple different avatar personas may be used to interact with other users within an XR environment when the users represented by the avatars are offline (e.g., not logged into or otherwise actively engaged in the XR environment). The multiple different avatar personas may be programmed to represent instances of different behaviors of the associated user. For instance, in one example, when a first user is offline and a second user wishes to interact with the first user within the XR environment, the AS 104 may select an avatar persona from among the first user's multiple avatar personas to conduct an asynchronous interaction with the second user. The AS 104 may select the avatar persona from among the multiple avatar personas based on the conditions and context of the desired interaction, such as the relationship of the second user to the first user (e.g., family, friends, professional acquaintances, etc.), the conditions of the location within the XR environment where the interaction will take place (e.g., crowded, noisy, etc.), and the like. - In one example, AS 104 may comprise a physical storage device (e.g., a database server) to store sets of different avatar personas for the users of the XR environment, as discussed in greater detail below. For instance, the
AS 104 may store an index, where the index maps each user in the XR environment to a plurality of different avatar personas for that user. In one example, the index may further map each avatar persona to a set of predefined actions and/or utterances, where each action or utterance in the set of predefined actions and/or utterances may be triggered by different locations, contexts, and/or events. - In a further example, the
AS 104 may also store user profiles which may specify user preferences that can be used to filter a set of avatar personas for an asynchronous interaction. For instance, in one example, a user profile may specify, for each user: demographic information (e.g., age, gender, marital status, education, etc.), device information (e.g., whether the user uses a head mounted display, a mobile phone, a tablet computer, or the like to render and display XR media, the types of connections used by the device to access XR media such as cellular or WiFi, etc.), interests (e.g., favorite hobbies, sports teams, music, movies, etc.), usage history with respect to XR media (e.g., types of digital objects that the user has interacted with and/or ignored in the past), and/or preferences (e.g., does not like swearing or violence). - A user profile may also specify restrictions on the types of interactions that may be rendered for the user. For instance, a parent may configure a child's profile so that interactions which may be considered too violent or too scary (or which may include too much strong language) are prohibited from being rendered. In one example, profiles may be stored on an opt-in basis, i.e., a user may elect to not have a profile. In a further example, the user profiles may be stored in encrypted form to protect any user information that may be deemed private.
- In one example, the
DB 106 may store the avatar personas, the index, and/or the user profiles, and theAS 104 may retrieve the avatar personas, the index, and/or user profiles from theDB 106 when needed. For ease of illustration, various additional elements ofnetwork 102 are omitted fromFIG. 1 . - In one example,
access network 122 may include an edge server 108, which may comprise a computing system or server, such as computing system 300 depicted in FIG. 3, and may be configured to provide one or more operations or functions for providing adaptive asynchronous interactions in XR environments, as described herein. For instance, an example method 200 for providing adaptive asynchronous interactions in XR environments is illustrated in FIG. 2 and described in greater detail below. - In one example,
application server 104 may comprise a network function virtualization infrastructure (NFVI), e.g., one or more devices or servers that are available as host devices to host virtual machines (VMs), containers, or the like comprising virtual network functions (VNFs). In other words, at least a portion of the network 102 may incorporate software-defined network (SDN) components. Similarly, in one example, access networks 120 and 122 may also include NFVI and/or SDN components. Where the access network 122 comprises radio access networks, the nodes and other components of the access network 122 may be referred to as a mobile edge infrastructure. As just one example, edge server 108 may be instantiated on one or more servers hosting virtualization platforms for managing one or more virtual machines (VMs), containers, microservices, or the like. In other words, in one example, edge server 108 may comprise a VM, a container, or the like. - In one example, the
access network 120 may be in communication with a server 110. Similarly, access network 122 may be in communication with one or more devices, e.g., a user endpoint device 112. Access networks 120 and 122 may transmit and receive communications between server 110, user endpoint device 112, application server (AS) 104, other components of network 102, devices reachable via the Internet in general, and so forth. In one example, user endpoint device 112 may comprise a mobile device, a cellular smart phone, a wearable computing device (e.g., smart glasses, a virtual reality (VR) headset or other types of head mounted display, or the like), a laptop computer, a tablet computer, or the like (broadly an "XR device"). In one example, user endpoint device 112 may comprise a computing system or device, such as computing system 300 depicted in FIG. 3, and may be configured to provide one or more operations or functions in connection with examples of the present disclosure for providing adaptive asynchronous interactions in XR environments. - In one example,
server 110 may comprise a network-based server for generating XR media. In this regard, server 110 may comprise the same or similar components as those of AS 104 and may provide the same or similar functions. Thus, any examples described herein with respect to AS 104 may similarly apply to server 110, and vice versa. In particular, server 110 may be a component of an XR system operated by an entity that is not a telecommunications network operator. For instance, a provider of an XR system may operate server 110 and may also operate edge server 108 in accordance with an arrangement with a telecommunication service provider offering edge computing resources to third-parties. However, in another example, a telecommunication network service provider may operate network 102 and access network 122, and may also provide an XR system via AS 104 and edge server 108. For instance, in such an example, the XR system may comprise an additional service that may be offered to subscribers, e.g., in addition to network access services, telephony services, traditional television services, and so forth. - In an illustrative example, an XR system may be provided via AS 104 and
edge server 108. In one example, a user may engage an application on user endpoint device 112 (e.g., an "XR device") to establish one or more sessions with the XR system, e.g., a connection to edge server 108 (or a connection to edge server 108 and a connection to AS 104). In one example, the access network 122 may comprise a cellular network (e.g., a 4G network and/or an LTE network, or a portion thereof, such as an evolved Universal Terrestrial Radio Access Network (eUTRAN), an evolved packet core (EPC) network, etc., a 5G network, etc.). Thus, the communications between user endpoint device 112 and edge server 108 may involve cellular communication via one or more base stations (e.g., eNodeBs, gNBs, or the like). However, in another example, the communications may alternatively or additionally be via a non-cellular wireless communication modality, such as IEEE 802.11/Wi-Fi, or the like. For instance, access network 122 may comprise a wireless local area network (WLAN) containing at least one wireless access point (AP), e.g., a wireless router. Alternatively, or in addition, user endpoint device 112 may communicate with access network 122, network 102, the Internet in general, etc., via a WLAN that interfaces with access network 122. - In the example of
FIG. 1, user endpoint device 112 may establish a session with edge server 108 for obtaining XR media. For illustrative purposes, the XR media may insert one or more digital objects into a real-time image stream of a real world scene to generate an XR environment. In this regard, an example XR environment 114 is illustrated in FIG. 1. - In one example, the
XR environment 114 may be viewed by a user through the user endpoint device 112, e.g., on a display of a head mounted display or mobile phone, or through a set of smart glasses. As the user moves through the XR environment 114, the user endpoint device 112 (or alternatively the AS 104, edge server 108, or server 110) may detect one or more virtual items and/or avatars with which the user may interact. For instance, in the example of FIG. 1, the user may interact with virtual items 116 1-116 n (hereinafter individually referred to as a "virtual item 116" or collectively referred to as "virtual items 116") and/or avatar 118. Each virtual item 116 and avatar 118 may have a set of predefined actions that the user may take with respect to the virtual item 116 or avatar 118. Furthermore, in the case of the avatar 118, the avatar 118 may also have a set of predefined actions or utterances that the avatar 118 may perform or speak when triggered by the user. With no loss of generality, another example may represent a virtual item 116 as a digital twin (or XR representation) of a real world object, such as a real tree 116 1 being mapped to a virtual tree at location 116 1, where the XR experience can activate or disable itself according to the avatars, or to the index of avatars, in the presence of the virtual item.
- In the case of the
avatar 118, theavatar 118 may represent a specific user who may currently be offline (e.g., not currently logged into or actively engaged with the XR environment). However, theavatar 118 may be programmed with a predefined set of actions and/or utterances (a persona) that allows the user to interact with the specific user represented by the avatar in an asynchronous manner. For instance, theavatar 118 may be triggered to say something to the user and/or to perform some action upon the user attempting an interaction with theavatar 118. For instance, if the user waves to theavatar 118 or says hello, theavatar 118 may wave back or say hello to the user. Theavatar 118 may be further customized to resemble the specific user represented by the avatar and to speak and behave like the specific user. Thus, if the avatar represents a friend of the user, when the user attempts to interact with theavatar 118, theavatar 118 may greet the user by name or hug the user (or the user's avatar). This more personalized interaction may provide for a more immersive experience. - It should also be noted that the
system 100 has been simplified. Thus, it should be noted that thesystem 100 may be implemented in a different form than that which is illustrated inFIG. 1 , or may be expanded by including additional endpoint devices, access networks, network elements, application servers, etc. without altering the scope of the present disclosure. In addition,system 100 may be altered to omit various elements, substitute elements for devices that perform the same or similar functions, combine elements that are illustrated as separate devices, and/or implement network elements as functions that are spread across several devices that operate collectively as the respective network elements. For example, thesystem 100 may include other network elements (not shown) such as border elements, routers, switches, policy servers, security devices, gateways, a content distribution network (CDN) and the like. For example, portions ofnetwork 102,access networks access networks 120 and/or 122 may each comprise a plurality of different access networks that may interface withnetwork 102 independently or in a chained manner. In addition, as described above, the functions ofAS 104 may be similarly provided byserver 110, or may be provided by AS 104 in conjunction withserver 110. For instance, AS 104 andserver 110 may be configured in a load balancing arrangement, or may be configured to provide for backups or redundancies with respect to each other, and so forth. Thus, these and other modifications are all contemplated within the scope of the present disclosure. - To further aid in understanding the present disclosure,
FIG. 2 illustrates a flowchart of a method 200 for providing adaptive asynchronous interactions in extended reality environments in accordance with the present disclosure. In particular, the method 200 provides a way by which a user in an extended reality environment may place an avatar in the extended reality environment in order to interact asynchronously with other users. In one example, the method 200 may be performed by an XR server that is configured to generate digital overlays that may be superimposed over images of a "real world" environment to produce an extended reality environment, such as the AS 104 or server 110 illustrated in FIG. 1. However, in other examples, the method 200 may be performed by another device, such as the processor 302 of the system 300 illustrated in FIG. 3. For the sake of example, the method 200 is described as being performed by a processing system. - The
method 200 begins in step 202. In step 204, the processing system may render an extended reality (XR) environment. As discussed above, the XR environment may be an environment that combines images or elements of the real world with digital or "virtual" elements. At least some of the virtual elements may be interactive, such that a user may interact with the virtual elements to trigger some action or event. A user may view and interact with the XR environment using any type of device or combination of devices that is capable of displaying and exchanging signals with the XR environment, including a mobile device such as a mobile phone or tablet computer or a wearable device such as a head mounted display.
- In
step 206, the processing system may receive a request from a user endpoint device of a first user in the extended reality environment to place an avatar of the first user in the extended reality environment. As discussed above, the avatar may comprise a digital representation of the first user. However, the avatar may not be simply a static representation of the first user; the avatar may be an interactive digital persona that mimics or emulates the personality and behaviors of the first user. When the first user is logged into or actively present and interacting within the XR environment, the first user may control the avatar. Thus, when the first user interacts with other users within the XR environment, the first user may actively control the utterances and behaviors of the avatar in real time (i.e., as the avatar is interacting with the other users). - However, the first user may wish to leave the avatar in the XR environment to interact with other users, even when the first user is not logged into or actively present and interacting within the XR environment. Thus, in this case, the first user may select a location within the XR environment and request that the processing system place the avatar in that location so that other users may interact with the avatar. In this case, the first user may not be capable of actively controlling the utterances and behaviors of the avatar in real time (i.e., as the avatar is interacting with the other users).
- In
step 208, the processing system may place the avatar of the first user within the extended reality environment. As discussed above, in one example, the processing system may place the avatar of the first user in a location within the XR environment that is chosen by the first user. In some examples, however, the first user may not specify a location for the avatar placement, or the requested location may be unavailable (e.g., placement of the avatar in the requested location may obstruct other users' access to other interactive virtual elements within the XR environment, or placement of the avatar in the requested location may render the avatar difficult for other users to see). Thus, in such cases, the processing system may automatically select a location for the placement of the avatar. In one example, the processing system may place the avatar as close as possible to a location requested by the first user (e.g., while minimizing any obstruction of other virtual elements by the avatar). - As discussed above, in one example, a plurality of avatar personas may be associated with the first user, where each avatar persona of the plurality of avatar personas may present an interaction that is representative of a different behavior of the first user. For instance, a first avatar persona may present a casual social interaction that is suitable for greeting close friends (e.g., high-fiving, using slang language, etc.), while a second avatar persona may present a more formal social interaction that is more suitable for greeting professional acquaintances or individuals with whom the first user is less socially familiar (e.g., shaking hands, using more formal language, etc.). In one example, the request from the first user may identify which avatar persona of the plurality of avatar personas the first user wishes to have placed in the XR environment (e.g., as a default avatar persona). 
However, as discussed in further detail below, the avatar persona that is presented to other users to represent the first user may be adapted or changed over time based on changing circumstances and conditions within the XR environment.
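The placement fallback described above (use the requested location unless it would obstruct other virtual elements or hide the avatar, otherwise pick the nearest viable spot) can be sketched as follows. The function name, data shapes, and candidate-spot model are illustrative assumptions, not the disclosed implementation:

```python
import math

def place_avatar(requested, blocked, candidates):
    """Place the avatar at the requested spot if it is viable; otherwise
    return the viable candidate closest to the requested spot.

    `requested` and `candidates` are (x, y) coordinates; `blocked` is the
    set of spots that would obstruct other virtual elements or render the
    avatar hard to see. All names here are hypothetical.
    """
    if requested is not None and requested not in blocked:
        return requested
    viable = [c for c in candidates if c not in blocked]
    if not viable:
        return None  # no acceptable placement exists
    # minimize distance from the user's requested location (or the origin)
    anchor = requested if requested is not None else (0.0, 0.0)
    return min(viable, key=lambda c: math.dist(c, anchor))
```

A real system would derive `blocked` from the scene graph of the XR environment rather than a precomputed set.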
- In another example, the processing system may select which avatar persona of the plurality of avatar personas to place for the first user (e.g., as a default avatar persona), based on the current conditions within the XR environment at the placement location. In one example, the conditions may include one or more of: the physical location of the placement location (e.g., coordinates within a coordinate system), the orientation of the placement location, the environmental conditions (e.g., sunny, rainy, noisy, etc.) present at the placement location, the time of day within the XR environment (e.g., morning hours, afternoon hours, evening hours, night hours, a particular hour or time (e.g., 1:00 pm), etc.), and/or the social conditions (e.g., behaviors or demeanors of other users) present at the placement location. At least some of these conditions may be detected through the use of sensors (e.g., cameras, microphones, and/or the like) which may be used to gather data from the placement location and provide the data to the processing system (or another system) for further analysis.
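One way to model the condition record and the condition-driven choice of a default persona is sketched below. The field names, tag vocabulary, and matching rule are invented for illustration; the disclosure does not prescribe a particular data model:

```python
from dataclasses import dataclass

@dataclass
class Conditions:
    """Conditions detected at a placement location; fields are illustrative."""
    location: tuple        # coordinates within the XR coordinate system
    orientation: float     # facing direction of the placement, in degrees
    environment: str       # e.g. "sunny", "rainy", "noisy"
    time_of_day: str       # e.g. "morning", "afternoon", "evening", "night"
    social: str            # e.g. "formal", "casual", "festive"

def default_persona(conditions, personas):
    """Pick the default persona whose descriptive tags best match the
    detected conditions. `personas` maps a persona name to a set of tags."""
    wanted = {conditions.environment, conditions.time_of_day, conditions.social}
    return max(personas, key=lambda name: len(personas[name] & wanted))
```

In practice the matching could be learned (e.g., via the machine learning techniques mentioned later) rather than a simple tag overlap.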
- In
step 210, the processing system may detect that a second user is attempting to interact with the avatar of the first user, e.g., via a user endpoint device of the second user. At this time, the first user may not be logged into or actively present and interacting in the XR environment. Thus, the avatar of the first user may conduct an interaction with the second user in an asynchronous manner. That is, although the interaction may appear, from the second user's perspective, to be happening in real time (e.g., as if happening with a first user who is logged into or actively present within the XR environment), the utterances and/or behaviors of the first user's avatar may be predefined to at least some extent.
- In one example, the processing system may detect the second user attempting to interact with the avatar of the first user when some signal is received from the second user. For instance, the second user, e.g., via the user endpoint device of the second user, may click on or position a pointer over the avatar of the first user, may speak an utterance that indicates that an interaction with the avatar of the first user is desired (e.g., “Hello, Eric” or “Excuse me, sir”), or may move in a direction toward the avatar of the first user while the avatar of the first user is within the direction of the second user's gaze (e.g., as may be monitored by a head mounted display or other external sensors).
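The interaction-detection signals just listed (pointer on the avatar, a spoken greeting, or movement toward the avatar while it is in the user's gaze) could be combined roughly as follows. The event keys are hypothetical stand-ins for real pointer, speech, and gaze data from the endpoint device:

```python
def wants_interaction(event):
    """Return True when an endpoint-device event signals that the second
    user is trying to interact with the avatar. Event keys are illustrative.
    """
    if event.get("pointer_on_avatar"):        # clicked or hovered the avatar
        return True
    utterance = event.get("utterance", "").lower()
    if any(g in utterance for g in ("hello", "excuse me")):
        return True                           # spoken greeting directed at it
    # moving toward the avatar while the avatar is within the gaze direction
    return bool(event.get("moving_toward_avatar") and event.get("avatar_in_gaze"))
```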
- In
step 212, the processing system may detect the conditions surrounding the avatar of the first user. As discussed above, the conditions may include one or more of: the physical location of the placement location (e.g., coordinates within a coordinate system), the orientation of the placement location, the environmental conditions (e.g., sunny, rainy, noisy, etc.) present at the placement location, the time of day, and/or the social conditions (e.g., behaviors or demeanors of other users) present at the placement location. At least some of these conditions may be detected through the use of sensors (e.g., cameras, microphones, and/or the like). - In
step 214, the processing system may identify, based on the conditions determined in step 212, a set of candidate avatar personas for the avatar of the first user (e.g., to replace the default avatar persona placed in step 208). For instance, the conditions may serve as a filter for determining which avatar personas of the plurality of avatar personas may be suitable or unsuitable for interacting with the second user at the current time. One or more machine learning and/or learning from demonstration techniques may be used to determine which types of avatar personas (or more specifically, which types of actions, utterances, and/or behaviors) may be suitable or unsuitable given a specific set of conditions. Other conditions may include the social company present, i.e., the combination of users and avatars accompanying the first user's avatar and the second user in the XR experience. For example, in one scenario, the second user may be accompanied by her children, while the first user's avatar may be accompanied by no other avatars. In this case, it may be appropriate to present an informal, familiar avatar persona as the first user's avatar persona. In another scenario, the second user may be alone, but the first user's avatar may be accompanied by other avatars or users, which may trigger a more formal social environment. In another scenario, conditions may change as part of the XR experience itself, where both the first user's avatar and the second user are participating in the “introduction” stage of a racing XR experience. However, after a period of conversational time, the XR experience may transition to the “competition” stage, such that the avatar of the first user changes appearance or social mannerisms, or detects similar changes from the second user.
- As an example, if the conditions are noisy, it may not be appropriate to include an avatar persona in the set of candidate avatar personas which whispers or speaks in a very quiet voice.
Similarly, if the second user appears to be upset (e.g., if audio of the second user appears to show the second user crying, or if the avatar of the second user is frowning), then it may not be appropriate to include an avatar persona in the set of candidate avatar personas that is loud and boisterous. As another example, if the second user is determined to be a professional acquaintance of the first user (e.g., based on recognition of the second user's user ID, comparison to a set of contacts for the first user, or the like), then the set of candidate avatar personas may only include avatar personas that are dressed in business attire (e.g., suits or business casual attire, as opposed to jeans and t-shirts) and/or that speak and behave in a more formal demeanor.
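The condition-based candidacy rules in the examples above (exclude whisperers in noisy conditions, exclude boisterous personas near an upset user, keep only business attire for professional contacts) can be sketched as a filter. The tag vocabulary is invented for this sketch; a deployed system might instead learn such rules via machine learning or learning from demonstration:

```python
def candidate_personas(personas, conditions):
    """Filter a user's full persona list down to candidates suitable for
    the detected conditions. Tags and condition names are illustrative.
    """
    candidates = []
    for persona in personas:
        tags = persona["tags"]
        if "noisy" in conditions and "whispers" in tags:
            continue  # a whispering persona could not be heard
        if "other_user_upset" in conditions and "boisterous" in tags:
            continue  # avoid loud personas around an upset user
        if "professional_contact" in conditions and "business_attire" not in tags:
            continue  # professional acquaintances only see formal personas
        candidates.append(persona)
    return candidates
```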
- In optional step 216 (illustrated in phantom), the processing system may filter the set of candidate avatar personas based on a preference of the second user, to generate a filtered set of candidate avatar personas for the first user. For instance, a profile for the second user may include the second user's preferences for interacting with other users in the XR environment. As an example, the second user may be a child whose parents have configured parental control settings to control the child's interactions within the XR environment. Thus, the second user's preferences may indicate that the second user does not wish to engage in interactions that include swearing or violence. As a result, any avatar personas in the set of candidate avatar personas which may speak or behave in a manner that is violent or that includes swearing may be filtered (i.e., removed) from the set of candidate avatar personas. As another example, the second user's preferences may indicate that the second user is an avid hockey fan. In this case, an avatar persona in the set of candidate avatar personas that knows hockey statistics and/or is wearing the jersey of a professional hockey team (or is changed dynamically in the XR environment, e.g., showing the avatar taking the action of flipping over a shirt revealing a hockey team logo or taking an action of putting on an outer jacket with the hockey team logo and so on) may be appropriate to keep for consideration in the set of candidate avatar personas, and may even be prioritized or ranked highly relative to other avatar personas in the set of candidate avatar personas.
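The optional step 216 filter (remove personas the second user's profile disallows, such as under parental controls, and prioritize personas matching the second user's interests, such as the hockey fan example) might look like the following sketch; the `blocked` and `liked` preference keys are assumptions:

```python
def apply_viewer_preferences(candidates, preferences):
    """Remove personas whose tags hit the viewer's blocked traits and sort
    the rest so personas matching the viewer's interests come first.
    Preference keys are illustrative placeholders.
    """
    allowed = [p for p in candidates
               if not (p["tags"] & preferences.get("blocked", set()))]
    # personas overlapping the viewer's liked traits rank ahead of others
    allowed.sort(key=lambda p: len(p["tags"] & preferences.get("liked", set())),
                 reverse=True)
    return allowed
```

Python's sort is stable, so personas with equal interest overlap keep their prior (condition-based) ordering.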
- In
step 218, the processing system may select a first avatar persona from among the (optionally filtered) set of candidate avatar personas. In one example, the first avatar persona may be the avatar persona in the set of candidate avatar personas that the processing system determines is most appropriate for the conditions surrounding the avatar of the first user. For instance, the processing system may rank the set of candidate avatar personas in an order from most appropriate to least appropriate for the conditions. The ranking may take into consideration any metadata associated with the plurality of avatar personas, where the processing system may attempt to match the metadata to metadata or keywords describing the set of conditions. For instance, as discussed above, if the conditions are noisy, then the processing system may rank an avatar persona whose metadata indicates that the avatar persona speaks in a whisper relatively low. Similarly, if the profile for the second user indicates that the second user likes hockey, then the processing system may rank an avatar persona whose metadata indicates that the avatar persona is wearing a hockey jersey or can discuss hockey relatively high. In one example, the processing system may select the avatar persona with the highest ranking. In another example, the processing system may detect a pattern of behaviors from the second user that will trigger selection of an avatar persona that is not related to the most relevant interaction. For instance, in a gaming scenario or a malicious hacking scenario, the second user may repeat an interaction with the first user's avatar to discover a weakness in the avatar indexing system or in an avatar index that is valid but is not intended for the second user's current conditions. 
In this situation, the system may default to a generic or incorrect avatar response to avoid or temporarily suppress anomalous interactions of the second user (much like current password entry systems may block a user after a number of consecutive wrong entries). Other behaviors may also be enabled by the system, but the selection stage (e.g., step 218) may take into account historical conditions from the second user to assist in determining appropriate actions in this situation. - In
step 220, the processing system may render the avatar of the first user with the first avatar persona in the extended reality environment. That is, the processing system may insert the avatar of the first user into the XR environment so that the second user may interact with the avatar of the first user in a manner that is consistent with the first avatar persona. The avatar of the first user may be rendered in such a manner that the avatar of the first user may replay a set of predefined actions (e.g., high fiving, hand shaking, dancing, etc.) and/or predefined utterances. In one example, the predefined actions and/or utterances may not necessarily be played in any sort of defined or continuous order. For instance, the avatar of the first user may have a set of predefined actions that the avatar of the first user can perform and a set of predefined utterances that the avatar of the first user can speak. Any one of these predefined actions or predefined utterances may be selected by the processing system for rendering in response to some sort of trigger. The trigger may be an action or utterance of the second user. For instance, if the second user holds out a hand for a high five, then the processing system may control the avatar of the first user to return the high five. If the second user makes a statement about a particular hockey team, then the processing system may control the avatar of the first user to respond with another statement about the particular hockey team or about hockey in general. - The
method 200 may thus return to step 210 and may continue to adapt the avatar of the first user (e.g., adapt the persona of the avatar of the first user) in response to the actions and/or utterances of the second user. This interaction may continue until the second user gives some indication that the interaction is over (e.g., the second user may walk away, or say “goodbye,” or “I have to go now.”). Steps of the method 200 may also be repeated for subsequent users who may wish to interact with the avatar of the first user, and different avatar personas for the avatar of the first user may be selected for these subsequent users based on the conditions at the time of the interactions and/or the preferences of the subsequent users, as discussed above.
- Thus, examples of the disclosure allow a first user to interact with a second user in an XR environment in an asynchronous manner. That is, the first user may be able to “interact” with the second user even when the first user is not online or actively engaged in the XR environment. This is made possible by the use of multiple different avatar personas for the avatar of the first user, where each avatar persona of the multiple different avatar personas may be programmed to perform a set of predefined actions and/or speak a set of predefined utterances that provide an interaction that is representative of a different behavior of the first user. Thus, when the second user attempts to interact with the first user (who may be offline or not actively engaged in the XR environment), a processing system may select an appropriate avatar persona for the first user, from among the multiple different avatar personas, to present to the second user. This provides for a more natural and more immersive experience for users, even when some users may be offline. It should be noted that the second user may also be offline or not actively engaged in the XR environment in this case.
For instance, two avatars having respective sets of predefined actions and/or utterances may interact with each other in an asynchronous manner. Furthermore, lower ranked avatar personas of the avatar of the first user may be subsequently deployed if the second user returns to interact with the avatar of the first user again at some later time. Selecting a different avatar persona for the avatar of the first user may provide variability in the interactions, which may be more engaging for the second user.
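The selection in step 218 (ranking candidates by how well their metadata overlaps keywords describing the conditions, with a fallback to a generic response when repeated identical interactions suggest probing) might be sketched as follows. The data shapes and the probe threshold are assumptions for illustration:

```python
def select_persona(candidates, condition_keywords, recent_interactions,
                   probe_limit=3):
    """Rank candidate personas by overlap between their metadata and the
    condition keywords and return the best match. If the second user has
    repeated the exact same interaction `probe_limit` times in a row, fall
    back to a generic persona, much as password entry systems block a user
    after consecutive wrong entries. All structures are illustrative.
    """
    tail = recent_interactions[-probe_limit:]
    if len(tail) == probe_limit and len(set(tail)) == 1:
        return "generic"  # suppress suspected probing of the persona index
    ranked = sorted(candidates,
                    key=lambda p: len(p["metadata"] & condition_keywords),
                    reverse=True)
    return ranked[0]["name"] if ranked else "generic"
```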
- The ability to provide adaptive asynchronous interactions in XR environments may have a number of applications in the XR field. For instance, in one example, a user may take an avatar “selfie” and leave the avatar at a particular location in an XR environment. In a heavily socialized environment (e.g., where memes and sharing may be encouraged), these types of avatars may have predefined patterns of actions that can be replayed or dismissed for the purposes of entertainment or adaptation. The predefined patterns of actions may embody the personality of the user who the avatar represents, thereby allowing the user to personalize the avatar. A first user might also identify an avatar of a second user, where the avatar of the second user is performing some sort of action, and may “borrow” or request that the action be applied to the first user's avatar.
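Replaying predefined actions and utterances in response to triggers, as in step 220 and the replayable patterns above, could be dispatched roughly like this; the persona dictionary, trigger names, and canned responses are invented placeholders:

```python
def respond(persona, trigger):
    """Map a trigger (an action or utterance of another user) onto one of
    the persona's predefined actions or utterances. Keys are illustrative.
    """
    if trigger in persona["actions"]:
        # e.g. return a high five when the other user offers one
        return ("action", persona["actions"][trigger])
    for topic, reply in persona["utterances"].items():
        if topic in trigger:
            return ("utterance", reply)  # topic-matched canned reply
    return ("utterance", persona["utterances"].get("default", "Hello!"))
```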
- In another example, a user may create an avatar to connect with a particular action or location. In this case, the user may use local objects in a location to capture actions for later replay. For instance, the user may create an avatar of himself performing a trick on a skateboard, and leave the avatar at a skate park in the XR environment. In another example, a user's avatar may be created with superhero powers, and the user may point to different locations or objects in the XR environment to define jump points for the avatar.
- In another example, a repeatable game scenario may be created for educational or entertainment purposes. For instance, different avatars could be created to act as different game characters, to perform different game actions (e.g., running back and forth, guarding an object), to provide motivational support or hints, and the like. In one particular example, a virtual scavenger hunt could be created within the XR environment. In this case, certain asynchronous interactions with avatars may only be unlocked when a user has completed some other list of actions or interactions first.
- In another example, an avatar may be created to function as a virtual tour guide or concierge. In this case, different actions, utterances, and/or behaviors for the avatar may be triggered at different locations. For instance, a virtual tour guide in an art museum may be programmed with a first set of utterances (e.g., facts, trivia, answers to frequently asked questions, etc.) that is triggered when the virtual tour guide is within x feet of a first painting and a second set of utterances that is triggered when the virtual tour guide is within x feet of a second painting. Thus, a user or business may be able to create a semi-knowledgeable presence with some limited interaction patterns that can be placed at specific locations to assist, direct, or educate other users at those locations.
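The location-triggered tour guide could be modeled as utterance sets keyed by proximity; the exhibit records and the trigger radius (the "x feet" above) are assumptions for this sketch:

```python
import math

def unlocked_utterances(guide_position, exhibits, radius):
    """Return the utterance sets triggered at the guide's position: an
    exhibit's utterances unlock when the guide is within `radius` of it.
    Exhibit records are hypothetical.
    """
    return [e["utterances"] for e in exhibits
            if math.dist(guide_position, e["position"]) <= radius]
```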
- As also discussed above, learning from demonstration techniques may also be used to help determine suitable or unsuitable behaviors, utterances, and actions for given conditions and contexts. In a further example, these techniques may also be used to learn how a particular user reacts to specific conditions and contexts, which may help to further personalize different avatars for the particular user. In a further example, such techniques could be used to personalize a base or template avatar for the particular user. For instance, a template avatar for “ninja” could be personalized to incorporate physical features, mannerisms, and/or speech patterns of the particular user.
- In some examples, a user who has placed an avatar for asynchronous interactions in an XR environment may receive notifications when other users interact with the avatar. For instance, the user may receive a summary or replay of the interactions. These notifications may help the user to make decisions which may affect the avatar, such as adapting certain patterns of behavior to new users or contexts. For instance, when interacting asynchronously with specific other users, the user may want the avatar to hug those other users after offering a high five.
- Avatars may also be constructed from historical synchronous interactions involving a user. Such avatars may emulate the user in a way that allows family or friends to interact with the user when the user may no longer be present or may only be capable of interacting in person in limited ways (e.g., as in the case of a user who may have neurodegenerative disease or impairment due to an accident).
- Although not expressly specified above, one or more steps of the
method 200 may include a storing, displaying and/or outputting step as required for a particular application. In other words, any data, records, fields, and/or intermediate results discussed in the method can be stored, displayed and/or outputted to another device as required for a particular application. Furthermore, operations, steps, or blocks in FIG. 2 that recite a determining operation or involve a decision do not necessarily require that both branches of the determining operation be practiced. In other words, one of the branches of the determining operation can be deemed as an optional step. However, the use of the term “optional step” is intended only to reflect different variations of a particular illustrative embodiment and is not intended to indicate that steps not labelled as optional steps are to be deemed essential steps. Furthermore, operations, steps or blocks of the above described method(s) can be combined, separated, and/or performed in a different order from that described above, without departing from the examples of the present disclosure. -
FIG. 3 depicts a high-level block diagram of a computing device specifically programmed to perform the functions described herein. For example, any one or more components or devices illustrated in FIG. 1 or described in connection with the method 200 may be implemented as the system 300. For instance, a server (such as might be used to perform the method 200) could be implemented as illustrated in FIG. 3. - As depicted in
FIG. 3, the system 300 comprises a hardware processor element 302, a memory 304, a module 305 for providing adaptive asynchronous interactions in extended reality environments, and various input/output (I/O) devices 306. - The
hardware processor 302 may comprise, for example, a microprocessor, a central processing unit (CPU), or the like. The memory 304 may comprise, for example, random access memory (RAM), read only memory (ROM), a disk drive, an optical drive, a magnetic drive, and/or a Universal Serial Bus (USB) drive. The module 305 for providing adaptive asynchronous interactions in extended reality environments may include circuitry and/or logic for performing special purpose functions relating to the operation of a home gateway or XR server. The input/output devices 306 may include, for example, a camera, a video camera, storage devices (including but not limited to, a tape drive, a floppy drive, a hard disk drive or a compact disk drive), a receiver, a transmitter, a speaker, a display, a speech synthesizer, an output port, and a user input device (such as a keyboard, a keypad, a mouse, and the like), or a sensor.
- Although only one processor element is shown, it should be noted that the computer may employ a plurality of processor elements. Furthermore, although only one computer is shown in the Figure, if the method(s) as discussed above is implemented in a distributed or parallel manner for a particular illustrative example, i.e., the steps of the above method(s) or the entire method(s) are implemented across multiple or parallel computers, then the computer of this Figure is intended to represent each of those multiple computers. Furthermore, one or more hardware processors can be utilized in supporting a virtualized or shared computing environment. The virtualized computing environment may support one or more virtual machines representing computers, servers, or other computing devices. Within such virtual machines, hardware components such as hardware processors and computer-readable storage devices may be virtualized or logically represented.
- It should be noted that the present disclosure can be implemented in software and/or in a combination of software and hardware, e.g., using application specific integrated circuits (ASIC), a programmable logic array (PLA), including a field-programmable gate array (FPGA), or a state machine deployed on a hardware device, a computer or any other hardware equivalents, e.g., computer readable instructions pertaining to the method(s) discussed above can be used to configure a hardware processor to perform the steps, functions and/or operations of the above disclosed method(s). In one example, instructions and data for the present module or
process 305 for providing adaptive asynchronous interactions in extended reality environments (e.g., a software program comprising computer-executable instructions) can be loaded into memory 304 and executed by hardware processor element 302 to implement the steps, functions or operations as discussed above in connection with the example method 200. Furthermore, when a hardware processor executes instructions to perform “operations,” this could include the hardware processor performing the operations directly and/or facilitating, directing, or cooperating with another hardware device or component (e.g., a co-processor and the like) to perform the operations.
- The processor executing the computer readable or software instructions relating to the above described method(s) can be perceived as a programmed processor or a specialized processor. As such, the
present module 305 for providing adaptive asynchronous interactions in extended reality environments (including associated data structures) of the present disclosure can be stored on a tangible or physical (broadly non-transitory) computer-readable storage device or medium, e.g., volatile memory, non-volatile memory, ROM memory, RAM memory, magnetic or optical drive, device or diskette and the like. More specifically, the computer-readable storage device may comprise any physical devices that provide the ability to store information such as data and/or instructions to be accessed by a processor or a computing device such as a computer or an application server.
- While various examples have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred example should not be limited by any of the above-described examples, but should be defined only in accordance with the following claims and their equivalents.
Claims (20)
1. A method comprising:
rendering, by a processing system including at least one processor, an extended reality environment;
receiving, by the processing system, a request from a user endpoint device of a first user in the extended reality environment to place an avatar of the first user in the extended reality environment;
placing, by the processing system, the avatar of the first user within the extended reality environment;
detecting, by the processing system, at least one condition surrounding the avatar of the first user;
identifying, by the processing system based on the at least one condition, a set of candidate avatar personas for the avatar of the first user;
selecting, by the processing system, a first avatar persona from among the set of candidate avatar personas; and
rendering, by the processing system, the avatar of the first user with the first avatar persona in the extended reality environment.
2. The method of claim 1, wherein the set of candidate avatar personas comprises a subset of a plurality of avatar personas associated with the avatar of the first user, and wherein each avatar persona of the plurality of avatar personas associated with the avatar of the first user represents a different behavior of the avatar of the first user.
3. The method of claim 2, wherein the different behavior comprises a predefined action to be performed by the each avatar persona in response to an occurrence of a trigger.
4. The method of claim 2, wherein the different behavior comprises a predefined utterance to be spoken by the each avatar persona in response to an occurrence of a trigger.
5. The method of claim 2, wherein the at least one condition comprises a location of the avatar of the first user in the extended reality environment.
6. The method of claim 5, wherein the at least one condition comprises an orientation of the location of the avatar of the first user in the extended reality environment.
7. The method of claim 2, wherein the at least one condition comprises a time of day in the extended reality environment.
8. The method of claim 2, wherein the at least one condition comprises an environmental condition of the location of the avatar of the first user in the extended reality environment.
9. The method of claim 2, wherein the at least one condition comprises at least one social condition of a location of the avatar of the first user in the extended reality environment.
10. The method of claim 2, wherein the at least one condition comprises a demeanor of a second user interacting with the avatar of the first user.
11. The method of claim 2, wherein the at least one condition comprises a relationship of a second user interacting with the avatar of the first user.
12. The method of claim 1, wherein the rendering comprises controlling the avatar of the first user to carry out an interaction with a second user while the first user is offline.
13. The method of claim 1, further comprising:
filtering, by the processing system, the set of candidate avatar personas based on a preference of a second user interacting with the avatar of the first user to generate a filtered set of candidate avatar personas for the first user, wherein the first avatar persona is selected from among the filtered set of candidate avatar personas.
14. The method of claim 13, wherein the filtering comprises removing, from the set of candidate avatar personas for the first user, any avatar personas representing behavior that the preference indicates the second user does not wish to see.
15. The method of claim 13, wherein the filtering comprises removing, from the set of candidate avatar personas for the first user, any avatar personas speaking utterances that the preference indicates the second user does not wish to hear.
16. The method of claim 1, wherein the rendering comprises controlling the avatar of the first user to function as a virtual tour guide within the extended reality environment.
17. The method of claim 16, wherein the rendering further comprises controlling the avatar of the first user to speak an utterance that is triggered by a specific location within the extended reality environment.
18. The method of claim 16, wherein the rendering further comprises controlling the avatar of the first user to perform an action that is triggered by a specific location within the extended reality environment.
19. A non-transitory computer-readable medium storing instructions which, when executed by a processing system including at least one processor, cause the processing system to perform operations, the operations comprising:
rendering an extended reality environment;
receiving a request from a user endpoint device of a first user in the extended reality environment to place an avatar of the first user in the extended reality environment;
placing the avatar of the first user within the extended reality environment;
detecting at least one condition surrounding the avatar of the first user;
identifying, based on the at least one condition, a set of candidate avatar personas for the first user;
selecting a first avatar persona from among the set of candidate avatar personas; and
rendering the avatar of the first user with the first avatar persona in the extended reality environment.
20. A device comprising:
a processing system including at least one processor; and
a computer-readable medium storing instructions which, when executed by the processing system, cause the processing system to perform operations, the operations comprising:
rendering an extended reality environment;
receiving a request from a user endpoint device of a first user in the extended reality environment to place an avatar of the first user in the extended reality environment;
placing the avatar of the first user within the extended reality environment;
detecting at least one condition surrounding the avatar of the first user;
identifying, based on the at least one condition, a set of candidate avatar personas for the first user;
selecting a first avatar persona from among the set of candidate avatar personas; and
rendering the avatar of the first user with the first avatar persona in the extended reality environment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/655,785 US20220319117A1 (en) | 2021-04-02 | 2022-03-21 | Providing adaptive asynchronous interactions in extended reality environments |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/221,732 US11282278B1 (en) | 2021-04-02 | 2021-04-02 | Providing adaptive asynchronous interactions in extended reality environments |
US17/655,785 US20220319117A1 (en) | 2021-04-02 | 2022-03-21 | Providing adaptive asynchronous interactions in extended reality environments |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/221,732 Continuation US11282278B1 (en) | 2021-04-02 | 2021-04-02 | Providing adaptive asynchronous interactions in extended reality environments |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220319117A1 true US20220319117A1 (en) | 2022-10-06 |
Family
ID=80782068
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/221,732 Active US11282278B1 (en) | 2021-04-02 | 2021-04-02 | Providing adaptive asynchronous interactions in extended reality environments |
US17/655,785 Abandoned US20220319117A1 (en) | 2021-04-02 | 2022-03-21 | Providing adaptive asynchronous interactions in extended reality environments |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/221,732 Active US11282278B1 (en) | 2021-04-02 | 2021-04-02 | Providing adaptive asynchronous interactions in extended reality environments |
Country Status (1)
Country | Link |
---|---|
US (2) | US11282278B1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11710280B1 (en) * | 2020-08-14 | 2023-07-25 | United Services Automobile Association (Usaa) | Local physical environment modeling in extended reality environments |
US11769379B1 (en) | 2022-03-30 | 2023-09-26 | Bank Of America Corporation | Intelligent self evolving automated teller machine (ATM) digital twin apparatus for enhancing real time ATM performance |
US20230410378A1 (en) * | 2022-06-20 | 2023-12-21 | Qualcomm Incorporated | Systems and methods for user persona management in applications with virtual content |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110029889A1 (en) * | 2009-07-31 | 2011-02-03 | International Business Machines Corporation | Selective and on-demand representation in a virtual world |
US20110055732A1 (en) * | 2009-08-28 | 2011-03-03 | International Business Machines Corporation | Creation and Prioritization of Multiple Virtual Universe Teleports In Response to an Event |
US20110131239A1 (en) * | 2007-12-26 | 2011-06-02 | International Business Machines Corporation | Media playlist construction for virtual environments |
US20170189815A1 (en) * | 2016-01-05 | 2017-07-06 | Acshun, Inc. | Wearable activity tracking systems and technology |
US20170326457A1 (en) * | 2016-05-16 | 2017-11-16 | Google Inc. | Co-presence handling in virtual reality |
US20190374857A1 (en) * | 2018-06-08 | 2019-12-12 | Brian Deller | System and method for creation, presentation and interaction within multiple reality and virtual reality environments |
US20200311995A1 (en) * | 2019-03-28 | 2020-10-01 | Nanning Fugui Precision Industrial Co., Ltd. | Method and device for setting a multi-user virtual reality chat environment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8146005B2 (en) * | 2007-08-07 | 2012-03-27 | International Business Machines Corporation | Creating a customized avatar that reflects a user's distinguishable attributes |
US9258337B2 (en) * | 2008-03-18 | 2016-02-09 | Avaya Inc. | Inclusion of web content in a virtual environment |
US9220981B2 (en) * | 2008-12-30 | 2015-12-29 | International Business Machines Corporation | Controlling attribute expression within a virtual environment |
- 2021-04-02: US17/221,732 filed (patent US11282278B1, Active)
- 2022-03-21: US17/655,785 filed (publication US20220319117A1, Abandoned)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110131239A1 (en) * | 2007-12-26 | 2011-06-02 | International Business Machines Corporation | Media playlist construction for virtual environments |
US20110029889A1 (en) * | 2009-07-31 | 2011-02-03 | International Business Machines Corporation | Selective and on-demand representation in a virtual world |
US20110055732A1 (en) * | 2009-08-28 | 2011-03-03 | International Business Machines Corporation | Creation and Prioritization of Multiple Virtual Universe Teleports In Response to an Event |
US20170189815A1 (en) * | 2016-01-05 | 2017-07-06 | Acshun, Inc. | Wearable activity tracking systems and technology |
US20170326457A1 (en) * | 2016-05-16 | 2017-11-16 | Google Inc. | Co-presence handling in virtual reality |
US20190374857A1 (en) * | 2018-06-08 | 2019-12-12 | Brian Deller | System and method for creation, presentation and interaction within multiple reality and virtual reality environments |
US20200311995A1 (en) * | 2019-03-28 | 2020-10-01 | Nanning Fugui Precision Industrial Co., Ltd. | Method and device for setting a multi-user virtual reality chat environment |
Also Published As
Publication number | Publication date |
---|---|
US11282278B1 (en) | 2022-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11341775B2 (en) | Identifying and addressing offensive actions in visual communication sessions | |
US11282278B1 (en) | Providing adaptive asynchronous interactions in extended reality environments | |
AU2015242255B2 (en) | Enhanced messaging stickers | |
CN109691054A (en) | Animation user identifier | |
US20220165035A1 (en) | Latency indicator for extended reality applications | |
CN113950687A (en) | Media presentation device control based on trained network model | |
WO2017092194A1 (en) | Method and apparatus for enabling communication interface to produce animation effect in communication process | |
US11290659B2 (en) | Physical object-based visual workspace configuration system | |
US20220084292A1 (en) | Extended reality markers for enhancing social engagement | |
JP7273100B2 (en) | Generation of text tags from game communication transcripts | |
US11663791B1 (en) | Dynamic avatars for customer support applications | |
US20220172415A1 (en) | Event orchestration for virtual events | |
US20230079099A1 (en) | Policy definition and enforcement for extended reality media sessions | |
US20220368770A1 (en) | Variable-intensity immersion for extended reality media | |
WO2018057921A1 (en) | System and method for situation awareness in immersive digital experiences | |
US20230063988A1 (en) | External audio enhancement via situational detection models for wearable audio devices | |
CN108011968A (en) | The method, terminal and computer-readable recording medium of message are shared in display | |
US20230059361A1 (en) | Cross-franchise object substitutions for immersive media | |
US20230368811A1 (en) | Managing directed personal immersions | |
US20220167052A1 (en) | Dynamic, user-specific content adaptation | |
US20230318999A1 (en) | Message validation and routing in extended reality environments | |
US20240211705A1 (en) | Generating personalized languages for extended reality communication | |
US20230052418A1 (en) | Dynamic expansion and contraction of extended reality environments | |
US20230376625A1 (en) | Virtual reality privacy protection | |
US20240212223A1 (en) | Adaptive simulation of celebrity and legacy avatars |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: AT&T INTELLECTUAL PROPERTY I, L.P., GEORGIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZAVESKY, ERIC;BRADLEY, NIGEL;PRATT, JAMES;SIGNING DATES FROM 20210330 TO 20210405;REEL/FRAME:059671/0981 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |