WO2024061462A1 - Rendering of a user avatar and a digital object in extended reality based on user interactions with a physical object - Google Patents


Info

Publication number
WO2024061462A1
Authority
WO
WIPO (PCT)
Prior art keywords
physical object
participant
environment
physical
avatar
Prior art date
Application number
PCT/EP2022/076308
Other languages
English (en)
Inventor
Tommy Arngren
Peter ÖKVIST
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2022/076308 priority Critical patent/WO2024061462A1/fr
Publication of WO2024061462A1 publication Critical patent/WO2024061462A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/157Conference systems defining a virtual conference space and using avatars or agents

Definitions

  • the present disclosure relates to rendering extended reality (XR) environments and associated XR rendering devices, and more particularly to rendering avatars in immersive XR environments displayed on XR participant devices.
  • XR extended reality
  • Immersive extended reality (XR) environments have been developed to enable a myriad of different types of user experiences for gaming, on-line meetings, co-creation of products, etc.
  • Immersive XR environments can include virtual reality (VR) environments where human users see computer generated graphical renderings and can include augmented reality (AR) environments where users see a combination of computer generated graphical renderings overlaid on a view of the physical real-world through, e.g., see-through display screens.
  • VR virtual reality
  • AR augmented reality
  • Example XR environment rendering devices include, without limitation, XR environment servers, XR headsets, gaming consoles, smartphones running an XR application, and tablet/laptop/desktop computers running an XR application.
  • Oculus Quest is an example XR device and Google Glass is an example AR device.
  • XR meeting applications are tools for native digital meetings; they are also useful as a personal thinking and planning space and for holding online meetings in a digital environment.
  • Some XR meeting applications support AR devices, browsers, and VR devices.
  • A participant using a browser may join via a desktop, tablet PC, or smartphone and share their views using a front-facing camera or a webcam.
  • some XR meeting solutions have mobile application versions, e.g., Android and iOS, which allow a user to navigate in the virtual space on the screen or activate an augmented reality mode to display the meeting in their own surroundings.
  • the XR meeting solutions introduce new features to online meetings that allow for new ways to share and create content etc.
  • Today’s commonly and commercially available XR devices typically include a head-mounted display (HMD) and a pair of hand controllers, and more advanced solutions sometimes also include “foot controllers”.
  • HMD head-mounted display
  • Immersive XR environments, such as gaming environments and meeting environments, are often configured to display computer-generated avatars which represent poses of human users in the immersive XR environments.
  • A user may select and customize an avatar (e.g., gender, clothing, hair style) to represent that user for viewing by other users participating in the immersive XR environment.
  • Users can be unexpectedly disappointed with how their avatar is viewed by other participants as the user's avatar moves through an environment and/or transitions between different poses, such as standing, sitting, squatting, and lying down.
  • Some embodiments disclosed herein are directed to an XR rendering device for rendering an immersive XR environment on a display device for viewing by a participant among a group of participants who have associated avatars representing the group of participants which are rendered in the immersive XR environment.
  • The XR rendering device includes at least one processor, and at least one memory storing instructions executable by the at least one processor to perform operations. Operations include determining the participant is interacting with a physical object in the participant’s physical environment. Operations also include identifying characteristics of the physical object in the participant’s physical environment which the participant is interacting with. Operations also include determining a participant avatar posture based on the participant’s interactions with the physical object in the participant’s physical environment. Operations also include rendering the avatar of the participant interacting with a virtual object in the immersive XR environment based on the identified characteristics of the physical object and the determined participant avatar posture.
  • Some other related embodiments are directed to a corresponding method by an XR rendering device for rendering an immersive XR environment on a display device for viewing by a participant among a group of participants who have associated avatars representing the group of participants which are rendered in the immersive XR environment.
  • the method includes determining the participant is interacting with a physical object in the participant’s physical environment.
  • the method also includes identifying characteristics of the physical object in the participant’s physical environment which the participant is interacting with.
  • The method also includes determining a participant avatar posture based on the participant’s interactions with the physical object in the participant’s physical environment.
  • the method also includes rendering the avatar of the participant interacting with a virtual object in the immersive XR environment based on the identified characteristics of the physical object and the determined participant avatar posture.
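The four operations above (determine interaction, identify characteristics, determine posture, render) can be sketched as a minimal pipeline. All class and function names below are illustrative assumptions, not identifiers from the disclosure; a real implementation would perform recognition on camera/lidar data rather than the stubbed sensor frame used here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PhysicalObject:
    kind: str        # e.g. "mug", "chair"
    size_cm: float
    texture: str

@dataclass
class AvatarPose:
    posture: str                 # e.g. "sitting", "standing"
    held_object: Optional[str]   # object kind the avatar is holding, if any

def determine_interaction(frame: dict) -> bool:
    # Step 500: treat a hand-to-object distance under 5 cm as an interaction.
    return frame.get("hand_object_distance_cm", 999.0) < 5.0

def identify_characteristics(frame: dict) -> PhysicalObject:
    # Step 502: stand-in for object recognition on camera/lidar data.
    return PhysicalObject(frame["label"], frame["size_cm"], frame["texture"])

def determine_posture(obj: PhysicalObject) -> AvatarPose:
    # Step 504: a chair implies sitting; hand-held objects leave posture as-is.
    if obj.kind == "chair":
        return AvatarPose("sitting", None)
    return AvatarPose("standing", obj.kind)

def render_avatar(pose: AvatarPose, obj: PhysicalObject) -> str:
    # Step 506: emit a render command for the XR scene (stubbed as a string).
    return f"render avatar {pose.posture}, object={obj.kind}"

frame = {"hand_object_distance_cm": 2.0, "label": "mug",
         "size_cm": 9.5, "texture": "ceramic"}
if determine_interaction(frame):
    obj = identify_characteristics(frame)
    command = render_avatar(determine_posture(obj), obj)
```

A production pipeline would replace each stub with the corresponding perception or rendering subsystem while keeping this overall data flow.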
  • Potential advantages of these embodiments include providing additional data to an XR application that may use physical object data as input for typical motion patterns/ranges of body parts in an immersive environment. Additionally, embodiments may provide and share textures of physical-to-digital object renderings to other VR meeting users, with user-managed constraints on whom, when, in which context, and for how long a digital object texture can be loaned for others’ renderings. Additionally, embodiments may reduce computational resources consumed by rendering, based on using characteristics of identified physical objects, which can reduce the range of motion of body parts to be rendered and/or the range of motion of the physical object to be rendered.
  • Figure 1 illustrates an XR system that includes a plurality of participant devices that communicate through networks with an XR rendering device to operate in accordance with some embodiments of the present disclosure
  • Figure 2 illustrates an immersive XR environment with participants' avatars and a shared virtual presentation screen that are rendered with various poses within the XR environment, in accordance with some embodiments of the present disclosure
  • Figure 3 is a further block diagram of an XR rendering system which illustrates data flows and operations between a plurality of participant devices and an XR rendering device in accordance with some embodiments of the present disclosure
  • Figure 4 illustrates an example of various operations which are performed by a user device and XR rendering device based on a user’s interaction with a physical object, in accordance with some embodiments of the present disclosure.
  • Figures 5 through 10 are flowcharts of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure
  • Figure 11 is a block diagram of components of an XR rendering device that are configured to operate in accordance with some embodiments of the present disclosure.
  • Figure 1 illustrates an XR system that includes a plurality of participant devices 110a-d that communicate through networks 120 with an XR rendering device 100 to operate in accordance with some embodiments of the present disclosure.
  • The XR rendering device 100 is configured to generate a graphical representation of an immersive XR environment (also called an "XR environment" for brevity) which is viewable from various perspectives of virtual poses of human participants in the XR environment through display screens of the various participant devices 110a-d.
  • The illustrated devices include VR headsets 110a-c which can be worn by participants to view and navigate through the XR environment, and a participant electronic device 110d, such as a personal computer, laptop, tablet, smartphone, smart ring, or smart fabrics, which can be operated by a participant to view and navigate through the XR environment.
  • the participants have associated avatars which are rendered in the XR environment to represent poses (e.g., location, body assembly orientation, etc.) of the participants relative to a coordinate system of the XR environment.
  • The XR rendering device 100 may include a rendering module 102 that performs operations disclosed herein for determining a participant avatar posture based on the participant’s interactions with a physical object in the participant’s physical environment. The XR rendering device 100 then renders the participant avatar with the determined participant avatar posture for viewing by other participants through their respective devices, e.g., 110b-110d.
  • Although the XR rendering device 100 is illustrated in Figure 1 as a centralized network computing server separate from one or more of the participant devices, in some other embodiments the XR rendering device 100 is implemented as a component of one or more of the participant devices.
  • one of the participant devices may be configured to perform operations of the XR rendering device in a centralized manner controlling rendering for or by other ones of the participant devices.
  • each of the participant devices may be configured to perform at least some of the operations of the XR rendering device in a distributed decentralized manner with coordinated communications being performed between the distributed XR rendering devices (e.g., between software instances of XR rendering devices).
  • FIG. 2 illustrates an immersive XR environment with avatars 200a-f that are graphically rendered with poses (e.g., at locations and with orientations) representing the present field of views (FOVs) of associated human participants in the XR environment.
  • Streaming video from a camera of the participant device 110d is displayed in a virtual screen 230 instead of rendering an avatar to represent the participant.
  • a shared virtual presentation screen 210 is also graphically rendered at a location within the XR environment, and can display pictures and/or video that are being presented for viewing by the participants in the XR environment.
  • a virtual object 204 is graphically rendered in the XR environment.
  • the virtual object 204 may be graphically rendered in the XR environment with any shape or size, and can represent any type of object (e.g., table, chair, object on table, door, window, television or computer, virtual appliance, animated vehicle, animated animal, etc.).
  • the virtual object 204 may represent a physical object in the XR environment and may be animated to track movement and pose of the physical object within the XR environment responsive to movement input or physical interaction from the human participant.
  • an XR rendering device can become constrained by its processing bandwidth limitations when attempting to simultaneously render in real-time each of the participants' avatars, the virtual screen, the shared virtual presentation screen 210, and the virtual objects 204 including room surfaces and other parts of the XR environment.
  • FIG. 3 is a further block diagram of an XR rendering system which illustrates data flows and operations between a plurality of participant devices and an XR rendering device in accordance with some embodiments of the present disclosure.
  • Each of the participants can define a participant avatar posture based on that participant’s interactions with a physical object in the participant’s physical environment.
  • The participant avatar posture based on the participant’s interactions with the physical object may be stored as an attribute of the physical object in the participant's device.
  • the participant avatar posture is used by the rendering circuit 300 of the XR rendering device 100 for rendering the respective avatars.
  • An XR rendering device 100 of a first participant can define a participant avatar posture which is provided 310a to the first participant device, requesting that the participant avatar posture be applied to the avatar associated with the first participant for rendering.
  • An XR rendering device 100 of a second participant can define a participant avatar posture which is provided 310b to the second participant device, requesting that the participant avatar posture be applied to the avatar associated with the second participant for rendering.
  • Other participants can similarly define participant avatar postures which are provided to the rendering devices 100 to control rendering related to the respective other participants.
  • The XR rendering device 100 can use the participant avatar postures that have been defined to provide participant avatar postures for other participants 314a, 314b, etc., which control the rendering operations performed by the respective participant devices.
  • Various embodiments of the present disclosure describe determining, based on user interactions with physical objects, how to further digitally represent changes in user body posture or actions depending on which physical object the user is interacting with while in VR meetings. Examples of physical objects which users may interact with include but are not limited to chairs, sofas, mugs, writing utensils, and electronic devices.
  • The system should be able to identify the object, its characteristics, its main use, its impact on user body posture, limb motion range, and user height in relation to other users in the application, which is then displayed to the users in an immersive VR application.
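The per-object knowledge described above (main use, impact on body posture, limb motion range) can be represented as a simple lookup table. The table entries and the neutral fallback profile below are illustrative assumptions, not values from the disclosure.

```python
# Illustrative profiles mapping a recognized object type to its typical impact
# on avatar posture and on limb motion range (here, assumed elbow flexion in
# degrees). Unknown objects fall back to a neutral, unconstrained profile.
OBJECT_PROFILES = {
    "chair": {"posture": "sitting",         "limb_motion_range": (0, 30)},
    "mug":   {"posture": "unchanged",       "limb_motion_range": (10, 140)},
    "pen":   {"posture": "leaning_forward", "limb_motion_range": (5, 60)},
}

NEUTRAL_PROFILE = {"posture": "unchanged", "limb_motion_range": (0, 180)}

def lookup_profile(object_type: str) -> dict:
    # Return the stored profile, or the neutral profile for unknown objects.
    return OBJECT_PROFILES.get(object_type, NEUTRAL_PROFILE)
```

Constraining the renderer to the returned motion range is one way such a table could realize the computational savings mentioned earlier.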
  • Various embodiments identify physical objects that a user in VR interacts with by means of image processing, object recognition, and sensor inputs. Additionally, the various embodiments utilize data associated with objects to determine associated typical motion patterns of body parts, such as the upper body, lower body, limbs, hands, or fingers.
  • Embodiments describe a solution/method that determines, based on user interactions with physical objects, how to further digitally represent changes in user body posture or actions depending on which physical object the user is interacting with while in VR meetings.
  • Figure 5 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure.
  • An extended reality (XR) rendering device is provided for rendering an immersive XR environment on a display device for viewing by a participant among a group of participants who have associated avatars representing the group of participants which are rendered in the immersive XR environment.
  • the XR rendering device includes at least one processor, and at least one memory storing instructions executable by the at least one processor to perform operations.
  • Operations include determining 500 the participant is interacting with a physical object in the participant’s physical environment.
  • Operations also include identifying 502 characteristics of the physical object in the participant’s physical environment which the participant is interacting with.
  • Operations also include determining 504 a participant avatar posture based on the participant’s interactions with the physical object in the participant’s physical environment.
  • Operations also include rendering 506 the avatar of the participant interacting with a virtual object in the immersive XR environment based on the identified characteristics of the physical object and the determined participant avatar posture.
  • Figure 6 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure.
  • the operations further include to determine 600 a body part of the participant that is interacting with the physical object.
  • the operations further include to determine 602 a predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.
  • An example scenario using the operations of Figure 6 can include the XR rendering device 100 determining 600 that the participant’s hand has picked up a coffee mug from a table.
  • the determination 600 may be performed based on processing images from a front facing camera of the participant device 110a and/or a point cloud from a lidar sensor of the participant device 110a to identify locations of the hand and the coffee mug and the associated physical interaction of the hand holding the coffee mug.
  • the XR rendering device 100 responsively determines 602 a predicted motion pattern of the hand holding the coffee mug.
  • the predicted motion pattern may define a pathway along which the hand and coffee mug will travel, such as along an arc between the previous location of the coffee mug resting on the table and a mouth location on the participant’s avatar, and/or may define one or more limits on the range of predicted motion of the hand and coffee mug.
  • The predicted motion pattern may optionally be scaled based on the participant’s defined attributes, such as gender, height, weight, age, and more particular anatomical measurements and/or other characteristics, e.g., wheelchair usage.
  • the XR rendering device 100 can then use the predicted motion pattern to render the avatar of the participant interacting with a virtual representation of the coffee mug.
  • the XR rendering device 100 may render motion of the arm and the virtual representation of the coffee mug in a manner that is constrained by the predicted motion pattern so as to avoid rendering movements that would appear to other participants to be in an unnatural manner, e.g., which may have otherwise occurred if processing of time sequence of images and/or point cloud data indicates erratic (e.g., jittery) movements that would have resulted in rendering erratic (unnatural) avatar movements.
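The coffee-mug scenario above can be illustrated with two small routines: one predicting the hand's path as an arc between the table and the avatar's mouth, and one clamping noisy tracking samples to that path so jittery measurements do not produce erratic avatar motion. The geometry (sinusoidal lift, 5 cm deviation limit) is an illustrative assumption, not taken from the disclosure.

```python
import math

def arc_path(start, end, lift=0.15, steps=8):
    """Predicted hand/mug path: linear interpolation from start to end
    (metres) plus a sinusoidal vertical lift forming an arc."""
    pts = []
    for i in range(steps + 1):
        t = i / steps
        x = start[0] + t * (end[0] - start[0])
        y = start[1] + t * (end[1] - start[1])
        z = start[2] + t * (end[2] - start[2]) + lift * math.sin(math.pi * t)
        pts.append((x, y, z))
    return pts

def constrain(measured, predicted, max_dev=0.05):
    """Clamp each noisy measured position to within max_dev metres of the
    corresponding predicted point, suppressing jittery avatar movement."""
    out = []
    for m, p in zip(measured, predicted):
        d = math.dist(m, p)
        if d <= max_dev:
            out.append(m)
        else:
            s = max_dev / d  # scale the deviation vector back to the limit
            out.append(tuple(pi + s * (mi - pi) for mi, pi in zip(m, p)))
    return out
```

In this sketch the renderer would animate the avatar's arm along the constrained positions rather than the raw tracking samples.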
  • The physical object rendered in the immersive XR environment may be scaled for physical user attributes such as gender, height, age, weight, and anatomy attributes, if available.
  • The VR system may then display typical motions related to the physical object, adapting the object motion pattern to generate a corresponding avatar motion pattern which controls the VR system's rendering of an avatar interacting with the VR digital representation of the object (e.g., the avatar's arm is moved according to the avatar motion pattern for the hand, arm, torso, and head, so that the cup moves according to the object motion pattern).
  • These operations may be selectively initiated responsive to the VR system identifying a known object that the person is interacting with, and which has a corresponding object and associated motion pattern in the database accessed by the VR system.
  • The operations may be triggered by the user's eye gaze and/or observed motion correlating to a real-world object.
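The trigger condition just described (known object, gaze on it, motion correlated with it) can be sketched as a single predicate. The correlation threshold is an assumed tuning value, not one given in the text.

```python
def should_trigger(gaze_target: str,
                   known_objects: set,
                   motion_correlation: float,
                   corr_threshold: float = 0.8) -> bool:
    """Start the object-interaction pipeline only when the user's gaze rests
    on an object known to the database AND the observed body motion correlates
    with that object above an assumed threshold."""
    return (gaze_target in known_objects
            and motion_correlation >= corr_threshold)
```

Gating the pipeline this way avoids running recognition and motion-pattern lookup on every frame.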
  • the operation to render 506 the avatar of the participant interacting with the virtual object is performed based on the identified characteristics of the physical object, the determined participant avatar posture, and the predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.
  • Figure 7 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure.
  • the operations further include initiating 700 a request for the user to input a typical motion pattern of the body part interacting with the physical object.
  • the operations further include determining 702 the predicted motion pattern of the body part to be associated with the identified characteristics of the physical object, based on the user input of the typical motion pattern of the body part interacting with the physical object.
  • the operations further include initiating the request for the user to input a typical motion pattern of the body part interacting with the physical object responsive to determining that no predicted motion pattern of the body part is associated with the identified characteristics of the physical object.
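The fallback just described, requesting a typical motion pattern from the user only when none is stored (steps 700/702), can be sketched as follows. The pattern database is a plain dict and `prompt_user` abstracts the actual UI; both are illustrative assumptions.

```python
def get_motion_pattern(object_type, pattern_db, prompt_user):
    """Return the predicted motion pattern for an object type. If none is
    associated with the object (step 700), ask the user to supply one via
    prompt_user, then store it for future lookups (step 702)."""
    if object_type not in pattern_db:
        pattern_db[object_type] = prompt_user(object_type)
    return pattern_db[object_type]
```

Because the result is cached, the user is only prompted the first time an unknown object type is encountered.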
  • Figure 4 illustrates an example of various operations which are performed by a user device 110 and an XR rendering device (server) 100 based on a user’s interaction with a physical object, in accordance with some embodiments of the present disclosure.
  • The user device 110 determines 400 that a user is engaged in an XR meeting application and is using an XR rendering device which is capable of capturing camera images and/or lidar point cloud data of a physical environment.
  • The user device 110 displays 402 a rendering of the user’s avatar in the XR meeting, which may be obtained from the XR rendering server 100.
  • A determination 404 and 406 is made, by the user device 110 or the XR rendering server 100, as to whether the user is interacting with a physical object, e.g., a coffee cup, chair, handle of a door, etc. If the determination is “yes”, then the XR rendering device 100 operates to identify 408 characteristics of the physical object, e.g., type of object (cup, chair, door, etc.), physical size, shape, texture, color/pattern, etc.
  • The XR rendering device 100 determines 410 the participant’s avatar posture based on the location of the physical participant relative to the physical object, and based on the characteristics of the physical object and characteristics of the participant, e.g., height, weight, gender, age, etc.
  • the XR rendering device 100 determines 412 a body part of the participant, e.g., hand, which is interacting with the physical object, e.g., coffee cup, and the predicted motion pattern of the body part based on characteristics that have been associated with the physical object, such as the predicted motion of the hand and coffee cup being moved from a resting location on a table to the mouth of the participant’s avatar.
  • the XR rendering device 100 then renders 414 graphical representations of the participant’s avatar interacting with a virtual object representation of the physical object, e.g., graphical representation of the coffee cup.
  • The XR rendering device 100 communicates the rendered graphical representations to the user device 110, which responsively displays 416 the avatar interacting with the virtual object.
  • The device or cloud server application may furthermore select digital object attributes according to resemblance, history, and/or context.
  • the operation to identify 502 characteristics of the physical object includes processing image data from a camera arranged to capture images of the physical object, to identify at least one of: size of the physical object, form of the physical object, and texture of the physical object.
  • the operation to render 506 the avatar of the participant interacting with the virtual object in the immersive XR environment is performed to render the avatar with a form, size, and/or texture defined based on the size of the physical object, the form of the physical object, and/or the texture of the physical object.
  • the operations may use a recorded (stored) history of a previously used object, such as a recorded historical indication of previously used mugs, chairs, and shoes.
  • the operations can record digital attributes of a previously used digital object, such as a chair, to be re-used in a future digital meeting session responsive to determining the user body posture indicates use of “same physical chair.”
  • the user may have an opportunity to select among a set of previously imported (stored) objects of similar type, e.g., satisfying a defined similarity rule.
  • Figure 8 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure.
  • the operations further include to compare 800 the processed image data of the physical object to historical virtual objects in a historical virtual object repository which defines sizes of virtual objects, forms of virtual objects, and/or textures of virtual objects with which the user has previously interacted in the immersive XR environment.
  • the operations further include to select 802 one of the historical virtual objects in the historical virtual object repository based on similarity between the image data of the physical object to the one of the historical virtual objects.
  • the operations further include to render 804 the virtual object in the immersive XR environment based on the selected one of the historical virtual objects.
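Steps 800–804 above (compare, select, render from the historical repository) amount to a similarity search over stored object attributes. The scoring scheme below, exact matches on form and texture plus a size-proximity term, is an illustrative assumption; the disclosure only requires some similarity measure.

```python
def select_historical_object(observed: dict, repository: list) -> dict:
    """Step 800: score each stored virtual object against the observed
    physical object's form, texture, and size. Step 802: return the highest
    scoring (most similar) stored object for rendering in step 804."""
    def score(stored):
        s = 0.0
        s += 1.0 if stored["form"] == observed["form"] else 0.0
        s += 1.0 if stored["texture"] == observed["texture"] else 0.0
        # Size proximity: 1.0 at identical size, falling off with difference.
        s += 1.0 / (1.0 + abs(stored["size_cm"] - observed["size_cm"]))
        return s
    return max(repository, key=score)
```

The same routine could serve the predefined-object repository of Figure 9 by swapping in that repository's entries.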
  • Context of a digital meeting may be used, such as whether the digital meeting is private, for leisure, or for business purposes. For example, the type of sitting object and associated body postures expected in a digital business meeting may differ from the sitting object and body postures expected and accepted in a private after-work digital meeting.
  • the operations further include to determine at least one of the following context parameters: XR rendering device location data; time data; date data; characteristic of a background noise component; and sensor data indicating a sensed type of physical object or environmental parameter.
  • the operation to render 506 the avatar of the participant interacting with the virtual object in the immersive XR environment is performed based on the identified characteristics of the physical object, the determined participant avatar posture, and the context parameters.
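The context parameters listed above (device location, time/date, background noise, sensor data) can be folded into a simple classifier that the renderer consults. The specific thresholds and labels are illustrative assumptions.

```python
def classify_context(location: str, hour: int, background_noise_db: float) -> str:
    """Derive a meeting context from XR device location data, time data, and
    a sensed background-noise level (assumed thresholds)."""
    if location == "office" and 8 <= hour < 18:
        return "business"
    if background_noise_db > 70:
        return "leisure"   # loud surroundings suggest an informal setting
    return "private"
```

The returned label could then select between, say, the formal and after-work sitting postures contrasted in the preceding example.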
  • operations by a participating XR device, a managing cloud server, or XR meeting application may select digital object attributes according to resemblance, history, and/or context as discussed above.
  • the identification, selection, and rendering application may provide to a database an instance of digital object attributes, such as physical form factor or textures, associated with the associated digital object.
  • a second user may also search or match for a second digital object among the set that corresponds to the first user's digital object and attributes.
  • The first user may in these operations further associate the provided digital object with a lifetime or persistence value indicating how long the digital object and attributes may be accessible to other participants.
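The lifetime/persistence value and the user-managed access constraints described here can be sketched as a small shared registry. Field names and the dict-based store are illustrative assumptions.

```python
import time

def share_object(obj_id, shared_db, lifetime_s, allowed_users, now=None):
    """Publish a digital object's attributes with a persistence value
    (expiry time) and an access list of users allowed to adopt it."""
    now = time.time() if now is None else now
    shared_db[obj_id] = {
        "expires": now + lifetime_s,
        "allowed": set(allowed_users),
    }

def can_adopt(obj_id, user, shared_db, now=None):
    """Check whether `user` may still adopt the shared object: it must exist,
    not be expired, and list the user in its access constraints."""
    now = time.time() if now is None else now
    entry = shared_db.get(obj_id)
    return bool(entry) and now < entry["expires"] and user in entry["allowed"]
```

A persistence value such as "duration of ongoing meeting" would simply map to an expiry computed from the meeting's scheduled end.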
  • Figure 9 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure.
  • The operations further include to compare 900 the processed image data of the physical object to predefined virtual objects in a predefined virtual object repository which defines sizes of virtual objects, forms of virtual objects, and/or textures of virtual objects which are predefined in the immersive XR environment.
  • the operations further include to select 902 one of the predefined virtual objects in the predefined virtual object repository based on similarity between the image data of the physical object to the one of the predefined virtual objects.
  • the operations further include to render 904 the virtual object in the immersive XR environment based on the selected one of the predefined virtual objects.
  • Figure 10 is a flowchart of operations that can be performed by an XR rendering device in accordance with some embodiments of the present disclosure.
  • the operations further include to obtain 1000 from another XR rendering device a predicted motion pattern of a body part that is defined as being associated with the identified characteristics of the physical object.
  • the operation to render the avatar of the participant interacting with the virtual object is performed based on the determined participant avatar posture and the predicted motion pattern of the body part that is defined as being associated with the identified characteristics of the physical object.
  • Within the set of digital objects available for rendering in the digital meeting, the identification/selection/rendering application may provide to a database (or similar) an instance of digital object attributes, such as physical form factor and textures, associated with the user's “own digital object.”
  • In the step of finding a matching digital object from the set of digital objects available for rendering in the digital meeting, a second user may now also search for and match a corresponding second digital object among the set, which also includes the first user's digital object and attributes.
  • The first user may in these aspects further associate the provided digital object with a lifetime or persistence value indicating how long the object may be accessible for other participants to adopt; for example, “duration of ongoing meeting,” “for meetings where the first user is present,” “today,” etc.
  • The first user may also specify which other users, and in which contexts, may use the “own object” in other users’ XR renderings, for example “only for family and friends,” or “only for business colleagues but not external meeting participants.”
  • Example XR Rendering Device Configuration.
  • FIG 11 is a block diagram of components of an XR rendering device 100 that are configured to operate in accordance with some embodiments of the present disclosure.
  • the XR rendering device 100 can include at least one processor circuit 1100 (processor), at least one memory 1110 (memory), at least one network interface 1120 (network interface), and a display device 1130.
  • the processor 1100 is operationally connected to these various components.
  • the memory 1110 stores executable instructions 1112 that are executed by the processor 1100 to perform operations.
  • the processor 1100 may include one or more data processing circuits, such as a general purpose and/or special purpose processor (e.g., microprocessor and/or digital signal processor), which may be collocated or distributed across one or more data networks.
  • the processor 1100 is configured to execute the instructions 1112 in the memory 1110, described below as a computer readable medium, to perform some or all of the operations and methods for one or more of the embodiments disclosed herein for an XR rendering device.
  • the XR rendering device may be separate from, and communicatively connected to, the participant devices, or may be at least partially integrated within one or more of the participant devices.
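The component arrangement described in this section (processor executing instructions held in memory to drive a display) can be caricatured in a few lines. This is a structural sketch only, with invented names, not the claimed device:

```python
class XRRenderingDevice:
    """Sketch: a processor circuit executes instructions stored in memory
    to perform operations that drive a display device."""

    def __init__(self, instructions, display):
        self.memory = list(instructions)  # stored executable instructions
        self.display = display

    def run(self):
        # "execute the instructions in the memory to perform operations"
        for instruction in self.memory:
            instruction(self.display)

frames = []
device = XRRenderingDevice([lambda d: d.append("render avatar")], frames)
device.run()
print(frames)  # ['render avatar']
```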
  • the terms “comprise”, “comprising”, “comprises”, “include”, “including”, “includes”, “have”, “has”, “having”, or variants thereof are open-ended and include one or more stated features, integers, elements, steps, components, or functions, but do not preclude the presence or addition of one or more other features, integers, elements, steps, components, functions, or groups thereof.
  • the common abbreviation “e.g.”, which derives from the Latin phrase “exempli gratia” may be used to introduce or specify a general example or examples of a previously mentioned item, and is not intended to be limiting of such item.
  • the common abbreviation “i.e.”, which derives from the Latin phrase “id est,” may be used to specify a particular item from a more general recitation.
  • Example embodiments are described herein with reference to block diagrams and/or flowchart illustrations of computer-implemented methods, apparatus (systems and/or devices) and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions that are performed by one or more computer circuits.
  • These computer program instructions may be provided to a processor circuit of a general purpose computer circuit, special purpose computer circuit, and/or other programmable data processing circuit to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, transform and control transistors, values stored in memory locations, and other hardware components within such circuitry to implement the functions/acts specified in the block diagrams and/or flowchart block or blocks, and thereby create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block(s).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An extended reality rendering device renders an immersive extended reality (XR) environment on a display device for viewing by a participant among a group of participants, who have associated avatars representing the group of participants rendered in the immersive XR environment, and performs operations. The operations include determining that the participant is interacting with a physical object in the participant's physical environment. The operations further include identifying characteristics of the physical object in the participant's physical environment with which the participant is interacting. The operations further include determining a participant avatar posture based on the participant's interactions with the physical object in the participant's physical environment. The operations further include rendering the participant's avatar interacting with a virtual object in the immersive XR environment based on the identified characteristics of the physical object and the determined participant avatar posture.
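The sequence of operations summarized in the abstract can be sketched end to end. Every function body below is an illustrative placeholder under assumed data shapes, not the claimed method:

```python
def render_participant_with_virtual_object(participant, physical_env, scene):
    # 1. determine whether the participant interacts with a physical object
    obj = physical_env.get(participant)
    if obj is None:
        return scene
    # 2. identify characteristics of that physical object
    characteristics = {"kind": obj["type"]}
    # 3. determine a participant avatar posture from the interaction
    posture = "holding" if obj["graspable"] else "touching"
    # 4. render the avatar interacting with a matching virtual object
    scene.append({"avatar": participant,
                  "virtual_object": characteristics["kind"],
                  "posture": posture})
    return scene

scene = render_participant_with_virtual_object(
    "alice", {"alice": {"type": "cup", "graspable": True}}, [])
print(scene[0]["posture"])  # holding
```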
PCT/EP2022/076308 2022-09-22 2022-09-22 User avatar and digital object rendering in extended reality based on user interactions with a physical object WO2024061462A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2022/076308 WO2024061462A1 (fr) 2022-09-22 2022-09-22 User avatar and digital object rendering in extended reality based on user interactions with a physical object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2022/076308 WO2024061462A1 (fr) 2022-09-22 2022-09-22 User avatar and digital object rendering in extended reality based on user interactions with a physical object

Publications (1)

Publication Number Publication Date
WO2024061462A1 true WO2024061462A1 (fr) 2024-03-28

Family

ID=83506483

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/076308 WO2024061462A1 (fr) User avatar and digital object rendering in extended reality based on user interactions with a physical object

Country Status (1)

Country Link
WO (1) WO2024061462A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9007422B1 (en) * 2014-09-03 2015-04-14 Center Of Human-Centered Interaction For Coexistence Method and system for mutual interaction using space based augmentation
US20170365084A1 (en) * 2016-06-15 2017-12-21 Fujitsu Limited Image generating apparatus and image generating method
US20190206141A1 (en) * 2017-12-29 2019-07-04 Facebook, Inc. Systems and methods for generating and displaying artificial environments based on real-world environments
US10924710B1 (en) * 2020-03-24 2021-02-16 Htc Corporation Method for managing avatars in virtual meeting, head-mounted display, and non-transitory computer readable storage medium
US20210392175A1 (en) * 2020-05-12 2021-12-16 True Meeting Inc. Sharing content during a virtual 3d video conference
KR20220026186 (ko) * 2020-08-25 2022-03-04 한국과학기술원 Mixed-reality telepresence system for heterogeneous spaces using a full-body avatar
US20220269336A1 (en) * 2021-02-25 2022-08-25 Quebec Inc. (Auger Groupe Conseil) Systems and methods for virtual interaction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG XUANYU ET AL: "Predict-and-Drive: Avatar Motion Adaption in Room-Scale Augmented Reality Telepresence with Heterogeneous Spaces", IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, IEEE, USA, vol. 28, no. 11, 31 August 2022 (2022-08-31), pages 3705 - 3714, XP011924452, ISSN: 1077-2626, [retrieved on 20220901], DOI: 10.1109/TVCG.2022.3203109 *

Similar Documents

Publication Publication Date Title
US10554921B1 (en) Gaze-correct video conferencing systems and methods
US10810797B2 (en) Augmenting AR/VR displays with image projections
US9842246B2 (en) Fitting glasses frames to a user
TW201911082 (zh) Image processing method, device, and storage medium
CN107210949 (zh) Message service method using characters, user terminal executing the method, and message application including the method
US20210201002A1 (en) Moving image distribution computer program, server device, and method
JP2023529126 (ja) Presenting avatars in three-dimensional environments
US20230130535A1 (en) User Representations in Artificial Reality
JP2016105279 (ja) Apparatus and method for processing visual data, and related computer program product
JP2022121451 (ja) Program, information processing device, information processing system, and information processing method
EP4340965A1 (fr) Rendering prioritization by an extended reality rendering device in response to rendering prioritization rules
US20230298240A1 (en) Control program for terminal device, terminal device, control method for terminal device, control program for server device, server device, and control method for server device
US20220405996A1 (en) Program, information processing apparatus, and information processing method
WO2024061462A1 (fr) User avatar and digital object rendering in extended reality based on user interactions with a physical object
JP7488420B2 (ja) Content providing device
CN116530078A (zh) 3D video conferencing system and method for displaying stereoscopically rendered image data captured from multiple viewpoints
JP6999538B2 (ja) Information processing method, information processing program, information processing system, and information processing device
US20240242449A1 (en) Extended reality rendering device prioritizing which avatar and/or virtual object to render responsive to rendering priority preferences
EP4341910A1 (fr) Extended reality rendering device for prioritizing which avatar and/or virtual object to render responsive to rendering priority preferences
US20240205370A1 (en) Extended reality servers performing actions directed to virtual objects based on overlapping field of views of participants
WO2024037160A1 (fr) Video image processing method and apparatus, computer device, and storage medium
WO2024083302A1 (fr) Virtual portal between a physical space and a virtual space in extended reality environments
JP2015516629A (ja) Structured-lighting-based content interaction in diverse environments
JP7507437B2 (ja) Computer program, method, and server
JP2020520487A (ja) Improved method and system for VR interaction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22777991

Country of ref document: EP

Kind code of ref document: A1