WO2022259253A1 - System and method for providing interactive multi-user parallel real and virtual 3D environments


Info

Publication number
WO2022259253A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
user
environment
real
representation
Prior art date
Application number
PCT/IL2022/050615
Other languages
French (fr)
Inventor
Alon Melchner
Original Assignee
Alon Melchner
Priority date
Filing date
Publication date
Application filed by Alon Melchner filed Critical Alon Melchner
Publication of WO2022259253A1 publication Critical patent/WO2022259253A1/en

Classifications

    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06Q30/0643 Graphical representation of items or shoppers
    • G02B2027/014 Head-up displays characterised by optical features comprising information/image processing systems

Definitions

  • the invention is in the field of extended reality, and in particular pertains to a system and method for providing interactive multi-user parallel real and virtual 3D environments.
  • U.S. patent application 2019/0050137 A1 discloses a method for generating a three- dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model, wherein the tags are respectively represented by tag icons that are spatially aligned with the defined locations of the three-dimensional model as included in different representations of the three-dimensional model rendered via an interface of a device, wherein the different representations correspond to different perspectives of the three-dimensional model, and wherein selection of the tag icons causes the tags respectively associated therewith to be rendered at the device.
  • the present invention improves upon the state-of-the-art in user visuals and interactions in a 3-D extended reality environment, as further described herein.
  • the present invention provides a system for providing an interactive multi-user 360° panoramic-image representation virtual 3D environment, the system comprising an environment representation database configured for storing a 360° panoramic representation of a 3D environment; a plurality of virtual user modules each configured to acquire and update a virtual position and orientation, in the representation, of each of one or more virtual users; a tracking module, configured to receive and store the virtual positions from the virtual user modules; a rendering module, configured to render, for each virtual user, a hemispheric background projection of the 360° panoramic representation on a background image layer, appropriate to the virtual position of the virtual user; wherein the rendering module is further configured, for each viewing virtual user, to place an avatar of other virtual users on an avatar layer, disposed appropriate to the virtual position of the viewing virtual user and each of the other virtual users, and overlay the avatar layer upon the background layer; and the virtual user modules are further configured to display a virtual 3D environment comprising the combined layers to its virtual user.
  • the present invention further provides the above system, wherein the virtual user modules are further configured to acquire and update a virtual orientation, action, and/or posture of each of one or more virtual users and the rendering module is further configured to render the avatar on the avatar layer according to the orientation, action, and/or posture.
  • the present invention further provides any one of the above systems, wherein the rendering module is further configured to render the size of the avatars as a function of a distance between the positions of viewing and viewed virtual users and of a scaling factor of the size of 2D panoramic images relative to the size of the real 3D environment from which the panoramic images were taken.
  • the present invention further provides any one of the above systems, wherein if more than one virtual user occupies the same virtual position in the virtual 3D environment, the rendering module spaces their avatars apart, appearing arrayed or clustered within some radius of the virtual position of the avatars.
  • the present invention further provides any one of the above systems, further comprising an objects module, configured to store representations and positions of objects, wherein the rendering module is further configured to overlay the virtual objects upon the background layer, according to the virtual positions of the virtual objects and the virtual position and line-of-sight direction of the viewing user.
  • the present invention further provides the previous system, wherein the representation of a said virtual object is accompanied by a hotspot, displayed as a hotspot icon on the virtual user module; when a virtual user selects the hotspot icon, the virtual user module is configured to display a 3D model of the virtual object, wherein said 3D model is manipulable by the virtual user.
  • the present invention further provides any one of the above systems, wherein the environment representation database is further configured to store a 3D model of the 3D environment and the rendering module is further configured to do one or more of: (a) employ the 3D model in order to determine distances from the viewing virtual user to other virtual users and to background walls and objects in the scene represented by the 2D panoramic images, and to size the avatars according to the distances; (b) render occlusion of an avatar or an object fully or partially obscured by the virtual 3D environment, as computed from the 3D model.
  • the present invention further provides a system for providing an interactive multi-user virtual 3D environment, comprising an environment representation database configured for storing a representation, comprising a 3D model, of a 3D environment; one or more virtual user modules each configured to acquire and update a virtual position of each of one or more virtual users; a tracking module, configured to receive and store the virtual positions from the virtual user modules; a rendering module, configured to receive the representation and accordingly render a virtual 3D environment for display on the virtual user modules; wherein the rendering module is further configured, for each viewing virtual user, to render avatars of other virtual users, according to the virtual positions of the viewing virtual user and of each of the other virtual users.
  • the invention further provides any one of the above systems, wherein the virtual user modules comprise VR glasses, VR contact lenses, a computing device with a display screen, a Web3D station or mobile device, or any combination thereof.
  • the invention further provides any one of the above systems, further comprising additional instances of the virtual representation, each virtual representation instance populated by a different group of the virtual users.
  • the invention further provides any one of the above systems, wherein visual data representing the avatars comprise one or more of a generic representation, an icon, a 2D image, a 3D model, a streaming video, or any combination thereof.
  • the invention further provides any one of the above systems, further comprising a virtual objects module, configured to store representations and positions of virtual objects, the rendering module is further configured to render the virtual objects in the 3D environment in the virtual positions.
  • the invention further provides any one of the above systems, wherein the rendering module is further configured to render a voice of another user, a volume of the voice adjusted according to a distance between the avatars of the other user and the viewing user.
  • the invention further provides any one of the above systems, wherein the rendering module is further configured to render the voice as if coming from the direction of the other user, e.g. by employing surround sound.
  • the invention further provides a system for providing an interactive multi-user parallel real and virtual 3D environment, comprising any one of the above systems and further comprising one or more physical user modules disposed in a real 3D environment, each physical user module configured to track a real position of a physical user; wherein the parallel system is further configured to display avatars of the physical users, overlain upon or placed in the virtual 3D environment, on the virtual user modules; and the parallel system is further configured to display avatars of the virtual users, overlain upon the real 3D environment, on the physical user modules.
  • the invention further provides any one of the above systems, wherein the virtual representation is constructed and/or updated in real time from one or more depth images of the real 3D environment acquired from one or more of the physical user modules and/or user acquisition/identification module in the physical 3D environment.
  • the invention further provides any one of the above systems, wherein the real 3D environment is a real store and the system is further configured to enable interaction between a virtual sales representative and a physical user customer.
  • the invention further provides any one of the above systems, wherein the real 3D environment is a real store and the system is further configured to enable interaction between any combination of virtual and physical user sales representatives and virtual and physical user customers.
  • the invention further provides any one of the above systems, wherein the virtual sales representatives comprise a virtual user, a sales bot, or any combination thereof.
  • the invention further provides any one of the above systems, wherein the sales bots are responsive to motions of a physical user customer in the real store.
  • the invention further provides any one of the above systems, further configured for a physical or virtual user/object to virtually teleport to a real 3D environment, becoming a virtual user/object in the real 3D environment.
  • the invention further provides any one of the above systems, wherein the teleported virtual user is an interior decoration assistant, virtually teleported to a real home of a physical user customer therein, and the system is further configured for rendering avatars of the assistant and the customer interacting.
  • the invention further provides any one of the above systems, further configured for teleportation of one or more virtual samples of any combination of furniture, ceramics, bathroom, home decor, carpets, floors, parquets, paint, wallpaper, outdoor furniture, swimming pools, garden design, awnings, windows, and doors to the home; and further configured for placement of the virtual samples in the home according to measurements made from the depth images.
  • the invention further provides any one of the above systems, further configured for one or more additional virtual users to virtually navigate or virtually teleport to the home and interact with the customer.
  • the invention further provides any one of the above systems, wherein the representation of the home is modified by a set of tools enabling removal of objects appearing in the real 3D environment from the representation of the home, by clearing or hiding 3D triangles and mesh elements from the representation.
  • the invention further provides any one of the above systems, wherein the virtual 3D environment is a virtual store with a bot serving as a virtual sales representative.
  • the invention further provides any one of the above systems, further comprising an analytics module configured to collect and statistically analyze positions, orientations, and/or actions, of the virtual and/or physical users.
  • the invention further provides any one of the above systems, further configured to collect user actions, timestamps, and/or durations of the user actions and include them in the statistical analysis; and/or to compare virtual with real user activities.
  • the invention further provides any one of the above systems, further comprising tools for a teleportation platform: capturing a physical 3D environment; creating the representation of the physical 3D environment; tracking users and syncing between environments; enabling teleportation; scanning for creation or update of the representation of the real 3D environment; and placing objects and adding them to the shared environment; the tools further configured for programming or scripting one or more of: a virtual store; converting existing online stores to multi-user virtual stores; creating an AR/XR layer over a physical 3D environment; and adding a virtual layer over an existing virtual layer.
  • the invention further provides any one of the above systems, wherein the representation of the real 3D environment builds up from many physical users traversing the real 3D environment.
  • the invention further provides a system for providing an interactive 360° panoramic-image representation virtual 3D environment, the system comprising an environment representation database configured for storing a 360° panoramic representation of a 3D environment; one or more virtual user modules each configured to acquire and update a virtual position and orientation, in the representation, of each of one or more virtual users; a tracking module, configured to receive and store the virtual positions from the virtual user modules; a rendering module, configured to render, for each virtual user, a hemispheric background projection of the 360° panoramic representation on a background image layer, appropriate to the virtual position of the virtual user; an objects module, configured to store representations and positions of objects, wherein the rendering module is further configured to overlay the virtual objects upon the background layer, according to virtual positions of the virtual objects and the virtual position and line-of-sight direction of a virtual user, wherein the representation of an object is accompanied by a hotspot, displayed as a hotspot icon on the virtual user module; when a virtual user selects the hotspot icon, the virtual user module is configured to display a 3D model of the virtual object, wherein said 3D model is manipulable by the virtual user.
  • the invention further provides a method for providing a live rendering, in an interactive, multi-user 3D-modeled virtual 3D environment, to a viewing real virtual user of an avatar representing another virtual user, comprising steps of determining the location of a first (viewing) virtual user; determining the location of a second (other) virtual user; transmitting the location of the second user to a remote server; transmitting the location of the second virtual user to a user device of the first virtual user; and placing an avatar of the second user within the 3D model.
  • the invention further provides a method for providing a live rendering, in an interactive, multi-user 3D-modeled virtual 3D environment, of a virtual object to a viewing real or virtual user, comprising steps of determining the location of a user; determining the location of an object; transmitting the location of the object to a remote server; transmitting the location of the object to a user device of the user; placing a virtual representation of the object within the 3D model.
  • the invention further provides a method for providing a live rendering, in an interactive, multi-user 3D-modeled virtual environment, of the orientation of a 2D image representation of an object in a 3D-modeled 3D environment, comprising steps of determining the location of a user; determining the location of a 2D image model of an object; updating, on a server, the orientation of the 2D image to face the location of the user; if the 2D image object is moved by the user, updating the 2D image position on the server for real-time effect to the user; and if the 2D image object is transformed by the user, updating the 2D image on the server for real-time effect to the user.
  • the invention further provides a method for synchronizing the placement and/or orientation of objects in a multi-user parallel real and virtual 3D environment, comprising steps of determining the location of a physical user in a real 3D environment; defining the virtual position of the real user within a parallel 3D model representation of the real 3D environment; transmitting the virtual position of the real user to a remote server; transmitting the virtual positions of each user to the devices of each other user; placing a virtual object (e.g. an avatar), within the 3D model representation, at the virtual position of the physical user, for display to virtual users; and placing virtual objects, at the virtual positions of virtual users, for display by AR to the physical user in the real 3D environment.
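  • By way of a non-limiting illustration of the synchronization steps above, the pose of each tracked user may be relayed through a server and mirrored as an avatar on every other user's device. The sketch below assumes a WebSocket relay; the message shape, URL, and the placeAvatar hook are illustrative names, not part of the disclosure.

```typescript
// Hypothetical message carrying a user's pose in the shared 3D-model coordinates.
interface PositionUpdate {
  userId: string;
  kind: "physical" | "virtual";
  position: { x: number; y: number; z: number }; // virtual position in the 3D model
  yawDeg: number;                                 // facing direction
  timestamp: number;
}

// Illustrative app-specific rendering hook: place or move an avatar at a pose.
declare function placeAvatar(
  id: string,
  p: { x: number; y: number; z: number },
  yawDeg: number
): void;

// Client side: report the local user's pose and mirror every other user as an avatar.
const socket = new WebSocket("wss://tracking.example.com/session/demo"); // placeholder URL

function reportPose(update: PositionUpdate): void {
  socket.send(JSON.stringify(update));
}

socket.onmessage = (event) => {
  const update: PositionUpdate = JSON.parse(event.data);
  // Physical users are shown to virtual viewers as avatars in the 3D model;
  // virtual users are shown to physical viewers as AR overlays at the same coordinates.
  placeAvatar(update.userId, update.position, update.yawDeg);
};
```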
  • Fig. 1 shows a functional block diagram of a computer-based system for providing interactive multi-user virtual or parallel real and virtual environments, according to some embodiments of the invention.
  • Figs. 2A and 2B show a view of a parallel real and virtual 3D environment, to a physical user and to a virtual user, respectively.
  • Fig. 3 shows a virtual 3D environment as represented in a 360° panoramic image representation, according to some embodiments of the invention.
  • Fig. 4 is a user’s view in a 360° panoramic environment, according to some embodiments of the invention.
  • Figs. 5A-5C show how an object in a virtual environment is accompanied by a hotspot icon for manipulation of a 3D model version of the object.
  • Figs. 6A and 6B show 2D models (images) in a 3D virtual environment, as viewed from two different perspectives in the virtual 3D environment, according to some embodiments of the invention.
  • Fig. 7 is a flowchart of a method for providing a live rendering, in an interactive, multi-user 3D-modeled virtual 3D environment, to a viewing real virtual user of an avatar representing another virtual user, according to some embodiments of the invention.
  • Fig. 8 is a flowchart of a method for live update of avatars of multiple users active in the same 2D/3D panoramic image, whether or not within a 3D-modeled environment that is synced with the panoramic image location.
  • Fig. 9 is a flowchart of a method for live update of 2D and/or 3D models and images that are active in the same 2D/3D panoramic image, whether or not within a 3D-modeled environment that is synced with the panoramic image location.
  • Fig. 10 is a flow chart of a method for providing a live rendering, in an interactive, multi-user 3D-modeled virtual environment, of a virtual object to a viewing real or virtual user, according to some embodiments of the invention.
  • Fig. 11 is a flow chart of a method for providing a live rendering, in an interactive, multi-user 3D-modeled virtual environment, of the orientation of a 2D image representation of an object in a 3D-modeled environment, according to some embodiments of the invention.
  • Fig. 12 is a flow chart of a method 1500 for synchronizing the placement and/or orientation of objects in a multi-user parallel real and virtual environment, according to some embodiments of the invention.
  • Fig. 13 is a flow chart of a method for synchronizing the placement and/or orientation of objects in a multi-user, multi-platform virtual or parallel real and virtual 3D environment, according to some embodiments of the invention.
  • Real or physical 3D environment: A physical space with non-movable objects, movable objects, and/or people therein.
  • Panoramic images: A representation of a physical 3D environment comprising a 2D image or images covering a 360° angular range of viewing directions. The panoramic images are taken at various points in the physical 3D environment. A user may virtually navigate only to the points at which panoramic images were taken.
  • 3D model: A representation of a physical 3D environment comprising a 3D image of the physical 3D environment.
  • a 3D image is acquired by taking depth images (e.g., using LIDAR) at one or more points and viewing angles in the physical 3D environment, thereby acquiring a 3D image of surfaces in the physical 3D environment.
  • the depth image camera may be accompanied by an additional camera for capturing the color and/or brightness at points of the 3D surfaces.
  • a 3D model can be designed, for example, using an animation program.
  • positions to which users may navigate are limited only by the resolution of the 3D model.
  • Interactive multi-user virtual 3D environment or simply virtual 3D environment:
  • a shared virtual 3D environment of panoramic images or 3D model in which multiple users can interact may refer to a visual rendering as viewed by a user and/or to a digital representation thereof, as is clear from the context.
  • Interactive multi-user parallel real and virtual 3D environments, or simply parallel 3D environment: A real 3D environment and a virtual duplicate or representation of the real 3D environment, in which physical users and objects in the real 3D environment can interact with virtual users and objects in the virtual 3D environment, and vice-versa.
  • Virtual user: A user engaged in viewing and being position-tracked in a virtual 3D environment.
  • the term “virtual” in “virtual user” does not mean or imply that a human user does not exist or that a non-human user (e.g., bot) exists only in cyberspace. (In contrast, in many contexts of this disclosure, the term “virtual” does conform to its traditional meaning of an object or action being in cyberspace and/or its rendering on an XR display.)
  • Physical user: A user existing in a real 3D environment while being position-tracked and viewing AR content. The user’s existence is represented in a virtual 3D environment parallel to the real 3D environment.
  • Avatar: As used in this disclosure, a 2D or 3D virtual object representing a user displayed in a virtual 3D environment or parallel 3D environment.
  • Virtual teleportation: The introduction of a user or object from a first virtual or real 3D environment into a second virtual or real 3D environment, becoming a virtual user or object appearing in the second 3D environment.
  • Fig. 1 shows a computer-based system 10 for providing an interactive multi-user virtual 3D environment 125B, or parallel real 3D environment 125A and virtual 3D environment 125B, according to some embodiments of the invention.
  • the system 10 comprises a representation database 105, storing a representation of one or more 3D environments 125A.
  • the 3D environments 125A can be any combination of stores, architectural settings, conference rooms, gaming fields, classrooms, or any other physical settings.
  • the representation may be a reproduction of a real 3D environment 125A or may be a simulation.
  • the representation may comprise 2D panorama images, a 3-D model, or a combination thereof.
  • the representation may be acquired by a camera and/or LIDAR acquisition of a real 3D environment, may be built from an animation, or any combination thereof.
  • the representation may be pre-made or acquired and/or dynamically updated from a physical environment in real time.
  • the system 10 further comprises one or more virtual user modules 115B.
  • a virtual user module 115B has a VR, AR, MR, or XR display and a virtual navigation pointer or tracker. Examples of a virtual user module 115B include a mobile or stationary computing device, VR/AR/MR/XR glasses, a Web3D station or mobile device, or any combination thereof.
  • Each virtual user module 115B acquires and updates the current virtual position of a virtual user 120B navigating with a pointer (e.g. joystick) within the virtual 3D environment 125B.
  • a pointer e.g. joystick
  • the virtual user 120B adjusts their virtual position by physically moving, in which case the virtual position can be determined using technologies such as GPS, optical flow, SLAM, or triangulation by locating-beacons placed in the physical environment of the virtual user 120B.
  • Virtual users 120B see other virtual users represented as avatars in the virtual 3D environment 125B, as further described herein.
  • the system 10 further comprises one or more physical user modules 115A.
  • a physical user module 115A has an AR or MR display and a physical user navigation tracker. Examples of a physical user module 115A include AR or MR glasses or contact lenses, and a mobile device.
  • Each physical user module 115A acquires and updates the physical location coordinates of a physical user 120A.
  • the physical user location can be determined using any technology known in the art, such as GPS, optical flow, SLAM, or triangulation by locating-beacons placed in the physical 3D environment.
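  • As a non-limiting illustration of beacon-based positioning, a user's planar position can be estimated by trilateration from three beacons whose positions are known and whose ranges are measured. The sketch below is one possible realization; beacon coordinates and ranges are illustrative.

```typescript
interface Beacon { x: number; y: number; r: number } // known position (m), measured range (m)

// Planar trilateration: intersect the three range circles by reducing them to
// two linear equations and solving with Cramer's rule.
function trilaterate(b1: Beacon, b2: Beacon, b3: Beacon): { x: number; y: number } | null {
  const a1 = 2 * (b2.x - b1.x), c1 = 2 * (b2.y - b1.y);
  const d1 = b1.r ** 2 - b2.r ** 2 - b1.x ** 2 + b2.x ** 2 - b1.y ** 2 + b2.y ** 2;
  const a2 = 2 * (b3.x - b1.x), c2 = 2 * (b3.y - b1.y);
  const d2 = b1.r ** 2 - b3.r ** 2 - b1.x ** 2 + b3.x ** 2 - b1.y ** 2 + b3.y ** 2;
  const det = a1 * c2 - a2 * c1;
  if (Math.abs(det) < 1e-9) return null; // beacons are collinear: no unique fix
  return { x: (d1 * c2 - d2 * c1) / det, y: (a1 * d2 - a2 * d1) / det };
}

// Example: a user roughly 3 m from a door beacon and 4 m from two shelf beacons.
const fix = trilaterate(
  { x: 0, y: 0, r: 3 },
  { x: 5, y: 0, r: 4 },
  { x: 0, y: 5, r: 4 },
);
```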
  • Physical users 120A see virtual users 120B and/or physical users 120A (in other real 3D environments 125A) represented as avatars, overlain upon the real 3D environment 125A, as further described herein.
  • the system 10 further comprises a tracking module 112.
  • the tracking module 112 is in communicative connection with the physical user modules 115A and the virtual user modules 115B.
  • the tracking module 112 receives virtual positions of virtual users 120B from virtual user modules 115B and/or real positions of physical users 120A from physical user modules 115A.
  • the system 10 further comprises a rendering module 110.
  • the rendering module 110 acquires a representation of the physical 3D environment 125A and renders a virtual 3D environment 125B on a virtual user module 115B or virtual objects 135B on a physical user module 115A, as viewed by each virtual or physical user 120.
  • the rendered elements have a realistic appearance, according to the position and line-of-sight of a viewing user 120. (References to 115 or 120 without a letter refer to one or more of any combination of -A and -B.)
  • the rendering module 110 also renders avatars of virtual users and physical users according to positions received from the tracking module 112, as further described herein.
  • the rendering module 110 renders the avatars so that, when viewed through the user modules 115A-B, they appear to the user 120A-B to be in the virtual 3D environment 125B and/or physical 3D environment 125A.
  • the system further comprises an objects module 130, configured to store representations and positions of virtual objects 135B to be overlain upon or placed in a virtual and/or real 3D environment.
  • the virtual objects 135B may be virtual only or may be virtual representations of a real object 135A in a real 3D environment 125A.
  • the objects module 130 feeds representations and positions of virtual objects 135B to the rendering module 110, which renders the object overlain upon or placed in virtual 3D environments 125B and/or real 3D environments 125A.
  • modules described herein may be combined in one piece of hardware.
  • the rendering module 110 may be a part of the same unit as the virtual user module 115B.
  • a single described module may be distributed over a plurality of hardware units.
  • embodiments described throughout this disclosure are non-limiting and exemplary in nature. Therefore, other embodiments of the invention may include described features from two or more different sections, paragraphs, and/or drawings of this disclosure.
  • the system 10 provides a multi-user virtual 3D environment 125B.
  • the virtual 3D environment 125B may be acquired from a real 3D environment and virtualized as 2D panoramic images, a 3D model, or a combination thereof.
  • Virtual users 120B are each in possession of a virtual user module 115B including a display and enabling virtual navigation through the virtual 3D environment 125B and accurate tracking of virtual positions therethrough.
  • the virtual user module 115B may comprise VR/AR/MR (XR) glasses or contact lenses, a computing device with a 2D display screen and navigating device (e.g. joystick), a smartphone (e.g.
  • the display presents a rendering of the virtual 3D environment 125B according to each user’s virtual position and virtual line-of-sight.
  • the rendering module 110 renders avatars of virtual users 120B in the virtual 3D environment 125B. To a viewing virtual user 120B, other virtual users 120B appear as avatars on the display of virtual user module 115B of the viewing user, overlain upon or placed in the background of the virtual 3D environment 125B.
  • the visual data representations of the avatars may be generic, iconic representations, 2D user images, 3D user models, streaming video, streaming depth scan, or any combination thereof.
  • the avatar representations may be provided to the rendering module 110 by the virtual users, e.g. through their virtual user modules 115B; or may be stored elsewhere on the system 10 or a computing device accessible thereto.
  • the avatars are displayed realistically on a viewing user’s display, positioned according to the virtual locations of the other virtual users 120B and the virtual location and line of sight of the viewing virtual user 120B.
  • the display is updated in real time, so that the viewing virtual user 120B sees the avatars moving as other virtual users 120B and the viewing virtual user 120B navigate through the virtual 3D environment 125B.
  • the virtual 3D environment 125B and avatars appear to move accordingly. Nearby virtual users 120B appear as they enter the viewing user’s virtual field-of-view, either by navigation of the virtual users 120B or rotation of the viewing user. If the virtual representation comprises a 360° panoramic image representation, possible positions of virtual users are limited to positions at which a 360° panoramic image has been acquired and stored in the representation database 105. If the virtual representation is a 3D model, position resolution of users and avatars is limited only by the resolution of the 3D model.
  • Any number of virtual users may share the same virtual 3D environment 125B, e.g. on a website. If more than one virtual user occupies the same virtual position in the virtual 3D environment 125B — which may be likely with a 360° panoramic image representation, since panoramic images are characteristically taken only at discrete positions along a path — the avatars may be spaced apart, appearing arrayed or clustered within some radius of the virtual position.
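  • One possible way to realize this spacing, offered as a non-limiting sketch, is to place co-located avatars on a small ring around the shared capture point; the ring radius and layout below are illustrative choices.

```typescript
interface Vec3 { x: number; y: number; z: number }

// Spread avatars that share the same capture point on a small ring around it,
// so the viewing user sees them clustered but not stacked on top of one another.
function spreadCoLocatedAvatars(center: Vec3, userIds: string[], radius = 0.6): Map<string, Vec3> {
  const placed = new Map<string, Vec3>();
  userIds.forEach((id, i) => {
    const angle = (2 * Math.PI * i) / userIds.length;
    placed.set(id, {
      x: center.x + radius * Math.cos(angle),
      y: center.y, // keep feet on the floor plane
      z: center.z + radius * Math.sin(angle),
    });
  });
  return placed;
}
```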
  • the rendering module 110 may render multiple instances of a virtual 3D environment 125B populated by different groups of virtual users, e.g. groups of friends. A small group of friends may thereby share a virtual 3D environment 125B among themselves, without overcrowding of avatars.
  • avatars rotate according to changes in orientation of the virtual user whose avatar is being viewed.
  • the rendering module 110 computes which portions of an avatar are occluded (i.e. a surface of the 3D model interposes between a viewing user and the avatar). The rendering module 110 then cuts out (makes transparent) the occluded portions of the avatar. On the display of the viewing physical or virtual user, the avatar appears, realistically, as partially obstructed by the obstructing surface in the 3D environment.
  • the occlusion rendering may be disabled and the avatar, or a faded/dotted-outline version, or a substitute icon displayed, so that the viewing user is aware of the hidden user’s presence in the virtual 3D environment 125B.
  • the rendering module 110 renders occlusion of one avatar by another avatar.
  • Occlusion rendering may be implemented where the 3D model is also used to visually render the 3D environment for viewing, or where the 3D model is used for computing occlusion in combination with a 360° panorama representation that is used for displaying the virtual 3D environment 125B.
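  • Where a 3D model of the scene is available, the occlusion test may, for example, be approximated by raycasting from the viewer toward sample points on the avatar. The sketch below assumes a generic raycastDistance helper (an illustrative name, not part of the disclosure) that returns the distance to the nearest model surface along a ray.

```typescript
interface Vec3 { x: number; y: number; z: number }

// Hypothetical helper: distance from `origin` along unit direction `dir` to the nearest
// surface of the environment's 3D model, or Infinity if nothing is hit.
declare function raycastDistance(origin: Vec3, dir: Vec3): number;

function isOccluded(viewer: Vec3, avatarPoint: Vec3): boolean {
  const dx = avatarPoint.x - viewer.x;
  const dy = avatarPoint.y - viewer.y;
  const dz = avatarPoint.z - viewer.z;
  const dist = Math.hypot(dx, dy, dz);
  const dir = { x: dx / dist, y: dy / dist, z: dz / dist };
  // If a model surface lies between the viewer and the avatar sample point,
  // that part of the avatar is cut out (or faded, if occlusion display is disabled).
  return raycastDistance(viewer, dir) < dist;
}
```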
  • Some embodiments of the invention provide a system 10 for an interactive multi-user parallel real 3D environment 125A and virtual 3D environment 125B.
  • the parallel 3D environments system duplicates a real environment, including physical users and objects, to a virtual one; and enables virtual users, in the duplicate virtual environment, to interact with the real users and objects.
  • the virtual users may interact with the real environment, including real users and real objects, and the physical users may interact with the virtual users and virtual objects.
  • the physical users 120A and real objects 135A are present and participating in parallel with the virtual users and virtual objects 135B.
  • the physical users may be each in possession of a physical-user module 115A such as glasses for viewing AR, MR, or XR content, while enabling accurate tracking of real positions of the physical user in the real 3D environment while viewing avatars of virtual users overlain upon or placed in the real 3D environment.
  • a physical user module 115 can be of another type, such as a smartphone, Bluetooth, RF, other transmitting devices, real-time scanners such as infrared or LIDAR that constantly scan and report the physical user’s position in the real 3D environment.
  • the physical user modules 115A can comprise a single display or projection visible to multiple physical users 120A.
  • the physical user modules 115A can comprise a single tracking mechanism that tracks positions of multiple physical users 120A.
  • a physical user 120A walking through a real 3D environment such as a shop, warehouse, shopping mall, museum or any place in the real world is thereby rendered with an avatar in the virtual 3D environment 125B according to his exact position in the real world.
  • the displays of virtual user modules 115B and physical-user modules 115A are updated in real time, so that each viewing real and virtual user sees the avatars of other virtual users and physical users moving as the other users navigate through the parallel 3D environments.
  • Identifying the position of a physical user or objects is performed with any combination of 1) AR; 2) depth cameras, LIDAR, and/or other environmental detection technologies, optical flow, SLAM; and 3) sensors such as BLE, WiFi, RF, building navigation, infrared, etc.
  • identifying the position of one or more physical users may alternatively be performed by a single piece of hardware (e.g., a user acquisition/identification module) placed in or near the real 3D environment.
  • Fig. 2A shows the view of a physical user in a real 3D environment.
  • Richard, Sandra, and Mari are virtual users. They appear to the viewing physical user as AR avatars overlain on the real 3D environment.
  • Joe is a physical user in the physical 3D environment; he appears whether or not he is a participant in the parallel system (in this example, he is participating and being tracked).
  • Fig. 2B shows the view of the same parallel 3D environment to a virtual user.
  • Richard, Sandra, and Mari are virtual users. They appear to the viewing virtual user as virtual avatars in the virtual 3D environment. Similarly, Joe, even though in the real 3D environment, also appears to the viewing virtual user as an avatar in the virtual 3D environment.
  • a physical user module 115A may also display virtual objects 135B as they appear in the virtual 3D environment 125B.
  • virtual user modules 115B display virtual objects 135B rendered from a representation of a real object 135A in the real environment 125A.
  • Fig. 3 shows a virtual 3D environment 125B as represented in a 360° panoramic image representation.
  • the rendering module 110 receives virtual positions of virtual users 120B in the virtual 3D environment.
  • each virtual user 120B is positioned at a point 100 at the center of a hemispheric background projection 201 containing the image(s) constituting the 360° panoramic image representation.
  • the rendering module 110 renders, for each virtual user 120B, the hemispheric background projection 201 on a background image layer, in the format of the virtual user module 115B of the virtual user at the location 100.
  • For each viewing virtual user 120B (e.g., at point 100), the rendering module 110 further places an avatar of other users (e.g., at point 102) on an avatar layer. To do so, the rendering module 110 computes the point on the background image layer corresponding to the other user’s position, and the avatar is placed at the corresponding point on the avatar layer. The rendering module 110 overlays the avatar layer upon the background layer. The combined layers are received by the virtual user module 115B, for display to the viewing user. The avatars’ positions relative to the background thereby appear realistic to the viewing user, as they would be seen in a real background. The virtual positions are limited to points at which 360° panoramic images exist in the 3D environment representation database 105. If more than one virtual user 120B occupies the same position, the rendering module 110 may place avatars in some arrangement about the position.
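  • As a non-limiting sketch of this placement step, another user’s virtual position may be converted to a yaw/pitch bearing relative to the viewer at the panorama capture point, which indexes the corresponding point on the hemispheric background projection; the function and coordinate convention below are illustrative.

```typescript
interface Vec3 { x: number; y: number; z: number }

// Bearing of another user's position as seen from the panorama capture point.
// Yaw is measured around the vertical (y) axis; pitch is elevation above the horizon.
function bearingFromViewer(viewer: Vec3, other: Vec3): { yaw: number; pitch: number; distance: number } {
  const dx = other.x - viewer.x;
  const dy = other.y - viewer.y;
  const dz = other.z - viewer.z;
  const horizontal = Math.hypot(dx, dz);
  return {
    yaw: Math.atan2(dx, dz),           // radians; selects the point on the 360° image
    pitch: Math.atan2(dy, horizontal), // radians above/below the horizon line
    distance: Math.hypot(dx, dy, dz),  // viewer-to-avatar distance, used when sizing the avatar
  };
}
```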
  • the rendering module 110 scales the size of avatars according to the distance between the viewing user and other user and a scaling factor between the 360° panoramic image representation and avatars.
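  • Purely as an illustration, the scaling step may be realized as a simple perspective fall-off, with one scaling factor relating panorama-image pixels to real-world meters at a reference distance; the constants below are illustrative assumptions, not taken from the disclosure.

```typescript
// Apparent avatar height (in panorama-image pixels) for a user standing
// `distanceMeters` away from the panorama capture point.
function avatarHeightPixels(
  realHeightMeters: number,      // e.g. 1.75 for a typical adult avatar
  distanceMeters: number,        // viewer-to-avatar distance in the virtual space
  pixelsPerMeterAtOneMeter = 600 // illustrative scaling factor for a given panorama set
): number {
  return (realHeightMeters * pixelsPerMeterAtOneMeter) / Math.max(distanceMeters, 0.1);
}

// Example: an avatar 2 m away renders at half the height of one 1 m away.
const near = avatarHeightPixels(1.75, 1); // 1050 px
const far = avatarHeightPixels(1.75, 2);  // 525 px
```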
  • the environment representation DB 105 further stores a 3D model of the virtual 3D environment 125B.
  • the 3D model is aligned with the 2D panoramic images.
  • the rendering module 110 employs the 3D model in order to determine distances between the virtual user position and positions of background walls and objects in the scene represented by the 2D panoramic images.
  • the added 3D model provides some advantages. As actual distances to the background are known from the 3D model, no scaling factor is required. Additionally, the 3D model enables rendering occlusion between objects in the background and avatars and between avatars themselves.
  • the virtual 3D environment is paralleled with a real 3D environment.
  • the 2D panoramic images may be taken earlier and stored or may be taken in real time from the real 3D environment.
  • the panoramic images may be accompanied by a 3D model, which may also be taken earlier and stored or may be taken in real time.
  • Fig. 4 is a user’s view in a 360° panoramic 3D environment, showing sizing and orientation of avatars 602, 603 as well as sizing of virtual objects 601 at different positions.
  • Avatars 602, 603 are represented on a different layer than the background layer 605 and, when rendered, are overlain thereon.
  • Figs. 5A-5C show a virtual object 660 accompanied by a hotspot icon 655, enabling selection and manipulation of a 3D model 665 of the virtual object 660, according to some embodiments of the invention.
  • a virtual object 660 in an environment represented by a 360° panoramic image 600 is accompanied by one or more hotspots, displayed on a virtual or physical user module as hotspot icons 655.
  • Each hotspot icon 655 can be a rollover button or a clickable button.
  • a manipulable 3D model 665 of the virtual object 660 appears overlain on or near the virtual object 660.
  • a pointer tail 670 indicates to which virtual object 660 or which hotspot icon 655 the 3D model 665 refers.
  • the user touches a hotspot icon 655 near a virtual sneaker display 660.
  • An enlarged 3D model 665 of the sneaker appears.
  • the user may use his finger to rotate the 3D model 665, in order to see the sneaker from different angles.
  • manipulation of the 3D model 665 by one user may be seen by other users.
  • multiple users may manipulate the 3D model 665 object at the same time.
  • a manipulable 3D object 665 may appear within the 360° panoramic environment as originally viewed by the user (e.g., without selecting a hotspot icon).
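  • A non-limiting sketch of this hotspot interaction follows, assuming a scene API with an addModel hook and a shared-state channel so that manipulations by one user become visible to others; all names are illustrative and not part of the disclosure.

```typescript
interface Hotspot {
  id: string;
  modelUrl: string; // 3D model of the displayed object, e.g. the sneaker
  anchor: { x: number; y: number; z: number };
}

// Hypothetical scene and session hooks.
declare const scene: {
  addModel(url: string, at: { x: number; y: number; z: number }): {
    setRotationY(rad: number): void;
    remove(): void;
  };
};
declare function broadcast(event: { hotspotId: string; rotationY: number }): void;

// When a user taps the hotspot icon, an enlarged, rotatable copy of the object appears.
function onHotspotSelected(hotspot: Hotspot) {
  const model = scene.addModel(hotspot.modelUrl, hotspot.anchor);
  let rotationY = 0;
  return {
    // Finger-drag handler: rotate the model and share the change with other users.
    rotate(deltaRad: number) {
      rotationY += deltaRad;
      model.setRotationY(rotationY);
      broadcast({ hotspotId: hotspot.id, rotationY });
    },
    dismiss() {
      model.remove();
    },
  };
}
```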
  • a parallel real and virtual 3D environment system 10 is configured to construct, in real time, a virtual 3D environment 125B from a physical 3D environment 125A of a physical user 120A.
  • the virtual 3D environment 125B may be a 360° panoramic image representation or a 3D model.
  • the physical-user module 115A further comprises a panoramic camera and/or depth-image camera that acquires a virtual representation of the real 3D environment.
  • depth-image data of the real 3D environment may be processed and displayed in real time.
  • the 3D model is stored in the representation DB 105 and shared in real time with virtual users.
  • the view of the 3D model may flow in real time with movement of the physical-user module 115A.
  • the depth-image camera may be implemented by technologies such as LIDAR, lasers, depth camera, etc.
  • the 3D model is stored in the representation DB.
  • the 3D model provides a virtual 3D environment 125B that is an accurate reproduction of the real 3D environment with regard to location, size, and angular information.
  • the virtual 3D environment is thus acquired in real time, and the real-time-acquired 3D environment can be viewed in real time as well.
  • a collection of virtual 3D environments 125B (comprising 2D panoramic images and/or 3D models, stored or acquired in real time by physical users) is stitched together to form a composite virtual 3D environment 125B.
  • Physical 3D environments represented by existing virtual 3D environments 125B or being acquired need not be actually connected or even close together.
  • the system stitches the virtual 3D environments 125B together to match in location, sizes, and angles, such that the composite virtual 3D environment 125B appears realistic to virtual users navigating therethrough.
  • the appearance of one or more of the avatars may be obtained from a shared video stream.
  • the rendering module 110 renders the shared video stream to be presented at, and to move with, the position of the avatar.
  • the shared video stream may be represented as a head of the avatar, or as a screen appearing nearby or appearing to support the avatar.
  • the rendering module does any combination of recognizing the face (of a user 120B) and cutting it out; presenting a video screen as acquired by a camera; presenting a depth camera/scan stream; or rendering a 3D model according to movement of the user.
  • An audio stream, generated by the rendering module 110, that accompanies an avatar or a video stream can be implemented with immersive 3D stereoscopy, in which the audio realistically sounds as if coming from the avatar, i.e. from the virtual direction and distance of the avatar from the virtual user, and/or in accordance with virtual acoustics of the virtual 3D environment 125B.
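  • One concrete, non-limiting way to realize distance- and direction-dependent voice rendering in a browser client is the Web Audio PannerNode, which attenuates with distance and spatializes relative to the listener; the parameter values below are illustrative assumptions.

```typescript
// Route a remote user's voice stream through a PannerNode positioned at the avatar,
// so volume falls off with distance and the voice appears to come from the avatar's direction.
function attachSpatialVoice(ctx: AudioContext, voiceStream: MediaStream) {
  const source = ctx.createMediaStreamSource(voiceStream);
  const panner = new PannerNode(ctx, {
    panningModel: "HRTF",
    distanceModel: "inverse",
    refDistance: 1,   // full volume within roughly 1 virtual meter
    rolloffFactor: 1,
  });
  source.connect(panner).connect(ctx.destination);

  // Call whenever the avatar (or the listening user) moves in the virtual 3D environment.
  return (avatarPos: { x: number; y: number; z: number }) => {
    panner.positionX.value = avatarPos.x;
    panner.positionY.value = avatarPos.y;
    panner.positionZ.value = avatarPos.z;
  };
}
```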
  • the system 10 further comprises an objects module 130.
  • the objects module 130 stores representations of virtual objects 135B, such as merchandise for sale.
  • the rendering module 110 renders the objects for display in a virtual 3D environment 125B.
  • a virtual object 135B in the objects module 130 may represent a real object 135A in a physical 3D environment 125A; or may be purely a virtual object 135B.
  • an object 135 has only a virtual representation in the objects module 130.
  • An object representation is supplied to the objects module 130, for example, by a manufacturer of the physical object.
  • the objects module 130 stores object representations and enables placement (by a user interface, for example) into a location within the environment representation supplied by the representation module 105.
  • a virtual object 135B may be moved by virtual and/or physical users.
  • a real object 135A in a parallel system may be moved by a physical user 120A and its movement is updated in the objects module 130 and displayed in real time to virtual users 120B.
  • a real object 135A may be identified and its location determined within a real 3D environment 125A with a LIDAR or photographic camera.
  • the objects module 130 receives the location data and registers the location of the physical object’s 135A representation accordingly.
  • the physical object 135A may be moved (e.g., lifted, moved, rotated, tried on) by a physical user 120A.
  • the camera can enable real-time tracking of the object, in order to present motion of the object’s representation to virtual users 120B and/or physical users 120A in other physical 3D environments 125A.
  • the rendering module may render occlusion of a virtual object 135B by a surface, by an avatar, or by another virtual object 135B in the 3D environment. Additionally, the rendering module may render occlusion of an avatar by a virtual object 135B. In a real 3D environment 125A, virtual objects 135B may be occluded by real objects 135A, and vice-versa.
  • the virtual representation of an object may be a 2D image or a 3D model.
  • Figs. 6A and 6B show virtual 2D-image representations of three mixers: mixer 702, mixer 704, and mixer 706.
  • the background environment 700 may be real or virtual.
  • the 2D image representations of mixers 702 and 704 are rotated with the movement of a user, as distinguished between Fig. 6A and Fig. 6B, such that the 2D plane of the virtual object remains normal to the virtual viewing angle of the viewing user.
  • Mixer 706, shown partially occluded by the center counter in Fig. 6A, is fully occluded and therefore not shown in Fig. 6B.
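  • A non-limiting sketch of this billboard behaviour: the 2D image’s yaw is recomputed each frame so that its plane keeps facing the viewer; the coordinate convention and names below are illustrative.

```typescript
interface Vec3 { x: number; y: number; z: number }

// Yaw (rotation about the vertical axis) that keeps a flat 2D object facing the viewer.
function billboardYaw(objectPos: Vec3, viewerPos: Vec3): number {
  return Math.atan2(viewerPos.x - objectPos.x, viewerPos.z - objectPos.z);
}

// Hypothetical per-object rotation hook.
declare function setYaw(objectId: string, yawRad: number): void;

// Per-frame update: as the viewer walks around the counter, each 2D image turns with them,
// so its plane remains normal to the viewing direction.
function updateBillboards(viewerPos: Vec3, images: { id: string; pos: Vec3 }[]) {
  for (const img of images) setYaw(img.id, billboardYaw(img.pos, viewerPos));
}
```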
  • the system further comprises a bot module (not shown).
  • the bot module provides a position of a bot avatar to the tracking module 112.
  • a bot and its avatar are not coordinated by a human user.
  • the motion and speech of a bot is computer controlled.
  • a bot avatar in a parallel system appears to physical users and virtual users.
  • the bot may be programmed to understand speech and typing of physical and virtual users and to respond to inquiries.
  • a bot may be replaced by a human virtual user 120B when, for example, the bot or a human recognizes a situation requiring human intervention, such as the bot’s reduced comprehension, a strong probability of a potential sale, an unhappy customer, etc.
  • a bot can appear as an AR avatar overlain upon a real 3D environment or in a virtual 3D environment.
  • the real 3D environment 125A is a store. Entry and movements of a physical user 120A — a customer — into the store are captured within the virtual and real parallel implementation of the system 10, by the physical user module 115A, or a camera (not shown) in the store, in communication with the tracking module 112.
  • the rendering module 110 renders a display of a virtual sales representative, presented to the physical user 120A as an AR or holographic avatar overlain upon the real store 3D environment 125A.
  • the movement and speech of the virtual sales representative may be provided by a salesperson who is a virtual user 120B or may be computer-generated by a sales bot.
  • the physical user 120A sees the virtual sales representative at a precise position, defined by the system 10, in relation to the real store.
  • When the physical user 120A customer first enters the store, the virtual sales representative — whether controlled by a virtual user 120B or by a sales bot — is enabled by the system 10 and implemented by the rendering module 110 to respond to the motions of the customer.
  • the virtual sales representative may approach the customer as she enters the store.
  • the virtual sales representative may recognize in what direction the customer is facing and at what object 135A of merchandise the customer is looking.
  • the virtual sales representative may virtually offer assistance to the physical user 120A customer and may interact in a dialog with the customer about products in the real store.
  • the customer may purchase and pay for merchandise through the system 10, in connection with an e-commerce server (not shown) which can be remotely located from the system 10. For example, the customer may state her intention to buy a product or bring the product to the sales desk, and then present her credit card to the virtual sales representative.
  • the system 10 captures the credit card number and finalizes the sale.
  • the users in the real store may, conversely, comprise a real user 120A sales representative and a virtual user 120B customer.
  • the tracking module 112 may track any combination of real user 120A and virtual user 120B sales representatives and real user 120A and virtual user 120B customers. If an item is removed from its display in the store, or if an item is redisplayed, the objects module 130 updates the virtual objects 135B (merchandise) appearing to virtual users 120B.
  • the virtual 3D environment 125B is a virtual store. Entry and movements of a virtual user 120B — a customer — into the virtual store are captured within the virtual implementation of the system 10, by the virtual user module 115B in communication with the tracking module 112.
  • the rendering module 110 renders a display of a virtual sales representative, presented to the virtual user 120B as a VR or holographic avatar overlain upon the virtual store 3D environment.
  • the movement and speech of the virtual sales representative may be provided by a salesperson who is a virtual user 120B or may be computer-generated by a sales bot.
  • the virtual user 120B sees the virtual sales representative at a precise position, defined by the system 10, in relation to the virtual store.
  • When the virtual user 120B customer first enters the store, the virtual sales representative — whether controlled by a virtual user 120B or by a sales bot — is enabled by the system 10 and implemented by the rendering module 110 to respond to the motions of the customer.
  • the virtual sales representative may approach the customer as he enters the store.
  • the virtual sales representative may recognize in what direction the customer is facing and at what virtual object 135B of merchandise the customer is looking.
  • the virtual sales representative may offer assistance to the virtual user 120B customer and may interact in a dialog with the customer about products in the virtual store.
  • the customer may purchase and pay for merchandise through the system 10, in connection with a commerce module (not shown).
  • the customer may state his intention to buy a product or provide a predefined gesture through the virtual user module 115B, and then enter his credit card number or present his credit card to the virtual sales representative.
  • the system 10 captures the credit card number and finalizes the sale.
  • a user may virtually teleport himself to a real 3D environment 125A, becoming a virtual user 120B in a parallel real and virtual 3D environment.
  • a salesperson or interior decorator is a physical user 120A in a home furnishings store.
  • the physical user 120A has an XR user module 115A/B.
  • a customer with an AR user module 115A is a physical user 120A in her home.
  • the user module 115A may be furnished with a scanning depth camera, further described herein, taking measurements of a room in the home.
  • the customer may invite the salesperson into her home to give her advice about a product selection.
  • the salesperson virtually teleports to her home, appearing to the customer as an AR avatar in her home.
  • the position of the salesperson avatar is accurately updated and synced in real time, so that his accurate position in the real 3D environment is accurately represented and seen in real time by the customer, and so that the salesperson sees the home accurately in real time according to his position and orientation.
  • the salesperson avatar appears to the customer to be realistically moving about in her home.
  • the salesperson, who is now a virtual user 120B in a parallel 3D environment, sees the inside of the customer’s home and can virtually navigate through a room in the home and point out what products are best suited where, while the customer may follow the avatar, interact with the salesperson verbally and/or with gestures, and learn how to best furnish and decorate her home from the salesperson’s presentation.
  • the real 3D environment 125A (interior rooms of the home) may be pre-stored in the 3D environment representation database 105 or may be scanned in real time.
  • virtual objects 135B — such as virtual samples of furniture, ceramics, bathroom, home decor, carpets, floors and parquets, paint, wallpaper, outdoor furniture, swimming pools, garden design, awnings, windows, and doors — may be virtually teleported into a real 3D environment 125A.
  • the customer can see through the AR glasses how virtual samples look in her home.
  • the virtual samples appear realistically — as 2D or 3D holographic objects — with regard to size, color, and placement in the real home.
  • Virtual user modules 115B may be enabled to virtually move or rotate the virtual samples within the home. If a virtual user 120B interior decorator is present (e.g., by virtual teleportation), the customer and salesperson see the virtual samples placed in the room and each other’s avatars in the parallel 3D environment, and can interact therein.
  • the physical user 120A may modify the virtual representation of the real 3D environment 125A. For example, if the 3D model of her room contains an old sofa the homeowner intends to discard, she may select to remove the virtual sofa from the virtual representation of the room. She may drag a selection box around the sofa, then resize it to zero. In response, the rendering module 110 removes the vertices, edges, and polygons representing the surface of the sofa in the 3D model. The rendering module 110 may then modify the 3D virtual representation to extend the wall-floor edge over the portion formerly hidden by the sofa, and extend the wall and floor surfaces in the virtual representation. The old sofa is thereby removed and replaced with available space in the room, over which other virtual furnishings may be arranged and placed.
  • other virtual users 120B may virtually navigate or virtually teleport to the customer’s home.
  • the friends’ avatars appear to co-exist with the real user 120A customer in her home, and may share in her experience of shopping for interior decor.
  • the virtual user 120B friends can help the customer decorate her home by making suggestions, while all see the results of various selections virtually applied in the home.
  • the virtual user friends may view the “editing” of the home, i.e. removal of old furniture from the virtual representation and teleportation and arrangement of new virtual furniture, as described.
  • the system 10 provides a VR version on the web of an online store.
  • the objects module 130 may receive images of virtual objects 135 of merchandise for sale, as well as related data (prices, etc.) of the merchandise, from an online store website.
  • Virtual user 120B salespeople or bots may assist virtual user 120B customers entering the store. The customers may be drawn to the virtual store via a link in the online store website.
  • the online virtual store may exist in parallel within a real store with physical users 120A therein, containing 1) real objects 135A with parallel virtual object 135B representations for virtual users 120B accessing the virtual store from the web; and/or 2) virtual objects 135B rendered to virtual users 120B on the web and by AR to physical users 120A.
  • physical users 120A may place merchandise comprising real objects 135A and virtual objects 135B in their online shopping cart.
  • the system 10 further comprises an analytics module 140.
  • the analytics module 140 receives positions of the virtual users 120B and physical users 120A from the tracking module 112.
  • the analytics module 140 collects and statistically analyzes the positional data.
  • the analytics module 140 may record and statistically analyze user actions such as movements, glances, interactions, speech, and timestamps/durations of user positions and user actions. If the system is connected with an e-commerce server, the analytics module 140 may track adding to the shopping cart and buying of merchandise.
  • the analytics module 140 may provide revenue indices to merchants using the system 10 to market their products.
  • the analytics module may provide psychological metrics such as buying behavior with and without friends, with and without a salesperson/designer, and comparison of purchasing ratios thereof.
  • a content creation platform for structuring and programming the system 10.
  • the platform can provide, for example, programming or scripting tools for building virtual stores, converting existing online stores to multi-user virtual stores, creating an AR/XR layer over a physical 3D environment 125A (to provide a parallel virtual 3D environment 125B), and/or adding a virtual layer over an existing virtual layer (for overlaying on a parallel physical 3D environment).
  • the platform can be enabled for capturing a representation of a physical 3D environment, tracking and syncing of users, teleportation, and/or scanning or simulating a real 3D environment.
  • the capturing of a 3D environment may be assisted by tracking many physical users (e.g., over time) and building up the representation, much like assembling a puzzle, for example by assembling images of the real 3D environment taken by many physical users.
  • a podcast is, for example, an episodic series of digital audio or video files that a user can download to a personal device and listen to at a time of their choosing.
  • Streaming applications and podcasting services provide a convenient and integrated way to manage a personal consumption queue across many podcast sources and playback devices.
  • podcast search engines which help users find and share podcast episodes.
  • the content can be accessed using any computer or similar device that can play media files.
  • a 3D metaverse podcast of a virtual environment can be made by a user, then stored and shared as a 3D active recording.
  • a real environment, such as a real studio interview between people, can be recorded as a 3D metaverse podcast, and users of the system of the present invention can interact with the recording.
  • the recorded 3D metaverse virtual environment, such as a 3D metaverse podcast, can be stored in the environment representation database.
  • Virtual user modules are configured to interact with the recorded 3D virtual environment, and the rendering module is configured to render, for each virtual user, a hemispheric background projection of the recorded 3D virtual environment on a background image layer, appropriate to the virtual position of the virtual user.
  • rendering is done by (or near) the user module. It is understood that the same steps and/or the same effects can be achieved if rendering is done remotely.
  • Fig. 6 shows a flow chart of a method 1000 for providing a live rendering, in an interactive, multi-user 3D-modeled virtual 3D environment, to a viewing real or virtual user, of an avatar representing another virtual user, according to some embodiments of the invention.
  • the method 1000 is typically repeated with two users exchanging roles (i.e., the viewing user becomes the other user and vice versa) and for every other combination of two virtual users participating in the interactive, multi-user 3D environment. Additionally, the method 1000 is repeated, periodically and frequently, or with motion of either user for real time updating.
  • the method 1000 comprises steps of determining the location of a first (viewing) virtual user 1005 and determining the location of a second (other) virtual user 1010.
  • the determinations may be made by any technique(s) known in the art, such as GPS, SLAM, and optical flow. Different techniques may be used for different virtual users.
  • the method 1000 further comprises a step of transmitting the location of the second user to a remote server 1015.
  • Each user device may transmit additional information, such as an ID of the user, the present direction the user is facing, or a present action or gesture of the user.
  • the method 1000 further comprises a step of transmitting the location of the second virtual user to a user device of the first virtual user 1020.
  • the additional information, if any, is also transmitted to the user device of the first user.
  • the method 1000 further comprises placing an avatar of the second user within the 3D model 1025. Placement of the avatar is made as a function of the direction of the virtual line-of-sight of the first virtual user to the second virtual user.
  • the avatar may be a 2D or 3D model, an image, or any representation of the second virtual user. If the second user module received additional information, the avatar may be depicted according to the present facing direction or a gesture of the second virtual user.
  • Fig. 7 shows a flow chart of a method for providing live update and representation of avatars of users in an interactive, multi-user, 360° panoramic, virtual or parallel real/virtual 3D environment, where the virtual 3D environment is a rendering of the real 3D environment.
  • Fig. 8 shows a flow chart of a method for providing live update and representation of 3D or 2D models and images in an interactive, multi-user, 360° panoramic, parallel virtual and/or real 3D environment, where the virtual 3D environment is a rendering of the real 3D environment.
  • Fig. 9 shows a flow chart of a method 1300 for providing a live rendering, in an interactive, multi-user 3D-modeled virtual 3D environment, of a virtual object to a viewing real or virtual user, according to some embodiments of the invention.
  • the method 1300 is typically repeated for each virtual object in the virtual field-of-view of the user and repeated periodically and frequently, or with motion of the user or object, for real-time updating.
  • the method 1300 comprises steps of determining the location of a user 1305 and determining the location of an object 1310.
  • the determinations may be made by any technique(s) known in the art, such as GPS, SLAM, and optical flow. Different techniques may be used for different users.
  • the method 1300 further comprises a step of transmitting the location of the object to a remote server 1315.
  • the objects module may transmit additional information, such as an ID of the object, the present orientation of the object, or an animation of the object.
  • the method 1300 further comprises a step of transmitting the location of the object to a user device of the user 1320.
  • the additional information, if any, is also transmitted to the user device of the user.
  • the method 1300 further comprises placing a virtual representation of the object within the 3D model 1325. Placement of the object is made as a function of the direction of the virtual line-of-sight of the user to the object.
  • the object representation may be a 2D image, a 3D model, or any representation of the object.
  • Fig. 10 shows a flow chart of a method 1400 for providing a live rendering, in an interactive, multi-user 3D-modeled virtual 3D environment, of the orientation of a 2D image representation of an object in a 3D-modeled 3D environment, according to some embodiments of the invention.
  • the method 1400 is typically repeated for each 2D-image represented object in the virtual field-of-view of the user and repeated periodically and frequently, or with motion of the user or 2D image object, for real-time updating.
  • the method 1400 comprises steps of determining the location of a user 1405 and determining the location of a 2D image model of an object 1410.
  • the determinations may be made by any technique(s) known in the art, such as GPS, SLAM, and optical flow. Different techniques may be used for different users.
  • the method 1400 further comprises a step of updating, on a server, the orientation of the 2D image to face the location of the user 1415.
  • the method 1400 further comprises a step, if the 2D image object is moved by the user, of updating the 2D image position on the server for real-time effect to the user 1420.
  • the method 1400 further comprises a step, if the 2D image object is transformed (e.g., activated, used, folded, etc.) by the user, of updating the 2D image on the server for real-time effect to the user 1425.
  • Fig. 11 shows a flow chart of a method 1500 for synchronizing the placement and/or orientation of objects in a multi-user parallel real and virtual 3D environment, according to some embodiments of the invention.
  • the method is typically repeated for each physical user and repeated periodically and frequently, or with motion of a user, for real-time updating.
  • the method 1500 comprises a step of determining the location of a physical user in a real 3D environment 1510. The determinations may be made by any technique(s) known in the art, such as GPS, SLAM, and optical flow.
  • the method 1500 further comprises a step of defining the virtual position of the real user within a parallel 3D model representation of the real 3D environment 1515.
  • the method 1500 further comprises a step of transmitting the virtual position of the real user to a remote server 1520.
  • the method 1500 further comprises a step of transmitting the virtual positions of each user (which can include real and virtual users) to the devices of each other user 1525.
  • the transmission may include additional information of each user, such as an ID of the user, the present direction the user is facing, or a present action or gesture of the user.
  • the method 1500 further comprises a step of placing a virtual object (e.g. an avatar), within the 3D model representation, at the virtual position of the physical user, for display to virtual users 1530.
  • the method 1500 further comprises a step of placing virtual objects, at the virtual positions of virtual users, for display by AR to the physical user in the real 3D environment.
  • Fig. 12 shows a flow chart of a method 1600 for synchronizing the placement and/or orientation of objects in a multi-user, multi-platform virtual or parallel real and virtual 3D environment, according to some embodiments of the invention.
  • the multiple platforms may comprise, for example, PC, web, Web3D, mobile devices, glasses, and/or holographic screens.
  • the method is typically repeated for each physical user and repeated periodically and frequently, or with motion of a user, for real-time updating.
  • the method 1600 comprises a step of determining the location of a physical user in a real 3D environment 1610.
  • the determinations may be made by any technique(s) known in the art, such as GPS, SLAM, and optical flow.
  • the method 1600 further comprises a step of defining the virtual position of the real user within a parallel 3D model representation of the real 3D environment 1615.
  • the method 1600 further comprises a step of transmitting the virtual position of the real user to a remote server 1620.
  • the method 1600 further comprises a step of transmitting the virtual positions of each user (which can include real and virtual users) to the devices of each other user 1625.
  • the transmission may include additional information of each user, such as an ID of the user, the present direction the user is facing, or a present action or gesture of the user.
  • the method 1600 further comprises a step of placing a virtual object (e.g. an avatar), within the 3D model representation, at the virtual position of the physical user, for display to virtual users 1630.
  • the method 1600 further comprises a step of placing virtual objects, at the virtual positions of virtual users, for display by AR to the physical user in the real 3D environment. A non-limiting sketch of this position-synchronization flow is given below.
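By way of non-limiting illustration only, the following sketch (in Python, with hypothetical names such as Pose, SyncServer, real_to_virtual, and sync_step; no particular implementation of the claimed methods is implied) outlines the synchronization flow of methods 1500 and 1600: a tracked real-world pose is mapped into the parallel 3D model's coordinate frame, published to a remote server, and the virtual positions of all other users are fetched for avatar placement.

```python
# Illustrative sketch only; all names are hypothetical and do not appear in the disclosure.
import time
from dataclasses import dataclass, field


@dataclass
class Pose:
    x: float
    y: float
    z: float
    yaw: float = 0.0  # facing direction, radians


@dataclass
class SyncServer:
    # user_id -> (latest virtual Pose, timestamp of last update)
    poses: dict = field(default_factory=dict)

    def update(self, user_id: str, pose: Pose) -> None:
        self.poses[user_id] = (pose, time.time())

    def others(self, user_id: str) -> dict:
        # Virtual positions of every user except the requester; additional
        # information (IDs, facing direction, gestures) could accompany each entry.
        return {uid: p for uid, (p, _) in self.poses.items() if uid != user_id}


def real_to_virtual(tracked: Pose, origin: Pose, scale: float = 1.0) -> Pose:
    """Map a tracked real-world pose (e.g., from GPS/SLAM) into the parallel
    3D model's coordinate frame, given the model's origin and scale."""
    return Pose((tracked.x - origin.x) * scale,
                (tracked.y - origin.y) * scale,
                (tracked.z - origin.z) * scale,
                tracked.yaw - origin.yaw)


def sync_step(server: SyncServer, user_id: str, tracked: Pose, origin: Pose) -> dict:
    """One iteration of the repeated loop: publish this user's virtual position,
    then fetch the positions of all other users for avatar placement."""
    server.update(user_id, real_to_virtual(tracked, origin))
    return server.others(user_id)
```

Each user device would call sync_step periodically, or on motion, and place an avatar (for virtual users) or an AR overlay (for physical users) at each returned virtual position.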

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a system for providing an interactive multi-user 360° panoramic-image representation virtual 3D environment, the system comprising an environment representation database configured for storing a 360° panoramic representation of a 3D environment; a plurality of virtual user modules each configured to acquire and update a virtual position and orientation, in the representation, of each of one or more virtual users; a tracking module, configured to receive and store the virtual positions from the virtual user modules; a rendering module, configured to render, for each virtual user, a hemispheric background projection of the 360° panoramic representation on a background image layer, appropriate to the virtual position of the virtual user; wherein the rendering module is further configured, for each viewing virtual user, to place an avatar of other virtual users on an avatar layer, disposed appropriate to the virtual position of the viewing virtual user and each of the other virtual users, and overlay the avatar layer upon the background layer; and the virtual user modules are further configured to display a virtual 3D environment comprising the combined layer to its virtual user.

Description

SYSTEM AND METHOD FOR PROVIDING INTERACTIVE MULTI-USER PARALLEL REAL AND VIRTUAL 3D ENVIRONMENTS
FIELD OF THE INVENTION
The invention is in the field of extended reality, and in particular pertains to a system and method for providing interactive multi-user parallel real and virtual 3D environments.
BACKGROUND TO THE INVENTION
Prior art systems for improving user visuals and interactions in an extended reality environment are disclosed.
U.S. patent application 2019/0050137 A1 discloses a method for generating a three- dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model, wherein the tags are respectively represented by tag icons that are spatially aligned with the defined locations of the three-dimensional model as included in different representations of the three-dimensional model rendered via an interface of a device, wherein the different representations correspond to different perspectives of the three-dimensional model, and wherein selection of the tag icons causes the tags respectively associated therewith to be rendered at the device.
The present invention improves upon the state-of-the-art in user visuals and interactions in a 3-D extended reality environment, as further described herein.
SUMMARY
The present invention provides a system for providing an interactive multi-user 360° panoramic-image representation virtual 3D environment, the system comprising an environment representation database configured for storing a 360° panoramic representation of a 3D environment; a plurality of virtual user modules each configured to acquire and update a virtual position and orientation, in the representation, of each of one or more virtual users; a tracking module, configured to receive and store the virtual positions from the virtual user modules; a rendering module, configured to render, for each virtual user, a hemispheric background projection of the 360° panoramic representation on a background image layer, appropriate to the virtual position of the virtual user; wherein the rendering module is further configured, for each viewing virtual user, to place an avatar of other virtual users on an avatar layer, disposed appropriate to the virtual position of the viewing virtual user and each of the other virtual users, and overlay the avatar layer upon the background layer; and the virtual user modules are further configured to display a virtual 3D environment comprising the combined layer to its virtual user.
The present invention further provides the above system, wherein the virtual user modules are further configured to acquire and update a virtual orientation, action, and/or posture of each of one or more virtual users and the rendering module is further configured to render the avatar on the avatar layer according to the orientation, action, and/or posture.
The present invention further provides any one of the above systems, wherein the rendering module is further configured to render the size of the avatars as a function of a distance between the positions of viewing and viewed virtual users and of a scaling factor of the size of 2D panoramic images relative to the size of the real 3D environment from which the panoramic images were taken.
The present invention further provides any one of the above systems, wherein if more than one virtual user occupies the same virtual position in the virtual 3D environment, the rendering module spaces their avatars apart, appearing arrayed or clustered within some radius of the virtual position of the avatars.
The present invention further provides any one of the above systems, further comprising an objects module, configured to store representations and positions of objects, wherein the rendering module is further configured to overlay the virtual objects upon the background layer, according to virtual positions of the virtual objects and the virtual position and line-of-sight direction of the viewing user.
The present invention further provides the previous system, wherein the representation of a said virtual object is accompanied by a hotspot, displayed as a hotspot icon on the virtual user module; when a virtual user selects the hotspot icon, the virtual user module is configured to display a 3D model of the virtual object, wherein said 3D model is manipulable by the virtual user.
The present invention further provides any one of the above systems, wherein the environment representation database is further configured to store a 3D model of the 3D environment and the rendering module is further configured to do one or more of a. employ the 3D model in order to determine distances between the viewing virtual users and to other virtual users and to background walls and objects in the scene represented by the 2D panoramic images, and to size the avatars according to the distances; b. render occlusion of an avatar or an object fully or partially obscured by the virtual 3D environment, as computed from the 3D model.
The present invention further provides a system for providing an interactive multi-user virtual 3D environment, comprising an environment representation database configured for storing a representation, comprising a 3D model, of a 3D environment; one or more virtual user modules each configured to acquire and update a virtual position of each of one or more virtual users; a tracking module, configured to receive and store the virtual positions from the virtual user modules; a rendering module, configured to receive the representation and accordingly render a virtual 3D environment for display on the virtual user modules; wherein the rendering module is further configured, for each viewing virtual user, to render avatars of other virtual users, according to the virtual positions of the viewing virtual user and of each of the other virtual users.
The invention further provides any one of the above systems, wherein the virtual user modules comprise VR glasses, VR contact lenses, a computing device with a display screen, a Web3D station or mobile device, or any combination thereof.
The invention further provides any one of the above systems, further comprising additional instances of the virtual representation, each virtual representation instance populated by a different group of the virtual users. The invention further provides any one of the above systems, wherein visual data representing the avatars comprise one or more of a generic representation, an icon, a 2D image, a 3D model, a streaming video, or any combination thereof.
The invention further provides any one of the above systems, further comprising a virtual objects module, configured to store representations and positions of virtual objects, wherein the rendering module is further configured to render the virtual objects in the 3D environment in the virtual positions.
The invention further provides any one of the above systems, wherein the rendering module is further configured to render a voice of another user, a volume of the voice adjusted according to a distance between the avatars of the other user and the viewing user.
The invention further provides any one of the above systems, wherein the rendering module is further configured to render the voice as if coming from the direction of the other user, e.g. by employing surround sound.
The invention further provides a system for providing an interactive multi-user parallel real and virtual 3D environment, comprising any one of the above systems and further comprising one or more physical user modules disposed in a real 3D environment, each physical user module configured to track a real position of a physical user; wherein the parallel system is further configured to display avatars of the physical users, overlain upon or placed in the virtual 3D environment, on the virtual user modules; and the parallel system is further configured to display avatars of the virtual users, overlain upon the real 3D environment, on the physical user modules.
The invention further provides any one of the above systems, wherein the virtual representation is constructed and/or updated in real time from one or more depth images of the real 3D environment acquired from one or more of the physical user modules and/or user acquisition/identification module in the physical 3D environment.
The invention further provides any one of the above systems, wherein the real 3D environment is a real store and the system is further configured to enable interaction between a virtual sales representative and a physical user customer.
The invention further provides any one of the above systems, wherein the real 3D environment is a real store and the system is further configured to enable interaction between any combination of virtual and physical user sales representatives and virtual and physical user customers.
The invention further provides any one of the above systems, wherein the virtual sales representatives comprise a virtual user, a sales bot, or any combination thereof.
The invention further provides any one of the above systems, wherein the sales bots are responsive to motions of a physical user customer in the real store.
The invention further provides any one of the above systems, further configured for a physical or virtual user/object to virtually teleport to a real 3D environment, becoming a virtual user/object in the real 3D environment.
The invention further provides any one of the above systems, wherein the teleported virtual user is an interior decoration assistant, virtually teleported to a real home of a physical user customer therein, and the system is further configured for rendering avatars of the assistant and the customer interacting.
The invention further provides any one of the above systems, further configured for teleportation of one or more virtual samples of any combination of furniture, ceramics, bathroom, home decor, carpets, floors, parquets, paint, wallpaper, outdoor furniture, swimming pools, garden design, awnings, windows, and doors to the home; and further configured for placement of the virtual samples in the home according to measurements made from the depth images.
The invention further provides any one of the above systems, further configured for one or more additional virtual users to virtually navigate or virtually teleport to the home and interact with the customer.
The invention further provides any one of the above systems, wherein the representation of the home is modified by a set of tools enabling removal of objects appearing in the real 3D environment from the representation of the home, by clearing or hiding 3D triangles and mesh elements from the representation.
The invention further provides any one of the above systems, wherein the virtual 3D environment is a virtual store with a bot serving as a virtual sales representative.
The invention further provides any one of the above systems, further comprising an analytics module configured to collect and statistically analyze positions, orientations, and/or actions of the virtual and/or physical users. The invention further provides any one of the above systems, further configured to collect user actions, timestamps, and/or durations of the user actions and include them in the statistical analysis; and/or compare virtual with real user activities.
The invention further provides any one of the above systems, further comprising tools for a teleportation platform (capturing a physical 3D environment, creating the representation of the physical 3D environment, enabling tracking of and syncing between users, enabling teleportation, scanning for creation or update of the real 3D environment, and placing objects and adding them to the shared environment), the platform configured for programming or scripting one or more of: a virtual store; converting existing online stores to multi-user virtual stores; creating an AR/XR layer over a physical 3D environment; and adding a virtual layer over an existing virtual layer.
The invention further provides any one of the above systems, wherein the representation of the real 3D environment builds up from many physical users traversing the real 3D environment.
The invention further provides a system for providing an interactive 360° panoramic-image representation virtual 3D environment, the system comprising an environment representation database configured for storing a 360° panoramic representation of a 3D environment; one or more virtual user modules each configured to acquire and update a virtual position and orientation, in the representation, of each of one or more virtual users; a tracking module, configured to receive and store the virtual positions from the virtual user modules; a rendering module, configured to render, for each virtual user, a hemispheric background projection of the 360° panoramic representation on a background image layer, appropriate to the virtual position of the virtual user; an objects module, configured to store representations and positions of objects, wherein the rendering module is further configured to overlay the virtual objects upon the background layer, according to virtual positions of the virtual objects and the virtual position and line-of-sight direction of a virtual user, wherein the representation of an object is accompanied by a hotspot, displayed as a hotspot icon on the virtual user module; when a virtual user selects the hotspot icon, the virtual user module is configured to display a 3D model of the virtual object, wherein the 3D model is manipulable by the virtual user.
The invention further provides a method for providing a live rendering, in an interactive, multi-user 3D-modeled virtual 3D environment, to a viewing real or virtual user, of an avatar representing another virtual user, comprising steps of determining the location of a first (viewing) virtual user; determining the location of a second (other) virtual user; transmitting the location of the second user to a remote server; transmitting the location of the second virtual user to a user device of the first virtual user; and placing an avatar of the second user within the 3D model.
The invention further provides a method for providing a live rendering, in an interactive, multi-user 3D-modeled virtual 3D environment, of a virtual object to a viewing real or virtual user, comprising steps of determining the location of a user; determining the location of an object; transmitting the location of the object to a remote server; transmitting the location of the object to a user device of the user; placing a virtual representation of the object within the 3D model.
The invention further provides a method for providing a live rendering, in an interactive, multi-user 3D-modeled virtual environment, of the orientation of a 2D image representation of an object in a 3D-modeled 3D environment, comprising steps of determining the location of a user; determining the location of a 2D image model of an object; updating, on a server, the orientation of the 2D image to face the location of the user; if the 2D image object is moved by the user, updating the 2D image position on the server for real-time effect to the user; and if the 2D image object is transformed by the user, updating the 2D image on the server for real-time effect to the user.
The invention further provides a method for synchronizing the placement and/or orientation of objects in a multi-user parallel real and virtual 3D environment, comprising steps of determining the location of a physical user in a real 3D environment; defining the virtual position of the real user within a parallel 3D model representation of the real 3D environment; transmitting the virtual position of the real user to a remote server; transmitting the virtual positions of each user to the devices of each other user; placing a virtual object (e.g. an avatar), within the 3D model representation, at the virtual position of the physical user, for display to virtual users; and placing virtual objects, at the virtual positions of virtual users, for display by AR to the physical user in the real 3D environment.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 shows a functional block diagram of a computer-based system for providing interactive multi-user virtual or parallel real and virtual environments, according to some embodiments of the invention.
Figs. 2A and 2B show a view of a parallel real and virtual 3D environment, to a physical user and to a virtual user, respectively.
Fig. 3 shows a virtual 3D environment as represented in a 360° panoramic image representation, according to some embodiments of the invention.
Fig. 4 is a user’s view in a 360° panoramic environment, according to some embodiments of the invention.
Figs. 5A-5C show how an object in a virtual environment is accompanied by a hotspot icon for manipulation of a 3D model version of the object.
Figs. 6A and 6B show 2D models (images) in a 3D virtual environment, as viewed from two different perspectives in the virtual 3D environment, according to some embodiments of the invention.
Fig. 7 is a flowchart of a method for providing a live rendering, in an interactive, multi-user 3D-modeled virtual 3D environment, to a viewing real or virtual user, of an avatar representing another virtual user, according to some embodiments of the invention.
Fig. 8 is a flowchart of a method for live update of avatars of multiple users active in the same 2D/3D panoramic image within or not within a 3D modeled environment that is synced with the panoramic image location.
Fig. 9 is a flowchart of a method for live update of 2D and/or 3D models and images that are active in the same 2D/3D panoramic image, within or not within a 3D modeled environment that is synced with the panoramic image location.
Fig. 10 is a flow chart of a method for providing a live rendering, in an interactive, multi-user 3D-modeled virtual environment, of a virtual object to a viewing real or virtual user, according to some embodiments of the invention.
Fig. 11 is a flow chart of a method for providing a live rendering, in an interactive, multi-user 3D-modeled virtual environment, of the orientation of a 2D image representation of an object in a 3D-modeled environment, according to some embodiments of the invention.
Fig. 12 is a flow chart of a method 1500 for synchronizing the placement and/or orientation of objects in a multi-user parallel real and virtual environment, according to some embodiments of the invention.
Fig. 13 is a flow chart of a method for synchronizing the placement and/or orientation of objects in a multi-user, multi-platform virtual or parallel real and virtual 3D environment, according to some embodiments of the invention.
DETAILED DESCRIPTION
Definitions
Real or physical 3D environment: A physical space with non-movable objects, movable objects, and/or people therein.
Panoramic images: A representation of a physical 3D environment comprising a 2D image or images covering a 360° angular range of viewing directions. The panoramic images are taken at various points in the physical 3D environment. A user may virtually navigate only to the points at which panoramic images were taken.
3D model: A representation of a physical 3D environment comprising a 3D image of the physical 3D environment. A 3D image is acquired by taking depth images (e.g., using LIDAR) at one or more points and viewing angles in the physical 3D environment, thereby acquiring a 3D image of surfaces in the physical 3D environment. The depth image camera may be accompanied by an additional camera for capturing the color and/or brightness at points of the 3D surfaces. Alternatively, a 3D model can be designed, for example, using an animation program. Typically, positions to which users may navigate are limited only by the resolution of the 3D model.
Interactive multi-user virtual 3D environment, or simply virtual 3D environment:
A shared virtual 3D environment of panoramic images or 3D model in which multiple users can interact. The term may refer to a visual rendering as viewed by a user and/or to a digital representation thereof, as is clear from the context.
Interactive multi-user parallel real and virtual 3D environments, or simply parallel 3D environment: A real 3D environment and a virtual duplicate or representation of the real 3D environment, in which physical users and objects in the real 3D environment can interact with virtual users and objects in the virtual 3D environment, and vice-versa.
Virtual user: A user engaged in viewing and being position-tracked in a virtual 3D environment. The term “virtual” in “virtual user” does not mean or imply that a human user does not exist or that a non-human user (e.g., bot) exists only in cyberspace. (In contrast, in many contexts of this disclosure, the term “virtual” does conform to its traditional meaning of an object or action being in cyberspace and/or its rendering on an XR display.)
Physical user: A user existing in a real 3D environment while being position-tracked and viewing AR content. The user’s existence is represented in a virtual 3D environment parallel to the real 3D environment.
Avatar: As used in this disclosure, a 2D or 3D virtual object representing a user displayed in a virtual 3D environment or parallel 3D environment.
Virtual teleportation: The introduction of a user or object from a first virtual or real 3D environment into a second virtual or real 3D environment, becoming a virtual user or object appearing in the second 3D environment.
Throughout this description, reference is made to Fig. 1, showing a computer-based system 10 for providing an interactive multi-user virtual 3D environment 125B or parallel real 3D environment 125A and virtual 3D environment 125B, according to some embodiments of the invention.
The system 10 comprises a representation database 105, storing a representation of one or more 3D environments 125A. The 3D environments 125A can be any combination of stores, architectural settings, conference rooms, gaming fields, classrooms, or any other physical settings. The representation may be a reproduction of a real 3D environment 125A or may be a simulation. The representation may comprise 2D panorama images, a 3-D model, or a combination thereof. The representation may be acquired by a camera and/or LIDAR acquisition of a real 3D environment, may be built from an animation, or any combination thereof. The representation may be pre-made or acquired and/or dynamically updated from a physical environment in real time.
The system 10 further comprises one or more virtual user modules 115B. A virtual user module 115B has a VR, AR, MR, or XR display and a virtual navigation pointer or tracker. Examples of a virtual user module 115B include a mobile or stationary computing device, VR/AR/MR/XR glasses, a Web3D station or mobile device, or any combination thereof. Each virtual user module 115B acquires and updates the current virtual position of a virtual user 120B navigating with a pointer (e.g. joystick) within the virtual 3D environment 125B. In some embodiments, such as VR glasses, the virtual user 120B adjusts their virtual position by physically moving, in which case the virtual position can be determined using technologies such as GPS, optical flow, SLAM, or triangulation by locating-beacons placed in the physical environment of the virtual user 120B. Virtual users 120B see other virtual users represented as avatars in the virtual 3D environment 125B, as further described herein.
In some embodiments, the system 10 further comprises one or more physical user modules 115A. A physical user module 115A has an AR or MR display and a physical user navigation tracker. Examples of a physical user module 115A include AR or MR glasses or contact lenses, and a mobile device. Each physical user module 115A acquires and updates the physical location coordinates of a physical user 120A. The physical user location can be determined using any technology known in the art, such as GPS, optical flow, SLAM, or triangulation by locating-beacons placed in the physical 3D environment. Physical users 120A see virtual users 120B and/or physical users 120A (in other real 3D environments 125A) represented as avatars, overlain upon the real 3D environment 125A, as further described herein.
The system 10 further comprises a tracking module 112. The tracking module 112 is in communicative connection with the physical user modules 115A and the virtual user modules 115B. The tracking module 112 receives virtual positions of virtual users 120B from virtual user modules 115B and/or real positions of physical users 120A from physical user modules 115A.
The system 10 further comprises a rendering module 110. The rendering module 110 acquires a representation of the physical 3D environment 125A and renders a virtual 3D environment 125B on a virtual user module 115B or virtual objects 135B on a physical user module 115A, as viewed by each virtual or physical user 120. The rendered elements have a realistic appearance, according to the position and line-of-sight of a viewing user 120. (References to 115 or 120 without a letter refer to one or more of any combination of -A and -B.) The rendering module 110 also renders avatars of virtual users and physical users according to positions received from the tracking module 112, as further described herein. The rendering module 110 renders the avatars so that, when viewed through the user modules 115A-B, they appear to the user 120A-B to be in the virtual 3D environment 125B and/or physical 3D environment 125A.
In some embodiments, the system further comprises an objects module 130, configured to store representations and positions of virtual objects 135B to be overlain upon or placed in a virtual and/or real 3D environment. The virtual objects 135B may be virtual only or may be virtual representations of a real object 135A in a real 3D environment 125A. The objects module 130 feeds representations and positions of virtual objects 135B to the rendering module 110, which renders the object overlain upon or placed in virtual 3D environments 125B and/or real 3D environments 125A.
It is understood that modules described herein may be combined in one piece of hardware. For example, the rendering module 110 may be a part of the same unit as the virtual user module 115B. Conversely, a single described module may be distributed over a plurality of hardware units. It is also understood that embodiments described throughout this disclosure are non-limiting and exemplary in nature. Therefore, other embodiments of the invention may include described features from two or more different sections, paragraphs, and/or drawings of this disclosure.
Interactive Multi-User Virtual 3D environment System
In some embodiments, the system 10 provides a multi-user virtual 3D environment 125B. The virtual 3D environment 125B may be acquired from a real 3D environment and virtualized as 2D panoramic images, a 3D model, or a combination thereof. Virtual users 120B are each in possession of a virtual user module 115B including a display and enabling virtual navigation through the virtual 3D environment 125B and accurate tracking of virtual positions therethrough. The virtual user module 115B may comprise VR/AR/MR (XR) glasses or contact lenses, a computing device with a 2D display screen and navigating device (e.g. joystick), a smartphone (e.g. with a GPS module), a Web3D station, a holographic display or any means of projecting/displaying a hologram or 3D likeness, a 2D projector, and optionally a camera (e.g., if needed for navigation tracking). The display presents a rendering of the virtual 3D environment 125B according to each user’s virtual position and virtual line-of-sight. The rendering module 110 renders avatars of virtual users 120B in the virtual 3D environment 125B. To a viewing virtual user 120B, other virtual users 120B appear as avatars on the display of the virtual user module 115B of the viewing user, overlain upon or placed in the background of the virtual 3D environment 125B. The visual data representations of the avatars may be generic, iconic representations, 2D user images, 3D user models, streaming video, streaming depth scan, or any combination thereof. The avatar representations may be provided to the rendering module 110 by the virtual users, e.g. through their virtual user modules 115B; or may be stored elsewhere on the system 10 or a computing device accessible thereto. The avatars are displayed realistically on a viewing user’s display, positioned according to the virtual locations of the other virtual users 120B and the virtual location and line of sight of the viewing virtual user 120B. The display is updated in real time, so that the viewing virtual user 120B sees the avatars moving as other virtual users 120B and the viewing virtual user 120B navigate through the virtual 3D environment 125B. Additionally, as the viewing virtual user 120B rotates his direction of sight, the virtual 3D environment 125B and avatars appear to move accordingly. Nearby virtual users 120B appear as they enter his virtual field-of-view, either by navigation of the virtual users 120B or rotation of the viewing user. If the virtual representation comprises a 360° panoramic image representation, possible positions of virtual users are limited to positions at which a 360° panoramic image has been acquired and stored in the representation database 105. If the virtual representation is a 3D model, position resolution of users and avatars is limited only by the resolution of the 3D model.
Any number of virtual users may share the same virtual 3D environment 125B, e.g. on a website. If more than one virtual user occupies the same virtual position in the virtual 3D environment 125B — which may be likely with a 360° panoramic image representation, since panoramic images are characteristically taken only at discrete positions along a path — the avatars may be spaced apart, appearing arrayed or clustered within some radius of the virtual position. The rendering module 110 may render multiple instances of a virtual 3D environment 125B populated by different groups of virtual users, e.g. groups of friends. A small group of friends may thereby share a virtual 3D environment 125B among themselves, without overcrowding of avatars.
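A minimal sketch, offered only as a non-limiting illustration, of how co-located avatars might be spread within some radius of a shared virtual position (the function name and default radius are assumptions):

```python
import math


def cluster_avatars(center, n, radius=0.6):
    """Spread n co-located avatars evenly on a circle of the given radius
    (in model units) around the shared virtual position (x, y, z);
    y is treated as the vertical axis and left unchanged."""
    cx, cy, cz = center
    placements = []
    for i in range(max(n, 1)):
        angle = 2.0 * math.pi * i / max(n, 1)
        placements.append((cx + radius * math.cos(angle),
                           cy,
                           cz + radius * math.sin(angle)))
    return placements
```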
In some embodiments, in addition to horizontal motion, avatars rotate according to changes in orientation of the virtual user whose avatar is being viewed.
In some embodiments, where the representation comprises a 3-D model, the rendering module 110 computes which portions of an avatar are occluded (i.e. a surface of the 3D model interposes between a viewing user and the avatar). The rendering module 110 then cuts out (makes transparent) the occluded portions of the avatar. On the display of the viewing physical or virtual user, the avatar appears, realistically, as partially obstructed by the obstructing surface in the 3D environment. If the avatar is wholly occluded, it will not appear at all; alternatively, the occlusion rendering may be disabled and the avatar, or a faded/dotted-outline version, or a substitute icon displayed, so that the viewing user is aware of the hidden user’s presence in the virtual 3D environment 125B. In some embodiments, the rendering module 110 renders occlusion of one avatar by another avatar.
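One conventional way to compute such occlusion, shown here only as a non-limiting sketch (assuming the 3D model is available as a list of triangles; the function names are hypothetical), is to cast a ray from the viewing user toward a point on the avatar and test whether any model triangle intersects the segment before that point is reached:

```python
import numpy as np


def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection; returns hit distance t or None."""
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in tri)
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None


def is_occluded(viewer, avatar_point, triangles):
    """True if any model triangle lies between the viewer and the avatar point."""
    viewer = np.asarray(viewer, dtype=float)
    avatar_point = np.asarray(avatar_point, dtype=float)
    seg = avatar_point - viewer
    dist = np.linalg.norm(seg)
    direction = seg / dist
    return any((t := ray_hits_triangle(viewer, direction, tri)) is not None and t < dist
               for tri in triangles)
```

Points of the avatar found to be occluded would then be cut out (made transparent), producing the partially obstructed appearance described above.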
Occlusion rendering may be implemented where the 3D model is also used to visually render the 3D environment for viewing, or where the 3D model is used for computing occlusion in combination with a 360° panorama representation used for displaying the virtual 3D environment 125B.
Interactive Multi-User Parallel Real and Virtual 3D environment System
Some embodiments provide a system 10 for an interactive multi-user parallel real 3D environment 125A and virtual 3D environment 125B, according to some embodiments of the invention. The parallel 3D environments system duplicates a real environment, including physical users and objects, to a virtual one; and enables virtual users, in the duplicate virtual environment, to interact with the real users and objects. The virtual users may interact with the real environment, including real users and real objects, and the physical users may interact with the virtual users and virtual objects. The physical users 120A and real objects 135A are present and participating in parallel with the virtual users and virtual objects 135B.
The physical users may be each in possession of a physical-user module 115A such as glasses for viewing AR, MR, or XR content, while enabling accurate tracking of real positions of the physical user in the real 3D environment while viewing avatars of virtual users overlain upon or placed in the real 3D environment. A physical user module 115A can be of another type, such as a smartphone, Bluetooth, RF, or other transmitting devices, or real-time scanners such as infrared or LIDAR that constantly scan and report the physical user’s position in the real 3D environment. The physical user modules 115A can comprise a single display or projection visible to multiple physical users 120A. The physical user modules 115A can comprise a single tracking mechanism that tracks positions of multiple physical users 120A.
A physical user 120A walking through a real 3D environment such as a shop, warehouse, shopping mall, museum or any place in the real world is thereby rendered with an avatar in the virtual 3D environment 125B according to his exact position in the real world. The displays of virtual user modules 115B and physical-user modules 115A are updated in real time, so that each viewing real and virtual user sees the avatars of other virtual users and physical users moving as the other users navigate through the parallel 3D environments.
Identifying the position of a physical user or objects is made with any combination of 1) AR; 2) depth cameras, LIDAR, and/or other environmental detection technologies, optical flow, SLAM; and 3) sensors such as BLE, WiFi, RF, building navigation, infrared, etc.
It is appreciated that the function of identifying the position of one or more physical users may alternatively be performed by a single piece of hardware (e.g., a user acquisition/identification module) placed in or near the real 3D environment.
To illustrate an interactive multi-user parallel real and virtual 3D environment system, reference is now made to Fig. 2A, showing the view of a physical user in a real 3D environment. Richard, Sandra, and Mari are virtual users. They appear to the viewing physical user as AR avatars overlain on the real 3D environment. Joe is a physical user in the physical 3D environment; he appears whether or not he is a participant in the parallel system (in this example, he is participating and being tracked).
Reference is now made to Fig. 2B, showing the view of the same parallel 3D environment to a virtual user. As in Fig. 2A, Richard, Sandra, and Mari are virtual users. They appear to the viewing virtual user as virtual avatars in the virtual 3D environment. Similarly, Joe, even though in the real 3D environment, also appears to the viewing virtual user as an avatar in the virtual 3D environment.
In some embodiments, a physical user module 115A may also display virtual objects 135B as they appear in the virtual 3D environment 125B. In some embodiments, virtual user modules 115B display virtual objects 135B rendered from a representation of a real object 135A in the real environment 125A.
Interactive Multi-User 360° Panoramic-Image Representation Virtual or Parallel 3D environment System
Fig. 3 shows a virtual 3D environment 125B as represented in a 360° panoramic image representation. The rendering module 110 receives virtual positions of virtual users 120B in the virtual 3D environment. In the 360° panoramic image representation, each virtual user 120B is positioned at a point 100 centered at a hemispheric background projection 201 containing the image(s) constituting the 360° panoramic image representation. The rendering module 110 renders, for each virtual user 120B, the hemispheric background projection 201 on a background image layer, in the format of the virtual user module 115B of the virtual user at the location 100. Additionally, having received the virtual positions of the virtual users 120B, the rendering module 110 places, for each viewing virtual user 120B (e.g., at point 100), an avatar of other users (e.g., at point 102) on an avatar layer. To do so, the rendering module 110 computes the point on the background image layer corresponding to the other user’s position, and the avatar is placed at the corresponding point on the avatar layer. The rendering module 110 overlays the avatar layer upon the background layer. The combined layers are received by the virtual user module 115B, for display to the viewing user. The avatars’ positions relative to the background thereby appear realistic to the viewing user, as they would be seen in a real background. The virtual positions are limited to points at which 360° panoramic images exist in the 3D environment representation database 105. If more than one virtual user 120B occupies the same position, the rendering module 110 may place avatars in some arrangement about the position.
In some embodiments, the rendering module 110 scales the size of avatars according to the distance between the viewing user and the other user and a scaling factor between the 360° panoramic image representation and the avatars.
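As a non-limiting sketch (in Python, with assumed parameter names), the placement of another user's avatar on the avatar layer and its apparent size might be computed as follows, where image_scale stands in for the panorama-to-real-world scaling factor discussed above:

```python
import math


def avatar_layer_placement(viewer_pos, other_pos, image_scale=1.0, base_height=1.8):
    """Return (azimuth, elevation) in radians for locating the other user's avatar
    on the hemispheric avatar layer around the viewer, plus an apparent height
    that shrinks inversely with distance. Positions are (x, y, z) with y vertical."""
    dx = other_pos[0] - viewer_pos[0]
    dy = other_pos[1] - viewer_pos[1]
    dz = other_pos[2] - viewer_pos[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz) or 1e-6
    azimuth = math.atan2(dx, dz)            # bearing around the viewer
    elevation = math.asin(dy / dist)        # height above the viewer's horizon
    apparent_height = base_height * image_scale / dist
    return azimuth, elevation, apparent_height
```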
In some embodiments, the environment representation DB 105 further stores a 3D model of the virtual 3D environment 125B. The 3D model is aligned with the 2D panoramic images. The rendering module 110 employs the 3D model in order to determine distances between the virtual user position and positions of background walls and objects in the scene represented by the 2D panoramic images.
The added 3-D model provides some advantages. As actual distances to the background are known from the 3D model, no scaling factor is required. Additionally, the 3D model enables rendering occlusion between objects in the background and avatars and between avatars themselves.
In some embodiments, the virtual 3D environment is paralleled with a real 3D environment. The 2D panoramic images may be taken earlier and stored or may be taken in real time from the real 3D environment. The panoramic images may be accompanied by a 3D model, which may also be taken earlier and stored or may be taken in real time.
Fig. 4 is a user’s view in a 360° panoramic 3D environment, showing sizing and orientation of avatars 602, 603 as well as sizing of virtual objects 601 at different positions. Avatars 602, 603 are represented on a different layer than the background layer 605 and when rendered overlain thereon.
Reference is now made to Figs. 5A-5C, showing a virtual object 660 accompanied by a hotspot icon 655, enabling selection and manipulation of a 3D model 665 of the virtual object 660, according to some embodiments of the invention.
A virtual object 660 in an environment represented by a 360° panoramic image 600 (alternatively, a 3D model) is accompanied by one or more hotspots, displayed on a virtual or physical user module as hotspot icons 655. Each hotspot icon 655 can be a rollover button or a clickable button. When a user selects (e.g. rolls over, clicks on, or touches) the hotspot icon 655 with his finger or pointing device, as shown in Fig. 5B, a manipulable 3D model 665 of the virtual object 660, or an equivalent, appears overlain upon or near the virtual object 660. Optionally, a pointer tail 670 indicates to which virtual object 660 or which hotspot icon 655 the 3D model 665 refers.
In the example shown, the user touches a hotspot icon 655 near a virtual sneaker display 660. An enlarged 3D model 665 of the sneaker appears. The user may use his finger to rotate the 3D model 665, in order to see the sneaker from different angles.
In some embodiments, manipulation of the 3D model 665 by one user may be seen by other users. Optionally, multiple users may manipulate the 3D model 665 object at the same time.
In some embodiments, a manipulable 3D object 665 may appear within the 360° panoramic environment as originally viewed by the user (e.g., without selecting a hotspot icon).
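A minimal, non-limiting sketch of one way such shared manipulation could work is given below (the broadcast callback and the degrees-per-pixel mapping are assumptions made for the example):

```python
class SharedModelRotation:
    """Sketch of a manipulable 3D model whose rotation, driven by one user's
    finger or pointer drag, is broadcast so that other users see the same
    manipulation in real time."""
    def __init__(self, broadcast, degrees_per_pixel=0.5):
        self.yaw_deg = 0.0
        self.broadcast = broadcast            # assumed callback to other user modules
        self.degrees_per_pixel = degrees_per_pixel

    def on_drag(self, pixels_dx):
        # Horizontal drag rotates the model about its vertical axis.
        self.yaw_deg = (self.yaw_deg + pixels_dx * self.degrees_per_pixel) % 360.0
        self.broadcast({"yaw_deg": self.yaw_deg})
```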
Real-Time Virtualization of Real 3D environment
In some embodiments, a parallel real and virtual 3D environment system 10 is configured to construct, in real time, a virtual 3D environment 125B from a physical 3D environment 125A of a physical user 120A. The virtual 3D environment 125B may be a 360° panoramic image representation or a 3D model. The physical-user module 115A further comprises a panoramic camera and/or depth-image camera that acquires a virtual representation of the real 3D environment. To form a 3D model, depth-image data of the real 3D environment may be processed and displayed in real time. The 3D model is stored in the representation DB 105 and shared in real time with virtual users. The view of the 3D model may flow in real time with movement of the physical-user module 115A. The depth-image camera may be implemented by technologies such as LIDAR, lasers, a depth camera, etc. The 3D model provides a virtual 3D environment 125B that is an accurate reproduction of the real 3D environment with regard to location, size, and angular information. The virtual 3D environment is thus acquired in real time, and the acquired 3D environment can be viewed in real time as well.
Composite Virtual 3D environments
In some embodiments of a virtual 3D environment 125B and of parallel 3D environment systems, a collection of virtual 3D environments 125B — comprising 2D panoramic images and/or 3D models, stored or acquired in real time by physical users — is stitched together to form a composite virtual 3D environment 125B. The physical 3D environments represented by existing virtual 3D environments 125B, or being acquired, need not be actually connected or even close together. The system stitches the virtual 3D environments 125B together to match in location, size, and angle, such that the composite virtual 3D environment 125B appears realistic to virtual users navigating therethrough.
Streaming Video/Audio Display of Avatars
In some embodiments, the appearance of one or more of the avatars may be obtained from a shared video stream. The rendering module 110 renders the shared video stream to be presented at, and to move with, the position of the avatar. The shared video stream may be represented as a head of the avatar, or as a screen appearing nearby or appearing to support the avatar. In some embodiments, to produce the shared video, the rendering module does any combination of: recognizing the face (of a user 120B) and cutting it out; presenting a video screen as acquired by a camera; presenting a depth camera/scan stream; or rendering a 3D model according to movement of the user. An audio stream, generated by the rendering module 110, that accompanies an avatar or a video stream can be implemented with immersive 3D stereoscopy, in which the audio realistically sounds as if coming from the avatar, i.e., from the virtual direction and distance of the avatar relative to the virtual user, and/or in accordance with virtual acoustics of the virtual 3D environment 125B.
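The following is an illustrative sketch only of simple distance attenuation and stereo panning for an avatar's voice; a production system would more likely use an HRTF-based spatial audio engine, and the function and parameter names are assumptions for the example:

```python
import math

def spatialize_voice(viewer_pos, viewer_yaw, avatar_pos, reference_distance=1.0):
    """Approximate immersive audio for an avatar's voice stream: volume falls
    off with distance and stereo balance follows the avatar's direction
    relative to where the viewer is facing (sketch only)."""
    dx = avatar_pos[0] - viewer_pos[0]
    dz = avatar_pos[2] - viewer_pos[2]
    distance = max(math.hypot(dx, dz), reference_distance)

    gain = reference_distance / distance           # simple 1/d attenuation
    azimuth = math.atan2(dx, dz) - viewer_yaw      # angle of avatar vs. gaze direction
    pan = math.sin(azimuth)                        # -1 = hard left, +1 = hard right

    left = gain * (1.0 - pan) / 2.0
    right = gain * (1.0 + pan) / 2.0
    return left, right
```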
Presentation of Objects
In some embodiments, the system 10 further comprises an objects module 130. The objects module 130 stores representations of virtual objects 135B, such as merchandise for sale. The rendering module 110 renders the objects for display in a virtual 3D environment 125B. In a parallel system, a virtual object 135B in the objects module 130 may represent a real object 135A in a physical 3D environment 125A; or may be purely a virtual object 135B. In a virtual system, an object 135 has only a virtual representation in the objects module 130. An object representation is supplied to the objects module 130, for example, by a manufacturer of the physical object. The objects module 130 stores object representations and enables placement (by a user interface, for example) into a location within the environment representation supplied by the representation module 105. In some embodiments, a virtual object 135B may be moved by virtual and/or physical users. In some embodiments, a real object 135A in a parallel system may be moved by a physical user 120A and its movement is updated in the objects module 130 and displayed in real time to virtual users 120B.
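A minimal sketch of an objects module of this kind follows (class and attribute names are assumptions made for the example): it stores object representations and positions and pushes movement updates to listening user modules so that motion is reflected in real time.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    object_id: str
    model_uri: str                        # 2D image or 3D model resource
    position: tuple = (0.0, 0.0, 0.0)
    orientation_deg: float = 0.0
    mirrors_real_object: bool = False     # True when it parallels a real object 135A

class ObjectsModule:
    """Minimal registry sketch of virtual object representations and positions."""
    def __init__(self):
        self._objects = {}
        self._listeners = []

    def add(self, obj: VirtualObject):
        self._objects[obj.object_id] = obj

    def on_update(self, callback):
        self._listeners.append(callback)

    def move(self, object_id, new_position, new_orientation_deg=None):
        obj = self._objects[object_id]
        obj.position = tuple(new_position)
        if new_orientation_deg is not None:
            obj.orientation_deg = new_orientation_deg
        for notify in self._listeners:
            notify(obj)                   # e.g. re-render the object for every viewing user
```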
A real object 135A may be identified and its location determined within a real 3D environment 125A with a LIDAR or photographic camera. The objects module 130 receives the location data and registers the location of the physical object’s 135A representation accordingly. The physical object 135A may be moved (e.g., lifted, moved, rotated, tried on) by a physical user 120A. The camera can enable real-time tracking of the object, in order to present motion of the object’s representation to virtual users 120B and/or physical users 120A in other physical 3D environments 125A.
For a 3D environment represented with a 3D model, the rendering module may render occlusion of a virtual object 135B by a surface, by an avatar, or by another virtual object 135B in the 3D environment. Additionally, the rendering module may render occlusion of an avatar by a virtual object 135B. In a real 3D environment 125A, virtual objects 135B may be occluded by real objects 135A, and vice versa.
The virtual representation of an object may be a 2D image or a 3D model. Figs. 4A and 4B show virtual 2D-image representations of three mixers: mixer 702, mixer 704, and mixer 706. The background environment 700 may be real or virtual. The 2D-image representations of mixers 702 and 704 are rotated with the movement of a user, as can be seen by comparing Fig. 5A and Fig. 5B, such that the 2D plane of the virtual object remains normal to the virtual viewing angle of the viewing user. Mixer 706, shown partially occluded by the center counter in Fig. 5A, is fully occluded and therefore not shown in Fig. 4B.
Interactive Bots
In some embodiments, the system further comprises a bot module (not shown). The bot module provides a position of a bot avatar to the tracking module 112. However, a bot and its avatar are not coordinated by a human user; the motion and speech of a bot are computer-controlled. Like a user avatar, a bot avatar in a parallel system appears to both physical users and virtual users. The bot may be programmed to understand speech and typing of physical and virtual users and to respond to inquiries. A bot may be replaced by a human virtual user 120B when, for example, the bot or a human recognizes a situation requiring human intervention, such as the bot’s reduced comprehension, a strong probability of a potential sale, an unhappy customer, etc. Like a virtual user salesperson, a bot can appear as an AR avatar overlain upon a real 3D environment or in a virtual 3D environment.
Real Stores in Parallel 3D environments
In some embodiments of the system 10, the real 3D environment 125A is a store. Entry and movements of a physical user 120A — a customer — into the store are captured within the virtual and real parallel implementation of the system 10, by the physical user module 115A or by a camera (not shown) in the store, in communication with the tracking module 112. In response to the physical user’s 120A entry, the rendering module 110 renders a display of a virtual sales representative, presented to the physical user 120A as an AR or holographic avatar overlain upon the real store 3D environment 125A. The movement and speech of the virtual sales representative may be provided by a salesperson who is a virtual user 120B or may be computer-generated by a sales bot. The physical user 120A sees the virtual sales representative at a precise position, defined by the system 10, in relation to the real store.
When the physical user 120A customer first enters the store, the virtual sales representative — whether controlled by a virtual user 120B or by a sales bot — is enabled by the system 10 and implemented by the rendering module 110 to respond to the motions of the customer. The virtual sales representative may approach the customer as she enters the store. The virtual sales representative may recognize in what direction the customer is facing and at what object 135A of merchandise the customer is looking. The virtual sales representative may virtually offer assistance to the physical user 120A customer and may interact in a dialog with the customer about products in the real store. The customer may purchase and pay for merchandise through the system 10, in connection with an e-commerce server (not shown), which can be remotely located from the system 10. For example, the customer may state her intention to buy a product or bring the product to the sales desk, and then present her credit card to the virtual sales representative. The system 10 captures the credit card number and finalizes the sale.
The users in the real store may, oppositely, comprise a real user 120A sales representative and a virtual user 120B customer. The tracking module 112 may track any combination of real user 120A and virtual user 120B sales representatives and real user 120A and virtual user 120B customers. If an item is removed from its display in the store, or if an item is redisplayed, the objects module 130 updates the virtual objects 135B (merchandise) appearing to virtual users 120B.
Virtual Stores
In some embodiments of the system 10, the virtual 3D environment 125B is a virtual store. Entry and movements of a virtual user 120B — a customer — into the virtual store are captured within the virtual implementation of the system 10, by the virtual user module 115B in communication with the tracking module 112. In response to the virtual user’s entry, the rendering module 110 renders a display of a virtual sales representative, presented to the virtual user 120B as a VR or holographic avatar overlain upon the virtual store 3D environment. The movement and speech of the virtual sales representative may be provided by a salesperson who is a virtual user 120B or may be computer-generated by a sales bot. The virtual user 120B sees the virtual sales representative at a precise position, defined by the system 10, in relation to the virtual store.
When the virtual user 120B customer first enters the store, the virtual sales representative — whether controlled by a virtual user 120B or by a sales bot — is enabled by the system 10 and implemented by the rendering module 110 to respond to the motions of the customer. The virtual sales representative may approach the customer as he enters the store. The virtual sales representative may recognize in what direction the customer is facing and at what virtual object 135B of merchandise the customer is looking. The virtual sales representative may offer assistance to the virtual user 120B customer and may interact in a dialog with the customer about products in the virtual store. The customer may purchase and pay for merchandise through the system 10, in connection with a commerce module (not shown). For example, the customer may state his intention to buy a product or provide a predefined gesture through the virtual user module 115B, and then enter his credit card number or present his credit card to the virtual sales representative. The system 10 captures the credit card number and finalizes the sale.
Virtual Teleportation and Home Decor
In some embodiments, a user (virtual or physical) may virtually teleport himself to a real 3D environment 125A, becoming a virtual user 120B in a parallel real and virtual 3D environment. For example, a salesperson or interior decorator is a physical user 120A in a home furnishings store. The physical user 120A has an XR user module 115A/B. A customer with an AR user module 115A is a physical user 120A in her home. The user module 115A may be furnished with a scanning depth camera, further described herein, taking measurements of a room in the home. The customer may invite the salesperson into her home to give her advice about a product selection. The salesperson virtually teleports to her home, appearing to the customer as an AR avatar in her home. The position of the salesperson avatar is accurately updated and synced in real time, so that his position in the real 3D environment is accurately represented and seen in real time by the customer, and so that the salesperson sees the home accurately in real time according to his position and orientation. The salesperson avatar thus appears to the customer to be realistically moving in her home. The salesperson, who is now a virtual user 120B in a parallel 3D environment, sees the inside of the customer’s home and can virtually navigate through a room in the home and point out which products are best suited where, while the customer may follow the avatar, interact with the salesperson verbally and/or with gestures, and learn how best to furnish and decorate her home from the salesperson’s presentation. The real 3D environment 125A (interior rooms of the home) may be pre-stored in the 3D environment representation database 105 or may be scanned in real time.
In some embodiments, virtual objects 135B — such as virtual samples of furniture, ceramics, bathroom, home decor, carpets, floors and parquets, paint, wallpaper, outdoor furniture, swimming pools, garden design, awnings, windows, and doors — may be virtually teleported into a real 3D environment 125A. The customer can see through the AR glasses how virtual samples look in her home. The virtual samples appear realistically — as 2D or 3D holographic objects — with regard to size, color, and placement in the real home. Virtual user modules 115B may be enabled to virtually move or rotate the virtual samples within the home. If a virtual user 120B interior decorator is present (e.g., by virtual teleportation), the customer and salesperson see the virtual samples placed in the room and each other’s avatars in the parallel 3D environment, and can interact therein.
In some embodiments, the physical user 120A may modify the virtual representation of the real 3D environment 125A. For example, if the home owner has an old sofa she intends to discard, she may choose to remove the virtual sofa from the virtual representation of the room. She may drag a selection box around the sofa and then resize it to zero. In response, the rendering module 110 removes the vertices, edges, and polygons representing the surface of the sofa in the 3D model. The rendering module 110 may then modify the 3D virtual representation to extend the wall-floor edge over the portion formerly hidden by the sofa, and extend the wall and floor surfaces in the virtual representation. The old sofa is thereby removed and replaced with available space in the room, over which other virtual furnishings may be arranged and placed.
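By way of illustration, a simplified sketch of the removal step could drop every mesh triangle that falls entirely inside the user's selection box; patching the exposed wall and floor surfaces would be a separate step and is not shown (names and data layout are assumptions for the example):

```python
def remove_object_from_mesh(vertices, triangles, box_min, box_max):
    """Keep only triangles that are not fully inside the selection box.
    `vertices` is a list of (x, y, z) points; `triangles` is a list of
    3-tuples of vertex indices (sketch only)."""
    def inside(v):
        return all(lo <= c <= hi for c, lo, hi in zip(v, box_min, box_max))

    kept_triangles = [
        tri for tri in triangles
        if not all(inside(vertices[i]) for i in tri)
    ]
    return kept_triangles
```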
In some embodiments, other virtual users 120B (e.g., friends) may virtually navigate or virtually teleport to the customer’s home. The friends’ avatars appear to co-exist with the real user 120A customer in her home, and may share in her experience of shopping for interior decor. The virtual user 120B friends can help the customer decorate her home by making suggestions, while all see the results of various selections virtually applied in the home. The virtual user friends may view the “editing” of the home, i.e., removal of old furniture from the virtual representation and the teleportation and arrangement of new virtual furniture, as described above.
Online Virtual Stores
In some embodiments, the system 10 provides a VR version on the web of an online store. The objects module 130 may receive images of virtual objects 135 of merchandise for sale, as well as related data (prices, etc.) of the merchandise, from an online store website. Virtual user 120B salespeople or bots may assist virtual user 120B customers entering the store. The customers may be drawn to the virtual store via a link in the online store website.
In some embodiments, the online virtual store may exist in parallel within a real store with physical users 120A therein, containing 1) real objects 135A with parallel virtual object 135B representations for virtual users 120B accessing the virtual store from the web; and/or 2) virtual objects 135B rendered to virtual users 120B on the web and by AR to physical users 120A. Optionally, physical users 120A may place merchandise comprising real objects 135A and virtual objects 135B in their online shopping cart.
Analytics
In some embodiments, the system 10 further comprises an analytics module 140. The analytics module 140 receives positions of the virtual users 120B and physical users 120A from the tracking module 112. The analytics module 140 collects and statistically analyzes the positional data. The analytics module 140 may record and statistically analyze user actions such as movements, glances, interactions, speech, and timestamps/durations of user positions and user actions. If the system is connected with an e-commerce server, the analytics module 140 may track adding to the shopping cart and buying of merchandise. The analytics module 140 may provide revenue indices to merchants using the system 10 to market their products. The analytics module may provide psychological metrics such as buying behavior with and without friends, with and without a salesperson/designer, and comparison of purchasing ratios thereof.
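As a non-limiting sketch, one statistic the analytics module might compute from the tracked positions is dwell time per zone of the environment (the zone_of mapping and the sample format are assumptions made for the example):

```python
from collections import defaultdict

def dwell_time_by_zone(position_samples, zone_of):
    """Aggregate how long users spend in each zone from timestamped samples of
    the form (user_id, position, timestamp); zone_of(position) maps a position
    to a zone label (sketch only)."""
    totals = defaultdict(float)
    by_user = defaultdict(list)

    for user_id, position, ts in position_samples:
        by_user[user_id].append((ts, position))

    for samples in by_user.values():
        samples.sort(key=lambda s: s[0])
        # Attribute the interval between consecutive samples to the earlier zone.
        for (t0, p0), (t1, _) in zip(samples, samples[1:]):
            totals[zone_of(p0)] += t1 - t0

    return dict(totals)
```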
Content Creation Platform
In some embodiments, there is provided a content creation platform for structuring and programming the system 10. The platform can provide, for example, programming or scripting tools for building virtual stores, converting existing online stores to multi-user virtual stores, creating an AR/XR layer over a physical 3D environment 125A (to provide a parallel virtual 3D environment 125B), and/or adding a virtual layer over an existing virtual layer (for overlaying on a parallel physical 3D environment). The platform can be enabled for capturing a representation of a physical 3D environment, tracking and syncing of users, teleportation, and/or scanning or simulating a real 3D environment. The capturing of a 3D environment may be assisted by tracking many physical users (e.g., over time) and building up the representation, much like assembling a puzzle, for example by assembling images of the real 3D environment taken by many physical users.
3 Dimensional Podcasts
A podcast is, for example, an episodic series of digital audio or video files that a user can download to a personal device to listen to at a time of their choosing. Streaming applications and podcasting services provide a convenient and integrated way to manage a personal consumption queue across many podcast sources and playback devices. There also exist podcast search engines, which help users find and share podcast episodes. The content can be accessed using any computer or similar device that can play media files. In some embodiments of the present invention, a 3 dimensional metaverse podcast of a virtual environment can be made by a user, stored, and shared as a 3 dimensional active recording.
In some embodiments of the present invention, a real environment, such as a real studio interview between people, can be recorded as a 3 dimensional metaverse podcast, and users of the system of the present invention can interact with the recording. The recorded 3 dimensional metaverse virtual environment, such as a 3D metaverse podcast, can be stored on the environment representation database.
Virtual user modules are configured to interact with the recorded 3 dimensional virtual environment and the rendering module is configured to render for each virtual user, a hemispheric background projection of the recorded 3 dimensional virtual environment on a background image layer, appropriate to the virtual position of the virtual user.
Methods of the Invention
In embodiments of methods of the invention described herein, rendering is done by (or nearby) the user module. It is understood that the same steps and/or the same effects can be achieved if rendering is done remotely.
Reference is now made to Fig. 6, a flow chart of a method 1000 for providing a live rendering, in an interactive, multi-user 3D-modeled virtual 3D environment, to a viewing real virtual user of an avatar representing another virtual user, according to some embodiments of the invention. The method 1000 is typically repeated with two users exchanging roles (i.e., the viewing user becomes the other user and vice versa) and for every other combination of two virtual users participating in the interactive, multi-user 3D environment. Additionally, the method 1000 is repeated, periodically and frequently, or with motion of either user for real time updating.
The method 1000 comprises steps of determining the location of a first (viewing) virtual user 1005 and determining the location of a second (other) virtual user 1010. The determinations may be made by any technique(s) known in the art, such as GPS, SLAM, and optical flow. Different techniques may be used for different virtual users.
The method 1000 further comprises a step of transmitting the location of the second user to a remote server 1015. Each user device may transmit additional information, such as an ID of the user, the present direction the user is facing, or a present action or gesture of the user.
The method 1000 further comprises a step of transmitting the location of the second virtual user to a user device of the first virtual user 1020. The additional information, if any, is also transmitted to the user device of the first user. The method 1000 further comprises placing an avatar of the second user within the 3D model 1025. Placement of the avatar is made as a function of the direction of the virtual line-of-sight of the first virtual user to the second virtual user. The avatar may be a 2D or 3D model, an image, or any representation of the second virtual user. If additional information was received from the second user module, the avatar may be depicted according to the present facing direction or a gesture of the second virtual user.
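A minimal server-side sketch of this relay is given below (class and method names are assumptions for the example, not the invention's API): the server keeps the latest reported position of each virtual user and forwards updates to every other user's device, which then places or updates the corresponding avatar.

```python
import json
import time

class PositionRelay:
    """Sketch of the server role in method 1000: receive each user's location
    and additional information, and forward it to all other users' devices."""
    def __init__(self):
        self.latest = {}      # user_id -> (position, facing, timestamp)
        self.devices = {}     # user_id -> callable that delivers a message to that device

    def register_device(self, user_id, send):
        self.devices[user_id] = send

    def report_position(self, user_id, position, facing=None):
        self.latest[user_id] = (position, facing, time.time())
        update = json.dumps({"user": user_id, "position": position, "facing": facing})
        for other_id, send in self.devices.items():
            if other_id != user_id:
                send(update)   # the receiving device places the avatar in its 3D model
```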
Reference is now made to Fig. 7, a flow chart of a method for providing live update and representation of avatars of users in an interactive, multi-user, 360° panoramic, virtual or parallel real/virtual 3D environment, where the virtual 3D environment is a rendering of the real 3D environment.
Reference is now made to Fig. 8, a flow chart of a method for providing live update and representation of 3D or 2D models and images in an interactive, multi-user, 360° panoramic, parallel virtual and/or real 3D environment, where the virtual 3D environment is a rendering of the real 3D environment.
Reference is now made to Fig. 9, a flow chart of a method 1300 for providing a live rendering, in an interactive, multi-user 3D-modeled virtual 3D environment, of a virtual object to a viewing real or virtual user, according to some embodiments of the invention. The method 1300 is typically repeated for each virtual object in the virtual field-of-view of the user and repeated periodically and frequently, or with motion of the user or object, for real-time updating.
The method 1300 comprises steps of determining the location of a user 1305 and determining the location of an object 1310. The determinations may be made by any technique(s) known in the art, such as GPS, SLAM, and optical flow. Different techniques may be used for different users.
The method 1300 further comprises a step of transmitting the location of the object to a remote server 1315. The objects module may transmit additional information, such as an ID of the object, the present orientation of the object, or an animation of the object.
The method 1300 further comprises a step of transmitting the location of the object to a user device of the user 1320. The additional information, if any, is also transmitted to the user device of the user. The method 1300 further comprises placing a virtual representation of the object within the 3D model 1325. Placement of the object is made as a function of the direction of the virtual line-of-sight of the user to the object. The object representation may be a 2D image, a 3D model, or any representation of the object.
Reference is now made to Fig. 10, a flow chart of a method 1400 for providing a live rendering, in an interactive, multi-user 3D-modeled virtual 3D environment, of the orientation of a 2D image representation of an object in a 3D-modeled 3D environment, according to some embodiments of the invention. The method 1400 is typically repeated for each 2D-image represented object in the virtual field-of-view of the user and repeated periodically and frequently, or with motion of the user or 2D image object, for real-time updating.
The method 1400 comprises steps of determining the location of a user 1405 and determining the location of a 2D image model of an object 1410. The determinations may be made by any technique(s) known in the art, such as GPS, SLAM, and optical flow. Different techniques may be used for different users.
The method 1400 further comprises a step of updating, on a server, the orientation of the 2D image to face the location of the user 1415.
The method 1400 further comprises a step, if the 2D image object is moved by the user, of updating the 2D image position on the server for real-time effect to the user 1420.
The method 1400 further comprises a step, if the 2D image object is transformed (e.g., activated, used, folded, etc.) by the user, of updating the 2D image on the server for real-time effect to the user 1425.
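By way of illustration only, the orientation update of step 1415 can be sketched as a billboard yaw that keeps the 2D image plane facing the user (function and coordinate conventions are assumptions for the example):

```python
import math

def billboard_yaw_toward_user(object_pos, user_pos):
    """Return the yaw angle (degrees, about the vertical axis) that rotates a
    2D image representation so its plane normal points from the object toward
    the user, as in method 1400 (sketch only)."""
    dx = user_pos[0] - object_pos[0]
    dz = user_pos[2] - object_pos[2]
    return math.degrees(math.atan2(dx, dz))
```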
Reference is now made to Fig. 11, a flow chart of a method 1500 for synchronizing the placement and/or orientation of objects in a multi-user parallel real and virtual 3D environment, according to some embodiments of the invention. The method is typically repeated for each physical user and repeated periodically and frequently, or with motion of a user, for real-time updating.
The method 1500 comprises a step of determining the location of a physical user in a real 3D environment 1510. The determinations may be made by any technique(s) known in the art, such as GPS, SLAM, and optical flow. The method 1500 further comprises a step of defining the virtual position of the real user within a parallel 3D model representation of the real 3D environment 1515.
The method 1500 further comprises a step of transmitting the virtual position of the real user to a remote server 1520.
The method 1500 further comprises a step of transmitting the virtual positions of each user (which can include real and virtual users) to the devices of each other user 1525. The transmission may include additional information of each user, such as an ID of the user, the present direction the user is facing, or a present action or gesture of the user.
The method 1500 further comprises a step of placing a virtual object (e.g. an avatar), within the 3D model representation, at the virtual position of the physical user, for display to virtual users 1530.
The method 1500 further comprises a step of placing virtual objects, at the virtual positions of virtual users, for display by AR to the physical user in the real 3D environment 1535.
Reference is now made to Fig. 12, a flow chart of a method 1600 for synchronizing the placement and/or orientation of objects in a multi-user, multi-platform virtual or parallel real and virtual 3D environment, according to some embodiments of the invention. The multiple platforms may comprise, for example, PC, web, Web3D, mobile devices, glasses, and/or holographic screens. The method is typically repeated for each physical user and repeated periodically and frequently, or with motion of a user, for real-time updating.
The method 1600 comprises a step of determining the location of a physical user in a real 3D environment 1510. The determinations may be made by any technique(s) known in the art, such as GPS, SLAM, and optical flow.
The method 1600 further comprises a step of defining the virtual position of the real user within a parallel 3D model representation of the real 3D environment 1615.
The method 1600 further comprises a step of transmitting the virtual position of the real user to a remote server 1620.
The method 1600 further comprises a step of transmitting the virtual positions of each user (which can include real and virtual users) to the devices of each other user 1625. The transmission may include additional information of each user, such as an ID of the user, the present direction the user is facing, or a present action or gesture of the user.
The method 1600 further comprises a step of placing a virtual object (e.g. an avatar), within the 3D model representation, at the virtual position of the physical user, for display to virtual users 1630.
The method 1600 further comprises a step of placing virtual objects, at the virtual positions of virtual users, for display by AR to the physical user in the real 3D environment 1635.

Claims

1. A system 10 for providing an interactive multi-user 360° panoramic-image representation virtual 3D environment 125B, said system 10 comprising an environment representation database 105 configured for storing a 360° panoramic representation of a 3D environment; a plurality of virtual user modules 115B each configured to acquire and update a virtual position and orientation, in said representation, of each of one or more virtual users 120B; a tracking module 112, configured to receive and store said virtual positions from said virtual user modules 115B; a rendering module 110, configured to render, for each said virtual user, a hemispheric background projection of the 360° panoramic representation on a background image layer, appropriate to said virtual position of said virtual user; wherein the rendering module 110 is further configured, for each viewing said virtual user, to place an avatar of other said virtual users on an avatar layer, disposed appropriate to said virtual position of the viewing virtual user and each of the other virtual users, and overlay the avatar layer upon the background layer; and the virtual user modules 115B are further configured to display a virtual 3D environment 125B comprising the combined layers to its virtual user 120B.
2. The system of claim 1, wherein said virtual user modules are further configured to acquire and update a virtual orientation, action, and/or posture of each of one or more virtual users and said rendering module is further configured to render said avatar on said avatar layer according to said orientation, action, and/or posture.
3. The system of claim 1, wherein said rendering module is further configured to render the size of said avatars as a function of a distance between said positions of viewing and viewed virtual users and of a scaling factor of the size of 2D panoramic images relative to the size of a real 3D environment from which the panoramic images were taken.
4. The system of claim 1, wherein if more than one virtual user occupies the same virtual position in the virtual 3D environment, the rendering module spaces their avatars apart, appearing arrayed or clustered within some radius of the virtual position of the avatars.
5. The system of claim 1, further comprising an objects module, configured to store representations and positions of objects, said rendering module is further configured to overlay said virtual objects upon said background layer, according to virtual positions of said virtual objects and virtual position and line-of-sight direction of said viewing user.
6. The system of claim 5, wherein the representation of a said virtual object is accompanied by a hotspot, displayed as a hotspot icon on said virtual user module; when a said virtual user selects the hotspot icon, said virtual user module is configured to display a 3D model of the virtual object, wherein said 3D model is manipulable by said virtual user.
7. The system of claim 1 or 5, wherein said environment representation database is further configured to store a 3D model of said 3D environment and said rendering module is further configured to do one or more of a. employ the 3D model in order to determine distances between said viewing virtual user and other virtual users and background walls and objects in the scene represented by the 2D panoramic images, and to size said avatars according to said distances; b. render occlusion of a said avatar or said object fully or partially obscured by said virtual 3D environment, as computed from said 3D model.
8. The system of claim 1 or 5, wherein said environment representation database is further configured to store a recorded 3 dimensional metaverse environment such as a 3D metaverse podcast, said plurality of virtual user modules is configured to interact with said recorded 3 dimensional virtual environment, and said rendering module is configured to render, for each said virtual user, a hemispheric background projection of said recorded 3 dimensional virtual environment on a background image layer, appropriate to said virtual position of said virtual user.
9. The system of claim 8 wherein said recorded 3 dimensional metaverse environment such as a 3 D metaverse podcast is recorded from the real world.
10. A system 10 for providing an interactive multi-user virtual 3D environment 125B, comprising an environment representation database 105 configured for storing a representation, comprising a 3D model, of a 3D environment; one or more virtual user modules 115B each configured to acquire and update a virtual position of each of one or more virtual users 120B; a tracking module 112, configured to receive and store said virtual positions from said virtual user modules 115B; a rendering module 110, configured to receive said representation and accordingly render a virtual 3D environment 125B for display on said virtual user modules 115B; wherein the rendering module 110 is further configured, for each viewing said virtual user, to render avatars of other said virtual users, according to the virtual positions of the viewing virtual user and of each of the other virtual users; and the virtual user modules 115B are further configured to display a virtual 3D environment 125B comprising the rendered representation of the 3D model and avatars.
11. The system of claim 8, wherein said virtual user modules comprise VR glasses, VR contact lenses, a computing device with a display screen, a Web3D station or mobile device, or any combination thereof.
Multiple instances of the virtual representation for different groups of users
12. The system of claim 8, further comprising additional instances of said virtual representation, each said virtual representation instance populated by a different group of said virtual users.
13. The system of claim 1 or 8, wherein visual data representing said avatars comprise one or more of a generic representation, an icon, a 2D image, a 3D model, a streaming video, or any combination thereof.
14. The system of claim 8, further comprising a virtual objects module, configured to store representations and positions of virtual objects, said rendering module is further configured to render said virtual objects in said virtual 3D environment in said virtual positions.
15. The system of claim 1 or 8, wherein said rendering module is further configured to render a voice of another user, a volume of said voice adjusted according to a distance between the avatars of the other user and said viewing user.
16. The system of claim 13, wherein said rendering module is further configured to render said voice as if coming from the direction of said other user, e.g. by employing surround sound.
17. A system for providing interactive multi-user parallel real and virtual 3D environments, comprising any one of the systems of claims 1-13 and further comprising one or more physical user modules 115A disposed in a real said 3D environment 125A represented in said environment representation database 105, each said physical user module configured to track a real position of a physical user 120A; wherein the rendering module is further configured to render avatars, of physical users in a real 3D environment 125A, overlain upon or placed in said virtual 3D environment 125B (duplicating said real 3D environment), for display on said virtual user modules; and the rendering module is further configured to render avatars of said virtual users, overlain upon said real 3D environment, for display on said physical user modules.
18. The system of claim 15, wherein said representation is constructed and/or updated from one or more depth image scans of said real 3D environment acquired from one or more of said physical user modules and/or an acquisition module in said physical 3D environment.
19. The system of claim 16, wherein said virtual representation is constructed and/or updated and displayed in real time during participation of said real and virtual users in said parallel environment.
20. The system of claim 16 or 17, wherein said representation is constructed and/or updated by cumulative said scans by a plurality of physical user devices.
21. The system of claim 16, wherein said scan is made by a professional-grade depth image camera.
22. The system of claim 15, wherein said real 3D environment is a real store and said system 10 is further configured to enable interaction between a virtual sales representative and a physical user customer.
23. The system of claim 15, wherein said real 3D environment is a real store and said system 10 is further configured to enable interaction between any combination of virtual and physical user sales representatives and virtual and physical user customers.
24. The system of claim 20 or 21, wherein said virtual sales representatives comprise a virtual user, a sales bot, or any combination thereof.
25. The system of claim 22, wherein said sales bots are responsive to motions of a said physical user customer in said real store.
26. The system of claim 15 or 16, further configured for a said physical or virtual user/object to virtually teleport to a real 3D environment, becoming a said virtual user/object in said real 3D environment.
27. The system of claim 24, wherein said teleported virtual user is an interior decoration assistant, virtually teleported to a real home of a physical user customer therein, and said system is further configured for rendering avatars of said assistant and said customer interacting.
28. The system of claim 25, further configured for teleportation of one or more virtual samples of any combination of furniture, ceramics, bathroom, home decor, carpets, floors, parquets, paint, wallpaper, outdoor furniture, swimming pools, garden design, awnings, windows, and doors to said home; and further configured for placement of said virtual samples in said home according to measurements made by said depth images.
29. The system of claim 26, further configured for one or more additional virtual users to virtually navigate or virtually teleport to said home and interact with said customer.
30. The system of claim 26 or 27, wherein the representation of said home is modified by a set of tools enabling removal of objects appearing in the real 3D environment from the representation of the home, by clearing or hiding 3D triangles and mesh elements from the representation.
31. The system of claim 1 or claim 8, wherein said virtual 3D environment is a virtual store with a bot serving as a virtual sales representative.
32. The system of claim 1, 8, or 15, further comprising an analytics module configured to collect and statistically analyze positions, orientations, and/or actions, of said virtual and/or physical users.
33. The system of claim 30, further configured to collect user actions, timestamps, and/or durations of said user actions and include them in said statistical analysis; and/or to compare virtual with real user activities.
34. The system of claim 1, 8, or 15, further comprising tools for a teleportation platform: capturing a physical 3D environment, creating said representation of said physical 3D environment, enabling tracking of and syncing between users, enabling teleportation, scanning for creation or update of the real 3D environment, placement of objects, and adding to the shared environment; said tools configured for programming or scripting one or more of: a virtual store; converting existing online stores to multi-user virtual stores; creating an AR/XR layer over a physical 3D environment; and adding a virtual layer over an existing virtual layer.
35. The system of claim 32, wherein the representation of the real 3D environment builds up from many physical users traversing the real 3D environment.
36. A system 10 for providing an interactive 360° panoramic-image representation virtual 3D environment 125B, said system 10 comprising an environment representation database 105 configured for storing a 360° panoramic representation of a 3D environment; one or more virtual user modules 115B each configured to acquire and update a virtual position and orientation, in said representation, of each of one or more virtual users 120B; a tracking module 112, configured to receive and store said virtual positions from said virtual user modules 115B; a rendering module 110, configured to render, for each said virtual user, a hemispheric background projection of the 360° panoramic representation on a background image layer, appropriate to said virtual position of said virtual user; an objects module, configured to store representations and positions of objects, said rendering module being further configured to overlay said virtual objects upon said background layer, according to virtual positions of said virtual objects and the virtual position and line-of-sight direction of a said virtual user, wherein a said representation of a said object is accompanied by a hotspot, displayed as a hotspot icon on said virtual user module; when a said virtual user selects the hotspot icon, said virtual user module is configured to display a 3D model of the virtual object, wherein the 3D model is manipulable by said virtual user.
37. A method 1000 for providing a live rendering, in an interactive, multi-user 3D-modeled virtual 3D environment, to a viewing real virtual user of an avatar representing another virtual user, comprising steps of determining the location of a first (viewing) virtual user 1005; determining the location of a second (other) virtual user 1010; transmitting the location of the second user to a remote server 1015; transmitting the location of the second virtual user to a user device of the first virtual user 1020; and placing an avatar of the second user within the 3D model 1025.
38. A method 1300 for providing a live rendering, in an interactive, multi-user 3D-modeled virtual 3D environment, of a virtual object to a viewing real or virtual user, comprising steps of determining the location of a user 1305; determining the location of an object 1310; transmitting the location of the object to a remote server 1315; transmitting the location of the object to a user device of the user 1320; placing a virtual representation of the object within the 3D model 1325.
39. A method 1400 for providing a live rendering, in an interactive, multi-user 3D-modeled virtual environment, of the orientation of a 2D image representation of an object in a 3D-modeled 3D environment, comprising steps of determining the location of a user 1405; determining the location of a 2D image model of an object 1410; updating, on a server, the orientation of the 2D image to face the location of the user 1415; if the 2D image object is moved by the user, updating the 2D image position on the server for real-time effect to the user 1420; and if the 2D image object is transformed by the user, updating the 2D image on the server for real-time effect to the user 1425.
40. A method 1500 for synchronizing the placement and/or orientation of objects in a multi-user parallel real and virtual 3D environment, comprising steps of determining the location of a physical user in a real 3D environment 1510; defining the virtual position of the real user within a parallel 3D model representation of the real 3D environment 1515; transmitting the virtual position of the real user to a remote server 1520; transmitting the virtual positions of each user to the devices of each other user 1525; placing a virtual object (e.g. an avatar), within the 3D model representation, at the virtual position of the physical user, for display to virtual users 1530; and placing virtual objects, at the virtual positions of virtual users, for display by AR to the physical user in the real 3D environment 1535.
PCT/IL2022/050615 2021-06-09 2022-06-09 System and method for providing interactive multi-user parallel real and virtual 3d environments WO2022259253A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163208518P 2021-06-09 2021-06-09
US63/208,518 2021-06-09

Publications (1)

Publication Number Publication Date
WO2022259253A1 true WO2022259253A1 (en) 2022-12-15

Family

ID=84425820

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2022/050615 WO2022259253A1 (en) 2021-06-09 2022-06-09 System and method for providing interactive multi-user parallel real and virtual 3d environments

Country Status (1)

Country Link
WO (1) WO2022259253A1 (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9274595B2 (en) * 2011-08-26 2016-03-01 Reincloud Corporation Coherent presentation of multiple reality and interaction models
US20160093108A1 (en) * 2014-09-30 2016-03-31 Sony Computer Entertainment Inc. Synchronizing Multiple Head-Mounted Displays to a Unified Space and Correlating Movement of Objects in the Unified Space
US20160133230A1 (en) * 2014-11-11 2016-05-12 Bent Image Lab, Llc Real-time shared augmented reality experience
US20160379415A1 (en) * 2015-06-23 2016-12-29 Paofit Holdings Pte Ltd Systems and Methods for Generating 360 Degree Mixed Reality Environments
US20170038829A1 (en) * 2015-08-07 2017-02-09 Microsoft Technology Licensing, Llc Social interaction for remote communication
US20190197785A1 (en) * 2017-12-22 2019-06-27 Magic Leap, Inc. Methods and system for managing and displaying virtual content in a mixed reality system
US20190304009A1 (en) * 2016-12-22 2019-10-03 Capital One Services, Llc Systems and methods of sharing an augmented environment with a companion
WO2020070630A1 (en) * 2018-10-02 2020-04-09 Within Unlimited, Inc. Methods, systems and devices supporting real-time shared virtual reality environment
US20200133618A1 (en) * 2018-10-31 2020-04-30 Doubleme, Inc Surrogate Visitor Mixed-Reality Live Environment Sharing System with Remote Visitors
US20200211251A1 (en) * 2018-12-27 2020-07-02 Facebook Technologies, Llc Virtual spaces, mixed reality spaces, and combined mixed reality spaces for improved interaction and collaboration

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681869A (en) * 2023-06-21 2023-09-01 西安交通大学城市学院 Cultural relic 3D display processing method based on virtual reality application
CN116681869B (en) * 2023-06-21 2023-12-19 西安交通大学城市学院 Cultural relic 3D display processing method based on virtual reality application
CN117440140A (en) * 2023-12-21 2024-01-23 四川师范大学 Multi-person remote festival service system based on virtual reality technology
CN117440140B (en) * 2023-12-21 2024-03-12 四川师范大学 Multi-person remote festival service system based on virtual reality technology

Similar Documents

Publication Publication Date Title
US10755485B2 (en) Augmented reality product preview
US11367250B2 (en) Virtual interaction with three-dimensional indoor room imagery
US11823256B2 (en) Virtual reality platform for retail environment simulation
US11403829B2 (en) Object preview in a mixed reality environment
CN105981076B (en) Synthesize the construction of augmented reality environment
CA2927447C (en) Three-dimensional virtual environment
TWI567659B (en) Theme-based augmentation of photorepresentative view
US20120192088A1 (en) Method and system for physical mapping in a virtual world
WO2022259253A1 (en) System and method for providing interactive multi-user parallel real and virtual 3d environments
US20200379625A1 (en) Augmented system and method for manipulating furniture
US11471775B2 (en) System and method for providing a computer-generated environment
JP2021103526A (en) Information providing device, information providing system, information providing method, and information providing program
US20230298050A1 (en) Virtual price tag for augmented reality and virtual reality
CN113313840A (en) Real-time virtual system and real-time virtual interaction method
TWI799195B (en) Method and system for implementing third-person perspective with a virtual object
Aydoğdu Usage of augmented reality technologies a case study: augmented reality in museums
Laviole Spatial augmented reality for physical drawing
Grancharova Author’s Declaration
CN115509348A (en) Virtual furniture display method and related product
JP2001325608A (en) Method and device for displaying image, recording medium having image display program recorded thereon and electronic settlement method
Zhu Dynamic contextualization using augmented reality
LAVIOLE L'UNIVERSITÉ BORDEAUX

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22819769

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE