WO2023222194A1 - A computer software module arrangement, a circuitry arrangement, an arrangement and a method for providing a virtual display for simultaneous display of representations of real life objects in shared surfaces


Info

Publication number
WO2023222194A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual environment
shared area
location
shared
environment
Application number
PCT/EP2022/063276
Other languages
French (fr)
Inventor
Pex TUFVESSON
Alexander Hunt
Michael Björn
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2022/063276 priority Critical patent/WO2023222194A1/en
Publication of WO2023222194A1 publication Critical patent/WO2023222194A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 - Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/024 - Multi-user, collaborative environment
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 - Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 - Indexing scheme for editing of 3D models
    • G06T2219/2016 - Rotation, translation, scaling

Definitions

  • the present invention relates to an arrangement, an arrangement comprising computer software modules, an arrangement comprising circuits, a device and a method for providing a virtual display for simultaneous display of representations of real life objects in shared surfaces at different physical locations.
  • Virtual reality (VR)/extended reality (XR) collaboration is a way of engaging in social activities such as gaming or having meetings with friends and colleagues.
  • Today's virtual meeting places include competitive laser tagging games, first person shooters, virtual paintball areas as well as office settings on remote islands.
  • Collaborating means teleporting or moving around in a common virtual reality, with objects like virtual whiteboards, virtual pens, virtual 3D objects in virtual surroundings.
  • the inventors have realized a problem that exists in contemporary virtual surroundings or environments, namely that of integrating the physical objects in each respective collaborator's physical space - physical chairs, desks, whiteboards, sofas, beds, snacks and coffee mugs, the everyday things that make prolonged collaboration possible - with the shared digital objects in the virtual collaboration session.
  • Two or more users each have their respective collaboration zone. Without physically moving furniture, whiteboards or flip charts, the resulting areas will be very much misaligned.
  • An object of the present teachings is to overcome or at least reduce or mitigate the problems discussed in the background section. This is achieved by providing a possibility for allowing certain objects in all users' rooms to become collaborative surfaces - flat objects like desks, papers, walls and whiteboards, but also the top of dressers, side tables or other furniture with a flat surface. A user can make one of his physical objects shared with the other users.
  • a user can then map other users' objects to his own shared areas. For example, user B can align user A's table with his own. User A can place User B's whiteboard on his own window, since User A has no whiteboard to join it with. Or it could be that a single sheet of paper is aligned to User B's whiteboard. Surfaces can be remapped and realigned by the users themselves.
  • User A will need to place User B's and User C's furniture in his room. He chooses to place User B's whiteboard on his own window, he joins User B's sofa with his own bed, and puts User B's chair slightly to the left across his table. He joins User B's and C's tables with his own, and their tables adjust their rectangular forms and sizes to align with his own round table.
  • a virtual environment server which comprises a controller, wherein the controller is configured to: receive an indication of a first shared area in a first virtual environment, wherein the first shared area is associated with a first shared coordinate system; receive an indication of the first shared area in a second virtual environment; receive an indication of a second shared area in the first virtual environment; receive an indication of the second shared area in the second virtual environment; receive an indication of a location of an object in the first virtual environment; and determine a corresponding location in the second virtual environment for a corresponding object, wherein the controller is further configured to determine the corresponding location by: determining whether the location for the object in the first virtual environment indicates a position in the first shared area, and if so translating the location of the object in the first virtual environment into a location in the second virtual environment representing a same location in the first shared area, and if not translating the location of the object relative to the first shared area and the second shared area in the first virtual environment into a same position relative to the first shared area and the second shared area in the second virtual environment.
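To make this decision logic concrete, the following Python sketch shows one possible reading of it. It is illustrative only and not part of the publication: all names (Area, translate_location) are hypothetical, shared areas are modelled as axis-aligned 2D rectangles, and the "relative to both areas" branch is realized by expressing the point in a frame spanned by the two area centres, which is only one way of preserving a relative position.

```python
import math
from dataclasses import dataclass

@dataclass
class Area:
    """One placement of a shared area in a virtual environment
    (hypothetical model: an axis-aligned rectangle located at (x, y)
    in that environment's coordinate system)."""
    x: float
    y: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)

    def centre(self):
        return (self.x + self.width / 2, self.y + self.height / 2)

def translate_location(p, first_1, second_1, first_2, second_2):
    """Map an object location p (in the first environment's coordinates)
    to the second environment. first_1/second_1 are the first and second
    shared areas as placed in the first environment; first_2/second_2 are
    the same shared areas as placed in the second environment."""
    px, py = p
    if first_1.contains(px, py):
        # Inside the first shared area: same location in the shared
        # coordinate system, i.e. the same normalised offset from the
        # area's origin (scaled if the two placements differ in size).
        u = (px - first_1.x) / first_1.width
        v = (py - first_1.y) / first_1.height
        return (first_2.x + u * first_2.width,
                first_2.y + v * first_2.height)
    # Outside: same position *relative to* the two shared areas. Express
    # p in a frame spanned by the line between the two area centres in
    # the first environment and reproduce it in the second environment.
    ax, ay = first_1.centre()
    bx, by = second_1.centre()
    dx, dy = bx - ax, by - ay
    d2 = dx * dx + dy * dy or 1.0             # guard coincident centres
    t = ((px - ax) * dx + (py - ay) * dy) / d2              # along axis
    s = ((px - ax) * -dy + (py - ay) * dx) / math.sqrt(d2)  # sideways
    ax2, ay2 = first_2.centre()
    bx2, by2 = second_2.centre()
    dx2, dy2 = bx2 - ax2, by2 - ay2
    n2 = math.hypot(dx2, dy2) or 1.0
    return (ax2 + t * dx2 - s * dy2 / n2,
            ay2 + t * dy2 + s * dx2 / n2)
```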
  • the solution may be implemented as a software solution, a hardware solution or a mix of software and hardware components.
  • the solution discussed herein provides for the possibility of collaborating in XR while varying body position from sitting to standing and walking around, giving the body freedom of movement, which is good for creative and immersive discussions.
  • the solution discussed herein also provides for the possibility of moving collaborative surfaces from whiteboards to papers to tables, using real pens or virtual pens, giving the freedom of an effortless rearrangement of the workplace.
  • the controller is further configured to: receive an indication of the first shared coordinate system for the first shared area adapted to the first virtual environment; determine a location in the first shared area in the first virtual environment as coordinates in the first shared coordinate system; receive an indication of the first shared coordinate system for the first shared area adapted to the second virtual environment; and determine a location in the first shared area in the second virtual environment as coordinates in the first shared coordinate system, wherein the controller is further configured to translate the location of the object in the first virtual environment into a location in the second virtual environment representing a same location in the first shared area by assigning the same location the same coordinates in the first shared coordinate system in the first and the second virtual environment.
  • the controller is further configured to: receive an indication of a first environment coordinate system for the first virtual environment; and determine a location of the first shared area in the first virtual environment as coordinates in the first environment coordinate system, wherein the controller is further configured to determine a location of an object in the first shared area as coordinates in the first environment coordinate system based on the coordinates for the location of the object extending from the coordinates of the location of the first shared area in the first virtual environment.
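Read literally, this embodiment composes coordinates by letting the object's shared-area coordinates extend from the area's location in the environment coordinate system. A minimal sketch, under the same hypothetical axis-aligned model as the sketch above:

```python
def object_env_coords(area_location, object_shared_coords):
    """Environment coordinates of an object in a shared area: the area's
    location in the environment coordinate system plus the object's
    coordinates in the shared (surface) coordinate system, which extend
    from the area's location. Hypothetical helper; ignores rotation."""
    ax, ay = area_location          # e.g. the desk's location in 245:1
    ox, oy = object_shared_coords   # e.g. the object's location in 255:1A
    return (ax + ox, ay + oy)

# e.g. a desk at (3.0, 1.5) with an object at (0.4, 0.2) on its surface
# gives (3.4, 1.7) in the environment coordinate system
```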
  • the controller is further configured to: receive an indication of a movement of the object in the first virtual environment; and translate the movement relative to the first shared area and the second shared area in the first virtual environment into a relative movement between the first shared area and the second shared area (250:2A, 260) in the second virtual environment.
  • the indication of the movement comprises start coordinates in the first environment coordinate system (245:1) for a start location for the object at the first shared area and end coordinates in the first environment coordinate system (245:1) for an end location for the object at the second shared area (250:1B) in the first virtual environment, wherein the controller is further configured to: translate the end coordinates in the first environment coordinate system (245:1) into end coordinates in a second environment coordinate system (245:2) based on the location of the second shared surface (250:1B) in the first environment coordinate system (245:1) so that the movement from the end position to the location of the first shared surface is the same in the first and the second environment coordinate system.
  • the controller is further configured to: translate the movement relative to the first shared area and the second shared area in the first environment coordinate system and the first shared area and the second shared area in the second environment coordinate system for the first and second virtual environment respectively, wherein a movement from the second shared area to the first shared area in the first virtual environment is translated as a movement from the second shared area to the first shared area in the second virtual environment, and a movement from inside the first shared area to inside the first shared area in the first virtual environment is translated as a same movement inside the first shared area in the second virtual environment.
  • the controller is further configured to: translate the movement relative to the locations of the first shared area and the second shared area in the first environment coordinate system and the locations of the first shared area and the second shared area in the second environment coordinate system for the first and second virtual environment respectively, wherein the movement from the first shared area to the second shared area in the first environment coordinate system of the first virtual environment is translated as the movement from the first shared area to the second shared area in the second environment coordinate system of the second virtual environment, and the movement from the start location in the first shared area to the end location in the first shared area in the first virtual environment is translated as a same movement from a start position in the first shared area to an end position in the first shared area in the second virtual environment, wherein the start locations and the end locations in the first shared area are the same in the first shared coordinate system irrespective of virtual environment.
  • the controller is further configured to portion the movement into a plurality of partial movements, and to translate the movement by translating each partial movement.
  • the controller is further configured to translate the movement by determining if the movement is from one of the first shared area and the second shared area to the other one of the first shared area and the second shared area, and if so determine a direction in the second environment coordinate system of the movement and translate any partial movement in the first shared area as being in the determined direction.
  • the first shared coordinate system is absolute. In some embodiments the first environment coordinate system and the second environment coordinate system are relative.
  • the second shared area is a shared object.
  • a location of a shared area in a virtual environment is determined based on user input.
  • the controller is further configured to determine that the object being moved is a real life object, and in response thereto generate a virtual representation of the object to be used in the virtual environment.
  • the controller is further configured to generate a virtual marking of the real life object to be used in the first virtual environment to indicate that the object is rendered passive.
  • the first virtual environment is associated with a first virtual display arrangement and the second virtual environment is associated with a second virtual display arrangement.
  • a virtual display system wherein the system comprises a virtual environment server according to any previous claim.
  • the system further comprises the first virtual display arrangement and the second virtual display arrangement.
  • a virtual display arrangement comprising a server according to herein.
  • a method for use in a virtual environment server comprising: receiving an indication of a first shared area in a first virtual environment, wherein the first shared area is associated with a first shared coordinate system; receiving an indication of the first shared area in a second virtual environment; receiving an indication of a second shared area in the first virtual environment; receiving an indication of the second shared area in the second virtual environment; receiving an indication of a location of an object in the first virtual environment; and determining a corresponding location in the second virtual environment for a corresponding object, wherein the method further comprises determining the corresponding location by: determining whether the location for the object in the first virtual environment indicates a position in the first shared area, and if so translating the location of the object in the first virtual environment into a location in the second virtual environment representing a same location in the first shared area, and if not translating the location of the object relative to the first shared area and the second shared area in the first virtual environment into a same position relative to the first shared area and the second shared area in the second virtual environment.
  • a computer-readable medium carrying computer instructions that when loaded into and executed by a controller of a virtual display arrangement enables the virtual display arrangement to implement a method according to herein.
  • a software component arrangement for use in a virtual environment server, wherein the software component arrangement comprises: a software component for receiving an indication of a first shared area in a first virtual environment, wherein the first shared area is associated with a first shared coordinate system; a software component for receiving an indication of the first shared area in a second virtual environment; a software component for receiving an indication of a second shared area in the first virtual environment; a software component for receiving an indication of the second shared area in the second virtual environment; a software component for receiving an indication of a location of an object in the first virtual environment; and a software component for determining a corresponding location in the second virtual environment for a corresponding object, wherein the software component for determining the corresponding location comprises: a software component for determining whether the location for the object in the first virtual environment indicates a position in the first shared area, a software component for translating the location of the object in the first virtual environment into a location in the second virtual environment representing a same location in the first shared area if so, and a software component for translating the location of the object relative to the first shared area and the second shared area in the first virtual environment into a same position relative to the first shared area and the second shared area in the second virtual environment if not.
  • a virtual environment server comprising: a circuitry for receiving an indication of a first shared area in a first virtual environment, wherein the first shared area is associated with a first shared coordinate system; a circuitry for receiving an indication of the first shared area in a second virtual environment; a circuitry for receiving an indication of a second shared area in the first virtual environment; a circuitry for receiving an indication of the second shared area in the second virtual environment; a circuitry for receiving an indication of a location of an object in the first virtual environment; and a circuitry for determining a corresponding location in the second virtual environment for a corresponding object, wherein the circuitry for determining the corresponding location comprises: a circuitry for determining whether the location for the object in the first virtual environment indicates a position in the first shared area, a circuitry for translating the location of the object in the first virtual environment into a location in the second virtual environment representing a same location in the first shared area if so, and a circuitry for translating the location of the object relative to the first shared area and the second shared area in the first virtual environment into a same position relative to the first shared area and the second shared area in the second virtual environment if not.
  • Figure 1A shows a schematic view of a virtual display arrangement according to some embodiments of the present invention
  • Figure 1B shows a schematic view of a virtual display arrangement according to some embodiments of the present invention.
  • Figure 1C shows a schematic view of a virtual display arrangement according to some embodiments of the present invention.
  • Figure 1D shows a schematic view of a virtual display arrangement according to some embodiments of the present invention.
  • Figure 2A shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
  • Figure 2B shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
  • Figure 2C shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
  • Figure 2D shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
  • Figure 2E shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
  • Figure 2F shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
  • Figure 2G shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
  • Figure 2H shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
  • Figure 2I shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
  • Figure 3 shows a flowchart of a general method according to some embodiments of the teachings herein;
  • Figure 4 shows a component view for a software component arrangement according to some embodiments of the teachings herein;
  • Figure 5 shows a component view for an arrangement comprising circuits according to some embodiments of the teachings herein;
  • Figure 6 shows a schematic view of a computer-readable medium carrying computer instructions that when loaded into and executed by a controller of an arrangement enables the arrangement to implement some embodiments of the teachings herein.
  • Augmented reality augments the real world and its physical objects by overlaying virtual content.
  • This virtual content is often produced digitally and incorporates sound, graphics, and video.
  • a shopper wearing augmented reality glasses while shopping in a supermarket might see nutritional information for each object as they place it in their shopping cart.
  • the glasses augment reality with pertinent information.
  • VIRTUAL REALITY (VR) Virtual reality, unlike AR which augments reality, is intended to immerse users inside an entirely simulated experience, where all visuals and sounds are produced digitally and do not have any input from the user's actual physical environment.
  • VR is increasingly integrated into manufacturing, whereby trainees practice building machinery before starting on the production line.
  • MIXED REALITY Mixed reality combines elements of both AR and VR. Similarly to AR, MR environments overlay digital effects on top of the user's physical environment. However, MR integrates additional, richer information about the user's physical environment such as depth, dimensionality, and surface textures. In MR environments, the end user experience therefore more closely resembles the real world. To concretize this, consider two users hitting a MR tennis ball on a real-world tennis court. MR will incorporate information about the hardness of the surface (grass versus clay), the direction and force with which the racket struck the ball, and the players' height.
  • Augmented reality and mixed reality are often used to refer to the same idea and, for simplification, in this document, the term augmented reality also refers to mixed reality, and virtual reality will refer to both virtual reality and augmented reality (as well as mixed reality) unless specifically indicated.
  • VR DEVICE The device which will be used as an interface for the user to perceive both virtual and/or real content in the context of virtual reality.
  • Such a device will typically have a display which could display both the environment (real or virtual) and virtual content together (i.e., video see-through), or overlay virtual content through a semi-transparent display (optical see-through, the device technically being an AR device).
  • the VR device would need to acquire information about the environment using sensors (typically cameras and inertial sensors) to map the environment while simultaneously keeping track of the device's location within it.
  • Figure 1A shows a schematic view of a virtual display arrangement 100 according to some embodiments of the present invention.
  • the virtual display arrangement 100 may comprise a single device or may be distributed across several devices and apparatuses. Some specific examples will be discussed in relation to figures 1B, 1C and 1D.
  • the virtual display arrangement 100 is in some embodiments a VR device.
  • the virtual display arrangement 100 is in some embodiments an augmented reality device.
  • the virtual display arrangement 100 comprises or is operably connected to a controller 101 and a memory 102.
  • the controller 101 is configured to control the overall operation of the virtual display arrangement 100.
  • the controller 101 is a graphics controller.
  • the controller 101 is a general purpose controller.
  • the controller 101 is a combination of a graphics controller and a general purpose controller.
  • a controller may also be implemented using, for example, Field-Programmable Gate Array (FPGA) circuits, ASICs, GPUs, etc., in addition or as an alternative. For the purpose of this application, all such possibilities and alternatives will be referred to simply as the controller 101.
  • the memory 102 is configured to store graphics data and computer-readable instructions that when loaded into the controller 101 indicates how the virtual display arrangement 100 is to be controlled.
  • the memory 102 may comprise several memory units or devices, but they will be perceived as being part of the same overall memory 102. There may be one memory unit for the display arrangement storing graphics data, one memory unit for the imaging device storing settings, one memory unit for the communications interface (see below) storing settings, and so on. As a skilled person would understand, there are many possibilities of how to select where data should be stored, and a general memory 102 for the virtual display arrangement 100 is therefore seen to comprise any and all such memory units for the purpose of this application.
  • the memory 102 may comprise non-volatile memory circuits, such as EEPROM memory circuits, and volatile memory circuits, such as RAM memory circuits. All such alternatives will be referred to simply as the memory 102.
  • the virtual display arrangement 100 also comprises an image capturing device 106 (such as a camera or image sensor) capable of capturing an image or series of images (video) through receiving light (for example visual, ultraviolet or infrared to mention a few examples), possibly in cooperation with the controller 101.
  • the virtual display arrangement 100 also comprises an imaging device capable of receiving data representing an image or series of images possibly in cooperation with the controller 101, such as a streaming device.
  • the imaging device 106, possibly in combination with the controller 101, is thus configured to receive an image or series of images and detect an object (indicated RLO (Real Life Object) in figure 1B) therein.
  • the imaging device 106 may be comprised in the virtual display arrangement 100 by being housed in a same housing as the virtual display arrangement, or by being operably connected to it, by a wired connection or wirelessly.
  • the virtual display arrangement 100 is also connected to or comprises a display arrangement 105.
  • Figure 1B shows a schematic view of a virtual display arrangement 100 being a viewing device 100 according to some embodiments of the present invention.
  • the viewing device 100 is a smartphone or a tablet computer, being examples of Virtual See Through (VST) devices.
  • the viewing device further comprises a (physical) display arrangement 105, which may be a touch display, and the imaging device 106 may be a camera of the smartphone or tablet computer.
  • even if the virtual display arrangement comprises a camera, it may still, as an alternative or additional feature, receive the image or series of images from a remote imaging device, or an image storage.
  • Such embodiments apply to all embodiments discussed in relation to figures 1A to ID.
  • the controller 101 is configured to receive an image from the camera 106.
  • the camera 106 is arranged on a backside (opposite side of the display 105, as is indicated by the dotted contour of the cameras 106) of the virtual display arrangement 100 for enabling real life objects (indicated RLO in figure 1B) behind the virtual display arrangement 100 to be captured and shown to a user (as a displayed RLO, DRLO, as indicated by the dotted lines from the RLO, through the camera, to the DRLO on the display 105) on the display 105 along with any virtual content to be displayed.
  • the displayed virtual content may be information and/or graphics indicating and/or giving information.
  • the viewing device 100 being a smartphone may be carried in a head mount, whereby the viewing device effectively becomes or operates as a head-mounted virtual see-through device as will be discussed in relation to figure 1D.
  • Figure 1C shows a schematic view of a virtual display arrangement being an optical see-through (OST) viewing device 100 according to some embodiments of the present invention, where a user looks in through one end and sees the real-life objects (RLO) in the line of sight (LOS) at the other end of the viewing device 100.
  • the viewing device 100 is a head-mounted viewing device 100 to be worn by a user (not shown explicitly in figure 1C) for looking through the viewing device 100.
  • the viewing device 100 is arranged as glasses, or other eye wear including goggles, to be worn by a user.
  • the viewing device 100 is in some embodiments arranged to be hand-held, whereby a user can hold up the viewing device 100 to look through it.
  • the viewing device 100 is in some embodiments arranged to be mounted on for example a tripod, whereby a user can mount the viewing device 100 in a convenient arrangement for looking through it.
  • the viewing device 100 may be mounted on a dashboard or in a side-window of a car or other vehicle.
  • the viewing device 100 comprises an at least semi-transparent display arrangement 105 for presenting virtual content VC to a viewer, whereby virtual content VC may be displayed to supplement the real-life view being viewed in line of sight.
  • Figure 1D shows a schematic view of a virtual display arrangement being a virtual see-through (VST) viewing device 100, where the image capturing device 106 captures (or receives) images of objects RLO from for example behind the device and displays them on a non-transparent display 105 where virtual content VC is mixed with virtual representations of real-life objects VRLO.
  • the virtual display arrangement 100 may be a combination of a smartphone 100 and a head mount.
  • the virtual display arrangement 100 may further comprise a communication interface 103.
  • the communication interface may be wired and/or wireless.
  • the communication interface may comprise several interfaces.
  • the communication interface comprises a USB (Universal Serial Bus) interface. In some embodiments the communication interface comprises a HDMI (High Definition Multimedia Interface) interface. In some embodiments the communication interface comprises a Display Port interface. In some embodiments the communication interface comprises an Ethernet interface. In some embodiments the communication interface comprises a MIPI (Mobile Industry Processor Interface) interface. In some embodiments the communication interface comprises an analog interface, a CAN (Controller Area Network) bus interface, an I2C (Inter-Integrated Circuit) interface, or other interface.
  • the communication interface comprises a radio frequency (RF) communications interface.
  • the communication interface comprises a Bluetooth™ interface, a WiFi™ interface, a ZigBee™ interface, an RFID™ (Radio Frequency IDentifier) interface, a Wireless Display (WiDi) interface, a Miracast interface, and/or other RF interface commonly used for short range RF communication.
  • the communication interface comprises a cellular communications interface such as a fifth generation (5G) cellular communication interface, an LTE (Long Term Evolution) interface, a GSM (Global System for Mobile communication) interface and/or other interface commonly used for cellular communication.
  • the communications interface is configured to communicate using the UPnP (Universal Plug and Play) protocol.
  • the communications interface is configured to communicate using the DLNA (Digital Living Network Alliance) protocol.
  • the communications interface 103 is configured to enable communication through more than one of the example technologies given above.
  • for example, a wired interface such as MIPI could be used for establishing an interface between the display arrangement, the controller and the user interface, while a wireless interface, for example WiFi™, could be used to enable communication between the virtual display arrangement 100 and an external host device (not shown).
  • the communications interface 103 may be configured to enable the virtual display arrangement 100 to communicate with other devices, such as other virtual display arrangements 100 and/or smartphones, Internet tablets, computer tablets or other computers, media devices, such as television sets, gaming consoles, video viewers or projectors (not shown), or image capturing devices for receiving the image data streams.
  • the communication interface enables the virtual display arrangement 100 to communicate with a server (referenced 210 in figure 2A).
  • a user interface 104 is in some embodiments comprised in the virtual display arrangement 100 (only shown in figures 1B, 1C and 1D). Additionally, or alternatively, (at least a part of) the user interface 104 may be comprised remote to the virtual display arrangement 100, in a separate device connected through the communication interface 103; the user interface (or at least a part of it) is then not a physical means in the virtual display arrangement 100, but implemented by receiving user input through a remote device through the communication interface 103.
  • Examples of such a remote device are a game controller, a mobile phone handset, a tablet computer or a computer.
  • Figure 2A shows a schematic view of a virtual display system 200 according to the teachings herein.
  • the virtual display system 200 comprises a first virtual display arrangement 100:1 according to any of the embodiments disclosed above and herein and a second virtual display arrangement 100:2 also according to any of the embodiments disclosed above and herein.
  • the first and second virtual display arrangements 100:1, 100:2 are exemplified as head mounted viewing devices 100:1, 100:2, but it should be noted that any virtual display arrangement may be used for either the first or second virtual display arrangement 100:1, 100:2.
  • the first viewing device 100:1 is arranged to operate or be used in a first area, represented by a first virtual reality environment 240:1, and the second viewing device 100:2 is arranged to operate or be used in a second area, represented by a second virtual reality environment 240:2.
  • the real-life objects may be shown to a user as real-life objects or as virtual objects, depending on the type of viewing device 100 used and the settings of the viewing device 100.
  • the first viewing device 100:1 and the second viewing device 100:2 are arranged to communicate through a server 210.
  • the virtual environment server 210 is a stand-alone arrangement, possibly part of a cloud service, comprising a controller 211, a memory 212 and a communication interface 213.
  • the virtual environment server 210 is arranged to be executed by one or both of the viewing devices 100 wherein the controller 211 is the controller 101 of the viewing device 100, the memory 212 is the memory 102 of the viewing device 100, and the communication interface 213 is the communication interface 103 of the viewing device 100. In some such embodiments the work to be performed is distributed between the two viewing devices (or possibly only to one viewing device).
  • Figure 2B shows a schematic view of a virtual display system 200 according to the teachings herein, such as the virtual display system 200 of figure 2A.
  • the lower half of figure 2B shows real areas to be shared and the upper half of figure 2B shows shared virtual areas of the real areas.
  • In figure 2B it is shown how one or more surfaces 250 may be shared between the two virtual reality environments 240. In the example of figure 2B it is also shown how some surfaces may be objects 260 that are shared. In the example of figure 2B there are two shared surfaces 250A and 250B, where the first reality environment 240:1 shows a first shared surface 250:1A and a second shared surface 250:1B, and where the second reality environment 240:2 shows a corresponding first shared surface 250:2A and a corresponding second shared surface 250:2B.
  • the first shared surface 250:1A of the first shared environment 240:1 is a work desk and the first shared surface 250:2A of the second shared environment 240:2 is a conference desk, the two surfaces thus being of different dimensions, which is also indicated in figure 2B.
  • the second shared surface 250:1B of the first shared environment 240:1 is a whiteboard and the second shared surface 250:2B of the second shared environment 240:2 is also a whiteboard; however, the two surfaces are also of different dimensions, which is also indicated in figure 2B. Even if the shared surfaces are shown as being of different sizes, they may be of the same size.
  • the first reality environment 240:1 also shows a shared object 260:1 being a structural object and the second reality environment 240:2 also shows a corresponding shared object 260:2 also being a structural object.
  • the shared object 260:1 of the first shared environment 240:1 is a bed and the shared object 260:2 of the second shared environment 240:2 is a sofa, the two objects thus being of different dimensions, which is also indicated in figure 2B.
  • the shared objects may be used to align locations where users are for example sitting.
  • any shared object 260 may be a shared surface 250 and vice-versa. In the following, reference will only be given to the shared surfaces 250. It should also be noted that even if two shared surfaces 250 and one shared object 260 are shown, there may be any number of shared surfaces 250 and/or shared objects 260.
  • the shared surfaces 250 may be arranged at different locations in the corresponding environment 240, but are mapped to one another by being shared.
  • the mapping may be performed by the virtual environment server 210.
  • Figure 2C shows a schematic view of a virtual display system according to the teachings herein, such as the virtual display system 200 of figure 2B.
  • In figure 2C only one shared environment 240 is shown along with the shared surfaces 250 and shared object(s) 260.
  • Figure 2C shows an environment coordinate system 245 for the shared environment 240 and also surface coordinate systems 255 for the shared surfaces 250. Even if only the coordinate systems are shown for the first shared environment 240:1, it would be understood that the same applies for the second shared environment 240:2.
  • even if the shared surfaces are at different locations in the coordinate systems for the shared environments 240:1, 240:2, they are mapped to one another so that their coordinate systems overlap. That is, the coordinate system 255:1A for the first shared surface 250:1A in the first shared environment will be mapped to overlap with the coordinate system of the corresponding first shared surface 250:2A of the second shared environment 240:2.
  • a mapping of a first coordinate system to a second coordinate system may be done in many different and well-known manners, including alignment of origins and scaling of coordinates, and will not be discussed in further detail herein.
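As one generic example of such a well-known mapping (not a construction taken from the publication), origin alignment and scaling can be expressed as a single affine transform in homogeneous coordinates; the sketch below also admits rotation when the basis vectors of the two surface coordinate systems differ in direction:

```python
import numpy as np

def surface_mapping(origin_src, basis_src, origin_dst, basis_dst):
    """Homogeneous 3x3 transform mapping points in one shared surface's
    coordinate system onto the corresponding surface in the other
    environment: aligns the origins and scales (and, if the bases differ
    in direction, rotates) the coordinates. Arguments are 2-vectors
    (origins) and 2x2 matrices (bases); all names are hypothetical."""
    M = basis_dst @ np.linalg.inv(basis_src)
    A = np.eye(3)
    A[:2, :2] = M
    A[:2, 2] = origin_dst - M @ origin_src
    return A

# usage: map surface point (1.5, 2.5) into the other environment, where
# the destination surface is twice the size of the source surface
A = surface_mapping(np.array([1.0, 2.0]), np.eye(2),
                    np.array([5.0, 0.0]), 2.0 * np.eye(2))
x2, y2, _ = A @ np.array([1.5, 2.5, 1.0])   # -> (6.0, 1.0)
```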
  • an object 220/230, real or virtual, will thus have coordinates for its position in the area in the environment coordinate system 245 and coordinates for its position in the surface coordinate system 255.
  • a shared surface 250 may be located at different coordinates in the environment coordinate systems 245. They will however, and as discussed above, have corresponding surface coordinate systems so that an object appearing at a position in a shared surface 250:1 in the first reality 240:1 appears at the same (or at least corresponding) position in the corresponding shared surface 250:2 in the second reality 240:2.
  • Figure 2D shows a schematic view of a virtual display system 200 according to the teachings herein, such as the virtual display system 200 of figure 2B, where objects 230, 220 are shown in the shared surfaces 250.
  • In figure 2D there is a real-life object 220:1A RLO and a virtual object 230:1 in the first shared surface 250:1A in the first environment 240:1.
  • There is also a real-life object 220:2A RLO and a virtual object 230:2 in the first shared surface 250:2A in the second environment 240:2.
  • the virtual object 230:1 in the first shared surface 250:1A in the first environment 240:1 is a virtual representation of the real-life object 220:2A in the first shared surface 250:2A in the second environment 240:2.
  • the virtual object 230:2 (220:1AV) in the first shared surface 250:2A in the second environment 240:2 is a virtual representation of the real-life object 220:1A in the first shared surface 250:1A in the first environment 240:1.
  • the virtual display system is thus configured to mirror the real-life object 220 with a virtual object 230 in the corresponding surface to be seen (and possibly used) by the other users.
  • any shared real-life object will thus have a virtual counterpart, for example a digital twin.
  • the second shared surfaces 250:1B, 250:2B are shown to have one real-life object each that is mirrored to the corresponding surface, but also a virtual object 230:1, 230:2 that does not have a real-life object counterpart. This allows for also sharing virtual objects.
  • any number of objects, virtual or real-life, may be in a shared surface, and the example of one virtual and one real-life object in each first shared surface 250:1A, 250:2A is only one example.
  • the sizes of the shared surfaces may be different between the two environments 240:1, 240:2.
  • the objects 220, 230 in the shared surfaces 250 are shown to be at a same or corresponding position in the shared surfaces 250, and will thus appear to each user to be in a same position, even if the positions of the shared surfaces are completely different.
  • the virtual environment server 210 may be in a standalone arrangement or in a virtual display arrangement 100, and in some embodiments the server, being a virtual environment server, is thus configured to receive an indication of a first shared area 250:1A, 250:2A, 260 (such as a shared surface 250 or shared object 260) in a first area represented by a first virtual environment 240:1 as is discussed in the above.
  • the indication of the first shared area includes one or more of a location in the environment 240 and dimensions of the first shared area.
  • the indication is received through user input marking an area to be shared. In some embodiments, the indication is received from the memory having stored coordinates for shared areas. In some embodiments, the indication is received through image analysis, where areas to be shared may have been marked using specific objects and/or colors.
  • the location in the area is according to an environment coordinate system associated with the environment 240, and the dimensions define a first shared coordinate system 255:1A, 255:2A associated with the first shared area 250:1A, 250:2A, 260.
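One possible shape for such an indication, with purely hypothetical field names chosen for illustration (the publication does not prescribe a message format):

```python
from dataclasses import dataclass

@dataclass
class SharedAreaIndication:
    """Hypothetical payload for an 'indication of a shared area'.

    location is expressed in the environment coordinate system (245) of
    the reporting environment; the dimensions define the shared
    coordinate system (255) that is mapped between environments."""
    area_id: str         # e.g. "250:1A"
    environment_id: str  # e.g. "240:1"
    location: tuple      # (x, y) in the environment coordinate system
    dimensions: tuple    # (width, height) of the flat surface
```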
  • the virtual environment server 210 is also configured to receive an indication of the first shared area (250:1A, 250:2A, 260) in a second area represented by a second virtual environment (240:2).
  • the second virtual environment 240:2 is also associated with an environment coordinate system 245:2 and the first shared area 250:1A, 250:2A, 260 in the second environment 240:2 is associated with a surface coordinate system 255.
  • the surface coordinate systems 255 of the shared areas 250,260 are mapped to one another, regardless of position in the environment 240 and regardless of dimensions.
  • the virtual environment server 210 is also configured to receive an indication of a second shared area 250:1B, 250:2B, 260 in the first virtual environment 240:1 and to receive an indication of the second shared area 250:1B, 250:2B, 260 in the second virtual environment 240:2.
  • the second shared area is, similarly to the first shared area, associated with a second surface coordinate system 255:2.
  • the virtual environment server 210 is also configured to receive an indication of a location of an object 220:1, 230:1 in the first virtual environment 240:1, and determine a corresponding location in the second virtual environment 240:2 for a corresponding object 230:2. As noted above, regardless of whether the object is virtual or real life, it will be displayed as a virtual object in the other system.
  • the virtual environment server 210 is configured to determine the corresponding location by determining whether the location for the object 220:1, 230:1 in the first virtual environment 240:1 indicates a position in the first shared area 250:1A, 250:2A, and if so translating the location of the object 220:1, 230:1 in the first virtual environment 240:1 into a location in the second virtual environment 240:2 representing a same location in the first shared area.
  • this allows for an object to be displayed at a same position in a shared surface and thus be experienced similarly by the two (or more) users, even if the shared surfaces are at different positions and/or of different sizes.
  • the first viewing device 100:1 in the first area is representative of a first user, and the second viewing device 100:2 in the second area is representative of a second user.
  • the server is thus also configured to determine a location of an object, such as a user or the avatar, relative to one of the shared surfaces even when the object is not in a shared surface.
  • the virtual environment server 210 is thus configured to translate the location of the object 220:1, 230:1, 100:1, 100:2 relative to the first shared area 250:1A and/or the second shared area 250:1B, 260 in the first virtual environment 240:1 into a same position relative to the first shared area 250:2A and/or the second shared area 250:2B, 260 in the second virtual environment 240:2.
  • the positions of the avatars are at corresponding positions to the positions of the viewing devices 100:1, 100:2 with regards to the first shared surface 250:1A, 250:2A.
  • the positions may be determined based on the coordinate systems, and the virtual environment server 210 is thus, in some embodiments, further configured to receive an indication of the first shared coordinate system 255:1A for the first shared area 250:1A adapted to the first virtual environment 240:1 and to determine a location in the first shared area 250:1A in the first virtual environment 240:1 as coordinates in the first shared coordinate system 255:1A.
  • the controller is further configured to translate the location of the object 220:1, 230:1 in the first virtual environment 240:1 into a location in the second virtual environment 240:2 representing a same location in the first shared area by assigning the same location the same coordinates in the first shared coordinate system 255:1A, 255:2A in the first and the second virtual environment 240:1, 240:2.
  • the virtual environment server 210 is thus in some embodiments further configured to receive an indication of a first environment coordinate system 245:1 for the first virtual environment 240:1 and determine a location of the first shared area 250:1A in the first virtual environment 240:1 as coordinates in the first environment coordinate system 245:1, wherein the server is further configured to determine a location of an object in the first shared area 250:1A as coordinates in the first environment coordinate system 245:1 based on the coordinates for the location of the object extending from the coordinates of the location of the first shared area 250:1A in the first virtual environment 240:1.
  • In figure 2D it is indicated how the real-life object 220:2A is moved in the first shared surface 250:2A of the second environment 240:2 and how a corresponding movement is shown of the corresponding virtual object 230:1 (220:2AV) in the first shared surface 250:1A of the first environment 240:1.
  • Figure 2E shows a schematic view of a virtual display system according to the teachings herein, such as the virtual display system 200 of figure 2B, where it is shown how a user, or rather the viewing device 100 of a user, is moved and how the representation of the viewing device 100, the avatar 100R, is also moved.
  • as the viewing device 100 is not in the shared surface 250, its location is determined relative to the shared surface, and the movement, or rather the resulting position, is also shown relative to the shared surface(s) 250.
  • the viewing device 100:1 in the first environment 240:1 is shown as moved from the first shared surface 250:1A to the second shared surface 250:1B (the user walking from the desk to the whiteboard) and the avatar performs basically the same movement, but in the second environment 240:2, thus being from the first shared surface 250:2A to the second shared surface 250:2B in the second environment 240:2.
  • the virtual environment server 210 is thus in some embodiments further configured to receive an indication of a movement of the object 220:1, 230:1 in the first virtual environment 240:1 and to translate the movement relative to the first shared area 250:1A and the second shared area 250:1B, 260 in the first virtual environment 240:1 into a relative movement between the first shared area 250:2A and the second shared area 250:2B, 260 in the second virtual environment 240:2.
  • the indication of the movement comprises start coordinates in the first environment coordinate system 245:1 for a start location for the object 220:1, 230:1, 100:1 at (next to or inside) the first shared area 250:1A and end coordinates in the first environment coordinate system 245:1 for an end location for the object 220:1, 230:1, 100R at the second shared area 250:1B in the first virtual environment 240:1.
  • the server 210 is further configured to translate the end coordinates in the first environment coordinate system 245:1 into end coordinates in a second environment coordinate system 245:2 based on the location of the second shared surface (250:1B) in the first environment coordinate system 245:1 so that the distance (i.e. the movement vector) from the end position to the location of the first shared surface is the same in the first and the second environment coordinate system.
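A sketch of this translation, assuming, as a simplification, that the "location" of a shared surface is a single reference point in each environment coordinate system (translate_end_coords is a hypothetical name):

```python
def translate_end_coords(end_1, surface_loc_1, surface_loc_2):
    """Translate end coordinates from the first environment coordinate
    system (245:1) into the second (245:2) so that the movement vector
    between the end position and the shared surface is the same in both.
    surface_loc_1/surface_loc_2: the same shared surface's location in
    the first and second environment coordinate system respectively."""
    vx = end_1[0] - surface_loc_1[0]   # movement vector relative to the
    vy = end_1[1] - surface_loc_1[1]   # surface in the first environment
    return (surface_loc_2[0] + vx, surface_loc_2[1] + vy)
```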
  • Figure 2F shows a schematic view of a virtual display system according to the teachings herein, such as the virtual display system 200 of figure 2B, where it is shown how a real-life object 220:1A RLO is moved from the first shared surface 250:1A in the first environment 240:1 to the second shared surface 250:1B in the first environment 240:1. As is shown, the movement is the same as regards the shared surfaces 250, even if not the same as regards the environments 240.
  • Figure 2G shows a schematic view of a virtual display system 200 according to the teachings herein, such as the virtual display system 200 of figure 2D, where it is shown how a virtual object 230 is moved from the second shared surface 250:1B in the first environment 240:1 to the first shared surface 250:1A in the first environment 240:1. As is shown, the movement is the same as regards the shared surfaces 250, even if not the same as regards the environments 240.
  • the server is thus configured to translate a movement both with regards to the environment coordinate systems and to the surface coordinate systems, wherein a movement inside a shared surface is seen as absolute and performed by a mapping of the shared surface coordinate systems 255:1, 255:2, and wherein a movement between shared surfaces 250 is seen as relative.
  • the server is thus configured to translate the movement relative to the first shared area 250 and the second shared area 250, 260 in the first environment coordinate system 245:1 and the first shared area 250 and the second shared area 250, 260 in the second environment coordinate system 245:2 for the first and second virtual environment respectively, wherein a movement from the second shared area to the first shared area in the first virtual environment is translated as a movement from the second shared area to the first shared area in the second virtual environment, and a movement from inside the first shared area to inside the first shared area in the first virtual environment is translated as a same movement inside the first shared area in the second virtual environment.
  • the server 210 is thus further configured to translate the movement relative to the locations of the first shared area 250 and the second shared area 250, 260 in the first environment coordinate system 245:1 and the locations of the first shared area 250 and the second shared area 250, 260 in the second environment coordinate system 245:2 for the first and second virtual environment respectively, wherein the movement from the first shared area to the second shared area in the first environment coordinate system 245:1 of the first virtual environment is translated as the movement from the first shared area to the second shared area in the second environment coordinate system 245:2 of the second virtual environment, and the movement from the start location in the first shared area to the end location in the first shared area in the first virtual environment is translated as a same movement from a start position in the first shared area to an end position in the first shared area in the second virtual environment, wherein the start locations and the end locations in the first shared area are the same in the first shared coordinate system irrespective of virtual environment.
  • a movement may correspond to both an absolute movement and a relative movement, and the movement may be portioned into partial movements.
  • the server 210 is thus configured to portion the movement into a plurality of partial movements, and to translate the movement by translating each partial movement.
  • the server 210 is in some embodiments configured to translate the movement by determining if the movement is from one of the first shared area and the second shared area to the other one of the first shared area and the second shared area, and if so determine a direction in the second environment coordinate system 245:2 of the movement and translate any partial movement in the first shared area as being in the determined direction.
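A sketch of this portioning, reusing the hypothetical translate_location from the earlier sketch: the movement is sampled into partial movements and each sample point is translated on its own, so that parts inside a shared area map absolutely while parts between the areas map relatively, and the direction of each translated partial movement in the second environment follows from consecutive translated samples.

```python
def translate_movement(start_1, end_1, translate_point, n_parts=16):
    """Portion a straight-line movement in the first environment into
    n_parts partial movements and translate each sample point with
    translate_point (e.g. the translate_location sketch above).
    Hypothetical helper; returns the translated sample points, where
    consecutive pairs form the partial movements in environment 2."""
    (x0, y0), (x1, y1) = start_1, end_1
    samples = []
    for i in range(n_parts + 1):
        t = i / n_parts
        samples.append(translate_point((x0 + t * (x1 - x0),
                                        y0 + t * (y1 - y0))))
    return samples
```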
  • a real-life object 220 is shown as a virtual object in the other environments 240. This enables the user having the real-life object 220 to use it when for example demonstrating something to the other users.
  • Figure 2H and Figure 2I each show a schematic view of a virtual display system 200 according to the teachings herein, such as the virtual display system 200 of figure 2D, in which views it is shown how the other users may be enabled to utilize the same objects, even though not having access to the real-life object, to for example continue a demonstration of something.
  • In figure 2H it is shown how the virtual object 230:1 (220:2BV) in the second shared surface 250:1B in the first environment 240:1, representing the real-life object 220:2B in the second shared surface 250:2B in the second environment 240:2, is moved. Consequently, figure 2I shows how the real-life object 220:2B in the second shared surface 250:2B in the second environment 240:2 is marked as having been moved, which is indicated by the dashed square, and how a new virtual object 230:2 (220:2BV) is generated to be used instead of the real-life object 220:2B.
  • the marking may be made through graphics, overlaying with content, or through color markings. In some embodiments, the marking is made by refraining from showing the real-life object.
  • the server 210 is thus configured to determine that the object 220 being moved is a real life object 220, and in response thereto generate a virtual representation 230:2 of the object 220 to be used in the virtual environment. In some embodiments the server 210 is further configured to generate a virtual marking of the real life object 220 to be used in the first virtual environment to indicate that the object 220 is rendered passive.
  • the server 210 is further configured to refrain from displaying the real life object.
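A minimal sketch of this bookkeeping, with hypothetical names throughout (the publication does not prescribe any particular data model):

```python
from dataclasses import dataclass

@dataclass
class SharedObject:
    """Hypothetical server-side record for an object in a shared area."""
    object_id: str         # e.g. "220:2B"
    is_real_life: bool
    passive: bool = False  # True once its virtual twin takes over

def on_moved_by_remote_user(obj, registry):
    """If a remote user moves an object that is a real-life object in its
    home environment, generate a virtual representation to use from then
    on and mark the real-life object as passive (e.g. by a graphical
    marking, or by refraining from showing it)."""
    if obj.is_real_life and not obj.passive:
        obj.passive = True
        twin_id = obj.object_id + "V"     # e.g. "220:2B" -> "220:2BV"
        registry[twin_id] = SharedObject(twin_id, is_real_life=False)
    return registry
```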
  • Figure 3 shows a flowchart of a general method according to some embodiments of the teachings herein.
  • the method utilizes a virtual display arrangement 100 as taught herein. Details on how the method is to be performed have been given above with reference to figures 1A, 1B, 1C, 1D, 2A, 2B, 2C, 2D, 2E, 2F, 2G, 2H and 2I.
  • the virtual display arrangement 100 is configured for receiving 310 an indication of a first shared area (250:1A, 250:2A, 260) in a first virtual environment (240:1), wherein the first shared area (250:1A, 250:2A, 260) is associated with a first shared coordinate system (255:1A, 255:2A) and receiving 315 an indication of the first shared area (250:1A, 250:2A, 260) in a second virtual environment (240:2).
  • the method also comprises receiving 320 an indication of a second shared area (250:1B, 250:2B, 260) in the first virtual environment (240:1) and receiving 325 an indication of the second shared area (250:1B, 250:2B, 260) in the second virtual environment (240:2).
  • the method further comprises receiving 330 an indication of a location of an object (220:1, 230:1) in the first virtual environment (240:1); and determining 335 a corresponding location in the second virtual environment (240:2) for a corresponding object (230:2), wherein the method further comprises determining the corresponding location by determining 340 whether the location for the object (220:1, 230:1) in the first virtual environment (240:1) indicates a position in the first shared area (250:1A, 250:2A), and if so translating 345 the location of the object (220:1, 230:1) in the first virtual environment (240:1) into a location in the second virtual environment (240:2) representing a same location in the first shared area, and if not translating 350 the location of the object (220:1, 230:1) relative the first shared area (250:1A) and the second shared area (250:1B, 260) in the first virtual environment (240:1) into a same position relative the first shared area (250:2A) and the second shared area (250:2B, 260) in the second virtual environment (240:2).
  • the software component arrangement 400 comprises a software component for receiving 410 an indication of a first shared area 250:1A, 250:2A, 260 in a first virtual environment 240:1, wherein the first shared area 250:1A, 250:2A, 260 is associated with a first shared coordinate system 255:1A, 255:2A and a software component for receiving 415 an indication of the first shared area 250:1A, 250:2A, 260 in a second virtual environment 240:2.
  • the software component arrangement 400 comprises a software component for receiving 420 an indication of a second shared area 250:1B, 250:2B, 260 in the first virtual environment 240:1 and a software component for receiving 425 an indication of the second shared area 250:1B, 250:2B, 260 in the second virtual environment 240:2.
  • the software component arrangement 400 further comprises a software component for receiving 430 an indication of a location of an object 220:1, 230:1 in the first virtual environment 240:1; and a software component for determining 435 a corresponding location in the second virtual environment 240:2 for a corresponding object 230:2, wherein the software component for determining the corresponding location comprises a software component for determining 440 whether the location for the object 220:1, 230:1 in the first virtual environment 240:1 indicates a position in the first shared area 250:1A, 250:2A, and a software component for translating 445 the location of the object 220:1, 230:1 in the first virtual environment 240:1 into a location in the second virtual environment 240:2 representing a same location in the first shared area if so, and a software component for translating 450 the location of the object 220:1, 230:1 relative the first shared area 250:1A and the second shared area 250:1B, 260 in the first virtual environment 240:1 into a same position relative the first shared area 250:2A and the second shared area 250:2B, 260 in the second virtual environment 240:2 if not so.
  • the software component arrangement 400 also comprises a software component 460 for implementing or executing further functionality as discussed herein.
  • Figure 5 shows a component view for an arrangement 500 comprising circuitry for providing a virtual display arrangement 100, 500 according to some embodiments of the teachings herein.
  • the arrangement comprising circuitry is adapted to be used in a virtual display arrangement 100 as taught herein.
  • the arrangement comprising circuitry 500 of figure 5 comprises a circuitry for receiving 510 an indication of a first shared area 250:1A, 250:2A, 260 in a first virtual environment 240:1, wherein the first shared area 250:1A, 250:2A, 260 is associated with a first shared coordinate system 255:1A, 255:2A and a circuitry for receiving 515 an indication of the first shared area 250:1A, 250:2A, 260 in a second virtual environment 240:2.
  • the arrangement comprising circuitry 500 comprises a circuitry for receiving 520 an indication of a second shared area 250:1B, 250:2B, 260 in the first virtual environment 240:1 and a circuitry for receiving 525 an indication of the second shared area 250:1B, 250:2B, 260 in the second virtual environment 240:2.
  • the arrangement comprising circuitry 500 further comprises a circuitry for receiving 530 an indication of a location of an object 220:1, 230:1 in the first virtual environment 240:1; and a circuitry for determining 535 a corresponding location in the second virtual environment 240:2 for a corresponding object 230:2, wherein the circuitry for determining the corresponding location comprises a circuitry for determining 540 whether the location for the object 220:1, 230:1 in the first virtual environment 240:1 indicates a position in the first shared area 250:1A, 250:2A, and a circuitry for translating 545 the location of the object 220:1, 230:1 in the first virtual environment 240:1 into a location in the second virtual environment 240:2 representing a same location in the first shared area if so, and a circuitry for translating 550 the location of the object 220:1, 230:1 relative the first shared area 250:1A and the second shared area 250:1B, 260 in the first virtual environment 240:1 into a same position relative the first shared area 250:2A and the second shared area 250:2B, 260 in the second virtual environment 240:2 if not so.
  • the arrangement comprising circuitry 500 also comprises a circuitry 560 for implementing or executing other functionality as discussed herein.
  • Figure 6 shows a schematic view of a computer-readable medium 120 carrying computer instructions 121 that when loaded into and executed by a controller of a server 210 enables the server 210 to implement the present invention.
  • the computer-readable medium 120 may be tangible such as a hard drive or a flash memory, for example a USB memory stick or a cloud server.
  • the computer-readable medium 120 may be intangible such as a signal carrying the computer instructions enabling the computer instructions to be downloaded through a network connection, such as an internet connection.
  • a computer-readable medium 120 is shown as being a computer disc 120 carrying computer-readable computer instructions 121, being inserted in a computer disc reader 122.
  • the computer disc reader 122 may be part of a cloud server 123 - or other server - or the computer disc reader may be connected to a cloud server 123 - or other server.
  • the cloud server 123 may be part of the internet or at least connected to the internet.
  • the cloud server 123 may alternatively be connected through a proprietary or dedicated connection.
  • the computer instructions may be stored at a remote server 123 and downloaded to the memory 102 of the virtual display arrangement 100 for being executed by the controller 101.
  • the computer disc reader 122 may also or alternatively be connected to (or possibly inserted into) a server 210 for transferring the computer-readable computer instructions 121 to a controller of the server (presumably via a memory of the server 210).
  • Figure 6 shows both the situation when a server 210 receives the computer-readable computer instructions 121 via a server connection and the situation when another server 210 receives the computer-readable computer instructions 121 through a wired interface. This enables computer-readable computer instructions 121 to be downloaded into a server 210, thereby enabling the server 210 to operate according to and implement the invention as disclosed herein.

Abstract

1. A virtual environment server (210) comprising a controller (211), wherein the controller (211) is configured to: receive an indication of a first shared area (250:1A, 250:2A, 260) in a first virtual environment (240:1), wherein the first shared area (250:1A, 250:2A, 260) is associated with a first shared coordinate system (255:1A, 255:2A); receive an indication of the first shared area (250:1A, 250:2A, 260) in a second virtual environment (240:2); receive an indication of a second shared area (250:1B, 250:2B, 260) in the first virtual environment (240:1); receive an indication of the second shared area (250:1B, 250:2B, 260) in the second virtual environment (240:2); receive an indication of a location of an object (220:1, 230:1) in the first virtual environment (240:1); and determine a corresponding location in the second virtual environment (240:2) for a corresponding object (230:2), wherein the controller is further configured to determine the corresponding location by: determining whether the location for the object (220:1, 230:1) in the first virtual environment (240:1) indicates a position in the first shared area (250:1A, 250:2A), and if so translating the location of the object (220:1, 230:1) in the first virtual environment (240:1) into a location in the second virtual environment (240:2) representing a same location in the first shared area, and if not translating the location of the object (220:1, 230:1) relative the first shared area (250:1A) and the second shared area (250:1B, 260) in the first virtual environment (240:1) into a same position relative the first shared area (250:2A) and the second shared area (250:2B, 260) in the second virtual environment (240:2).

Description

A COMPUTER SOFTWARE MODULE ARRANGEMENT, A CIRCUITRY ARRANGEMENT, AN ARRANGEMENT AND A METHOD FOR PROVIDING A VIRTUAL DISPLAY FOR SIMULTANEOUS DISPLAY OF
REPRESENTATIONS OF REAL LIFE OBJECTS IN SHARED SURFACES
TECHNICAL FIELD
The present invention relates to an arrangement, an arrangement comprising computer software modules, an arrangement comprising circuits, a device and a method for providing a virtual display for simultaneous display of representations of real life objects in shared surfaces at different physical locations.
BACKGROUND
Virtual reality (VR)/Extended reality (XR) collaboration is a way of engaging in social activities such as gaming or having meetings with friends and colleagues, etc. Today's virtual meeting places include competitive laser tagging games, first person shooters, virtual paintball areas as well as office settings on remote islands.
Existing solutions utilize an unoccupied area of your home for VR collaboration. A guardian zone defines the boundary between your free floor area and the rest of your furnished home. Accidentally stepping out of the zone risks personal injury or damage to the home.
Collaborating means teleporting or moving around in a common virtual reality, with objects like virtual whiteboards, virtual pens, virtual 3D objects in virtual surroundings.
The inventors have realized a problem that exists in contemporary virtual surroundings or environments, namely that of integration of the physical objects in each respective collaborator's physical space such as physical chairs, desks, whiteboards, sofas, beds, snacks and coffee mugs - everyday things that make prolonged collaboration possible - with the shared digital objects in the virtual collaboration session. Two or more users each have their respective collaboration zone. Without physically moving furniture, white boards or flip charts, the resulting area will be very much misaligned.
SUMMARY
An object of the present teachings is to overcome or at least reduce or mitigate the problems discussed in the background section. This is achieved by providing a possibility for allowing certain objects in all users' rooms to become collaborative surfaces - flat objects like desks, papers, walls and whiteboards, but also the top of dressers, side tables or other furniture with a flat surface. A user can make one of his physical objects shared with the other users.
A user can then map other users' objects to his own shared areas. For example, user B can align user A's table with his own. User A can place User B's whiteboard on his own window, since User A has no whiteboard to join it with. Or it could be that a single sheet of paper is aligned to User B's whiteboard. Surfaces can be remapped and realigned by the users themselves.
For these users to collaborate in VR, User A will need to place User B's and User C's furniture in his room. He chooses to place User B's whiteboard on his own window, he joins User B's sofa with his own bed, and puts User B's chair slightly to the left across his table. He joins User B's and C's tables with his own, and their tables adjust their rectangular forms and sizes to align with his own round table.
According to one aspect a virtual environment server is provided which comprises a controller, wherein the controller is configured to: receive an indication of a first shared area in a first virtual environment, wherein the first shared area is associated with a first shared coordinate system; receive an indication of the first shared area in a second virtual environment; receive an indication of a second shared area in the first virtual environment; receive an indication of the second shared area in the second virtual environment; receive an indication of a location of an object in the first virtual environment; and determine a corresponding location in the second virtual environment for a corresponding object, wherein the controller is further configured to determine the corresponding location by: determining whether the location for the object in the first virtual environment indicates a position in the first shared area, and if so translating the location of the object in the first virtual environment into a location in the second virtual environment representing a same location in the first shared area, and if not translating the location of the object relative the first shared area and the second shared area in the first virtual environment into a same position relative the first shared area and the second shared area in the second virtual environment.
The solution may be implemented as a software solution, a hardware solution or a mix of software and hardware components.
The solution discussed herein provides for the possibility of collaborating in XR while varying body position from sitting to standing and walking around, giving the body the freedom of movement, which is good for creative and immersive discussions. The solution discussed herein also provides for the possibility of moving collaborative surfaces from whiteboards to papers to tables, where using real pens or virtual pens gives the freedom of an effortless rearrangement of the workplace.
In some embodiments the controller is further configured to: receive an indication of the first shared coordinate system for the first shared area adapted to the first virtual environment; determine a location in the first shared area in the first virtual environment as coordinates in the first shared coordinate system; receive an indication of the first shared coordinate system for the first shared area adapted to the second virtual environment; and determine a location in the first shared area in the second virtual environment as coordinates in the first shared coordinate system, wherein the controller is further configured to translate the location of the object in the first virtual environment into a location in the second virtual environment representing a same location in the first shared area by assigning the same location the same coordinates in the first shared coordinate system in the first and the second virtual environment.
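For illustration only, the assignment of the same shared coordinates in both virtual environments may be sketched as follows; the two-dimensional points, the axis-aligned areas and all names are assumptions made for this example and are not taken from the disclosure.

```python
# Minimal sketch (assumption: 2-D points and axis-aligned shared areas;
# all names are illustrative, not taken from the disclosure).

def env_to_shared(p_env, area_origin_env):
    # Express an environment-coordinate point in the shared coordinate
    # system anchored at the shared area's origin in that environment.
    return (p_env[0] - area_origin_env[0], p_env[1] - area_origin_env[1])

def shared_to_env(p_shared, area_origin_env):
    # Express a shared-coordinate point in a given environment.
    return (p_shared[0] + area_origin_env[0], p_shared[1] + area_origin_env[1])

# The same location is assigned the same shared coordinates in both
# environments, although the shared area sits at different places:
area_in_env1 = (2.0, 1.0)   # first shared area in environment 1
area_in_env2 = (5.0, 3.0)   # same shared area in environment 2

object_in_env1 = (2.5, 1.5)
shared = env_to_shared(object_in_env1, area_in_env1)   # (0.5, 0.5)
object_in_env2 = shared_to_env(shared, area_in_env2)   # (5.5, 3.5)
```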
In some embodiments the controller is further configured to: receive an indication of a first environment coordinate system for the first virtual environment; and determine a location of the first shared area in the first virtual environment as coordinates in the first environment coordinate system, wherein the controller is further configured to determine a location of an object in the first shared area as coordinates in the first environment coordinate system based on the coordinates for the location of the object extending from the coordinates of the location of the first shared area in the first virtual environment.
In some embodiments the controller is further configured to: receive an indication of a movement of the object in the first virtual environment; and translate the movement relative the first shared area and the second shared area in the first virtual environment into a relative movement between the first shared area and the second shared area (250:2B, 260) in the second virtual environment. In some embodiments the indication of the movement comprises start coordinates in the first environment coordinate system (245:1) for a start location for the object at the first shared area and end coordinates in the first environment coordinate system (245:1) for an end location for the object at the second shared area (250:1B) in the first virtual environment, wherein the controller is further configured to: translate the end coordinates in the first environment coordinate system (245:1) into end coordinates in a second environment coordinate system (245:2) based on the location of the second shared surface (250:1B) in the first environment coordinate system (245:1) so that the movement from the end position to the location of the first shared surface is the same in the first and the second environment coordinate system.
In some embodiments the controller is further configured to: translate the movement relative the first shared area and the second shared area in the first environment coordinate system and the first shared area and the second shared area in the second environment coordinate system for the first and second virtual environment respectively, wherein a movement from the second shared area to the first shared area in the first virtual environment is translated as a movement from the second shared area to the first shared area in the second virtual environment, and a movement from inside the first shared area to inside the first shared area in the first virtual environment is translated as a same movement inside the first shared area in the second virtual environment.
In some embodiments the controller is further configured to: translate the movement relative the locations of the first shared area and the second shared area in the first environment coordinate system and the locations of the first shared area and the second shared area in the second environment coordinate system for the first and second virtual environment respectively, wherein the movement from the first shared area to the second shared area in the first environment coordinate system of the first virtual environment is translated as the movement from the first shared area to the second shared area in the second environment coordinate system of the second virtual environment, and the movement from the start location in the first shared area to the end location in the first shared area in the first virtual environment is translated as a same movement from a start position in the first shared area to an end position in the first shared area in the second virtual environment, wherein the start locations and the end locations in the first shared area are the same in the first shared coordinate system irrespective of virtual environment.
In some embodiments the controller is further configured to portion the movement into a plurality of partial movements, and to translate the movement by translating each partial movement.
In some embodiments the controller is further configured to translate the movement by determining if the movement is from one of the first shared area and the second shared area to the other one of the first shared area and the second shared area, and if so determine a direction in the second environment coordinate system of the movement and translate any partial movement in the first shared area as being in the determined direction.
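A minimal sketch of this portioning and direction-based translation is given below, assuming two-dimensional coordinates and straight-line movements; the helper names and the equal-length portioning are illustrative choices only.

```python
# Illustrative sketch only: portioning a movement into partial movements
# and replaying them in the second environment along the direction given
# by that environment's own shared-area locations.
import math

def portion(start, end, n):
    # Split the straight movement start -> end into n equal partial movements.
    return [((end[0] - start[0]) / n, (end[1] - start[1]) / n)] * n

def translate_partials(partials, start_env2, end_env2):
    # Determine the direction in the second environment coordinate system
    # and translate each partial movement as being in that direction.
    total = sum(math.hypot(px, py) for px, py in partials)
    dx, dy = end_env2[0] - start_env2[0], end_env2[1] - start_env2[1]
    pos, path = start_env2, [start_env2]
    for px, py in partials:
        f = math.hypot(px, py) / total      # this partial's share of the whole
        pos = (pos[0] + f * dx, pos[1] + f * dy)
        path.append(pos)
    return path

partials = portion((0.0, 0.0), (4.0, 0.0), 4)                # movement in env 1
print(translate_partials(partials, (1.0, 1.0), (1.0, 5.0)))  # replayed in env 2
```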
In some embodiments the first shared coordinate system is absolute. In some embodiments the first environment coordinate system and the second environment coordinate system are relative.
In some embodiments the second shared area is a shared object.
In some embodiments a location of a shared area in a virtual environment is determined based on user input.
In some embodiments the controller is further configured to determine that the object being moved is a real life object, and in response thereto generate a virtual representation of the object to be used in the virtual environment.
In some embodiments the controller is further configured to generate a virtual marking of the real life object to be used in the first virtual environment to indicate that the object is rendered passive.
In some embodiments the controller is further configured to refrain from displaying the real life object.
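Purely as an illustration of the three embodiments above, a server-side handover from a moved real-life object to a virtual representation might be structured as follows; the SceneObject type, the scene dictionary and the marking choices are assumptions, not the disclosed implementation.

```python
# A sketch, under assumed data structures, of how a moved real-life
# object could be replaced by a virtual representation and rendered
# passive (or hidden). Nothing here is prescribed by the disclosure.
from dataclasses import dataclass

@dataclass
class SceneObject:
    object_id: str
    is_real_life: bool
    visible: bool = True
    passive: bool = False

def on_object_moved(obj, scene):
    if obj.is_real_life:
        # Generate a virtual representation to be used from now on.
        twin = SceneObject(object_id=obj.object_id + "-twin", is_real_life=False)
        scene[twin.object_id] = twin
        obj.passive = True      # mark the real-life object as rendered passive
        obj.visible = False     # or simply refrain from displaying it
        return twin
    return obj

scene = {}
moved = on_object_moved(SceneObject("mug", is_real_life=True), scene)
```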
In some embodiments the first virtual environment is associated with a first virtual display arrangement and the second virtual environment is associated with a second virtual display arrangement.
According to another aspect there is provided a virtual display system wherein the system comprises a virtual environment server according to the above.
In some embodiments the system further comprises the first virtual display arrangement and the second virtual display arrangement.
According to another aspect there is provided a virtual display arrangement comprising a server according to herein.
According to one aspect there is provided a method for use in a virtual environment server, wherein the method comprises: receiving an indication of a first shared area in a first virtual environment, wherein the first shared area is associated with a first shared coordinate system; receiving an indication of the first shared area in a second virtual environment; receiving an indication of a second shared area in the first virtual environment; receiving an indication of the second shared area in the second virtual environment; receiving an indication of a location of an object in the first virtual environment; and determining a corresponding location in the second virtual environment for a corresponding object, wherein the method further comprises determining the corresponding location by: determining whether the location for the object in the first virtual environment indicates a position in the first shared area, and if so translating the location of the object in the first virtual environment into a location in the second virtual environment representing a same location in the first shared area, and if not translating the location of the object relative the first shared area and the second shared area in the first virtual environment into a same position relative the first shared area and the second shared area in the second virtual environment.
According to one aspect there is provided a computer-readable medium carrying computer instructions that when loaded into and executed by a controller of a virtual display arrangement enables the virtual display arrangement to implement a method according to herein.
According to one aspect there is provided a software component arrangement for use in a virtual environment server, wherein the software component arrangement comprises: a software component for receiving an indication of a first shared area in a first virtual environment, wherein the first shared area is associated with a first shared coordinate system; a software component for receiving an indication of the first shared area in a second virtual environment; a software component for receiving an indication of a second shared area in the first virtual environment; a software component for receiving an indication of the second shared area in the second virtual environment; a software component for receiving an indication of a location of an object in the first virtual environment; and a software component for determining a corresponding location in the second virtual environment for a corresponding object, wherein the software component for determining the corresponding location comprises: a software component for determining whether the location for the object in the first virtual environment indicates a position in the first shared area, and a software component for translating the location of the object in the first virtual environment into a location in the second virtual environment representing a same location in the first shared area if so, and a software component for translating the location of the object relative the first shared area and the second shared area in the first virtual environment into a same position relative the first shared area and the second shared area in the second virtual environment if not so.
According to one aspect there is provided a virtual environment server comprising: a circuitry for receiving an indication of a first shared area in a first virtual environment, wherein the first shared area is associated with a first shared coordinate system; a circuitry for receiving an indication of the first shared area in a second virtual environment; a circuitry for receiving an indication of a second shared area in the first virtual environment; a circuitry for receiving an indication of the second shared area in the second virtual environment; a circuitry for receiving an indication of a location of an object in the first virtual environment; and a circuitry for determining a corresponding location in the second virtual environment for a corresponding object, wherein the circuitry for determining the corresponding location comprises: a circuitry for determining whether the location for the object in the first virtual environment indicates a position in the first shared area, and a circuitry for translating the location of the object in the first virtual environment into a location in the second virtual environment representing a same location in the first shared area if so, and a circuitry for translating the location of the object relative the first shared area and the second shared area in the first virtual environment into a same position relative the first shared area and the second shared area in the second virtual environment if not so.
Further embodiments and advantages of the present invention will be given in the detailed description. It should be noted that the teachings herein find use in object detection and virtual display arrangements in many areas of computer vision, including image retrieval, industrial use, robotic vision, augmented reality and video surveillance.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will be described in the following, reference being made to the appended drawings which illustrate non-limiting examples of how the inventive concept can be reduced into practice.
Figure 1A shows a schematic view of a virtual display arrangement according to some embodiments of the present invention;
Figure 1B shows a schematic view of a virtual display arrangement according to some embodiments of the present invention;
Figure 1C shows a schematic view of a virtual display arrangement according to some embodiments of the present invention;
Figure 1D shows a schematic view of a virtual display arrangement according to some embodiments of the present invention;
Figure 2A shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
Figure 2B shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
Figure 2C shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
Figure 2D shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
Figure 2E shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
Figure 2F shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
Figure 2G shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
Figure 2H shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
Figure 2I shows a schematic view of a virtual display system according to some embodiments of the teachings herein;
Figure 3 shows a flowchart of a general method according to some embodiments of the teachings herein;
Figure 4 shows a component view for a software component arrangement according to some embodiments of the teachings herein;
Figure 5 shows a component view for an arrangement comprising circuits according to some embodiments of the teachings herein; and
Figure 6 shows a schematic view of a computer-readable medium carrying computer instructions that when loaded into and executed by a controller of an arrangement enables the arrangement to implement some embodiments of the teachings herein.
DEFINITIONS
AUGMENTED REALITY Augmented reality (AR) augments the real world and its physical objects by overlaying virtual content. This virtual content is often produced digitally and incorporates sound, graphics, and video. For instance, a shopper wearing augmented reality glasses while shopping in a supermarket might see nutritional information for each object as they place it in their shopping cart. The glasses augment reality with pertinent information.
VIRTUAL REALITY Virtual reality (VR) uses digital technology to create an entirely simulated environment. Unlike AR — which augments reality — VR is intended to immerse users inside an entirely simulated experience. In a fully VR experience, all visuals and sounds are produced digitally, without any input from the user's actual physical environment. For instance, VR is increasingly integrated into manufacturing, whereby trainees practice building machinery before starting on the production line.
MIXED REALITY Mixed reality (MR) combines elements of both AR and VR. Similarly to AR, MR environments overlay digital effects on top of the user's physical environment. However, MR integrates additional, richer information about the user's physical environment such as depth, dimensionality, and surface textures. In MR environments, the end user experience therefore more closely resembles the real world. To concretize this, consider two users hitting a MR tennis ball on a real-world tennis court. MR will incorporate information about the hardness of the surface (grass versus clay), the direction and force with which the racket struck the ball, and the players' height.
Augmented reality and mixed reality are often used to refer to the same idea and, for simplification, in this document the term augmented reality also refers to mixed reality, and virtual reality will refer to both virtual reality and augmented reality (as well as mixed reality) unless specifically indicated.
VR DEVICE The device which will be used as an interface for the user to perceive both virtual and/or real content in the context of virtual reality. Such a device will typically have a display which could display both the environment (real or virtual) and virtual content together (i.e., video see-through), or overlay virtual content through a semi-transparent display (optical see-through, the device technically being an AR device). The VR device would need to acquire information about the environment using sensors (typically cameras and inertial sensors) to map the environment while simultaneously keeping track of the device's location within it.
DETAILED DESCRIPTION
Figure 1A shows a schematic view of a virtual display arrangement 100 according to some embodiments of the present invention. It should be noted that the virtual display arrangement 100 may comprise a single device or may be distributed across several devices and apparatuses. Some specific examples will be discussed in relation to figures 1B, 1C and 1D. The virtual display arrangement 100 is in some embodiments a VR device, and in some embodiments an augmented reality device.
The virtual display arrangement 100 comprises or is operably connected to a controller 101 and a memory 102. The controller 101 is configured to control the overall operation of the virtual display arrangement 100. In some embodiments, the controller 101 is a graphics controller. In some embodiments, the controller 101 is a general purpose controller. In some embodiments, the controller 101 is a combination of a graphics controller and a general purpose controller. As a skilled person would understand there are many alternatives for how to implement a controller, such as using Field-Programmable Gate Array (FPGA) circuits, ASICs, GPUs, etc. in addition or as an alternative. For the purpose of this application, all such possibilities and alternatives will be referred to simply as the controller 101.
The memory 102 is configured to store graphics data and computer-readable instructions that when loaded into the controller 101 indicate how the virtual display arrangement 100 is to be controlled. The memory 102 may comprise several memory units or devices, but they will be perceived as being part of the same overall memory 102. There may be one memory unit for a display arrangement storing graphics data, one memory unit for an imaging device storing settings, one memory unit for the communications interface (see below) for storing settings, and so on. As a skilled person would understand there are many possibilities of how to select where data should be stored and a general memory 102 for the virtual display arrangement 100 is therefore seen to comprise any and all such memory units for the purpose of this application. As a skilled person would understand there are many alternatives of how to implement a memory, for example using non-volatile memory circuits, such as EEPROM memory circuits, or using volatile memory circuits, such as RAM memory circuits. For the purpose of this application all such alternatives will be referred to simply as the memory 102.
In some embodiments the virtual display arrangement 100 also comprises an image capturing device 106 (such as a camera or image sensor) capable of capturing an image or series of images (video) through receiving light (for example visual, ultraviolet or infrared to mention a few examples), possibly in cooperation with the controller 101. In some alternative embodiments the virtual display arrangement 100 also comprises an imaging device capable of receiving data representing an image or series of images, such as a streaming device. The imaging device 106 is thus configured, possibly in combination with the controller 101, to receive an image or series of images and detect an object (indicated RLO (Real Life Object) in figure 1B) therein. The imaging device 106 may be comprised in the virtual display arrangement 100 by being housed in a same housing as the virtual display arrangement, or by being operably connected to it, by a wired connection or wirelessly. The virtual display arrangement 100 is also connected to or comprises a display arrangement
105 (not shown in figure 1A, but discussed in relation to figures 1B, 1C and 1D) for displaying received/captured images as well as virtual content.
Figure 1B shows a schematic view of a virtual display arrangement 100 being a viewing device 100 according to some embodiments of the present invention. In some such embodiments, the viewing device 100 is a smartphone or a tablet computer, being examples of Virtual See Through (VST) devices. In some such embodiments, the viewing device further comprises a (physical) display arrangement 105, which may be a touch display, and the imaging device 106 may be a camera of the smartphone or tablet computer. It should be noted that even though the virtual display arrangement comprises a camera, it may still, as an alternative or additional feature, receive the image or series of images from a remote imaging device, or an image storage. Such embodiments apply to all embodiments discussed in relation to figures 1A to 1D.
In some embodiments the controller 101 is configured to receive an image from the camera
106 and display the image on the display arrangement 105 along with virtual content VC. The virtual content is generated by the controller or received from the memory 102 or an external device through a communication interface 103 that will be discussed in further detail in the below. In the example embodiment of figure 1B, the camera 106 is arranged on a backside (opposite side of the display 105, as is indicated by the dotted contour of the camera 106) of the virtual display arrangement 100 for enabling real life objects (indicated RLO in figure 1B) behind the virtual display arrangement 100 to be captured and shown to a user (as a displayed RLO DRLO, as indicated by the dotted lines from the RLO, through the camera, to the DRLO on the display 105) on the display 105 along with any virtual content to be displayed. The displayed virtual content may be information and/or graphics indicating and/or giving information.
The viewing device 100 being a smartphone may be carried in a head mount, whereby the viewing device effectively becomes or operates as a head-mounted virtual see-through device as will be discussed in relation to figure 1D.
Figure 1C shows a schematic view of a virtual display arrangement being an optical see-through (OST) viewing device 100 according to some embodiments of the present invention, where a user looks in through one end and sees the real-life objects (RLO) in the line of sight (LOS) at the other end of the viewing device 100. In some embodiments the viewing device 100 is a head-mounted viewing device 100 to be worn by a user (not shown explicitly in figure 1C) for looking through the viewing device 100. In one such embodiment the viewing device 100 is arranged as glasses, or other eye wear including goggles, to be worn by a user.
The viewing device 100 is in some embodiments arranged to be hand-held, whereby a user can hold up the viewing device 100 to look through it.
The viewing device 100 is in some embodiments arranged to be mounted on for example a tripod, whereby a user can mount the viewing device 100 in a convenient arrangement for looking through it. In one such embodiment, the viewing device 100 may be mounted on a dashboard or in a side-window of a car or other vehicle.
The viewing device 100 comprises an at least semi-transparent display arrangement 105 for presenting virtual content VC to a viewer, whereby virtual content VC may be displayed to supplement the real-life view being viewed in line of sight.
Figure 1D shows a schematic view of a virtual display arrangement being a virtual see-through (VST) viewing device 100, where the image capturing device 106 captures (or receives) images of objects RLO from for example behind the device and displays them on a non-transparent display 105 where virtual content VC is mixed with virtual representations of real-life objects VRLO.
As mentioned in relation to figure 1B, the virtual display arrangement 100 may be a combination of a smartphone 100 and a head mount.
In the following, simultaneous reference will be made to the virtual display arrangements 100 of figures 1A, 1B, 1C and 1D.
In some embodiments the virtual display arrangement 100 may further comprise a communication interface 103. The communication interface may be wired and/or wireless. The communication interface may comprise several interfaces.
In some embodiments the communication interface comprises a USB (Universal Serial Bus) interface. In some embodiments the communication interface comprises an HDMI (High Definition Multimedia Interface) interface. In some embodiments the communication interface comprises a Display Port interface. In some embodiments the communication interface comprises an Ethernet interface. In some embodiments the communication interface comprises a MIPI (Mobile Industry Processor Interface) interface. In some embodiments the communication interface comprises an analog interface, a CAN (Controller Area Network) bus interface, an I2C (Inter-Integrated Circuit) interface, or other interface.
In some embodiments the communication interface comprises a radio frequency (RF) communications interface. In one such embodiment the communication interface comprises a Bluetooth™ interface, a WiFi™ interface, a ZigBee™ interface, a RFID™ (Radio Frequency IDentifier) interface, Wireless Display (WiDi) interface, Miracast interface, and/or other RF interface commonly used for short range RF communication. In an alternative or supplemental such embodiment, the communication interface comprises a cellular communications interface such as a fifth generation (5G) cellular communication interface, an LTE (Long Term Evolution) interface, a GSM (Global System for Mobile communications) interface and/or other interface commonly used for cellular communication. In some embodiments the communications interface is configured to communicate using the UPnP (Universal Plug and Play) protocol. In some embodiments the communications interface is configured to communicate using the DLNA (Digital Living Network Alliance) protocol.
In some embodiments, the communications interface 103 is configured to enable communication through more than one of the example technologies given above. As an example, a wired interface, such as MIPI could be used for establishing an interface between the display arrangement, the controller and the user interface, and a wireless interface, for example WiFi™ could be used to enable communication between the virtual display arrangement 100 and an external host device (not shown).
The communications interface 103 may be configured to enable the virtual display arrangement 100 to communicate with other devices, such as other virtual display arrangements 100 and/or smartphones, Internet tablets, computer tablets or other computers, media devices, such as television sets, gaming consoles, video viewers or projectors (not shown), or image capturing devices for receiving the image data streams. Specifically, the communication interface enables the virtual display arrangement 100 to communicate with a server (referenced 210 in figure 2A).
A user interface 104 is in some embodiments comprised in the virtual display arrangement 100 (only shown in figures 1B, 1C and 1D). Additionally, or alternatively, (at least a part of) the user interface 104 may be provided remotely to the virtual display arrangement 100 in a separate device connected through the communication interface 103, the user interface then (at least a part of it) not being a physical means in the virtual display arrangement 100, but implemented by receiving user input through a remote device via the communication interface 103. One example of such a remote device is a game controller, a mobile phone handset, a tablet computer or a computer.
Figure 2A shows a schematic view of a virtual display system 200 according to the teachings herein. The virtual display system 200 comprises a first virtual display arrangement 100:1 according to any of the embodiments disclosed above and herein and a second virtual display arrangement 100:2 also according to any of the embodiments disclosed above and herein. In the example of figure 2A, the first and second virtual display arrangements 100:1, 100:2 are exemplified as head mounted viewing devices 100:1, 100:2, but it should be noted that any virtual display arrangement may be used for either the first or second virtual display arrangement 100:1, 100:2.
The first viewing device 100:1 is arranged to operate or be used in a first area, represented by a first virtual reality environment 240:1, and the second viewing device 100:2 is arranged to operate or be used in a second area, represented by a second virtual reality environment 240:2. In both areas and corresponding virtual areas, there may be real-life objects (RLO) 220 and virtual content 230. It should be noted that the real-life objects may be shown to a user as real-life objects or as virtual objects, depending on the type of viewing device 100 used and the settings of the viewing device 100.
The first viewing device 100:1 and the second viewing device 100:2 are arranged to communicate through a server 210.
In some embodiments, the virtual environment server 210 is a stand-alone arrangement, possibly part of a cloud service, comprising a controller 211, a memory 212 and a communication interface 213.
In some embodiments, the virtual environment server 210 is arranged to be executed by one or both of the viewing devices 100 wherein the controller 211 is the controller 101 of the viewing device 100, the memory 212 is the memory 102 of the viewing device 100, and the communication interface 213 is the communication interface 103 of the viewing device 100. In some such embodiments the work to be performed is distributed between the two viewing devices (or possibly only to one viewing device).
Figure 2B shows a schematic view of a virtual display system 200 according to the teachings herein, such as the virtual display system 200 of figure 2A. The lower half of figure 2B shows real areas to be shared and the upper half of figure 2B shows shared virtual areas of the real areas.
In figure 2B it is shown how one or more surfaces 250 may be shared between the two virtual reality environments 240. In the example of figure 2B it is also shown how some surfaces may be objects 260 that are shared. In the example of figure 2B there are two shared surfaces 250A and 250B, where the first reality environment 240:1 shows a first shared surface 250:1A and a second shared surface 250:1B, and where the second reality environment 240:2 shows a corresponding first shared surface 250:2A and a corresponding second shared surface 250:2B.
In the example of figure 2B, the first shared surface 250:1A of the first shared environment 240:1 is a work desk and the first shared surface 250:2A of the second shared environment 240:2 is a conference desk, the two surfaces thus being of different dimensions, which is also indicated in figure 2B. Furthermore, in the example of figure 2B, the second shared surface 250:1B of the first shared environment 240:1 is a whiteboard and the second shared surface 250:2B of the second shared environment 240:2 is also a whiteboard; however, the two surfaces are also of different dimensions, which is also indicated in figure 2B. Even if the shared surfaces are shown as being of different sizes, they may be of the same size.
The first reality environment 240:1 also shows a shared object 260:1 being a structural object and the second reality environment 240:2 also shows a corresponding shared object 260:2 also being a structural object. In the example of figure 2B, the shared object 260:1 of the first shared environment 240:1 is a bed and the shared object 260:2 of the second shared environment 240:2 is a sofa, the two objects thus being of different dimensions, which is also indicated in figure 2B. The shared objects may be used to align locations where users are for example sitting.
It should be noted that any shared object 260 may be a shared surface 250 and vice-versa. In the following, reference will only be given to the shared surfaces 250. It should also be noted that even if two shared surfaces 250 and one shared object 260 are shown, there may be any number of shared surfaces 250 and/or shared objects 260.
As is indicated, the shared surfaces 250 may be arranged at different locations in the corresponding environment 240, but are mapped to one another by being shared. The mapping may be performed by the virtual environment server 210.
Figure 2C shows a schematic view of a virtual display system according to the teachings herein, such as the virtual display system 200 of figure 2B.
In figure 2C only one shared environment 240 is shown along with the shared surfaces 250 and shared object(s) 260. Figure 2C shows an environment coordinate system 245 for the shared environment 240 and also surface coordinate systems 255 for the shared surfaces 250. Even if only the coordinate systems are shown for the first shared environment 240:1, it would be understood that the same applies for the second shared environment 240:2.
It should be noted that even if the shared surfaces may be at different locations in the coordinate system for the shared environments 240:1, 240:2, they are mapped to one another so that their coordinate systems overlap. That is, the coordinate system 255:1A for the first shared surface 250:1A in the first shared environment will be mapped to overlap with the coordinate system of the corresponding first shared surface 250:2A of the second shared environment 240:2.
A mapping of a first coordinate system to a second coordinate system may be done in many different and well-known manners, including alignment of origins and scaling of coordinates, and will not be discussed in further detail herein.
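As one example of such a well-known mapping, assuming rectangular surfaces with origins in a corner, coordinates may be normalized on one surface and rescaled onto the other; the function and sizes below are illustrative only.

```python
# Illustrative only: mapping between two rectangular surface coordinate
# systems by aligning origins and scaling (assumed corner origins).

def map_between_surfaces(p, size_a, size_b):
    # Normalize the point on surface A, then rescale onto surface B.
    u, v = p[0] / size_a[0], p[1] / size_a[1]
    return (u * size_b[0], v * size_b[1])

# A point in the middle of a 1.2 m x 0.6 m work desk lands in the middle
# of a 3.0 m x 1.0 m conference desk:
print(map_between_surfaces((0.6, 0.3), (1.2, 0.6), (3.0, 1.0)))   # (1.5, 0.5)
```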
As is indicated in figure 2C, an object 220/230, real or virtual, will thus have coordinates for its position in the area in the environment coordinate system 245 and coordinates for its position in the surface coordinate system 255.
Comparing figures 2B and 2C it becomes apparent that a shared surface 250 may be located at different coordinates in the environment coordinate systems 245. They will however, and as discussed above, have corresponding surface coordinate systems so that an object appearing at a position in a shared surface 250:1 in the first reality 240:1 appears at the same (or at least corresponding) position in the corresponding shared surface 250:2 in the second reality 240:2.
Figure 2D shows a schematic view of a virtual display system 200 according to the teachings herein, such as the virtual display system 200 of figure 2B, where objects 230, 220 are shown in the shared surfaces 250. In the example of figure 2D there is a real-life object 220:1ARLO and a virtual object 230:1 in the first shared surface 25O:1A in the first environment 240:1. There is also a real-life object 220:2ARLO and a virtual object 230:2 in the first shared surface 250:2A in the second environment 240:2.
As is indicated in the figure, the virtual object 230:1 in the first shared surface 250:1A in the first environment 240:1 is a virtual representation of the real-life object 220:2A in the first shared surface 250:2A in the second environment 240:2. This is indicated by the reference of the virtual object 230:1 also including 220:2AV. Similarly, the virtual object 230:2 220:1AV in the first shared surface 250:2A in the second environment 240:2 is a virtual representation of the real-life object 220:1A in the first shared surface 250:1A in the first environment 240:1. To enable sharing of physical objects 220, the virtual display system according to herein is thus configured to mirror the real-life object 220 with a virtual object 230 in the corresponding surface to be seen (and possibly used) by the other users. As would be understood, any shared real-life object will thus have a virtual counterpart, for example a digital twin.
The second shared surfaces 250:1B, 250:2B are shown to have one real-life object each that is mirrored to the corresponding surface, but also a virtual object 230:1, 230:2 that does not have a real-life object counterpart. This allows for also sharing virtual objects.
It should be noted that any number of objects, virtual or real-life, may be in a shared surface, and the example of one virtual and one real-life object in each first shared surface 250:1A, 250:2A is only one example.
As is shown in figure 2D, the sizes of the shared surfaces may be different between the two environments 240:1, 240:2. However, the objects 220, 230 in the shared surfaces 250 are shown to be at a same or corresponding position in the shared surfaces 250, and will thus appear to each user to be in a same position, even if the positions of the shared surfaces are completely different.
As noted above, the virtual environment server 210 may be in a standalone arrangement or in a virtual display arrangement 100, and in some embodiments the server, being a virtual environment server, is thus configured to receive an indication of a first shared area 250:1A, 250:2A, 260 (such as a shared surface 250 or shared object 260) in a first area represented by a first virtual environment 240:1 as is discussed in the above. The indication of the first shared area includes one or more of a location in the environment 240 and dimensions of the first shared area.
In some embodiments, the indication is received through user input marking an area to be shared. In some embodiments, the indication is received from the memory having stored coordinates for shared areas. In some embodiments, the indication is received through image analysis, where areas to be shared may have been marked using specific objects and/or colors.
As discussed above, the location in the area is according to an environment coordinate system associated with the environment 240, and the dimensions defines a first shared coordinate system 255:1A, 255:2A associated with the first shared area 25O:1A, 250:2A, 260.
The virtual environment server 210 is also configured to receive an indication of the first shared area (250:1A, 250:2A, 260) in a second area represented by a second virtual environment (240:2). The second virtual environment 240:2 is also associated with an environment coordinate system 245:2 and the first shared area 250:1A, 250:2A, 260 in the second environment 240:2 is associated with a surface coordinate system 255.
As discussed in the above, the surface coordinate systems 255 of the shared areas 250, 260 are mapped to one another, regardless of position in the environment 240 and regardless of dimensions.
As is also shown and discussed above, the virtual environment server 210 is also configured to receive an indication of a second shared area 250:1B, 250:2B, 260 in the first virtual environment 240:1 and to receive an indication of the second shared area 250:1B, 250:2B, 260 in the second virtual environment 240:2. The second shared area is, similarly to the first shared area, associated with a second surface coordinate system 255:2.
Furthermore, the virtual environment server 210 is also configured to receive an indication of a location of an object 220:1, 230:1 in the first virtual environment 240:1, and determine a corresponding location in the second virtual environment 240:2 for a corresponding object 230:2. As noted above, regardless of whether the object is virtual or real life, it will be displayed as a virtual object in the other system. The virtual environment server 210 is configured to determine the corresponding location by determining whether the location for the object 220:1, 230:1 in the first virtual environment 240:1 indicates a position in the first shared area 250:1A, 250:2A, and if so translating the location of the object 220:1, 230:1 in the first virtual environment 240:1 into a location in the second virtual environment 240:2 representing a same location in the first shared area.
As discussed in the above, this allows for an object to be displayed at a same position in a shared surface and thus be experienced similarly by the two (or more) users, even if the shared surfaces are at different positions and/or of different sizes.
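The two branches of this determination may be sketched as follows; the (origin, size) rectangle representation, the anchor-point decomposition used for locations outside the shared area, and all names are assumptions made for this example, not the disclosed implementation.

```python
# Hedged sketch of the two-branch determination. Shared areas are
# assumed to be (origin, size) rectangles in 2-D; the anchors used for
# locations outside the first shared area are the area origins.
import math

def inside(p, area):
    (ox, oy), (w, h) = area
    return ox <= p[0] <= ox + w and oy <= p[1] <= oy + h

def rel(p, a, b):
    # Decompose p relative to anchors a and b: fraction t along a -> b
    # and signed perpendicular offset d.
    abx, aby = b[0] - a[0], b[1] - a[1]
    apx, apy = p[0] - a[0], p[1] - a[1]
    ab2 = abx * abx + aby * aby
    return ((apx * abx + apy * aby) / ab2,
            (apx * aby - apy * abx) / math.sqrt(ab2))

def unrel(t, d, a, b):
    # Rebuild the point with the same t and d relative new anchors.
    abx, aby = b[0] - a[0], b[1] - a[1]
    n = math.hypot(abx, aby)
    return (a[0] + t * abx + d * aby / n,
            a[1] + t * aby - d * abx / n)

def corresponding_location(p, area1_e1, area2_e1, area1_e2, area2_e2):
    if inside(p, area1_e1):
        # Same location in the first shared coordinate system.
        o1, o2 = area1_e1[0], area1_e2[0]
        return (o2[0] + p[0] - o1[0], o2[1] + p[1] - o1[1])
    # Same position relative the first and the second shared areas.
    t, d = rel(p, area1_e1[0], area2_e1[0])
    return unrel(t, d, area1_e2[0], area2_e2[0])
```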
Returning to figure 2D, also shown is a first viewing device 100:1 in the first area 240:1 being representative of a first user and a second viewing device 100:2 in the second area 240:2 being representative of a second user. Also shown is how a virtual representation, or avatar, 100:2V of the second user is shown in the first environment 240:1 at a position which is the same as regards the first shared surface 250:2A in the second environment 240:2, and how a virtual representation, or avatar, 100:1V of the first user is shown in the second environment 240:2 at a position which is the same as regards the first shared surface 250:1A in the first environment 240:1.
The server is thus also configured to determine a location of an object, such as a user or the avatar, relative one of the shared surfaces even when the object is not in a shared surface. The virtual environment server 210 is thus configured to translate the location of the object 220:1, 230:1, 100:1, 100:2 relative the first shared area 250:1A and/or the second shared area 250:1B, 260 in the first virtual environment 240:1 into a same position relative the first shared area 250:2A and/or the second shared area 250:2B, 260 in the second virtual environment 240:2. As can be seen in figure 2D, the positions of the avatars are at corresponding positions to the positions of the viewing devices 100:1, 100:2 with regards to the first shared surface 250:1A, 250:2A.
As discussed with relation to figure 2C, the positions may be determined based on the coordinate systems, and the virtual environment server 210 is thus, in some embodiments, further configured to receive an indication of the first shared coordinate system 255:1A for the first shared area 250:1A adapted to the first virtual environment 240:1 and to determine a location in the first shared area 250:1A in the first virtual environment 240:1 as coordinates in the first shared coordinate system 255:1A, and to similarly receive an indication of the first shared coordinate system 255:2A for the first shared area 250:2A adapted to the second virtual environment 240:2 and determine a location in the first shared area 250:2A in the second virtual environment 240:2 as coordinates in the first shared coordinate system 255:2A. The controller is further configured to translate the location of the object 220:1, 230:1 in the first virtual environment 240:1 into a location in the second virtual environment 240:2 representing a same location in the first shared area by assigning the same location the same coordinates in the first shared coordinate system 255:1A, 255:2A in the first and the second virtual environment 240:1, 240:2.
As regards objects outside the actual shared surface, such as the viewing device, the position may be determined by extending the shared coordinate system beyond the shared surface. The virtual environment server 210 is thus in some embodiments further configured to receive an indication of a first environment coordinate system 245:1 for the first virtual environment 240:1 and to determine a location of the first shared area 250:1A in the first virtual environment 240:1 as coordinates in the first environment coordinate system 245:1, wherein the server is further configured to determine a location of an object in the first shared area 250:1A as coordinates in the first environment coordinate system 245:1 based on the coordinates for the location of the object extending from the coordinates of the location of the first shared area 250:1A in the first virtual environment 240:1.
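A minimal sketch, not from the disclosure, of this idea of extending the shared coordinate system outside the surface: an object's environment coordinates are expressed as an offset from the shared surface's location, and the same offset is applied at the corresponding surface in the other environment (function and variable names are invented):

```python
# Illustrative helpers, assuming 2D environment coordinates.

def offset_from_surface(obj_xy, surface_xy):
    """Offset of an object from a shared surface's origin (may lie outside it)."""
    return (obj_xy[0] - surface_xy[0], obj_xy[1] - surface_xy[1])

def apply_offset(surface_xy, offset_xy):
    """Realise the same offset at the corresponding surface in another environment."""
    return (surface_xy[0] + offset_xy[0], surface_xy[1] + offset_xy[1])

# A viewing device standing one metre in front of the whiteboard in the first
# environment is shown one metre in front of the whiteboard in the second.
offset = offset_from_surface((1.0, -1.0), (0.0, 0.0))
print(apply_offset((5.0, 3.0), offset))  # -> (6.0, 2.0)
```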
In figure 2D it is indicated how the real-life object 220:2A is moved in the first shared surface 250:2A of the second environment 240:2 and how a corresponding movement is shown for the corresponding virtual object 230:1, 220:2AV in the first shared surface 250:1A of the first environment 240:1.

Figure 2E shows a schematic view of a virtual display system according to the teachings herein, such as the virtual display system 200 of figure 2B, where it is shown how a user, or rather the viewing device 100 of a user, is moved and how the representation of the viewing device 100, the avatar 100R, is also moved. As is shown, even though the viewing device 100 is not in the shared surface 250, its location is determined relative to the shared surface, and the movement, or rather the resulting position, is also shown relative to the shared surface(s) 250. In the example of figure 2E, the viewing device 100:1 in the first environment 240:1 is shown as moved from the first shared surface 250:1A to the second shared surface 250:1B (the user walking from the whiteboard to the desk) and the avatar performs basically the same movement, but in the second environment 240:2, thus moving from the first shared surface 250:2A to the second shared surface 250:2B in the second environment 240:2.
The virtual environment server 210 is thus in some embodiments further configured to receive an indication of a movement of the object 220:1, 230:1 in the first virtual environment 240:1 and to translate the movement relative to the first shared area 250:1A and the second shared area 250:1B, 260 in the first virtual environment 240:1 into a relative movement between the first shared area 250:2A and the second shared area 250:2B, 260 in the second virtual environment 240:2.
In some embodiments the indication of the movement comprises start coordinates in the first environment coordinate system 245:1 for a start location for the object 220:1, 230:1, 100:1 at (next to or inside) the first shared area 250:1A and end coordinates in the first environment coordinate system 245:1 for an end location for the object 220:1, 230:1, 100R at the second shared area 250:1B in the first virtual environment 240:1. In such cases, the server 210 is further configured to translate the end coordinates in the first environment coordinate system 245:1 into end coordinates in a second environment coordinate system 245:2 based on the location of the second shared surface 250:1B in the first environment coordinate system 245:1, so that the distance (i.e. the movement vector) from the end position to the location of the first shared surface is the same in the first and the second environment coordinate system.
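A hedged sketch of this translation of end coordinates, assuming 2D environment coordinates and preserving the end position's offset from the shared surface across the two environment coordinate systems (names are illustrative, not from the disclosure):

```python
# Re-anchor the end position of a movement at the corresponding shared
# surface in the other environment, keeping the same offset (movement vector).

def translate_end_coordinates(end_env1, surface_env1, surface_env2):
    """Preserve the end position's offset from the shared surface across
    the two environment coordinate systems."""
    offset = (end_env1[0] - surface_env1[0], end_env1[1] - surface_env1[1])
    return (surface_env2[0] + offset[0], surface_env2[1] + offset[1])

# Example: the desk (second shared surface) sits at (10, 2) in environment 1
# and at (4, 7) in environment 2; the movement ends 0.5 m in front of it.
print(translate_end_coordinates((10.0, 1.5), (10.0, 2.0), (4.0, 7.0)))  # -> (4.0, 6.5)
```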
Figure 2F shows a schematic view of a virtual display system according to the teachings herein, such as the virtual display system 200 of figure 2B, where it is shown how a real-life object 220:1A (RLO) is moved from the first shared surface 250:1A in the first environment 240:1 to the second shared surface 250:1B in the first environment 240:1. As is shown, the movement is the same as regards the shared surfaces 250, even if not the same as regards the environments 240.

Figure 2G shows a schematic view of a virtual display system 200 according to the teachings herein, such as the virtual display system 200 of figure 2D, where it is shown how a virtual object 230 is moved from the second shared surface 250:1B in the first environment 240:1 to the first shared surface 250:1A in the first environment 240:1. As is shown, the movement is the same as regards the shared surfaces 250, even if not the same as regards the environments 240.
In some embodiments the server is thus configured to translate a movement both with regards to the shared surface coordinate systems and to the environment coordinate systems, wherein a movement inside a shared surface is seen as absolute and performed by a mapping of the shared surface coordinate systems 255:1, 255:2, and wherein a movement between shared surfaces 250 is seen as relative. In some embodiments the server is thus configured to translate the movement relative to the first shared area 250 and the second shared area 250, 260 in the first environment coordinate system 245:1 and the first shared area 250 and the second shared area 250, 260 in the second environment coordinate system 245:2 for the first and second virtual environment respectively, wherein a movement from the second shared area to the first shared area in the first virtual environment is translated as a movement from the second shared area to the first shared area in the second virtual environment, and a movement from inside the first shared area to inside the first shared area in the first virtual environment is translated as a same movement inside the first shared area in the second virtual environment.
As would be understood, such movements may be determined based on the coordinate systems. In some embodiments the server 210 is thus further configured to translate the movement relative to the locations of the first shared area 250 and the second shared area 250, 260 in the first environment coordinate system 245:1 and the locations of the first shared area 250 and the second shared area 250, 260 in the second environment coordinate system 245:2 for the first and second virtual environment respectively, wherein the movement from the first shared area to the second shared area in the first environment coordinate system 245:1 of the first virtual environment is translated as the movement from the first shared area to the second shared area in the second environment coordinate system 245:2 of the second virtual environment, and the movement from the start location in the first shared area to the end location in the first shared area in the first virtual environment is translated as a same movement from a start position in the first shared area to an end position in the first shared area in the second virtual environment, wherein the start locations and the end locations in the first shared area are the same in the first shared coordinate system irrespective of virtual environment.

As is shown in figures 2F and 2G, a movement may correspond to both an absolute movement and a relative movement, and the movement may be portioned into partial movements. In some embodiments the server 210 is thus configured to portion the movement into a plurality of partial movements, and to translate the movement by translating each partial movement.
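An illustrative sketch (assumed names, not the disclosed implementation) of portioning a movement into partial movements and translating each one, where partial movements inside a shared area are mapped absolutely and partial movements outside or between shared areas are translated relatively:

```python
# Translate a path one partial movement (waypoint) at a time, choosing the
# absolute (shared surface) or relative (environment) mapping per waypoint.

def translate_movement(waypoints, inside_shared, map_absolute, map_relative):
    translated = []
    for point in waypoints:
        if inside_shared(point):
            translated.append(map_absolute(point))   # absolute: shared mapping
        else:
            translated.append(map_relative(point))   # relative: between areas
    return translated

# Example with a unit-square shared area and simple placeholder mappings:
inside = lambda p: 0.0 <= p[0] <= 1.0 and 0.0 <= p[1] <= 1.0
path = [(0.2, 0.2), (0.8, 0.5), (2.0, 0.5)]
print(translate_movement(path, inside,
                         lambda p: (p[0] + 5.0, p[1] + 3.0),   # mapped surface
                         lambda p: (p[0] + 4.0, p[1] + 3.0)))  # relative shift
```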
As can be seen in figure 2F, the resulting movement in the second environment 240:2 does not have a natural flow, and to overcome this the server 210 is in some embodiments configured to translate the movement by determining if the movement is from one of the first shared area and the second shared area to the other one of the first shared area and the second shared area, and if so determine a direction in the second environment coordinate system 245:2 of the movement and translate any partial movement in the first shared area as being in the determined direction.
Such a movement is indicated in figure 2F by the dashed arrow marked 2, replacing the arrow marked 1.
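A speculative sketch of this smoothing: when a movement crosses from one shared area to the other, the overall direction in the second environment is computed once, and the partial movements are re-emitted along that direction so the translated path flows naturally (the dashed arrow marked 2 in figure 2F). The function name and parameters are invented for illustration:

```python
import math

def redirect_partials(start, end, step_lengths):
    """Lay out partial movements of given lengths along the start->end direction."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    norm = math.hypot(dx, dy) or 1.0
    ux, uy = dx / norm, dy / norm
    points, x, y = [start], start[0], start[1]
    for length in step_lengths:
        x, y = x + ux * length, y + uy * length
        points.append((x, y))
    return points

# Two partial movements of 2.5 m each, redirected along one direction:
print(redirect_partials((0.0, 0.0), (3.0, 4.0), [2.5, 2.5]))
```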
As discussed in the above, a real-life object 220 is shown as a virtual object in the other environments 240. This enables the user having the real-life object 220 to use it when for example demonstrating something to the other users.
Figure 2H and Figure 2I each show a schematic view of a virtual display system 200 according to the teachings herein, such as the virtual display system 200 of figure 2D, in which views it is shown how the other users may be enabled to utilize the same objects, even without having access to the real-life object, to, for example, continue a demonstration.
In figure 2H it is shown how the virtual object 230:1, 220:2BV in the second shared surface 250:1B in the first environment 240:1, representing the real-life object 220:2B in the second shared surface 250:2B in the second environment 240:2, is moved. Consequently, figure 2I shows how the real-life object 220:2B in the second shared surface 250:2B in the second environment 240:2 is marked as having been moved, which is indicated by the dashed square, and how a new virtual object 230:2, 220:2BV is generated to be used instead of the real-life object 220:2B. The marking may be made through graphics, overlaying with content, or through color markings. In some embodiments, the marking is made by refraining from showing the real-life object.
In some embodiments, the server 210 is thus configured to determine that the object 220 being moved is a real life object 220, and in response thereto generate a virtual representation 230:2 of the object 220 to be used in the virtual environment. In some embodiments the server 210 is further configured to generate a virtual marking of the real life object 220 to be used in the first virtual environment to indicate that the object 220 is rendered passive.
In some embodiments the server 210 is further configured to refrain from displaying the real life object.
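A hedged sketch of this "render the real-life object passive" behaviour: when a remote user moves an object that is physical at its origin, a virtual twin is created and the physical original is flagged so viewers can see it no longer reflects the shared state. All field and function names here are invented, not from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    object_id: str
    is_real: bool
    passive: bool = False

def on_remote_move(moved: TrackedObject, objects: dict) -> None:
    """Flag a moved real-life object as passive and register a virtual twin."""
    if moved.is_real and not moved.passive:
        moved.passive = True  # marking may be graphics, overlay, colour, or hiding
        twin_id = moved.object_id + ":virtual"
        objects[twin_id] = TrackedObject(twin_id, is_real=False)

# Example: the real-life object 220:2B is moved by a remote user.
objects = {"220:2B": TrackedObject("220:2B", is_real=True)}
on_remote_move(objects["220:2B"], objects)
print(objects["220:2B"].passive, "220:2B:virtual" in objects)  # True True
```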
It should be noted that even though the description has been focused herein on two environments 240, it may be understood that more than two environments 240 may be used.
Figure 3 shows a flowchart of a general method according to some embodiments of the teachings herein. The method utilizes a virtual display arrangement 100 as taught herein. Details on how the method is to be performed have been given above with reference to figures 1A, 1B, 1C, 1D, 2A, 2B, 2C, 2D, 2E, 2F, 2G, 2H and 2I.
Through the method of figure 3, based on the teachings herein, the virtual display arrangement 100 is configured for receiving 310 an indication of a first shared area (250:1A, 250:2A, 260) in a first virtual environment (240:1), wherein the first shared area (250:1A, 250:2A, 260) is associated with a first shared coordinate system (255:1A, 255:2A), and receiving 315 an indication of the first shared area (250:1A, 250:2A, 260) in a second virtual environment (240:2). The method also comprises receiving 320 an indication of a second shared area (250:1B, 250:2B, 260) in the first virtual environment (240:1) and receiving 325 an indication of the second shared area (250:1B, 250:2B, 260) in the second virtual environment (240:2). The method further comprises receiving 330 an indication of a location of an object (220:1, 230:1) in the first virtual environment (240:1); and determining 335 a corresponding location in the second virtual environment (240:2) for a corresponding object (230:2), wherein the method further comprises determining the corresponding location by determining 340 whether the location for the object (220:1, 230:1) in the first virtual environment (240:1) indicates a position in the first shared area (250:1A, 250:2A), and if so translating 345 the location of the object (220:1, 230:1) in the first virtual environment (240:1) into a location in the second virtual environment (240:2) representing a same location in the first shared area, and if not translating 350 the location of the object (220:1, 230:1) relative to the first shared area (250:1A) and the second shared area (250:1B, 260) in the first virtual environment (240:1) into a same position relative to the first shared area (250:2A) and the second shared area (250:2B, 260) in the second virtual environment (240:2).
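A compact sketch, not from the disclosure, of the decision at steps 340-350: inside the first shared area the location is mapped absolutely through the shared coordinate system, otherwise the position is translated relative to the shared areas (the helper callables are invented placeholders):

```python
def corresponding_location(location, is_in_shared_area, map_shared, map_relative):
    if is_in_shared_area(location):      # step 340
        return map_shared(location)      # step 345: same shared coordinates
    return map_relative(location)        # step 350: same relative position

# Example with trivial placeholder mappings:
print(corresponding_location((0.5, 0.5), lambda p: True,
                             lambda p: p, lambda p: p))  # -> (0.5, 0.5)
```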
Figure 4 shows a component view for a software component (or module) arrangement 400 according to some embodiments of the teachings herein. The software component arrangement 400 is adapted to be used in a virtual display arrangement 100 as taught herein.

The software component arrangement 400 comprises a software component for receiving 410 an indication of a first shared area 250:1A, 250:2A, 260 in a first virtual environment 240:1, wherein the first shared area 250:1A, 250:2A, 260 is associated with a first shared coordinate system 255:1A, 255:2A, and a software component for receiving 415 an indication of the first shared area 250:1A, 250:2A, 260 in a second virtual environment 240:2. The software component arrangement 400 also comprises a software component for receiving 420 an indication of a second shared area 250:1B, 250:2B, 260 in the first virtual environment 240:1 and a software component for receiving 425 an indication of the second shared area 250:1B, 250:2B, 260 in the second virtual environment 240:2. The software component arrangement 400 further comprises a software component for receiving 430 an indication of a location of an object 220:1, 230:1 in the first virtual environment 240:1; and a software component for determining 435 a corresponding location in the second virtual environment 240:2 for a corresponding object 230:2, wherein the software component for determining the corresponding location further comprises a software component for determining 440 whether the location for the object 220:1, 230:1 in the first virtual environment 240:1 indicates a position in the first shared area 250:1A, 250:2A, a software component for translating 445 the location of the object 220:1, 230:1 in the first virtual environment 240:1 into a location in the second virtual environment 240:2 representing a same location in the first shared area if so, and a software component for translating 450 the location of the object 220:1, 230:1 relative to the first shared area 250:1A and the second shared area 250:1B, 260 in the first virtual environment 240:1 into a same position relative to the first shared area 250:2A and the second shared area 250:2B, 260 in the second virtual environment 240:2 if not.
The software component arrangement 400 also comprises a software component 460 for implementing or executing further functionality as discussed herein.
Figure 5 shows a component view for an arrangement 500 comprising circuitry for providing a virtual display arrangement 100, 500 according to some embodiments of the teachings herein. The arrangement comprising circuitry is adapted to be used in a virtual display arrangement 100 as taught herein.
The arrangement comprising circuitry 500 of figure 5 comprises a circuitry for receiving 510 an indication of a first shared area 250:1A, 250:2A, 260 in a first virtual environment 240:1, wherein the first shared area 250:1A, 250:2A, 260 is associated with a first shared coordinate system 255:1A, 255:2A, and a circuitry for receiving 515 an indication of the first shared area 250:1A, 250:2A, 260 in a second virtual environment 240:2. The arrangement comprising circuitry 500 also comprises a circuitry for receiving 520 an indication of a second shared area 250:1B, 250:2B, 260 in the first virtual environment 240:1 and a circuitry for receiving 525 an indication of the second shared area 250:1B, 250:2B, 260 in the second virtual environment 240:2. The arrangement comprising circuitry 500 further comprises a circuitry for receiving 530 an indication of a location of an object 220:1, 230:1 in the first virtual environment 240:1; and a circuitry for determining 535 a corresponding location in the second virtual environment 240:2 for a corresponding object 230:2, wherein the circuitry for determining the corresponding location further comprises a circuitry for determining 540 whether the location for the object 220:1, 230:1 in the first virtual environment 240:1 indicates a position in the first shared area 250:1A, 250:2A, a circuitry for translating 545 the location of the object 220:1, 230:1 in the first virtual environment 240:1 into a location in the second virtual environment 240:2 representing a same location in the first shared area if so, and a circuitry for translating 550 the location of the object 220:1, 230:1 relative to the first shared area 250:1A and the second shared area 250:1B, 260 in the first virtual environment 240:1 into a same position relative to the first shared area 250:2A and the second shared area 250:2B, 260 in the second virtual environment 240:2 if not.
The arrangement comprising circuitry 500 also comprises a circuitry 560 for implementing or executing other functionality as discussed herein.
Figure 6 shows a schematic view of a computer-readable medium 120 carrying computer instructions 121 that when loaded into and executed by a controller of a server 210 enables the server 210 to implement the present invention.
The computer-readable medium 120 may be tangible, such as a hard drive or a flash memory, for example a USB memory stick or a cloud server. Alternatively, the computer-readable medium 120 may be intangible, such as a signal carrying the computer instructions enabling the computer instructions to be downloaded through a network connection, such as an internet connection.
In the example of figure 6, a computer-readable medium 120 is shown as being a computer disc 120 carrying computer-readable computer instructions 121, being inserted in a computer disc reader 122. The computer disc reader 122 may be part of a cloud server 123 - or other server - or the computer disc reader may be connected to a cloud server 123 - or other server. The cloud server 123 may be part of the internet or at least connected to the internet. The cloud server 123 may alternatively be connected through a proprietary or dedicated connection. In one example embodiment, the computer instructions are stored at a remote server 123 and downloaded to the memory 102 of the virtual display arrangement 100 for being executed by the controller 101. The computer disc reader 122 may also or alternatively be connected to (or possibly inserted into) a server 210 for transferring the computer-readable computer instructions 121 to a controller of the server (presumably via a memory of the server 210).
Figure 6 shows both the situation when a server 210 receives the computer-readable computer instructions 121 via a server connection and the situation when another server 210 receives the computer-readable computer instructions 121 through a wired interface. This enables computer-readable computer instructions 121 to be downloaded into a server 210, thereby enabling the server 210 to operate according to and implement the invention as disclosed herein.

Claims

1. A virtual environment server (210) comprising a controller (211), wherein the controller (211) is configured to: receive an indication of a first shared area (250:1A, 250:2A, 260) in a first virtual environment (240:1), wherein the first shared area (250:1A, 250:2A, 260) is associated with a first shared coordinate system (255:1A, 255:2A); receive an indication of the first shared area (250:1A, 250:2A, 260) in a second virtual environment (240:2); receive an indication of a second shared area (250:1B, 250:2B, 260) in the first virtual environment (240:1); receive an indication of the second shared area (250:1B, 250:2B, 260) in the second virtual environment (240:2); receive an indication of a location of an object (220:1, 230:1) in the first virtual environment (240:1); and determine a corresponding location in the second virtual environment (240:2) for a corresponding object (230:2), wherein the controller is further configured to determine the corresponding location by: determining whether the location for the object (220:1, 230:1) in the first virtual environment (240:1) indicates a position in the first shared area (250:1A, 250:2A), and if so translating the location of the object (220:1, 230:1) in the first virtual environment (240:1) into a location in the second virtual environment (240:2) representing a same location in the first shared area, and if not translating the location of the object (220:1, 230:1) relative to the first shared area (250:1A) and the second shared area (250:1B, 260) in the first virtual environment (240:1) into a same position relative to the first shared area (250:2A) and the second shared area (250:2B, 260) in the second virtual environment (240:2).
2. The virtual environment server (210) according to claim 1, wherein the controller is further configured to: receive an indication of the first shared coordinate system (255:1A) for the first shared area (250:1A) adapted to the first virtual environment (240:1); determine a location in the first shared area (250:1A) in the first virtual environment (240:1) as coordinates in the first shared coordinate system (255:1A); receive an indication of the first shared coordinate system (255:2A) for the first shared area (250:2A) adapted to the second virtual environment (240:2); and determine a location in the first shared area (250:2A) in the second virtual environment (240:2) as coordinates in the first shared coordinate system (255:2A), wherein the controller is further configured to translate the location of the object (220:1, 230:1) in the first virtual environment (240:1) into a location in the second virtual environment (240:2) representing a same location in the first shared area by assigning the same location the same coordinates in the first shared coordinate system (255:1A, 255:2A) in the first and the second virtual environment (240:1, 240:2).
3. The virtual environment server (210) according to claim 2, wherein the controller is further configured to: receive an indication of a first environment coordinate system (245:1) for the first virtual environment (240:1); and determine a location of the first shared area (250:1A) in the first virtual environment (240:1) as coordinates in the first environment coordinate system (245:1), wherein the controller is further configured to determine a location of an object in the first shared area (250:1A) as coordinates in the first environment coordinate system (245:1) based on the coordinates for the location of the object extending from the coordinates of the location of the first shared area (250:1A) in the first virtual environment (240:1).
4. The virtual environment server (210) according to any previous claim, wherein the controller is further configured to: receive an indication of a movement of the object (220:1, 230:1) in the first virtual environment (240:1); and translate the movement relative to the first shared area (250:1A) and the second shared area (250:1B, 260) in the first virtual environment (240:1) into a relative movement between the first shared area (250:2A) and the second shared area (250:2B, 260) in the second virtual environment (240:2).
5. The virtual environment server (210) according to claims 3 and 4, wherein the indication of the movement comprises start coordinates in the first environment coordinate system (245:1) for a start location for the object (220:1, 230:1) at the first shared area (250:1A) and end coordinates in the first environment coordinate system (245:1) for an end location for the object (220:1, 230:1) at the second shared area (250:1B) in the first virtual environment (240:1), wherein the controller is further configured to: translate the end coordinates in the first environment coordinate system (245:1) into end coordinates in a second environment coordinate system (245:2) based on the location of the second shared surface (250:1B) in the first environment coordinate system (245:1) so that the movement from the end position to the location of the first shared surface is the same in the first and the second environment coordinate system.
6. The virtual environment server (210) according to claim 4 or 5, wherein the controller is further configured to translate the movement relative to the first shared area (250) and the second shared area (250, 260) in the first environment coordinate system (245:1) and the first shared area (250) and the second shared area (250, 260) in the second environment coordinate system (245:2) for the first and second virtual environment respectively, wherein a movement from the second shared area to the first shared area in the first virtual environment is translated as a movement from the second shared area to the first shared area in the second virtual environment, and a movement from inside the first shared area to inside the first shared area in the first virtual environment is translated as a same movement inside the first shared area in the second virtual environment.
7. The virtual environment server (210) according to claims 5 and 6, wherein the controller is further configured to translate the movement relative to the locations of the first shared area (250) and the second shared area (250, 260) in the first environment coordinate system (245:1) and the locations of the first shared area (250) and the second shared area (250, 260) in the second environment coordinate system (245:2) for the first and second virtual environment respectively, wherein the movement from the first shared area to the second shared area in the first environment coordinate system (245:1) of the first virtual environment is translated as the movement from the first shared area to the second shared area in the second environment coordinate system (245:2) of the second virtual environment, and the movement from the start location in the first shared area to the end location in the first shared area in the first virtual environment is translated as a same movement from a start position in the first shared area to an end position in the first shared area in the second virtual environment, wherein the start locations and the end locations in the first shared area are the same in the first shared coordinate system irrespective of virtual environment.
8. The virtual environment server (210) according to claim 4, 5, 6 or 7, wherein the controller is configured to portion the movement into a plurality of partial movements, and to translate the movement by translating each partial movement.
9. The virtual environment server (210) according to claim 8, wherein the controller is configured to translate the movement by determining if the movement is from one of the first shared area and the second shared area to the other one of the first shared area and the second shared area, and if so determine a direction in the second environment coordinate system (245:2) of the movement and translate any partial movement in the first shared area as being in the determined direction.
10. The virtual environment server (210) according to any preceding claim, wherein the first shared coordinate system is absolute.
11. The virtual environment server (210) according to any preceding claim, wherein the first environment coordinate system and the second environment coordinate system are relative.
12. The virtual environment server (210) according to any preceding claim, wherein the second shared area is a shared object (260).
13. The virtual environment server (210) according to any preceding claim, wherein a location of a shared area in a virtual environment is determined based on user input.
14. The virtual environment server (210) according to any preceding claim, wherein the controller is configured to determine that the object (220) being moved is a real life object (220), and in response thereto generate a virtual representation (230:2) of the object (220) to be used in the virtual environment.
15. The virtual environment server (210) according to claim 14, wherein the controller is further configured to generate a virtual marking (230:1) of the real life object (220) to be used in the first virtual environment to indicate that the object (220) is rendered passive.
16. The virtual environment server (210) according to claim 14, wherein the controller is further configured to refrain from displaying the real life object.
17. The virtual environment server (210) according to any previous claim, wherein the first virtual environment (240:1) is associated with a first virtual display arrangement (100:1) and the second virtual environment (240:2) is associated with a second virtual display arrangement (100:2).
18. A system comprising a virtual environment server according to any previous claim.
19. The system according to claim 18, wherein the server is according to claim 17 and wherein the system further comprises the first virtual display arrangement (100:1) and the second virtual display arrangement (100:2).
20. A virtual display arrangement comprising a server according to any of claims 1 to 17.
21. A method for use in a virtual environment server (210), wherein the method comprises: receiving an indication of a first shared area (250:1A, 250:2A, 260) in a first virtual environment (240:1), wherein the first shared area (250:1A, 250:2A, 260) is associated with a first shared coordinate system (255:1A, 255:2A); receiving an indication of the first shared area (250:1A, 250:2A, 260) in a second virtual environment (240:2); receiving an indication of a second shared area (250:1B, 250:2B, 260) in the first virtual environment (240:1); receiving an indication of the second shared area (250:1B, 250:2B, 260) in the second virtual environment (240:2); receiving an indication of a location of an object (220:1, 230:1) in the first virtual environment (240:1); and determining a corresponding location in the second virtual environment (240:2) for a corresponding object (230:2), wherein the method further comprises determining the corresponding location by: determining whether the location for the object (220:1, 230:1) in the first virtual environment (240:1) indicates a position in the first shared area (250:1A, 250:2A), and if so translating the location of the object (220:1, 230:1) in the first virtual environment (240:1) into a location in the second virtual environment (240:2) representing a same location in the first shared area, and if not translating the location of the object (220:1, 230:1) relative to the first shared area (250:1A) and the second shared area (250:1B, 260) in the first virtual environment (240:1) into a same position relative to the first shared area (250:2A) and the second shared area (250:2B, 260) in the second virtual environment (240:2).
22. A computer-readable medium (120) carrying computer instructions (121) that when loaded into and executed by a controller (211) of a virtual environment server (210) enables the virtual environment server (210) to implement the method according to claim 21.
23. A software component arrangement (600) for use in a virtual environment server (210), wherein the software component arrangement (600) comprises: a software component for receiving an indication of a first shared area (250:1A, 250:2A, 260) in a first virtual environment (240:1), wherein the first shared area (250:1A, 250:2A, 260) is associated with a first shared coordinate system (255:1A, 255:2A); a software component for receiving an indication of the first shared area (250:1A, 250:2A, 260) in a second virtual environment (240:2); a software component for receiving an indication of a second shared area (250:1B, 250:2B, 260) in the first virtual environment (240:1); a software component for receiving an indication of the second shared area (250:1B, 250:2B, 260) in the second virtual environment (240:2); a software component for receiving an indication of a location of an object (220:1, 230:1) in the first virtual environment (240:1); and a software component for determining a corresponding location in the second virtual environment (240:2) for a corresponding object (230:2), wherein the software component for determining the corresponding location comprises: a software component for determining whether the location for the object (220:1, 230:1) in the first virtual environment (240:1) indicates a position in the first shared area (250:1A, 250:2A), and a software component for translating the location of the object (220:1, 230:1) in the first virtual environment (240:1) into a location in the second virtual environment (240:2) representing a same location in the first shared area if so, and a software component for translating the location of the object (220:1, 230:1) relative to the first shared area (250:1A) and the second shared area (250:1B, 260) in the first virtual environment (240:1) into a same position relative to the first shared area (250:2A) and the second shared area (250:2B, 260) in the second virtual environment (240:2) if not.
24. A virtual environment server (210, 700) comprising: a circuitry for receiving an indication of a first shared area (250:1A, 250:2A, 260) in a first virtual environment (240:1), wherein the first shared area (250:1A, 250:2A, 260) is associated with a first shared coordinate system (255:1A, 255:2A); a circuitry for receiving an indication of the first shared area (250:1A, 250:2A, 260) in a second virtual environment (240:2); a circuitry for receiving an indication of a second shared area (250:1B, 250:2B, 260) in the first virtual environment (240:1); a circuitry for receiving an indication of the second shared area (250:1B, 250:2B, 260) in the second virtual environment (240:2); a circuitry for receiving an indication of a location of an object (220:1, 230:1) in the first virtual environment (240:1); and a circuitry for determining a corresponding location in the second virtual environment (240:2) for a corresponding object (230:2), wherein the circuitry for determining the corresponding location comprises: a circuitry for determining whether the location for the object (220:1, 230:1) in the first virtual environment (240:1) indicates a position in the first shared area (250:1A, 250:2A), and a circuitry for translating the location of the object (220:1, 230:1) in the first virtual environment (240:1) into a location in the second virtual environment (240:2) representing a same location in the first shared area if so, and a circuitry for translating the location of the object (220:1, 230:1) relative to the first shared area (250:1A) and the second shared area (250:1B, 260) in the first virtual environment (240:1) into a same position relative to the first shared area (250:2A) and the second shared area (250:2B, 260) in the second virtual environment (240:2) if not.
PCT/EP2022/063276 2022-05-17 2022-05-17 A computer software module arrangement, a circuitry arrangement, an arrangement and a method for providing a virtual display for simultaneous display of representations of real life objects in shared surfaces WO2023222194A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2022/063276 WO2023222194A1 (en) 2022-05-17 2022-05-17 A computer software module arrangement, a circuitry arrangement, an arrangement and a method for providing a virtual display for simultaneous display of representations of real life objects in shared surfaces

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2022/063276 WO2023222194A1 (en) 2022-05-17 2022-05-17 A computer software module arrangement, a circuitry arrangement, an arrangement and a method for providing a virtual display for simultaneous display of representations of real life objects in shared surfaces

Publications (1)

Publication Number Publication Date
WO2023222194A1

Family

ID=82020855

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/063276 WO2023222194A1 (en) 2022-05-17 2022-05-17 A computer software module arrangement, a circuitry arrangement, an arrangement and a method for providing a virtual display for simultaneous display of representations of real life objects in shared surfaces

Country Status (1)

Country Link
WO (1) WO2023222194A1 (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220086205A1 (en) * 2020-09-15 2022-03-17 Facebook Technologies, Llc Artificial reality collaborative working environments

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JOHN UNDERKOFFLER ET AL: "Emancipated pixels", COMPUTER GRAPHICS PROCEEDINGS, SIGGRAPH 99, ACM, New York, NY, USA, July 1999 (1999-07-01), pages 385-392, XP058128815, ISBN: 978-0-201-48560-8, DOI: 10.1145/311535.311593 *
LEHMENT NICOLAS H ET AL: "Creating automatically aligned consensus realities for AR videoconferencing", 2014 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY (ISMAR), IEEE, 10 September 2014 (2014-09-10), pages 201 - 206, XP032676190, DOI: 10.1109/ISMAR.2014.6948428 *
MISHA SRA ET AL: "Your Place and Mine: Designing a Shared VR Experience for Remotely Located Users", 2018, XP055480254, Retrieved from the Internet <URL:https://web.media.mit.edu/~sra/papers/ypam-preprint.pdf> [retrieved on 20180531], DOI: 10.1145/3196709.3196788 *

Similar Documents

Publication Publication Date Title
US20220006973A1 (en) Placement of virtual content in environments with a plurality of physical participants
US20220141259A1 (en) Multiuser asymmetric immersive teleconferencing
US9041739B2 (en) Matching physical locations for shared virtual experience
US11800059B2 (en) Environment for remote communication
US9779538B2 (en) Real-time content immersion system
US10013805B2 (en) Control of enhanced communication between remote participants using augmented and virtual reality
US20210281802A1 (en) IMPROVED METHOD AND SYSTEM FOR VIDEO CONFERENCES WITH HMDs
US20120050325A1 (en) System and method for providing virtual reality linking service
WO2013119475A1 (en) Integrated interactive space
US20120092327A1 (en) Overlaying graphical assets onto viewing plane of 3d glasses per metadata accompanying 3d image
US10896322B2 (en) Information processing device, information processing system, facial image output method, and program
CN110537208B (en) Head-mounted display and method
Steptoe et al. Acting rehearsal in collaborative multimodal mixed reality environments
US20220407902A1 (en) Method And Apparatus For Real-time Data Communication in Full-Presence Immersive Platforms
US20200349749A1 (en) Virtual reality equipment and method for controlling thereof
Le et al. Immersive environment for distributed creative collaboration
WO2023222194A1 (en) A computer software module arrangement, a circuitry arrangement, an arrangement and a method for providing a virtual display for simultaneous display of representations of real life objects in shared surfaces
Pan et al. A surround video capture and presentation system for preservation of eye-gaze in teleconferencing applications
US11727645B2 (en) Device and method for sharing an immersion in a virtual environment
US20200225467A1 (en) Method for projecting immersive audiovisual content
WO2017098999A1 (en) Information-processing device, information-processing system, method for controlling information-processing device, and computer program
Le et al. HybridMingler: Towards Mixed-Reality Support for Mingling at Hybrid Conferences
US20230164304A1 (en) Communication terminal device, communication method, and software program
KR102428438B1 (en) Method and system for multilateral remote collaboration based on real-time coordinate sharing
US20240095984A1 (en) System and method of spatial groups in multi-user communication sessions

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22730111

Country of ref document: EP

Kind code of ref document: A1