WO2022177505A1 - Methods relating to virtual reality systems and interactive objects


Info

Publication number
WO2022177505A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
digital twin
environment
landmark
landmarks
Prior art date
Application number
PCT/SG2022/050072
Other languages
French (fr)
Inventor
Ching-Chiuan YEN
Chor Guan TEO
Chin Hong LOW
Choon Ern CHNG
Chong Yunn CHUAH
Junyong Marcus LIN
Yong Jie SIM
Grace Yong Gee LIM
Affirudin Bin KAMARUDIN
Jun Yao Francis LEE
Chee Ming TAN
Chin Han Lim
Han John ZHENG
Original Assignee
National University Of Singapore
Priority date
Filing date
Publication date
Application filed by National University of Singapore
Priority to CN202280019831.6A (published as CN116964638A)
Publication of WO2022177505A1


Classifications

    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/016 Input arrangements with force or tactile feedback as computer generated output to the user
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G06V20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G06T2219/2004 Aligning objects, relative positioning of parts
    • G06T2219/2016 Rotation, translation, scaling
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning

Definitions

  • the present invention relates to virtual reality (VR) systems for interacting with interactive objects, and to calibration methods for VR systems.
  • Generating a virtual object to correspond to a physical object often requires the physical object to be viewed from multiple angles, to define the shape of the surfaces of the virtual object in the VR environment. Each viewpoint then needs to be aligned to ensure the surfaces of the virtual object match those of the physical object. The virtual object then needs to be aligned in the VR environment with the physical object in the real-world environment. For some technologies, a user has to manually adjust the transformation of the virtual model by repeatedly viewing it through a VR headset, determining the error between the lay of the 3D model and the physical object, and adjusting the 3D model to match the physical object. Such manual alignment of the virtual model to be in the exact position of the physical object is tedious.
  • the user must also repeatedly shift their head to multiple perspectives (top, side, front) to correctly calibrate the model in all axes. This process has to be repeated whenever there is a change in the position of the physical object with respect to the VR tracking system. Moreover, human error is introduced during manual alignment by visual inspection.
  • This problem also substantially removes the ability of these VR systems to map multiple objects into the virtual space.
  • a change in viewpoint applies to all objects.
  • a user will typically only be able to view one object to update its position and shape in the virtual environment.
  • when the viewpoint is changed, the position and shape of all other objects are no longer up-to-date.
  • the physical objects are static. This enables the digital twins to be statically placed in the virtual environment. However, this reduces the realism of the VR environment.
  • a virtual reality (VR) system comprising: a VR display for displaying a VR environment to a user; memory comprising a digital twin corresponding to a physical object, the physical object comprising three or more landmarks; a detector; and a processor system, the memory storing instructions that, when executed by the processor system, cause the VR system to: detect a location of each landmark using the detector; match each landmark to a corresponding landmark on the digital twin using the processor system; and process, using the processor system, the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.
  • the processor system may process the corresponding landmarks to orient, position and scale the digital twin in the VR environment by: orienting the digital twin in accordance with an orientation of the physical object relative to the user; and scaling the digital twin in accordance with a position of the user in a scenario displayed in the VR environment.
  • Scaling the digital twin in accordance with a position of the user in a scenario displayed in the VR environment may comprise down-sizing the object and placing the object perceptually further from the user in the VR environment when compared with a distance of the physical object from the user.
  • the VR system may further comprise a controller for receiving position control commands, the VR display receiving a signal from the controller and teleporting the user to a new position in the VR environment in accordance with the signal, wherein the digital twin is re-sized based on the new position of the user.
  • the processor system may orient and position the digital twin by transforming the digital twin using Singular Value Decomposition.
  • the processor system may scale the digital twin using binary search.
  • a non-transitory computer-readable storage medium storing: a digital twin corresponding to a physical object, the physical object comprising three or more landmarks; and instructions that, when executed by a processor system of a virtual reality (VR) system, cause the VR system to: detect a location of each landmark; match each landmark to a corresponding landmark on the digital twin; and process the corresponding landmarks to orient, position and scale the digital twin in a VR environment based on a position of the physical object.
  • an interactive object comprising: a body configured to correspond to a digital twin in a virtual reality (VR) environment displayed to a user by a VR system; a control module configured to control a state of the object, the object having at least two states; wherein an action performed by the user on the object is detected and rendered in the VR environment on the digital twin, and when the action corresponds to a predetermined action, the control module is configured to transition the object from a current state to a next state.
  • the interactive object may further comprise a transmitter and a sensor system, the sensor system detecting the action and sending a signal to the control module, and the control module sending a signal via the transmitter to the VR system if the action corresponds to the predetermined action.
  • the sensor system may comprise at least one of: a potentiometer for measuring a relative rotation between two or more portions of the body; a pressure sensor for measuring pressure applied to a location on the body; and a microphone on or in the body, for detecting one or more of speech from the user, sound from touch interaction between the user and body.
  • the control module or VR system may be configured to combine the signal from the sensor system and actions of the user detected by the VR system, to detect performance of the predetermined action.
  • the interactive object may further comprise a receiver for receiving a control signal from the VR system, the control signal comprising at least one of a scenario and an object state, the control module being configured to control the object in accordance with the control signal.
  • the control module may be configured to implement a scenario to control the object, the scenario comprising: a plurality of states including: an initial state of the object; and at least one further state of the object; and a predetermined action for transitioning between states of the plurality of states, wherein the control module is configured to control the state of the object in accordance with the scenario.
  • the object may further comprise a stimulus system, the stimulus system providing stimulus to the user during interaction between the user and the object.
  • the stimulus system may be controlled by the control module, to deliver the stimulus to the user in accordance with at least one of a state of the object and a scenario involving the object.
  • the stimulus system may comprise at least one of: a haptic system to deliver a haptic stimulus to the user; a speaker system to deliver a sound stimulus to the user; an olfactory system to deliver an olfactory stimulus to the user; and a thermal system to deliver a thermal stimulus to the user.
  • the object may replicate at least part of a living entity, and the haptic system may comprise a pulse generator for generating a pulse at a location of the living entity at which the user should test for the pulse.
  • the change in content may comprise a change in behaviour of the digital twin.
  • the VR system may comprise: memory comprising the digital twin corresponding to the physical object, the physical object comprising three or more landmarks; a detector; and a processor system, the memory storing instructions that, when executed by the processor system, cause the VR system to: detect a location of each landmark using the detector; match each landmark to a corresponding landmark on the digital twin using the processor system; and process, using the processor system, the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.
  • Embodiments of the invention are able to perform alignment automatically after the landmarks have been identified and demarcated.
  • the coordinates of the landmarks of the physical object in world space are stored, and the position of the VR system (or VR display) is known and updated. Therefore, the VR system can determine the new location of the digital twin in the virtual environment, relative to the physical object, by measuring movements of the VR display. As a result, the digital twin only needs to be located once in the virtual environment and will be transformed as the virtual environment is transformed in response to actions from the user.
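The update described above can be sketched as a single change of frame. This is a minimal illustration, assuming poses are available as 4x4 homogeneous transforms; the function name is hypothetical, not taken from the patent:

```python
import numpy as np

# Minimal sketch (hypothetical helper): once the digital twin has been
# located in world space, its pose in the viewer's frame follows from
# the tracked headset pose alone - the physical object never needs to
# be re-acquired. Poses are 4x4 homogeneous transforms in world space.
def view_from_world(headset_pose_world: np.ndarray,
                    twin_pose_world: np.ndarray) -> np.ndarray:
    """Return the twin's pose expressed in the headset's (view) frame."""
    return np.linalg.inv(headset_pose_world) @ twin_pose_world
```

As the user moves, only `headset_pose_world` changes; `twin_pose_world` stays fixed after the one-off calibration.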
  • embodiments of the invention alter the scale of the 3D model automatically to the desired coordinates. Moreover, scaling can be automatic when scanning the physical object using photogrammetry equipment. This avoids the need for special hardware such as a 3D scanner, and minimises human error by reducing human interaction other than for using a detector (e.g. VR controller) to locate or define landmarks of a physical object in a virtual space. This is especially important if the 3D model generation process fails to provide accurate information about the scale. Therefore, embodiments of the invention allow for more accurate physical-virtual mappings for virtual reality experiences with physical embodiments and interactions.
  • Embodiments of the invention are able to calibrate multiple 3D models independently, each with their own transformation changes with respect to the origin of the VR tracking system. Hence, such embodiments allow for multiple concurrent virtual-physical mappings for multiple virtual-physical interactions. Moreover, embodiments of the present invention enable alignment and scaling of multiple objects in virtual space.
  • Embodiments of the methods described herein for calibrating the VR system - e.g. methods for orienting, positioning and scaling a digital twin in a VR environment based on a position of a physical object in a real-world or physical environment - can be employed using only three landmarks or points for alignment. This can be important for objects with complex surface structures, where requiring larger numbers of points for calibration and alignment can result in some of those points being obscured within the complex surface structure.
  • Figure 1 illustrates a method for calibrating a VR system for accurately displaying a digital twin, in accordance with present teachings
  • Figure 2 is a photograph of a three-dimensional (3D) model of a mannequin produced using photogrammetry
  • Figure 3 is a photograph of the 3D model of Figure 2, showing landmarks of the physical object
  • Figure 4 illustrates a position of a detector (controller) when capturing the positions in virtual space of the landmarks of the physical object
  • Figure 5 shows the virtual space, the coordinate system and coordinates of the landmarks in that space
  • Figure 6 illustrates the positions of the landmarks of the physical object and the corresponding landmarks of the digital twin
  • Figure 7 illustrates a VR system for implementing the method of Figure 1
  • Figures 8a, 8b and 8c illustrate deviations in model alignment with deviations from marking landmarks of 1cm, 2cm and 0.5cm, respectively; and Figure 9 illustrates an interactive object in accordance with present teachings.
  • the digital twin can be one of many digital twins displayed in the VR environment, each digital twin being associated with a respective physical object in the physical environment.
  • some present methods enable the scaling, orientation and position to be updated automatically, without needing to reacquire each physical object - i.e. without having to view the physical object between successive updates of the position of the corresponding digital twin in the VR environment.
  • Figure 1 illustrates one such method 100, for calibrating a VR system - i.e. mapping a digital twin to a physical object.
  • the method 100 broadly comprises:
  • 108 orienting, positioning and scaling the digital twin in the VR environment based on the position of the physical object - i.e. the position with respect to a VR display or headset worn by the user.
  • step 102 developing a digital twin of a physical object enables the digital twin to be created once, accurately.
  • the shape, appearance and other properties of the digital twin are then stored in memory. Since the digital twin has been produced and stored, it does not need to be reacquired if the viewpoint of the VR system changes with respect to the physical object. Instead, the position of the digital twin can simply be updated in the VR environment to match the location (i.e. position, orientation and scaling) of the physical object.
  • the digital twin can be rapidly developed using 3D scanning, photogrammetry or other 3D modelling techniques. Such techniques typically acquire only the shape of the physical object. With the exception of the shape of portions of the digital twin, the appearance of the digital twin in the VR environment does not need to reflect the appearance of the physical object.
  • the physical object reflected in the 3D scanned image shown in Figure 2 is the torso 202, head 204 and right arm 206 of a mannequin 200.
  • the digital twin may include the other limbs that are absent from the mannequin or 3D scanned image, clothing and other visible characteristics that differ from the characteristics of the physical object.
  • the physical object and its digital twin should at least include those components (e.g. head and torso) that are required to be interacted with in order to implement and complete the scenario. Therefore, step 102 may involve developing a 3D model from the physical object, and adapting the 3D model to produce the digital twin - e.g. rendering clothes over the 3D model.
  • the physical object, and therefore the 3D model 200, has a number of landmarks 208.
  • a landmark is something at a known location of the physical object 200. Each landmark may constitute one or more visible indicia on a surface of the physical object 200 (per landmarks 208), or a beacon or other device the position of which is detectable by a detector to the desired degree of accuracy.
  • a landmark is a physical feature of the physical object - e.g. a shape feature.
  • the physical feature will usually be readily identifiable by the controller, such as the tip of the nose, base of the torso, bottom of the chin and shoulder points.
  • the physical object should have at least three landmarks. The location of the landmarks can be automatically determined during the scanning process.
  • visible indicia can be identified in images captured during the scanning process and can be incorporated into the digital twin at locations of the digital twin corresponding to the locations of those indicia on the physical object.
  • the location of the landmarks can be manually inserted into the digital twin as shown in Figure 3 - in this sense the landmarks are "of" the object insofar as they correspond to physical locations on the object that can be used to match the location of the object to that of the virtual object.
  • where the locations of the landmarks are manually inserted, it can be desirable for the landmarks to be located at distinctive positions on the physical object - e.g. in embodiments where the physical object is a mannequin or part thereof, distinctive positions can include the nose (e.g. tip), chin, left and/or right shoulder, left and/or right nipple and base of the torso.
  • the manually inserted landmarks 300 of mannequin 302 are identified in Figure 3.
  • the landmarks need not be located at the apices (e.g. nipples, tip of nose) or other distinctive locations of the physical object; instead, the landmarks can be located anywhere on the physical object. It is generally desirable, however, that the landmarks are not disposed in a straight line.
  • Step 104 involves detecting the location of each landmark 210 using a detector. This detection step is used to calibrate the location of the digital twin in virtual space. Therefore, at least three landmarks should be visible or otherwise detectable by the detector.
  • Figure 4 illustrates a detector - controller 400 - the base of which is positioned on the nose of the physical object 402.
  • the position of the controller 400 is known with respect to the VR system - in the various circumstances where relative positions are "known" in the present teachings, it will be understood that those relative positions are known to an acceptable degree of accuracy.
  • the coordinates of the landmarks are then detected relative to the controller 400.
  • the VR system can therefore determine the location of the landmarks relative to the known location of the controller 400. Therefore, step 104 may include detecting the location of each landmark based on a location of each landmark relative to the detector and a location of the detector relative to the VR system, or a processor or other component of the VR system.
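This step reduces to one change of frame. A hedged sketch, assuming the controller's pose is available as a 4x4 transform from the tracking origin and the landmark is touched by a known point on the controller (names and the fixed tip offset are illustrative assumptions):

```python
import numpy as np

# Hedged sketch: the controller's pose relative to the tracking origin
# is known, and the landmark is touched by a known point (e.g. the tip
# of the controller base) expressed in the controller's own frame.
def landmark_world_position(controller_pose: np.ndarray,
                            tip_offset_local: np.ndarray) -> np.ndarray:
    """controller_pose: 4x4 transform, tracking origin -> controller.
    tip_offset_local: (3,) offset of the touching point in the
    controller's frame. Returns (3,) world coordinates of the landmark."""
    return (controller_pose @ np.append(tip_offset_local, 1.0))[:3]
```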
  • the three or more landmarks can be detected in more than one detection step 104.
  • Step 104 may therefore include detecting the location of each landmark of the physical object using the detector (controller 400), and mapping the location of each landmark to a desired location of a corresponding landmark of the digital twin in the VR environment.
  • Figure 5 also shows the coordinate system 502 with respect to which the location of the controller, and therefore of the landmarks of the physical object, is known.
  • Step 106 involves matching each landmark to a corresponding landmark of the digital twin.
  • the matching process involves identifying the corresponding landmark on the digital twin for each landmark of the physical object that was detected at step 104.
  • Step 108 then involves processing the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.
  • Step 108 recognises that when placing the digital twin in the virtual space, as shown in Figure 6, there may be an offset between the locations of the landmarks 600 of the physical objects reproduced in virtual space, and the corresponding landmarks 602 of the digital twins.
  • step 108 can make use of many different processes.
  • the digital twin is rendered at a distance equal to that of the physical object from the VR system - note: references to distances from the VR system will generally be understood to correspond to distances as perceived through the VR headset or display, such that if a user were to remove the VR headset the physical object would appear roughly where the digital twin was last presented in the VR headset.
  • Scaling can be performed by binary search, to identify the location of a corresponding landmark 602 in the VR display required to match a location of a landmark 600 of the physical object in virtual space. Transformation can be done by least-squares fitting or other processes or even manual alignment to minimise the distance between each pair of coordinates (i.e. coordinates of each landmark and corresponding landmark).
  • the transformation is performed using Singular Value Decomposition (SVD) - this involves specifying the coordinates of the landmarks and corresponding landmarks as column matrices (or as row matrices transposed for the transformation) and defining a matrix transformation that converts the locations of the corresponding landmarks to the locations of the landmarks in virtual space.
  • the points in the digital twin that are not demarcated with landmarks can then be transformed using the same matrix transformation.
  • the matrix transform is determined by: providing pairs of coordinates of virtual (corresponding) and real-world landmarks for alignment; using SVD to determine the rotation and translation that give the best match, by iteratively applying changes to the transformation matrix until a predetermined threshold has been met; repeating this process at different scales; and using the scale and alignment that give the minimum error, i.e. the best match.
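The process above can be sketched as follows, under stated assumptions: `best_rigid_transform` is a standard SVD (Kabsch) least-squares fit, and `best_scale` is a ternary search standing in for the binary search mentioned above (the residual is convex in the scale, so either search works). All names here are illustrative, not taken from the patent.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i.
    src, dst: (N, 3) arrays of paired landmark coordinates, N >= 3,
    not all collinear."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

def fit_residual(scale, src, dst):
    """Alignment error after the best rigid fit at a given scale."""
    R, t = best_rigid_transform(scale * src, dst)
    return np.linalg.norm((scale * src) @ R.T + t - dst)

def best_scale(src, dst, lo=0.1, hi=10.0, iters=80):
    """Search the scale that minimises the rigid-fit residual."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if fit_residual(m1, src, dst) < fit_residual(m2, src, dst):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

The same `R`, `t` and scale found from the landmark pairs can then be applied to every other point of the digital twin, per the preceding bullet.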
  • step 108 involves orienting the digital twin in accordance with an orientation of the physical object relative to the user, and scaling the digital twin in accordance with a position of the user in a scenario displayed in the VR environment.
  • the "size" of the virtual object is how big it looks to the user - e.g. the number of pixels it occupies in a VR display.
  • the "scale" of the virtual object is how big the virtual object is in the virtual environment - e.g. relative to other objects in the virtual environment.
  • the "size" may change but the "scale" will generally be fixed. Therefore, when the virtual object appears further away from the user in the virtual environment, its size will be smaller but its scale will remain the same.
  • step 108 involves locating the virtual object, including scaling it, in the virtual environment. This will generally only be performed once, to fix the coordinates, orientation and scale of the virtual object in the virtual environment - assuming the object itself is static in that environment. Movements of the VR system will result in changes in the viewing direction and position of the user. These movements will result in a transformation of the virtual environment, and the virtual object will be similarly transformed in the virtual environment.
  • the scenario may be static - i.e. the digital twin or physical object may have a single state, such as a passive/inactive state.
  • the scenario may instead be evolving. Therefore, scaling the digital twin in accordance with step 108 may involve sizing the digital twin to suit the scenario. For example, while the physical object may be near the user, the digital twin may be rendered at a distance from the user. The digital twin is therefore down-sized based on the distance at which the user should perceive it to be from their current position in the VR environment.
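Under a simple visual-angle model (an assumption for illustration, not spelled out above), the down-sizing factor follows directly from the two distances:

```python
# Illustrative sketch: to make the digital twin appear at a virtual
# distance greater than the user's physical distance to the object,
# shrink it so that it subtends the visual angle it would have at the
# virtual distance. Assumes a simple pinhole/visual-angle model; the
# function name is hypothetical.
def down_size_factor(physical_distance: float,
                     virtual_distance: float) -> float:
    """Render scale applied to a twin co-located with the physical
    object so it is perceived at the (larger) virtual distance."""
    return physical_distance / virtual_distance
```

For example, an object one metre away that should be perceived four metres away would be rendered at a quarter of its calibrated size.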
  • the user may use a controller.
  • the user uses the controller to produce a signal and the VR display receives the signal (which may include a processor system interpreting the signal and mapping it to a signal suitable for actioning by the VR display) and teleports the user to a new position in the VR environment in accordance with the signal.
  • the digital twin can then be resized based on the new position of the user in the VR environment.
  • This teleporting process (i.e. moving between locations in the VR environment without a corresponding movement of the user in the physical environment) enables a user to navigate through a VR environment while remaining stationary, thereby allowing the VR environment to be of any size regardless of the size of the physical environment. It also avoids the need for the user to navigate around obstacles. Obstacles can therefore be placed in the VR environment without corresponding objects being placed in the physical environment.
  • the method 100 may be employed, for example, on a VR system 700 as shown in Figure 7.
  • as shown in the block diagram, the VR system 700 will typically include a desktop computer or laptop.
  • the VR system 700 may instead include a mobile computer device such as a smart phone, a personal data assistant (PDA), a palm-top computer, or multimedia Internet enabled cellular telephone.
  • the VR system 700 includes the following components in electronic communication via a bus 712:
  • a VR display 702 for displaying a VR environment to a user
  • non-volatile (non-transitory) memory 704 comprising (i.e. storing) the digital twin;
  • random access memory (RAM) 706;
  • RAM 706 may store the digital twin
  • transceiver component 710 that includes N transceivers
  • Although the components depicted in Figure 7 represent physical components, Figure 7 is not intended to be a hardware diagram. Thus, many of the components depicted in Figure 7 may be realized by common constructs or distributed among additional physical components. Moreover, it is certainly contemplated that other existing and yet-to-be developed physical components and architectures may be utilized to implement the functional components described with reference to Figure 7.
  • the display 702 generally operates to provide a presentation of content, such as the digital twin or twins and the VR environment more generally, to a user. It may be realized by any of a variety of displays (e.g., CRT, LCD, HDMI, micro projector and OLED displays).
  • non-volatile data storage 704 functions to store (e.g., persistently store) data and executable code, including a virtual textile (data container for texture map and cloth simulation parameters). It may also store the 3D model or digital twin.
  • the executable code in this instance comprises instructions enabling the system 700 to perform the methods disclosed herein, such as that described with reference to Figure 1.
  • the non-volatile memory 704 includes bootloader code, modem software, operating system code, file system code, and code to facilitate the implementation of components well known to those of ordinary skill in the art that, for simplicity, are not depicted nor described.
  • the non-volatile memory 704 is realized by flash memory (e.g., NAND or ONENAND memory), but it is certainly contemplated that other memory types may be utilized as well. Although it may be possible to execute the code from the non-volatile memory 704, the executable code in the non-volatile memory 704 is typically loaded into RAM 706 and executed by one or more of the N processing components 708.
  • the N processing components 708 in connection with RAM 706 generally operate to execute the instructions stored in non-volatile memory 704.
  • the N processing components 708 may include a video processor, modem processor, DSP, graphics processing unit (GPU), and other processing components. It is possible for the N processing components 708 to include a central processing unit (CPU), which executes operations in series. However, to rapidly transform the digital twin with movements of the VR display, a GPU may be provided.
  • when the instructions stored in memory 704 (or 706) are executed by the processing system 708, they cause the VR system 700 to detect a location of each landmark using the detector 720, match each landmark to a corresponding landmark on the digital twin using the processor system 708, and process, also using the processor system 708, the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.
  • the transceiver component 710 includes M transceiver chains, which may be used for communicating with external devices via wireless networks 716.
  • Each of the M transceiver chains may represent a transceiver associated with a particular communication scheme.
  • each transceiver may correspond to protocols that are specific to local area networks, cellular networks (e.g., a CDMA network, a GPRS network, a UMTS network), and other types of communication networks.
  • Reference numeral 718 indicates that the VR system 700 may include physical buttons, as well as virtual buttons such as those that would be displayed in the VR environment. Moreover, the VR system 700 may communicate with other computer systems or data sources over network 716.
  • the VR system 700 also includes a detector, presently embodied by controller 720, for detecting landmarks.
  • controller 720 may also be used to navigate through the VR environment.
  • Non-transitory computer-readable medium 704 includes both computer storage medium and communication medium including any medium that facilitates transfer of a computer program from one place to another.
  • a storage medium may be any available medium that can be accessed by a computer, such as a USB drive, solid state hard drive or hard disk.
  • the VR system 700 can robustly implement the method 100, to calibrate the digital twin (or digital twins where multiple physical objects are rendered) in the VR environment.
  • the alignment process was repeatedly performed whilst randomizing the starting position, rotation and scale of the model (mimicking variations in model generation process) to be aligned as well as adding noise to the calibration markers (mimicking variations in taking measurements of the physical landmarks).
  • the system should have an alignment tolerance of 1cm for the user to feel that the physical object and virtual object (digital twin) are aligned.
  • the object 900 includes a body 902 and a control module 904 shown in broken lines as it is internal of the body 902 and in communication with a VR system via cable 906. In other embodiments, the object 900 may not be in communication with a VR system or may communicate wirelessly.
  • the body 902 is configured to correspond to a digital twin in a virtual reality (VR) environment displayed to a user by a VR system. This correspondence may be achieved by the calibration method 100.
  • the control module 904 is configured to control a state of the object 900.
  • the state of the object 900 is defined by reference to its behaviour - the object 900 has at least two states. For example, the object may generate an audible noise in one state and be silent in another state.
  • an action performed by the user on the object 900 is detected and rendered in the VR environment on the digital twin - e.g. the hands of the user performing an action on the physical object will be detected by the VR system or a sensor system of the object 900, and rendered in the VR environment.
  • the control module 904 is configured to transition the object 900 from a current state to a next state.
  • the object 900 can therefore change state according to a pre-programmed sequence. This enables the user to interact with a digital twin by performing corresponding actions on the object 900.
  • the interactive object 900 will generally include a transmitter (e.g. transceiver 710 of Figure 7), presently forming an interface 908 between the cable 906 and the object 900, and a sensor system 910.
  • the sensor system 910 detects the action and sends a signal to the control module 904.
  • the control module 904 sends a signal via the transmitter 908 to the VR system (e.g. system 700) corresponding to the action detected by the sensor system 910.
  • the control module 904 may also determine if the action corresponds to the predetermined action, and only send the signal, via the transmitter 908 to the VR system if the action corresponds to the predetermined action.
  • the sensor system 910 and control module 904 are shown as separate components, they may form part of the same component or components.
  • the sensor system may determine whether the predetermined action or relevant portion thereof has been performed.
  • the predetermined action may involve the application of pressure for a predetermined period, to a particular region of the physical/virtual object.
  • the sensor system may include a pressure sensor located at that particular region and may time the duration of application of pressure to that region. If the pressure is applied for the requisite duration, the sensor system (or control module) generates a signal specifying that the predetermined action has been performed.
  • the sensor 910 may comprise a pressure sensor positioned in the vicinity of the wrist of the right arm of the mannequin.
  • the pressure sensor may measure the pressure applied by the user to the pressure sensor - when simulating checking a pulse - and send a signal to the control module 904.
  • the control module may determine if a pressure has been applied to the sensor, if the user has maintained the pressure for a sufficient period, or if the applied pressure is within a particular desired pressure range, and so on. If the action (application of pressure by the user to the pressure sensor) accords with a predetermined action that may be stored in memory accessible by the control module 904, the control module 904 sends a signal to the VR system and transitions the object 900 to the next state - e.g. activating an audio signal to replicate shallow breathing that the user then needs to listen to. This ensures the user follows a predetermined sequence of actions, or a 'scenario', when interacting with the object.
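The pressure-based check described above can be sketched as follows; the pressure range, hold duration, state names and callback interface are illustrative assumptions, not details taken from the description:

```python
# Illustrative sketch of the control module's pulse-check logic. The
# pressure range, required hold duration and state names are assumptions.
REQUIRED_KPA = (2.0, 6.0)   # assumed acceptable pressure range
REQUIRED_SECONDS = 5.0      # assumed duration the pressure must be held

class ControlModule:
    def __init__(self, send_to_vr):
        self.state = "awaiting_pulse_check"
        self.send_to_vr = send_to_vr  # callback standing in for transmitter 908
        self._hold_started = None

    def on_pressure_sample(self, kpa, now):
        """Called by the sensor system 910 with each pressure reading."""
        if self.state != "awaiting_pulse_check":
            return
        lo, hi = REQUIRED_KPA
        if lo <= kpa <= hi:
            if self._hold_started is None:
                self._hold_started = now
            elif now - self._hold_started >= REQUIRED_SECONDS:
                # Predetermined action performed: signal the VR system and
                # transition to the next state (e.g. shallow-breathing audio).
                self.send_to_vr({"event": "pulse_checked"})
                self.state = "shallow_breathing"
        else:
            self._hold_started = None  # pressure lost or out of range
```

An out-of-range reading resets the timer, so the user must hold steady pressure for the full duration before the state transition occurs.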
  • the sensor system can employ any desired sensors.
  • it may include one or more of: a potentiometer for measuring a relative rotation between two or more portions of the body - e.g. in the mannequin example the object may be static on approach by the user until the user rotates one portion of the body relative to another, whereupon the VR system and/or control module 904 updates the state of the object to "breathing"; a pressure sensor for measuring pressure applied to a location on the body; and a microphone on or in the body, for detecting one or more of speech from the user, sound from touch interaction between the user and body - this can be used to determine if the user is speaking to the object 900 or, if coupled with a natural language processing system, to determine what the user is saying to the object 900 - e.g. whether the user is asking a question prescribed when triaging a patient simulated using the object 900.
  • the control module 904 or VR system 700 is configured to combine the signal from the sensor system 910 and actions of the user, to detect performance of the predetermined action. This checks whether all relevant components of an action have been performed. For example, if a user is applying pressure to a pulse using their elbow, images captured by the VR system (or a camera in the object 900) may be used to identify that the user's hands are not correctly positioned on the object 900. Therefore, although the pressure may be correctly applied, the predetermined action has not been performed.
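A minimal sketch of this combination, assuming the VR system supplies tracked hand positions and a hypothetical proximity tolerance:

```python
# Illustrative combination of the sensor signal with the VR system's view
# of the user's hands: the predetermined action counts as performed only
# when the pressure reading is correct AND a hand is near the sensor site.
# The tolerance value and tuple-based positions are assumptions.
def predetermined_action_performed(pressure_ok, hand_positions, sensor_site,
                                   tol=0.05):
    hands_ok = any(
        sum((h - s) ** 2 for h, s in zip(hand, sensor_site)) ** 0.5 <= tol
        for hand in hand_positions
    )
    return pressure_ok and hands_ok
```

In the elbow example above, the pressure reading alone would pass, but with no tracked hand near the wrist the combined check rejects the action.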
  • transmitter 908 may be part of a transceiver and therefore comprise a receiver for receiving a control signal from the VR system.
  • the control signal can be used to indicate to the control module 904 that the object 900 can transition from its current state to its next state.
  • the control signal may also include a scenario or an object state. In either case, the control module 904 is configured to control the object 900 in accordance with the control signal.
  • a scenario will include a plurality of states.
  • the plurality of states will include an initial state of the object 900, that defines the starting condition - i.e. the state of the object when the user first interacts with it - and one or more further states.
  • the object 900 may revisit the initial state.
  • a scenario may simulate triaging multiple victims in an accident.
  • the object 900 can behave as necessary to simulate the behaviour of each victim, in the order that those victims are approached by the user.
  • the object 900 may transition from its current state to the initial state to commence behaving like the next victim visited by the user.
  • control module 904 controls the state (i.e. the behaviour) of the object in accordance with the scenario received from the VR system.
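A scenario of this kind can be sketched as a table of states plus the predetermined action that advances each transition; the state and action names below are illustrative assumptions:

```python
# Illustrative scenario: an initial state, further states, and the
# predetermined action required for each transition. Names are assumptions.
SCENARIO = {
    "initial": "unresponsive",
    "transitions": {
        ("unresponsive", "check_pulse"): "shallow_breathing",
        ("shallow_breathing", "clear_airway"): "breathing",
    },
}

class ScenarioController:
    def __init__(self, scenario):
        self.scenario = scenario
        self.state = scenario["initial"]

    def on_action(self, action):
        """Advance only if the detected action matches the scenario."""
        nxt = self.scenario["transitions"].get((self.state, action))
        if nxt is not None:
            self.state = nxt
        return self.state

    def next_victim(self):
        """Revisit the initial state to begin simulating the next victim."""
        self.state = self.scenario["initial"]
```

Out-of-order actions leave the state unchanged, which enforces the predetermined sequence; `next_victim` shows the revisiting of the initial state when one object simulates several victims in turn.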
  • the object 900 also includes a stimulus system 912.
  • the stimulus system 912 provides stimulus to the user during interaction between the user and the object 900.
  • the stimulus system 912 and the sensor system 910 each provide the object 900 with active behaviour and an ability to respond to the user, rather than being a passive object.
  • the stimulus system 912 is controlled by the control module 904 to deliver the stimulus to the user in accordance with at least one of a state of the object and a scenario involving the object.
  • the stimulus system includes a module 912 at the wrist of the object 900.
  • the module 912 may include a servomotor that simulates the pulse of a victim, and the sensor system 910 can determine if the user is applying pressure at the correct location, and/or with the correct force, to accurately detect the pulse. If the user checks the pulse correctly, the object 900 may be transitioned to its next state. If the user fails to check the pulse correctly, the object 900 may not transition or, after a predetermined period of time, the simulation may exit.
  • the stimulus system can include any suitable stimuli, such as a haptic system to deliver a haptic stimulus to the user (e.g. the servomotor example, or a compressor system for simulating respiratory function), a speaker system to deliver a sound stimulus to the user, an olfactory system to deliver an olfactory stimulus to the user, and a thermal system to deliver a thermal stimulus to the user.
  • the VR system 700 may interact with any number of interactive objects 900. Moreover, the VR system 700 may change content presented to the user in the VR environment in response to detection of the predetermined action. That change in content may be a change in behaviour of the digital twin - e.g. if the user clears the airway of the physical object 900, the digital twin may visibly start breathing.
  • the active objects afford interaction between a user and the physical environment, where that interaction maps to a scenario or state that is presented in the VR environment.
  • Embodiments of such objects can, when interacting with a VR system, enable heightened interaction and scenario mapping than has previously been achievable.
  • the objects and VR system can produce a more realistic simulation in a VR environment than has previously been achievable.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Described is a virtual reality (VR) system. The VR system comprises a VR display for displaying a VR environment to a user, memory comprising a digital twin corresponding to a physical object, the physical object comprising three or more landmarks, a detector, and a processor system. The memory stores instructions that, when executed by the processor system, cause the VR system to: detect a location of each landmark using the detector; match each landmark to a corresponding landmark on the digital twin using the processor system; and process, using the processor system, the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.

Description

METHODS RELATING TO VIRTUAL REALITY SYSTEMS AND INTERACTIVE OBJECTS
Technical Field
The present invention relates to virtual reality (VR) systems for interacting with interactive objects, and calibration methods for VR systems.
Background
There have been many studies relating to tactile stimulation in a virtual environment. To improve tactile feedback in a virtual environment, and to enable a user to see objects in a virtual environment, these studies have typically involved the virtual reproduction of physical objects in the virtual environment (i.e. digital twinning). The static physical presence of the physical object provides a tactile presence perceived in the virtual environment.
A VR simulation involving a physical embodiment to recreate the tactile feel of a digital twin (also referred to as a virtual object or 3D model) in virtual space requires accurate mapping between the physical embodiment and the digital twin. Broadly, there are a number of problems encountered in this mapping process. One issue is that a 3D model of the physical object has to be generated; another is that the 3D model has to be placed in the virtual environment and mapped accurately to the physical object.
Generating a virtual object to correspond to a physical object often requires the physical object to be viewed from multiple angles, to define the shape of the surfaces of the virtual object in the VR environment. Each viewpoint then needs to be aligned to ensure the surfaces of the virtual object match those of the physical object. The virtual object then needs to be aligned in the VR environment with the physical object in the real-world environment. For some technologies, a user has to manually adjust the transformation of the virtual model by repeatedly viewing it through a VR headset, determining the error between the lay of the 3D model and the physical object, and adjusting the 3D model to match the physical object. Such manual alignment of the virtual model to be in the exact position of the physical object is tedious. Additionally, the user must also repeatedly shift their head to multiple perspectives (top, side, front) to correctly calibrate the model in all axes. This process has to be repeated whenever there is a change of the position of the physical object with respect to the VR tracking system. Moreover, human error is introduced during manual alignment by visual inspection.
This problem also substantially removes the ability of these VR systems to map multiple objects into the virtual space. A change in viewpoint applies to all objects. However, a user will typically only be able to view one object to update its position and shape in the virtual environment. When viewing an object from multiple angles, the viewpoint is changed. So the position and shape of all other objects are now no longer up-to-date.
In addition, to improve predictability of and ease of alignment of physical objects and digital twins, the physical objects are static. This enables the digital twins to be statically placed in the virtual environment. However, this reduces the realism of the VR environment.
It is desirable for there to be a method and system for producing a digital twin of a physical object, and to improve VR simulations by improving the physical objects that afford those simulations.
Summary
Disclosed herein is a virtual reality (VR) system comprising: a VR display for displaying a VR environment to a user; memory comprising a digital twin corresponding to a physical object, the physical object comprising three or more landmarks; a detector; and a processor system, the memory storing instructions that, when executed by the processor system, cause the VR system to: detect a location of each landmark using the detector; match each landmark to a corresponding landmark on the digital twin using the processor system; and process, using the processor system, the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.
The processor system may process the corresponding landmarks to orient, position and scale the digital twin in the VR environment by: orienting the digital twin in accordance with an orientation of the physical object relative to the user; and scaling the digital twin in accordance with a position of the user in a scenario displayed in the VR environment.
Scaling the digital twin in accordance with a position of the user in a scenario displayed in the VR environment may comprise down-sizing the object and placing the object perceptually further from the user in the VR environment when compared with a distance of the physical object from the user. The VR system may further comprise a controller for receiving position control commands, the VR display receiving a signal from the controller and teleporting the user to a new position in the VR environment in accordance with the signal, wherein the digital twin is re-sized based on the new position of the user.
The processor system may orient and position the digital twin by transforming the digital twin using Singular Value Decomposition.
The processor system may scale the digital twin using binary search.
Also disclosed is a non-transitory computer-readable storage medium storing: a digital twin corresponding to a physical object, the physical object comprising three or more landmarks; and instructions that, when executed by a processor system of a virtual reality (VR) system, cause the VR system to: detect a location of each landmark; match each landmark to a corresponding landmark on the digital twin; and process the corresponding landmarks to orient, position and scale the digital twin in a VR environment based on a position of the physical object.
Also disclosed is an interactive object comprising: a body configured to correspond to a digital twin in a virtual reality (VR) environment displayed to a user by a VR system; a control module configured to control a state of the object, the object having at least two states; wherein an action performed by the user on the object is detected and rendered in the VR environment on the digital twin, and when the action corresponds to a predetermined action, the control module is configured to transition the object from a current state to a next state.
The interactive object may further comprise a transmitter and a sensor system, the sensor system detecting the action and sending a signal to the control module, and the control module sending a signal via the transmitter to the VR system if the action corresponds to the predetermined action. The sensor system may comprise at least one of: a potentiometer for measuring a relative rotation between two or more portions of the body; a pressure sensor for measuring pressure applied to a location on the body; and a microphone on or in the body, for detecting one or more of speech from the user, sound from touch interaction between the user and body.
The control module or VR system may be configured to combine the signal from the sensor system and actions of the user detected by the VR system, to detect performance of the predetermined action.
The interactive object may further comprise a receiver for receiving a control signal from the VR system, the control signal comprising at least one of a scenario and an object state, the control module being configured to control the object in accordance with the control signal.
The control module may be configured to implement a scenario to control the object, the scenario comprising: a plurality of states including: an initial state of the object; and at least one further state of the object; and a predetermined action for transitioning between states of the plurality of states, wherein the control module is configured to control the state of the object in accordance with the scenario.
The object may further comprise a stimulus system, the stimulus system providing stimulus to the user during interaction between the user and the object. The stimulus system may be controlled by the control module, to deliver the stimulus to the user in accordance with at least one of a state of the object and a scenario involving the object. The stimulus system may comprise at least one of: a haptic system to deliver a haptic stimulus to the user; a speaker system to deliver a sound stimulus to the user; an olfactory system to deliver an olfactory stimulus to the user; and a thermal system to deliver a thermal stimulus to the user. The object may replicate at least part of a living entity, and the haptic system may comprise a pulse generator for generating a pulse at a location of the living entity at which the user should test for the pulse.
Also disclosed herein is a virtual reality (VR) system for interacting with one or more interactive objects as described above, comprising: a VR display to display the VR environment to the user; and at least one processor for changing content presented to the user in the VR environment in response to detection of the predetermined action.
The change in content may comprise a change in behaviour of the digital twin.
The VR system may comprise: memory comprising the digital twin corresponding to the physical object, the physical object comprising three or more landmarks; a detector; and a processor system, the memory storing instructions that, when executed by the processor system, cause the VR system to: detect a location of each landmark using the detector; match each landmark to a corresponding landmark on the digital twin using the processor system; and process, using the processor system, the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.
Thus, the VR systems described above, and the methods implemented by them, may be combined in any manner that achieves one or more of the functions taught herein.
Embodiments of the invention are able to perform alignment automatically after the landmarks have been identified and demarcated. The coordinates of the landmarks of the physical object in world space are stored, and the position of the VR system (or VR display) is known and updated. Therefore, the VR system can determine the new location of the digital twin in the virtual environment, relative to the physical object, by measuring movements of the VR display. Therefore, the digital twin only needs to be located once in the virtual environment and will be transformed as the virtual environment is transformed in response to actions from the user.
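The bookkeeping described above can be sketched with homogeneous transforms; the 4x4 matrix representation and numpy usage are assumptions, not details from the description:

```python
# Sketch of locating the digital twin once in world space: thereafter only
# the tracked pose of the VR display changes, and the twin's pose relative
# to the display is recomputed from it, without re-detecting the object.
import numpy as np

def pose(translation):
    """Build a translation-only homogeneous transform (rotation omitted
    for brevity)."""
    m = np.eye(4)
    m[:3, 3] = translation
    return m

TWIN_WORLD = pose([0.0, 0.0, 2.0])  # stored once, at calibration

def twin_in_display_space(headset_world):
    # world -> display is the inverse of the tracked headset pose
    return np.linalg.inv(headset_world) @ TWIN_WORLD

before = twin_in_display_space(pose([0.0, 0.0, 0.0]))
after = twin_in_display_space(pose([0.0, 0.0, 1.0]))
```

As the user steps 1 m towards the object, the twin is rendered 1 m closer in display space with no further measurement of the physical landmarks.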
In addition, embodiments of the invention alter the scale of the 3D model automatically to the desired coordinates. Moreover, scaling can be automatic when scanning the physical object using photogrammetry equipment. This avoids the need for special hardware such as a 3D scanner, and minimises human error by reducing human interaction other than for using a detector (e.g. VR controller) to locate or define landmarks of a physical object in a virtual space. This is especially important if the 3D model generation process fails to provide accurate information about the scale. Therefore, embodiments of the invention allow for more accurate physical-virtual mappings for virtual reality experiences with physical embodiments and interactions.
In addition, creating accurate large-scale 3D models is challenging and 3D models of individual elements are often provided as a substitute. Embodiments of the invention are able to calibrate multiple 3D models independently, each with their own transformation changes with respect to the origin of the VR tracking system. Hence, such embodiments allow for multiple concurrent virtual-physical mappings for multiple virtual-physical interactions. Moreover, embodiments of the present invention enable alignment and scaling of multiple objects in virtual space.
Embodiments of the methods described herein for calibrating the VR system - e.g. methods for orienting, positioning and scaling a digital twin in a VR environment based on a position of a physical object in a real-world or physical environment - can be employed using only three landmarks or points for alignment. This can be important for objects with complex surface structures, where requiring larger numbers of points for calibration and alignment can result in some of those points being obscured within the complex surface structure.
Brief description of the drawings
Embodiments of the present invention will now be described, by way of non-limiting example, with reference to the drawings in which:
Figure 1 illustrates a method for calibrating a VR system for accurately displaying a digital twin, in accordance with present teachings;
Figure 2 is a photograph of a three-dimensional (3D) model of a mannequin produced using photogrammetry;
Figure 3 is a photograph of the 3D model of Figure 2, showing landmarks of the physical object;
Figure 4 illustrates a position of a detector (controller) when capturing the positions in virtual space of the landmarks of the physical object;
Figure 5 shows the virtual space, the coordinate system and coordinates of the landmarks in that space;
Figure 6 illustrates the positions of the landmarks of the physical object and the corresponding landmarks of the digital twin;
Figure 7 illustrates a VR system for implementing the method of Figure 1;
Figures 8a, 8b and 8c illustrate deviations in model alignment with deviations from marking landmarks of 1cm, 2cm and 0.5cm, respectively; and
Figure 9 illustrates an interactive object in accordance with present teachings.
Detailed description
Described herein are various methods, and systems that implement those methods, for calibrating the alignment between a physical object and its digital twin rendered in a VR environment. The digital twin can be one of many digital twins displayed in the VR environment, each digital twin being associated with a respective physical object in the physical environment. In such embodiments where there are multiple digital twins concurrently displayed, some present methods enable the scaling, orientation and position to be updated automatically, without needing to reacquire each physical object - i.e. without having to view the physical object between successive updates of the position of the corresponding digital twin in the VR environment.
Figure 1 illustrates one such method 100, for calibrating a VR system - i.e. mapping a digital twin to a physical object. The method 100 broadly comprises:
102: developing a digital twin of a physical object, the physical object having at least three landmarks of known position on or in the physical object;
104: [once the 3D model has been developed] detecting the location of each landmark using a detector;
106: matching each landmark detected at step 104 to a corresponding landmark on the 3D model - hereinafter referred to as a "digital twin"; and
108: orienting, positioning and scaling the digital twin in the VR environment based on the position of the physical object - i.e. the position with respect to a VR display or headset worn by the user.
For step 102, developing a digital twin of a physical object enables the digital twin to be created once, accurately. The shape, appearance and other properties of the digital twin are then stored in memory. Since the digital twin has been produced and stored, it does not need to be reacquired if the viewpoint of the VR system changes with respect to the physical object. Instead, the position of the digital twin can simply be updated in the VR environment to match the location (i.e. position, orientation and scaling) of the physical object.
The digital twin can be rapidly developed using 3D scanning, photogrammetry or other 3D modelling techniques. Such techniques typically acquire only the shape of the physical object. With the exception of the shape of portions of the digital twin, the appearance of the digital twin in the VR environment does not need to reflect the appearance of the physical object. For example, the physical object reflected in the 3D scanned image shown in Figure 2 is the torso 202, head 204 and right arm 206 of a mannequin 200.
The digital twin may include the other limbs that are absent from the mannequin or 3D scanned image, clothing and other visible characteristics that differ from the characteristics of the physical object. Where the VR system implements a scenario as discussed below, the physical object and its digital twin should at least include those components (e.g. head and torso) that are required to be interacted with in order to implement and complete the scenario. Therefore, step 102 may involve developing a 3D model from the physical object, and adapting the 3D model to produce the digital twin - e.g. rendering clothes over the 3D model.
The physical object, and therefore the 3D model 200, has a number of landmarks 208. A landmark is something at a known location of the physical object 200. Each landmark may constitute one or more visible indicia on a surface of the physical object 200 (per landmarks 208), or a beacon or other device the position of which is detectable by a detector to the desired degree of accuracy. In other embodiments, a landmark is a physical feature of the physical object - e.g. a shape feature. The physical feature will usually be readily identifiable by the controller, such as the tip of the nose, base of the torso, bottom of the chin and shoulder points. For the present calibration method, the physical object should have at least three landmarks. The location of the landmarks can be automatically determined during the scanning process. For example, visible indicia can be identified in images captured during the scanning process and can be incorporated into the digital twin at locations of the digital twin corresponding to the locations of those indicia on the physical object. Alternatively, the locations of the landmarks can be manually inserted into the digital twin as shown in Figure 3 - in this sense the landmarks are "of" the object insofar as they correspond to physical locations on the object that can be used to match the location of the object to that of the virtual object. If the locations of the landmarks are manually inserted, it can be desirable for the landmarks to be located at distinctive positions on the physical object - e.g. in embodiments where the physical object is a mannequin or part thereof, distinctive positions can include the nose (e.g. tip), chin, left and/or right shoulder, left and/or right nipple and base of the torso. The manually inserted landmarks 300 of mannequin 302 are identified in Figure 3.
With the exception of increasing the ease of manual insertion, it is not necessary for the landmarks to be located at the apices (e.g. nipples, tip of nose) or other distinctive locations of the physical object. Instead, the landmarks can be located anywhere in the physical object. It is generally desirable, however, that the landmarks are not disposed in a straight line.
After creation of the digital twin in accordance with step 102, the location of the physical object relative to the VR system needs to be determined. Step 104 involves detecting the location of each landmark 210 using a detector. This detection step is used to calibrate the location of the digital twin in virtual space. Therefore, at least three landmarks should be visible or otherwise detectable by the detector.
Figure 4 illustrates a detector - controller 400 - the base of which is positioned on the nose of the physical object 402. The position of the controller 400 is known with respect to the VR system - in the various circumstances where relative positions are "known" in the present teachings, it will be understood that those relative positions are known to an acceptable degree of accuracy. The coordinates of the landmarks are then detected relative to the controller 400. The VR system can therefore determine the location of the landmarks relative to the known location of the controller 400. Therefore, step 104 may include detecting the location of each landmark based on a location of each landmark relative to the detector and a location of the detector relative to the VR system, or a processor or other component of the VR system.
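A minimal sketch of this coordinate chain, assuming the controller pose is tracked as a world-space position plus a 3x3 rotation matrix (a representation not specified in the description):

```python
# Illustrative mapping of a landmark measured relative to the detector
# (controller) into the VR system's world frame. The rotation-matrix pose
# representation is an assumption.
import numpy as np

def landmark_world(controller_pos, controller_rot, landmark_local):
    """controller_rot: 3x3 rotation of the controller in world space."""
    return (np.asarray(controller_pos)
            + np.asarray(controller_rot) @ np.asarray(landmark_local))
```

With the controller base resting on the nose (Figure 4), a landmark at the controller origin simply inherits the controller's tracked world position; offset landmarks are rotated into the world frame first.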
To reduce error, it can be desirable for at least three landmarks to be detectable from a single position of the detector. However, provided the location of the detector is known with respect to a known coordinate system, and the locations of the landmarks are detectable relative to the detector, then the three or more landmarks can be detected in more than one detection step 104.
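The two-stage localisation described above - each landmark measured relative to the detector, and the detector known relative to the VR system - amounts to a change of coordinate frame. A minimal sketch, assuming the detector pose is available as a rotation matrix and position vector (the function and variable names are illustrative, not from the source):

```python
import numpy as np

def landmark_world_positions(detector_rotation, detector_position, local_offsets):
    """Map landmark positions measured relative to the detector into the
    VR system's world coordinate frame: p_world = R @ p_local + t."""
    R = np.asarray(detector_rotation, float)   # 3x3, detector orientation
    t = np.asarray(detector_position, float)   # (3,), detector position
    return np.asarray(local_offsets, float) @ R.T + t

# Readings taken from two different detector positions (i.e. more than one
# detection step) land in the same world frame, so they can be combined.
```

Because every reading is expressed in the one world frame, the three or more landmarks need not all be visible from a single detector position, as noted above.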
Since the VR system knows the virtual controller's position (i.e. position of the detector in virtual space - the VR environment) the coordinates of the landmarks of the physical object can be demarcated in virtual space. The landmarks demarcated in virtual space are therefore the desired locations for corresponding landmarks on the digital twin. Step 104 may therefore include detecting the location of each landmark of the physical object using the detector (controller 400), and mapping the location of each landmark to a desired location of a corresponding landmark of the digital twin in the VR environment.
The desired locations of the corresponding landmarks of the digital twin are indicated by reference numeral 500 in Figure 5. Figure 5 also shows the coordinate system 502 with respect to which the location of the controller, and therefore of the landmarks of the physical object, is known.
Step 106 involves matching each landmark to a corresponding landmark of the digital twin. The matching process involves identifying the corresponding landmark on the digital twin for each landmark of the physical object that was detected at step 104. Step 108 then involves processing the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.
Step 108 recognises that when placing the digital twin in the virtual space, as shown in Figure 6, there may be an offset between the locations of the landmarks 600 of the physical object reproduced in virtual space, and the corresponding landmarks 602 of the digital twin. To correct for this offset, step 108 can make use of many different processes. In its simplest form, the digital twin is rendered at a distance equal to that of the physical object from the VR system - note: references to distances from the VR system will generally be understood to correspond to distances as perceived through the VR headset or display, such that if a user were to remove the VR headset the physical object would appear roughly where the digital twin was last presented in the VR headset. This involves scaling the digital twin based on a position of the user (and thus of the VR system) relative to the physical object and transforming - i.e. rotating and positioning - the digital twin to match the orientation and position of the physical object.
Scaling can be performed by binary search, to identify the location of a corresponding landmark 602 in the VR display required to match a location of a landmark 600 of the physical object in virtual space. Transformation can be done by least-squares fitting or other processes, or even manual alignment, to minimise the distance between each pair of coordinates (i.e. coordinates of each landmark and corresponding landmark). In the present embodiment, the transformation is performed using Singular Value Decomposition (SVD) - this involves specifying the coordinates of the landmarks and corresponding landmarks as column matrices (or as row matrices transposed for the transformation) and defining a matrix transformation that converts the locations of the corresponding landmarks to the locations of the landmarks in virtual space. The points in the digital twin that are not demarcated with landmarks can then be transformed using the same matrix transformation. The matrix transformation is determined by: providing pairs of coordinates of virtual (corresponding) and real world landmarks for alignment; using SVD to determine the rotation and translation that give the best match, by iteratively applying changes to the transformation matrix until a predetermined threshold has been met; repeating this process at different scales; and using the scale and alignment that gives the minimum error, i.e. the best match.
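The alignment procedure described above can be sketched as follows. Note that this sketch uses the closed-form Kabsch/SVD solution for the rotation and translation at each candidate scale, rather than the iterative thresholded refinement described; it is an illustration under those assumptions, not the implementation itself:

```python
import numpy as np

def fit_rigid(src, dst):
    """Best-fit rotation R and translation t such that dst ~ src @ R.T + t
    (Kabsch algorithm, via SVD of the cross-covariance matrix)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def align_with_scale(twin_pts, world_pts, scales):
    """Repeat the rigid fit at each candidate scale and keep the scale and
    pose giving the minimum RMS landmark error - i.e. the best match."""
    twin_pts = np.asarray(twin_pts, float)
    world_pts = np.asarray(world_pts, float)
    best = None
    for s in scales:
        R, t = fit_rigid(s * twin_pts, world_pts)
        resid = (s * twin_pts) @ R.T + t - world_pts
        err = float(np.sqrt((resid ** 2).sum(axis=1).mean()))
        if best is None or err < best[0]:
            best = (err, s, R, t)
    return best  # (rms_error, scale, rotation, translation)
```

The same (scale, rotation, translation) is then applied to every point of the digital twin, not just the landmarks. This also illustrates why at least three non-collinear landmarks are needed: with fewer, or collinear, points the rotation is not uniquely determined.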
In each case, step 108 involves orienting the digital twin in accordance with an orientation of the physical object relative to the user, and scaling the digital twin in accordance with a position of the user in a scenario displayed in the VR environment. In this respect, there is a distinction between the "size" of the virtual object and its "scale" in the virtual environment. The "size" of the virtual object is how 'big' it looks to the user - e.g. the number of pixels it occupies in a VR display. In contrast, the "scale" of the virtual object is how big the virtual object is in the virtual environment - e.g. relative to other objects in the virtual environment. The "size" may change but the "scale" will generally be fixed. Therefore, when the virtual object appears further away from the user in the virtual environment, its size will be smaller but its scale will remain the same.
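The size/scale distinction follows the usual perspective-projection relationship: rendered size is inversely proportional to distance, while scale (the object's metric extent in the scene) is fixed. A brief sketch, with an assumed focal length in pixels (illustrative, not from the description):

```python
def apparent_height_px(scale_height_m, distance_m, focal_px=800.0):
    """Pinhole projection: rendered 'size' shrinks with distance,
    while the object's 'scale' (its height in metres) never changes."""
    return focal_px * scale_height_m / distance_m

# The same 2 m 'scale' viewed at 2 m and then 4 m: the on-screen
# 'size' halves as the distance doubles.
```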
Moreover, step 108 involves locating the virtual object, including scaling it, in the virtual environment. This will generally only be performed once, to fix the coordinates, orientation and scale of the virtual object in the virtual environment - assuming the object itself is static in that environment. Movements of the VR system will result in changes in the viewing direction and position of the user. These movements will result in a transformation of the virtual environment, and the virtual object will be similarly transformed in the virtual environment.
The scenario may be static - i.e. the digital twin or physical object may have a single state, such as a passive/inactive state. The scenario may instead be evolving. Therefore, scaling the digital twin in accordance with step 108 may involve sizing the digital twin to suit the scenario. For example, while the physical object may be near the user, the digital twin may be rendered at a distance from the user. The digital twin is therefore down-sized based on the distance at which the user should perceive it to be from their current position in the VR environment.
In such a scenario, to approach the digital twin the user may use a controller. The user uses the controller to produce a signal and the VR display receives the signal (which may include a processor system interpreting the signal and mapping it to a signal suitable for actioning by the VR display) and teleports the user to a new position in the VR environment in accordance with the signal. The digital twin can then be resized based on the new position of the user in the VR environment. This teleporting process - i.e. moving between locations in the VR environment without a corresponding movement of the user in the physical environment - enables a user to navigate through a VR environment while remaining stationary, thereby allowing the VR environment to be of any size regardless of the size of the physical environment. It also avoids the need for the user to navigate around obstacles. Obstacles can therefore be placed in the VR environment without corresponding objects being placed in the physical environment.
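A minimal sketch of the teleport-then-resize step described above, assuming positions are 3D vectors and that rendered size falls off inversely with distance (the names and the reference size are illustrative assumptions):

```python
import numpy as np

def teleport(target_pos, twin_pos, twin_size_at_1m):
    """Move the user to target_pos with no physical movement, then derive
    the digital twin's rendered size from the new user-to-twin distance."""
    new_user_pos = np.asarray(target_pos, float)
    distance = float(np.linalg.norm(np.asarray(twin_pos, float) - new_user_pos))
    return new_user_pos, twin_size_at_1m / distance
```

Teleporting closer to the twin increases its rendered size; its scale in the VR environment is untouched.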
The method 100 may be employed, for example, on a VR system 700 as shown in the block diagram of Figure 7. The VR system 700 will typically include a desktop computer or laptop. However, the VR system 700 may instead include a mobile computer device such as a smart phone, a personal data assistant (PDA), a palm-top computer, or a multimedia Internet enabled cellular telephone.
As shown, the VR system 700 includes the following components in electronic communication via a bus 712:
(a) a VR display 702 for displaying a VR environment to a user;
(b) non-volatile (non-transitory) memory 704 comprising (i.e. storing) the digital twin;
(c) random access memory ("RAM") 706 - as an alternative to memory 704, RAM 706 may store the digital twin;
(d) a processor system comprising N processing components embodied in processor module 708;
(e) a transceiver component 710 that includes N transceivers; and
(f) user controls 714.
Although the components depicted in Figure 7 represent physical components, Figure 7 is not intended to be a hardware diagram. Thus, many of the components depicted in Figure 7 may be realized by common constructs or distributed among additional physical components. Moreover, it is certainly contemplated that other existing and yet-to-be developed physical components and architectures may be utilized to implement the functional components described with reference to Figure 7.
The display 702 generally operates to provide a presentation of content, such as the digital twin or twins and the VR environment more generally, to a user. It may be realized by any of a variety of displays (e.g., CRT, LCD, HDMI, micro projector and OLED displays).
In general, the non-volatile data storage 704 (also referred to as non-volatile memory) functions to store (e.g., persistently store) data and executable code, including a virtual textile (data container for texture map and cloth simulation parameters). It may also store the 3D model or digital twin. The executable code in this instance comprises instructions enabling the system 700 to perform the methods disclosed herein, such as that described with reference to Figure 1.
In some embodiments for example, the non-volatile memory 704 includes bootloader code, modem software, operating system code, file system code, and code to facilitate the implementation of components, well known to those of ordinary skill in the art, that for simplicity are neither depicted nor described.
In many implementations, the non-volatile memory 704 is realized by flash memory (e.g., NAND or ONENAND memory), but it is certainly contemplated that other memory types may be utilized as well. Although it may be possible to execute the code from the non-volatile memory 704, the executable code in the non-volatile memory 704 is typically loaded into RAM 706 and executed by one or more of the N processing components 708.
The N processing components 708 in connection with RAM 706 generally operate to execute the instructions stored in non-volatile memory 704. As one of ordinary skill in the art will appreciate, the N processing components 708 may include a video processor, modem processor, DSP, graphics processing unit (GPU), and other processing components. It is possible for the N processing components 708 to include a central processing unit (CPU), which executes operations in series. However, to rapidly transform the digital twin with movements of the VR display, a GPU may be provided. When the instructions stored in memory 704 (or 706) are executed by the processing system 708, they cause the VR system 700 to detect a location of each landmark using the detector 720, match each landmark to a corresponding landmark on the digital twin using the processor system 708, and process, also using the processor system 708, the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.
The transceiver component 710 includes N transceiver chains, which may be used for communicating with external devices via wireless networks 716. Each of the N transceiver chains may represent a transceiver associated with a particular communication scheme. For example, each transceiver may correspond to protocols that are specific to local area networks, cellular networks (e.g., a CDMA network, a GPRS network, a UMTS network), and other types of communication networks.
Reference numeral 718 indicates that the VR system 700 may include physical buttons, as well as virtual buttons such as those that would be displayed in the VR environment. Moreover, the VR system 700 may communicate with other computer systems or data sources over network 716.
The VR system 700 also includes a detector, presently embodied by controller 720, for detecting landmarks. The controller 720 may also be used to navigate through the VR environment.
It should be recognized that Figure 7 is merely exemplary and that the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on, or transmitted as, one or more instructions or code encoded on a non- transitory computer-readable medium 704 - the full method 100, described with reference to Figure 1, may be stored as instructions on the computer-readable medium to await execution by a computer system. Non-transitory computer- readable medium 704 includes both computer storage medium and communication medium including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer, such as a USB drive, solid state hard drive or hard disk.
The VR system 700 can robustly implement the method 100, to calibrate the digital twin (or digital twins, where multiple physical objects are rendered) in the VR environment. To quantify the robustness of the VR system, the alignment process was repeatedly performed whilst randomising the starting position, rotation and scale of the model to be aligned (mimicking variations in the model generation process) and adding noise to the calibration markers (mimicking variations in taking measurements of the physical landmarks). For these trials it was also assumed that the system should have an alignment tolerance of 1cm for the user to feel that the physical object and virtual object (digital twin) are aligned. With some deviation when measuring landmarks, the alignment errors achieved are summarised in the box and whisker plots of Figures 8a, 8b and 8c, which show deviations in model alignment for maximum deviations from landmarks of the physical object of 1cm, 2cm and 0.5cm respectively. It is evident that, provided the maximum deviation when demarcating the landmark points during the calibration process is within 1cm, the resultant alignment is likely to deviate by less than 1cm, which is acceptable. Further fine-tuning of the landmark markers will result in a more accurate match.
In addition to accurately calibrating the position of the digital twin to match that of a physical object, it can be desirable to cause the physical object to behave in a particular manner. Such an interactive object 900 is shown in Figure 9. The object 900 includes a body 902 and a control module 904, shown in broken lines as it is internal to the body 902, and in communication with a VR system via cable 906. In other embodiments, the object 900 may not be in communication with a VR system or may communicate wirelessly.
The body 902 is configured to correspond to a digital twin in a virtual reality (VR) environment displayed to a user by a VR system. This correspondence may be achieved by the calibration method 100. The control module 904 is configured to control a state of the object 900. The state of the object 900 is defined by reference to its behaviour - the object 900 has at least two states. For example, the object may generate an audible noise in one state and be silent in another state.
In use, an action performed by the user on the object 900 is detected and rendered in the VR environment on the digital twin - e.g. the hands of the user performing an action on the physical object will be detected by the VR system or a sensor system of the object 900, and rendered in the VR environment. When the action corresponds to a predetermined action - e.g. shaking of the shoulders of the mannequin object 900 of the present embodiment - the control module 904 is configured to transition the object 900 from a current state to a next state. The object 900 can therefore change state according to a pre-programmed sequence. This enables the user to interact with a digital twin by performing corresponding actions on the object 900.
The interactive object 900 will generally include a transmitter (e.g. transceiver 710 of Figure 7), presently forming an interface 908 between the cable 906 and the object 900, and a sensor system 910. The sensor system 910 detects the action and sends a signal to the control module 904. The control module 904 sends a signal via the transmitter 908 to the VR system (e.g. system 700) corresponding to the action detected by the sensor system 910. The control module 904 may also determine if the action corresponds to the predetermined action, and only send the signal, via the transmitter 908 to the VR system if the action corresponds to the predetermined action. Moreover, while the sensor system 910 and control module 904 are shown as separate components, they may form part of the same component or components. The sensor system (with or without control module) may determine whether the predetermined action or relevant portion thereof has been performed. For example, the predetermined action may involve the application of pressure for a predetermined period, to a particular region of the physical/virtual object. The sensor system may include a pressure sensor located at that particular region and may time the duration of application of pressure to that region. If the pressure is applied for the requisite duration, the sensor system (or control module) generates a signal specifying that the predetermined action has been performed.
To illustrate, the sensor 910 may comprise a pressure sensor positioned in the vicinity of the wrist of the right arm of the mannequin. The pressure sensor may measure the pressure applied by the user to the pressure sensor - when simulating checking a pulse - and send a signal to the control module 904. The control module may determine if a pressure has been applied to the sensor, if the user has maintained the pressure for a sufficient period or if the applied pressure is within a particular desired pressure range and so on. If the action (application of pressure by the user to the pressure sensor) accords with a predetermined action, that may be stored in memory accessible by the control module 904, the control module 904 sends a signal to the VR system and transitions the object 900 to the next state - e.g. activating an audio signal to replicate shallow breathing that the user then needs to listen to. This ensures the user follows a predetermined sequence of actions, or a 'scenario', when interacting with the object.
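The pressure-hold check described above can be sketched as a small sensor-side tracker. The pressure range and hold duration below are illustrative assumptions, not values from the description:

```python
class PulseCheckSensor:
    """Detects the predetermined action 'pressure within a target range,
    sustained for a minimum duration' at a particular region (e.g. the wrist)."""

    def __init__(self, min_kpa=5.0, max_kpa=20.0, hold_s=5.0):
        self.min_kpa, self.max_kpa, self.hold_s = min_kpa, max_kpa, hold_s
        self._held_since = None

    def update(self, pressure_kpa, now_s):
        """Feed one sensor reading; returns True once the action completes."""
        if self.min_kpa <= pressure_kpa <= self.max_kpa:
            if self._held_since is None:
                self._held_since = now_s    # pressure just entered range
            return now_s - self._held_since >= self.hold_s
        self._held_since = None             # out of range: restart timing
        return False
```

On the check succeeding, the control module would signal the VR system and transition the object 900 to its next state (e.g. activating the shallow-breathing audio).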
The sensor system can employ any desired sensors. For example, it may include one or more of: a potentiometer for measuring a relative rotation between two or more portions of the body - e.g. in the mannequin example the object may be static on approach by the user, the object (i.e. casualty or victim) waits for the user to tilt the head of the mannequin (or tilt the head by a predetermined amount) as measured by the potentiometer, to simulate opening the airway, and the VR system and/or control module 904 then updates the state of the object to "breathing"; a pressure sensor for measuring pressure applied to a location on the body; and a microphone on or in the body, for detecting one or more of speech from the user, sound from touch interaction between the user and body - this can be used to determine if the user is speaking to the object 900 or, if coupled with a natural language processing system, to determine what the user is saying to the object 900 - e.g. whether the user is asking a question prescribed when triaging a patient simulated using the object 900.
The control module 904 or VR system 700 is configured to combine the signal from the sensor system 910 and actions of the user, to detect performance of the predetermined action. This checks whether all relevant components of an action have been performed. For example, if a user is applying pressure to a pulse point using their elbow, images captured by the VR system (or a camera in the object 900) may be used to identify that the user's hands are not correctly positioned on the object 900. Therefore, although the pressure may be correctly applied, the predetermined action has not been performed.
As mentioned above, transmitter 908 may be part of a transceiver and therefore comprise a receiver for receiving a control signal from the VR system. The control signal can be used to indicate to the control module 904 that the object 900 can transition from its current state to its next state. The control signal may also include a scenario or an object state. In either case, the control module 904 is configured to control the object 900 in accordance with the control signal.
Using this method, the object 900 can be preloaded with a scenario, whether that is a single object state or a plurality of object states between which the object 900 can transition. In general, a scenario will include a plurality of states. The plurality of states will include an initial state of the object 900, which defines the starting condition - i.e. the state of the object when the user first interacts with it - and one or more further states. Notably, in some scenarios the object 900 may revisit the initial state. For example, a scenario may simulate triaging multiple victims in an accident. The object 900 can behave as necessary to simulate the behaviour of each victim, in the order that those victims are approached by the user. On completion of triage of one victim, the object 900 may transition from its current state to the initial state to commence behaving like the next victim visited by the user.
In each case, a predetermined action is required to be performed to transition from the current state to the next state, and the control module 904 controls the state (i.e. the behaviour) of the object in accordance with the scenario received from the VR system.
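The scenario structure described above - an initial state, one or more further states, and a predetermined action gating each transition (including returning to the initial state for the next victim) - maps naturally onto a small state machine. The state and action names below are illustrative:

```python
class Scenario:
    """A preloaded scenario: a list of states plus the predetermined
    action required for each transition."""

    def __init__(self, states, transitions):
        self.states = states              # first entry is the initial state
        self.transitions = transitions    # {(state, action): next_state}
        self.current = states[0]

    def on_action(self, action):
        """Transition only on the predetermined action; otherwise stay put."""
        next_state = self.transitions.get((self.current, action))
        if next_state is not None:
            self.current = next_state
        return self.current

triage = Scenario(
    ["unresponsive", "breathing", "triaged"],
    {("unresponsive", "tilt_head"): "breathing",
     ("breathing", "check_pulse"): "triaged",
     ("triaged", "approach_next_victim"): "unresponsive"},  # revisit initial state
)
```

Actions that do not match the predetermined action for the current state leave the object's behaviour unchanged, which is what forces the user to follow the prescribed sequence.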
The object 900 also includes a stimulus system 912. The stimulus system 912 provides stimulus to the user during interaction between the user and the object 900. The stimulus system 912 and the sensor system 910 each provide the object 900 with active behaviour and an ability to respond to the user, rather than being a passive object.
The stimulus system 912 is controlled by the control module 904 to deliver the stimulus to the user in accordance with at least one of a state of the object and a scenario involving the object. Revisiting the triage example, as shown in Figure 9 the stimulus system includes a module 912 at the wrist of the object 900. The module 912 may include a servomotor that simulates the pulse of a victim, and the sensor system 910 can determine if the user is applying pressure at the correct location, and/or with the correct force, to accurately detect the pulse. If the user checks the pulse correctly, the object 900 may be transitioned to its next state. If the user fails to check the pulse correctly, the object 900 may not transition or, after a predetermined period of time, the simulation may exit.
The stimulus system can include any suitable stimuli, such as a haptic system to deliver a haptic stimulus to the user (e.g. the servomotor example, or a compressor system for simulating respiratory function), a speaker system to deliver a sound stimulus to the user, an olfactory system to deliver an olfactory stimulus to the user, and a thermal system to deliver a thermal stimulus to the user.
The VR system 700 may interact with any number of interactive objects 900. Moreover, the VR system 700 may change content presented to the user in the VR environment in response to detection of the predetermined action. That change in content may be a change in behaviour of the digital twin - e.g. if the user clears the airway of the physical object 900, the digital twin may visibly start breathing.
Thus, the active objects afford interaction between a user and the physical environment, where that interaction maps to a scenario or state that is presented in the VR environment. Embodiments of such objects can, when interacting with a VR system, enable heightened interaction and scenario mapping beyond what has previously been achievable. Moreover, when combined with the calibration method 100 of Figure 1, the objects and VR system can produce a more realistic simulation in a VR environment than has previously been achievable.
It will be appreciated that many further modifications and permutations of various aspects of the described embodiments are possible. Accordingly, the described aspects are intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims.
Throughout this specification and the claims which follow, unless the context requires otherwise, the word "comprise", and variations such as "comprises" and "comprising", will be understood to imply the inclusion of a stated integer or step or group of integers or steps but not the exclusion of any other integer or step or group of integers or steps.
The reference in this specification to any prior publication (or information derived from it), or to any matter which is known, is not, and should not be taken as an acknowledgment or admission or any form of suggestion that that prior publication (or information derived from it) or known matter forms part of the common general knowledge in the field of endeavour to which this specification relates.

Claims

1. A virtual reality (VR) system comprising: a VR display for displaying a VR environment to a user; memory comprising a digital twin corresponding to a physical object, the physical object comprising three or more landmarks; a detector; and a processor system, the memory storing instructions that, when executed by the processor system, cause the VR system to: detect a location of each landmark using the detector; match each landmark to a corresponding landmark on the digital twin using the processor system; and process, using the processor system, the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.
2. The VR system of claim 1, wherein the processor system processes the corresponding landmarks to orient, position and scale the digital twin in the VR environment by: orienting the digital twin in accordance with an orientation of the physical object relative to the user; and scaling the digital twin in accordance with a position of the user in a scenario displayed in the VR environment.
3. The VR system of claim 2, wherein scaling the digital twin in accordance with a position of the user in a scenario displayed in the VR environment comprises down-sizing the object and placing the object perceptually further from the user in the VR environment when compared with a distance of the physical object from the user.
4. The VR system of claim 3, further comprising a controller for receiving position control commands, the VR display receiving a signal from the controller and teleporting the user to a new position in the VR environment in accordance with the signal, wherein the digital twin is resized based on the new position of the user.
5. The VR system of any one of claims 2 to 4, wherein the processor system orients and positions the digital twin by transforming the digital twin using Singular Value Decomposition.
6. The VR system of any one of claims 1 to 5, wherein the processor system scales the digital twin using binary search.
7. A non-transitory computer-readable storage medium storing: a digital twin corresponding to a physical object, the physical object comprising three or more landmarks; and instructions that, when executed by a processor system of a virtual reality (VR) system, cause the VR system to: detect a location of each landmark; match each landmark to a corresponding landmark on the digital twin; and process the corresponding landmarks to orient, position and scale the digital twin in a VR environment based on a position of the physical object.
8. An interactive object comprising: a body configured to correspond to a digital twin in a virtual reality (VR) environment displayed to a user by a VR system; a control module configured to control a state of the object, the object having at least two states; wherein an action performed by the user on the object is detected and rendered in the VR environment on the digital twin, and when the action corresponds to a predetermined action, the control module is configured to transition the object from a current state to a next state.
9. The interactive object of claim 8, further comprising a transmitter and a sensor system, the sensor system detecting the action and sending a signal to the control module, and the control module sending a signal via the transmitter to the VR system if the action corresponds to the predetermined action.
10. The interactive object of claim 9, wherein the sensor system comprises at least one of: a potentiometer for measuring a relative rotation between two or more portions of the body; a pressure sensor for measuring pressure applied to a location on the body; and a microphone on or in the body, for detecting one or more of speech from the user, sound from touch interaction between the user and body.
11. The interactive object of claim 9 or 10, wherein the control module or VR system is configured to combine the signal from the sensor system and actions of the user detected by the VR system, to detect performance of the predetermined action.
12. The interactive object of any one of claims 8 to 11, further comprising a receiver for receiving a control signal from the VR system, the control signal comprising at least one of a scenario and an object state, the control module being configured to control the object in accordance with the control signal.
13. The interactive object of claim 8, wherein the control module is configured to implement a scenario to control the object, the scenario comprising: a plurality of states including: an initial state of the object; and at least one further state of the object; and a predetermined action for transitioning between states of the plurality of states, wherein the control module is configured to control the state of the object in accordance with the scenario.
14. The object of any one of claims 8 to 13, further comprising a stimulus system, the stimulus system providing stimulus to the user during interaction between the user and the object.
15. The object of claim 14, wherein the stimulus system is controlled by the control module, to deliver the stimulus to the user in accordance with at least one of a state of the object and a scenario involving the object.
16. The object of claim 14 or 15, wherein the stimulus system comprises at least one of: a haptic system to deliver a haptic stimulus to the user; a speaker system to deliver a sound stimulus to the user; an olfactory system to deliver an olfactory stimulus to the user; and a thermal system to deliver a thermal stimulus to the user.
17. The object of claim 16, that replicates at least part of a living entity, and wherein the haptic system comprises a pulse generator for generating a pulse at a location of the living entity at which the user should test for the pulse.
18. A virtual reality (VR) system for interacting with one or more interactive objects according to any one of claims 8 to 17, comprising: a VR display to display the VR environment to the user; and at least one processor for changing content presented to the user in the VR environment in response to detection of the predetermined action.
19. The VR system of claim 18, wherein the change in content comprises a change in behaviour of the digital twin.
20. The VR system of claim 18 or 19, comprising: memory comprising the digital twin corresponding to the physical object, the physical object comprising three or more landmarks; a detector; and a processor system, the memory storing instructions that, when executed by the processor system, cause the VR system to: detect a location of each landmark using the detector; match each landmark to a corresponding landmark on the digital twin using the processor system; and process, using the processor system, the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.
PCT/SG2022/050072 2021-02-17 2022-02-17 Methods relating to virtual reality systems and interactive objects WO2022177505A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202280019831.6A CN116964638A (en) 2021-02-17 2022-02-17 Method involving virtual reality systems and interactive objects

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG10202101597Q 2021-02-17

Publications (1)

Publication Number Publication Date
WO2022177505A1 true WO2022177505A1 (en) 2022-08-25

Family

ID=82932305

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2022/050072 WO2022177505A1 (en) 2021-02-17 2022-02-17 Methods relating to virtual reality systems and interactive objects

Country Status (2)

Country Link
CN (1) CN116964638A (en)
WO (1) WO2022177505A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190392646A1 (en) * 2015-05-05 2019-12-26 Ptc Inc. Augmented reality system
CN112091982A (en) * 2020-11-16 2020-12-18 杭州景业智能科技股份有限公司 Master-slave linkage control method and system based on digital twin mapping
CN112132900A (en) * 2020-09-29 2020-12-25 凌美芯(北京)科技有限责任公司 Visual repositioning method and system
US20210042992A1 (en) * 2017-08-30 2021-02-11 Compedia Software and Hardware Development Ltd. Assisted augmented reality
US20210201584A1 (en) * 2019-12-31 2021-07-01 VIRNECT inc. System and method for monitoring field based augmented reality using digital twin
CN113450448A (en) * 2020-03-25 2021-09-28 阿里巴巴集团控股有限公司 Image processing method, device and system

Also Published As

Publication number Publication date
CN116964638A (en) 2023-10-27

Similar Documents

Publication Publication Date Title
JP7337104B2 (en) Model animation multi-plane interaction method, apparatus, device and storage medium by augmented reality
US8244506B2 (en) Ultrasound simulation apparatus and method
US20200410713A1 (en) Generating pose information for a person in a physical environment
KR20210011425A (en) Image processing method and device, image device, and storage medium
KR101519775B1 (en) Method and apparatus for generating animation based on object motion
JP7299414B2 (en) Image processing method, device, electronic device and computer program
WO2007013833A1 (en) Method and system for visualising virtual three-dimensional objects
US20140071165A1 (en) Mixed reality simulation methods and systems
US11042730B2 (en) Method, apparatus and device for determining an object, and storage medium for the same
CN109643014A (en) Head-mounted display tracking
US20190371072A1 (en) Static occluder
CN112346572A (en) Method, system and electronic device for realizing virtual-real fusion
US20110109628A1 (en) Method for producing an effect on virtual objects
JP2006192548A (en) Body imitation robot system and body imitation motion control method
WO2020149270A1 (en) Method for generating 3d object arranged in augmented reality space
KR101960929B1 (en) Basic life support training simulation system
CN114663516A (en) Method and device for calibrating multi-camera system based on human posture
WO2022177505A1 (en) Methods relating to virtual reality systems and interactive objects
US20200334998A1 (en) Wearable image display device for surgery and surgery information real-time display system
Dias et al. The arena: An indoor mixed reality space
CN110244842B (en) VR model, VR scene processing method, VR training system, storage medium and electronic equipment
WO2022245653A1 (en) Ar data simulation with gaitprint imitation
Ruffaldi et al. Co-located haptic interaction for virtual USG exploration
CN108845669B (en) AR/MR interaction method and device
CN109716395B (en) Maintaining object stability in virtual reality

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22756644

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202280019831.6

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 11202305954W

Country of ref document: SG

122 Ep: pct application non-entry in european phase

Ref document number: 22756644

Country of ref document: EP

Kind code of ref document: A1