WO2022177505A1 - Methods relating to virtual reality systems and interactive objects - Google Patents
- Publication number
- WO2022177505A1 (PCT/SG2022/050072)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/016—Input arrangements with force or tactile feedback as computer generated output to the user
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/653—Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2004—Aligning objects, relative positioning of parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
Definitions
- the present invention relates to virtual reality (VR) systems for interacting with interactive objects, and calibration methods for VR systems.
- Generating a virtual object to correspond to a physical object often requires the physical object to be viewed from multiple angles, to define the shape of the surfaces of the virtual object in the VR environment. Each viewpoint then needs to be aligned to ensure the surfaces of the virtual object match those of the physical object. The virtual object then needs to be aligned in the VR environment with the physical object in the real-world environment. For some technologies, a user has to manually adjust the transformation of the virtual model by repeatedly viewing it through a VR headset, determining the error between the pose of the 3D model and the physical object, and adjusting the 3D model to match the physical object. Such manual alignment of the virtual model to be in the exact position of the physical object is tedious.
- the user must also repeatedly shift their head to multiple perspectives (top, side, front) to correctly calibrate the model in all axes. This process has to be repeated whenever there is a change in the position of the physical object with respect to the VR tracking system. Moreover, human error is introduced during manual alignment by visual inspection.
- This problem also substantially removes the ability of these VR systems to map multiple objects into the virtual space.
- a change in viewpoint applies to all objects, yet a user will typically only be able to view one object at a time to update its position and shape in the virtual environment. Once the viewpoint is changed, the position and shape of all other objects are no longer up-to-date.
- the physical objects are assumed to be static. This enables the digital twins to be statically placed in the virtual environment. However, this reduces the realism of the VR environment.
- a virtual reality (VR) system comprising: a VR display for displaying a VR environment to a user; memory comprising a digital twin corresponding to a physical object, the physical object comprising three or more landmarks; a detector; and a processor system, the memory storing instructions that, when executed by the processor system, cause the VR system to: detect a location of each landmark using the detector; match each landmark to a corresponding landmark on the digital twin using the processor system; and process, using the processor system, the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.
- the processor system may process the corresponding landmarks to orient, position and scale the digital twin in the VR environment by: orienting the digital twin in accordance with an orientation of the physical object relative to the user; and scaling the digital twin in accordance with a position of the user in a scenario displayed in the VR environment.
- Scaling the digital twin in accordance with a position of the user in a scenario displayed in the VR environment may comprise down-sizing the object and placing the object perceptually further from the user in the VR environment when compared with a distance of the physical object from the user.
- the VR system may further comprise a controller for receiving position control commands, the VR display receiving a signal from the controller and teleporting the user to a new position in the VR environment in accordance with the signal, wherein the digital twin is re-sized based on the new position of the user.
- the processor system may orient and position the digital twin by transforming the digital twin using Singular Value Decomposition.
- the processor system may scale the digital twin using binary search.
- a non-transitory computer-readable storage medium storing: a digital twin corresponding to a physical object, the physical object comprising three or more landmarks; and instructions that, when executed by a processor system of a virtual reality (VR) system, cause the VR system to: detect a location of each landmark; match each landmark to a corresponding landmark on the digital twin; and process the corresponding landmarks to orient, position and scale the digital twin in a VR environment based on a position of the physical object.
- an interactive object comprising: a body configured to correspond to a digital twin in a virtual reality (VR) environment displayed to a user by a VR system; a control module configured to control a state of the object, the object having at least two states; wherein an action performed by the user on the object is detected and rendered in the VR environment on the digital twin, and when the action corresponds to a predetermined action, the control module is configured to transition the object from a current state to a next state.
- the interactive object may further comprise a transmitter and a sensor system, the sensor system detecting the action and sending a signal to the control module, and the control module sending a signal via the transmitter to the VR system if the action corresponds to the predetermined action.
- the sensor system may comprise at least one of: a potentiometer for measuring a relative rotation between two or more portions of the body; a pressure sensor for measuring pressure applied to a location on the body; and a microphone on or in the body, for detecting one or more of speech from the user and sound from touch interaction between the user and the body.
- the control module or VR system may be configured to combine the signal from the sensor system and actions of the user detected by the VR system, to detect performance of the predetermined action.
- the interactive object may further comprise a receiver for receiving a control signal from the VR system, the control signal comprising at least one of a scenario and an object state, the control module being configured to control the object in accordance with the control signal.
- the control module may be configured to implement a scenario to control the object, the scenario comprising: a plurality of states including: an initial state of the object; and at least one further state of the object; and a predetermined action for transitioning between states of the plurality of states, wherein the control module is configured to control the state of the object in accordance with the scenario.
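The scenario described above - a set of object states plus predetermined actions that trigger transitions between them - can be sketched as a small finite-state machine. This is an illustrative sketch only; the names `Scenario` and `ObjectController` and the mannequin example are assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """A scenario: an initial state plus (state, action) -> next-state rules."""
    initial_state: str
    transitions: dict = field(default_factory=dict)

class ObjectController:
    """Sketch of a control module that transitions the object between states."""
    def __init__(self, scenario: Scenario):
        self.scenario = scenario
        self.state = scenario.initial_state

    def on_action(self, action: str) -> str:
        # Transition only when the action is the predetermined one for the
        # current state; any other action leaves the state unchanged.
        key = (self.state, action)
        if key in self.scenario.transitions:
            self.state = self.scenario.transitions[key]
        return self.state

# Example: a mannequin that becomes responsive when its pulse point is pressed.
scenario = Scenario(
    initial_state="inactive",
    transitions={
        ("inactive", "press_pulse_point"): "responsive",
        ("responsive", "speak_to"): "conversing",
    },
)
ctrl = ObjectController(scenario)
ctrl.on_action("speak_to")           # not the predetermined action: no change
assert ctrl.state == "inactive"
ctrl.on_action("press_pulse_point")  # predetermined action: transition fires
assert ctrl.state == "responsive"
```

A real control module would receive the action from the sensor system (or combined VR-detected user actions) rather than as a string, but the state logic is the same.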
- the object may further comprise a stimulus system, the stimulus system providing stimulus to the user during interaction between the user and the object.
- the stimulus system may be controlled by the control module, to deliver the stimulus to the user in accordance with at least one of a state of the object and a scenario involving the object.
- the stimulus system may comprise at least one of: a haptic system to deliver a haptic stimulus to the user; a speaker system to deliver a sound stimulus to the user; an olfactory system to deliver an olfactory stimulus to the user; and a thermal system to deliver a thermal stimulus to the user.
- the object may replicate at least part of a living entity, and the haptic system may comprise a pulse generator for generating a pulse at a location of the living entity at which the user should test for the pulse.
- the change in content may comprise a change in behaviour of the digital twin.
- the VR system may comprise: memory comprising the digital twin corresponding to the physical object, the physical object comprising three or more landmarks; a detector; and a processor system, the memory storing instructions that, when executed by the processor system, cause the VR system to: detect a location of each landmark using the detector; match each landmark to a corresponding landmark on the digital twin using the processor system; and process, using the processor system, the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.
- Embodiments of the invention are able to perform alignment automatically after the landmarks have been identified and demarcated.
- the coordinates of the landmarks of the physical object in world space are stored, and the position of the VR system (or VR display) is known and updated. Therefore, the VR system can determine the new location of the digital twin in the virtual environment, relative to the physical object, by measuring movements of the VR display. Therefore, the digital twin only needs to be located once in the virtual environment and will be transformed as the virtual environment is transformed in response to actions from the user.
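Because the twin's world-space pose is stored and the headset pose is tracked, the twin's pose relative to the display can be refreshed from tracking alone, without re-detecting the landmarks. A minimal sketch, assuming 4x4 homogeneous pose matrices (the function name is illustrative):

```python
import numpy as np

def pose_in_display_frame(T_world_display: np.ndarray,
                          T_world_twin: np.ndarray) -> np.ndarray:
    """Twin pose relative to the display: inv(T_world_display) @ T_world_twin."""
    return np.linalg.inv(T_world_display) @ T_world_twin

# The twin sits 2 m in front of the world origin along z...
T_world_twin = np.eye(4); T_world_twin[2, 3] = 2.0
# ...and the headset has moved 0.5 m toward it.
T_world_display = np.eye(4); T_world_display[2, 3] = 0.5

relative = pose_in_display_frame(T_world_display, T_world_twin)
assert np.isclose(relative[2, 3], 1.5)  # twin now appears 1.5 m away
```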
- embodiments of the invention alter the scale of the 3D model automatically to the desired coordinates. Moreover, scaling can be automatic when scanning the physical object using photogrammetry equipment. This avoids the need for special hardware such as a 3D scanner, and minimises human error by reducing human interaction other than for using a detector (e.g. VR controller) to locate or define landmarks of a physical object in a virtual space. This is especially important if the 3D model generation process fails to provide accurate information about the scale. Therefore, embodiments of the invention allow for more accurate physical-virtual mappings for virtual reality experiences with physical embodiments and interactions.
- Embodiments of the invention are able to calibrate multiple 3D models independently, each with their own transformation changes with respect to the origin of the VR tracking system. Hence, such embodiments allow for multiple concurrent virtual-physical mappings for multiple virtual-physical interactions. Moreover, embodiments of the present invention enable alignment and scaling of multiple objects in virtual space.
- Embodiments of the methods described herein for calibrating the VR system - e.g. methods for orienting, positioning and scaling a digital twin in a VR environment based on a position of a physical object in a real-world or physical environment - can be employed using only three landmarks or points for alignment. This can be important for objects with complex surface structures, where requiring larger numbers of points for calibration and alignment can result in some of those points being obscured within the complex surface structure.
- Figure 1 illustrates a method for calibrating a VR system for accurately displaying a digital twin, in accordance with present teachings
- Figure 2 is a photograph of a three-dimensional (3D) model of a mannequin produced using photogrammetry
- Figure 3 is a photograph of the 3D model of Figure 2, showing landmarks of the physical object
- Figure 4 illustrates a position of a detector (controller) when capturing the positions in virtual space of the landmarks of the physical object
- Figure 5 shows the virtual space, the coordinate system and coordinates of the landmarks in that space
- Figure 6 illustrates the positions of the landmarks of the physical object and the corresponding landmarks of the digital twin
- Figure 7 illustrates a VR system for implementing the method of Figure 1
- Figures 8a, 8b and 8c illustrate deviations in model alignment with deviations from marking landmarks of 1cm, 2cm and 0.5cm, respectively; and Figure 9 illustrates an interactive object in accordance with present teachings.
- the digital twin can be one of many digital twins displayed in the VR environment, each digital twin being associated with a respective physical object in the physical environment.
- some present methods enable the scaling, orientation and position to be updated automatically, without needing to reacquire each physical object - i.e. without having to view the physical object between successive updates of the position of the corresponding digital twin in the VR environment.
- Figure 1 illustrates one such method 100, for calibrating a VR system - i.e. mapping a digital twin to a physical object.
- the method 100 broadly comprises:
- 108 orienting, positioning and scaling the digital twin in the VR environment based on the position of the physical object - i.e. the position with respect to a VR display or headset worn by the user.
- step 102 developing a digital twin of a physical object enables the digital twin to be created once, accurately.
- the shape, appearance and other properties of the digital twin are then stored in memory. Since the digital twin has been produced and stored, it does not need to be reacquired if the viewpoint of the VR system changes with respect to the physical object. Instead, the position of the digital twin can simply be updated in the VR environment to match the location (i.e. position, orientation and scaling) of the physical object.
- the digital twin can be rapidly developed using 3D scanning, photogrammetry or other 3D modelling techniques. Such techniques typically acquire only the shape of the physical object. With the exception of the shape of portions of the digital twin, the appearance of the digital twin in the VR environment does not need to reflect the appearance of the physical object.
- the physical object reflected in the 3D scanned image shown in Figure 2 is the torso 202, head 204 and right arm 206 of a mannequin 200.
- the digital twin may include the other limbs that are absent from the mannequin or 3D scanned image, clothing and other visible characteristics that differ from the characteristics of the physical object.
- the physical object and its digital twin should at least include those components (e.g. head and torso) that are required to be interacted with in order to implement and complete the scenario. Therefore, step 102 may involve developing a 3D model from the physical object, and adapting the 3D model to produce the digital twin - e.g. rendering clothes over the 3D model.
- the physical object, and therefore the 3D model 200 has a number of landmarks 208.
- a landmark is something at a known location of the physical object 200. Each landmark may constitute one or more visible indicia on a surface of the physical object 200 (per landmarks 208), or a beacon or other device the position of which is detectable by a detector to the desired degree of accuracy.
- a landmark is a physical feature of the physical object - e.g. a shape feature.
- the physical feature will usually be readily identifiable by the controller, such as the tip of the nose, base of the torso, bottom of the chin and shoulder points.
- the physical object should have at least three landmarks. The location of the landmarks can be automatically determined during the scanning process.
- visible indicia can be identified in images captured during the scanning process and can be incorporated into the digital twin at locations of the digital twin corresponding to the locations of those indicia on the physical object.
- the location of the landmarks can be manually inserted into the digital twin as shown in Figure 3 - in this sense the landmarks are "of" the object insofar as they correspond to physical locations on the object that can be used to match the location of the object to that of the virtual object.
- where the locations of the landmarks are manually inserted, it can be desirable for the landmarks to be located at distinctive positions on the physical object - e.g. in embodiments where the physical object is a mannequin or part thereof, distinctive positions can include the nose (e.g. tip), chin, left and/or right shoulder, left and/or right nipple and base of the torso.
- the manually inserted landmarks 300 of mannequin 302 are identified in Figure 3.
- the landmarks need not be located at the apices (e.g. nipples, tip of nose) or other distinctive locations of the physical object. Instead, the landmarks can be located anywhere on the physical object. It is generally desirable, however, that the landmarks are not disposed in a straight line.
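The straight-line caveat can be checked numerically: three collinear points span no plane, so they cannot fix the object's orientation. A small sketch (the function name is illustrative) flags degenerate layouts via the cross product:

```python
import numpy as np

def landmarks_are_degenerate(p0, p1, p2, tol=1e-6) -> bool:
    """True if the three landmark points lie (nearly) on a straight line."""
    v1 = np.asarray(p1, float) - np.asarray(p0, float)
    v2 = np.asarray(p2, float) - np.asarray(p0, float)
    # The cross product's norm is twice the area of the spanned triangle;
    # near-zero area means the points are (nearly) collinear.
    return np.linalg.norm(np.cross(v1, v2)) < tol

assert landmarks_are_degenerate([0, 0, 0], [1, 0, 0], [2, 0, 0])      # a line
assert not landmarks_are_degenerate([0, 0, 0], [1, 0, 0], [0, 1, 0])  # a triangle
```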
- Step 104 involves detecting the location of each landmark 210 using a detector. This detection step is used to calibrate the location of the digital twin in virtual space. Therefore, at least three landmarks should be visible or otherwise detectable by the detector.
- Figure 4 illustrates a detector - controller 400 - the base of which is positioned on the nose of the physical object 402.
- the position of the controller 400 is known with respect to the VR system - in the various circumstances where relative positions are "known" in the present teachings, it will be understood that those relative positions are known to an acceptable degree of accuracy.
- the coordinates of the landmarks are then detected relative to the controller 400.
- the VR system can therefore determine the location of the landmarks relative to the known location of the controller 400. Therefore, step 104 may include detecting the location of each landmark based on a location of each landmark relative to the detector and a location of the detector relative to the VR system, or a processor or other component of the VR system.
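This two-stage lookup - landmark relative to the detector, detector relative to the tracking origin - is a composition of transforms. A minimal sketch, assuming a 4x4 controller pose and a tip offset expressed in the controller's local frame (both names are illustrative):

```python
import numpy as np

def landmark_world_position(T_world_controller: np.ndarray,
                            tip_offset_local: np.ndarray) -> np.ndarray:
    """Transform a point from the controller's local frame to world space."""
    p = np.append(tip_offset_local, 1.0)   # homogeneous coordinates
    return (T_world_controller @ p)[:3]

# Controller held 1 m up, with its tip 10 cm along its local z axis:
T = np.eye(4); T[1, 3] = 1.0
tip = np.array([0.0, 0.0, 0.1])
assert np.allclose(landmark_world_position(T, tip), [0.0, 1.0, 0.1])
```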
- the three or more landmarks can be detected in more than one detection step 104.
- Step 104 may therefore include detecting the location of each landmark of the physical object using the detector, controller 400, and mapping the location of each landmark to a desired location of a corresponding landmark of the digital twin in the VR environment.
- Figure 5 also shows the coordinate system 502 with respect to which the location of the controller, and therefore of the landmarks of the physical object, is known.
- Step 106 involves matching each landmark to a corresponding landmark of the digital twin.
- the matching process involves identifying the corresponding landmark on the digital twin for each landmark of the physical object that was detected at step 104.
- Step 108 then involves processing the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.
- Step 108 recognises that when placing the digital twin in the virtual space, as shown in Figure 6, there may be an offset between the locations of the landmarks 600 of the physical objects reproduced in virtual space, and the corresponding landmarks 602 of the digital twins.
- step 108 can make use of many different processes.
- the digital twin is rendered at a distance equal to that of the physical object from the VR system - note: references to distances from the VR system will generally be understood to correspond to distances as perceived through the VR headset or display, such that if a user were to remove the VR headset the physical object would appear roughly where the digital twin was last presented in the VR headset.
- Scaling can be performed by binary search, to identify the location of a corresponding landmark 602 in the VR display required to match a location of a landmark 600 of the physical object in virtual space. Transformation can be done by least-squares fitting or other processes or even manual alignment to minimise the distance between each pair of coordinates (i.e. coordinates of each landmark and corresponding landmark).
- the transformation is performed using Singular Value Decomposition (SVD) - this involves specifying the coordinates of the landmarks and corresponding landmarks as column matrices (or as row matrices transposed for the transformation) and defining a matrix transformation that converts the locations of the corresponding landmarks to the locations of the landmarks in virtual space.
- the points in the digital twin that are not demarcated with landmarks can then be transformed using the same matrix transformation.
- the matrix transform is determined by: providing pairs of coordinates of virtual (corresponding) and real-world landmarks for alignment; using SVD to determine the rotation and translation giving a best match, by iteratively applying changes to the transformation matrix until a predetermined threshold has been met; repeating this process at different scales; and using the scale and alignment that gives the minimum error - i.e. the best match.
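The SVD step above can be sketched with a Kabsch-style rigid fit repeated over candidate scales, keeping the scale with the smallest residual. This is a hedged illustration, not the patent's implementation: the simple scale sweep stands in for the binary search described earlier, and all function names are assumptions.

```python
import numpy as np

def rigid_fit(src, dst):
    """Best rotation R and translation t mapping src points onto dst (least squares)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def align_with_scale(twin_pts, world_pts, scales):
    """Fit at each candidate scale; keep the (err, s, R, t) with minimum error."""
    best = None
    for s in scales:
        R, t = rigid_fit(s * twin_pts, world_pts)
        err = np.linalg.norm((s * twin_pts) @ R.T + t - world_pts)
        if best is None or err < best[0]:
            best = (err, s, R, t)
    return best

# Synthetic check: world landmarks are the twin's landmarks scaled by 2
# and shifted; the sweep should recover scale ~2 with ~zero residual.
twin = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
world = 2.0 * twin + np.array([3.0, -1.0, 0.5])
err, s, R, t = align_with_scale(twin, world, scales=np.linspace(0.5, 3.0, 26))
assert abs(s - 2.0) < 1e-9 and err < 1e-6
```

Once `s`, `R` and `t` are found from the landmark pairs, the same transform is applied to every other point of the digital twin, as the next passage notes.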
- step 108 involves orienting the digital twin in accordance with an orientation of the physical object relative to the user, and scaling the digital twin in accordance with a position of the user in a scenario displayed in the VR environment.
- the "size” of the virtual object is how 'big' it looks to the user - e.g. the number of pixels it occupies in a VR display.
- the "scale” of the virtual object is how big the virtual object is in the virtual environment - e.g. relative to other objects in the virtual environment.
- the “size” may change but the "scale” will generally be fixed. Therefore, when the virtual object appears further away from the user in the virtual environment, its size will be smaller but its scale will remain the same.
- step 108 involves locating the virtual object, including scaling it, in the virtual environment. This will generally only be performed once, to fix the coordinates, orientation and scale of the virtual object in the virtual environment - assuming the object itself is static in that environment. Movements of the VR system will result in changes in the viewing direction and position of the user. These movements will result in a transformation of the virtual environment, and the virtual object will be similarly transformed in the virtual environment.
- the scenario may be static - i.e. the digital twin or physical object may have a single state, such as a passive/inactive state.
- the scenario may instead be evolving. Therefore, scaling the digital twin in accordance with step 108 may involve sizing the digital twin to suit the scenario. For example, while the physical object may be near the user, the digital twin may be rendered at a distance from the user. The digital twin is therefore down-sized based on the distance at which the user should perceive it to be from their current position in the VR environment.
- the user may use a controller.
- the user uses the controller to produce a signal and the VR display receives the signal (which may include a processor system interpreting the signal and mapping it to a signal suitable for actioning by the VR display) and teleports the user to a new position in the VR environment in accordance with the signal.
- the digital twin can then be resized based on the new position of the user in the VR environment.
- This teleporting process - i.e. moving between locations in the VR environment without a corresponding movement of the user in the physical environment - enables a user to navigate through a VR environment while remaining stationary, thereby allowing the VR environment to be of any size regardless of the size of the physical environment. It also avoids the need for the user to navigate around obstacles. Obstacles can therefore be placed in the VR environment without corresponding objects being placed in the physical environment.
- the method 100 may be employed, for example, on a VR system 700 as shown in Figure 7.
- the VR system 700, shown in the block diagram of Figure 7, will typically include a desktop computer or laptop.
- the VR system 700 may instead include a mobile computer device such as a smart phone, a personal data assistant (PDA), a palm-top computer, or multimedia Internet enabled cellular telephone.
- the VR system 700 includes the following components in electronic communication via a bus 712:
- a VR display 702 for displaying a VR environment to a user
- non-volatile (non-transitory) memory 704 comprising (i.e. storing) the digital twin;
- random access memory (RAM) 706, which may store the digital twin
- transceiver component 710 that includes N transceivers
- Although the components depicted in Figure 7 represent physical components, Figure 7 is not intended to be a hardware diagram. Thus, many of the components depicted in Figure 7 may be realized by common constructs or distributed among additional physical components. Moreover, it is certainly contemplated that other existing and yet-to-be developed physical components and architectures may be utilized to implement the functional components described with reference to Figure 7.
- the display 702 generally operates to provide a presentation of content, such as the digital twin or twins and the VR environment more generally, to a user. It may be realized by any of a variety of displays (e.g., CRT, LCD, HDMI, micro projector and OLED displays).
- non-volatile data storage 704 functions to store (e.g., persistently store) data and executable code, including a virtual textile (data container for texture map and cloth simulation parameters). It may also store the 3D model or digital twin.
- the executable code in this instance comprises instructions enabling the system 700 to perform the methods disclosed herein, such as that described with reference to Figure 1.
- the non-volatile memory 704 includes bootloader code, modem software, operating system code, file system code, and code to facilitate the implementation of other components which, being well known to those of ordinary skill in the art, are for simplicity not depicted nor described.
- the non-volatile memory 704 is realized by flash memory (e.g., NAND or ONENAND memory), but it is certainly contemplated that other memory types may be utilized as well. Although it may be possible to execute the code from the non-volatile memory 704, the executable code in the non-volatile memory 704 is typically loaded into RAM 706 and executed by one or more of the N processing components 708.
- the N processing components 708 in connection with RAM 706 generally operate to execute the instructions stored in non-volatile memory 704.
- the N processing components 708 may include a video processor, modem processor, DSP, graphics processing unit (GPU), and other processing components. It is possible for the N processing components 708 to include a central processing unit (CPU), which executes operations in series. However, to rapidly transform the digital twin with movements of the VR display, a GPU may be provided.
- when the instructions stored in memory 704 (or 706) are executed by the processing system 708, they cause the VR system 700 to detect a location of each landmark using the detector 720, match each landmark to a corresponding landmark on the digital twin using the processor system 708, and process, also using the processor system 708, the corresponding landmarks to orient, position and scale the digital twin in the VR environment based on a position of the physical object.
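One conventional way to compute an orientation, position and scale from matched landmark pairs is a closed-form least-squares similarity fit (the Umeyama method). The sketch below is illustrative only - the disclosure does not prescribe a particular algorithm - and the function name and array layout are assumptions:

```python
import numpy as np

def align_digital_twin(physical_pts, twin_pts):
    """Estimate scale s, rotation R and translation t such that
    s * R @ twin_point + t best matches the corresponding detected
    physical landmark. Inputs are matched (N, 3) arrays."""
    mu_p = physical_pts.mean(axis=0)
    mu_t = twin_pts.mean(axis=0)
    P = physical_pts - mu_p                  # centred physical landmarks
    T = twin_pts - mu_t                      # centred twin landmarks
    # Cross-covariance between the two point sets, and its SVD
    H = P.T @ T / len(twin_pts)
    U, D, Vt = np.linalg.svd(H)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                       # guard against reflections
    R = U @ S @ Vt
    var_t = (T ** 2).sum() / len(twin_pts)   # variance of twin landmarks
    s = np.trace(np.diag(D) @ S) / var_t
    t = mu_p - s * R @ mu_t
    return s, R, t
```

Applying `s * (vertices @ R.T) + t` to every vertex of the 3D model would then place the digital twin over the physical object; with noisy landmark measurements the fit is the least-squares best alignment.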
- the transceiver component 710 includes M transceiver chains, which may be used for communicating with external devices via wireless networks 716.
- Each of the M transceiver chains may represent a transceiver associated with a particular communication scheme.
- each transceiver may correspond to protocols that are specific to local area networks, cellular networks (e.g., a CDMA network, a GPRS network, a UMTS networks), and other types of communication networks.
- Reference numeral 718 indicates that the VR system 700 may include physical buttons, as well as virtual buttons such as those that would be displayed in the VR environment. Moreover, the VR system 700 may communicate with other computer systems or data sources over network 716.
- the VR system 700 also includes a detector, presently embodied by controller 720, for detecting landmarks.
- controller 720 may also be used to navigate through the VR environment.
- Non-transitory computer-readable medium 704 includes both computer storage medium and communication medium including any medium that facilitates transfer of a computer program from one place to another.
- a storage medium may be any available medium that can be accessed by a computer, such as a USB drive, solid state hard drive or hard disk.
- the VR system 700 can robustly implement the method 100, to calibrate the digital twin (or digital twins where multiple physical objects are rendered) in the VR environment.
- the alignment process was repeatedly performed whilst randomizing the starting position, rotation and scale of the model (mimicking variations in model generation process) to be aligned as well as adding noise to the calibration markers (mimicking variations in taking measurements of the physical landmarks).
- the system should have an alignment tolerance of 1 cm for the user to feel that the physical object and virtual object (digital twin) are aligned.
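That 1 cm tolerance can be expressed as a simple acceptance check applied after calibration. The function name, point format and units below are assumptions made for this sketch:

```python
TOLERANCE_M = 0.01  # 1 cm, expressed in metres

def alignment_ok(physical_pts, aligned_twin_pts, tol=TOLERANCE_M):
    """Return True if every aligned virtual landmark lies within `tol`
    of its matched physical landmark. Points are (x, y, z) tuples."""
    for p, q in zip(physical_pts, aligned_twin_pts):
        err = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        if err > tol:
            return False
    return True
```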
- the object 900 includes a body 902 and a control module 904 shown in broken lines as it is internal of the body 902 and in communication with a VR system via cable 906. In other embodiments, the object 900 may not be in communication with a VR system or may communicate wirelessly.
- the body 902 is configured to correspond to a digital twin in a virtual reality (VR) environment displayed to a user by a VR system. This correspondence may be achieved by the calibration method 100.
- the control module 904 is configured to control a state of the object 900.
- the state of the object 900 is defined by reference to its behaviour - the object 900 has at least two states. For example, the object may generate an audible noise in one state and be silent in another state.
- an action performed by the user on the object 900 is detected and rendered in the VR environment on the digital twin - e.g. the hands of the user performing an action on the physical object will be detected by the VR system or a sensor system of the object 900, and rendered in the VR environment.
- the control module 904 is configured to transition the object 900 from a current state to a next state.
- the object 900 can therefore change state according to a pre-programmed sequence. This enables the user to interact with a digital twin by performing corresponding actions on the object 900.
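The pre-programmed sequence can be pictured as a small state machine in which the control module advances the object only when the detected user action matches the action expected in the current state. The class, state names and actions below are hypothetical examples, not taken from the disclosure:

```python
# Each entry pairs a state with the action required to leave it;
# None marks a terminal state.
SCENARIO = [
    ("silent",    "check_pulse"),
    ("breathing", "listen_breath"),
    ("speaking",  None),
]

class ControlModule:
    def __init__(self, scenario):
        self.scenario = scenario
        self.index = 0           # start in the initial state

    @property
    def state(self):
        return self.scenario[self.index][0]

    def on_action(self, action):
        """Transition to the next state iff the detected action matches
        the predetermined action for the current state."""
        expected = self.scenario[self.index][1]
        if expected is not None and action == expected:
            self.index += 1
            return True          # signal the VR system: state changed
        return False
```

A wrong action leaves the object in its current state, which is what forces the user through the intended sequence.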
- the interactive object 900 will generally include a transmitter (e.g. transceiver 710 of Figure 7), presently forming an interface 908 between the cable 906 and the object 900, and a sensor system 910.
- the sensor system 910 detects the action and sends a signal to the control module 904.
- the control module 904 sends a signal via the transmitter 908 to the VR system (e.g. system 700) corresponding to the action detected by the sensor system 910.
- the control module 904 may also determine if the action corresponds to the predetermined action, and only send the signal, via the transmitter 908 to the VR system if the action corresponds to the predetermined action.
- the sensor system 910 and control module 904 are shown as separate components, they may form part of the same component or components.
- the sensor system may determine whether the predetermined action or relevant portion thereof has been performed.
- the predetermined action may involve the application of pressure for a predetermined period, to a particular region of the physical/virtual object.
- the sensor system may include a pressure sensor located at that particular region and may time the duration of application of pressure to that region. If the pressure is applied for the requisite duration, the sensor system (or control module) generates a signal specifying that the predetermined action has been performed.
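Assuming regularly sampled pressure readings, the duration check described above might look like the following sketch (the function and parameter names are invented for illustration):

```python
def pressure_held(samples, min_pressure, max_pressure, required_s, dt):
    """samples: sequence of pressure readings taken every dt seconds.
    Returns True if some contiguous run of in-range readings lasts
    at least required_s seconds."""
    held = 0.0
    for p in samples:
        if min_pressure <= p <= max_pressure:
            held += dt
            if held >= required_s:
                return True
        else:
            held = 0.0  # pressure released or out of range: restart timing
    return False
```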
- the sensor 910 may comprise a pressure sensor positioned in the vicinity of the wrist of the right arm of the mannequin.
- the pressure sensor may measure the pressure applied by the user to the pressure sensor - when simulating checking a pulse - and send a signal to the control module 904.
- the control module may determine if a pressure has been applied to the sensor, if the user has maintained the pressure for a sufficient period, or if the applied pressure is within a particular desired pressure range, and so on. If the action (application of pressure by the user to the pressure sensor) accords with a predetermined action - which may be stored in memory accessible by the control module 904 - the control module 904 sends a signal to the VR system and transitions the object 900 to the next state - e.g. activating an audio signal to replicate shallow breathing that the user then needs to listen to. This ensures the user follows a predetermined sequence of actions, or a 'scenario', when interacting with the object.
- the sensor system can employ any desired sensors.
- it may include one or more of: a potentiometer for measuring a relative rotation between two or more portions of the body - e.g. in the mannequin example the object may be static on approach by the user and, when the user rotates one portion of the body relative to another (such as tilting the head back to clear the airway), the VR system and/or control module 904 then updates the state of the object to "breathing"; a pressure sensor for measuring pressure applied to a location on the body; and a microphone on or in the body, for detecting one or more of speech from the user and sound from touch interaction between the user and the body - this can be used to determine if the user is speaking to the object 900 or, if coupled with a natural language processing system, to determine what the user is saying to the object 900 - e.g. whether the user is asking a question prescribed when triaging a patient simulated using the object 900.
- the control module 904 or VR system 700 is configured to combine the signal from the sensor system 910 and actions of the user, to detect performance of the predetermined action. This checks whether all relevant components of an action have been performed. For example, if a user is applying pressure to a pulse using their elbow, images captured by the VR system (or a camera in the object 900) may be used to identify that the user's hands are not correctly positioned on the object 900. Therefore, although the pressure may be correctly applied, the predetermined action has not been performed.
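A minimal sketch of that combination: the sensor-system result is accepted only if the tracked hand positions also place a hand at the expected location (all names, the point format and the distance threshold are assumptions):

```python
def predetermined_action_performed(pressure_ok, hand_positions, target, tol):
    """pressure_ok: result of the sensor-system check.
    hand_positions: tracked (x, y, z) hand locations from the VR system.
    target/tol: where a hand must be, and how close counts as on target."""
    def near(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 <= tol
    hands_ok = any(near(h, target) for h in hand_positions)
    return pressure_ok and hands_ok          # both checks must pass
```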
- transmitter 908 may be part of a transceiver and therefore comprise a receiver for receiving a control signal from the VR system.
- the control signal can be used to indicate to the control module 904 that the object 900 can transition from its current state to its next state.
- the control signal may also include a scenario or an object state. In either case, the control module 904 is configured to control the object 900 in accordance with the control signal.
- a scenario will include a plurality of states.
- the plurality of states will include an initial state of the object 900, that defines the starting condition - i.e. the state of the object when the user first interacts with it - and one or more further states.
- the object 900 may revisit the initial state.
- a scenario may simulate triaging multiple victims in an accident.
- the object 900 can behave as necessary to simulate the behaviour of each victim, in the order that those victims are approached by the user.
- the object 900 may transition from its current state to the initial state to commence behaving like the next victim visited by the user.
- control module 904 controls the state (i.e. the behaviour) of the object in accordance with the scenario received from the VR system.
- the object 900 also includes a stimulus system 912.
- the stimulus system 912 provides stimulus to the user during interaction between the user and the object 900.
- the stimulus system 912 and the sensor system 910 each provide the object 900 with active behaviour and an ability to respond to the user, rather than being a passive object.
- the stimulus system 912 is controlled by the control module 904 to deliver the stimulus to the user in accordance with at least one of a state of the object and a scenario involving the object.
- the stimulus system includes a module 912 at the wrist of the object 900.
- the module 912 may include a servomotor that simulates the pulse of a victim, and the sensor system 910 can determine if the user is applying pressure at the correct location, and/or with the correct force, to accurately detect the pulse. If the user checks the pulse correctly, the object 900 may be transitioned to its next state. If the user fails to check the pulse correctly, the object 900 may not transition or, after a predetermined period of time, the simulation may exit.
- the stimulus system can include any suitable stimuli, such as a haptic system to deliver a haptic stimulus to the user (e.g. the servomotor example, or a compressor system for simulating respiratory function), a speaker system to deliver a sound stimulus to the user, an olfactory system to deliver an olfactory stimulus to the user, and a thermal system to deliver a thermal stimulus to the user.
- the VR system 700 may interact with any number of interactive objects 900. Moreover, the VR system 700 may change content presented to the user in the VR environment in response to detection of the predetermined action. That change in content may be a change in behaviour of the digital twin - e.g. if the user clears the airway of the physical object 900, the digital twin may visibly start breathing.
- the active objects afford interaction between a user and the physical environment, where that interaction maps to a scenario or state that is presented in the VR environment.
- Embodiments of such objects can, when interacting with a VR system, enable heightened interaction and scenario mapping beyond what has previously been achievable.
- the objects and VR system can produce a more realistic simulation in a VR environment than has previously been achievable.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202280019831.6A CN116964638A (en) | 2021-02-17 | 2022-02-17 | Method involving virtual reality systems and interactive objects |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG10202101597Q | 2021-02-17 | ||
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022177505A1 true WO2022177505A1 (en) | 2022-08-25 |
Family
ID=82932305
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/SG2022/050072 WO2022177505A1 (en) | 2021-02-17 | 2022-02-17 | Methods relating to virtual reality systems and interactive objects |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116964638A (en) |
WO (1) | WO2022177505A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190392646A1 (en) * | 2015-05-05 | 2019-12-26 | Ptc Inc. | Augmented reality system |
CN112091982A (en) * | 2020-11-16 | 2020-12-18 | 杭州景业智能科技股份有限公司 | Master-slave linkage control method and system based on digital twin mapping |
CN112132900A (en) * | 2020-09-29 | 2020-12-25 | 凌美芯(北京)科技有限责任公司 | Visual repositioning method and system |
US20210042992A1 (en) * | 2017-08-30 | 2021-02-11 | Compedia Software and Hardware Development Ltd. | Assisted augmented reality |
US20210201584A1 (en) * | 2019-12-31 | 2021-07-01 | VIRNECT inc. | System and method for monitoring field based augmented reality using digital twin |
CN113450448A (en) * | 2020-03-25 | 2021-09-28 | 阿里巴巴集团控股有限公司 | Image processing method, device and system |
- 2022-02-17 WO PCT/SG2022/050072 patent/WO2022177505A1/en active Application Filing
- 2022-02-17 CN CN202280019831.6A patent/CN116964638A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN116964638A (en) | 2023-10-27 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22756644 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 202280019831.6 Country of ref document: CN |
NENP | Non-entry into the national phase |
Ref country code: DE |
WWE | Wipo information: entry into national phase |
Ref document number: 11202305954W Country of ref document: SG |
122 | Ep: pct application non-entry in european phase |
Ref document number: 22756644 Country of ref document: EP Kind code of ref document: A1 |