WO2023234824A1 - A virtual-reality interaction system

A virtual-reality interaction system

Info

Publication number
WO2023234824A1
Authority
WO
WIPO (PCT)
Application number
PCT/SE2023/050458
Other languages
French (fr)
Inventor
Mattias KRUS
Ola Wassvik
Eric ROSTEDT
Original Assignee
Flatfrog Laboratories Ab
Application filed by Flatfrog Laboratories Ab
Publication of WO2023234824A1

Classifications

    • G Physics
    • G06 Computing; calculating or counting
    • G06F Electric digital data processing
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements

Abstract

A virtual-reality (VR) interaction system is disclosed comprising a VR controller configured to generate a model of a user within a virtual space to be displayed in a wearable display device, and a positioning unit configured to receive spatial position information around the user in a room, wherein the spatial position information comprises a first set of spatial data of the user and of a placement of a peripheral sensor, captured by a point of view (POV) sensor of the wearable display device when worn by the user, and a second set of spatial data of the user and of a placement of the POV sensor, captured by the peripheral sensor. The positioning unit is further configured to determine user coordinates of the user in the room based on a relative sensor position and the first and second sets of spatial data, and to communicate the user coordinates to the VR controller to generate the model.

Description

A VIRTUAL-REALITY INTERACTION SYSTEM
Technical Field
The present invention relates generally to the field of virtual-reality (VR) interaction systems. More particularly, the present invention relates to a VR controller and a positioning unit, and to a related method.
Background
To an increasing extent, virtual-reality (VR) interaction systems are being used as an interaction tool in a wide range of business and recreational applications. Virtual reality presents the user with an environment partially, if not fully, disconnected from the actual physical environment of the user. Augmented reality (AR) allows the overlay of virtual objects in the physical environment. Various ways of interacting with this environment have been tried. These include IR-tracked gloves, IR-tracked wands or other gesturing tools, and gyroscope-/accelerometer-tracked objects. The objects are typically tracked using IR sensors configured to view and triangulate IR light sources on the objects. Such interaction systems are, however, typically associated with sub-optimal accuracy in tracking the user’s full movements in VR applications, or are too complex for the average user to implement in the home environment. These limitations hinder the potential of VR as a more widespread interaction tool in the increasingly advanced VR environments which can be generated with today’s processing power.
Summary
It is an objective of the invention to at least partly overcome one or more of the above-identified limitations of the prior art.
One objective is to provide a VR interaction system with high-precision user tracking that is less complex for the user to implement. One or more of these objectives, and other objectives that may appear from the description below, are at least partly achieved by means of a VR interaction system and a related method according to the independent claims, embodiments thereof being defined by the dependent claims.
According to a first aspect a virtual-reality (VR) interaction system is provided comprising a VR controller configured to generate a model of a user in a VR environment coordinate system (vx,vy,vz) within a virtual space to be displayed in a wearable display device, and a positioning unit configured to receive spatial position information around the user in a room having room coordinates (x,y,z), wherein the spatial position information comprises a first set of spatial data of the user and of a placement of a peripheral sensor, wherein the first set of spatial data is captured by a point of view (POV) sensor of the wearable display device when worn by the user, and a second set of spatial data of the user and of a placement of the POV sensor, wherein the second set of spatial data is captured by the peripheral sensor, determine, based on the first and second sets of spatial data, the placement of the peripheral sensor relative to the placement of the POV sensor as a relative sensor position, determine user coordinates of the user in the room based on the relative sensor position and the first and second sets of spatial data, and communicate the user coordinates to the VR controller, wherein the VR controller is configured to map the user coordinates to the VR environment coordinate system to generate said model.
According to a second aspect a method in a virtual-reality (VR) interaction system is provided comprising receiving at a positioning unit spatial position information around a user in a room having room coordinates, wherein the spatial position information comprises a first set of spatial data of the user and of a placement of a peripheral sensor, wherein the first set of spatial data is captured by a point of view (POV) sensor of a wearable display device when worn by the user, and a second set of spatial data of the user and of a placement of the POV sensor, wherein the second set of spatial data is captured by the peripheral sensor, determining, based on the first and second sets of spatial data, the placement of the peripheral sensor relative to the placement of the POV sensor as a relative sensor position, determining user coordinates (xu,yu,zu) of the user in the room based on the relative sensor position and the first and second sets of spatial data, and mapping the user coordinates to a VR environment coordinate system (vx,vy,vz) to generate a model of the user within a virtual space to be displayed in the wearable display device.
According to a third aspect a computer program product is provided comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method according to the second aspect.
Further examples of the invention are defined in the dependent claims, wherein features for the first aspect may be implemented for the second and subsequent aspects, and vice versa.
Some examples of the disclosure provide for capturing input from a user’s interaction with a VR environment with a high accuracy.
Some examples of the disclosure provide for high accuracy in tracking a user’s position and movements within a VR environment.
Some examples of the disclosure provide for a VR interaction system with user feedback of high precision.
Some examples of the disclosure provide for a VR interaction system with an enhanced VR experience.
It should be emphasized that the term “comprises/comprising” when used in this specification is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
Brief Description of the Drawings
These and other aspects, features and advantages of which examples of the invention are capable will be apparent and elucidated from the following description of examples of the present invention, reference being made to the accompanying schematic drawings, in which:
Fig. 1 shows a VR interaction system according to an example of the disclosure;
Fig. 2 shows a VR interaction system according to an example of the disclosure;
Fig. 3 shows a VR interaction system according to an example of the disclosure;
Fig. 4 shows a VR interaction system according to an example of the disclosure; and
Fig. 5 is a flowchart of a method in a VR interaction system according to an example of the disclosure.
Detailed Description
Specific examples of the invention will now be described with reference to the accompanying drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these examples are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. The terminology used in the detailed description of the examples illustrated in the accompanying drawings is not intended to be limiting of the invention. In the drawings, like numbers refer to like elements.
Fig. 1 is a schematic illustration of a virtual-reality (VR) interaction system 100 comprising a VR controller 101 configured to generate a model 107 of a user 102 in a VR environment coordinate system (vx,vy,vz) within a virtual space, as further schematically depicted in Fig. 1 within the circular dashed lines. The virtual space is to be displayed in a wearable display device 103, such as a VR headset, for the user 102. The VR interaction system 100 comprises a positioning unit 104 configured to receive spatial position information around the user 102 in a room having room coordinates (x,y,z). The room should be construed as any physical environment, such as indoors or outdoors. The spatial position information comprises a first set of spatial data of the user 102 and of a placement of a peripheral sensor 105. The first set of spatial data of the user 102 may include any input device 109 held by the user 102, such as a stylus, and/or any part of the user 102, such as a user’s finger, hand, arm, upper body or full body. Thus, reference to the user 102 in the present disclosure should be construed as any combination of such part and/or input device 109. The first set of spatial data is captured by a point of view (POV) sensor 106 of the wearable display device 103 when worn by the user 102. The spatial position information comprises further a second set of spatial data of the user 102 and of a placement of the POV sensor 106. The second set of spatial data is captured by the peripheral sensor 105. The top-down view of Fig. 2 is a further schematic illustration of an example where the field-of-view (see dashed arrows) of the POV sensor 106 of the wearable display device 103 captures the user 102 and the peripheral sensor 105. The field-of-view (see dashed arrows) of the peripheral sensor 105 captures the user 102 and the POV sensor 106. The POV sensor 106 may comprise an image sensor. The first set of spatial data may thus comprise image data of the user 102, and image data of the peripheral sensor 105 in the room. The peripheral sensor 105 may comprise an image sensor. The second set of spatial data may thus comprise image data of the user 102, and image data of the POV sensor 106 in the room.
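Purely as a non-limiting illustration of how the spatial position information could be organized in software, the following Python sketch defines minimal containers for the two sets of spatial data; the class and field names are illustrative assumptions and are not prescribed by the disclosure.
```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical containers for the spatial position information described above.
# Field names are illustrative; the disclosure does not prescribe a data format.

Vec3 = Tuple[float, float, float]  # room coordinates (x, y, z)


@dataclass
class SpatialDataSet:
    """One set of spatial data, as captured by either the POV sensor 106
    or the peripheral sensor 105."""
    source_sensor: str                 # e.g. "POV" or "peripheral"
    user_observations: List[Vec3]      # observed points on the user / input device 109
    other_sensor_placement: Vec3       # observed placement of the *other* sensor


@dataclass
class SpatialPositionInformation:
    """Spatial position information received by the positioning unit 104."""
    first_set: SpatialDataSet          # captured by the POV sensor 106
    second_set: SpatialDataSet         # captured by the peripheral sensor 105


if __name__ == "__main__":
    info = SpatialPositionInformation(
        first_set=SpatialDataSet("POV", [(0.3, 0.1, -0.4)], (1.2, 0.0, -0.8)),
        second_set=SpatialDataSet("peripheral", [(-0.9, 0.1, 0.4)], (-1.2, 0.0, 0.8)),
    )
    print(info.first_set.other_sensor_placement)
```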
The positioning unit 104 is configured to determine, based on the first and second sets of spatial data, the placement of the peripheral sensor 105 relative to the placement of the POV sensor 106 as a relative sensor position. For example, the positioning unit 104 may be configured to determine an angle (v1, v2) of a line-of-sight (l1) between the POV sensor 106 and the peripheral sensor 105, relative to a reference axis (c1) of the POV sensor 106, and/or relative to a reference axis (c2) of the peripheral sensor 105.
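A minimal 2D sketch of this geometry is given below, assuming the angles are measured in the horizontal plane and that the length of the line-of-sight (the sensor baseline) is available, e.g. from depth data or apparent size; neither assumption is mandated by the disclosure.
```python
import math

def relative_sensor_pose(v1: float, v2: float, baseline: float):
    """Estimate the peripheral sensor's pose in the POV sensor's frame.

    v1: bearing of the peripheral sensor seen from the POV sensor,
        measured from the POV reference axis c1 (radians).
    v2: bearing of the POV sensor seen from the peripheral sensor,
        measured from the peripheral reference axis c2 (radians).
    baseline: assumed distance along the line-of-sight l1 (metres).

    Returns (x, y, heading): peripheral position and the direction of its
    reference axis c2, all expressed in the POV frame (2D simplification).
    """
    x = baseline * math.cos(v1)
    y = baseline * math.sin(v1)
    # The line-of-sight is the same physical line seen from both ends: from
    # the peripheral sensor it points back at angle v1 + pi in the POV frame
    # and at angle v2 in its own frame, which fixes the heading of c2.
    heading = (v1 + math.pi) - v2
    return x, y, math.atan2(math.sin(heading), math.cos(heading))  # wrap to [-pi, pi]


if __name__ == "__main__":
    x, y, heading = relative_sensor_pose(v1=math.radians(30), v2=math.radians(-100), baseline=2.0)
    print(f"peripheral at ({x:.2f}, {y:.2f}) m, c2 heading {math.degrees(heading):.1f} deg")
```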
The positioning unit 104 is further configured to determine user coordinates (xu,yu,zu) of the user 102 in the room based on the relative sensor position and the first and second sets of spatial data. As described above, the user 102 should be construed as any combination of parts of the user and/or any input device 109 held, worn, operated by, or otherwise engaged by the user 102. The user coordinates (xu,yu,zu) may thus include any such combination. Fig. 2 shows a schematic illustration of the hand of the user 102, including an input device 109, and the associated user coordinates (xu,yu,zu) determined by the positioning unit 104. The first and second sets of spatial data may comprise image data and the positioning unit 104 may be configured to determine the user coordinates (xu,yu,zu) based on the image data. The positioning unit 104 is configured to communicate the user coordinates (xu,yu,zu) to the VR controller 101. The VR controller 101 is configured to map the user coordinates (xu,yu,zu) to the VR environment coordinate system (vx,vy,vz) to generate the model 107 of the user 102 in the virtual space. The model 107 may also include any input device 109 or other physical object held, engaged or worn by the user 102. The user coordinates (xu,yu,zu) may comprise a set of coordinates defining the user 102 in three dimensions (3D). Hence, the VR controller 101 may be configured to generate the model 107 in 3D in the virtual space, based on the set of user coordinates. For example, the position and motion of the user’s hand may be modelled by the VR controller 101 as a multi-point skeleton model 107, such as a 21-parameter hand pose skeleton model 107, in the virtual space, based on the determined user coordinates (xu,yu,zu).
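The mapping from room coordinates to the VR environment coordinate system could, for instance, be a rigid transform; the following sketch assumes a rotation, translation and optional uniform scale, which is only one possible choice, with a 21-point hand pose as the example payload.
```python
import numpy as np

def room_to_vr(points_room: np.ndarray,
               rotation: np.ndarray,
               translation: np.ndarray,
               scale: float = 1.0) -> np.ndarray:
    """Map user coordinates (x_u, y_u, z_u) in room coordinates to the VR
    environment coordinate system (v_x, v_y, v_z).

    points_room: (N, 3) array of user key-points, e.g. a 21-point hand pose.
    rotation:    (3, 3) rotation matrix from room axes to VR axes.
    translation: (3,) offset of the room origin inside the VR environment.
    scale:       optional uniform scale (the VR world need not be 1:1).
    """
    return scale * points_room @ rotation.T + translation


if __name__ == "__main__":
    # A hypothetical 21-point hand pose in room coordinates (here random).
    hand_pose_room = np.random.rand(21, 3)
    # 90-degree yaw about the vertical axis, plus a translation into the scene.
    yaw = np.radians(90)
    R = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                  [np.sin(yaw),  np.cos(yaw), 0],
                  [0,            0,           1]])
    t = np.array([0.0, 0.0, 1.5])
    hand_pose_vr = room_to_vr(hand_pose_room, R, t)
    print(hand_pose_vr.shape)  # (21, 3) -> key-points for the model in the virtual space
```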
Having a positioning unit 104 receiving a first set of spatial data captured by the POV sensor 106, of the user 102 and the peripheral sensor 105, as well as a second set of spatial data captured by the peripheral sensor 105, of the user 102 and the POV sensor 106, provides for a more accurate determination of the user coordinates (xu,yu,zu). The spatial data from the POV sensor 106 of the wearable display device 103 is thus combined with the spatial data of the peripheral sensor 105 to determine the user coordinates (xu,yu,zu) with a greater precision than the stereo-camera set-ups of typical VR headsets, which have a limited camera separation and depth accuracy. The interaction system 100 is also less complex and less expensive to implement than dedicated VR room installations with fixed VR cameras, typical in commercial settings. The peripheral sensor 105 may be freely movable in the room with respect to the user 102, such as an image sensor 105 connected to, or integrated with, the user’s laptop, as exemplified in Fig. 1. Having the positioning unit 104 configured to determine the relative sensor position, from the first and second sets of spatial data as described above, allows the peripheral sensor 105 to be freely movable with respect to the user 102.
The second set of spatial data may be sent wirelessly by the peripheral sensor 105 to the wearable display device 103 to be received by the positioning unit 104. The positioning unit 104 may thus be in communication with the wearable display device 103. The positioning unit 104 may be directly connected to the wearable display device 103 in one example, such as integrated with the wearable display device 103. The positioning unit 104 may have a direct wired connection to the POV sensor 106, while receiving the second set of spatial data from the peripheral sensor 105 over wireless communication. Further, the VR controller 101 may be directly connected to the wearable display device 103 in one example, such as integrated with the wearable display device 103. The VR interaction system 100 may thus comprise the peripheral sensor 105 and the wearable display device 103 in one example. The VR interaction system 100 thus allows for maintaining the practicability of utilizing a wearable display device 103, such as a 3D VR headset, while incorporating spatial data captured by a peripheral sensor 105 into the mapping of the user coordinates (xu,yu,zu) to the virtual space model 107. The peripheral sensor 105 thus sends spatial data, such as data tracking the user’s position, from its viewpoint to the VR headset, which merges that tracking data with its own viewpoint to create an improved reconstruction of the tracked object, typically the user’s hands and/or input device 109. This provides for solving many problems with the prior methods. E.g. the peripheral sensor 105 allows for obtaining a viewpoint from an angle with a greater separation than that of a stereo camera of a typical VR headset, giving better position information in the depth direction as seen from the VR headset. With a view from a different position, an un-occluded view of parts of the hand can be obtained, which otherwise cannot be seen from a VR headset with a stereo camera. For most positions when the peripheral sensor 105 is on the other side of the user’s hand from the VR headset, an increase in distance from the VR headset results in a decrease of the distance to the peripheral sensor 105, providing for accurate positioning of the user 102.
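One possible way to merge the two viewpoints, assuming both sensor poses are already expressed in a common room frame, is to triangulate each tracked feature as the point closest to the two viewing rays, as sketched below; this is an illustrative computation, not a procedure fixed by the disclosure.
```python
import numpy as np

def triangulate_point(p1, d1, p2, d2):
    """Fuse two viewpoints: find the point closest to both viewing rays.

    p1, d1: origin and unit direction of the ray from the POV sensor
            towards a tracked feature (e.g. a fingertip).
    p2, d2: origin and unit direction of the corresponding ray from the
            peripheral sensor, in the same room frame (i.e. after the
            relative sensor position has been determined).
    Returns the midpoint of the shortest segment between the two rays.
    """
    p1, d1, p2, d2 = map(np.asarray, (p1, d1, p2, d2))
    r = p2 - p1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # near-parallel rays: poor geometry
        return p1 + (r @ d1) * d1  # fall back to the POV ray
    t1 = (c * (r @ d1) - b * (r @ d2)) / denom
    t2 = (b * (r @ d1) - a * (r @ d2)) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))


if __name__ == "__main__":
    # POV sensor at the headset, peripheral sensor across the room: the wide
    # baseline is what gives the depth accuracy discussed above.
    fingertip = triangulate_point(
        p1=[0, 0, 0],   d1=np.array([0.0, 1.0, 0.0]),
        p2=[2, 0.5, 0], d2=np.array([-1.0, 0.0, 0.0]))
    print(fingertip)   # -> [0.  0.5 0. ]
```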
The peripheral sensor may be a POV sensor of a second wearable display device (not shown). Thus, the spatial data obtained from a VR headset of a second user, having the first user 102 within the field-of-view of the second user, may be combined with the spatial data captured by the POV sensor 106 of the first user 102. The field-of-view of the second user in a direction towards the first user 102 may thus be utilized for determining the user coordinates (xu,yu,zu) of the first user 102, and vice versa.
It is conceivable that the peripheral sensor 105 may be integrated in other devices such as in whiteboards or other interaction surfaces, such as touch surfaces, in some examples. The position of the peripheral sensor 105 may be initially unknown also in this case, before the relative sensor position is determined from the first and second sets of spatial data. The advantages of combining the spatial data from the POV sensor 106 of the wearable display device 103 with the spatial data captured by such a peripheral sensor 105 are provided for also in this example, i.e. a more accurate determination of the user coordinates (xu,yu,zu). The positioning unit 104 may be configured to receive spatial position information from a plurality of peripheral sensors 105. The plurality of peripheral sensors 105 may be freely movable in the room with respect to the user 102, such as respective image sensors 105 of a plurality of laptops in the room, or a plurality of VR headsets, or other sensors 105 as exemplified. The aforementioned second set of spatial data may thus comprise second subsets of spatial data received from a respective peripheral sensor 105. The positioning unit 104 may be configured to determine, based on the first set of spatial data and second subsets of spatial data, the placement of the plurality of peripheral sensors 105 relative to the placement of the POV sensor 106 as a plurality of relative sensor positions. The positioning unit 104 may be configured to determine user coordinates (xu,yu,zu) of the user 102 in the room based on the plurality of relative sensor positions, the first set of spatial data and the second subsets of spatial data. The VR interaction system 100 may comprise a plurality of peripheral sensors 105. Each of the plurality of peripheral sensors 105 may be freely movable.
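The fusion naturally extends to several peripheral sensors; the sketch below finds the least-squares intersection of any number of viewing rays, again under the assumption that all sensor poses have been expressed in a common room frame.
```python
import numpy as np

def fuse_rays(origins, directions):
    """Least-squares point closest to N viewing rays (one per sensor).

    origins:    (N, 3) sensor positions in room coordinates, i.e. the POV
                sensor plus any number of peripheral sensors, after their
                relative positions have been determined.
    directions: (N, 3) direction of each sensor's ray towards the same
                tracked user feature.
    """
    origins = np.asarray(origins, dtype=float)
    directions = np.asarray(directions, dtype=float)
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)


if __name__ == "__main__":
    point = fuse_rays(
        origins=[[0, 0, 0], [2, 0.5, 0], [0.5, 0.5, 2]],
        directions=[[0, 1, 0], [-1, 0, 0], [-0.25, 0, -1]],
    )
    print(point)   # all three rays pass through [0, 0.5, 0]
```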
The peripheral sensor 105 may comprise a depth camera in one example. The depth camera may comprise first and second image sensors with overlapping fields of view. The positioning unit 104 may thus be configured to determine the user coordinates (xu,yu,zu) based on the image data from the first and second image sensors and the first set of spatial data from the POV sensor 106. This may provide for a further improved model 107 of the user 102 in the virtual space.
The positioning unit 104 may be configured to continuously receive the spatial position information from the peripheral sensor 105 and the POV sensor 106 to track the user 102 in the room over a duration of time. The positioning unit 104 may be configured to determine the user coordinates (xu,yu,zu) for said duration of time to calculate a velocity and/or an acceleration of the user 102. The VR controller 101 may be configured to map the tracked user coordinates (xu,yu,zu) to the VR environment coordinate system to generate a model 107 having a corresponding velocity and/or acceleration.
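A straightforward way to obtain such velocity and acceleration estimates is by finite differences over the tracked coordinates, as in the following sketch; the 90 Hz tracking rate used in the example is an assumption, not a figure from the disclosure.
```python
import numpy as np

def velocity_and_acceleration(positions: np.ndarray, dt: float):
    """Estimate velocity and acceleration of tracked user coordinates.

    positions: (T, 3) array of user coordinates (x_u, y_u, z_u) sampled at
               a fixed interval dt (seconds), as tracked by the positioning
               unit over a duration of time.
    Returns (velocity, acceleration) as (T, 3) arrays using central
    differences, so the model in the virtual space can be given a matching
    velocity and/or acceleration.
    """
    velocity = np.gradient(positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    return velocity, acceleration


if __name__ == "__main__":
    dt = 1 / 90.0                                  # assumed 90 Hz tracking rate
    t = np.arange(0, 1, dt)[:, None]
    hand_x = 0.5 * t ** 2                          # constant 1 m/s^2 along x
    positions = np.hstack([hand_x, np.zeros_like(t), np.zeros_like(t)])
    v, a = velocity_and_acceleration(positions, dt)
    print(v[45], a[45])                            # ~[0.5 0 0] m/s, ~[1 0 0] m/s^2
```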
The positioning unit 104 may be configured to receive sensor motion data from the peripheral sensor 105 and/or the POV sensor 106, such as an acceleration and/or a change in orientation of the peripheral sensor 105 and/or the POV sensor 106 in the room. E.g. the POV sensor 106 may send motion data of the acceleration and/or a change in orientation of the POV sensor 106 in the VR headset, i.e. in the wearable display device 103, as the user’s head moves. The positioning unit 104 may be configured to determine the user coordinates (xu,yu,zu) based on the sensor motion data and the first and second sets of spatial data. The positioning unit 104 may be configured to determine the location of a common reference marker 108 in the room in which the user 102 is positioned based on the first and second sets of spatial data. Fig. 3 shows an example of a common reference marker 108 as an illuminated area or spot on a wall 111 of the room. The common reference marker 108 may in such case be generated by a light source in the wearable display device 103 and/or in the peripheral sensor 105. The POV sensor 106 and the peripheral sensor 105 may thus capture image data of the common reference marker 108, which is uniquely identifiable by the positioning unit 104 in the received first and second sets of spatial data. The positioning unit 104 may be configured to determine the relative sensor position based on the common reference marker 108. For example, the positioning unit 104 may be configured to determine an angle (a1, a2) of a line-of-sight (l2, l3) between the common reference marker 108 and the POV sensor 106 and the peripheral sensor 105, respectively, relative to a reference axis (c1) of the POV sensor 106 and a reference axis (c2) of the peripheral sensor 105. The relative sensor position may thus be accurately determined, e.g. in combination with determining the angles v1, v2 as described in relation to Fig. 2, even if both the peripheral sensor 105 and the POV sensor 106 are subject to continuously shifting positions in relation to the user 102. This allows for the positioning unit 104 to accurately determine the user coordinates (xu,yu,zu) based on image data of the user 102 from the peripheral sensor 105 and the POV sensor 106.
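The triangle formed by the two sensors and the common reference marker 108 can be sketched in 2D as follows: once the sensor baseline is known, the law of sines gives the distances to the marker from the bearings a1, a2 and v1, v2. The planar simplification and the known baseline are assumptions of this illustration.
```python
import math

def marker_distances(v1, a1, v2, a2, baseline):
    """Distances from each sensor to a common reference marker 108 (2D sketch).

    v1, a1: bearings of the peripheral sensor and of the marker as seen
            from the POV sensor, relative to its reference axis c1 (radians).
    v2, a2: bearings of the POV sensor and of the marker as seen from the
            peripheral sensor, relative to its reference axis c2 (radians).
    baseline: distance between the two sensors along the line-of-sight,
              e.g. from the relative sensor position determined earlier.

    The three points form a triangle whose angles at the two sensors are
    |v1 - a1| and |v2 - a2|; the law of sines then gives both distances.
    """
    angle_pov = abs(v1 - a1)
    angle_per = abs(v2 - a2)
    angle_marker = math.pi - angle_pov - angle_per
    dist_pov_to_marker = baseline * math.sin(angle_per) / math.sin(angle_marker)
    dist_per_to_marker = baseline * math.sin(angle_pov) / math.sin(angle_marker)
    return dist_pov_to_marker, dist_per_to_marker


if __name__ == "__main__":
    d1, d2 = marker_distances(v1=math.radians(30), a1=math.radians(80),
                              v2=math.radians(-100), a2=math.radians(-40),
                              baseline=2.0)
    print(f"POV-marker {d1:.2f} m, peripheral-marker {d2:.2f} m")
```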
Fig. 4 shows a further example of determining the relative sensor position based on a common reference marker 108. The wearable display device 103 has turned so that the field of view of the POV sensor 106 has moved away from the peripheral sensor 105. Still, the common reference marker 108 is within the field of view of both the POV sensor 106 and the peripheral sensor 105, so that the relative sensor position can be tracked and determined. A plurality of common reference markers 108 may be generated by the peripheral sensor 105 and/or the POV sensor 106, each being uniquely identifiable to facilitate determining the relative sensor position. The common reference marker 108 may be a determined geometric pattern. The peripheral sensor 105 and/or the wearable display device 103 may be configured to emit a light pattern as the common reference marker 108, having a determined shape. In another example, the geometric pattern may be a structural geometry of an identified object in the sets of spatial data. A unique object may thus be identified in the image data captured by both the POV sensor 106 and the peripheral sensor 105. E.g. the angle towards the uniquely identified object (corresponding to a1 and a2 in Fig. 3) may be determined, allowing for the positioning unit 104 to determine the relative sensor position.
The common reference marker 108 may be a spot in the room being lit by a determined temporal light modulation, i.e. a light intensity that varies over time. The common reference marker 108 may thus be a spot of light on the wall 111 that blinks in a defined sequence or frequency. The peripheral sensor 105 and/or the wearable display device 103 may be configured to emit light in such defined sequence or frequency. A plurality of common reference markers 108 may be generated by the peripheral sensor 105 and/or the POV sensor 106, each having a uniquely identifiable sequence of light.
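A blinking marker of this kind could, for example, be located in the image data by correlating each pixel’s intensity over time with the known on/off sequence; the following sketch illustrates this under the assumption that the frame rate is matched to the blink sequence, which is one possible but not mandated implementation.
```python
import numpy as np

def find_blinking_marker(frames: np.ndarray, pattern: np.ndarray) -> tuple:
    """Locate a common reference marker that blinks in a defined sequence.

    frames:  (T, H, W) grayscale image stack from one sensor, sampled so
             that one frame corresponds to one symbol of the blink pattern.
    pattern: (T,) known on/off sequence emitted by the light source,
             e.g. [1, 0, 1, 1, 0, ...].
    Returns the (row, col) of the pixel whose intensity over time matches
    the pattern best (normalized cross-correlation).
    """
    frames = frames.astype(float)
    pattern = pattern.astype(float)
    f = frames - frames.mean(axis=0)              # remove static background
    p = pattern - pattern.mean()
    score = np.tensordot(p, f, axes=(0, 0))       # correlation per pixel
    norm = np.linalg.norm(f, axis=0) * np.linalg.norm(p) + 1e-9
    score /= norm
    return np.unravel_index(np.argmax(score), score.shape)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, H, W = 16, 48, 64
    pattern = rng.integers(0, 2, T)
    frames = rng.normal(0.0, 0.05, (T, H, W))     # sensor noise
    frames[:, 20, 30] += pattern                  # the blinking spot on the wall
    print(find_blinking_marker(frames, pattern))  # -> (20, 30)
```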
The virtual-reality interaction system 100 may comprise a light modulator 110 configured to emit said light modulation in a defined direction with respect to a first sensor of the peripheral sensor 105 and the POV sensor 106. E.g. the POV sensor 106 may emit the light modulation in a defined direction, such as in the example of Fig. 3. The light modulation is identified in the set of spatial data captured by a second sensor of the peripheral sensor 105 and the POV sensor 106, not being the first sensor. I.e. the peripheral sensor 105 may capture the light modulation emitted by the POV sensor 106. The identified light modulation may be assigned as the common reference marker 108.
The virtual-reality interaction system 100 may comprise the peripheral sensor 105. The peripheral sensor 105 may be configured to capture image data of the user 102. The peripheral sensor 105 may be configured to determine a user coordinate approximation of the user 102 based on the captured image data. E.g. the peripheral sensor 105 may comprise a camera which captures 2D image data of the user 102. The peripheral sensor 105 may process the image data to identify the position and shape of e.g. the user’s hand, in relation to the POV sensor 106. The peripheral sensor 105 may assign a set of corresponding user coordinates as an approximation of the position and shape of the hand. E.g. the user’s hand may have a certain pose, where the key-points of the pose, such as the position of the fingertips, are assigned an x-y coordinate. The peripheral sensor 105 may be configured to send the user coordinate approximation as the second set of spatial data to the positioning unit 104. I.e. the peripheral sensor 105 may pre-process the captured image data and send an approximation of the user coordinates to the positioning unit 104. The positioning unit 104 may be connected to the wearable display device 103. Thus, the user coordinate approximation may be sent wirelessly to the wearable display device 103. The positioning unit 104 may be configured to determine the user coordinates (xu,yu,zu) based on the user coordinate approximation received from the peripheral sensor 105, the relative sensor position, and the first set of spatial data, i.e. as captured by the POV sensor 106. The final full user coordinates (xu,yu,zu) for each key-point of the hand pose may thus be determined. The peripheral sensor 105 may comprise a depth camera in one example, in which case the user coordinate approximation of the hand may comprise x-y-z coordinates of the key-points of the hand pose. These x-y-z coordinates may then be sent to the positioning unit 104, e.g. wirelessly as described above, for determining the final full user coordinates (xu,yu,zu) for each key-point of the hand pose. This may provide for a further improved determination of the model 107.
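One hypothetical form of such pre-processing on the peripheral sensor side is to convert detected 2D key-points into normalized viewing rays using a pinhole camera model, as sketched below; the intrinsics and the ray representation are assumptions, since the disclosure leaves the format of the user coordinate approximation open.
```python
import numpy as np

def keypoints_to_rays(keypoints_px: np.ndarray, fx: float, fy: float,
                      cx: float, cy: float) -> np.ndarray:
    """Pre-processing on the peripheral sensor side (2D camera case).

    keypoints_px: (K, 2) pixel coordinates of detected hand key-points,
                  e.g. 21 fingertip/joint positions.
    fx, fy, cx, cy: pinhole intrinsics of the peripheral camera, assumed
                  known from calibration (the disclosure does not fix how).
    Returns (K, 3) unit ray directions in the peripheral sensor's frame, a
    compact user coordinate approximation that can be sent wirelessly to
    the positioning unit and fused with the POV sensor's data.
    """
    u, v = keypoints_px[:, 0], keypoints_px[:, 1]
    rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(u)], axis=1)
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)


if __name__ == "__main__":
    hand_keypoints = np.array([[320.0, 240.0], [400.0, 180.0]])  # two of 21 points
    rays = keypoints_to_rays(hand_keypoints, fx=600, fy=600, cx=320, cy=240)
    print(rays)   # first ray points straight along the optical axis
```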
Fig. 5 illustrates a flow chart of a method 200 in a VR interaction system 100. The order in which the steps of the method 200 are described and illustrated should not be construed as limiting and it is conceivable that the steps can be performed in varying order. The method 200 comprises receiving 201, at a positioning unit 104, spatial position information around a user 102 in a room having room coordinates (x,y,z). The spatial position information comprises a first set of spatial data of the user 102 and of a placement of a peripheral sensor 105. The first set of spatial data is captured by a point of view (POV) sensor 106 of a wearable display device 103 when worn by the user 102. The spatial position information comprises a second set of spatial data of the user 102 and of a placement of the POV sensor 106. The second set of spatial data is captured by the peripheral sensor 105. The method 200 comprises determining 202, based on the first and second sets of spatial data, the placement of the peripheral sensor 105 relative to the placement of the POV sensor 106 as a relative sensor position. The method 200 comprises determining 203 user coordinates (xu,yu,zu) of the user 102 in the room based on the relative sensor position and the first and second sets of spatial data. The method 200 comprises mapping 204 the user coordinates to a VR environment coordinate system (vx,vy,vz) to generate 205 a model 107 of the user 102 within a virtual space to be displayed in the wearable display device 103. The method 200 thus provides for the advantages described above in relation to the VR interaction system 100 and Figs. 1 - 4. The method 200 provides a VR interaction system with high-precision user tracking that is less complex for the user to implement.
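As a non-limiting sketch, the steps of method 200 can be expressed as a simple pipeline; the three callables are placeholders for the geometric computations illustrated earlier, not functions defined by the disclosure.
```python
from typing import Callable
import numpy as np

def method_200(first_set, second_set,
               determine_relative_position: Callable,
               determine_user_coords: Callable,
               room_to_vr: Callable) -> np.ndarray:
    """Skeleton of method 200: receive the spatial position information (201),
    determine the relative sensor position (202), determine the user
    coordinates (203), and map them into the VR environment coordinate
    system to generate the model (204, 205)."""
    relative_position = determine_relative_position(first_set, second_set)          # step 202
    user_coords = determine_user_coords(first_set, second_set, relative_position)   # step 203
    return room_to_vr(user_coords)                                                  # steps 204-205


if __name__ == "__main__":
    model = method_200(
        first_set={"user": [(0.3, 0.1, -0.4)]},
        second_set={"user": [(-0.9, 0.1, 0.4)]},
        determine_relative_position=lambda a, b: np.zeros(3),
        determine_user_coords=lambda a, b, rel: np.array([[0.3, 0.1, -0.4]]),
        room_to_vr=lambda pts: pts + np.array([0.0, 0.0, 1.5]),
    )
    print(model)
```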
A computer program product is provided comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method 200.
The present invention has been described above with reference to specific examples. However, other examples than the above described are equally possible within the scope of the invention. The different features and steps of the invention may be combined in other combinations than those described. The scope of the invention is only limited by the appended patent claims.
More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings of the present invention is/are used.

Claims

1. A virtual-reality (VR) interaction system (100) comprising a VR controller (101) configured to generate a model (107) of a user (102) in a VR environment coordinate system (vx,vy,vz) within a virtual space to be displayed in a wearable display device (103), and a positioning unit (104) configured to receive spatial position information around the user in a room having room coordinates (x,y,z), wherein the spatial position information comprises a first set of spatial data of the user and of a placement of a peripheral sensor (105), wherein the first set of spatial data is captured by a point of view (POV) sensor (106) of the wearable display device when worn by the user, and a second set of spatial data of the user and of a placement of the POV sensor, wherein the second set of spatial data is captured by the peripheral sensor, determine, based on the first and second sets of spatial data, the placement of the peripheral sensor relative to the placement of the POV sensor as a relative sensor position, determine user coordinates (xu,yu,zu) of the user in the room based on the relative sensor position and the first and second sets of spatial data, and communicate the user coordinates to the VR controller, wherein the VR controller is configured to map the user coordinates to the VR environment coordinate system to generate said model.
2. Virtual-reality interaction system according to claim 1, wherein the peripheral sensor is freely movable in the room with respect to the user.
3. Virtual-reality interaction system according to claim 1 or 2, wherein the user coordinates comprise a set of coordinates defining the user in 3D, wherein the VR controller is configured to generate said model in 3D based on said set of user coordinates.
4. Virtual-reality interaction system according to any of claims 1 - 3, wherein the positioning unit is configured to continuously receive the spatial position information to track the user in the room over a duration of time, and determine the user coordinates for said duration of time to calculate a velocity and/or an acceleration of the user.
5. Virtual-reality interaction system according to any of claims 1 - 4, wherein the positioning unit is configured to receive sensor motion data from the peripheral sensor and/or the POV sensor, such as an acceleration and/or a change in orientation of the peripheral sensor and/or the POV sensor in the room, and determine the user coordinates based on the sensor motion data and the first and second sets of spatial data.
6. Virtual-reality interaction system according to any of claims 1 - 5, wherein the positioning unit is configured to determine the location of a common reference marker (108) in the room based on the first and second sets of spatial data.
7. Virtual-reality interaction system according to claim 6, wherein the relative sensor position is determined based on the common reference marker.
8. Virtual-reality interaction system according to claim 6 or 7, wherein the common reference marker is a determined geometric pattern.
9. Virtual-reality interaction system according to claim 8, wherein the geometric pattern is generated by a light pattern and/or is a structural geometry of an identified object in the sets of spatial data.
10. Virtual-reality interaction system according to claim 6 or 7, wherein the common reference marker is a spot in the room being lit by a determined temporal light modulation.
11. Virtual-reality interaction system according to claim 10, comprising a light modulator (110) configured to emit said light modulation in a defined direction with respect to a first sensor of the peripheral sensor and the POV sensor, wherein the light modulation is identified in the set of spatial data captured by a second sensor of the peripheral sensor and the POV sensor, not being said first sensor, and wherein the identified light modulation is assigned as the common reference marker.
12. Virtual-reality interaction system according to any of claims 1 - 11, wherein the first and second sets of spatial data comprise image data, wherein the positioning unit determines the user coordinates based on the image data.
13. Virtual-reality interaction system according to any of claims 1 - 12, comprising the peripheral sensor and/or the wearable display device.
14. Virtual-reality interaction system according to any of claims 1 - 13, comprising the peripheral sensor, wherein the peripheral sensor is configured to capture image data of the user, determine a user coordinate approximation of the user based on said image data, send the user coordinate approximation as the second set of spatial data to the positioning unit, wherein the positioning unit is configured to determine the user coordinates (xu,yu,zu) based on the user coordinate approximation and the first set of spatial data.
15. Virtual-reality interaction system according to any of claims 1 - 14, comprising the peripheral sensor and the wearable display device, wherein the second set of spatial data is sent wirelessly by the peripheral sensor to the wearable display device to be received by the positioning unit (the positioning unit can, for example, be arranged in the wearable display device).
16. Virtual-reality interaction system according to any of claims 1 - 15, wherein the peripheral sensor is a POV sensor of a second wearable display device.
17. Virtual-reality interaction system according to any of claims 1 - 16, wherein the peripheral sensor comprises a depth camera comprising first and second image sensors with overlapping fields of view.
18. A method (200) in a virtual-reality (VR) interaction system (100) comprising receiving (201) at a positioning unit (104) spatial position information around a user in a room having room coordinates (x,y,z), wherein the spatial position information comprises a first set of spatial data of the user and of a placement of a peripheral sensor (105), wherein the first set of spatial data is captured by a point of view (POV) sensor (106) of a wearable display device (103) when worn by the user, and a second set of spatial data of the user and of a placement of the POV sensor, wherein the second set of spatial data is captured by the peripheral sensor, determining (202), based on the first and second sets of spatial data, the placement of the peripheral sensor relative to the placement of the POV sensor as a relative sensor position, determining (203) user coordinates (xu,yu,zu) of the user in the room based on the relative sensor position and the first and second sets of spatial data, and mapping (204) the user coordinates to a VR environment coordinate system (vx,vy,vz) to generate (205) a model (107) of the user within a virtual space to be displayed in the wearable display device.
19. A computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method according to claim 18.
PCT/SE2023/050458 2022-05-31 2023-05-10 A virtual-reality interaction system WO2023234824A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE2230167-5 2022-05-31
SE2230167 2022-05-31

Publications (1)

Publication Number Publication Date
WO2023234824A1 true WO2023234824A1 (en) 2023-12-07

Family

ID=86469120

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2023/050458 WO2023234824A1 (en) 2022-05-31 2023-05-10 A virtual-reality interaction system

Country Status (1)

Country Link
WO (1) WO2023234824A1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200342673A1 (en) * 2019-04-23 2020-10-29 Valve Corporation Head-mounted display with pass-through imaging

Similar Documents

Publication Publication Date Title
CN110794958B (en) Input device for use in an augmented/virtual reality environment
CN110238831B (en) Robot teaching system and method based on RGB-D image and teaching device
CN105912110B (en) A kind of method, apparatus and system carrying out target selection in virtual reality space
US10384348B2 (en) Robot apparatus, method for controlling the same, and computer program
EP1629366B1 (en) Single camera system for gesture-based input and target indication
Krupke et al. Comparison of multimodal heading and pointing gestures for co-located mixed reality human-robot interaction
US9430698B2 (en) Information input apparatus, information input method, and computer program
CN107717981B (en) Control device of mechanical arm and teaching system and method thereof
US10179407B2 (en) Dynamic multi-sensor and multi-robot interface system
Lambrecht et al. Spatial programming for industrial robots based on gestures and augmented reality
CN109313502B (en) Tap event location using selection device
US20190050132A1 (en) Visual cue system
US11209916B1 (en) Dominant hand usage for an augmented/virtual reality device
WO2013074989A1 (en) Pre-button event stylus position
Lambrecht et al. Markerless gesture-based motion control and programming of industrial robots
KR20030002937A (en) No Contact 3-Dimension Wireless Joystick
WO2023234824A1 (en) A virtual-reality interaction system
Braeuer-Burchardt et al. Finger pointer based human machine interaction for selected quality checks of industrial work pieces
CN114373016A (en) Method for positioning implementation point in augmented reality technical scene
CN111857364B (en) Interaction device, virtual content processing method and device and terminal equipment
Caruso et al. AR-Mote: A wireless device for Augmented Reality environment
JP2022163836A (en) Method for displaying robot image, computer program, and method for displaying robot image
CN112181135A (en) 6-DOF visual touch interaction method based on augmented reality
CN111475019A (en) Virtual reality gesture interaction system and method
CN110554784B (en) Input method, input device, display device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23725466

Country of ref document: EP

Kind code of ref document: A1