GB2451461A - Camera based 3D user and wand tracking human-computer interaction system - Google Patents

Camera based 3D user and wand tracking human-computer interaction system Download PDF

Info

Publication number
GB2451461A
GB2451461A GB0714844A
Authority
GB
United Kingdom
Prior art keywords
user
cameras
world
virtual
article
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0714844A
Other versions
GB0714844D0 (en)
Inventor
Naveen Chawla
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB0714844A priority Critical patent/GB2451461A/en
Publication of GB0714844D0 publication Critical patent/GB0714844D0/en
Publication of GB2451461A publication Critical patent/GB2451461A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • G06F3/0325Detection arrangements using opto-electronic means using a plurality of light emitters or reflectors or a plurality of detectors forming a reference frame from which to derive the orientation of the object, e.g. by triangulation or on the basis of reference deformation in the picked up image
    • G06K9/00355
    • G06T7/004
    • G06T7/2086
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/285Analysis of motion using a sequence of stereo image pairs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/142Image acquisition using hand-held instruments; Constructional details of the instruments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1087Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera
    • A63F2300/1093Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera using visible light
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/66Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6661Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera
    • A63F2300/6676Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera by dedicated player input
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082Virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Position Input By Displaying (AREA)

Abstract

A camera based tracking system is disclosed providing computer tracking of two handheld wands 1,2 by means of a pair of cameras 6,7, and tracking of the user's viewing position by means of a viewing position marker 3 which may be mounted on the user's head, or by individually tracking the user's eyes 9,10. The wands 1,2 may comprise a user-operated trigger 15 to allow the user to pick up and manipulate objects inside a 3D virtual environment. The wands 1,2 may include a marker 4,5.

Description

Camera-Vision-Based 3D User-and-Wand-Tracking Stereoscopic Human-Computer Interaction System

A camera-vision-based system, which includes the computer tracking of up to two handheld wands, as shown in Fig. 2 and Fig. 3 and denoted by 1 and 2 in Fig. 1, in addition to the tracking of the user's left and right eye viewing positions via a viewing position marker 3 in Fig. 1 which is mounted on their head, or by individually tracking the user's eyes denoted by 9 and 10 in Fig. 1, is proposed as a crucially and uniquely unified system for the purpose of achieving complete stereoscopic 3D human-computer interaction. The unique completeness of interaction that is afforded by the system can only be achieved using the uniquely unified combination proposed. The "wands", as designed, would each have a user-operated trigger 15 (Fig. 1), and as such would allow the user to pick up and manipulate objects inside a computer 3D virtual world, and, depending on how an application that uses this system is set up, would enable the user to control their movement in any way they like inside that 3D world, according to whatever parameters of movement the application programmer may set in the application. A given wand would use computer vision to determine its position in 3D space, using a marker on its end, denoted by 4 and 5 in Fig. 1, and two or more cameras, denoted by 6 and 7 in Fig. 1.
Determining the position of the wand in 3D space involves recognizing that cameras have a diffuse, not a parallel, pattern of vision, which radiates from the camera's optical centre out to the angle of view. This means that if we want to determine the position in space, we need to recognize that a single point, denoted by 1 in Fig. 4, on a camera image, denoted by 2 in Fig. 4, would represent a line of possible positions for that point, denoted by 3 in Fig. 4, emanating from the optical centre of the camera, denoted by 4 in Fig. 4. The camera in Fig. 4 is shown as viewed from above. The 2D position of the point on the screen would be tracked by computer.
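As an illustration of this back-projection step, the following is a minimal sketch (not part of the original specification) that converts a tracked 2D image point into the 3D line of possible positions described above. It assumes a simple pinhole camera whose intrinsics (focal lengths fx, fy and principal point cx, cy) and pose (rotation R, translation t) are already known from a prior calibration; all names are illustrative.

```python
import numpy as np

def pixel_to_ray(u, v, fx, fy, cx, cy, R, t):
    """Back-project image point (u, v) into a world-space ray.

    Assumes a pinhole camera: fx, fy are focal lengths in pixels,
    (cx, cy) is the principal point, and R, t map world coordinates
    into camera coordinates (x_cam = R @ x_world + t). The returned
    ray is the line of possible 3D positions for the tracked point,
    emanating from the camera's optical centre.
    """
    # Ray direction in camera coordinates (through the z = 1 plane)
    d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Optical centre and direction expressed in world coordinates
    centre = -R.T @ t
    d_world = R.T @ d_cam
    return centre, d_world / np.linalg.norm(d_world)
```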
Using a second camera we can determine a second line of possible positions for the point, and use the intersection to determine the position in 3D space, as shown in Fig. 5. This holds regardless of the position of the cameras, as long as they are both detecting the point in question. The first camera is denoted by 1 in Fig. 5, the second by 2 in Fig. 5, the first line of possible positions for the point is denoted by 3 in Fig. 5, the second line of possible positions for the point is denoted by 4 in Fig. 5, and the 3D location of the point (x, y, z) is denoted by 5 in Fig. 5. If the two lines are found to be skewed, the mid-point of the shortest joining line between those 3D lines should be used to determine a best estimate of the point's location. A sketch of this near-intersection calculation follows.
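A minimal sketch of the described near-intersection, assuming each camera's ray has already been obtained (for example with pixel_to_ray above); function and parameter names are illustrative, not part of the specification.

```python
import numpy as np

def nearest_point_of_rays(p1, d1, p2, d2, eps=1e-9):
    """Estimate the 3D point nearest to two camera rays.

    p1, p2 are ray origins (camera optical centres) and d1, d2 are unit
    direction vectors. If the rays are skew, the midpoint of the shortest
    joining line between them is returned, as described in the text.
    Returns None when the rays are (nearly) parallel.
    """
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < eps:            # rays nearly parallel: no reliable estimate
        return None
    s = (b * e - c * d) / denom     # parameter along the first ray
    t = (a * e - b * d) / denom     # parameter along the second ray
    closest_on_1 = p1 + s * d1
    closest_on_2 = p2 + t * d2
    return 0.5 * (closest_on_1 + closest_on_2)   # midpoint of shortest joining line
```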
A second wand is easily added for interaction with both hands simultaneously. The wand is designed as shown in the pictures, so that its trigger responds to a natural squeezing action of the hand, in order to give the user a natural impression of grabbing an object, as opposed to the "thumb-middle-finger-plus-index-finger" action of the 3D mouse, which does not resemble the natural grabbing of an object.
Four versions of the wand are shown in fig. 2. Version 1 in fig. 2 shows a thumb-operated button 1 on the wand so that the wand trigger is operated by squeezing between the thumb and other fingers. This is one of the natural grabbing actions of the hand, as mentioned. Version 2 in fig. 2 shows an index-finger-operated trigger button 2 so that the wand trigger is operated by squeezing between the index finger and the lower palm or the thumb, much like a gun. This is another natural grabbing action of the hand. Version 3 in fig. 2 shows a trigger which can be activated anywhere along the length of the wand by squeezing between the two parts of the trigger 3 and 4. This can accommodate any natural grabbing action of the hand. Finally, version 4 in fig. 2 shows a trigger 5 which is activated by squeezing anywhere on the wand. This also accommodates all natural squeezing actions of the hand.
One or more additional markers may also be included on the handheld device, either already attached or attachable by the user via an extension piece in each case, such as extension piece 11 in fig. 1 and 3 and 4 in fig. 3, in order to allow the orientation of the handheld device to be additionally tracked. This is so that the handheld device may be used as a "sword", a "gun", an "object pointer" and so on.
Item 1 in fig. 3 is an "object pointer" or "sword" version of the handheld device.
Item 2 in fig. 3 is a "gun" version.
In addition to this, what is proposed is again the use of computer vision from two or more cameras, which are either the same cameras or additional cameras, in order to determine the 3D coordinates of the user's left and right eye viewing positions, either by tracking viewing position marker 3 in fig. 1 and estimating the displacement of the user's left and right eyes from the viewing position marker, or by individually tracking the position of the user's left eye 9 or right eye 10, or both, in fig. 1. The purpose is that actual stereoscopic viewing realism of the computer's 3D world, based on the 3D coordinates of the user's left and right eye viewing positions, can be created. In other words, it would allow the user to move their head and view the side of objects stereoscopically, and when the user moves closer to the screen, denoted by 8 in fig. 1, the left and right eye images of the 3D virtual world, viewed using stereoscopic apparatus, would show the correct perspective of that 3D world for each eye in relation to the user's left and right eye viewing positions. It would do this by setting two virtual camera positions in the computer's 3D virtual world to the 3D coordinates of the user's left and right eye viewing positions (or proportional equivalents), and setting the frustum of the virtual camera in each case in the computer's virtual 3D world to match the 3D shape that the user's relevant eye viewing position makes with the corners of the image of that virtual 3D world on that screen at any given time, typically the corners of the screen. The frustums would, thus, typically be asymmetric, as sketched below. Examples of the frustums are shown by 12 and 13 in fig. 1. The tracking of the 3D coordinates of the user's left and right eye viewing positions would use the same 3D line near-intersection method as described for tracking the 3D coordinates of the wand.
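As an illustration of the asymmetric-frustum construction, here is a minimal sketch (not part of the original specification). It assumes the physical screen lies in the plane z = 0 of the tracking coordinate system, spanning known corner coordinates, with the tracked eye at z > 0 looking towards the screen; the names and units are illustrative.

```python
def off_axis_frustum(eye, screen_min, screen_max, near, far):
    """Asymmetric view-frustum bounds for one eye.

    Assumes the screen lies in the plane z = 0, spanning
    (screen_min[0], screen_min[1]) to (screen_max[0], screen_max[1]),
    and the eye is at eye = (ex, ey, ez) with ez > 0, all in the same
    units. Returns (left, right, bottom, top, near, far) for an
    off-axis projection (e.g. glFrustum-style parameters); one such
    frustum is computed per eye from that eye's tracked 3D position.
    """
    ex, ey, ez = eye
    scale = near / ez                        # similar triangles from eye to near plane
    left   = (screen_min[0] - ex) * scale
    right  = (screen_max[0] - ex) * scale
    bottom = (screen_min[1] - ey) * scale
    top    = (screen_max[1] - ey) * scale
    return left, right, bottom, top, near, far
```

Because each eye's tracked position generally lies off the screen's centre axis, left/right and bottom/top are not symmetric about zero, which is exactly the asymmetry the description refers to.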
Thus, what is proposed is a system which uniquely creates total stereoscopic positional interaction realism. With the system, the user can thus, uniquely, from any viewing position, use their natural depth perception to reach out for and interact with objects in the 3D virtual world, because the system is stereoscopic. A per-frame sketch combining the tracking steps described above is given below.
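The following hypothetical helper (not part of the specification) ties the earlier sketches together: it triangulates one tracked point, whether a wand marker, the head-mounted viewing position marker, or an individually tracked eye, from two or more calibrated cameras, reusing pixel_to_ray and nearest_point_of_rays above. The detect_pixel callback stands in for whatever marker or eye detection an application would supply.

```python
def track_point_3d(cameras, detect_pixel):
    """Triangulate one tracked point from two or more calibrated cameras.

    `cameras` is a list of (image, fx, fy, cx, cy, R, t) tuples and
    `detect_pixel` is a caller-supplied function returning the point's
    (u, v) pixel position in an image, or None if it is not visible.
    Returns the estimated 3D position, or None if fewer than two
    cameras see the point.
    """
    rays = []
    for image, fx, fy, cx, cy, R, t in cameras:
        uv = detect_pixel(image)
        if uv is not None:
            rays.append(pixel_to_ray(uv[0], uv[1], fx, fy, cx, cy, R, t))
    if len(rays) < 2:
        return None                     # need at least two views of the point
    # Use the first two views for simplicity; more views could be averaged.
    (p1, d1), (p2, d2) = rays[0], rays[1]
    return nearest_point_of_rays(p1, d1, p2, d2)
```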

Claims (9)

  1. A system which uses two cameras, or more, to detect the 3D coordinates of a person's left eye and right eye viewing positions relative to the screen they are looking at, which subsequently sets the frustums of two virtual cameras in the computer's virtual 3D world to match the 3D shapes that the user's left eye and right eye viewing positions make with the corners of the image of that virtual 3D world on that screen at any given time, in order to create a realistic stereoscopic picture of that virtual 3D world to the user, and which includes a handheld device which has a trigger which responds to a squeezing action between the thumb and one or more other fingers, whose signal is transmitted to the computer when the squeezing action is performed, which also uses the cameras to determine its position in 3D space, thereby allowing the user to interact stereoscopically with the 3D virtual world from the screen at any given time, while viewing that world stereoscopically using the two frustums as calculated.
  2. A system which uses two cameras, or more, to detect the 3D coordinates of a person's left eye and right eye viewing positions relative to the screen they are looking at, which subsequently sets the frustums of two virtual cameras in the computer's virtual 3D world to match the 3D shapes that the user's left eye and right eye viewing positions make with the corners of the image of that virtual 3D world on that screen at any given time, in order to create a realistic stereoscopic picture of that virtual 3D world to the user, and which includes a handheld device which has a trigger which responds to a squeezing action between one or more fingers and the palm, whose signal is transmitted to the computer when the squeezing action is performed, which also uses the cameras to determine its position in 3D space, thereby allowing the user to interact stereoscopically with the 3D virtual world from the screen at any given time, while viewing that world stereoscopically using the two frustums as calculated.
  3. A system as claimed in Claim 1 or Claim 2 in which the position in 3D space of each article being tracked (the article being either the user's viewing position or the handheld device, where applicable) is calculated using the 2D position at which that article, or a marker on or in a fixed relative position near that article, appears on the image generated by each camera, to determine a straight 3D line of possible positions of that article or marker for each camera, and then the intersection of those 3D lines, or, if the 3D lines are found to be skewed, the mid-point of the shortest joining line between those 3D lines, to determine a single 3D position of location for that article or marker, and if that marker is displaced from the article itself by a fixed relative position, using a calculation or an estimation of that displacement to determine a single 3D position of location of the article itself.
  4. A system as claimed in Claim 1 or Claim 2 or Claim 3 in which there is a marker placed on or in a fixed relative position near each article being tracked, which is a device which actively emits energy (for example an LED), which is capable of being detected by the cameras used.
  5. A system as claimed in Claim 1 or Claim 2 or Claim 3 in which there is a marker placed on or in a fixed relative position near each article being tracked, which is a passive reflective marker which is capable of being detected by the cameras used.
  6. A system as claimed in any preceding claim in which the cameras used comprise a combination of, or consist only of, infrared cameras, ordinary light cameras, ultrasound cameras or any other types of camera which are capable of detecting the markers or objects to be tracked.
  7. A system as claimed in any preceding claim in which one or more additional markers are also included on the handheld device, either already attached or attachable by the user via an extension piece in each case, in order to allow the orientation of the handheld device to be additionally tracked.
  8. A system as claimed in any preceding claim in which there are two of the handheld devices deployed, one for the left hand and one for the right hand.
  9. A system substantially as herein described above and illustrated in the accompanying drawings.
GB0714844A 2007-07-28 2007-07-28 Camera based 3D user and wand tracking human-computer interaction system Withdrawn GB2451461A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0714844A GB2451461A (en) 2007-07-28 2007-07-28 Camera based 3D user and wand tracking human-computer interaction system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0714844A GB2451461A (en) 2007-07-28 2007-07-28 Camera based 3D user and wand tracking human-computer interaction system

Publications (2)

Publication Number Publication Date
GB0714844D0 GB0714844D0 (en) 2007-09-12
GB2451461A true GB2451461A (en) 2009-02-04

Family

ID=38528998

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0714844A Withdrawn GB2451461A (en) 2007-07-28 2007-07-28 Camera based 3D user and wand tracking human-computer interaction system

Country Status (1)

Country Link
GB (1) GB2451461A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100311512A1 (en) * 2009-06-04 2010-12-09 Timothy James Lock Simulator with enhanced depth perception
WO2011089538A1 (en) * 2010-01-25 2011-07-28 Naveen Chawla A stereo-calibration-less multiple-camera human-tracking system for human-computer 3d interaction
CN104463899A (en) * 2014-12-31 2015-03-25 北京格灵深瞳信息技术有限公司 Target object detecting and monitoring method and device
WO2015173256A2 (en) 2014-05-13 2015-11-19 Immersight Gmbh Method and system for determining a representational position
US9213410B2 (en) 2010-03-26 2015-12-15 Hewlett-Packard Development Company L.P. Associated file
DE102016102868A1 (en) 2016-02-18 2017-08-24 Adrian Drewes System for displaying objects in a virtual three-dimensional image space
GB2548341A (en) * 2016-03-10 2017-09-20 Moog Bv Movement tracking and simulation device and method
DE202014011540U1 (en) 2014-05-13 2022-02-28 Immersight Gmbh System in particular for the presentation of a field of view display and video glasses

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1114308A (en) * 1997-06-20 1999-01-22 Matsushita Electric Ind Co Ltd Three-dimensional position detecting method and device
EP1176559A2 (en) * 2000-07-21 2002-01-30 Sony Computer Entertainment America Prop input device and method for mapping an object from a two-dimensional camera image to a three-dimensional space for controlling action in a game program
US20020036617A1 (en) * 1998-08-21 2002-03-28 Timothy R. Pryor Novel man machine interfaces and applications
US20040046736A1 (en) * 1997-08-22 2004-03-11 Pryor Timothy R. Novel man machine interfaces and applications
US6766036B1 (en) * 1999-07-08 2004-07-20 Timothy R. Pryor Camera based man machine interfaces
WO2007078639A1 (en) * 2005-12-12 2007-07-12 Sony Computer Entertainment Inc. Methods and systems for enabling direction detection when interfacing with a computer program

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1114308A (en) * 1997-06-20 1999-01-22 Matsushita Electric Ind Co Ltd Three-dimensional position detecting method and device
US20040046736A1 (en) * 1997-08-22 2004-03-11 Pryor Timothy R. Novel man machine interfaces and applications
US20020036617A1 (en) * 1998-08-21 2002-03-28 Timothy R. Pryor Novel man machine interfaces and applications
US6766036B1 (en) * 1999-07-08 2004-07-20 Timothy R. Pryor Camera based man machine interfaces
EP1176559A2 (en) * 2000-07-21 2002-01-30 Sony Computer Entertainment America Prop input device and method for mapping an object from a two-dimensional camera image to a three-dimensional space for controlling action in a game program
WO2007078639A1 (en) * 2005-12-12 2007-07-12 Sony Computer Entertainment Inc. Methods and systems for enabling direction detection when interfacing with a computer program

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100311512A1 (en) * 2009-06-04 2010-12-09 Timothy James Lock Simulator with enhanced depth perception
WO2011089538A1 (en) * 2010-01-25 2011-07-28 Naveen Chawla A stereo-calibration-less multiple-camera human-tracking system for human-computer 3d interaction
US9213410B2 (en) 2010-03-26 2015-12-15 Hewlett-Packard Development Company L.P. Associated file
WO2015173256A2 (en) 2014-05-13 2015-11-19 Immersight Gmbh Method and system for determining a representational position
DE102014106718A1 (en) 2014-05-13 2015-11-19 Immersight Gmbh Method and system for determining an objective situation
DE202014011540U1 (en) 2014-05-13 2022-02-28 Immersight Gmbh System in particular for the presentation of a field of view display and video glasses
DE102014106718B4 (en) 2014-05-13 2022-04-07 Immersight Gmbh System that presents a field of view representation in a physical position in a changeable solid angle range
CN104463899A (en) * 2014-12-31 2015-03-25 北京格灵深瞳信息技术有限公司 Target object detecting and monitoring method and device
CN104463899B (en) * 2014-12-31 2017-09-22 北京格灵深瞳信息技术有限公司 A kind of destination object detection, monitoring method and its device
DE102016102868A1 (en) 2016-02-18 2017-08-24 Adrian Drewes System for displaying objects in a virtual three-dimensional image space
GB2548341A (en) * 2016-03-10 2017-09-20 Moog Bv Movement tracking and simulation device and method

Also Published As

Publication number Publication date
GB0714844D0 (en) 2007-09-12

Similar Documents

Publication Publication Date Title
US10534432B2 (en) Control apparatus
GB2451461A (en) Camera based 3D user and wand tracking human-computer interaction system
US10606373B1 (en) Hand-held controller tracked by LED mounted under a concaved dome
US9972136B2 (en) Method, system and device for navigating in a virtual reality environment
KR100812624B1 (en) Stereovision-Based Virtual Reality Device
CN103347437B (en) Gaze detection in 3D mapping environment
JP5969626B2 (en) System and method for enhanced gesture-based dialogue
CN110647239A (en) Gesture-based projection and manipulation of virtual content in an artificial reality environment
CN104246664B (en) The transparent display virtual touch device of pointer is not shown
KR102147430B1 (en) virtual multi-touch interaction apparatus and method
CN103336575A (en) Man-machine interaction intelligent glasses system and interaction method
KR101441882B1 (en) method for controlling electronic devices by using virtural surface adjacent to display in virtual touch apparatus without pointer
US20110043446A1 (en) Computer input device
US11640198B2 (en) System and method for human interaction with virtual objects
KR20120138329A (en) Apparatus for 3d using virtual touch and apparatus for 3d game of the same
WO2017021902A1 (en) System and method for gesture based measurement of virtual reality space
CN108572730B (en) System and method for interacting with computer-implemented interactive applications using a depth aware camera
US11334178B1 (en) Systems and methods for bimanual control of virtual objects
US11944897B2 (en) Device including plurality of markers
CN115777091A (en) Detection device and detection method
CN206147523U (en) A hand -held controller for human -computer interaction
van de Pol et al. Interaction techniques on the Virtual Workbench
JP6977991B2 (en) Input device and image display system
US20240198211A1 (en) Device including plurality of markers
CN116243786A (en) Method for executing interaction on stereoscopic image and stereoscopic image display system

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)