WO2010124333A1 - Computer input device and computer interface system - Google Patents

Computer input device and computer interface system

Info

Publication number
WO2010124333A1
Authority
WO
WIPO (PCT)
Prior art keywords
detection
computer
input device
detected
orientation
Application number
PCT/AU2010/000492
Other languages
French (fr)
Inventor
Adrian Risch
Original Assignee
Jumbuck Entertainment Ltd
Priority claimed from AU2009901853A external-priority patent/AU2009901853A0/en
Application filed by Jumbuck Entertainment Ltd filed Critical Jumbuck Entertainment Ltd
Publication of WO2010124333A1 publication Critical patent/WO2010124333A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors


Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to an input device (10) for a computer program, comprising a handle (12) with a three-dimensional marker element (14) attached thereto having at least three faces (16), each face (16) constituting a detection area detectable by a detection means associated with a computer program. The invention also relates to a computer interface system comprising the input device (10) and processing means configured to receive an indication of particular detection areas detected by the detection means, and to generate an input for a computer graphic object in the computer program on the basis of the sequence in which detection areas are detected. The invention has particular, but not exclusive, application to augmented reality applications.

Description

Computer Input Device and Computer Interface System

Field of the Invention
The present invention relates to computer input devices and interfaces. In particular, the present invention relates to a computer input device and a computer interface system comprising an input device and associated software enabling a user to interact with computer graphic objects generated within a computer program. The present invention has particular, but not exclusive, application in the field of augmented reality systems.
Background of the Invention

In this specification, where a document, act or item of knowledge is referred to or discussed, this reference or discussion is not an admission that the document, act or item of knowledge or any combination thereof was at the priority date:
(i) part of common general knowledge; or
(ii) known to be relevant to an attempt to solve any problem with which this specification is concerned.
Augmented reality ("AR") systems are computer systems that process both real world and computer-generated data. In particular, AR systems are able to blend (or "augment") real world video footage - captured by digital video cameras - with computer-generated graphical objects, that appear in real time in the captured video footage.
Real world objects appearing in the footage may be tagged with markers, used by motion tracking algorithms as measures and as points of reference. Markers are known in the art as fiducial markers or 'fiducials'. Other types of markers used in AR systems include images and special codes, such as BCH codes and barcodes. A number of input devices have been proposed to allow users to interact with the computer-generated graphical objects of AR systems. Wireless mice and similar remote devices exist, incorporating internal accelerometers or gyroscopic devices to detect movement and orientation.
O'Gwynn and Johnstone "Bezier Surface Editing Using Marker-based Augmented Reality" Proceedings of IEEE Virtual Reality 2008 Conference, 8-12 March 2008 describes a hand-held marker wand comprising a handle with a two-sided planar element attached to the end thereof. A fiducial is provided on each side of the planar element. The device provides two modes of interaction, depending on which marker is in view of a camera capturing video footage of the surrounding scene. A user switches between a selection mode and a modification mode by rotating the wand 180° between the fingers to place the requisite marker in view of the camera.
Buchmann et al "FingARtips: Gesture Based Direct Manipulation in Augmented Reality" Proceedings of the 2nd International Conference on Computer Graphics and Interactive Techniques in Australia and South East Asia, ACM Press, 2004, pp 212-221 describes an input device in the form of a glove with attached fiducial markers. Image processing software tracks gestures from the wearer of the glove. A buzzer provided in the palm portion of the glove provides haptic feedback to the user giving the sensation of 'feeling' computer-generated graphical objects. The present invention aims to provide an augmented reality system including an input device that enables an alternative mode of user interaction with computer-generated graphical objects.
Summary of the Invention
According to a first aspect of the present invention there is provided a computer interface system comprising: an input device having two or more detection areas each detectable by a detection means associated with a computer program; and processing means configured to receive an indication of particular detection areas detected by the detection means, and to generate an input for a computer graphic object in the computer program on the basis of the sequence in which detection areas are detected.
Preferably, the detection means is a visual detection means such as a camera.
The present invention provides a computer interface system in which a user interacts with a computer graphic object by manipulating an input device with multiple detection areas. Different inputs are delivered to the object by sequentially detecting the detection areas and generating an input on the basis of the sequence of detection areas detected. This approach allows for an unlimited number of unique sequence strings to be generated, even from an input device having only two detection areas.
Mapping inputs to the order or sequence of detection of detection areas enables any number of inputs to be delivered to a computer graphic object. In contrast, the prior art use of a two-marker wand discussed above can be used only in a simple binary system, where each of the two faces is tied to a single interaction with the computer graphic object. There is no contemplation of using the order in which markers are detected or sequence of detections to deliver inputs to the computer graphic object. Consequently, the prior art system is limited to delivering the same number of inputs to the computer graphic object as there are markers.
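To make the contrast concrete, the following minimal sketch shows how mapping inputs to detection sequences yields many distinct commands from only two detection areas. It is illustrative only: the command names, the mapping table and the on_detection helper are invented here and form no part of the specification.

```python
# Illustrative sketch only: even two detection areas (A and B) admit an
# unbounded space of sequence strings ("AB", "ABA", "ABAB", ...), each of
# which may be bound to a different input. All names are hypothetical.
INPUT_MAP = {
    "AB":  "open_menu",
    "BA":  "close_menu",
    "ABA": "select_item",
    "BAB": "deselect_item",
}

def on_detection(history, new_area):
    """Append a newly detected area and look up any mapped input."""
    if history and history[-1] == new_area:
        return history, None           # no transition, so no new input
    history = history + [new_area]
    return history, INPUT_MAP.get("".join(history))

history = []
for area in ["A", "B", "A"]:
    history, command = on_detection(history, area)
    print(area, "->", command)  # A -> None, B -> open_menu, A -> select_item
```

In practice the history would be reset once a sequence has been consumed; the sketch omits this for brevity.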
According to one embodiment, the computer program is an augmented reality application, whereby the input device is adapted to generate inputs to computer graphic objects created in the augmented reality application and overlaid onto real world video footage also captured by the detection means. However, the computer interface system of the present invention can also be used to generate inputs to other computer programs including computer games and user interfaces. Typically, the processing means is further configured to receive pose data indicating the position and orientation of the particular detection area, and to display the computer graphic object in or relative to the received position and orientation. According to this embodiment, the input device may be used as a pointer, whereby the computer graphic object can be moved around the display device of the augmented reality application or other computer program by making corresponding movements of the input device.
Alternatively, the processing means is further configured to display the computer graphic object in a fixed orientation notwithstanding the receipt of pose data indicating changes in the orientation of the particular detection area. This embodiment accounts for the fact that the input device of the present invention is able to serve the dual purpose of altering the position of a graphic object and delivering an input to that graphic object. In this embodiment, when the orientation of a detection area is changed (as opposed to its position), it is assumed that the user is manipulating the input device to deliver an input with regard to the graphical object, rather than changing the orientation of the graphic object. Accordingly, the orientation of the graphic object does not follow the orientation of the detection area.
Typically, the fixed orientation is an upright orientation.
Typically, the input device comprises a handle with a three-dimensional multi-faced element attached thereto, each face constituting a detection area.
According to this embodiment, particular detection areas are presentable to the detection means by rotating the input device about an axis defined by the handle.
Optionally, the processing means is further configured to receive an indication when two or more detection areas are simultaneously detected by the detection means and to execute a simultaneous-detection routine in response thereto.
The simultaneous-detection routine may comprise ignoring an indication of simultaneously-detected detection areas by providing that a particular detection area (such as the last detection area to be detected by the detection means) is taken as the detection area being detected by the detection means.
Alternatively, the simultaneous-detection routine may comprise assigning a weighting or ranking to each simultaneously-detected detection area according to prescribed criteria, and taking as the detection area being detected by the detection means the detection area with the highest weighting or ranking.
The prescribed criteria may involve assigning a weighting or ranking on the basis of the orientation of the detection area relative to the detection means.
According to another embodiment, the simultaneous-detection routine may comprise taking as the detection area being detected by the detection means a combination of more than one of the simultaneously-detected detection areas.
The multi-faced element may comprise a three-dimensional geometric shape including a cube, cuboid, square-based pyramid, triangular-based pyramid, or polygonal prism (such as a triangular prism, having three rectangular faces). The detection areas may be provided by any convenient means including fiducial markers, images, BCH codes, barcodes, infra-red sensitive areas, colour-coding and the like.
Optionally, a detection area may be repeated on one or more of the faces of the three-dimensional element. This enables the same sequence of detection areas to be generated from presentations of different faces of the three-dimensional element to the detection means.
According to a further aspect of the invention, there is provided an input device for a computer program, comprising a handle with a three-dimensional marker element attached thereto having at least three faces, each face constituting a detection area detectable by a detection means associated with the computer program. Preferably, each of the detection areas is different. More particularly, each of the detection areas may comprise different data.
Preferably, the input device is hand-manipulated. According to a further aspect of the present invention, there is provided a method of generating an input to a computer-graphic object, the method including the steps of: receiving a data sequence indicating the order in which particular detection areas of a multiple-detection area input device are detected; and generating an input for a computer graphic object on the basis of the data sequence.
The computer graphic object may be generated in an augmented reality application.
Typically, the method includes the steps of: receiving pose data indicating the position and orientation of the particular detection area, and displaying the computer graphic object in or relative to the received position and orientation. Alternatively, the method may include the further step of displaying the computer graphic object in a fixed orientation notwithstanding the receipt of pose data indicating changes in the orientation of the particular detection area.
Typically, the fixed orientation may be an upright orientation.

Brief Description of the Drawings
Embodiments of the invention will now be described with reference to the accompanying drawings in which:
Figures 1 to 3 are illustrations of a first embodiment of a computer input device suitable for use in a system and method according to the present invention; Figure 4 is an illustration of a second embodiment of a computer input device suitable for use in a system and method according to the present invention;
Figures 5 and 6 are illustrations of a third embodiment of a computer input device suitable for use in a system and method according to the present invention;
Figures 7 and 8 are illustrations of, respectively, a fourth and fifth embodiment of a computer input device suitable for use in a system and method according to the present invention;
Figures 9 to 12 are schematic drawings illustrating the display of a computer-graphic object in an augmented reality application according to an embodiment of the present invention; Figures 13 to 16 are schematic drawings illustrating the display of a computer-graphic object in an augmented reality application according to a further embodiment of the present invention;
Figures 17 to 19 are schematic drawings illustrating the delivery of an input to a computer-graphic object through manipulation of an input device according to an embodiment of the present invention; and
Figure 20 is a schematic illustration of one approach to simultaneous multiple-face detection handling.

Detailed Description of the Drawings
The present invention will be described below for purposes of illustration as implemented within the context of an augmented reality (AR) system. As discussed above in the background to the invention, an AR system blends real world video footage captured by a detection means, such as a digital video camera, with computer-generated graphical objects. A number of software platforms are currently available for developing AR systems, such as the ARToolkit from Canterbury University, Christchurch, New Zealand and Studierstube from Graz University, Austria. These platforms provide APIs and library routines implementing basic AR functionalities, including image detection and fiducial marker recognition and tracking. As discussed above, markers may be images, colours, or special codes such as BCH codes or bar codes.
Turning to Figure 1, an input device 10 suitable for use with an AR application is illustrated. Input device 10 comprises an elongate solid cylindrical handle 12 with a head 14 attached to the end thereof. Head 14 is a three-dimensional polyhedron having a plurality of faces 16. Each face (16A, 16B, etc) bears a fiducial marker A, B, etc (fiducials not shown).
Input device 10 is thus a non-electronic input device that has a different fiducial marker applied to each face 16 of head 14. Each face 16 therefore defines a separate detection area that is detectable by an input means such as a camera. In turn, the camera is linked to a suitable display device, such as a PC, mobile device, television or DVD player.
In an alternative embodiment, head 14 may be hollow and a light source (not shown) included in the interior thereof, to enable device 10 to function in the dark or in poor lighting conditions. When an interior light source is used, faces 16 are formed from a transparent material, such as a suitable plastic in different colours. A conventional or LED light bulb is a suitable interior light source, arranged to transmit light through faces 16 to thereby illuminate them in their selected colour. The fiducial marker in each case is therefore provided by a colour.
Alternatively, a light source in the form of an LED may be provided on the exterior of faces 16, emitting light of a selected colour from that face. Device 10 is manipulated by a user holding handle 12 and effecting translational and rotational movement of head 14. When rotated (by rolling handle 12 between the fingers of the user's hand), head 14 rotates about an axis defined by the longitudinal centreline of handle 12. This allows head 14 to be manipulated by the user's hand without obscuring the faces of the head. During rotation, the camera loses tracking of a previously detected detection surface and detects a new detection surface. The inventors have applied this phenomenon of sequential detection of the multiple detection surfaces of device 10 to deliver an input to an associated computer-graphic object. Based upon the order in which individual detection areas are detected, various commands can be sent to control a computer-graphic object in an augmented reality application.
Rotation of head 14 in a manner that causes detection of a new detection surface constitutes an input, and it is therefore not necessary to rotate head 14 through 180° or through an entire revolution. Rotation can be either clockwise or counter-clockwise, and as the input is communicated in the sequence or order in which surfaces are detected, rotations are not commutative, i.e. rotating from A to B and from B to A may result in different inputs.
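How such per-frame detections might be reduced to an ordered sequence is sketched below. The detect_markers callable is a stand-in for whatever recognition routine the underlying AR toolkit supplies, not a real API:

```python
# Hypothetical sketch: collapse per-frame marker detections into the
# order in which distinct faces become visible to the camera.
def sequence_from_frames(frames, detect_markers):
    sequence, current = [], None
    for frame in frames:
        visible = detect_markers(frame)    # e.g. {"A"} or {"B"} per frame
        if len(visible) != 1:
            continue                       # simultaneous detections handled later
        face = next(iter(visible))
        if face != current:                # the camera has switched faces
            sequence.append(face)
            current = face
    return sequence

# Simulated footage: the head rotates A -> B -> A; repeat frames collapse.
fake = {"f1": {"A"}, "f2": {"A"}, "f3": {"B"}, "f4": {"A"}}
print(sequence_from_frames(["f1", "f2", "f3", "f4"], fake.get))  # ['A', 'B', 'A']
```

Because only transitions are recorded, the order of detection is preserved and sequences such as AB and BA remain distinguishable.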
Usually, the fiducial on each face 16 is unique. However, one or more fiducials may be repeated on selected faces. This can result in two different rotations of head 14 creating the same input sequence.
A head in the shape of a triangular-based pyramid 18 is shown in Figure 4. Handle 12 is not shown, but its position is schematically marked in dotted line. Pyramid 18 has three usable faces 20 with fiducial markers A, B, C provided on each. Pyramid 18 is shown in the second representation of Figure 4 as rotated about the axis in a clockwise direction to reveal previously hidden face C.
When rotated clockwise, a sample of generated inputs is as follows: AB, ABC; BC, BCA; CA, CAB. Similarly, when rotated counter-clockwise, selected inputs include: AC, ACB; CB, CBA; BA, BAC.
These examples therefore include 12 different input signals to control a related computer graphic object, and further alternatives are possible (ABA, CBC, CAC, ACBC, etc).
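These sequences can be enumerated mechanically. The sketch below assumes the three faces A, B and C lie in cyclic order around the rotation axis, as in Figure 4:

```python
# Illustrative enumeration of the input strings reachable by rotating a
# three-faced head one or two faces in either direction (assumed layout).
FACES = ["A", "B", "C"]

def rotations(start, steps, direction):
    """Faces seen rotating from `start` through `steps` transitions."""
    i = FACES.index(start)
    seq = [start]
    for _ in range(steps):
        i = (i + direction) % len(FACES)   # +1 clockwise, -1 counter-clockwise
        seq.append(FACES[i])
    return "".join(seq)

clockwise = {rotations(f, n, +1) for f in FACES for n in (1, 2)}
counter   = {rotations(f, n, -1) for f in FACES for n in (1, 2)}
print(sorted(clockwise))  # ['AB', 'ABC', 'BC', 'BCA', 'CA', 'CAB']
print(sorted(counter))    # ['AC', 'ACB', 'BA', 'BAC', 'CB', 'CBA']
```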
A head 14 in the shape of a right triangular prism 22 is illustrated with reference to Figures 5 and 6. This head 14 has four usable faces (two triangular, two rectangular) provided with fiducial markers A, B, C, D respectively. Triangular prism 22 is shown in the second representation of Figure 5 rotated about an axis in a clockwise direction to reveal initially hidden faces C and D.
Sample inputs that are generated from triangular prism 22 when rotated in a clockwise direction are as follows:
AB, ABC, ABCD; BC, BCD, BCDA; CD, CDA, CDAB; DA, DAB, DABC
Sample inputs that are generated from triangular prism 22 when rotated in a counter-clockwise direction are: AD, ADC, ADCB; BA, BAD, BADC; CB, CBA, CBAD; DC, DCB, DCBA
These examples therefore include 24 different input signals to control a related computer graphic object, and further alternative sequences will be evident to the skilled reader.
As discussed above, redundancy may be built into the system by repeating a fiducial marker across a selected number of faces of head 14. As an example, in the case of triangular prism 22, the four faces 20 are illustrated in Figure 6 having only two unique markers A and B provided thereon (respectively, on the two triangular faces and the two rectangular faces). This of course reduces the number of possible inputs which may be generated.

A head 14 in the shape of a cube or cuboid (rectangular prism) 26 is illustrated in Figure 7. Cuboid 26 has five usable faces with markers A, B, C, D and Z provided thereon. The sixth side of the cuboid does not typically carry a marker, as it is used as the place of connection of head 14 to handle 12 (see Figure 1).
Cuboid 26 is shown in the second representation of Figure 7 rotated about an axis in a counter-clockwise direction to reveal initially hidden faces C and D. Sample inputs for cuboid 26 rotated in a clockwise direction are as follows:
AB, ABC, ABCD; BC, BCD, BCDA; CD, CDA, CDAB; DA, DAB, DABC; AZ, ABZ, ABCZ, ABCDZ
Sample inputs for cuboid 26 when rotated in a counter-clockwise direction are as follows:
AD, ADC, ADCB; BA, BAD, BADC; CB, CBA, CBAD; DC, DCB, DCBA; DZ, DCZ, DCBZ, DCBAZ
These examples therefore include 32 different input signals to control a related computer graphic object, and further alternative sequences will be evident to the skilled reader.
A head 14 in the shape of a right pentagonal prism 28 is illustrated by reference to Figure 8. Pentagonal prism 28 has six usable sides 30 provided with markers A, B, C, D, E, and Z. Pentagonal prism 28 is shown in Figure 8 rotated in a counter-clockwise direction to reveal initially hidden sides C, D and E.
Sample inputs for pentagonal prism 28 when rotated in a clockwise direction are as follows: AB, ABC, ABCD, ABCDE; BC, BCD, BCDE, BCDEA; CD, CDE, CDEA, CDEAB; DE, DEA, DEAB, DEABC; EA, EAB, EABC, EABCD; AZ, ABZ, ABCZ, ABCDZ, ABCDEZ
Sample inputs for pentagonal prism when rotated in a counter-clockwise direction are as follows:
ED, EDC, EDCB, EDCBA; DC, DCB, DCBA, DCBAE; CB, CBA, CBAE, CBAED; BA, BAE, BAED, BAEDC; AE, AED, AEDC, AEDCB; EZ, EDZ, EDCZ, EDCBZ, EDCBAZ
These examples therefore include 50 different input signals to control a related computer graphic object, and a very large number of further alternative sequences will be evident.
The skilled reader will appreciate that many other shapes, including polygonal prisms and polyhedral forms, are possible for use with the present invention. The more faces, the higher the number of possible inputs. However, with a larger number of faces it becomes potentially more difficult to accurately distinguish which marker is being presented to the detection means.
Input device 10 is used as a pointer to control a computer-graphic object generated within an AR application about the screen of the display device. Each of the markers on the faces of head 14 is linked to a particular object in the AR system, so that tracked movement of any one of the markers actuates a corresponding movement in the computer-graphic object. The types of fiducial markers appropriate for use with this invention are known to the skilled addressee and will not be described in detail in this specification. Such markers are able to provide to a detector: identification of the particular marker device (and therefore the object to be controlled); identification of the particular marker; and position and orientation information of the marker.
A marker is tracked by the AR application detecting the position of the marker and returning an array of floats that are translated into 3D coordinates representing the pose of the marker (the marker's position and orientation). The position of the marker instructs the AR application where to render the graphical object in 3D space. The orientation of the marker instructs the AR application how the graphical object should be orientated when rendered. The routines of the AR application may process both position and orientation data of a marker when rendering an associated graphical object. This is illustrated in Figures 9 to 12 showing changes in the orientation of marker B effecting a corresponding change in the orientation of associated displayed graphical object 40. In Figure 12, graphical object 40 is not displayed upon a failure to detect marker B.
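Purely by way of illustration, the sketch below unpacks such a float array into a position and an orientation. The 3x4 row-major [R|t] layout is an assumption made for this example; real toolkits differ in matrix convention, units and handedness:

```python
# Hypothetical pose unpacking: 12 floats as a row-major 3x4 [R|t] matrix.
def unpack_pose(floats):
    assert len(floats) == 12
    rotation = [floats[0:3], floats[4:7], floats[8:11]]   # 3x3 rotation R
    position = (floats[3], floats[7], floats[11])         # translation t
    return position, rotation

# Identity orientation, marker 0.5 units in front of the camera (assumed).
pose = [1.0, 0.0, 0.0, 0.0,
        0.0, 1.0, 0.0, 0.0,
        0.0, 0.0, 1.0, 0.5]
position, rotation = unpack_pose(pose)
print(position)   # (0.0, 0.0, 0.5) -> where to render the graphical object
print(rotation)   # 3x3 matrix     -> how to orient it when rendered
```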
Alternatively, the routines of the AR application may process pose data from markers in a different manner. In particular, a graphical object may always be rendered in an upright orientation irrespective of the detected orientation of an associated marker. For example, changes in the orientation of markers affixed to faces 16 of input device 10 may result from movements of the device 10 intended by the user to generate an input to an associated graphical object, rather than the user intending to change the orientation of the graphical object.
Accordingly, orientation may be essentially ignored within the AR application by setting the orientation input to zero. This is illustrated in Figures 13 to 16, in which changes in the orientation of marker B do not effect a corresponding change in the orientation of graphic object 50. Instead, graphic object 50 is rendered in a fixed upright orientation irrespective of the orientation of marker B. Again, in Figure 16, graphic object 50 is not displayed upon a failure to detect marker B. In this embodiment, orientation information regarding detected marker B may be used to provide further input information by the AR system, rather than object orientation information.
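A sketch of this fixed-upright behaviour follows; render_object is a hypothetical stand-in for the AR application's draw call:

```python
# Illustrative only: keep the marker's position but replace its detected
# rotation with the identity, so the object never tilts with the marker.
IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def render_upright(render_object, position, rotation):
    render_object(position, IDENTITY)   # orientation input "set to zero"
    return rotation                     # still available as an input channel

def fake_render(position, rotation):
    print("render at", position, "with orientation", rotation)

tilted = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # marker rolled 90 degrees
render_upright(fake_render, (0.0, 0.0, 0.5), tilted)
```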
The input sequences from input device 10 discussed above can be translated into any desired action in respect of the computer graphic object. Figures 17 to 19 illustrate the translation of an input sequence from input device 10 to an action taken in relation to computer-graphic object 60.
Turning to Figure 17, the initial pose of device 10 is such that marker A on face 16A is detected by the camera. Device 10 is rotated by manipulation of handle 12 in the direction illustrated. Figure 18 shows a transition point on the rotation when marker A on face 16A and marker B on face 16B are both detected by the camera.
In Figure 19, marker B becomes the current marker detected by the camera and a detection sequence AB is therefore recorded. Sequence AB is mapped to a corresponding input in the AR software in order to move an arm of graphic object, robot 60, to an outstretched position.
More complex movements of robot 60 can be mapped to equally complex detection sequences from input device 10.
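One way such a mapping from detection sequence to object action might look in code is sketched below; the Robot class and its method names are invented for illustration:

```python
# Hypothetical binding of detection sequences to actions on an AR object.
class Robot:
    def __init__(self):
        self.arm = "lowered"
    def stretch_arm(self):
        self.arm = "outstretched"

ACTIONS = {
    "AB": Robot.stretch_arm,
    # longer sequences could map to more complex movements, e.g. "ABCD"
}

def apply_sequence(robot, sequence):
    action = ACTIONS.get(sequence)
    if action:
        action(robot)

robot = Robot()
apply_sequence(robot, "AB")   # the rotation from face A to face B was recorded
print(robot.arm)              # outstretched
```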
Input device 10 and handle 12 can be made from paper, card, plastic, metal or any other suitable material.

As discussed above, individual faces 16A of device 10 are detected by a camera and an input is delivered to an AR object on the basis of the sequence in which faces are detected. However, at various points in time, the camera may have more than one face (and consequently more than one marker) in its field of view at the same time. This situation may be processed by the AR system according to the invention in a number of different ways. The solutions discussed below may be applied separately or, in certain circumstances, may be used in combination.

Firstly, the system may be configured to process one marker at a time. According to this solution, the last detected face is assigned a 'superior' ranking in the event that two or more faces are simultaneously detected. Hence, only when the last-detected face is no longer detected is a face-transition considered to have occurred and an input consequently delivered to an AR object.
An example marker sequence for this solution is as follows:
A-side captured: A is detected
AB-sides captured: A was last detected, hence A is detected
B-side captured: B is detected (input due to AB sequence)
BC-sides captured: B was last detected, hence B is detected
C-side captured: C is detected (input due to BC sequence)
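One possible coding of this last-detected ranking is sketched below (the per-frame set representation is assumed for illustration):

```python
# Illustrative resolver: the last face seen on its own stays "current"
# until it drops out of view, so two-face frames never trigger an input.
def resolve_last_detected(visible, current):
    if current in visible:
        return current                 # e.g. {A, B} with current A -> still A
    if len(visible) == 1:
        return next(iter(visible))     # a clean face-transition occurred
    return current                     # ambiguous frame: keep the previous face

current, sequence = None, []
for visible in [{"A"}, {"A", "B"}, {"B"}, {"B", "C"}, {"C"}]:
    new = resolve_last_detected(visible, current)
    if new is not None and new != current:
        sequence.append(new)
        current = new
print(sequence)   # ['A', 'B', 'C'] -> inputs fire on the AB and BC transitions
```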
Secondly, a pose orientation approach may be taken. According to this solution, the incident angles of the two (or more) detected faces are calculated (by use of an appropriate image processing algorithm) and the one that is closest to 'face-on' is assigned a superior ranking. While the pose may be irrelevant in some of the implementations of the invention discussed above, it is useful for assigning rankings to simultaneously-detected faces 16 of device 10.
As shown by reference to Figure 20, the angle of detection is defined as the angle between the camera vector and the marker face normal vector at the point where the camera vector intersects the face of the marker. This point of intersection has, of course, to be in the field of view of the camera.
The system is configured to recognise only angles of detection between 0° and 80°, 0° corresponding to a 'face-on' view. Faces with an angle greater than 80° are ignored. The closer the angle to 0°, the higher priority the face is given. In the case of a device 10 with head 14 in the shape of a cube, an example marker sequence using the pose orientation approach is as follows:
AB captured: α is at 25°, Ω is at 65°: A is detected.
AB captured: α is at 60°, Ω is at 30°: B is detected.
A captured: α is at 0°, Ω is not visible: A is detected.
(Here α and Ω denote the angles of detection of faces A and B respectively.) When AB is captured at equal angles, the relevant frame of video data captured by the camera is ignored and the system waits for the next definitive capture.
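This ranking might be coded along the following lines; the incidence angles are assumed to be supplied by the toolkit's pose estimate rather than computed here:

```python
# Illustrative pose-orientation resolver: prefer the face whose angle of
# detection is closest to face-on (0 degrees); ignore anything past 80.
MAX_ANGLE = 80.0   # degrees

def resolve_by_angle(detections):
    """detections: {face: incidence angle in degrees}; None if ambiguous."""
    candidates = {f: a for f, a in detections.items() if a <= MAX_ANGLE}
    if not candidates:
        return None
    best = min(candidates, key=candidates.get)
    ties = [f for f, a in candidates.items() if a == candidates[best]]
    return best if len(ties) == 1 else None   # equal angles: skip the frame

print(resolve_by_angle({"A": 25.0, "B": 65.0}))   # A
print(resolve_by_angle({"A": 60.0, "B": 30.0}))   # B
print(resolve_by_angle({"A": 45.0, "B": 45.0}))   # None (wait for next capture)
print(resolve_by_angle({"A": 85.0}))              # None (beyond the cutoff)
```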
Alternatively, the system may be configured to select one of the faces randomly or according to a deterministic selection mode, such as choosing a face based on proximity to one of the edges.
A further solution involves an alternative approach of making use of both (or all) simultaneously-detected markers, instead of selecting a particular marker based on a ranking system. According to this solution, simultaneously-detected faces are considered as detection areas in their own right. For example, rather than assigning an input sequence of A-B-C-D, the input sequence is A-AB-B-BC-C-CD. In this example the simultaneously-detected faces AB (faces A and B), BC (faces B and C) and CD (faces C and D) each constitute new detection areas in their own right.
This solution allows 'half-movements', such as a transition from face A to face AB to define a new detection event. Consequently, complex inputs are delivered to AR objects resulting from subtle movements of the input device.
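A sketch of this combined-area treatment follows, with the same assumed per-frame set representation as above:

```python
# Illustrative only: a frame showing faces A and B together is treated
# as its own detection area "AB", so half-turns add steps to the sequence.
def composite_sequence(frames):
    sequence, last = [], None
    for visible in frames:
        area = "".join(sorted(visible))   # {"A", "B"} -> "AB"
        if area and area != last:
            sequence.append(area)
            last = area
    return sequence

frames = [{"A"}, {"A", "B"}, {"B"}, {"B", "C"}, {"C"}, {"C", "D"}]
print(composite_sequence(frames))   # ['A', 'AB', 'B', 'BC', 'C', 'CD']
```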
Although the system and method of the present invention has been developed for use with an input device having a three-dimensional marker element, the skilled reader will appreciate that it may be employed with an input device having a two-dimensional marker element, such as the planar wand of the prior art discussed in the introductory section of this specification. This recognises that even though such a device comprises only two detection areas, they may nevertheless be read as a detection sequence. For example, the sequence of detection areas ABA can be used to provide an input for the computer graphic object different from that provided by the sequence BAB.
The word 'comprising' and forms of the word 'comprising' as used in this description and in the claims do not limit the invention claimed to exclude any variants or additions.
Modifications and improvements to the invention will be readily apparent to those skilled in the art. Such modifications and improvements are intended to be within the scope of this invention.

Claims

1. A computer interface system comprising: an input device having two or more detection areas each detectable by a detection means associated with a computer program; and processing means configured to receive an indication of particular detection areas detected by the detection means, and to generate an input for a computer graphic object in the computer program on the basis of the sequence in which detection areas are detected.
2. A computer interface system according to claim 1, wherein the processing means is further configured to receive pose data indicating the position and orientation of the particular detection area, and to display the computer graphic object in or relative to the received position and orientation.
3. A computer interface system according to claim 1 or claim 2, wherein the processing means is further configured to display the computer graphic object in a fixed orientation notwithstanding the receipt of pose data indicating changes in the orientation of the particular detection area.
4. A computer interface system according to claim 3, wherein the fixed orientation is an upright orientation.
5. A computer interface system according to any preceding claim, wherein the input device comprises a handle with a three-dimensional multi-faced element attached thereto, each face constituting a detection area.
6. A computer interface system according to any preceding claim, wherein the processing means is configured to receive an indication when two or more detection areas are simultaneously detected by the detection means and to execute a simultaneous-detection routine in response thereto.
7. A computer interface system according to claim 6, wherein the simultaneous-detection routine comprises ignoring an indication of simultaneously-detected detection areas by providing that a particular detection area is taken as the detection area being detected by the detection means.
8. A computer interface system according to claim 6, wherein the simultaneous-detection routine comprises assigning a weighting or ranking to each simultaneously-detected detection area according to prescribed criteria, and taking as the detection area being detected by the detection means the detection area with the highest weighting or ranking.
9. A computer interface system according to claim 8, wherein the prescribed criteria involves assigning a weighting or ranking on the basis of the orientation of the detection area relative to the detection means.
10. A computer interface system according to claim 6, wherein the simultaneous-detection routine comprises taking as the detection area being detected by the detection means a combination of more than one of the simultaneously-detected detection areas.
11. A method of generating an input to a computer-graphic object, the method including the steps of: receiving a data sequence indicating the order in which particular detection areas of a multiple-detection area input device are detected; and generating an input for a computer graphic object on the basis of the data sequence.
12. A method according to claim 11, wherein the computer graphic object is generated in an augmented reality application.
13. A method according to claim 12, including the steps of: receiving pose data indicating the position and orientation of the particular detection area, and displaying the computer graphic object in or relative to the received position and orientation.
14. A method according to claim 12, including the step of displaying the computer graphic object in a fixed orientation notwithstanding the receipt of pose data indicating changes in the orientation of the particular detection area.
15. A method according to claim 14, wherein the fixed orientation is an upright orientation.
16. An input device for a computer program, comprising a handle with a three-dimensional marker element attached thereto having at least three faces, each face constituting a detection area detectable by a detection means associated with a computer program.
17. An input device for a computer program according to claim 16, wherein each of the detection areas is different.
18. An input device according to claim 17, wherein each of the detection areas is provided with respectively different data for detection by the detection means.
19. An input device according to any one of claims 16-18, wherein more than one detection area is provided with the same data.
20. An input device according to any one of claims 16-19, wherein each detection area is provided with data in the form of one or more of: a fiducial marker, an image, a BCH code, a barcode, an infra-red sensitive area, a colour.
21. An input device according to any one of claims 16-20, including a light source for illuminating one or more of the faces.
22. An input device according to claim 21, wherein the faces are at least partially transparent and the light source is provided in the interior of the marker element, the light source arranged to transmit light through the faces and thereby illuminate the faces.
23. An input device according to claim 21, wherein the light source is provided on the exterior of the marker element.
24. An input device according to any one of claims 16-23, wherein the marker element has a three-dimensional geometric shape selected from the group of: a cube, a cuboid, a pyramid, and a polygonal prism.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2009901853 2009-04-29
AU2009901853A AU2009901853A0 (en) 2009-04-29 Computer interface system and method

Publications (1)

Publication Number Publication Date
WO2010124333A1 (en) 2010-11-04

Family

ID=43031581

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2010/000492 WO2010124333A1 (en) 2009-04-29 2010-04-29 Computer input device and computer interface system

Country Status (1)

Country Link
WO (1) WO2010124333A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020095276A1 (en) * 1999-11-30 2002-07-18 Li Rong Intelligent modeling, transformation and manipulation system
US20070098234A1 (en) * 2005-10-31 2007-05-03 Mark Fiala Marker and method for detecting said marker

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"artoolkit rubik's cube marker", SONICIDEA, 26 February 2009 (2009-02-26), Retrieved from the Internet <URL:http://www.youtube.com/watch?v=DSHPLiWhtMY> [retrieved on 20100604] *
ANSAR, A. ET AL.: "Linear Augmented Reality Registration", COMPUTER ANALYSIS OF IMAGES AND PATTERNS, vol. 2124, 2001, BERLIN / HEIDELBERG, pages 383 - 390 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3563568A4 (en) * 2017-01-02 2020-11-11 Merge Labs, Inc. Three-dimensional augmented reality object user interface functions
CN110119193A (en) * 2018-02-06 2019-08-13 广东虚拟现实科技有限公司 Visual interactive device
CN110119193B (en) * 2018-02-06 2024-03-26 广东虚拟现实科技有限公司 Visual interaction device
RU197799U1 (en) * 2019-10-17 2020-05-28 Федеральное государственное автономное образовательное учреждение высшего образования "Дальневосточный федеральный университет" (ДВФУ) Augmented Reality Application Management App

Similar Documents

Publication Publication Date Title
Deng et al. How to learn an unknown environment
Rabbi et al. A survey on augmented reality challenges and tracking
US9489040B2 (en) Interactive input system having a 3D input space
KR101491035B1 (en) 3-D Model View Manipulation Apparatus
US20140320524A1 (en) Image Display Apparatus, Image Display Method, And Information Storage Medium
US20040113885A1 (en) New input devices for augmented reality applications
US9110512B2 (en) Interactive input system having a 3D input space
Sukan et al. Quick viewpoint switching for manipulating virtual objects in hand-held augmented reality using stored snapshots
JP2006209563A (en) Interface device
US9218062B2 (en) Three-dimensional menu system using manual operation tools
EP3914367B1 (en) A toy system for augmented reality
WO2018025511A1 (en) Information processing device, method, and computer program
JP2004246578A (en) Interface method and device using self-image display, and program
Schjerlund et al. Ovrlap: Perceiving multiple locations simultaneously to improve interaction in vr
WO2010124333A1 (en) Computer input device and computer interface system
CN110140100B (en) Three-dimensional augmented reality object user interface functionality
Bikos et al. An interactive augmented reality chess game using bare-hand pinch gestures
JP4340135B2 (en) Image display method and image display apparatus
Lee et al. Tangible spin cube for 3D ring menu in real space
US20140337802A1 (en) Intuitive gesture control
JPH09311759A (en) Method and device for gesture recognition
Eitsuka et al. Authoring animations of virtual objects in augmented reality-based 3d space
JP4546953B2 (en) Wheel motion control input device for animation system
Chen et al. Performance Characteristics of a Camera-Based Tangible Input Device for Manipulation of 3D Information.
Metoyer et al. A Tangible Interface for High-Level Direction of Multiple Animated Characters.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 10769147
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 10769147
Country of ref document: EP
Kind code of ref document: A1