WO2011029067A2 - Large scale multi-user, multi-touch system - Google Patents


Info

Publication number
WO2011029067A2
WO2011029067A2 (PCT application PCT/US2010/047913)
Authority
WO
WIPO (PCT)
Prior art keywords
user
touch
physical
user interface
physical space
Prior art date
Application number
PCT/US2010/047913
Other languages
English (en)
Other versions
WO2011029067A3 (fr)
Inventor
Niklas Lundback
Steve Mason
Michael Harville
Ammon Haggerty
Nikolai Cornell
Original Assignee
Obscura Digital, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 12/553,961 (published as US 2011/0050640 A1)
Priority claimed from US 12/553,966 (published as US 8,730,183 B2)
Priority claimed from US 12/553,962 (published as US 9,274,699 B2)
Priority claimed from US 12/553,959 (published as US 2011/0055703 A1)
Application filed by Obscura Digital, Inc. filed Critical Obscura Digital, Inc.
Publication of WO2011029067A2
Publication of WO2011029067A3


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/903 Querying
    • G06F 16/9032 Query formulation
    • G06F 16/90324 Query formulation using system suggestions
    • G06F 16/90328 Query formulation using system suggestions using search space presentation or visualization, e.g. category or range presentation and selection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/041 Indexing scheme relating to G06F 3/041 - G06F 3/045
    • G06F 2203/04104 Multi-touch detection in digitiser, i.e. details about the simultaneous detection of a plurality of touching locations, e.g. multiple fingers or pen and finger

Definitions

  • the present invention is related to the field of multi-user human-computer interfaces and, more specifically, is directed to large scale multi-touch systems for concurrent use by multiple users.
  • GUIs (graphical user interfaces)
  • BCI (brain-computer interface)
  • EEGI (electro-encephalogram interface)
  • NHCI (neural human-computer interfaces)
  • the system comprises a multi-touch display component fabricated in dimensions sufficient for at least a plurality of users and for displaying projected images and for receiving multi-touch input.
  • the system includes a plurality of image projectors, a plurality of cameras for sensing multi-touch input, and user interface software for managing user space.
  • the interface software implements techniques for managing multiple users using the same user interface component by allocating physical spaces within the multi-touch display component and coordinating movement of displayed objects within and between the physical spaces.
  • Embodiments include a plurality of audio transducers (speakers) and methods for performing audio spatialization using the plurality of audio transducers corresponding to the physical spaces by dynamic apportioning of volume levels to the audio transducers based on movement of a displayed object.
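  • As an illustrative sketch only (the function names and values below are invented, not taken from the patent), the dynamic apportioning of volume levels could be computed by weighting each transducer by its distance to the displayed object and normalizing, so the sound appears to follow the object as it moves:

```python
# Hypothetical sketch: apportion volume across a horizontal row of audio
# transducers (ST1..STN) based on a displayed object's x-position.

def transducer_gains(object_x, transducer_xs, falloff=0.5):
    """Return a normalized gain per transducer for an object at object_x.

    object_x      -- x-coordinate of the displayed object (display units)
    transducer_xs -- x-coordinates of the transducers, left to right
    falloff       -- distance at which a transducer's raw weight halves
    """
    weights = [1.0 / (1.0 + abs(object_x - tx) / falloff) for tx in transducer_xs]
    total = sum(weights)
    return [w / total for w in weights]  # normalize so overall loudness stays constant

# Five transducers spaced one unit apart; the object sits at x = 1.2, so most
# of the volume is apportioned to the transducers nearest that position.
print(transducer_gains(1.2, [0.0, 1.0, 2.0, 3.0, 4.0]))
```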
  • the user interface apparatus comprises a multi-touch display component in dimensions sufficient for a plurality of users to work or cooperate.
  • a plurality of (automatically calibrated) image projectors serve for projecting images on the multi-touch display component, while a plurality of (automatically calibrated) cameras serve for sensing multi-touch input.
  • the user interface software manages a plurality of users of a computing device by dynamically providing a plurality of zones (e.g. one for each user), wherein a zone provides a user interface displaying images and receiving touch input.
  • a user assigned to one zone can operate independently from (or cooperatively with) another user in another zone.
  • Figure 1 illustrates a large scale multi-user, multi-touch user interface apparatus, according to one embodiment.
  • Figure 2 illustrates an alternative form of a large scale multi-user, multi-touch user interface apparatus, according to one embodiment.
  • Figure 3 illustrates a large scale multi-user, multi-touch user interface apparatus top view, according to one embodiment.
  • Figure 4 illustrates a large scale multi-user, multi-touch user interface apparatus rear view, according to one embodiment.
  • Figure 5 illustrates a large scale multi-user, multi-touch user interface apparatus alternative rear view, according to one embodiment.
  • Figure 6 illustrates a large scale multi-user, multi-touch user interface apparatus alternative left side view using a mirror, according to one embodiment.
  • Figure 7 is a system schematic for a large scale multi-user, multi-touch user interface apparatus top view, according to one embodiment.
  • Figure 8 is a depiction of a plurality of sensing calibration points within a large scale multi-user, multi-touch user interface apparatus, according to one embodiment.
  • Figure 9 shows distorted and overlapping projected images upon a contiguous multi-touch user interface apparatus 900, and a data flow for producing a corrected image projection, according to one embodiment.
  • Figure 10 is a graphical depiction of a distortion correction technique, according to one embodiment.
  • Figure 11 is a depiction of a mathematical transformation as part of a distortion correction technique, according to one embodiment.
  • Figure 12 is a depiction of a technique for image projector calibration using a series of projected pattern sequences, according to one embodiment.
  • Figure 13 depicts image projector calibration using correction functions, according to one embodiment.
  • Figure 14 is a flowchart for a method for image projector calibration using a series of projected pattern sequences, according to one embodiment.
  • Figure 15A is a depiction of an apparatus showing a configuration for camera calibration, according to one embodiment.
  • Figure 15B is a depiction of a mosaicked, unified view of a single frame in a video stream after camera calibration, according to one embodiment.
  • Figure 16 is a depiction of an apparatus in a configuration for managing interactivity among multiple users, according to one embodiment.
  • Figure 17 is a depiction of an apparatus in a configuration with an attract mode for managing interactivity among multiple users, according to one embodiment.
  • Figure 18 is a depiction of an apparatus in a configuration with an attract mode for dynamically managing interactivity among multiple users, according to one embodiment.
  • Figure 19 is a depiction of an apparatus in a configuration showing border clue icon objects for dynamically managing interactivity among multiple users, according to one embodiment.
  • Figure 20 is a depiction of icon objects for managing interactivity among multiple users, according to one embodiment.
  • Figure 21 is a depiction of an apparatus in a configuration with overlapping user zones, according to one embodiment.
  • Figure 22 is a depiction of an apparatus for formation of illumination planes for touch detection, according to one embodiment.
  • Figure 23 is a depiction of an apparatus for touch detection using multiple cameras with narrow band-pass filters, according to one embodiment.
  • Figure 24 is a depiction of touch scattering shown in a mosaic formed by multiple cameras, according to one embodiment.
  • Figure 25 is a flowchart of a method for imparting a physical parameter to a display object, according to one embodiment.
  • Figure 26 is a depiction of a method for passing a display object, according to one embodiment.
  • Figure 27 is a depiction of an apparatus for managing multiple users using the same user interface apparatus having a plurality of audio transducers, according to one embodiment.
  • Figure 28 is a depiction of an alternative apparatus for managing multiple users using the same user interface apparatus having a plurality of audio transducers, according to one embodiment.
  • Figure 29 is a sequence chart showing a protocol for calibration for mapping into a corrected coordinate system, according to one embodiment.
  • Figure 30 is a sequence chart showing a protocol for managing interactivity among multiple users, according to one embodiment.
  • Figure 31 is a sequence chart showing a protocol for managing multiple users using the same user interface apparatus having a plurality of audio transducers, according to one embodiment.
  • Figure 32 is a flowchart for a method for calibrating a user interface apparatus, according to one embodiment.
  • Figure 33 is a flowchart for a method for calibrating a user interface apparatus, according to one embodiment.
  • Figure 34 is a flowchart for a method for calibrating a user interface apparatus, according to one embodiment.
  • Figure 35 is a flowchart for a method for calibrating a user interface apparatus, according to one embodiment.
  • Figure 36 is a flowchart for a method for managing multiple users using the same user interface component, according to one embodiment.
  • Figure 37 is a flowchart for a method for managing multiple users using the same user interface component, according to one embodiment.
  • Figure 38 is a flowchart for a method for managing multiple users using the same user interface component, according to one embodiment.
  • Figure 39 is a flowchart for a method for managing multiple users using the same user interface component having a plurality of audio transducers, according to one embodiment.
  • Figure 42 is a diagrammatic representation of a machine in the exemplary form of a computer system, within which a set of instructions may be executed, according to one embodiment.
  • Man-machine input/output (I/O) systems for computers have evolved over time.
  • Initially, computer output was provided by a cathode ray tube (CRT), similar to the tubes used in early televisions.
  • computer input was provided by an alphanumeric keyboard, and later a mouse or other pointing device.
  • as computing resources surged, more emphasis was placed on the utility of the man-machine interfaces, and ever more computing power became dedicated to aspects of the man-machine interface.
  • displays became larger and better able to render realistic images, coordination between pointing devices and displays became more user-friendly, and various interface features became increasingly ubiquitous.
  • Some embodiments of the invention are practiced in an environment having an open space, suitably sized to accommodate multiple people (i.e. users).
  • a large scale multi-user, multi-touch system may appear to be embedded into a wall.
  • Figure 1 illustrates a large scale multi-user, multi-touch user interface apparatus, according to one embodiment.
  • the large scale multi-user, multi-touch user interface apparatus depicted in Figure 1 addresses several problems: (1) support of multiple visual and aural workspaces on a multi-touch user interface for multiple users within close proximity of each other, (2) construction of a large scale seamless multi-touch interface from multiple smaller display and touch sensing components and (3) creation of spatial correspondence between objects appearing on the multi-touch surface and the sounds associated with them.
  • Embodiments of the invention described herein address technical problems with large scale display interfaces and, more specifically, in the following areas: (1) a large scale multi-user, multi-touch user interface apparatus formed by a mosaic of multiple smaller units (e.g. multiple image projectors, multiple cameras, etc), (2) limitations of implementations composed of multiple smaller units in supporting large motor gestures and dragging, (3) space management for multiple users in close proximity, (4) sound management for multiple users in close proximity, (5) multi-touch human interface management when multi-touch is used within the context of a large scale multi-user, multi-touch user interface apparatus, and (6) a seamless presentation of a large-scale multi-user, multi-touch user interface.
  • Figure 1 illustrates a large scale multi-user, multi-touch user interface apparatus 100 including a multi-touch display component 102, dimensioned with a height H 131 and a width W 132. Strictly for illustrative purposes, Figure 1 depicts a user 101 in proximity to the large scale multi-user, multi-touch user interface apparatus 100. As shown, the height H 131 of the multi-touch display component 102 is similar in dimension to the head and torso of the user 101. The user can thus approach the large scale multi-user, multi-touch user interface apparatus 100 for viewing and touching from a comfortable standing or sitting position. More specifically, the user can approach the user interface area (e.g. the interaction region 110).
  • the user interface area may be bounded by a frame 105.
  • a frame 105 may be used to provide mechanical support, and it may be used to provide a mechanical housing for touch illumination sources and calibration apparatus (also discussed infra).
  • Frame 105 might serve as a housing apparatus for sound transducers ST1 - STN.
  • Also shown in Figure 1 is a depiction of a human left hand 115 and a human right hand 116.
  • the left hand 115 includes a left thumb 121, left index finger 120, left middle finger 119, left ring finger 118, and a left little finger 117.
  • the right hand 116 includes a right thumb 122, right index finger 123, right middle finger 124, right ring finger 125, and a right little finger 126. Hands and digits are used in multi-touch interface operations, for example, a left index finger 120 might be used to indicate a touch point at TP 127.
  • Figure 2 illustrates an alternative form of a large scale multi-user, multi-touch user interface apparatus, according to one embodiment.
  • the present large scale multi-user, multi-touch user interface apparatus alternate embodiment 200 may be implemented in the context of the architecture and functionality of Figure 1.
  • alternate embodiment 200 might be included in a large scale multi-user, multi-touch user interface apparatus 100.
  • the alternate embodiment 200 or any characteristics therein may be present in any desired configuration.
  • the interaction region shape 210 (corresponding to the aforementioned interaction region 110) may be a shape other than the rectangular shape as shown in Figure 1.
  • when the interaction region shape 210 is non-rectangular, other elements such as the multi-touch display component 102, the interaction region 110, the frame 105, and the sound transducers ST1 - STN might be organized depending on the interaction region shape 210 and still function within embodiments of the invention as described herein.
  • the frame 105 substantially surrounds the interaction region shape 210.
  • the frame 105 serves as a device for visually bounding the interaction region shape 210, and in some such cases, the frame 105 might appear as a rectangle or other rectilinear shape, or it might appear as a curve-linear shape more similar to the interaction region shape 210.
  • the frame 105 may function as a mechanical mount for a multi-touch display component 102.
  • the multi-touch display component 102 is a thick piece of glass, possibly with special treatments for tempering, safety and opacity.
  • a thin film suitable for use as a projection screen is attached to one side of the glass sheet, or is embedded within it.
  • the multi-touch display component 102 is a very thin membrane, possibly with a thickness measured in thousandths of an inch.
  • the frame 105 might be a substantial structure (e.g. a mechanical component, a mounting bracket, a stand, etc) able to support the weight and other forces involved in positioning, and maintaining position, of the multi-touch display component 102.
  • the multi-touch display component 102 may comprise a formed material (e.g. formed glass, formed acrylic, formed plastic, a formed composite material, a formed synthetic material, etc) that is substantially transparent. Other materials may be less transparent, and in still other cases, the materials used for the multi-touch display component 102 may be treated to achieve a desired degree (i.e. less than 100%) of opacity.
  • a thin layer of projection screen material may be mounted to either side of the formed material, or embedded inside it as in the case of materials such as StarGlas 100.
  • a multi-touch display component 102 can be further characterized as having a front side and a rear side.
  • Figure 2 shows a front view that is showing a front side of a multi-touch display component 102.
  • apparatuses arranged substantially on the rear side of a multi-touch display component, and such arrangements are shown in figures depicting rear views and top views of the large scale multi-user, multi-touch user interface apparatus 100.
  • Figure 3 illustrates a large scale multi-user, multi-touch user interface apparatus top view, according to one embodiment.
  • the present large scale multi-user, multi-touch user interface apparatus top view and rear area 300 may be implemented in the context of the architecture and functionality of Figure 1 and Figure 2.
  • elements shown in the large scale multi-user, multi-touch user interface apparatus top view and rear area 300 might be included in a large scale multi-user, multi-touch user interface apparatus 100.
  • the large scale multi-user, multi-touch user interface apparatus top view and rear area 300 or any characteristics therein may be present in any desired configuration.
  • image projectors P1 - PN are situated in the rear area.
  • cameras C1 - CN are also situated in the rear area.
  • an image projector and a camera may be paired, as shown for pairs P1C1, P2C2, P5C5, P6C6 and PNCN, and such a pair may be mechanically affixed, or even housed, in the same housing as shown for housing 350-1, housing 350-2, housing 350-5, housing 350-6, and housing 350-N.
  • the image projector-camera pairs are shown as being laterally situated; however, a projector-camera pair may be formed by placing a projector atop a camera, or by placing a camera atop a projector.
  • an image projector P3 may be positioned singularly, with or without a housing 350-3.
  • a camera C4 may be positioned singularly, with or without a housing 350-4.
  • the number of cameras and the number of projectors in embodiments of the invention need not be equal, nor does there need to be any correspondence between each projector and some camera, or between each camera and some projector.
  • an image projector has a field of projection (e.g. Field1, Field3), and each camera has a field of view (e.g. Field2, Field4).
  • the fields can be overlapping to varying degrees.
  • the projection fields overlap substantially, and adjustments (e.g. projection blending) to avoid artifacts in and near projector overlap regions are performed, using the techniques described below, to produce a seamless, unified display on multi-touch display component 102.
  • one camera field of view may overlap with another camera field of view, and field of view adjustments (e.g. mosaicking) are performed using the techniques described below to create a unified camera image viewing a multi-touch display component 102, with no double coverage or missing areas, as if a single camera were viewing the entire multi-touch display surface. It is not necessary for a correspondence to exist between projector fields of projection and camera fields of view.
  • the large scale multi-user, multi-touch user interface apparatus top view and rear area 300 includes a touch illumination region 340.
  • the touch illumination region 340 is a region in front of the multi-touch display component 102. This region is substantially planar, though in exemplary embodiments, the thickness of the touch illumination region 340 is non-zero and finite.
  • the illumination region is created through use of multiple lasers positioned inside frame 105 around the boundary of interaction region 110, each laser being fitted with a lens that broadens the laser beam into a plane, and each laser oriented such that the emitted plane of light is substantially parallel to and near interaction region 110.
  • other light-emitting devices may alternatively be used to create the touch illumination region.
  • the touch illumination region 340 is used for detecting multi-touch inputs from one or more users.
  • a large scale multi-user, multi-touch user interface apparatus 100 comprising a multi-touch display component 102 comprising dimensions sufficient for accommodating multiple users, and for displaying projected images and for receiving from multiple users a large number (e.g. dozens) of touch inputs.
  • a large scale multi-user, multi-touch user interface apparatus 100 may comprise a plurality of image projectors P1 - PN for projecting a projected image onto a multi-touch display component and may comprise a plurality of cameras C1 - CN for sensing user multi-touch input on a multi-touch display component.
  • FIG. 4 illustrates a large scale multi-user, multi-touch user interface apparatus rear view, according to one embodiment.
  • the present large scale multi-user, multi-touch user interface apparatus rear view 400 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 3.
  • elements shown include a plurality of image projectors P1 - PN for projecting a projected image onto a multi-touch display component 102, a plurality of cameras C1 - CN for sensing user multi-touch input, calibration points CP1 - CPN at known locations on multi-touch display component 102, and a control panel 450.
  • the image projectors P1 - PN may have overlapping fields of projection; thus the actual image projection area (i.e. an area fully within the field of projection) for each of the image projectors P1 - PN may be calibrated so as to avoid brightness and color artifacts in the areas of overlapping fields of projection.
  • Various techniques to do so are disclosed below.
  • the control panel 450 serves for facilitating a human operator to control the setup and operation of the large scale multi-user, multi-touch user interface apparatus 100.
  • a human operator may start or stop the apparatus or components thereof, may override any software controls to perform calibration and/or adjustments, and/or perform other maintenance.
  • image projectors, cameras and calibration points need not be oriented precisely in a linear or rectilinear array, or disposed in an equidistant fashion, although it might be convenient to so orient.
  • embodiments of the invention may have unequal numbers of projectors and cameras, with the arrangement of the cameras not corresponding to the arrangement of the projectors.
  • the number and arrangement of projectors is not constrained by the invention, except that the fields of projection, when taken together, should cover multi-touch interaction region 110.
  • the number and arrangement of cameras is not constrained by the invention, except that the camera fields of view, when combined, should cover multi-touch interaction region 110.
  • Figure 6 illustrates a large scale multi-user, multi-touch user interface apparatus alternative left side view using a mirror, according to one embodiment.
  • the present large scale multi-user, multi-touch user interface apparatus alternative left side view 600 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 5.
  • the elements in Figure 6 include an image projector Pi juxtaposed so as to project onto a mirror 610, which mirror in turn reflects onto a multi-touch display component 102.
  • the mirror 610 is affixed to a support structure 620, which may facilitate adjustment of angles θ1, θ2, and θ3.
  • the shape and material comprising mirror 610 is substantially flat.
  • any of the projectors P1 - PN and any of the cameras C1 - CN may communicate with any processing node N1 - NM. Moreover, any processing node from among N1 - NM may communicate with any other processing node N1 - NM.
  • particular functions may be implemented via processing within a particular node. As shown in the embodiment of Figure 7, graphics functions are served by graphics processing node N1, user multi-touch sensing is performed by touch processing node N2, and audio functions are served by audio processing node N3.
  • image projectors P1 - PN are arranged such that their fields of projection produce some projection overlap 742 of image projection onto the rear surface of a multi-touch display component 102.
  • one or more techniques might be used to adjust projected imagery in order to create seamless display on multi-touch display component 102, and such techniques might use one or more graphics processing nodes Ni to perform graphics processing, possibly including calibrations of multiple image projectors and transformations of images for projection onto the rear surface of a multi-touch display component 102.
  • image projector P1 has a field of projection 744-1; similarly, projectors P2 - PN have fields of projection 744-2 - 744-N, respectively.
  • the seamless display on multi-touch display component 102 created from fields of projection 744-1 - 744-N may be divided in order to create more than one user physical space (e.g. zones, silos), each of which physical spaces may be dedicated to a particular user.
  • a single silo may be contained within a single field of projection, in other cases a single silo may correspond to portions of two or more fields of projection, and in some cases a single field of projection may contain multiple silos and a fraction thereof.
  • a first silo provides a first user interface for a first user by displaying images within a first silo.
  • user spaces (e.g. zones, silos) may be partially or fully overlapping; for example, in a game-playing situation multiple players may share the same game scene.
  • user spaces may take a variety of forms (e.g. zones, silos, tiled zones).
  • the cameras C1 - CN serve in conjunction with any one or more instances of a touch processing node N2 for receiving touch input from a user.
  • the large scale multiuser, multi-touch user interface apparatus 100 is operable to process multi-touch inputs for a particular user (e.g. within a particular silo) and is further operable to do so independent of projections or multi-touch inputs from any other silo(s).
  • a large scale multi-user, multi-touch user interface apparatus 100 comprising a multi-touch display component 102 having dimensions sufficient for accommodating multiple users, and which large scale multi-user, multi-touch user interface apparatus 100 includes user interface software for interfacing a plurality of users to a computing device by providing a plurality of silos, one for each user, wherein a first silo provides a first user interface for a first user by displaying images and receiving touch input from the first user independent of display of images and any multi-touch input corresponding to other silos.
  • Figure 8 depicts a plurality of sensing calibration points within a multi-touch user interface apparatus, according to one embodiment.
  • the multi-touch user interface apparatus 800 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 7.
  • calibration points CP1 - CP8 are distributed substantially evenly around the periphery of the multi-touch display component 102.
  • a plurality of projectors PN and PN+1 are also shown, each with corresponding fields of projection FOPN and FOPN+1.
  • Within each field of projection are at least four calibration points.
  • FOPN has within it the four calibration points CP1 (top left of FOPN), CP5 (top right of FOPN), CP3 (bottom left of FOPN) and CP7 (bottom right of FOPN).
  • These calibration points may be calibration assemblies that include electronics capable of sensing the presence or absence of light (e.g. photocells or other photo-active elements capable of sensing the presence or absence of at least one wavelength of light).
  • these calibration points may include markings (e.g. fiducial markings) capable of being sensed by a camera.
  • the light sensing from these calibration points is used in operation within a method for calibrating a user interface apparatus having a display comprised of a plurality of image projectors.
  • the cameras may be calibrated by associating known locations of fiducial markings to memory locations within the cameras' image buffers.
  • one or more projectors may be used to project known image patterns onto the multi-touch display component 102, and the active/inactive (e.g. light/dark) projection at the calibration points is compared to expected values.
  • Some embodiments further measure differences between the actual projection values of the known image patterns measured at the location of fiducial marking points as compared to expected values and the measured differences are collated to produce the projection transformation (and inverse transformation), as discussed further infra.
  • At least one method for calibrating a user interface apparatus having a plurality of image projectors relies in part on characteristics of an image projector that uses a display memory. That is, the image projectors described herein, and for which methods of use and calibration are disclosed herein, are of a class of image projectors that display images as are represented in a digital memory array (e.g. RAM, DRAM, on or in a graphics card, or on or in an image frame buffer device).
  • each instance of projectors PN through PN+1 is assigned a corresponding display memory 810-N through 810-N+1. This is but one embodiment, and other embodiments are possible and envisioned, including shared display memories, ping-pong memories, or even a single memory combining the address space of all display memories 810-N through 810-N+1.
  • Figure 9 shows distorted and overlapping projected images upon the rear of a contiguous multi-touch user interface apparatus 900, and a data flow for producing a corrected, seamless image projection covering a multi-touch display component 102.
  • data flow for producing a corrected image projection 950 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 8, or it may be implemented in any environment.
  • Embodiments of the invention may define a physical display coordinate system 940 (e.g. a physical display coordinate space), on the surface of multi-touch display component 102.
  • physical display coordinate system 940 may be regarded as two-dimensional and rectangular, with its origin at the lower-left of the rear of multi-touch display component 102, and with positive x- and y-directions extending to the right and upward, respectively.
  • Other embodiments of the invention may use non- planar or non-rectangular physical display coordinate systems, may orient coordinate axes differently, or may choose a coordinate origin at other locations on or near (e.g., just outside the border) to multi-touch display component 102.
  • a first distorted projection field 910, a second distorted projection field 912 and a third distorted projection field 914 are representative of the image distortions introduced by any one or more elements or characteristics of a projection system.
  • image projector optics may not perform orthogonally, may include lens distortion, and/or may not perform as designed.
  • a mirror if used may introduce optical distortion.
  • a projected image (e.g. a square) might therefore appear geometrically distorted; accordingly, the image might be subjected to an image transformation to compensate for the geometric image distortions introduced by the projection process.
  • if the projector corresponding to the first distorted projection field 910 were somewhat off-center from the precise trisect of the width dimension of the multi-touch display component 102, then a square image would be projected as a quadrilateral with the geometric distortions apparent from the first distorted projection field 910.
  • an input image will be subjected to one or more techniques for pre-compensating an input image for subsequent projection, such that the image as projected appears to a user to be an orthogonal projection of the input image, and such that the overall multi-projector display appears seamless.
  • a representation of the input image undergoes a geometric and photometric (i.e., brightness and color) transformation that compensates for the projection process and for the overlap of multiple projectors.
  • These transformations may also be used to account for non-uniformities in the color of multi-touch display component 102, and for ambient light in the environment that falls on multi-touch display component 102.
  • a data flow for describing this concept is shown at 950.
  • Figure 10 is a graphical depiction of a geometric distortion correction technique, according to one embodiment.
  • the geometric image compensation technique 1000 for producing a corrected image projection may be implemented in the context of the architecture and functionality of Figure 1 through Figure 9, or the geometric image compensation technique 1000 may be implemented in other environments.
  • the technique compensates for projector distortion by mapping an object from an input coordinate system 1020 to an inverse distorted coordinate system 1004. Projection of the object from its inverse distorted representation in 1006 then results in the projection in a physical display coordinate system having proportions nearly identical, up to a scale factor, to the original input image.
  • an object from an input coordinate system 1020 is viewed as an image object in the input coordinate system 1020, which image is fully within the bounds of a quadrilateral.
  • the aforementioned quadrilateral is a rectangle.
  • the image object in input coordinate system 1020 undergoes a transformation, specifically an inverse projection distortion transformation 1022 that is the inverse of projection transformation 1026, resulting in the image object being represented in an inverse distorted coordinate system 1024 to compensate for the projection distortion (e.g. projection transformation 1026).
  • This mesh-based scheme may be more generally implemented as a two- dimensional lookup table, with interpolation being performed in each of the two dimensions to determine the output coordinate corresponding to an input coordinate for which no exact entry exists in the table.
  • a two-dimensional lookup table may represent arbitrary coordinate space transformations, including those that are not based on polygonal meshes.
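  • As a minimal sketch of the lookup-table approach just described (the table contents and cell size below are invented for illustration), a coarse grid can store the output coordinate at each grid node, with bilinear interpolation supplying output coordinates for input points that fall between nodes:

```python
# Illustrative sketch: a two-dimensional lookup table mapping input coordinates
# to output coordinates, with bilinear interpolation between grid nodes.

def lookup_with_interpolation(table, x, y, cell_size=1.0):
    """table[j][i] is the (x', y') output coordinate stored at grid node (i, j)."""
    i = int(x // cell_size)
    j = int(y // cell_size)
    fx = (x - i * cell_size) / cell_size   # fractional position within the cell
    fy = (y - j * cell_size) / cell_size

    def lerp(a, b, t):
        return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

    top = lerp(table[j][i], table[j][i + 1], fx)          # interpolate along x
    bottom = lerp(table[j + 1][i], table[j + 1][i + 1], fx)
    return lerp(top, bottom, fy)                          # then along y

# Example: a table representing a pure 10x scale; querying (0.5, 1.5) -> (5.0, 15.0).
table = [[(0, 0), (10, 0), (20, 0)],
         [(0, 10), (10, 10), (20, 10)],
         [(0, 20), (10, 20), (20, 20)]]
print(lookup_with_interpolation(table, 0.5, 1.5))
```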
  • the coordinate transformations may be represented as mathematical equations.
  • a homography is a reasonable choice for a mathematical representation of the projection transformation 1026.
  • a homography is typically expressed as a 3x3 matrix, applied to the homogeneous representation of two-dimensional coordinates.
  • Each homogeneous coordinate is a vector of three numbers, consisting of the original two-dimensional coordinate and a scale factor.
  • the inverse projection transformation 1022 may be obtained by inverting the matrix for the projection transformation homography, to obtain a new 3x3 matrix.
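  • The following sketch (matrix values are illustrative only, not taken from the patent) shows a 3x3 homography applied to homogeneous two-dimensional coordinates, and the inverse transformation obtained by inverting the matrix, as discussed above:

```python
# Sketch of the homography representation: a 3x3 matrix applied to homogeneous
# 2-D coordinates; the inverse transformation is the inverted matrix.
import numpy as np

def apply_homography(H, x, y):
    """Map input point (x, y) through homography H (3x3) and dehomogenize."""
    vec = H @ np.array([x, y, 1.0])          # homogeneous coordinate: (x, y, scale)
    return vec[0] / vec[2], vec[1] / vec[2]

# Example homography (illustrative values): a mild perspective warp.
H = np.array([[1.0, 0.1, 5.0],
              [0.0, 1.2, 3.0],
              [0.0, 0.001, 1.0]])
H_inv = np.linalg.inv(H)                     # inverse projection transformation

px, py = apply_homography(H, 100.0, 50.0)    # input -> distorted projection space
qx, qy = apply_homography(H_inv, px, py)     # and back again
print((px, py), (qx, qy))                    # (qx, qy) recovers (100.0, 50.0)
```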
  • the initial pattern in the horizontal sequence may have the top half of the projected image being black and the bottom white (or vice versa), while the initial pattern in the vertical sequence may have the left half of the projected image black and the right half white (or vice versa).
  • Successive patterns in each sequence use bands of increasingly finer width (halving the width each time), until the last pattern consists of alternating rows or columns of black and white pixels.
  • a technique called Gray coding may be used to shift the location of the bands in each image, so that band boundaries line up less frequently across successive images.
  • every projector pixel projects a unique sequence of black and white values, different from that of any other projector pixel.
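  • A minimal sketch of the Gray-coded pattern idea follows (the pattern width and example sensor position are hypothetical): each pattern in the horizontal sequence encodes one bit of every projector column, so the black/white sequence observed at a sensor identifies the column that illuminates it:

```python
# Sketch: Gray-coded column patterns for projector calibration. Each pattern
# encodes one bit of every projector column, so the black/white sequence seen
# by a sensor identifies the column illuminating it.

def gray_code(n):
    return n ^ (n >> 1)                    # binary-reflected Gray code

def gray_to_binary(g):
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def column_patterns(width, num_bits):
    """patterns[b][x] is 1 (white) or 0 (black) for bit b of column x."""
    return [[(gray_code(x) >> (num_bits - 1 - b)) & 1 for x in range(width)]
            for b in range(num_bits)]

def decode_column(bits_seen):
    """Recover the column index from the most-significant-bit-first sequence."""
    code = 0
    for bit in bits_seen:
        code = (code << 1) | bit
    return gray_to_binary(code)

patterns = column_patterns(width=16, num_bits=4)
observed = [patterns[b][9] for b in range(4)]   # bits a sensor at column 9 sees
print(decode_column(observed))                  # -> 9
```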
  • the projected pattern sequences are not limited to only those images shown in the depiction of the embodiment of Figure 12. Rather many other sequences are possible and envisioned, and may correlate to the shape and dimensions of the multi-touch display component 102
  • the projected pattern sequences do not extend all the way to images with bands having widths of one pixel, instead terminating with bands that are multiple pixels wide.
  • all pixels in a block of multiple projector pixels will project the same sequence of black and white values during the course of calibration, and the centroid coordinate of the block is used as the corresponding input image coordinate to the sensor receiving the same sequence of readings.
  • the general shape of the gradual increase or decrease may be linear in some embodiments, while in others it may take a nonlinear form such as co-sinusoidal.
  • Many methods for computing blend maps for arbitrarily overlapping projectors are known in the art of image processing and projection display, and any are suitable for use in various embodiments.
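  • One simple blend-map construction, shown here purely as a sketch for two horizontally adjacent projectors (the overlap coordinates are hypothetical), ramps the left projector's weight down and the right projector's weight up across the overlap so their summed contribution stays constant:

```python
# Illustrative sketch of a blend map for two horizontally adjacent projectors
# whose fields of projection overlap: inside the overlap region the weights
# cross-fade so no visible seam remains.

def blend_weights(x, overlap_start, overlap_end):
    """Return (left_weight, right_weight) for display x-coordinate x."""
    if x <= overlap_start:           # only the left projector covers this point
        return 1.0, 0.0
    if x >= overlap_end:             # only the right projector covers this point
        return 0.0, 1.0
    t = (x - overlap_start) / (overlap_end - overlap_start)
    return 1.0 - t, t                # linear ramp; a co-sinusoidal ramp also works

# Example: overlap region spans display coordinates 900..1100.
for x in (800, 950, 1000, 1050, 1200):
    print(x, blend_weights(x, 900, 1100))
```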
  • an image for projection might be read (see operation 1352), and any number of photometric image transformations might be performed (e.g. photometric transformation-A 1354, such as a color transformation or a brightness transformation, and/or image transformation-B 1356, such as an edge transformation), resulting in a blended-edges image representation 1358.
  • given the alternating pattern sequence of Figure 12, namely a first projected pattern (e.g. pattern 1224) that matches an expected set of values followed by a second projected pattern (e.g. pattern 1228) that does not match its corresponding expected set of values, it can be known that the projector is calibrated only to the resolution of the last pattern that does match the corresponding expected set of values.
  • One aspect of the camera calibration process is to determine a mapping between the coordinate space of a captured camera image and the coordinates of physical display space 940 as is projected on the rear of multi-touch display component 102.
  • a mapping should be determined between each camera and the same physical display coordinate space 940.
  • These mappings between camera image coordinates and physical display space coordinates may take any of the forms described above for mappings between projector input images and physical display space, including polygonal mesh-based mappings, lookup tables, and homography matrices.
  • a plurality of cameras C1 - CN are directed toward the rear side of a planar, rectangular multi-touch display component 102, and LEDs are placed at known locations (e.g. at CPU1, CPU5, CPL3, CPL7, etc.) within physical display coordinate space 940, preferably around the border of the rear of multi-touch display component 102, and preferably at the peripheral vertices (e.g. corners).
  • each camera is able to observe at least four LEDs, and each region where a camera's field of view overlaps another camera's field of view on multi-touch display component 102 contains at least two LEDs between which a line may be drawn to divide the camera FOV overlap region into two pieces.
  • a homography matrix may be computed to map between coordinates in physical display space 940 and the camera coordinate space, as described above for projector calibration.
  • This mapping is used to warp the camera image into a mosaic space representing a mosaicked, unified video stream 1550 of multi-touch display component 102.
  • the region of a given camera image warped into the mosaicked view is the region bounded by the edges of multi-touch display component 102, and any mosaic tile boundary lines (e.g. C1 mosaic tile boundary 1526, C2 mosaic tile boundary 1528) created by connecting LED pairs (e.g. CPU1 paired to CPL3, CPU5 paired to CPL7).
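  • As one possible implementation sketch (using OpenCV; the LED coordinates below are invented), the correspondence between observed LED image coordinates and their known physical display coordinates can be used to fit a homography and warp each camera frame into the mosaic:

```python
# Sketch: fit a camera-to-display homography from LED correspondences and warp
# a captured frame into the shared mosaic (physical display) coordinate space.
import cv2
import numpy as np

# Known LED positions in physical display coordinates (hypothetical values),
# and the pixel coordinates where one camera observed those LEDs.
display_pts = np.float32([[0, 0], [800, 0], [0, 600], [800, 600]])
camera_pts = np.float32([[52, 34], [610, 41], [48, 455], [615, 470]])

# Fit the camera-to-display homography from the point correspondences.
H, _ = cv2.findHomography(camera_pts, display_pts)

# Warp a captured camera frame into the mosaic (display) coordinate space.
frame = np.zeros((480, 640, 3), dtype=np.uint8)      # stand-in for a captured frame
mosaic_tile = cv2.warpPerspective(frame, H, (1600, 600))

# In a full system, each camera's warped tile would be clipped to its mosaic
# tile boundary and composited into the unified video stream.
print(mosaic_tile.shape)
```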
  • the multi-touch display component 102 accommodates multiple silos 1670-1 through 1670-N. Strictly as an option, the silos might be organized as rectangles covering some physical space. As shown, a first physical space 1670-1 is a rectangle, and an adjacent, second physical space 1670-2 is also a rectangle. The second physical space 1670-2 does not overlap the first physical space 1670-1.
  • Functions of software 1662 include user interface software for interfacing a plurality of users 101.
  • Functions of software 1662 include functions for the display of a plurality of, and/or multiple instances of, displayed objects (i.e. display of such objects using image projectors P1 - PN), and functions for sensing user touch input.
  • the functions of software 1662 for interfacing a plurality of users 101 may allocate and configure regions (e.g. zones, silos) over the surface of a multi-touch display component 102 by any of several possible and envisioned methods.
  • a new user silo is allocated based on touch activity within the non-dedicated attract mode physical space (e.g. when a user touches at least one object from within the attract mode physical space 1870-1).
  • the new user silo is created with an instance of the touched object being centered within it.
  • a new user physical space is inserted near the touch point, and calculations are performed to compute new user physical space sizes and center locations, for all user physical spaces, such that the constraints of the rule are obeyed.
  • Previously allocated user physical spaces are then adjusted to the new center locations and sizes, thus shifting center locations of one or more user physical spaces.
  • rules for new user physical space allocation attempt to preserve a non-dedicated user space (e.g. attract mode physical space 1870-1) at a constant location on multi-touch display component 102, such that touching of an object within this space causes a new user physical space to be allocated elsewhere.
  • the new user physical space may be allocated in the nearest available non-dedicated space to the user touch, provided that the new location is outside of a reserved portion of non-dedicated space that new users can continue to interact with to obtain newly allocated physical spaces.
  • the new user physical space may be allocated adjacent to the reserved non-dedicated portion if possible, or on the opposite side of one or more previously allocated user physical spaces, as close as possible to the reserved portion.
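  • A highly simplified sketch of one such allocation rule follows (equal-width silos; the function and values are hypothetical, not the patent's algorithm): when a new user touches the display, a silo is inserted near the touch point and all silo centers and sizes are recomputed so the silos tile the display without overlapping:

```python
# Hypothetical sketch: insert a new user silo near a touch and recompute all
# silo centers and widths so they tile the display width without overlapping.

def allocate_silos(display_width, existing_centers, touch_x):
    """Return the new list of (center, width) pairs after adding a silo."""
    centers = sorted(existing_centers + [touch_x])
    width = display_width / len(centers)          # equal share for every silo
    # Re-center the silos so they tile the display edge to edge, keeping the
    # left-to-right ordering implied by the previous centers and the new touch.
    return [((i + 0.5) * width, width) for i in range(len(centers))]

# Example: a 6-meter-wide display with two existing silos; a new user touches
# near the right edge, so three equal silos are laid out.
print(allocate_silos(6.0, [1.5, 3.0], 5.5))
```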
  • Other rules for physical space allocation and management are possible and envisioned in various embodiments of the invention.
  • users may initiate allocation of a new user physical space by actions other than touching of an object displayed in a non-dedicated portion of multi-touch display component 102.
  • the user may use a trace gesture for defining a new user physical space. For example a user might trace out the size and shape of a new user physical space by dragging his finger within a non-dedicated portion of the surface of multi-touch display component 102.
  • the path being traced is displayed on multi-touch display component 102 as a trail behind the moving finger, so the user may more easily assess the size and shape of the space to be allocated.
  • a new user physical space is allocated when a user touches a point in a non-dedicated portion of multi- touch display component 102, and then drags the finger in a roughly circular path above a prescribed size before releasing the touch.
  • a new user physical space is allocated when two touches are made at nearly the same place and time (within some small location and time differences) within a non-dedicated portion of multi-touch display component 102, and the two touch points are dragged along paths approximately defining opposing halves of a rectangle before being released at nearly the same new location and time.
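  • As an illustrative sketch only (the thresholds and function names are hypothetical), the roughly circular trace gesture described above could be accepted when the dragged path closes on itself and encloses at least a prescribed area:

```python
# Sketch: accept a traced path as a space-allocation request if it closes on
# itself (end near start) and encloses more than a minimum area (shoelace formula).

def is_allocation_trace(path, min_area=0.25, close_tolerance=0.1):
    """path is a list of (x, y) touch samples from touch-down to release."""
    if len(path) < 8:
        return False
    (x0, y0), (x1, y1) = path[0], path[-1]
    if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > close_tolerance:
        return False                              # path did not close on itself
    area = 0.0
    for (xa, ya), (xb, yb) in zip(path, path[1:] + [path[0]]):
        area += xa * yb - xb * ya                 # shoelace formula
    return abs(area) / 2.0 >= min_area            # big enough to be intentional

# Example: a roughly square loop of side 1 (area ~1) qualifies.
loop = [(0, 0), (1, 0), (1, 0.5), (1, 1), (0.5, 1), (0, 1), (0, 0.5), (0, 0.05)]
print(is_allocation_trace(loop))
```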
  • the size of the newly allocated space may or may not be related to the size of the area covered by the prescribed gesture.
  • Other gestures for initiating user physical space allocation are possible and envisioned.
  • other methods for user-initiated allocation of user physical spaces aside from those discussed above are possible and envisioned.
  • specific touch gestures, such as touching with two or more fingers, may be interpreted as requests for resizing of an allocated user physical space when the gesture occurs near a specific portion of the allocated user physical space, such as along its top.
  • various operations for user control including use of other types of gestures and border elements, are possible and envisioned.
  • interfacing a plurality of users includes not only interfacing a particular user with the computer 1660, but also interfacing one user with another user in an interactive manner.
  • users may interact with the computer 1660 and with one another through various techniques, some of which include visual and audio cues.
  • visual cues might include visual barriers and/or visual icon objects, and/or a user operation (e.g. a touch point input, a touch movement input) or an operation under computer 1660 control might include audio signals (e.g. sounds, music, voices, sound effects, etc), which audio signals are reproduced in coordination with the corresponding operation.
  • Figure 19 is a depiction of an apparatus in a configuration showing border clue icon objects for dynamically managing interactivity among multiple users, according to one embodiment.
  • the apparatus in a configuration showing border clue icon objects for dynamically managing interactivity among multiple users 1900 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 18.
  • any one or more border clue icon objects may be displayed (e.g. projected) into any physical space within the bounds of a multi-touch display component 102.
  • a user is provided visual border clues as to the bounds of the user's silo by appearance of zone demarcation marks on or near the border of the user's silo.
  • Demarcation marks may be used as visual border cues, for example, for demarking a left physical boundary of user silo 1960, for demarking a boundary of attract silo 1970, for demarking a user silo at a boundary with another user silo 1980, and for demarking a reserved silo 1990 at a boundary with a user silo.
  • Figure 20 is a depiction of icon objects for managing interactivity among multiple users, according to one embodiment.
  • the icon objects for managing interactivity among multiple users 2000 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 19.
  • any one or more icon objects may be displayed (e.g. projected) into any physical space within the bounds of a multi-touch display component 102.
  • the icon object is defined primarily for providing visual cues (e.g. right border icon object 2050, left border icon object 2060, double border icon object 2070, etc).
  • Other icon objects are defined primarily for providing user input corresponding to some action (e.g. cancel icon object 2030, movie clip icon object 2040, and refresh icon object 2080).
  • Other objects include a spinning search wheel 2090.
  • Still other icon objects are active to user touch and expand under user touch into one or more additional forms (e.g. info icon object 2010, and search icon object 2020).
  • Figure 21 is a depiction of an apparatus in a configuration with overlapping user zones, according to one embodiment.
  • the overlapping user zones for multiple users 2100 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 20.
  • a first user zone User1 substantially overlaps another, different user zone UserN, both located within a multi-user region 2170-N. Adjacent to the multi-user region 2170-N is a stats region 2170-1.
  • a plurality of users User1 - UserN share one or more physical spaces, namely the multi-user region 2170-N and the stats region 2170-1.
  • display objects may be assigned to a particular user, and might be controlled by that particular user, even to the extent that such an assigned object might occupy a physical space or physical spaces allocated to other users.
  • various user interface operations might be performed by a user on a display object using a touch gesture, and the object will appear to respond to the gesture as corresponds to the characteristics and rules for the object and operation. For example, a user might wish to zoom (enlarge) a particular object. And that object might have a sufficient resolution so as to be enlarged. Thus a user might perform a spread touch gesture on the object. Within the limits of the object and rules, the user operation (e.g. zoom / enlarge / expand, shrink / minimize / compress, spin, etc) is performed by computer 1660 and the results of the user-requested operation are displayed.
  • Some example objects and associated touch gestures and rules are listed in Table 1 below.

Table 1:

| Object | Gesture | Touch action | Result |
| --- | --- | --- | --- |
| Photo object | Reverse pinch | Touch two fingers near each other, then move them apart | Enlarge to maximum resolution or clip at limits of personal space within the user's silo |
| Photo object | Drag | Touch, move, release | Move displayed location of photo object |
| Photo object | Flick | Touch, move, and release while still in motion | Cause object to accelerate and move on display with modeled physical momentum, as if it were an object with mass within a ... |
  • finger touch-points might be recognized as a touch gesture, and the specific location of the finger touch-points forming the touch gesture might be used to impart physical momentum properties to a touched object.
  • a flick touch gesture (e.g. a toss gesture) might impart relatively more momentum to a touched object, while a touch gesture formed by a slower, smoother finger motion might impart relatively less momentum to the object.
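  • A minimal sketch of imparting momentum from touch-point history follows (the friction model and sample data are invented for illustration): the release velocity is estimated from the last touch samples, and the object then coasts with that velocity, decaying each frame:

```python
# Sketch: estimate release velocity from the last touch samples and let the
# object coast with a simple friction (drag) model so faster flicks travel farther.

def release_velocity(samples):
    """samples: list of (t, x, y) touch points; returns (vx, vy) at release."""
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dt = max(t1 - t0, 1e-6)
    return (x1 - x0) / dt, (y1 - y0) / dt

def coast(position, velocity, friction=0.95, dt=1 / 60.0, steps=120):
    """Advance the object for a number of frames after release."""
    x, y = position
    vx, vy = velocity
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        vx *= friction                      # exponential decay models drag
        vy *= friction
    return x, y

samples = [(0.00, 100, 200), (0.02, 130, 200), (0.04, 170, 200)]  # fast rightward drag
v = release_velocity(samples)
print(v, coast((170, 200), v))
```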
  • a silo includes a personal user zone (e.g. the center of a silo), the objects within which are accessible only to the user associated with the silo
  • a silo includes a shared user zone (e.g. the periphery of a silo), the objects within which may be accessible to users associated with other silos
  • An adjacent space may be a physically adjacent space, or may be a logically adjacent space.
  • a logically adjacent space may be formed when two users have searched on items resulting in the same selection(s).
  • Objects may be passed from one physical space to an adjacent physical space by a drag gesture.
  • Objects may be passed from one logical space to an adjacent logical space by a drag gesture.
  • a passed object may be accepted by the receiving user via a tap gesture, for example touching the object as it passes through the receiving user's allocated space.
  • Objects may be tossed from one physical space to an adjacent physical space via a flick gesture.
  • Objects may be tossed from one logical space to an adjacent logical space via a flick gesture.
  • a tossed object may be accepted into an adjacent logical or physical space by the receiving user via a tap gesture, for example touching the object as it passes through the receiving user's allocated space.
  • Objects can be decreased in size via a pinch gesture, within the limits associated with the object being resized.
  • any selection of rules might be active at any moment in time.
  • a set of rules might be defined, and coordinated by computer 1660, and such set of rules may use some, all or none of the above-listed rules.
  • Many more rules and definitions are possible and envisioned.
  • At least the computer 1660 serves for (1) coordinating movement of at least one object between physical spaces based on receiving at least one touch gesture, (2) coordinating movement of at least one object between physical spaces based on a user passing the object from the first physical space into the region of the second physical space, (3) coordinating movement of at least one object between physical spaces by tossing the object from a first physical space into the region of a second physical space, wherein the periphery of the first physical space and the periphery of the second physical space do not abut, and (4) coordinating movement of at least one object between physical spaces while emitting a sound from the user interface apparatus.
  • This process of attracting users and allocating physical spaces for their exclusive use may continue until all of the physical space of multi-touch display component 102 is occupied by user-dedicated spaces.
  • in some embodiments, allocation of space to a user occurs only when a touch is sensed on a displayed object in the non-dedicated physical space, while in other embodiments the allocation occurs for a touch at any point in the non-dedicated physical space.
  • a background removal step may be performed. This helps to eliminate problematic image regions that consistently cause strong responses from the center-surround touch filter, but do not correspond to touches. These image regions typically correspond to bright spots or edges of light on the touch screen, sometimes due to other light sources in the environment near the touch screen.
  • a wide variety of computer vision background removal methods may be employed.
  • the background removal step consists of subtracting a background image from the camera image. Pixels for which the subtraction result is negative are set to zero.
  • a background image may be constructed by capturing images from each camera when the system is calibrated (e.g. at system initialization time when it is known that no users are touching the screen).
  • This adaptation process allows the background image to gradually adjust to reflect changes in the environment around the large scale multi-user, multi-touch user interface apparatus 100.
  • a slow learning rate allows transient environmental changes, such as shadows cast by users of the system, to be ignored, while still accounting for more persistent environmental changes such as a light source being activated and left on.
  • a touch within the mosaicked field of view of the cameras can be identified using any of the foregoing digital image processing techniques, and then mapped to a screen coordinate in physical display space 940, and from that physical display coordinate system, computer 1660 is able to map a screen coordinate to a particular silo within a multi-touch display component 102.
  • the touch point can be correctly ascribed to the zone of the user who created the touch point.
  • Figure 24 is a depiction of touch scattering shown in a mosaicked image formed by multiple cameras, according to one embodiment.
  • a touch is mapped to an object within a user silo, and aspects of the touch (e.g. drag, as in the present example), including changes of the touch over time, are used to apply a physical parameter to the mapped object.
  • the scatter pattern might comprise portions sensed by more than one camera; however, when multiple cameras are mapped (and clipped) into a mosaicked view corresponding to physical display coordinate system 940 using any of the techniques described above, the scatter pattern appears as a single connected scatter pattern, and any one or more techniques known in the art might be used for finding the centroid.
  • the corresponding touch may then be mapped (at operation 2530) to a particular user silo, whose coordinates on the display are also known by the software that draws and manages the silos.
  • the centroid might be precisely on the border of two silos, or for other reasons might be sufficiently ambiguous as to warrant the use of a second technique for mapping to a single silo. In such a case, one technique might consider whether the centroid is directly over a display object and, if so, bias assignment toward the silo in which that object's centroid lies (a centroid-to-silo sketch appears after this list).
  • a tossable object O5 is oriented near the lower-left corner of multi-touch display component 102.
  • the object O5 moves to the right.
  • a first user using a first user zone selects a displayed object via a touch gesture (e.g., by touching the displayed object on multi-touch display component 102 and not releasing the touch), and then performs a 'flick' gesture, as described above, that causes the displayed object to move toward a second user zone (a toss-and-accept sketch appears after this list).
  • a second user may touch the displayed object to stop its movement in the second user zone.
  • the system aids in separating the aural experiences of the two users by performing audio spatialization using the plurality of audio transducers corresponding to the first physical space (i.e. the first user's space) and corresponding to the second physical space (i.e. the second user's space).
  • the aural experiences of the two users are separated by virtue of reproduction of two sound streams, one that is substantially centered by and between speakers near the first user (ST1 and ST2 in the case of the first user in Figure 27) and a second sound stream that is substantially centered by and between speakers near the second user (ST3 and ST4 in the case of the second user in Figure 27).
  • a look-up table might return a set of amplitude values, one for each transducer in the system.
  • the lookup table is designed to cause the total power of sound output by the transducers to be constant for any input coordinate of a point on multi-touch display component 102 (a constant-power sketch appears after this list). This has the benefit, for example, of causing the sound associated with a moving object not to be perceived as changing in amplitude as it moves around the multi-touch display component 102, from the perspective of a user whose movements follow the object such that the user's ears remain at an approximately constant distance from the moving displayed object.
  • the tables above depict spatialization values for locations that vary in the X coordinate dimension (e.g. horizontal dimension, length dimension, width dimension).
  • additional lookup tables are provided for apportioning sound amplitude in the Y coordinate dimension (e.g. vertical dimension, height dimension), and/or the Z coordinate dimension (e.g. depth dimension).
  • the table lookup technique might include multiple lookup operations for handling stereo sound sources or quadraphonic sound sources. In this manner, sound sources may be reproduced with effects for a binaural listener (i.e. a human with a left ear and a right ear).
  • the computer 1660 might operate for performing audio spatialization corresponding to the first physical space and the second physical space by centering the audio signals within the first physical space and centering the audio signals within the second physical space.
  • the first physical space is a rectangle and the second physical space is a rectangle, and the second physical space does not overlap the first physical space; however, other spatial partitioning is possible and envisioned.
  • Figure 28 is a depiction of an alternative apparatus for managing multiple users using the same user interface apparatus having a plurality of audio transducers, according to one embodiment, in which sounds are spatialized across two dimensions of a multi-touch display component.
  • the alternative apparatus for managing multiple users using the same user interface apparatus having a plurality of audio transducers 2800 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 27, or the alternative apparatus for managing multiple users using the same user interface apparatus having a plurality of audio transducers may be implemented in other contexts.
  • a multi-touch display component 102 has disposed near it a plurality of sound transducers.
  • the sound transducers STU1, STU2, STU3, STU4, …, STUN-1, STUN might be disposed substantially equidistantly across the width W 132 of a multi-touch display component 102, or they might be disposed somewhat non-linearly, depending on the acoustics of the environment.
  • audio spatialization techniques may be applied across the dimension of height. That is, during the traversal of ObjectA from its initial position at 2710 through position 2720 and to its final position in Silo 2 at position 2730, a sound might be apportioned by varying the amplitude of the sound as shown in the relative volume settings.
  • a technique for managing multiple users using the same multi-touch display component 102 having a plurality of audio transducers might include: providing a plurality of audio transducers arranged horizontally across a user interface area, displaying at least one object on said user interface area located at a first position, apportioning volume of sound to at least one audio transducer based on proximity of said audio transducer to said first position, displaying said object on said user interface area across a plurality of continuous locations on said user interface area starting from said first position and continuing to a second position, and apportioning volume of sound to said audio transducers based on a function of proximal separation (e.g. a function of distance) of said audio transducers to said object as it moves from said first position to said second position (a traversal sketch appears after this list).
  • Figure 29 is a sequence chart showing a protocol for calibration for mapping at least one projector into a physical display coordinate system, according to one embodiment. As shown, operations are performed on one or more computers, and may include communication with peripherals, which themselves may contain one or more computers.
  • Figure 30 is a sequence chart showing a protocol for managing interactivity among multiple users, according to one embodiment. As shown, operations are performed on one or more computers, and may include communication with peripherals, which themselves may contain one or more computers.
  • Figure 31 is a sequence chart showing a protocol for managing multiple users using the same user interface apparatus having a plurality of audio transducers, according to one embodiment. As shown, operations are performed on one or more computers, and may include communication with peripherals, which themselves may contain one or more computers.
  • Figure 32 is a flowchart for a method for calibrating a user interface apparatus, according to one embodiment.
  • the method 3200 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 31, or method 3200 may be implemented in any environment.
  • the method 3200 comprises operations for provisioning a multi-touch display component for use within the multi-touch user interface apparatus (see operation 3210), provisioning a plurality of projectors with at least two overlapping fields of projection on the multi-touch display component, with each projector corresponding to a projector frame buffer (see operation 3220), and provisioning a plurality of cameras for use in touch sensing, with at least two overlapping fields of view and each camera corresponding to a camera frame buffer (see operation 3230).
  • method 3200 comprises operations for projecting, using at least one projector frame buffer, a first known light pattern (see operation 3240), sensing, at the multi-touch display component, the first known light pattern to compute at least two projector transfer functions for combining at least two of the plurality of projector frame buffers into a unified coordinate system (see operation 3250), and capturing images of a second known light pattern wherein the captured images are used to compute a camera transfer function for combining at least two of the plurality of camera frame buffers into the unified coordinate system (a transfer-function sketch appears after this list).
  • Figure 33 is a flowchart for a method for calibrating a user interface apparatus, according to one embodiment.
  • the method 3300 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 32, or method 3300 may be implemented in any environment.
  • the method 3300 comprises provisioning a multi-touch display component for use within the multi-touch user interface apparatus (see operation 3310), provisioning a plurality of projectors with at least two overlapping fields of projection on the multi-touch display component, each projector corresponding to a projector frame buffer (see operation 3320), provisioning a plurality of cameras for use in touch sensing, with at least two overlapping fields of view and each camera corresponding to a camera frame buffer (see operation 3330), and projecting, using at least one projector frame buffer, a first known light pattern (see operation 3340).
  • the method continues by sensing, at the multi-touch display component, the first known light pattern to compute at least two projector transfer functions for combining at least two of the plurality of corresponding projector frame buffers into a unified coordinate system (see operation 3350), capturing images of a second known light pattern wherein the captured images are used to compute a camera transfer function for combining at least two of the plurality of camera frame buffers into a unified coordinate system (see operation 3360), and associating a multi-touch gesture from at least one camera frame buffer with an object from at least one projector frame buffer (see operation 3370).
  • Figure 34 is a flowchart for a method for calibrating a multi-touch user interface apparatus, according to one embodiment. As an option, the method 3400 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 33, or method 3400 may be implemented in any environment.
  • the method 3400 comprises operations for provisioning a multi-touch display component having a plurality of light sensors at known locations relative to the multi-touch display component (see operation 3410), provisioning a plurality of projectors with at least two overlapping fields of projection on the multi-touch display component, with each projector corresponding to a projector frame buffer (see operation 3420), projecting, using at least one projector frame buffer, a first known light pattern (see operation 3430), and sensing, at the light sensors, the first known light pattern to compute at least two projector transfer functions for combining at least two of the plurality of corresponding projector frame buffers into a unified coordinate system (see operation 3440).
  • Figure 35 is a flowchart of a method for calibrating a multi-touch user interface apparatus, according to one embodiment. As an option, the method 3500 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 34, or method 3500 may be implemented in any environment.
  • the method 3500 comprises operations for provisioning a multi-touch display component having a plurality of light sensors at known locations relative to the multi-touch display component (see operation 3510), provisioning a plurality of cameras for use in touch sensing, with at least two overlapping fields of view and each camera having a corresponding camera frame buffer (see operation 3520), actuating, using at least one of a plurality of light sources, a first known light pattern (see operation 3530), and capturing images of the first known light pattern wherein the captured images are used to compute at least two camera transfer functions for combining at least two of the plurality of camera frame buffers into a unified coordinate system (see operation 3540).
  • Figure 36 is a flowchart for a method for managing multiple users using the same user interface component, according to one embodiment. As an option, the method 3600 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 35, or method 3600 may be implemented in any environment.
  • method 3600 comprises an operation for allocating a first physical space within the physical boundary of the user interface component for use by a first user (see operation 3610); an operation for allocating a second physical space within the physical boundary of the user interface component for use by a second user (see operation 3620); an operation for allocating a third physical space for attracting users (see operation 3630); and an operation for coordinating movement of at least one object between physical spaces allocated within the boundary of the same user interface component (see operation 3640).
  • Figure 37 is a flowchart for a method for managing multiple users using the same user interface component, according to one embodiment. As an option, the method 3700 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 36, or method 3700 may be implemented in any environment.
  • method 3700 comprises operations for allocating a first physical space within the physical boundary of the user interface component (see operation 3710), displaying a plurality of objects on said user interface component inside said first physical space (see operation 3720), receiving touch input from a user (see operation 3730), and allocating a second physical space within the physical boundary of the user interface component for use by said user (see operation 3740).
  • Figure 38 is a flowchart for a method for managing multiple users using the same user interface component, according to one embodiment. As an option, the method 3800 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 37, or method 3800 may be implemented in any environment.
  • method 3800 comprises an operation for allocating a first physical space within the physical boundary of the user interface component for use by a first user (see operation 3810); an operation for displaying a plurality of objects on said user interface component outside said first physical space (see operation 3820); an operation for receiving input from a user to select one of said objects (see operation 3830); and an operation for allocating a second physical space centered around said object selected, and within the physical boundary of the user interface component for use by a second user (see operation 3840).
  • Figure 39 is a flowchart for a method for managing multiple users using the same user interface component having a plurality of audio transducers, according to one embodiment.
  • the method 3900 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 38, or method 3900 may be implemented in any environment.
  • method 3900 comprises an operation for providing a plurality of audio transducers arranged horizontally across a user interface area (see operation 3910), an operation for allocating a first physical space within the physical boundary of the user interface component for use by a first user (see operation 3920); an operation for allocating a second physical space within the physical boundary of the user interface component for use by a second user (see operation 3930); and an operation for performing audio spatialization using the plurality of audio transducers corresponding to the first physical space and the second physical space allocated within the boundary of the same user interface component (see operation 3940).
  • Figure 40 is a flowchart for a method for managing multiple users using the same user interface component having a plurality of audio transducers, according to one embodiment.
  • the method 4000 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 39, or method 4000 may be implemented in any environment.
  • method 4000 comprises an operation for providing a plurality of audio transducers arranged horizontally across a user interface area (see operation 4010); an operation for displaying a plurality of objects on said user interface area (see operation 4020); an operation for associating at least one sound with at least one of said plurality of objects (see operation 4030); and an operation for apportioning said at least one sound to at least two audio transducers based on a function of distance between a location of an object on said user's interface relative to locations of said audio transducers (see operation 4040).
  • Figure 41 is a flowchart for a method for managing multiple users using the same user interface component having a plurality of audio transducers, according to one embodiment.
  • the method 4100 may be implemented in the context of the architecture and functionality of Figure 1 through Figure 40, or method 4100 may be implemented in any environment.
  • method 4100 comprises an operation for providing a plurality of audio transducers arranged horizontally across a user interface area (see operation 4110); an operation for displaying at least one object on said user interface area located at a first position (see operation 4120); an operation for apportioning volume of sound to at least one audio transducer based on proximity of said audio transducer to said first position (see operation 4130); an operation for displaying said object on said user interface area across a plurality of continuous locations on said user interface area starting from said first position and continuing to a second position (see operation 4140); and an operation for apportioning volume of sound to said audio transducers based on a function of distance of said audio transducers to said object as it moves from said first position to said second position (see operation 4150).
  • Figure 42 is a diagrammatic representation of a network 4200, including network infrastructure 4206, and one or more computing nodes, any of which nodes may comprise a machine within which a set of instructions for causing the machine to perform any one of the techniques discussed above may be executed.
  • the embodiment of a computing node shown is purely exemplary, and might be implemented in the context of one or more of the figures herein.
  • Any node of the network 4200 may comprise a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof capable of performing the functions described herein.
  • a general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • a processor may also be implemented as a combination of computing devices (e.g. a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration, etc).
  • a node may comprise a machine in the form of a virtual machine (VM), a virtual server, a virtual client, a virtual desktop, a virtual volume, a network router, a network switch, a network bridge, a personal digital assistant (PDA), a cellular telephone, a web appliance, or any machine capable of executing a sequence of instructions that specify actions to be taken by that machine.
  • Any node of the network may communicate cooperatively with another node on the network.
  • any node of the network may communicate cooperatively with every other node of the network.
  • any node or group of nodes on the network may comprise one or more computer systems (e.g. a client computer system, a server computer system) and/or may comprise one or more embedded computer systems, a massively parallel computer system, and/or a cloud computer system.
  • the computer node includes a processor 4208 (e.g. a processor core, a microprocessor, a computing device, etc), a main memory 4210 and a static memory 4212, which communicate with each other via a bus 4214.
  • the machine 4250 may further include a display unit 4216 that may comprise a touch-screen, or a liquid crystal display (LCD), or a light emitting diode (LED) display, or a cathode ray tube (CRT).
  • the computer system 4250 also includes a human input/output (I/O) device 4218 (e.g. a keyboard, an alphanumeric keypad, etc), a pointing device 4220 (e.g. a mouse, a touch screen, etc), a drive unit 4222 (e.g. a disk drive unit, a CD/DVD drive, a tangible computer readable removable media drive, an SSD storage device, etc), a signal generation device 4228 (e.g. a speaker, an audio output, etc), and a network interface device 4230 (e.g. an Ethernet interface, a wired network interface, a wireless network interface, a propagated signal interface, etc).
  • the drive unit 4222 includes a machine-readable medium 4224 on which is stored a set of instructions (i.e. software, firmware, middleware, etc) 4226 embodying any one, or all, of the methodologies described above.
  • the set of instructions 4226 is also shown to reside, completely or at least partially, within the main memory 4210 and/or within the processor 4208.
  • the set of instructions 4226 may further be transmitted or received via the network interface device 4230 over the network bus 4214.
  • a machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g. a computer).
  • a machine-readable medium includes read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals (e.g. carrier waves, infrared signals, digital signals, etc); or any other type of media suitable for storing or transmitting information.
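As referenced in the discussion of flick and toss gestures above, the momentum imparted to a touched object can be derived from how quickly the touch points move. The following is a minimal sketch of that idea in Python; the (timestamp, x, y) sample format and the unit mass are assumptions for illustration, not the gesture recognizer of the described system.

```python
def flick_momentum(touch_samples, object_mass=1.0):
    """Estimate the momentum a flick gesture imparts to a touched object.

    `touch_samples` is assumed to be a chronologically ordered list of
    (timestamp, x, y) tuples for a single touch; a rapid finger motion
    yields proportionally more momentum than a slow, smooth one.
    """
    t0, x0, y0 = touch_samples[0]
    t1, x1, y1 = touch_samples[-1]
    dt = max(t1 - t0, 1e-6)                     # guard against zero duration
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt     # release velocity of the touch
    return object_mass * vx, object_mass * vy   # momentum vector handed to the physics model
```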
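The personal and shared zones of a silo mentioned above lend themselves to a simple accessibility rule. The sketch below assumes circular zones around the silo center; the geometry and field names are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Silo:
    center: tuple            # (x, y) center of the user's silo on the display
    personal_radius: float   # objects inside this radius are private to the silo's user
    outer_radius: float      # objects between the radii sit in the shared zone

def is_accessible(obj_xy, silo, requester_owns_silo):
    """Apply the zone rule: personal-zone objects are owner-only,
    shared-zone objects may be reached by users of other silos."""
    dx, dy = obj_xy[0] - silo.center[0], obj_xy[1] - silo.center[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= silo.personal_radius:
        return requester_owns_silo
    return dist <= silo.outer_radius
```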
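The allocation of a dedicated physical space triggered by a touch in the non-dedicated area, described above, can be pictured as carving a strip of the display out for the new user. This sketch assumes silos are non-overlapping horizontal strips keyed by an x coordinate; the data layout and the reject-on-overlap behaviour are assumptions.

```python
def allocate_user_space(touch_x, display_width, silo_width, allocated):
    """Allocate a new user silo centered on a touch sensed in non-dedicated space.

    `allocated` is a list of existing (x0, x1) silo extents; returns the new
    extent, or None if the requested strip would overlap an existing silo.
    """
    x0 = min(max(touch_x - silo_width / 2.0, 0.0), display_width - silo_width)
    candidate = (x0, x0 + silo_width)
    for a0, a1 in allocated:
        if candidate[0] < a1 and a0 < candidate[1]:   # overlaps an existing silo
            return None
    allocated.append(candidate)
    return candidate
```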
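The background removal step described above (subtract a background image, zero out negative results, and let the background adapt slowly) can be summarized in a few lines of numpy. The learning-rate value and array conventions are assumptions.

```python
import numpy as np

def remove_background(camera_image, background, learning_rate=0.01):
    """Suppress static bright spots before touch filtering, and adapt the background.

    Pixels where the subtraction result would be negative are set to zero.
    The slow blend lets persistent changes (e.g. a light switched on and left
    on) be absorbed while transient shadows cast by users are largely ignored.
    """
    frame = camera_image.astype(np.float32)
    foreground = np.clip(frame - background, 0.0, None)
    background = (1.0 - learning_rate) * background + learning_rate * frame
    return foreground, background
```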
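Mapping a touch scatter pattern to a user silo, as discussed above, amounts to finding the centroid of the connected pattern in the mosaicked coordinate system and testing it against the silo extents known to the silo-drawing software. Rectangular silos and a binary scatter mask are assumed in this sketch.

```python
import numpy as np

def map_touch_to_silo(scatter_mask, silos):
    """Return the index of the silo containing the centroid of a touch, or None.

    `scatter_mask` is a binary array in physical display coordinates;
    `silos` is a list of (x0, y0, x1, y1) rectangles.
    """
    ys, xs = np.nonzero(scatter_mask)
    if xs.size == 0:
        return None
    cx, cy = xs.mean(), ys.mean()          # centroid of the scatter pattern
    for index, (x0, y0, x1, y1) in enumerate(silos):
        if x0 <= cx < x1 and y0 <= cy < y1:
            return index
    return None                            # ambiguous; fall back to a second technique
```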
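Tossing an object toward another user's zone and accepting it with a tap, as described above, can be modelled with a trivial per-frame physics step and a containment test. The friction constant, tap radius, and rectangle convention below are illustrative assumptions.

```python
def step_tossed_object(pos, vel, dt=1.0 / 60.0, friction=0.98):
    """Advance a tossed display object one frame under simple velocity decay."""
    new_pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    new_vel = (vel[0] * friction, vel[1] * friction)
    return new_pos, new_vel

def tap_accepts(tap_xy, obj_xy, receiving_zone, radius=0.05):
    """Accept (stop) the object if the receiving user taps it inside their zone."""
    x0, y0, x1, y1 = receiving_zone
    inside_zone = x0 <= obj_xy[0] <= x1 and y0 <= obj_xy[1] <= y1
    near_tap = ((tap_xy[0] - obj_xy[0]) ** 2 + (tap_xy[1] - obj_xy[1]) ** 2) ** 0.5 <= radius
    return inside_zone and near_tap
```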
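The constant-total-power behaviour of the spatialization lookup table described above corresponds to constant-power panning. The inverse-distance weighting below is an assumed stand-in for the table contents; what matters for the illustration is that the squared gains sum to one.

```python
import numpy as np

def transducer_gains(x, transducer_xs):
    """Amplitude gain for each transducer for a sound located at display position x.

    Nearby transducers receive more of the signal; the normalization keeps the
    sum of squared gains equal to one, so total acoustic power stays constant
    wherever the sound-emitting object sits on the display.
    """
    d = np.abs(np.asarray(transducer_xs, dtype=float) - x) + 1e-6
    weights = 1.0 / d
    return weights / np.linalg.norm(weights)
```

Because the squared gains always sum to one, a user whose ears track a moving object hears roughly constant loudness, matching the behaviour described above.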
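Re-apportioning volume as a displayed object traverses from one position to another, including across the height dimension, follows the same pattern with two-dimensional distances. The transducer positions and object path below are made-up values for illustration.

```python
import numpy as np

def gains_2d(pos, transducer_positions):
    """Constant-power gains for a 2-D object position (assumed inverse-distance weighting)."""
    diffs = np.asarray(transducer_positions, dtype=float) - np.asarray(pos, dtype=float)
    d = np.linalg.norm(diffs, axis=1) + 1e-6
    weights = 1.0 / d
    return weights / np.linalg.norm(weights)

# Hypothetical traversal of a displayed object between two silos: the volume is
# re-apportioned to the six transducers at each intermediate display position.
transducers = [(0.5, 0.0), (2.5, 0.0), (4.5, 0.0), (0.5, 2.0), (2.5, 2.0), (4.5, 2.0)]
xs = np.linspace(0.8, 4.2, num=5)
ys = np.linspace(1.5, 0.4, num=5)
for pos in zip(xs, ys):
    print(np.round(pos, 2), np.round(gains_2d(pos, transducers), 2))
```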
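Computing a projector (or camera) transfer function from a known light pattern, as in the calibration operations above, commonly reduces to estimating a homography between frame-buffer coordinates and the unified display coordinate system. The sketch below uses OpenCV's findHomography and assumes the sensed correspondences have already been extracted; it is one plausible realization, not the specific calibration of the described apparatus.

```python
import numpy as np
import cv2  # requires opencv-python

def estimate_transfer_function(pattern_points_fb, sensed_points_display):
    """Estimate a projective transfer function for one projector or camera.

    `pattern_points_fb` are known light-pattern positions in the device's frame
    buffer; `sensed_points_display` are where those features were observed in
    the unified coordinate system. Both are Nx2 arrays with N >= 4.
    """
    src = np.asarray(pattern_points_fb, dtype=np.float32)
    dst = np.asarray(sensed_points_display, dtype=np.float32)
    H, _mask = cv2.findHomography(src, dst, cv2.RANSAC)
    return H  # 3x3 matrix mapping frame-buffer coordinates into the unified system

def to_unified(H, xy):
    """Apply the transfer function to a single frame-buffer coordinate."""
    v = H @ np.array([xy[0], xy[1], 1.0])
    return v[0] / v[2], v[1] / v[2]
```

The same estimation would apply per camera frame buffer when combining touch-sensing views into the mosaicked coordinate system.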

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Projection Apparatus (AREA)
  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)
  • Transforming Electric Information Into Light Information (AREA)

Abstract

A large-scale multi-user, multi-touch system with a specialized zone-based user interface, including methods for space management and spatial apportioning of audio cues. The system includes a multi-touch display component manufactured at dimensions sufficient for at least a plurality of users, for displaying projected images, and for receiving multi-touch inputs. The apparatus includes a plurality of image projectors and a plurality of cameras for sensing multi-touch inputs, as well as interfacing software for managing user space. The interfacing software implements techniques for managing multiple users using the same user interface component by allocating physical spaces within the multi-touch display component and coordinating the movement of displayed objects between the physical spaces. Embodiments include a plurality of audio transducers and methods for performing audio spatialization using the plurality of audio transducers corresponding to the physical spaces, by apportioning volume levels among the audio transducers based on the movement of a displayed object.
PCT/US2010/047913 2009-09-03 2010-09-03 Système tactile multipoint et à utilisateurs multiples à grande échelle WO2011029067A2 (fr)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US12/553,961 US20110050640A1 (en) 2009-09-03 2009-09-03 Calibration for a Large Scale Multi-User, Multi-Touch System
US12/553,959 2009-09-03
US12/553,966 US8730183B2 (en) 2009-09-03 2009-09-03 Large scale multi-user, multi-touch system
US12/553,962 US9274699B2 (en) 2009-09-03 2009-09-03 User interface for a large scale multi-user, multi-touch system
US12/553,961 2009-09-03
US12/553,962 2009-09-03
US12/553,959 US20110055703A1 (en) 2009-09-03 2009-09-03 Spatial Apportioning of Audio in a Large Scale Multi-User, Multi-Touch System
US12/553,966 2009-09-03

Publications (2)

Publication Number Publication Date
WO2011029067A2 true WO2011029067A2 (fr) 2011-03-10
WO2011029067A3 WO2011029067A3 (fr) 2011-06-23

Family

ID=43649671

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/US2010/047897 WO2011029055A1 (fr) 2009-09-03 2010-09-03 Appareils, procédés et systèmes pour constructeur de requêtes visuelles
PCT/US2010/047913 WO2011029067A2 (fr) 2009-09-03 2010-09-03 Système tactile multipoint et à utilisateurs multiples à grande échelle

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/US2010/047897 WO2011029055A1 (fr) 2009-09-03 2010-09-03 Appareils, procédés et systèmes pour constructeur de requêtes visuelles

Country Status (1)

Country Link
WO (2) WO2011029055A1 (fr)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9430140B2 (en) 2011-05-23 2016-08-30 Haworth, Inc. Digital whiteboard collaboration apparatuses, methods and systems
US9465434B2 (en) 2011-05-23 2016-10-11 Haworth, Inc. Toolbar dynamics for digital whiteboard
US9471192B2 (en) 2011-05-23 2016-10-18 Haworth, Inc. Region dynamics for digital whiteboard
US9479549B2 (en) 2012-05-23 2016-10-25 Haworth, Inc. Collaboration system with whiteboard with federated display
US9479548B2 (en) 2012-05-23 2016-10-25 Haworth, Inc. Collaboration system with whiteboard access to global collaboration data
US10255023B2 (en) 2016-02-12 2019-04-09 Haworth, Inc. Collaborative electronic whiteboard publication process
US10304037B2 (en) 2013-02-04 2019-05-28 Haworth, Inc. Collaboration system including a spatial event map
US10802783B2 (en) 2015-05-06 2020-10-13 Haworth, Inc. Virtual workspace viewport following in collaboration systems
US11126325B2 (en) 2017-10-23 2021-09-21 Haworth, Inc. Virtual workspace including shared viewport markers in a collaboration system
US11212127B2 (en) 2020-05-07 2021-12-28 Haworth, Inc. Digital workspace sharing over one or more display clients and authorization protocols for collaboration systems
US11573694B2 (en) 2019-02-25 2023-02-07 Haworth, Inc. Gesture based workflows in a collaboration system
US11740915B2 (en) 2011-05-23 2023-08-29 Haworth, Inc. Ergonomic digital collaborative workspace apparatuses, methods and systems
US11750672B2 (en) 2020-05-07 2023-09-05 Haworth, Inc. Digital workspace sharing over one or more display clients in proximity of a main client
US11861561B2 (en) 2013-02-04 2024-01-02 Haworth, Inc. Collaboration system including a spatial event map
US11934637B2 (en) 2017-10-23 2024-03-19 Haworth, Inc. Collaboration system including markers identifying multiple canvases in multiple shared virtual workspaces
US12019850B2 (en) 2023-06-23 2024-06-25 Haworth, Inc. Collaboration system including markers identifying multiple canvases in multiple shared virtual workspaces

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9280206B2 (en) * 2012-08-20 2016-03-08 Samsung Electronics Co., Ltd. System and method for perceiving images with multimodal feedback
CN111459888B (zh) * 2020-02-11 2023-06-30 天启黑马信息科技(北京)有限公司 一种文献检索的方法与设备

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6256046B1 (en) * 1997-04-18 2001-07-03 Compaq Computer Corporation Method and apparatus for visual sensing of humans for active public interfaces
US20050188316A1 (en) * 2002-03-18 2005-08-25 Sakunthala Ghanamgari Method for a registering and enrolling multiple-users in interactive information display systems
US20060170614A1 (en) * 2005-02-01 2006-08-03 Ruey-Yau Tzong Large-scale display device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6285995B1 (en) * 1998-06-22 2001-09-04 U.S. Philips Corporation Image retrieval system using a query image
US6980982B1 (en) * 2000-08-29 2005-12-27 Gcg, Llc Search system and method involving user and provider associated beneficiary groups
US7411575B2 (en) * 2003-09-16 2008-08-12 Smart Technologies Ulc Gesture recognition method and touch system incorporating the same
US7855811B2 (en) * 2006-10-17 2010-12-21 Silverbrook Research Pty Ltd Method of providing search results to a user
US20080267504A1 (en) * 2007-04-24 2008-10-30 Nokia Corporation Method, device and computer program product for integrating code-based and optical character recognition technologies into a mobile visual search

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6256046B1 (en) * 1997-04-18 2001-07-03 Compaq Computer Corporation Method and apparatus for visual sensing of humans for active public interfaces
US20050188316A1 (en) * 2002-03-18 2005-08-25 Sakunthala Ghanamgari Method for a registering and enrolling multiple-users in interactive information display systems
US20060170614A1 (en) * 2005-02-01 2006-08-03 Ruey-Yau Tzong Large-scale display device

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9430140B2 (en) 2011-05-23 2016-08-30 Haworth, Inc. Digital whiteboard collaboration apparatuses, methods and systems
US9465434B2 (en) 2011-05-23 2016-10-11 Haworth, Inc. Toolbar dynamics for digital whiteboard
US9471192B2 (en) 2011-05-23 2016-10-18 Haworth, Inc. Region dynamics for digital whiteboard
US11886896B2 (en) 2011-05-23 2024-01-30 Haworth, Inc. Ergonomic digital collaborative workspace apparatuses, methods and systems
US11740915B2 (en) 2011-05-23 2023-08-29 Haworth, Inc. Ergonomic digital collaborative workspace apparatuses, methods and systems
US9479549B2 (en) 2012-05-23 2016-10-25 Haworth, Inc. Collaboration system with whiteboard with federated display
US9479548B2 (en) 2012-05-23 2016-10-25 Haworth, Inc. Collaboration system with whiteboard access to global collaboration data
US10304037B2 (en) 2013-02-04 2019-05-28 Haworth, Inc. Collaboration system including a spatial event map
US11887056B2 (en) 2013-02-04 2024-01-30 Haworth, Inc. Collaboration system including a spatial event map
US11861561B2 (en) 2013-02-04 2024-01-02 Haworth, Inc. Collaboration system including a spatial event map
US10949806B2 (en) 2013-02-04 2021-03-16 Haworth, Inc. Collaboration system including a spatial event map
US11481730B2 (en) 2013-02-04 2022-10-25 Haworth, Inc. Collaboration system including a spatial event map
US11262969B2 (en) 2015-05-06 2022-03-01 Haworth, Inc. Virtual workspace viewport following in collaboration systems
US11775246B2 (en) 2015-05-06 2023-10-03 Haworth, Inc. Virtual workspace viewport following in collaboration systems
US10802783B2 (en) 2015-05-06 2020-10-13 Haworth, Inc. Virtual workspace viewport following in collaboration systems
US11816387B2 (en) 2015-05-06 2023-11-14 Haworth, Inc. Virtual workspace viewport following in collaboration systems
US11797256B2 (en) 2015-05-06 2023-10-24 Haworth, Inc. Virtual workspace viewport following in collaboration systems
US10705786B2 (en) 2016-02-12 2020-07-07 Haworth, Inc. Collaborative electronic whiteboard publication process
US10255023B2 (en) 2016-02-12 2019-04-09 Haworth, Inc. Collaborative electronic whiteboard publication process
US11755176B2 (en) 2017-10-23 2023-09-12 Haworth, Inc. Collaboration system including markers identifying multiple canvases in a shared virtual workspace
US11126325B2 (en) 2017-10-23 2021-09-21 Haworth, Inc. Virtual workspace including shared viewport markers in a collaboration system
US11934637B2 (en) 2017-10-23 2024-03-19 Haworth, Inc. Collaboration system including markers identifying multiple canvases in multiple shared virtual workspaces
US11573694B2 (en) 2019-02-25 2023-02-07 Haworth, Inc. Gesture based workflows in a collaboration system
US11750672B2 (en) 2020-05-07 2023-09-05 Haworth, Inc. Digital workspace sharing over one or more display clients in proximity of a main client
US11212127B2 (en) 2020-05-07 2021-12-28 Haworth, Inc. Digital workspace sharing over one or more display clients and authorization protocols for collaboration systems
US11956289B2 (en) 2020-05-07 2024-04-09 Haworth, Inc. Digital workspace sharing over one or more display clients in proximity of a main client
US12019850B2 (en) 2023-06-23 2024-06-25 Haworth, Inc. Collaboration system including markers identifying multiple canvases in multiple shared virtual workspaces

Also Published As

Publication number Publication date
WO2011029067A3 (fr) 2011-06-23
WO2011029055A1 (fr) 2011-03-10

Similar Documents

Publication Publication Date Title
US9274699B2 (en) User interface for a large scale multi-user, multi-touch system
US8730183B2 (en) Large scale multi-user, multi-touch system
US20110050640A1 (en) Calibration for a Large Scale Multi-User, Multi-Touch System
US20110055703A1 (en) Spatial Apportioning of Audio in a Large Scale Multi-User, Multi-Touch System
WO2011029067A2 (fr) Système tactile multipoint et à utilisateurs multiples à grande échelle
JP6078884B2 (ja) カメラ式マルチタッチ相互作用システム及び方法
CN112631438B (zh) 交互式投影的系统和方法
US9886102B2 (en) Three dimensional display system and use
US8643569B2 (en) Tools for use within a three dimensional scene
KR101823182B1 (ko) 동작의 속성을 이용한 디스플레이 상의 3차원 사용자 인터페이스 효과
Stavness et al. pCubee: a perspective-corrected handheld cubic display
US10739936B2 (en) Zero parallax drawing within a three dimensional display
US8502816B2 (en) Tabletop display providing multiple views to users
Schöning et al. Building interactive multi-touch surfaces
US9110512B2 (en) Interactive input system having a 3D input space
US20150370322A1 (en) Method and apparatus for bezel mitigation with head tracking
DE112020002268T5 (de) Vorrichtung, verfahren und computerlesbares medium zur darstellung von dateien computergenerierter realität
US20230092282A1 (en) Methods for moving objects in a three-dimensional environment
EP3814876B1 (fr) Placement et manipulation d'objets dans un environnement de réalité augmentée
US20220172319A1 (en) Camera-based Transparent Display
US20220044580A1 (en) System for recording and producing multimedia presentations
US11620790B2 (en) Generating a 3D model of a fingertip for visual touch detection
Muller Multi-touch displays: design, applications and performance evaluation
JP2016018363A (ja) 仮想空間平面上に配置したオブジェクトを表示制御するゲーム・プログラム
CN110291495A (zh) 信息处理系统、信息处理方法及程序

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10814602

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10814602

Country of ref document: EP

Kind code of ref document: A2