WO2018022145A1 - Viewing token for touch senstive screen - Google Patents

Viewing token for touch senstive screen

Info

Publication number
WO2018022145A1
Authority
WO
WIPO (PCT)
Prior art keywords
token
viewing
platform
touch sensitive screen
Prior art date
Application number
PCT/US2017/016131
Other languages
French (fr)
Other versions
WO2018022145A8 (en)
Inventor
Fredric Ellman
Steven Ellman
Jonathan Thomas STOKES
Original Assignee
Giapetta's Workshop Llc
Priority date
Filing date
Publication date
Priority claimed from US15/220,783 external-priority patent/US20170031558A1/en
Application filed by Giapetta's Workshop Llc filed Critical Giapetta's Workshop Llc
Publication of WO2018022145A1 publication Critical patent/WO2018022145A1/en
Publication of WO2018022145A8 publication Critical patent/WO2018022145A8/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20 Input arrangements for video game devices
    • A63F 13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F 13/214 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F 13/2145 Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads the surface being also a display device, e.g. touch screens
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/90 Constructional details or arrangements of video game devices not provided for in groups A63F13/20 or A63F13/25, e.g. housing, wiring, connections or cabinets
    • A63F 13/98 Accessories, i.e. detachable arrangements optional for the use of the video game device, e.g. grip supports of game controllers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0354 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0354 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F 3/03543 Mice or pucks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/039 Accessories therefor, e.g. mouse pads
    • G06F 3/0393 Accessories for touch pads or touch screens, e.g. mechanical guides added to touch screens for drawing straight lines, hard keys overlaying touch screens or touch pads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/044 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
    • G06F 3/0445 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means using two or more layers of sensing electrodes, e.g. using two layers of electrodes separated by a dielectric layer

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A viewing token for interacting with a capacitive touch sensitive screen has a horizontal platform with an aperture and two or more capacitive probes. The touch screen is programmed to detect the locations of the capacitive probes and from them determine the location of the aperture. The touch sensitive screen is adapted to display an image that is visible through the aperture when the location of the aperture coincides with a display location on the touch screen. A user must also be touching the viewing token in order for the image to be displayed. The combination of a viewing token and a computer with a touch sensitive screen is a viewing token system. The system may be adapted to provide a story or game where the user interacts with the story or game using the viewing token.

Description

Title:
Viewing Token for Touch Sensitive Screen
Technical Field:
The inventions described herein are in the field of apparatuses for interacting with a touch sensitive screen.
Background Art:
Figure 18A is a perspective view of a prior art game apparatus 1800. Figure 18A is based on figure 1 of Hong Kong granted patent HK1147898, "Game Apparatus and Method of Use Thereof", by Ho, Yiu Ming Kenneth, published on 19 August 2011 (Ho). The apparatus comprises a game device 1802 and a smart phone 1808. The smart phone comprises a touch screen display 1806. The game device comprises a first probe 1811, second probe 1812 and third probe 1813. The touch screen display is adapted to sense the location of the probes when the probes touch the display. The probes may comprise an electrical conductor. Said sensing may be by a localized change in capacitance of said screen due to said probes touching said screen.
Figure 18B is a top view of the game apparatus 1800 showing the smart phone 1808, touch screen display 1806 and outline of the game device 1802. Figure 18B is based on figure 2 of Ho. The smart phone is adapted to determine the locations 1821, 1822 and 1823 of the probes on the screen when the probes contact the screen. Said locations form an isosceles triangle A'B'C' (item 1824). Side A'B' of the triangle is the base. Sides B'C' and C'A' are equal in length. The smart phone may be adapted to present an image 1832 on the touch screen depending upon the sensed contact locations of the game device. The image must be presented outside of the outline of the game device in order for it to be visible to a user. No provision is made in Ho to provide an image that is visible underneath a game device. This limits the types of interactions that a user can have with the game apparatus.
Disclosure of Invention:
The disclosure of invention is provided as a guide to understanding the invention. It does not necessarily describe the most generic embodiment of the invention or the broadest range of alternative embodiments.
Figure 1A is a top view of a viewing token 100 that will allow a user to view images presented underneath the token. Figure 1B is a side view of the viewing token 100 of figure 1A. The viewing token comprises a horizontal platform 104 that comprises a top 111 and bottom 113. A viewing aperture 106 is provided in the platform to view images underneath the platform when the platform is placed on a touch sensitive screen. The viewing platform additionally comprises at least a first probe 101 at a first position 151 on the platform, and a second probe 102 at a second position 152 on said platform. Each probe comprises a top 105 and a bottom 107. The probes may be any item whose position can be sensed by a touch sensitive screen when the bottom of said probe is in close proximity to or actually touching said touch sensitive screen. Sensing may be by one or more of capacitive sensing, resistive sensing, optical sensing, RF sensing, acoustic sensing or any other technology that can determine a location on a screen where a probe touches a screen.
The probes may be capacitive probes. As used herein, a capacitive probe is a probe that is adapted to alter the capacitance of a capacitive touch sensitive screen at a location where the capacitive probe is in close proximity to or touches said touch sensitive screen. Each capacitive probe may comprise a vertical electrically conductive shaft 132. The shaft may pass through the thickness of the platform 122 and optionally protrude above the platform 126 and/or extend below the platform 128. The probes may be designed so that they must be in electrical contact with a person in order to produce a change in capacitance large enough to be detected when the probes touch a touch sensitive screen. The touch sensitive screen may be originally designed to sense a person's finger touching it.
The probes, therefore, simulate a finger touch in a manner similar to a capacitive stylus. Touch sensitive screens on smart phones or tablet computers are examples. As used herein, the term "electrical contact" between two objects includes capacitive contact where an insulator may be between two conductors provided the insulator is sufficiently thin so that an increase in capacitance can be detected when the two conductors are brought together with only the insulator between them. The bottom of a probe may be at least partially covered by a flexible boot 134. The boot is normally bowed out so that the bottom 107 of the probe will stand off 124 from the bottom inside 133 of the boot when the viewing token sits on a horizontal touch screen under said token's own weight. Thus a touch sensitive screen might not sense the probe when the viewing token merely sits on the screen. The boot may be flexible enough so that the bottom of the probe will move down and touch 144 the inside surface of the boot when a user presses down on the viewing token. The boot is thin enough so that an increase in capacitance is detected by a touch sensitive screen below the boot. Thus at least one requirement for the viewing token to be sensed by the touch sensitive screen may be that a user must press down on the token.
The platform 104 of a viewing token may be electrically isolated from a probe such that a user must touch the probe in order for there to be an adequate increase in capacitance for a touch sensitive screen to sense the probe. Electrical isolation can be achieved by making the platform out of plastic.
Alternatively, the platform can be electrically conductive and in electrical contact with a probe such that a user can touch the platform and not the probe yet still be in electrical contact with the probe. This can be achieved by making the platform out of metal or coating a platform made out of an electrical insulator with metal.
The viewing token may comprise a third probe 103 located at a third position 153 on the platform. The first, second and third positions of the probes may form a triangle ABC (item 110). It has been surprisingly discovered that if the sides of the triangle are different in length, then the positions of the probes on the token can uniquely define a reference position 116 for the token and a reference vector 118 for the token.
The reference position may be defined by triangulation 120, 154 with the probe positions. The reference vector may be defined relative to one of the sides of the triangle, such as the longest side. The reference vector may be normal to the longest side with a direction that points from the longest side to the reference position.
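To make the geometry concrete, the following minimal Python sketch derives a reference vector as the unit normal to the longest side of the probe triangle, pointed toward a given reference position such as the aperture centroid. The coordinate values and function name are illustrative assumptions, not part of the patent disclosure.

```python
# Minimal sketch: define a token's reference vector as the unit normal to the
# longest side of the probe triangle, pointed toward the reference position
# (e.g. the aperture centroid). Coordinates are in the token's own frame.
# All names and values here are illustrative, not taken from the patent text.
import math

def reference_vector(probes, reference_position):
    """probes: three (x, y) tuples; reference_position: (x, y)."""
    # Find the longest side of the triangle formed by the probe positions.
    sides = [(probes[i], probes[(i + 1) % 3]) for i in range(3)]
    p, q = max(sides, key=lambda s: math.dist(s[0], s[1]))
    # A unit normal to that side (there are two choices, 180 degrees apart).
    dx, dy = q[0] - p[0], q[1] - p[1]
    length = math.hypot(dx, dy)
    normal = (-dy / length, dx / length)
    # Keep the normal that points from the side toward the reference position.
    mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    to_ref = (reference_position[0] - mid[0], reference_position[1] - mid[1])
    if normal[0] * to_ref[0] + normal[1] * to_ref[1] < 0:
        normal = (-normal[0], -normal[1])
    return normal

# Example with a scalene probe triangle and an assumed aperture centre.
probes = [(0.0, 0.0), (30.0, 0.0), (10.0, 22.0)]
aperture_centre = (14.0, 9.0)
print(reference_vector(probes, aperture_centre))   # (0.0, 1.0)
```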
When the viewing token is placed on a touch sensitive screen, the sensed locations of the probes on the screen can be used to determine the location of the reference position of the token on the touch sensitive screen and the orientation of the reference vector of the token on the touch sensitive screen. Thus the location and orientation of the aperture 106 of the token on a touch screen can be determined even if there are asymmetries of the aperture perimeter 108 or the token perimeter 112. The aperture can even be located remotely from the probes. Figure 1A shows a simple case of asymmetry where a circular perimeter of an aperture 108 is offset relative to a circular perimeter 112 of the token.
As used herein "position" generally refers to a horizontal coordinate on a platform. A "location" generally refers to a horizontal coordinate on a touch screen. The positions of probes on a platform may be fixed, but the locations of probes on a touch screen may change as a viewing token is moved on said touch screen.
A suitable reference position of the token is the mathematical centroid of the aperture. An alternative suitable reference position is the apparent centroid of the overall shape of an aperture. For example, if an aperture has an overall circular shape except for minor protrusions into or out from the circle, then the reference position may be the apparent center of the circle ignoring the protrusions.
The aperture may be an opening, such as a circular hole. The aperture should be big enough so that an image can be readily visible through it. A diameter of 1 cm or greater for an aperture is suitable. The aperture may be covered by a transparent material, such as a window. The platform itself may be transparent. The transparent material covering the aperture may be a lens, such as a Fresnel lens. The aperture may be an electronic screen where images are shown. The electronic screen may virtually show what is underneath the aperture. The viewing token may comprise a computer, a power source and communication means, such as Bluetooth® to communicate with a computer in control of a touch sensitive screen. The viewing token may comprise a variety of input and output devices, such as individual LED lights, microphones, cameras, speakers, etc.
Brief Description of Drawings:
Figure 1A is a top view of a viewing token.
Figure 1B is a side view of the viewing token of figure 1A.
Figure 2 is a top view of a viewing token system that comprises a viewing token, a computer and a touch sensitive screen controlled by said computer.
Figure 3 is a top view of the viewing token system of figure 2 with a user moving the viewing token.
Figure 4 is a top view of alternative viewing token designs.
Figure 5 is a top view of a viewing token that comprises a platform divided into a first portion and a second portion.
Figure 6 is a top view of alternative viewing token designs.
Figure 7 is a sequence of animation images visible through the viewing aperture of a viewing token.
Figure 8 is a top view of a viewing token system that comprises two different viewing tokens on one touch sensitive screen.
Figures 9A through 14B are sequential top views of a user interacting with a touch sensitive screen using one or more viewing tokens.
Figure 15 is a top view of a viewing token on a touch sensitive screen showing different actions being performed depending on the orientation of the viewing token.
Figure 16 is a top view of a viewing token system showing a user virtually throwing an object displayed on the touch screen.
Figure 17 is a top view of a viewing token set with multiple interchangeable aperture inserts.
Figure 18A is a perspective view of a prior art game apparatus.
Figure 18B is a top view of the prior art game apparatus of figure 18A.
Best Mode for Carrying out the Invention:
The detailed description describes non-limiting exemplary embodiments. Any individual features may be combined with other features as required by different applications for at least the benefits described herein. As used herein, the term "about" means plus or minus 10% of a given value unless specifically indicated otherwise.
A portion of the disclosure of this patent document contains material to which a claim for copyright is made. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but reserves all other copyright rights whatsoever.
As used herein, a "computer-based system" comprises an input device for receiving data, an output device for outputting data in tangible form (e.g. printing or displaying on a computer screen), a permanent memory for storing data as well as computer code, and a microprocessor for executing computer code wherein said computer code resident in said permanent memory will physically cause said microprocessor to read-in data via said input device, process said data within said microprocessor and output said processed data via said output device.
As used herein, the term "shaped" means that an item has the overall appearance of a given shape even if there are minor variations from the pure form of said given shape.
As used herein, the term "generally" when referring to a shape means that an ordinary observer will perceive that an object has said shape even if there are minor variations from said shape.
As used herein, relative orientation terms, such as "up", "down", "top", "bottom", "left", "right", "vertical", "horizontal", "distal" and "proximal" are defined with respect to an initial presentation of an object and will continue to refer to the same portion of an object even if the object is subsequently presented with an alternative orientation, unless otherwise noted.
As used herein, "touch screen" and "touch sensitive screen" mean the same thing.
Viewing Token System
Figure 2 is a top view of a viewing token system 200. The viewing token system comprises a viewing token 100 and a computer 202. The computer may be a tablet computer. The viewing token of figure 1 is shown but any alternative design may be used subject to the conditions set forth herein. The viewing token comprises a horizontal platform 104 with an aperture 106, a first probe 101 at a first position 151 on the platform, a second probe 102 at a second position 152 on the platform, and a third probe 103 at a third position 153 on the platform.
The computer comprises a touch sensitive screen 204. The touch sensitive screen may be a capacitive touch sensitive screen. The computer is adapted to detect a first location 251 of said first probe on said touch screen, a second location 252 for said second probe on said touch screen and a third location 253 for said third probe on said touch screen when said viewing token is placed on said touch sensitive screen and said viewing token is touched by a user.
The first, second and third positions of said probes on said viewing token define a triangle ABC (item 110). The first side AB, second side BC and third side CA of said triangle each have different lengths. A reference position 116 for said viewing token is defined relative to said triangle. A reference vector 118 is also defined relative to said triangle. The computer is adapted to determine a location on said touch sensitive screen for said reference position of said token based on said detected locations of said probes on said touch sensitive screen. The computer may also be adapted to determine an orientation 258 on said touch sensitive screen for said reference vector based on said locations of said probes on said touch sensitive screen. The computer, for example, may have a library of token positions in its permanent memory. Said library may comprise records for different tokens. Each record may comprise a token identifier, probe positions, a reference position and a reference vector. The computer can compare the sensed locations of the token probes with the stored token positions in said library to identify which token in said library the sensed token corresponds to. The computer may then use geometric calculations, such as triangulation, to determine a location for the reference position for the token based on the sensed probe locations. The computer may similarly use geometric calculations to determine an orientation for the token reference vector. The records for the tokens in the library may also comprise data defining the inner and outer perimeter of the token. Said data can be used to display images on the touch sensitive screen that are visible through a token aperture or visible on portions of the touch sensitive screen that are external to the token.
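The paragraph above describes a library of token records and geometric calculations such as triangulation. The hedged Python sketch below shows one plausible way to implement that flow: identify a token by comparing sorted side lengths against stored records, then recover its pose on the screen with a closed-form 2D rigid-transform (Kabsch) fit and map the stored reference position and vector onto the screen. The record layout, tolerance, units and all names are assumptions for illustration; only the fox and raven side lengths are taken from figure 6.

```python
import math
import numpy as np

# Probe coordinates are laid out to reproduce the side lengths given for the fox
# (19, 25.4, 30.2 mm) and raven (22.2, 28.6, 30.2 mm) tokens of figure 6; the
# reference positions and vectors are placeholders, not taken from the patent.
TOKEN_LIBRARY = {
    "fox":   {"probes": [(0.0, 0.0), (30.2, 0.0), (19.8, 15.9)],
              "reference": (15.0, 8.0), "vector": (0.0, 1.0)},
    "raven": {"probes": [(0.0, 0.0), (30.2, 0.0), (20.5, 20.0)],
              "reference": (16.0, 9.0), "vector": (0.0, 1.0)},
}

def side_lengths(points):
    """Sorted side lengths of the triangle formed by three (x, y) points."""
    return sorted(math.dist(points[i], points[(i + 1) % 3]) for i in range(3))

def identify_token(sensed, tolerance=2.0):
    """Return the library key whose probe triangle matches the sensed locations
    (assumed here to be expressed in the same units as the records)."""
    sensed_sides = side_lengths(sensed)
    for name, record in TOKEN_LIBRARY.items():
        if all(abs(a - b) <= tolerance
               for a, b in zip(sensed_sides, side_lengths(record["probes"]))):
            return name
    return None

def order_by_opposite_side(points):
    """Because all three sides differ, each vertex is uniquely labelled by the
    length of the side opposite it; sorting by that length fixes correspondences."""
    pts = list(points)
    def opposite(i):
        return math.dist(pts[(i + 1) % 3], pts[(i + 2) % 3])
    return [pts[i] for i in sorted(range(3), key=opposite)]

def token_pose(token_probes, sensed):
    """Least-squares 2D rigid transform (R, t) mapping token frame -> screen frame."""
    src = np.array(order_by_opposite_side(token_probes))
    dst = np.array(order_by_opposite_side(sensed))
    src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)      # 2x2 Kabsch solution
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:                        # enforce a proper rotation
        vt[-1] *= -1
        r = vt.T @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t

def reference_on_screen(name, sensed):
    """Screen location of the reference position and orientation of the reference vector."""
    record = TOKEN_LIBRARY[name]
    r, t = token_pose(record["probes"], sensed)
    return r @ np.array(record["reference"]) + t, r @ np.array(record["vector"])
```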
The computer may be adapted to define a screen inner perimeter 212 and screen outer perimeter 214 on said touch sensitive screen. The computer may be further adapted to display an image within said screen inner perimeter when said computer determines that said reference position of said viewing token has a reference location 256 that is within a location tolerance 217 of a display location 216. The display location is defined so that the image displayed in the screen inner perimeter is at least partially visible through said aperture of said platform of said token.
The computer may be further adapted to display said image within said screen inner perimeter when the detected orientation of the reference vector of the viewing token is within an orientation tolerance 219 of a display orientation 218. In the example shown in figure 2, the token outer perimeter 112 will be aligned with the screen outer perimeter 214 when the reference position of the viewing token is aligned with the display location of the touch sensitive screen and the reference vector 118 of the viewing token is aligned with the display orientation 218 of the touch sensitive screen. The computer may be further adapted to display an image within said screen inner perimeter when said reference position of said viewing token is aligned with said display location of said touch sensitive screen and the orientation 258 of said reference vector of said viewing token changes from a first display orientation to a second display orientation.
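A short sketch of the display condition just described, assuming the reference location and reference vector have already been recovered (for example with the pose sketch above); the tolerance values and names are placeholders.

```python
import math

def within_tolerances(reference_location, reference_vector,
                      display_location, display_vector,
                      location_tolerance=5.0, orientation_tolerance_deg=10.0):
    """True when the token is close enough, in position and orientation, to show the image."""
    close = math.dist(reference_location, display_location) <= location_tolerance
    angle = math.degrees(math.atan2(reference_vector[1], reference_vector[0])
                         - math.atan2(display_vector[1], display_vector[0]))
    angle = (angle + 180.0) % 360.0 - 180.0        # wrap the difference into [-180, 180)
    return close and abs(angle) <= orientation_tolerance_deg
```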
Occasionally, a user's finger or other object may touch a touch sensitive screen at a stray location 220. The computer may be adapted to identify all locations where said touch sensitive screen is being touched. The computer may then determine which three of the detected touch locations correspond to said triangle of said viewing token. The computer may then assign the first, second and third probe locations to the touch locations that correspond to the triangle and disregard the stray location.
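The stray-touch handling described above could be sketched as follows, reusing side_lengths from the earlier library sketch: among all detected touch locations, pick the triple whose side lengths best match the stored probe triangle and disregard the rest. The scoring rule and tolerance are illustrative assumptions.

```python
import math
from itertools import combinations

def match_probe_locations(touch_locations, record_probes, tolerance=2.0):
    """Pick the triple of touches whose side lengths best match the stored probe
    triangle; any remaining touches (e.g. a stray finger) are simply ignored."""
    target = side_lengths(record_probes)           # helper from the library sketch above
    best_triple, best_error = None, float("inf")
    for triple in combinations(touch_locations, 3):
        error = max(abs(a - b) for a, b in zip(side_lengths(list(triple)), target))
        if error < best_error:
            best_triple, best_error = list(triple), error
    return best_triple if best_error <= tolerance else None
```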
Figure 3 shows a top view of the viewing token system 200 of figure 2. A user's left hand 300 and right hand 310 are shown touching the tops of the first 101, second 102 and third 103 probes of the viewing token 100. The user's right hand is also making a stray contact 220 with the touch sensitive screen 204 of the computer 202. The computer has identified the locations of the first, second and third probes by determining which of the detected touch locations have distances between them that correspond to the distances between the probe positions on file in the computer's memory. The computer has determined that the reference position of the viewing token is within a location tolerance of a display location and that the orientation of the reference vector of the viewing token is within an orientation tolerance of a display orientation. The inner perimeter 108 of the viewing token is therefore aligned with a screen inner perimeter 212 and the outer perimeter 112 of the viewing token is aligned with the screen outer perimeter 214. The computer is adapted to then display an inner image 304 within the screen inner perimeter such that said inner image is visible through the aperture 106 of the viewing token.
The computer is also adapted to display an outer image 306 on the touch sensitive screen. The outer image is displayed outside of the screen outer perimeter. The outer image may provide visual clues as to where the viewing token should be positioned in order for the computer to display the inner image. In this example, the outer image is a series of radial lines directed inward. Thus the computer may be adapted to first display the outer image when the location of the reference position of the viewing token is within a first relatively large location tolerance of the display location. This guides the user to where the viewing token should be moved so that the location of the reference position of the viewing token is then within a second relatively small location tolerance of the display location. The inner image may then be displayed.
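One plausible reading of this two-stage guidance, as a sketch: a coarse tolerance triggers the outer guide image and a fine tolerance triggers the inner image. The tolerance values and drawing callbacks are placeholders, not taken from the patent.

```python
import math

COARSE_TOLERANCE = 40.0    # loose match: show the radial-line cue outside the token
FINE_TOLERANCE = 5.0       # tight match: show the inner image through the aperture

def update_display(reference_location, display_location, draw_outer_cue, draw_inner_image):
    distance = math.dist(reference_location, display_location)
    if distance <= FINE_TOLERANCE:
        draw_inner_image()      # image becomes visible through the aperture
    elif distance <= COARSE_TOLERANCE:
        draw_outer_cue()        # radial-line cue guiding the user inward
```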
Alternative Viewing Token Designs
Figure 4 is a top view of a variety of alternative viewing token designs. Viewing token 400 comprises a platform 401 with an equilateral triangular aperture 402. The token further comprises three probes 403, 405 and 407 arranged in an equilateral triangle offset from the equilateral triangle of the aperture. The aperture in combination with the probes has three-fold symmetry. Thus a reference position and reference vector can be defined so that the aperture can be aligned with an equilateral triangular inner viewing perimeter on a touch sensitive screen even though the distances between the probes are equal. The outer perimeter 404 of the token has a "gear" shape.
Viewing token 410 comprises a platform 411 with two matched apertures 414, 416. The token further comprises four probes 413, 415, 417 and 419. The platform has an oval outer perimeter with a raised rim 412. The viewing token exhibits two-fold symmetry with respect to opposite probes. Thus either pair of opposite probes 413, 417 or 415, 419 can be used to define a reference position and reference vector so that a touch sensitive screen can display images aligned with the apertures.
Viewing token 420 comprises a platform 421 with an oval aperture 422. The token further comprises four probes 423, 425, 427 and 429. The outer perimeter comprises a filigree 426. The platform has a raised rounded surface 424. Any additional vertical extensions, such as handles, may be provided to the platform depending upon desired aesthetics or mechanical functionality.
Viewing token 430 comprises a platform 431 with an octagonal aperture 432 with a raised inner perimeter 434. The token further comprises three probes 433, 435, 437. The probes may have a generally equilateral arrangement. Some difference in the spacing between the probes must be provided, however, if an image is to be presented in the octagonal orifice and aligned with the inner perimeter of the orifice. The difference may be as small as one pixel of the touch sensitive screen.
Figure 5 is a top view of an alternative viewing token 500. The viewing token comprises a first portion 502 of a platform with a viewing aperture 508. The token further comprises a second portion 504 of the platform. The first portion of the platform comprises a first probe 514 and a second probe 516. The second portion of the platform comprises a third probe 522. The first and second portions of the platform are adapted 506 to be reversibly joined together. Said adaptation in figure 5 is a dovetail joint. Any reversible attachment, however, may be used, such as hook and loop fasteners, snaps, screws, mild adhesives, and interlocking mechanical shapes. As used herein, a "reversible attachment" is any attachment that can be made and broken multiple times.
The first portion of the platform has a circular aperture 508. The first and second probes are located on a centerline 518 of the aperture and are about equidistant from the center 512 of the aperture. Thus a reference position can be defined as the center of the aperture so that an image can be displayed below the viewing token and be visible through the aperture even though there are only two probes on the first portion of the platform. The first portion of the platform could be used alone without the second portion. When the second portion of the platform is reversibly joined to the first portion of the platform at first position 532, then a third probe position is defined and hence a first triangle 536 is defined. The triangle may be used to encode actions that the computer will take when the viewing token is placed on a touch screen surface. These actions may be different than if the token is used with the first portion of the platform alone. The second portion of the platform may unlock desired functionality.
A second position 534, or additional positions, may be provided so that a plurality of different triangles may be formed depending upon which position the second portion of the platform is joined at. The computer can be adapted to take different actions accordingly.
One can also encode information in the shapes of the bottoms of the probes. A touch sensitive screen, for example, may detect a round bottom versus a triangular bottom and take different actions accordingly.
Provision may be made in a platform for reversibly attaching different probes to different pre-specified locations of the platform. The pre-specified locations may be holes in the platform. There may be a large number of holes where the different probes may be attached. The viewing token may therefore serve as a security device where the computer will only react to tokens with probes in certain locations.
Figure 6 is a top view of a "fox" viewing token 600 and a "raven" viewing token 620. The fox viewing token has an aesthetic shape evocative of a fox. It comprises a platform 602 of flat clear plastic with an iridescent coating. Thus a user can see it by virtue of the coating but can also see through it by virtue of the transparent plastic. The aperture 604 has a generally circular shape with a center 614 that is defined as the reference position of the token. The aperture differs from a true circle due to a first protrusion 606 evocative of the cheek of the fox and a second protrusion 607 evocative of the tail of the fox. Nonetheless, the user will perceive the aperture has having a generally circular shape and navigate it on a touch sensitive screen accordingly. An eyelet 616 is provided in the platform for attaching the token to another object, such as a charm bracelet. The platform comprises first 611, second 613 and third 615 probes. The probes are made of metal and protrude upward from the platform and must be individually touched by a user in order for a touch sensitive screen tuned to finger touching to detect them. The probe positions define a triangle ABC (item
612). The lengths of the sides of the triangle are 19 mm, 25.4 mm, and 30.2 mm. Since the sides of the triangle are different, a touch sensitive screen can determine the location and orientation of the token.
The raven token 620 has an aesthetic shape evocative of a raven. The raven token comprises an opaque black plastic platform 622 with a circular viewing aperture 628 and an eyelet 632. The aperture center 626 is defined as the reference position of the token. The raven token comprises first 621, second 623 and third 625 probes. The probes define a triangle 624. The lengths of the sides of the triangle are 22.2 mm, 28.6 mm and 30.2 mm. Since the lengths of the sides of the triangle are different than the triangle for the fox token, a touch sensitive screen can distinguish between the raven token and fox token. These tokens will be used in Example 1 with reference to figures 9A through 14B.
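As a quick illustrative check that side-length matching separates the two tokens of figure 6, the snippet below (reusing TOKEN_LIBRARY and identify_token from the earlier sketch) places the fox triangle on a hypothetical screen with an arbitrary rotation and translation and confirms it is reported as the fox rather than the raven.

```python
import math

theta = math.radians(40.0)   # arbitrary placement of the token on the screen

def place(point, dx=55.0, dy=80.0):
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta) + dx,
            x * math.sin(theta) + y * math.cos(theta) + dy)

sensed = [place(p) for p in TOKEN_LIBRARY["fox"]["probes"]]
print(identify_token(sensed))   # expected: "fox"
```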
Figure 7 shows successive images of an animation 700 that may be presented in the viewing aperture 712 of a viewing token 710. The animation comprises successive images 701, 702, 703, 704, 705 and 706. They show a dog going into and emerging from a doghouse.
Figure 8 is a top view of a viewing token system 800. The viewing token system comprises a computer 802 with a touch sensitive screen 804. The system further comprises a first "teddy bear" viewing token 810 and a second "little girl" viewing token 830. The first viewing token comprises a first platform 812 with a first aperture 813 and three probes arranged in a first triangle 816. The second viewing token comprises a second platform 832 with a second aperture 833 and three probes arranged in a second triangle 836. At least one side of the second triangle has a length that is different from any of the lengths of the sides of the first triangle. Alternatively, the first and second triangles could be left and right hand versions of each other and hence distinguishable by the order of the sides even though they have the same lengths. The platforms of both viewing tokens are electrically conductive and in electrical contact with the probes so that the user can touch each token with one hand 300 or 310 in order for the token to be sensed by the touch sensitive screen.
Once the probes are in proper location and proper orientation on the touch sensitive screen, the computer may do one or more of the following: display a first image 814 in the first aperture, display a second image 834 in the second aperture, display an external image 864, play music 868, or play a dialog 866 between the characters represented by each token. The computer may also perform any other programmable function, such as communicate with other viewing token systems via the Internet through a social media site.
Example 1
Figures 9A through 14B are sequential top views of a user interacting with a touch sensitive screen using one or more viewing tokens. The user will use the tokens to navigate through a graphic novel story line.
Figure 9A shows a graphical user interface (GUI) 900 with successive panes 902 of a story. The images are generated by a computer (not shown) on a touch sensitive screen 904. The computer has defined a screen inner perimeter 906 on the touch sensitive screen. The inner perimeter is not directly viewable by the user but is implied by the graphics around it (e.g. a "magic" mirror). Image cues 912 and text cues 914 in the story suggest to a user that the fox token 600 of figure 6 should be used to get the computer to display an image in the screen inner perimeter. In figure 9B, the user has placed the fox token 600 on the touch sensitive screen and moved the token using said user's finger tips 922 placed on the tops of the probes. The computer has sensed the locations of the token's probes but determined that the reference position of the token, and hence the aperture 604 of the token, is too far away from the display position to show any new images in the screen inner perimeter.
In figure 10A, the user has moved the fox token so that its aperture 604 is close enough to the screen inner perimeter 906 so that external graphic animations 1002 are presented. This is a visual clue to the user that the token is close to being in the correct position. In figure 10B, the user has continued to adjust the token location until an internal graphical animation 1004 is presented and visible through the aperture.
In figure 11A, the internal graphics are replaced with a new image 1102 that extends outside of the token. This is a visual clue that the token may be removed 1104. In figure 11B, the new image 1102 has expanded and the story line continues 1106.
Figure 12A shows a new GUI 1200. The GUI comprises an overlay 1210 with a defined screen inner perimeter 1214 that is implied but not directly visible. The rest of the GUI is greyed out 1202 indicating that an action must be taken with respect to the overlay before the story line will continue. A text cue 1206 and image cue 1204 indicate to a user that the raven token 620 of figure 6 must be used in this GUI. The computer can differentiate between the raven token and the fox token since the probes form different triangles. An action cue 1208 in the form of a curved arrow is provided to indicate to a user that not only must the raven token be positioned properly, but also it must be rotated in order for the computer to take action. Figure 12B shows the raven token properly located on the GUI, but the computer cannot detect the probes 621, 623, 625 since the user's fingers 922 aren't touching the probes. Hence for at least this reason, no action is taken.
Figure 13A shows that the user's fingers 922 are properly touching the probes, the raven token 620 is in the proper location with the proper orientation, but the computer is still taking no action since the proper motion has not been detected. Figure 13B shows that the user has taken the proper action of rotating 1302 the raven token 620. The computer senses the motion and takes the action of displaying an initial external image 1304. This indicates to the user that the proper action has been taken and that the raven token can be removed from the screen.
Figure 14A shows that the overlay has been removed and an animated action of the opening of an iris portal 1402 takes place. In figure 14B, a continued story line 1404 is initially presented in the iris portal and then expands.
Example 2
Figure 15 shows a viewing token system 1500 that comprises a computer comprising a touch sensitive screen 1501 and a viewing token 1502. A graphical user interface (GUI) 1503 is presented on the touch sensitive screen. The GUI shows a pile of garbage that comprises an image of a desired object 1516 which is a fork. The viewing token comprises a platform 1509 and three probes 1505, 1506 and 1507. The platform comprises an aperture 1511. The platform is generally opaque except for a transparent window 1504 that extends from the inner perimeter 1513 of the platform to the outer perimeter 1515 of the platform. This provides a visual connection from the aperture to the external area of the touch sensitive screen. Alternatively, the platform could have a "C" shape where the aperture is connected by an opening to the outer perimeter of the token.
The computer has detected that the token is properly aligned with the screen inner perimeter 1510. A character 1512 is displayed and visible through the aperture. The character is shown with an arm proceeding out through the window in the platform and terminating in a hand 1514 as an external image. The computer is adapted to change the position of the hand depending upon the orientation of the token. Thus the user must rotate 1518 the token so that the hand will proceed up to the desired object 1516 to take an appropriate action.
Example 3
Figure 16 is a top view of a viewing token system 200 showing a user 1601 virtually throwing an object 1602 displayed on a touch sensitive screen 204 of a computer 202. In operation, a viewing token 100 is placed on the touch sensitive screen and touched by the user. The computer is adapted to show the object 1602, such as a ball, in the aperture 1604, no matter where the viewing token is located. The computer may show the object rotating 1606 as the user moves the token 1608 on the touch sensitive screen. The user can rotate the token to give spin to the ball. An action by the user, such as a sudden stop, causes the computer to continue the motion of the ball towards a desired location, such as a net 1610 displayed on the touch sensitive screen.
In an alternative embodiment, the computer may be mounted on a stand 1626 and inclined at a vertical angle 1624. Thus the user has to push the token up against gravity to shoot the ball. This gives kinesthetic feedback to the user. Vibration 1622 may be provided by the computer to give additional tactile feedback to the user, such as when the ball's speed is improper for it to go into the net.
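A hedged sketch of how the throw in Example 3 might be detected in software: the token's reference location is sampled over time, and a sudden stop after fast motion releases the ball with the last measured velocity (angular velocity could supply spin in the same way). The class name, thresholds and units are assumptions, not taken from the patent.

```python
import math

class ThrowDetector:
    def __init__(self, stop_speed=20.0, launch_speed=200.0):
        self.prev = None                     # (location, timestamp) of the last sample
        self.last_velocity = (0.0, 0.0)      # velocity between the previous two samples
        self.stop_speed = stop_speed         # px/s below which the token counts as stopped
        self.launch_speed = launch_speed     # px/s the token must reach before stopping

    def update(self, location, timestamp):
        """Feed token reference locations; returns a launch velocity when a throw is detected."""
        launch = None
        if self.prev is not None:
            (px, py), pt = self.prev
            dt = max(timestamp - pt, 1e-6)
            velocity = ((location[0] - px) / dt, (location[1] - py) / dt)
            was_fast = math.hypot(*self.last_velocity) >= self.launch_speed
            now_slow = math.hypot(*velocity) <= self.stop_speed
            if was_fast and now_slow:
                launch = self.last_velocity   # ball continues along the pre-stop velocity
            self.last_velocity = velocity
        self.prev = (location, timestamp)
        return launch
```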
Example 4
Figure 17 is a top view of a viewing token set 1700 with multiple interchangeable aperture inserts 1712, 1714, 1716, 1718. The set may be packaged in a box 1701 with appropriate graphics printed thereupon. A viewing token 1702 may be provided along with an attached figurine 1704. The inserts may be different colors of transparent plastic. The inserts may have alignment areas 1722 for proper positioning and orientation of the viewing token. The token may have a large mass of 100 gm or more so that it will be relatively stable in position when placed on a touch sensitive screen. This will be beneficial for users with hand tremors or other motor skill impairments.
Conclusion
While the disclosure has been described with reference to one or more different exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the disclosure. In addition, many modifications may be made to adapt to a particular situation without departing from the essential scope or teachings thereof. Therefore, it is intended that the disclosure not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention.

Claims

1. A viewing token for interacting with a capacitive touch sensitive screen, said viewing token comprising: a) a platform comprising a top, a bottom, and a viewing aperture; and b) a first capacitive probe comprising a top and a bottom located at a first position on said platform; and c) a second capacitive probe comprising a top and a bottom located at a second position on said platform.
2. The viewing token of claim 1 wherein: a) said first capacitive probe comprises a first vertical electrically conductive shaft that protrudes above said platform, passes through said platform, and extends below said platform; and b) said second capacitive probe comprises a second vertical electrically conductive shaft that protrudes above said platform, passes through said platform, and extends below said platform.
3. The viewing token of claim 2 wherein: a) the bottom of said first capacitive probe is at least partially covered by a first flexible boot comprising an inside bottom; and b) the bottom of said second capacitive probe is covered by a second flexible boot comprising an inside bottom wherein: c) said first and second flexible boots are bowed out such that the bottoms of said first and second capacitive probes will stand off from said bottom insides of said boots when said viewing token rests on said touch sensitive screen under said viewing token's own weight; and d) said first and second flexible boots are flexible enough so that said platform can be pushed down by a user's hand such that the bottoms of said first and second capacitive probes touch said inside bottoms of said boots.
4. The viewing token of claim 2 wherein said platform is electrically insulated from said first capacitive probe such that a user must touch said first capacitive probe in order to have electrical contact with it.
5. The viewing token of claim 2 wherein said platform is electrically conductive and in electrical contact with said first capacitive probe such that a user will have electrical contact with said capacitive probe when said user touches said platform.
6. The viewing token of claim 1 which further comprises a third capacitive probe comprising a top and a bottom located at a third position on said platform.
7. The viewing token of claim 6 wherein said first, second and third positions form a triangle with a first, second and third side and wherein each of said sides has a different length such that: a) a reference position of said viewing token can be defined; b) a reference vector of said viewing token can be defined; and c) said capacitive touch sensitive screen can be adapted to determine a location of said reference position on said touch sensitive screen and an orientation of said reference vector of said viewing token on said touch sensitive screen when said touch sensitive screen detects contact locations for said first, second and third probes.
8. The viewing token of claim 7 wherein: a) said platform comprises a first portion and a second portion; b) said first and second capacitive probes are located on said first portion; c) said third capacitive probe is located on said second portion; and d) said first and second portions are adapted to be reversibly joined together.
9. The viewing token of claim 8 wherein said first and second portions are adapted to be reversibly joined together in a plurality of configurations such that said positions of said first, second and third probes may form different triangles depending upon where said second portion is reversibly joined to said first portion.
10. The viewing token of claim 1 wherein said viewing aperture is an opening.
11. The viewing token of claim 1 wherein said viewing aperture is at least partially covered by a transparent material.
12. The viewing token of claim 1 wherein said viewing aperture has a generally circular shape with a center and wherein said aperture has a diameter of at least 1 cm.
13. The viewing token of claim 12 wherein said first and second capacitive probes are located on about a line of symmetry for said aperture at positions that are about equidistant from said center.
14. A viewing token system comprising: a) a viewing token comprising: i) a platform comprising a viewing aperture; and ii) a first probe located at a first position on said platform, iii) a second probe located at a second position on said platform, and iv) a third probe located at a third position on said platform; and b) a computer comprising a touch sensitive screen wherein said computer is adapted to detect a first location for said first probe on said touch sensitive screen, a second location for said second probe on said touch sensitive screen and a third location for said third probe on said touch sensitive screen when said viewing token is placed on said touch sensitive screen and touched by a user.
15. The viewing token system of claim 14 wherein: a) said first, second, and third positions of said probes on said viewing token form a triangle with a first, second and third side; b) said first, second and third sides are of different lengths; c) a reference position for said token is defined relative to said triangle; d) a reference vector for said token is defined relative to said triangle; e) said computer is adapted to determine a location on said touch sensitive screen for said reference position of said token based on said locations of said probes on said touch sensitive screen; and f) said computer is adapted to determine an orientation on said touch sensitive screen for said reference vector based on said locations of said probes on said touch sensitive screen.
16. The viewing token system of claim 15 wherein said viewing aperture has a centroid and wherein said reference position is at about said centroid.
17. The viewing token system of claim 15 wherein said computer is adapted to display an image on said touch sensitive screen that is visible through said aperture of said token when said location of said reference position of said token is within a location tolerance of a display location on said touch sensitive screen.
18. The viewing token system of claim 17 wherein said image is an animation.
19. The viewing token system of claim 17 wherein said computer is adapted to display said image when said orientation of said reference vector is within an orientation tolerance of a display orientation.
20. The viewing token system of claim 17 wherein said computer is adapted to display said image when said orientation of said reference vector changes from a first display orientation to a second display orientation.
21. The viewing token system of claim 14 wherein said computer is adapted to: a) identify all locations where said touch sensitive screen detects being touched; b) determine if any three of said detected touch locations correspond to said triangle; c) assign said first, second and third probe locations to said detected touch locations that correspond to said triangle.
22. The viewing token system of claim 14 wherein said computer is adapted to identify a second viewing token comprising a second viewing aperture and three probes forming a second triangle wherein said second triangle has at least one side length that is different from the side lengths of said first triangle.
23. The viewing token system of claim 22 wherein said computer is adapted to display a second image on said touch sensitive screen visible through said second viewing aperture based on locations detected for said three probes of said second viewing token.
24. The viewing token system of claim 14 wherein said computer is adapted to detect said touching of said touch sensitive screen by said probes using one or more of capacitive sensing, resistive sensing, optical sensing, RF sensing or acoustic sensing.
PCT/US2017/016131 2016-07-27 2017-02-02 Viewing token for touch senstive screen WO2018022145A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US15/220,783 2016-07-27
US15/220,783 US20170031558A1 (en) 2015-07-28 2016-07-27 Device and System for Interactive Touch Capacitive Movable Platform with One or More Viewing Apertures for Capacitive Touch Screens
US201662413556P 2016-10-27 2016-10-27
US62/413,556 2016-10-27

Publications (2)

Publication Number Publication Date
WO2018022145A1 true WO2018022145A1 (en) 2018-02-01
WO2018022145A8 WO2018022145A8 (en) 2018-02-22

Family

ID=61017255

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/016131 WO2018022145A1 (en) 2016-07-27 2017-02-02 Viewing token for touch senstive screen

Country Status (1)

Country Link
WO (1) WO2018022145A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120249430A1 (en) * 2011-03-31 2012-10-04 Oster David Phillip Multi-Touch Screen Recognition of Interactive Objects, and Application Thereof
US20130082963A1 (en) * 2011-10-03 2013-04-04 Man Fong CHU Waterproof housing for digital devices having capacitive touch screen and its actuator mechanism
US20130093713A1 (en) * 2011-10-17 2013-04-18 Nokia Corporation Method and apparatus for determining the presence of a device for executing operations
US20130302777A1 (en) * 2012-05-14 2013-11-14 Kidtellect Inc. Systems and methods of object recognition within a simulation
US20140285463A1 (en) * 2013-03-19 2014-09-25 Lenovo (Singapore) Pte. Ltd. Touchscreen and token interactions

Also Published As

Publication number Publication date
WO2018022145A8 (en) 2018-02-22

Similar Documents

Publication Publication Date Title
US10725643B2 (en) Touch-display accessory with relayed display plane
Lee Hacking the nintendo wii remote
US9086732B2 (en) Gesture fusion
EP3462297A2 (en) Systems and methods for haptically-enabled conformed and multifaceted displays
US20160299531A1 (en) Cylindrical Computing Device with Flexible Display
CN103092406B (en) The system and method for multiple pressure interaction on touch sensitive surface
US8976501B2 (en) Magnetically movable objects over a display of an electronic device
US20060183545A1 (en) Multi-user touch-responsive entertainment device
JP2005323745A (en) Game machine
EP3209401B1 (en) A toy construction system and a method for a spatial structure to be detected by an electronic device comprising a touch screen
US8992324B2 (en) Position sensing gesture hand attachment
US10481742B2 (en) Multi-phase touch-sensing electronic device
US20120212427A1 (en) Driving device for interacting with touch screen panel assembly and method for interacting same with touch screen panel assembly
US20140168094A1 (en) Tangible alphanumeric interaction on multi-touch digital display
US20150169171A1 (en) No-touch cursor for item selection
CN107427719A (en) Including the toy system for the game element that can be detected by computing device
JP6115746B2 (en) Game device
CN110308832A (en) Display control apparatus and its control method and storage medium
CN108351731A (en) It is inputted including the user of event and the movement detected
EP3638386B1 (en) Board game system and method
KR101530340B1 (en) Motion sensing system for implementing hand position-posture information of user in a three-dimensional virtual space based on a combined motion tracker and ahrs system
WO2018022145A1 (en) Viewing token for touch senstive screen
TWI807372B (en) Virtualized user-interface device
JP6459066B2 (en) GAME SYSTEM, GAME DEVICE, AND GAME PROGRAM
KR20170081889A (en) Method for providing user interface for card game, and server and computer-readable recording media using the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17834892

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17834892

Country of ref document: EP

Kind code of ref document: A1