WO2008029180A1 - An apparatus and method for position-related display magnification - Google Patents

An apparatus and method for position-related display magnification

Info

Publication number
WO2008029180A1
Authority
WO
WIPO (PCT)
Prior art keywords
screen
image
actuating means
stylus
reference plane
Application number
PCT/GB2007/050512
Other languages
French (fr)
Inventor
Santosh Sharan
Graham Dodgson
Original Assignee
Santosh Sharan
Graham Dodgson
Application filed by Santosh Sharan, Graham Dodgson filed Critical Santosh Sharan
Publication of WO2008029180A1 publication Critical patent/WO2008029180A1/en


Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING › G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06F3/0236 Character input methods using selection techniques to select from displayed items
    • G06F3/03545 Pens or stylus
    • G06F3/038 Control and interface arrangements for pointing devices, e.g. drivers or device-embedded control circuitry
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F3/0485 Scrolling or panning
    • G06F2203/04805 Virtual magnifying lens, i.e. window or frame movable on top of displayed information to enlarge it for better reading or selection
    • G06F2203/04806 Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Definitions

  • Fig 2 shows the steps in a possible algorithm, implemented in the processor, which interprets position and velocity data from the pointing device and uses that data to modify the image displayed on the screen, so that the user can select part of a data field to magnify and thereby select the required input data - whether for repeated letter selection during word entry, or to pick a particular file from a large array.
  • In this way a small displayed image of a keyboard becomes easier to use, and even on a small display a file can readily be selected from a large number of files.
  • With this algorithm it is also possible to add predictive text entry and/or spell checking to further reduce errors.
  • the processor in the mobile device uses the algorithm to calculate the stylus velocity from a continual series of 3D positional readings. This velocity vector can then be used to calculate an end point in the virtual data field and so to select an area of the field to display on the screen as shown in Fig 7.
  • the algorithm shown assumes different images of the data field are stored in memory, but it is also possible to derive images of the data field at the appropriate magnification through computer graphics processing. The actual implementation will depend on the power of the processor and the memory available in the mobile device.
  • Figure 2 is illustrative of the following steps:
    1. The computer memory stores a series of images for (a) different parts of a data field and (b) different magnification ranges.
    2. The computer has sensors to accurately and rapidly calculate the position of the stylus in three-dimensional space.
    3. Start.
    4. A standard image of the data field appears on the computer screen - either the whole data field or a subset thereof.
    5. The computer calculates the current position of the stylus.
    6. The user starts to move the stylus towards a part of the virtual data field of interest - this part may already be displayed, or may lie outside the currently displayed area where the user knows the required data is to be found, e.g. up and to the left.
    7. The computer updates the current position of the stylus several times a second and calculates the stylus velocity.
    8. The directional information is used to calculate trigonometrically the likely end point of the stylus on the data field.
    9. The end point is interpreted as a particular section of the data field.
    10. The computer uses the above information to retrieve from memory the appropriate image of this part of the data field.
    11. The magnification level is dependent on the distance of the stylus from the screen.
    12. The continually refreshed data field images are displayed as a smooth animation through graphical interpolation software or hardware.
    13. As the stylus approaches very close to the screen, the display shows a highly magnified image, e.g. of a QWERTY keyboard.
    14. The user can touch the screen with the pointing device, or click a button on the stylus, to select either a letter or a file as appropriate.
    15. The computer registers the input and displays the character entered in a text edit box.
    16. As the user withdraws the stylus, the computer updates its position and accordingly displays the zoomed-out image of the relevant part of the data field.
    17. As the stylus is moved further from the screen, more items from the data field are displayed, and the updated display images appear as a smooth "zoom out" animation.
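  • Read as pseudocode, the steps above form a sampling loop. The following Python sketch is one hypothetical rendering of that loop; the tracker, tile_store and screen interfaces, the sampling rate and the magnification constant are illustrative assumptions, not part of the original text (extrapolate_to_plane is sketched after Figures 7 to 9 below).

```python
import time

def magnification_level(z_mm, k=100.0, z_min_mm=10.0):
    """Step 11: magnification depends inversely on the stylus-screen
    distance; k and z_min_mm are assumed tuning values."""
    return k / max(z_mm, z_min_mm)

def run_interface(tracker, tile_store, screen, sample_hz=20):
    """Hypothetical main loop for the Figure 2 algorithm."""
    prev = tracker.position()                    # (x, y, z) sample in mm
    while True:
        time.sleep(1.0 / sample_hz)              # step 7: several samples a second
        cur = tracker.position()
        # Steps 8-9: extend the line through the last two samples to the
        # screen plane to predict where the stylus is heading.
        target = extrapolate_to_plane(prev, cur)
        if target is not None:
            level = magnification_level(cur[2])          # step 11
            screen.show(tile_store.tile(target, level))  # steps 10 and 12
        if tracker.tapped():                     # steps 14-15: tap or button click
            screen.register_selection()
        prev = cur
```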
  • Fig 3 shows a typical embodiment of the invention.
  • This figure shows a mobile phone or PDA screen 16 with a small QWERTY keyboard displayed 22.
  • the user would typically use a stylus 12 to select letters from the keyboard 22 for text entry.
  • The area of the keyboard that the user is moving the stylus towards is magnified, e.g. if the user is moving the stylus towards the top left of the keyboard, then QWE, ASD and ZX might become the only letters visible.
  • the user would then tap the appropriate key with the stylus to select the required letter.
  • The display would shrink to a size dependent on the distance the stylus moves from the screen, and at a rate depending on the speed of withdrawal.
  • The main text entry mechanisms for mobile phones depend on predictive matching. Predictive texting does not work well for words that are not in the dictionary, or for multilingual users.
  • The proposed interface gives the power of a full QWERTY keyboard on a small-screen mobile phone, while allowing the user to enter any word irrespective of whether it is in the dictionary.
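  • The magnify-towards-the-target behaviour can be approximated by culling keys that lie outside a neighbourhood of the predicted landing point, with the neighbourhood shrinking as the stylus approaches. The sketch below assumes a simple grid of key centres and an arbitrary falloff; neither is prescribed by the original text.

```python
# Key centres on an assumed grid; coordinates and falloff are illustrative.
ROWS = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]
LAYOUT = {ch: (col, row) for row, line in enumerate(ROWS)
          for col, ch in enumerate(line)}

def visible_keys(target_col, target_row, z_mm, z_max_mm=50.0):
    """Keys to draw at the current magnification: the closer the stylus
    (small z), the tighter the neighbourhood kept around the predicted
    target, so heading for the top left eventually leaves only a handful
    of letters (roughly Q, W, E / A, S, D / Z, X, C) visible."""
    radius = 2.0 + 6.0 * min(z_mm, z_max_mm) / z_max_mm
    return [ch for ch, (c, r) in LAYOUT.items()
            if abs(c - target_col) <= radius and abs(r - target_row) <= radius]
```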
  • Fig. 4 shows an application of the invention where a computer screen 16 on a PDA or mobile phone 10b is showing many more icons 26 than is currently regarded as advisable. Even a small screen might have 100 or more icons indicating e.g. music files, picture files, emails, contact details or a mixture of data and program files. It does not matter that the user may not be able to identify individual icons. As the user moves the stylus 12 towards the screen in the direction of the dotted arrows 24, the display would again magnify the area towards which the stylus was heading - in this case the black picture files 28 - and in so doing the icons in the area of interest would increase in size to become legible.
  • The software could incorporate aids and options to allow users to group files and icons, e.g. by type, colour or position on the desktop.
  • The current invention allows a greatly increased number of files to be stored at, for example, "desktop" level. Imagine, for example, that word processing files, as well as audio and photo files, were all stored on a single virtual desktop.
  • the current invention allows the user to concentrate a greater number of files on the desktop and, instead of drilling down through folders to the desired destination file, the user merely moves the actuating device or stylus in the direction of the screen, causing the screen to be magnified accordingly. Icons which may be too small, even to be legible, can become legible upon magnification.
  • files of a similar type may be given icons of a similar or identical colour, which may be grouped on a particular region of the virtual desktop.
  • Other file types, e.g. word processing files, can be grouped together in another region of the desktop and in a different colour, such that even if too small to be legible, the file type is recognisable simply from the colour and position of the group. Movement of the actuating device causes magnification.
  • the area of interest of the user is magnified by movement of the actuating device toward that part of the screen.
  • the movement of the stylus causes not only magnification of the image of the desktop per se, but also magnification around the area selected by the user according to the movements of the stylus.
  • the conversion means causes magnification and selective realignment or re-centering of the image - translation as well as magnification.
  • The position of the actuating means in a plane parallel to the reference plane identifies the position of the area of interest in the same way as a cursor moves in response to the movements of a mouse. However, when this is combined with the separation-dependent magnification, the effect is to relocate the centre of the magnification, about which the rest of the enlarged image is rebuilt, at the centre of the image.
  • the invention provides not only a basis for flat hierarchies expressed in icons, but is also suitable for any information-rich high resolution image which reveals or exposes further information on magnification.
  • Geographical maps, which reveal more detail on zooming in, would benefit from the approach set out in this description.
  • Architects' drawings, which may be required to set out an overview in a single image, may also reveal finer and finer detail as the user magnifies the image.
  • any "flat" image or hierarchy which is required to provide a unified overview of a system, but where, nevertheless, more and more information can be unveiled on deeper and deeper magnification, would benefit from the current invention.
  • the most convenient way of controlling magnification level is to directly couple the magnification level to a third dimension of the measured position of the actuating device.
  • In Fig 5 another embodiment of this invention is shown, where the pointing device 12 is embedded into the mobile device 10 itself, such that movement of the device 10 changes the display so that the device appears to be moving over a virtual data field.
  • the PDA or mobile phone or other small computer device contains the requisite position and direction sensors e.g. accelerometers and mercury switches 30 to enable the processor to track the motion and orientation of the device.
  • the data field of interest can be a virtual array of keyboards all connected to each other 32 such that when the user is moving the screen 16 different areas of the keyboard become visible.
  • Letter selection can be made by using a button as a click selection device when an aiming point or cursor displayed on the screen 16 is over a letter of interest.
  • The screen can be seen as a window onto a never-ending array of keyboards.
  • Z axis movement of the phone screen will allow the keyboard image on the display to be zoomed in and out, while X and Y axis movement of the phone screen will enable the keyboard image to change in the X and Y directions.
  • When the user gets to, e.g., the right-hand edge of the keyboard, say the letter "P", moving further to the right brings up another keyboard, starting with its left-hand edge, e.g. "Q", "A", "Z".
  • Likewise, if the user moves up or down they will always move to another keyboard.
  • Such an entry mechanism can also be single handed which may be of advantage in certain situations.
  • the data field can also be a large virtual 2D array of different types of files a subset of which is displayed at any time as in Fig 4.
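  • The "window onto a never-ending array of keyboards" can be modelled as a viewport whose offset wraps modulo the keyboard size, with z movement scaling the zoom. A minimal sketch under those assumptions (the grid, gain and units are illustrative, not specified in the original text):

```python
ROWS = ["QWERTYUIOP", "ASDFGHJKL;", "ZXCVBNM,./"]   # assumed 10x3 key grid

def key_under_cursor(offset_cols, offset_rows):
    """Map the device's accumulated x,y movement to a key in the endless
    tiled array: moving right past 'P' wraps round to the 'Q' column of
    the next keyboard, and likewise vertically."""
    col = int(offset_cols) % len(ROWS[0])
    row = int(offset_rows) % len(ROWS)
    return ROWS[row][col]

def zoom_after_z_move(current_zoom, dz_mm, gain=0.05, min_zoom=1.0):
    """Z-axis movement of the device zooms the keyboard image in and out;
    moving towards the virtual data field (dz_mm < 0) increases the zoom."""
    return max(min_zoom, current_zoom - gain * dz_mm)
```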
  • Fig 6 shows a possible way in which stylus orientation can be used in selecting a particular letter on a displayed keyboard.
  • a mobile phone or PDA 10 is displaying a zoomed image of part of a QWERTY keyboard 22 which has already been selected by movement of the stylus 12 towards the required region of the QWERTY keyboard.
  • the letter "S" 36 is highlighted. This is actually not the letter of choice so the user tilts the stylus slightly to the left to change the selected letter to "A" 38.
  • Orientation information of the stylus can also be used in other ways, e.g. to move the selected image across a data field as in Fig 5, for example by tilting the stylus left, right, up or down to move the display in the required direction.
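  • Figure 6's tilt correction can be expressed as a small index shift on the highlighted key. A sketch, assuming the tracker reports tilt in degrees and using an arbitrary ±10° dead zone (both assumed values):

```python
TILT_DEAD_ZONE_DEG = 10.0   # assumed threshold below which tilt is ignored

def adjust_selection(row, index, tilt_x_deg):
    """Shift the highlighted key left or right within its row in response
    to stylus tilt, as in Figure 6 (tilting left moves 'S' to 'A')."""
    if tilt_x_deg < -TILT_DEAD_ZONE_DEG and index > 0:
        return index - 1            # tilt left: move one key to the left
    if tilt_x_deg > TILT_DEAD_ZONE_DEG and index < len(row) - 1:
        return index + 1            # tilt right: move one key to the right
    return index

# e.g. adjust_selection("ASDFGHJKL", 1, -15.0) moves the highlight from 'S' to 'A'
```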
  • Figures 7 to 9 show a possible method of calculating the direction of the pointing device 12, assuming it follows a straight path - which will effectively be the case provided the stylus vector is recalculated several times a second.
  • This directional data is required to enable the display of the required part of the data field 40. Part or all of the data field may be shown on the device display 16.
  • the stylus 12 starts at position 42 and moves to the next position 44 some short time later. At the start position the stylus 12 will be a given distance 58 along the z axis from the plane of the display 16.
  • a vector can be calculated from the two positions 42 and 44 which has an end point 46 in the plane of the display 16.
  • All positions in Fig 7 are referred to the orthogonal directions x (left/right), y (up/down) and z (in/out).
  • the stylus 12 contains technology as previously explained allowing the computing device to monitor its position relative to a given point on the display or a given point on the data field 40. It is also assumed that the computing device is continually monitoring the position of the display with reference to the data field. Thus at the start position 42 the positions along the x, y and z directions are known and these are refreshed as the stylus moves to position 44 a short time later.
  • the distance moved by the stylus 12 from the start position 42 to the second position 44 along the x direction is shown at 50 in figure 8.
  • the corresponding distance moved by the stylus in the y direction is also shown at 52 in figure 9.
  • the distance of the stylus 12 at the start position 42 from the plane of the screen 16 in the z direction is shown as 58 in Fig 8.
  • the distance moved between these two points in the z direction 54 combined with the x and y translational movement can be used to readily calculate the end position of the stylus on the data field 40 if it continues to follow that particular vector.
  • The point towards which the stylus 12 is aiming in the data field 40 can be calculated at regular intervals. In most cases this point will change as the user moves the stylus, since they will be making corrections as they get closer to the part of the data field they are interested in.
  • the computing device would have sufficient data to selectively display the appropriate area of the data field at the required magnification at any particular time.
  • The rate of change of various parameters - such as magnification relative to distance from the screen, and the rate of translational movement of the display compared to stylus movement - may be tuned to suit the user.
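  • The calculation described for Figures 7 to 9 reduces to extending the straight line through two successive samples until it meets the display plane. Below is a minimal sketch, assuming both samples share a coordinate frame with the screen at z = 0; this is the extrapolate_to_plane helper assumed in the Figure 2 sketch above.

```python
def extrapolate_to_plane(p1, p2):
    """Given two successive stylus samples p1 = (x1, y1, z1) and
    p2 = (x2, y2, z2), return the point where the straight line through
    them meets the display plane z = 0 (end point 46 in Figures 7 to 9),
    or None if the stylus is not moving towards the plane."""
    x1, y1, z1 = p1
    x2, y2, z2 = p2
    dz = z1 - z2                     # distance 54: movement along z
    if dz <= 0:
        return None                  # moving away from (or parallel to) the plane
    t = z1 / dz                      # scale factor to reach z = 0 from p1
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# e.g. samples at (0, 0, 40) then (1, 2, 30) aim at the point (4.0, 8.0)
```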

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method and apparatus for selecting characters or symbols on a keyboard, or file or program icons, displayed on a computer display using a pointing device. The pointing device is tracked by the computer and, depending on its 3-dimensional location and direction of movement, different areas of the display are shifted and magnified to enable easier selection of the correct key on a keyboard, or of a file from a large array of file icons, even on very small displays.

Description

AN APPARATUS AND METHOD FOR POSITION-RELATED DISPLAY MAGNIFICATION
This invention relates to an apparatus and method for accessing, displaying and using data or electronic functions associated with icons or regions of a screen image. User interfaces for televisions, computer systems, electronic storage systems etc often combine a processor, screen-based output and a remote input device.
The classical user interface in which a two-dimensional presentation is used to access a multi-level storage hierarchy can be, for many applications and for many users, unsuitable for efficient and rapid access to data or applications. The operator enters at one level in the hierarchy and may be obliged to "drill down" multiple levels before accessing the storage location containing the destination data. This can be inconvenient and time-consuming for the user.
A further disadvantage is that movement between two distinct storage locations may require the operator to drill back up to a higher level which serves as common parent level to both the start storage location and the destination location. Again, this is a laborious and inefficient procedure.
Moreover, in navigating within pyramidal hierarchies the operator may lose any overview he has over the data storage landscape. In drilling up or down over several hierarchical levels, the operator loses sight of the layout of the hierarchy and can easily drill along an incorrect branch of the hierarchy, losing further time.
One response might be to produce relatively flat storage structures, in which the hierarchy is composed of fewer levels. Each level is equivalent to several branches of the pyramidal-type structure and, correspondingly, assembles larger numbers of data files or applications than in the equivalent pyramidal structure.
A single level may provide a greatly increased amount of information. For example, a single image, if capable of magnification, may contain a vast store of information, although it is all presented on a single level.
This storage structure presents its own difficulties in terms of user overview and in terms of data access. If all destination files are stored at more or less the same level in a flat structure, the operator may have difficulty finding the data in a structure which, at individual levels, does not categorise to the same degree as the pyramidal hierarchy. In the flat structure, the user would be obliged simply to trawl through a large number of different files in order to identify the file of interest.
The problem with pyramidal data hierarchies may be illustrated by consideration of paper geographical maps: a map is essentially a reduced-scale two-dimensional representation of the geographical features of a given area. If a map of Europe represents the top-level data, the user "drills down" by the use of successive maps of increasing scale, first at national level, then perhaps regional level, then at the level of a city and finally at the level of a street plan. By this point the user may have passed through five levels and therefore requires five paper maps.
Continuing the map analogy, the drilling down may be eased by the use of a computer-stored electronic map represented by the computer on a screen, such as those used in GPS navigation systems: different levels are accessed perhaps by a scroll bar or menu-based instruction calling up increased levels of magnification. Typically, the user moves a mouse and thereby moves the cursor within the screen representation, manipulating a screen-based user interface. If the operator intends to successively access street-level data in two distant locations, such as Oxford Street in London and Leopoldstrasse in Munich, he would scale up from the Oxford Street representation to a higher level which covers both the first and the second streets, e.g. a map of Europe, and then drill down, via a map of Germany, to the street-level map of Munich containing Leopoldstrasse. By moving the cursor and appropriately clicking on relevant options, he would drill up and down. However, the use of such options, whether provided in the form of a menu, a scroll bar or a screen-based interface, requires further input from the operator and, to this extent, is still time-consuming and inconvenient.
In terms of an electronic map, the equivalent flat structure might include only one or two storage levels, e.g. a map of Europe containing continental-level data and a street-level map of Europe. Without knowledge of both London and Munich, the operator would have difficulty navigating between Oxford Street and Leopoldstrasse. Moving solely within the street-level map, it would be extremely tedious to translate from London, through southern England, Belgium, southern Germany and finally to Bavaria. Although the problem of accessing different levels of information has been described above principally in relation to geographical maps and icon-based storage representations, the reader will appreciate that, in view of the drilling requirement and/or the need to maintain user overview, both pyramidal and flat structures can be inconvenient to the operator.
It is therefore an object of the current invention to eliminate or minimise the deficiencies and shortcomings of prior art systems as described above.
The current invention provides a user interface which uses not only the position of a cursor reflecting a two dimensional position of a hand-held device, such as a mouse, within some reference plane, but also the position of the hand-held device out of that plane. The position of the hand-held device is measured in three dimensions, rather than in two dimensions as for a conventional mouse, the third dimension being a distance between the reference plane and the hand-held device.
By adding a third dimension as a further component of the user interface, the user has greatly increased control of the image or representation provided on the computer screen. If, for example, magnification of the image is directly responsive to the position of the hand-held device in the third dimension, for example as a function of the separation of the hand-held device from a screen, a worktop or another reference plane, the laborious aspects of drilling down and up by zoom options on a graphical interface can be eliminated. The user may zoom in on an image with greater ease and convenience than with any menu-based or other graphical zoom option.
As a result of the improved magnification facility, flat-based data storage structures can have greatly enhanced accessibility, with drilling down and up to locate particular destination data considerably easier. For example, if one considers again a pyramidal storage structure containing various levels of data folders and files, many of the intermediate levels through which the user has to drill, ie the intermediate folders, which simply serve to categorise or classify files held at a lower level in the hierarchy, become redundant and can be eliminated. If all destination files are held on the same level, or at least a reduced number of levels, the user can simply magnify the image representing this (these) level(s) of data by a simple hand movement of the hand-held device within the third dimension. In the example given earlier, the manipulation of the representation of London and Munich streets becomes much easier, because magnification is achieved merely by movement toward the reference plane, for example toward the screen projecting the image of Europe at any level. The user can zoom in or out of a geographical map in an extremely intuitive manner - moving the hand-held device toward the screen causes the image to magnify and moving the hand-held device away from the screen causes the image to be reduced. Zoom in and zoom out are achieved merely by movement of the hand to or away from the screen.
While position determination sensors and tracking technologies have existed for a long time, they have not been applied to electronic devices efficiently enough to solve the common problems that users face in inputting commands or text on small devices, and in accessing and viewing output data, especially on small display screens.
Today's mobile devices are more powerful than the desktop computers of a few years ago yet they still lack user interfaces efficient enough to take advantage of this computer power.
The basic user interface problems on small electronic devices can be divided into the input interface and the output interface:
Input Interface
Interacting with small screens can be awkward for the computer user. Mobile phones and PDAs typically have small screens. One of the most popular approaches towards text entry in these computers is a virtual keyboard that is used with a stylus. However, the problem with this approach is that the error rate associated with the text entry is inversely proportional to the size of the individual keys. This patent application describes an innovative technology to enter text where the keyboard representation can be extremely small and yet where the text entry mechanism can allow accurate selection of the correct letter. The proposed invention uses a compressed QWERTY (or other text layout) keyboard on a small screen device, allowing the user to enter text using a stylus by zooming in and out on parts of the keyboard.
The proposed solution makes use of user input technologies where the computer can process data from a stylus such that hardware or software applications running in the computer can accurately identify the 3 dimensional position and direction of movement of the stylus with respect to the computer screen. Existing tracking technologies as previously explained can be utilized by the invention described herein to allow a small computer device such as a mobile phone to calculate the position of a stylus and to process this data to manipulate the size and position of displayed information on the mobile phone or PDA screen to improve text input and to improve the user interface.
Whilst there have been a number of attempts to improve the entry of text into small form factor computer devices, e.g. the popular T9 or iTAP systems for entering text using the numeric keypad on mobile phones, a regular QWERTY keyboard remains the text input of choice. There have been attempts to bring full-size QWERTY keyboards to PDAs; however, this has not been done for smaller-screen mobile phones, because the high error rate in selecting the correct key with a stylus makes a full-size QWERTY keyboard impractical on such a small screen.
Output Interface
Apart from text entry, it is becoming clear that the limitations of display size are hindering user interaction with large numbers of files and applications. Mobile phones and PDAs are becoming more capable and are able to run additional programs.
Users are typically storing music, pictures and even video files on such devices.
Mobile gaming is also becoming more popular and, increasingly, there are software applications targeted at mobile phone users including productivity tasks like diary management, web browsers, office software etc. The proposed invention will allow the user to magnify a selected part of the screen so that it fills the whole screen to better see the displayed data in that area, by moving a pointing device towards the region of interest.
Currently, the only way of querying a large dataset, such as contacts or a large number of emails, documents or pictures, is via a tabular format of textual data - typically a hierarchical file structure. To access a particular file, e.g. a document or an image, the user must select the appropriate folder and then navigate down to the particular file of interest, often going through several layers of data. If the computer screen were large enough, it would probably be more efficient in many cases to display all the files of interest in a large two-dimensional array, perhaps coloured and/or grouped by type, e.g. red for documents, green for images. Human memory is very good with visual information, and it is straightforward to spot an appropriate file amongst even a large number of other files when displayed thus on a large computer screen.
This format will help a user get to the right file or email or contact immediately without having to search through a long list.
The current invention seeks to improve the prior art systems as described above.
SUMMARY OF THE INVENTION
The current invention provides a user interface which monitors the three dimensional position of an actuating means or hand-held device, or monitors the position of the hand/finger of the user, with respect to a predetermined reference plane. Movements parallel to the plane are converted into movement of a cursor or other position indicator in the plane of the screen, and movements in the third dimension are converted into a signal determining the magnification of the image.
In a main embodiment of the invention an apparatus for processing an image is provided comprising: an actuating means, a means for measuring the position of the actuating means in 3 orthogonal dimensions (x,y,z), dimensions x,y forming a plane parallel to a reference plane and dimension z being perpendicular to said x,y plane, a screen for displaying a two dimensional image comprising a cursor which represents the position of the actuating means, a conversion means for receiving the output of the measuring means and converting the received x,y position into a corresponding cursor location on the screen, wherein the position of the cursor in the image is representative of the position of the actuating means in the x,y plane, means for computing the separation of the actuating means and the reference plane, and a means for magnifying at least a portion of the image, wherein the degree of magnification is inversely proportional to said separation.
In a second aspect of the present invention, there is provided a method of processing an image comprising: continuously measuring the position of an actuating means in 3 orthogonal dimensions (x,y,z), dimensions x,y forming a plane parallel to a reference plane and dimension z being perpendicular to said x,y plane; displaying a two dimensional image comprising a cursor which represents the position of the actuating means; receiving the output of the measuring means and converting the received x,y position into a corresponding cursor location on the screen, wherein the position of the cursor in the image is representative of the position of the actuating means in the x,y plane; computing the separation of the actuating means and the reference plane; and magnifying at least a portion of the image, wherein the degree of magnification is inversely proportional to said separation.
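The claims above specify only that the cursor follows the measured x,y position and that the degree of magnification is inversely proportional to the z-separation; they do not prescribe an implementation. The following is a minimal Python sketch of such a conversion means, in which the units, the plane dimensions and the constant K are illustrative assumptions:

```python
K = 100.0        # magnification constant of proportionality (assumed tuning value)
Z_MIN_MM = 10.0  # clamp so the magnification stays finite near the plane

def cursor_position(x_mm, y_mm, plane_w_mm, plane_h_mm, screen_w_px, screen_h_px):
    """Convert the measured x,y position over the reference plane into a
    corresponding cursor location on the screen."""
    return (int(x_mm / plane_w_mm * screen_w_px),
            int(y_mm / plane_h_mm * screen_h_px))

def magnification(z_mm):
    """Degree of magnification, inversely proportional to the separation
    between the actuating means and the reference plane."""
    return K / max(z_mm, Z_MIN_MM)
```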
In a still further aspect of the present invention there is provided a computer program product comprising a readable storage medium for storing computer readable instructions for implementing the method described herein.
This invention makes use of advances in stylus and screen technologies by following not only translational movement of the stylus in the X (left and right) and Y (up and down) directions but also the Z (in and out) direction. This information is used to change the displayed image on the computing device such that it is easier to see and select text or data. The invention can also make use of the orientation of the stylus where this information is available.
The invention improves user interaction with small-screened computer devices, particularly PDAs and mobile phones. The invention proposes a new way to enter data or text using a three dimensional tracker, and shows how this tracker can aid in the representation and selection of both text and large datasets on mobile devices.
The invention described herein requires a three dimensional tracking technology. There are several approaches to tracking the position and direction of pointing devices. The most well known and widely applied is a technology that uses electromagnetic resonance. Electromagnetic resonance positioning technology was developed by Wacom over 14 years ago and is commonly applied in pen-based computing, where the user moves a stylus to select data on a computer screen. A significant advantage of this technology is that it requires no batteries or magnets to establish communication between the stylus and the computing device, such that the computer recognizes the position of the stylus and the relevant software in the computer can respond accordingly, e.g. by highlighting an icon on the display. Tablet PCs use a version of this technology from Wacom.
There are four elements of data that can be derived from this technology:
• x-coordinate (horizontal translational movement)
• y-coordinate (vertical translational movement)
• pressure data (how hard the stylus is pressing the computer screen or a touch pad)
• tilt data (orientation of stylus in space)
Prior art interfaces do not make use of the distance from the actuating device to the screen.
According to the invention positioning technology could relay continuous information with regard to the Z-coordinates of the actuating device or stylus.
There are several approaches capable of conveying the z location of a pointing device to a computer or portable electronic device. Possible technologies include infrared, optical or RF tracking with appropriate sensors on the computer to allow triangulation of the position of the stylus. Many of these alternative technologies require magnets or power sources in the pointing device. It is likely that these positioning technologies have not yet found their way into mobile devices because of a lack of suitable mobile interface technologies that can exploit full 3D positional and velocity information.
DESCRIPTION OF THE DRAWINGS
The present invention is described with reference to the attached drawings, in which:
Figure 1 depicts the basic elements of the invention including a displayed image of a keyboard, the pointing device, pointing device tracking system and the processor which interprets the 3-dimensional positional data from the tracking device and alters the data displayed on the screen accordingly.
Figure 2 depicts an example of the type of algorithm the processor could utilise to interpret pointer device position and movement and to translate this into changes in display magnification. The algorithm also explains how a user can select items from a data field, using pointer movement to select the area of interest in the data field and to magnify the image to allow selection of the appropriate data, e.g. entering text or selecting a file.
Figure 3 depicts the invention implemented so as to improve text entry using a pointing device in conjunction with a displayed QWERTY keyboard on a small mobile phone screen or similar device.
Figure 4 depicts how the invention can be used to select from numerous data or program files displayed on a small screen.
Figure 5 illustrates that the screen itself can be used as the pointing device. In this example the screen acts as a window into a 2D array of keyboards, allowing text entry without a stylus or other external pointing device.
Figure 6 shows how tilting the stylus a little to the left can cause the chosen letter to move to the left, so that the letter selected becomes "A".
Figures 7 to 9 show the trigonometry involved in continually calculating the end point of the stylus as it moves towards and away from the screen. This is the data required by the computing device to allow the display of the appropriate area of the data field at the relevant magnification.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The key to the invention is to use a pointing device to dynamically alter the position and magnification of the displayed data on a computer screen in a particular way, depending on the 3D position and/or velocity of the pointing device. The invention utilises one or more of the existing movement and position sensing technologies for tracking the position, direction of movement, and speed of the pointing device, e.g. electromagnetic, optical, ultrasonic or infra-red. Any suitable position-determining technology may be used.
Further embodiments require mechanisms for detecting the 3D orientation of the pointing device such as small gyroscopes. Extra utility might also be added by having additional user buttons or controls on the pointing device such as a scroll wheel or select button.
Fig 1 shows the main elements of the invention: a user interface 10 and a pointing device 12, usually a stylus, containing electromagnetic, acoustic or optical position sensors 14, which is used to interact with the data on the display 16. As the user moves the stylus left, right, up, down, or towards or away from the screen, position signals are decoded by the tracking receiver 18 and passed to the processor 20 for interpretation by hardware- or software-based processing. The tracking receiver 18 can receive electromagnetic, optical, acoustic or any other signals as necessary, depending on the type of position and velocity sensors built into the pointing device 12. Alternatively, the stylus can be passive and utilise electromagnetic resonance technology, or can be tracked by active electromagnetic or ultrasonic technology in the computing device. From the direction and speed of the stylus movement towards the display, the processor calculates its likely end point and changes the displayed data so that the user sees a specific area of the display being magnified. Similarly, as the stylus is moved away from the display, the magnified image reduces to the original size at a rate depending on the stylus velocity.
The pointing device does not have to be a stylus and can even be embedded into the screen itself.
It is also envisaged that there may be no stylus or actuating device as such, with the tracking mechanism directly tracking the movement of the operator's hand or finger. In such a case the position-determining system may comprise an optical system, such as a camera arrangement, capable of tracking the position of the user's hand or even a finger and outputting 3D position information accordingly.
Where this description refers to position in the z-dimension or third dimension, this is intended not only to refer to position/movement with respect to a monitor or screen, but to position/movement with respect to any predetermined reference plane. The plane of reference may indeed be that of the screen, but it may also be that of the operator's desktop or worksurface, a demonstration/presentation board, or any other suitable plane of reference. Movements toward/away from the reference plane, i.e. those which result in a change of separation between the actuating device and the reference plane, cause a corresponding output which controls the level of magnification of the image produced by the screen.
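As a minimal sketch of this separation-to-magnification coupling (Python; the constants and function name are assumptions for illustration - the description specifies only that magnification grows as the separation shrinks):

    def magnification(separation_mm: float, gain: float = 200.0,
                      min_sep: float = 5.0, max_zoom: float = 8.0) -> float:
        """Map the separation between the actuating means and the
        reference plane to a zoom factor that grows as the separation
        shrinks, clamped so the image never drops below 1x or exceeds
        max_zoom. All numeric parameters are illustrative assumptions."""
        sep = max(separation_mm, min_sep)   # avoid division by zero at contact
        return min(max(gain / sep, 1.0), max_zoom)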
Fig 2 shows the steps in a possible algorithm, to be implemented in the processor, that interprets position and velocity data from the pointing device and uses the data to modify the image displayed on the screen so that the user can select part of a data field to magnify, enabling selection of the required input data - whether for repeated letter selection for word entry, or to select a particular file from a large array. In such a manner a small displayed image of a keyboard becomes easier to use, and the selection of a file from a large number of files, even on a small display, becomes easy. When using this algorithm to enter text, it is also possible to add predictive text entry and/or spell checking to further reduce errors.
The processor in the mobile device uses the algorithm to calculate the stylus velocity from a continual series of 3D positional readings. This velocity vector can then be used to calculate an end point in the virtual data field and so to select an area of the field to display on the screen, as shown in Fig 7. The algorithm shown assumes that different images of the data field are stored in memory, but it is also possible to derive images of the data field at the appropriate magnification through computer graphics processing. The actual implementation will depend on the power of the processor and the memory available in the mobile device.
In more detail, Figure 2 is illustrative of the following steps (a sketch of the core loop follows the list):
1. The computer memory stores a series of images for a) different parts of a data field and b) different magnification ranges.
2. The computer has sensors to accurately and rapidly calculate the position of the stylus in three dimensional space.
3. Start.
4. A standard image of the data field appears on the computer screen - either the whole data field or a subset thereof.
5. The computer calculates the current position of the stylus.
6. The user starts to move the stylus towards a part of the virtual data field of interest - this part of the field may be displayed, or may be a part of the virtual data field that is outside the currently displayed area where the user is aware there is required data to be selected, e.g. up and to the left.
7. The computer updates the current position of the stylus several times a second and calculates the stylus velocity.
8. The directional information is used to calculate the likely end point of the stylus on the data field trigonometrically.
9. The end point is interpreted as a particular section of the data field.
10. The computer utilises the above information to retrieve from memory the appropriate image of this part of the data field. The magnification level is dependent on the distance of the stylus from the screen.
11. The continually refreshed data field images are displayed as a smooth animation through graphical interpolation software or hardware.
12. The stylus approaches very close to the screen and the display then shows a highly magnified image, e.g. of a QWERTY keyboard showing, say, A, S, D, Z, X, or say 6 out of hundreds of files in the data field.
13. The user can use the pointing device to touch the screen or click a button on the stylus to select either a letter or a file as appropriate.
14. The computer registers the input and displays the character entered in a text edit box.
15. To select further items from the data field not in view, the user moves the stylus away from the screen.
16. The computer updates the position of the stylus and accordingly displays the zoomed-out image of the relevant part of the data field.
17. As the stylus is moved away from the screen more items from the data field will be displayed, and the updated display images will appear as a smooth "zoom out" animation.
18. The user performs steps 6 to 17 above repeatedly to either enter text or to select multiple files.
19. End.
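Purely as an illustrative sketch of steps 5 to 11 (Python; the sensor, display and data-field objects and their method names are assumptions, not taken from the patent; magnification is the separation-to-zoom mapping sketched earlier, and project_to_plane is the similar-triangles helper sketched after Figures 7 to 9 below):

    import time

    def tracking_loop(sensor, display, data_field, hz: int = 30):
        """Hypothetical core loop: sample the stylus position, project
        its motion onto the screen plane, and redraw the region of the
        data field around the projected end point at a zoom level set
        by the stylus-to-screen distance."""
        prev = sensor.read_position()            # (x, y, z) sample (step 5)
        while not sensor.touching_screen():      # a touch (step 13) exits the loop
            time.sleep(1.0 / hz)                 # several updates per second (step 7)
            cur = sensor.read_position()
            end_point = project_to_plane(prev, cur)   # steps 8-9
            zoom = magnification(cur[2])              # step 10: zoom from z distance
            if end_point is not None:
                display.show(data_field.region_around(end_point, zoom))  # step 11
            prev = cur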
Fig 3 shows a typical embodiment of the invention. This figure shows a mobile phone or PDA screen 16 with a small QWERTY keyboard 22 displayed. The user would typically use a stylus 12 to select letters from the keyboard 22 for text entry. As the stylus approaches the screen in a particular direction, indicated by the dotted arrow 24, the area of the keyboard that the user is moving the stylus towards is magnified, e.g. if the user is moving the stylus to the top left of the keyboard then QWE, ASD and ZX might become the only letters visible. The user would then tap the appropriate key with the stylus to select the required letter. Similarly, as the user withdraws the stylus from the screen, the display would shrink to a size dependent on the distance the stylus moves from the screen and at a rate depending on the speed of withdrawal.
The main text entry mechanisms for mobile phones depend on predictive matching. Predictive texting does not work well for words that are not available in the dictionary, or for multi-lingual users. The proposed interface gives the power of a full QWERTY keyboard on a small-screen mobile phone while allowing the user to enter any word, irrespective of whether it is in the dictionary.
Fig. 4 shows an application of the invention where a computer screen 16 on a PDA or mobile phone 10b is showing many more icons 26 than is currently regarded as advisable. Even a small screen might have 100 or more icons indicating e.g. music files, picture files, emails, contact details or a mixture of data and program files. It does not matter that the user may not be able to identify individual icons. As the user moves the stylus 12 towards the screen in the direction of the dotted arrows 24, the display would again magnify the area towards which the stylus was heading - in this case the black picture files 28 - and in so doing the icons in the area of interest would increase in size and become legible. The software could incorporate aids and options to allow users to group files and icons, e.g. in different coloured areas, so that even when individual icons are of a size that cannot be read it is still obvious what type of data they are, e.g. application files could be yellow, music files red, office files green, picture files black etc. Alternatively, the user could allocate different areas of the screen to different types of file or data. People have an inherent aptitude for spatial memory and will soon get used to the idea that e.g. picture files are at the top left of the screen, spreadsheets are bottom middle, etc.
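The colour grouping just described amounts to a simple mapping, sketched below in Python (the colour assignments are the examples from the text; the data structure itself is an assumption):

    # Illustrative file-type-to-colour mapping for icon grouping.
    FILE_TYPE_COLOURS = {
        "application": "yellow",
        "music": "red",
        "office": "green",
        "picture": "black",
    }

    def icon_colour(file_type: str) -> str:
        """Return the group colour for a file type, grey if ungrouped."""
        return FILE_TYPE_COLOURS.get(file_type, "grey")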
It is important to note that the actual number of file icons displayed on the screen can be a subset of a much larger virtual array of many more files. Movement of the pointing device can be towards file icons outside the displayed area. As the pointing device position and direction are decoded using the algorithm of Fig 2, originally non-visible icons become visible and expand in response to the movement of the pointing device towards the screen.
Flat hierarchies of data files and folders, for which the user has little sense of overview and poor navigation (between files), are made more user-friendly by the invention. Very often computer users prefer to reduce the number of intermediate folders by flattening the hierarchy: the user may prefer, for example, to have a large number of icons on the virtual desktop instead of embedding files within folders, which in turn are nested within other folders. The user thus gains rapid access to data files, which may be an advantage where files are frequently accessed. Where there are many such "desktop" files, the screen may become cluttered and the user needs a way to manage such a packed "desktop".
The current invention allows a greatly increased number of files to be stored at, for example, "desktop" level. Imagine, for example, that word processing, audio and photo files were all stored on a single virtual desktop. The current invention allows the user to concentrate a greater number of files on the desktop and, instead of drilling down through folders to the desired destination file, the user merely moves the actuating device or stylus in the direction of the screen, causing the image to be magnified accordingly. Icons which may be too small even to be legible become legible upon magnification.
In one embodiment of the invention, files of a similar type, e.g. photos, may be given icons of a similar or identical colour, which may be grouped in a particular region of the virtual desktop. Other file types, e.g. word processing files, can be grouped together in another region of the desktop and in a different colour, such that even if too small to be legible, the file type is recognisable simply from the colour and position of the group. Movement of the actuating device causes magnification.
The area of interest to the user, e.g. photo files, is magnified by movement of the actuating device toward that part of the screen. Thus the movement of the stylus causes not only magnification of the image of the desktop per se, but also magnification around the area selected by the user according to the movements of the stylus. The conversion means causes magnification and selective realignment or re-centering of the image - translation as well as magnification. The position of the actuating means in a plane parallel to the reference plane identifies the position of the area of interest, in the same way as a cursor moves in response to the movements of a mouse. However, when this is combined with the separation-dependent magnification, the effect is to relocate the centre of the magnification, about which the rest of the enlarged image is rebuilt, at the centre of the image.
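A minimal sketch of this combined translate-and-magnify behaviour (Python; the function and parameter names are assumptions): the point under the actuating means becomes the centre of the displayed region, and the region shrinks as the zoom grows:

    def recentre_and_zoom(focus_x: float, focus_y: float, zoom: float,
                          field_w: float, field_h: float,
                          view_w: float, view_h: float):
        """Return the (left, top, width, height) rectangle of the data
        field to display: centred on the focus point (the x,y position
        of the actuating means mapped into the data field) and covering
        a 1/zoom fraction of the view extent, clamped to the field
        edges. Assumes zoom >= 1 and a field at least as large as the
        view."""
        w, h = view_w / zoom, view_h / zoom
        left = min(max(focus_x - w / 2, 0.0), field_w - w)
        top = min(max(focus_y - h / 2, 0.0), field_h - h)
        return left, top, w, h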
The invention provides not only a basis for flat hierarchies expressed in icons, but is also suitable for any information-rich, high-resolution image which reveals or exposes further information on magnification. As mentioned before, geographical maps, which reveal more detail on zooming in, would benefit from the approach set out in this description. Architects' drawings, which may be required to set out an overview in a single image, may also reveal finer and finer detail as the user magnifies the image. In short, any "flat" image or hierarchy which is required to provide a unified overview of a system, but where, nevertheless, more and more information can be unveiled on deeper and deeper magnification, would benefit from the current invention. In such circumstances, the most convenient way of controlling the magnification level is to couple it directly to the third dimension of the measured position of the actuating device.
In Fig 5 another embodiment of this invention is shown, where the pointing device 12 is embedded into the mobile device 10 itself, such that movement of the device 10 can change the display so that the device appears to be moving over a virtual data field. The PDA, mobile phone or other small computing device contains the requisite position and direction sensors, e.g. accelerometers and mercury switches 30, to enable the processor to track the motion and orientation of the device. As shown in Fig. 5, the data field of interest can be a virtual array of keyboards all connected to each other 32, such that as the user moves the screen 16 different areas of the keyboard become visible. Letter selection can be made by using a button as a click selection device when an aiming point or cursor displayed on the screen 16 is over a letter of interest. The screen can be seen as a window onto a never-ending array of keyboards. Z axis movement of the phone screen will allow the keyboard image on the display to be zoomed in and out, while X and Y axis movement of the phone screen will enable the keyboard image to change in the X and Y directions. When the user gets to e.g. the right hand edge of the keyboard, say the letter "P", by moving further to the right another keyboard appears, starting with its left hand edge, e.g. "Q", "A", "Z". Similarly, if the user moves up or down they will always move to another keyboard. There will always be a viewable keyboard in whatever direction the pointing device moves. Such an entry mechanism can also be single-handed, which may be an advantage in certain situations. The data field can also be a large virtual 2D array of different types of files, a subset of which is displayed at any time, as in Fig 4.
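This wrap-around behaviour can be sketched with modular indexing into a single keyboard layout (Python; the tiling scheme is an assumed illustration of the never-ending array, not a prescribed implementation):

    KEYBOARD_ROWS = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]

    def key_at(col: int, row: int) -> str:
        """Return the key under a virtual (col, row) position in an
        endless tiled plane of keyboards. Python's % is always
        non-negative, so moving left of "Q" wraps round to "P"."""
        keys = KEYBOARD_ROWS[row % len(KEYBOARD_ROWS)]
        return keys[col % len(keys)]

    # One step to the right of "P" (column 9 in row 0) is "Q" again:
    assert key_at(10, 0) == "Q"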
Fig 6 shows a possible way in which stylus orientation can be used in selecting a particular letter on a displayed keyboard. A mobile phone or PDA 10 is displaying a zoomed image of part of a QWERTY keyboard 22, which has already been selected by movement of the stylus 12 towards the required region of the keyboard. As the stylus approaches the screen the letter "S" 36 is highlighted. This is not the letter of choice, so the user tilts the stylus slightly to the left to change the selected letter to "A" 38. Orientation information from the stylus can also be used in other ways, e.g. to move the selected image across a data field as in Fig 5 by tilting the stylus left, right, up or down to move the display in the required direction.
Figures 7 to 9 show a possible method of calculating the direction of the pointing device 12, assuming it is following a straight path - which will in fact effectively be the case provided the stylus vector is calculated several times a second. This directional data is required to enable the display of the required part of the data field 40. Part or all of the data field may be shown on the device display 16. The stylus 12 starts at position 42 and moves to the next position 44 some short time later. At the start position the stylus 12 will be a given distance 58 along the z axis from the plane of the display 16. A vector can be calculated from the two positions 42 and 44 which has an end point 46 in the plane of the display 16. All positions in Fig 7 are referred to the orthogonal directions x (left/right), y (up/down) and z (in/out). It is assumed the stylus 12 contains technology, as previously explained, allowing the computing device to monitor its position relative to a given point on the display or a given point on the data field 40. It is also assumed that the computing device is continually monitoring the position of the display with reference to the data field. Thus at the start position 42 the positions along the x, y and z directions are known, and these are refreshed as the stylus moves to position 44 a short time later.
The distance moved by the stylus 12 from the start position 42 to the second position 44 along the x direction is shown at 50 in Figure 8. The corresponding distance moved by the stylus in the y direction is shown at 52 in Figure 9.
The distance of the stylus 12 at the start position 42 from the plane of the screen 16 in the z direction is shown as 58 in Fig 8.
The distance moved between these two points in the z direction 54, combined with the x and y translational movement, can be used readily to calculate the end position of the stylus on the data field 40 if it continues to follow that particular vector.
For example, the difference between the start and second positions on the x axis 50, together with the change in the distance of the stylus from the display screen 16 as it moves from position 42 to 44, allows the end point distance on the x axis 56 to be calculated easily, i.e. 56 = 58 × 50/54, by the rules of similar triangles.
Similarly, in the y direction the end point for the stylus 60 can be calculated as follows: 60 = 58 × 52/54. In this manner the point in the data field 40 at which the stylus 12 is aiming can be calculated at regular intervals. In most cases this point will change as the user moves the stylus, since they will be making corrections as they get closer to the part of the data field they are interested in.
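The similar-triangles calculation translates directly into code. A short sketch (Python; the function and variable names are assumptions, with the reference numerals from Figures 7 to 9 noted in the comments):

    def project_to_plane(start, current):
        """Project the stylus motion onto the screen plane by similar
        triangles, returning the predicted (x, y) end point, or None
        if the stylus is not approaching the screen. start and current
        are successive (x, y, z) samples, z being the distance from
        the screen."""
        dx = current[0] - start[0]       # movement in x (item 50)
        dy = current[1] - start[1]       # movement in y (item 52)
        dz = start[2] - current[2]       # movement towards the screen (item 54)
        if dz <= 0:
            return None                  # moving away or parallel: no end point
        z0 = start[2]                    # initial distance from screen (item 58)
        return (start[0] + z0 * dx / dz,   # end point offset 56 = 58 * 50/54
                start[1] + z0 * dy / dz)   # end point offset 60 = 58 * 52/54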
There are other methods of calculating the velocity of the stylus - for example, the z axis distance can be combined with stylus orientation, if such data is available, to give a value for the angle between the stylus direction and a vector orthogonal to the data field or the display. The method just described is, however, particularly suitable for portable devices due to its computational simplicity.
With such a method for deriving stylus positional and velocity information at regular intervals (tens of times per second), the computing device has sufficient data to selectively display the appropriate area of the data field at the required magnification at any particular time. For a practical device the user will need to be able to select the rate of change of various parameters, such as magnification relative to distance from the screen, and the rate of translational movement of the display compared to stylus movement.

Claims

1. An apparatus for processing an image comprising:
- an actuating means,
- a means for continuously measuring the position of the actuating means in 3 orthogonal dimensions (x,y,z), dimensions x,y forming a plane parallel to a reference plane and dimension z being perpendicular to said x,y plane,
- a screen for displaying a two dimensional image comprising a cursor which represents the position of the actuating means,
- a conversion means for receiving the output of the measuring means and converting the received x,y position into a corresponding cursor location on the screen, wherein the position of the cursor in the image is representative of the position of the actuating means in the x,y plane,
- means for computing the separation of the actuating means and the reference plane, and
- means for magnifying at least a portion of the image, wherein the degree of magnification is inversely proportional to said separation.
2. An apparatus as in Claim 1 wherein the separation is measured in the z- dimension.
3. An apparatus as in Claim 1 further comprising trajectory computing means for converting successive positions of the actuating means into a current trajectory of the actuating means and means for computing an extended trajectory which extends in the direction of the current trajectory from the current position to the reference plane.
4. An apparatus as in Claim 3 wherein the computing means is operable to compute the separation measured in the z-dimension.
5. An apparatus as in Claim 3 wherein the computing means is operable to compute the separation measured along the extended trajectory.
6. An apparatus as in any previous claim wherein the screen is located in the reference plane.
7. An apparatus as in any previous claim wherein the reference plane is coincident with the surface of a desktop.
8. An apparatus as in any previous claim further comprising means for modifying the image by continuously centring the image, wherein the part of the image corresponding to the current position of the actuating means is translated to the centre of the screen, parts of the image which were off-screen being built up on-screen in accordance with the translation and parts of the image which were on-screen being removed in accordance with the translation.
9. An apparatus as in any previous claim wherein the position measuring means comprises at least one of: an electromagnetic position sensor, an infra-red position sensor, an ultrasonic position sensor, an optical position sensor, or camera means.
10. An apparatus as in Claim 9 wherein the position measuring means comprises an active position sensor located on the actuating means.
11. An apparatus as in Claim 9 wherein the position measuring means comprises a position sensor remote from said actuating means.
12. An apparatus as in any previous claim wherein the conversion means comprises a processing unit for processing signals and data and a storage means for storing signals, data and data files.
13. An apparatus as in Claim 12 wherein the screen is operable to display at least one region of the image or icon within the image, each of which is associated with at least one function or task of the processing unit or at least one data file of the storage means.
14. An apparatus as in Claim 13 wherein the actuating means comprises at least one key and at least one communication means, the communication means being operable to communicate the depression of any of the at least one key to the processing means and to implement any of the at least one function or task.
15. An apparatus as in any one of Claims 13 to 14 wherein the processing means and storage means are in combination operable to provide an image on said screen of a flat hierarchy of said regions or icons.
16. An apparatus as in any one of Claims 13 to 14 wherein the processing means and storage means are in combination operable to form a geographical representation.
17. An apparatus as claimed in any one of claims 1 to 16 and including a tilt sensor.
18. A method of processing an image comprising:
- continuously measuring the position of an actuating means in 3 orthogonal dimensions (x,y,z), dimensions x,y forming a plane parallel to a reference plane and dimension z being perpendicular to said x,y plane;
- displaying a two dimensional image comprising a cursor which represents the position of the actuating means;
- receiving the output of the measuring means and converting the received x,y position into a corresponding cursor location on the screen, wherein the position of the cursor in the image is representative of the position of the actuating means in the x,y plane;
- computing the separation of the actuating means and the reference plane; and
- magnifying at least a portion of the image, wherein the degree of magnification is inversely proportional to said separation.
19. A method as claimed in claim 18 wherein the separation is measured in the z- dimension.
20. A method as claimed in claim 18 or 19 and including the step of converting successive positions of the actuating means into a current trajectory of the actuating means and computing an extended trajectory which extends in the direction of the current trajectory from the current position to the reference plane.
21. A method as claimed in any one of claims 18 to 20 in which the separation is measured in the z-dimension.
22. A method as claimed in any one of claims 18 to 21 including the step of computing the separation measured along the extended trajectory.
23. A method as claimed in any one of claims 18 to 22 including the step of locating the screen in the reference plane.
24. A method as claimed in any one of claims 18 to 23 including the step of locating the reference plane coincident with the surface of a desktop.
25. A method as claimed in any one of claims 18 to 24 including the step of modifying the image by continuously centring the image, wherein the part of the image corresponding to the current position of the actuating means is translated to the centre of the screen, parts of the image which were off-screen being built up on-screen in accordance with the translation and parts of the image which were on-screen being removed in accordance with the translation.
26. A method as claimed in any one of claims 18 to 25 including the step of presenting at least one region of the image or icon within the image, each of which is associated with at least one function or task of the processing unit or at least one data file of the storage means.
27. A computer program product comprising a readable storage medium storing computer readable instructions for implementing the method of any one of claims 18 to 26.
PCT/GB2007/050512 2006-09-06 2007-08-29 An apparatus and method for position-related display magnification WO2008029180A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0617400.7 2006-09-06
GBGB0617400.7A GB0617400D0 (en) 2006-09-06 2006-09-06 Computer display magnification for efficient data entry

Publications (1)

Publication Number Publication Date
WO2008029180A1 true WO2008029180A1 (en) 2008-03-13

Family

ID=37232352

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2007/050512 WO2008029180A1 (en) 2006-09-06 2007-08-29 An apparatus and method for position-related display magnification

Country Status (2)

Country Link
GB (1) GB0617400D0 (en)
WO (1) WO2008029180A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04128877A (en) * 1990-09-20 1992-04-30 Toshiba Corp Crt display device
EP0825514A2 (en) * 1996-08-05 1998-02-25 Sony Corporation Information processing device and method for inputting information by operating the overall device with a hand
WO1999054807A1 (en) * 1998-04-17 1999-10-28 Koninklijke Philips Electronics N.V. Graphical user interface touch screen with an auto zoom feature
JP2000207079A (en) * 1999-01-11 2000-07-28 Hitachi Ltd Data processor and program recording medium
WO2003104965A2 (en) * 2002-06-08 2003-12-18 Hallam, Arnold, Vincent Computer navigation
WO2006036069A1 (en) * 2004-09-27 2006-04-06 Hans Gude Gudensen Information processing system and method

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009117521A1 (en) * 2008-03-20 2009-09-24 Algorithmic Implementations, Inc. Event driven smooth panning in a computer accessibility application
GB2471594A (en) * 2008-03-20 2011-01-05 Algorithmic Implementations Inc Event driven smooth panning in a computer accessibility application
RU2495477C2 (en) * 2008-09-10 2013-10-10 ОПЕРА СОФТВЭА ЭйСиЭй Method and apparatus for selecting object on display screen
WO2010029415A2 (en) * 2008-09-10 2010-03-18 Opera Software Asa Method and apparatus for providing finger touch layers in a user agent
WO2010029415A3 (en) * 2008-09-10 2010-04-29 Opera Software Asa Method and apparatus for providing finger touch layers in a user agent
US8547348B2 (en) 2008-09-10 2013-10-01 Opera Software Asa Method and apparatus for providing finger touch layers in a user agent
CN102197350A (en) * 2008-09-10 2011-09-21 Opera软件股份公司 Method and apparatus for providing finger touch layers in a user agent
US8497882B2 (en) 2008-10-06 2013-07-30 Lg Electronics Inc. Mobile terminal and user interface of mobile terminal
EP2172836A3 (en) * 2008-10-06 2010-08-11 LG Electronics Inc. Mobile terminal and user interface of mobile terminal
US8259136B2 (en) 2008-10-06 2012-09-04 Lg Electronics Inc. Mobile terminal and user interface of mobile terminal
US9804763B2 (en) 2008-10-06 2017-10-31 Lg Electronics Inc. Mobile terminal and user interface of mobile terminal
US9207854B2 (en) 2008-10-06 2015-12-08 Lg Electronics Inc. Mobile terminal and user interface of mobile terminal
WO2010083821A1 (en) * 2009-01-26 2010-07-29 Alexander Gruber Method for controlling a selected object displayed on a screen
DE102009006083A1 (en) * 2009-01-26 2010-07-29 Alexander Gruber Method for implementing input unit on video screen, involves selecting object during approximation of input object at section up to distance, and utilizing functions on input, where functions are assigned to key represented by object
DE102009006082A1 (en) * 2009-01-26 2010-07-29 Alexander Gruber Method for controlling selection object displayed on monitor of personal computer, involves changing presentation of object on display based on position of input object normal to plane formed by pressure-sensitive touchpad or LED field
EP2341412A1 (en) * 2009-12-31 2011-07-06 Sony Computer Entertainment Europe Limited Portable electronic device and method of controlling a portable electronic device
US8423897B2 (en) 2010-01-28 2013-04-16 Randy Allan Rendahl Onscreen keyboard assistance method and system
EP2447818A1 (en) * 2010-10-07 2012-05-02 Research in Motion Limited Method and portable electronic device for presenting text
WO2012055762A1 (en) * 2010-10-27 2012-05-03 International Business Machines Corporation A method, computer program and system for multi-desktop management
WO2012089270A1 (en) * 2010-12-30 2012-07-05 Telecom Italia S.P.A. 3d interactive menu
US9442630B2 (en) 2010-12-30 2016-09-13 Telecom Italia S.P.A. 3D interactive menu
CN107197141A (en) * 2011-12-16 2017-09-22 奥林巴斯株式会社 The storage medium for the tracing program that filming apparatus and its image pickup method, storage can be handled by computer
CN107197141B (en) * 2011-12-16 2020-11-03 奥林巴斯株式会社 Imaging device, imaging method thereof, and storage medium storing tracking program capable of being processed by computer
CN103034345A (en) * 2012-12-19 2013-04-10 桂林理工大学 Geography virtual emulated three-dimensional mouse pen in actual space
CN103034345B (en) * 2012-12-19 2016-03-02 桂林理工大学 Geographical virtual emulation 3D mouse pen in a kind of real space
DE102014114742A1 (en) * 2014-10-10 2016-04-14 Infineon Technologies Ag An apparatus for generating a display control signal and a method thereof
CN109271618A (en) * 2018-08-30 2019-01-25 山东浪潮通软信息科技有限公司 A kind of form component for remembering user operation habits
CN109271618B (en) * 2018-08-30 2023-09-26 浪潮通用软件有限公司 Form component realization method capable of memorizing user operation habit
CN112328107A (en) * 2021-01-04 2021-02-05 江西科骏实业有限公司 Wireless transmission space positioning pen based on virtual coordinates

Also Published As

Publication number Publication date
GB0617400D0 (en) 2006-10-18

Similar Documents

Publication Publication Date Title
WO2008029180A1 (en) An apparatus and method for position-related display magnification
US11422694B2 (en) Disambiguation of multitouch gesture recognition for 3D interaction
US10346016B1 (en) Nested zoom in windows on a touch sensitive device
US7880726B2 (en) 3D pointing method, 3D display control method, 3D pointing device, 3D display control device, 3D pointing program, and 3D display control program
US10852913B2 (en) Remote hover touch system and method
US8384718B2 (en) System and method for navigating a 3D graphical user interface
US9256917B1 (en) Nested zoom in windows on a touch sensitive device
EP1369822B1 (en) Apparatus and method for controlling the shift of the viewpoint in a virtual space
US20110316888A1 (en) Mobile device user interface combining input from motion sensors and other controls
EP2602706A2 (en) User interactions
US7752555B2 (en) Controlling multiple map application operations with a single gesture
US20140157208A1 (en) Method of Real-Time Incremental Zooming
KR20140017429A (en) Method of screen operation and an electronic device therof
US9128612B2 (en) Continuous determination of a perspective
WO2009084809A1 (en) Apparatus and method for controlling screen by using touch screen
US9792268B2 (en) Zoomable web-based wall with natural user interface
US20100145948A1 (en) Method and device for searching contents
KR20060125522A (en) Systems and methods for navigating displayed content
WO2002101534A1 (en) Graphical user interface with zoom for detail-in-context presentations
US9477373B1 (en) Simultaneous zoom in windows on a touch sensitive device
WO2014018574A2 (en) Manipulating tables with touch gestures
US20130326424A1 (en) User Interface For Navigating In a Three-Dimensional Environment
JP2004192241A (en) User interface device and portable information device
US10140003B1 (en) Simultaneous zoom in windows on a touch sensitive device
JP3357760B2 (en) Character / graphic input editing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07789387

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07789387

Country of ref document: EP

Kind code of ref document: A1