WO2014167363A1 - Systems and methods for interacting with a touch screen

Info

Publication number: WO2014167363A1
Authority: WIPO (PCT)
Prior art keywords: node, touch, user, nodes, touch screen
Application number: PCT/GB2014/051157
Other languages: French (fr)
Inventor: Norman Stone
Original Assignee: Stormlit Limited
Priority claimed from: US 13/862,382 (US 9268423 B2)
Application filed by: Stormlit Limited
Priority to: GB 1517611.8A (GB 2527244 B)
Publication: WO 2014167363 A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/048: Indexing scheme relating to G06F3/048
    • G06F 2203/04803: Split screen, i.e. subdividing the display area or the window area into separate subareas
    • G06F 2203/04808: Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously, e.g. using several fingers or a combination of fingers and pen

Definitions

  • Examples of touch-screen devices are mobile devices, such as mobile telephones, tablet computers, electronic readers and satellite navigation systems.
  • the user interface to some of these devices has evolved into the common use of certain gestures with fingers or thumbs in order to perform selection, panning, scrolling and zooming functions.
  • These gestures are often collectively referred to as Multi-touch, since they rely on more than one digit being in contact with a touch-sensitive surface at a time, or the same digit being used to tap the surface more than once.
  • One prior disclosure (2011) addresses the subject of multi-shape, multi-touch gestures, but this is at the level of identification of finger and non-finger touches by area patterns, and does not consider shape or area as a definition or input technique for the use of applications on a touch-screen.
  • a method for interpreting user touches on a touch screen device to create and edit points of definition, lines, routes and corridors on the display of said touch screen device comprising recognizing single and double, concurrent user touches to the touch screen device, interpreting said user touches as node positions, node touch sequences and associated node motions on the screen display of said touch screen, interpreting said node positions, said node touch sequences and said node motions to determine the point, line segment or route segment entities to be drawn on the touch screen display, retaining recognition and information of said entities persistently after said user touches to the touch screen device have ceased, allowing reselection by a user of a previously defined entity for operation on that entity, and allowing reselection by a user of any node of a previously defined entity for operation on that node.
  • the number of said concurrent user touches is interpreted as one, and the node produced by said concurrent user touch remains substantially motionless for a predetermined length of creation time, thereby resulting in the creation of a point of definition and the drawing of a symbol on the touch screen to represent said point to the user.
  • said user touch remains substantially motionless for an additional predetermined length of time after said creation time, thereby resulting in a means being provided to the user for adding and viewing alphanumeric name or identification information to the said point of definition.
  • the number of said concurrent user touches is interpreted as two, and the nodes produced by said concurrent user touches remain substantially motionless for a predetermined length of creation time, thereby resulting in the creation of a line segment and the drawing of a line on the touch screen between positions of the two user touches.
  • the said drawn line has a predetermined style, color and thickness.
  • one or more latent nodes is automatically created at intervals along a line segment, allowing the user to identify, select and move any latent node whereby said latent node becomes a new node of the line segment which thereby becomes a multi-segment line.
  • a node from one line segment is moved so that it is substantially at the same location on the touch screen as a second node of a different line segment, thereby resulting in the merging of the two nodes and the creation of a multi-segment line.
  • the number of said user touches is interpreted as two and there is a detected said node touch sequence, with the time between the first touch and the second touch being within a predetermined time value, thereby resulting in the creation of a route segment and the drawing of an arrow from the point of the first touch in the direction of the second touch on the touch screen, using a predetermined style, color and thickness.
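By way of illustration only (this is not part of the claims), the distinction between a concurrent two-touch line gesture and a sequenced two-tap route gesture might be sketched as follows; the function names and time thresholds are invented for the example:

```python
import math

# Hypothetical thresholds; the claims only say "predetermined time value".
ROUTE_TAP_WINDOW = 0.5   # max seconds between first and second touch
CONCURRENCY_EPS = 0.05   # touches closer than this are treated as concurrent

def classify_two_touches(t_first: float, t_second: float) -> str:
    """Return 'line' for concurrent touches, 'route' for a touch sequence."""
    dt = abs(t_second - t_first)
    if dt < CONCURRENCY_EPS:
        return "line"        # effectively simultaneous: a line segment
    if dt <= ROUTE_TAP_WINDOW:
        return "route"       # a detected touch sequence: a route segment
    return "unrelated"

def route_direction(p_first, p_second) -> float:
    """Direction of travel implied by touch order (first -> second), degrees."""
    dx, dy = p_second[0] - p_first[0], p_second[1] - p_first[1]
    return math.degrees(math.atan2(dy, dx))
```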
  • one or more latent nodes is automatically created at intervals along a route segment, allowing the user to identify, select and move any latent node whereby said latent node becomes a new node of the route segment which thereby becomes a multi-segment route.
  • a node from one route segment is moved so that it is substantially at the same location on the touch screen as a second node of a different route segment, thereby resulting in the merging of the two nodes and the creation of a multi-segment route.
  • the number of said concurrent user touches is interpreted as two, and the nodes produced by said concurrent user touches remain substantially motionless for a predetermined length of creation time, thereby resulting in the display of actual distance between the two user touches on the touch screen, to the user.
  • the number of said concurrent user touches is interpreted as two, and the nodes produced by said concurrent user touches remain substantially motionless for a predetermined length of creation time, thereby resulting in the display to the user of representative distance between the two node points created by the two user touches on the underlying map or image, taking into account the scaling of said underlying map or image.
  • the number of said user touches is interpreted as two and there is a detected said node touch sequence, with the time between the first touch and the second touch being within a predetermined time value, thereby resulting in the display of screen vector distance between the two user touches on the touch screen, to the user.
  • the number of said user touches is interpreted as two and there is a detected said node touch sequence, with the time between the first touch and the second touch being within a predetermined time value, thereby resulting in the display to the user of representative two dimensional vector distance between the two node points created by the two user touches on the underlying map or image, taking into account the scaling of said underlying map or image.
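A minimal sketch of the screen-distance and representative-distance calculations described above, assuming a simple linear map scale in metres per pixel (the names and the scale model are illustrative, not taken from the specification):

```python
import math

def screen_distance(p1, p2) -> float:
    """Straight-line distance between two touch points, in pixels."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def representative_distance(p1, p2, metres_per_pixel: float) -> float:
    """Distance the two touches represent on the underlying map,
    taking the current map scaling into account."""
    return screen_distance(p1, p2) * metres_per_pixel

# Example: two touches 300 px apart on a map drawn at 50 m per pixel
# represent 15,000 m (15 km) on the ground.
print(representative_distance((100, 100), (400, 100), 50.0))  # 15000.0
```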
  • a said reselection by a user of a previously defined entity is performed and said operation on said entity is selected as corridor creation, whereby a bounded area around said entity is calculated and displayed to the user on the touch screen, defined by the logical union of circle areas around all nodes of said entity and rectangle areas around all line or route segments of said entity.
  • the corridor width is predetermined and therefore the radius of the circles around said nodes is made equal to the predetermined corridor width and the width of the rectangles around said segments is also made equal to the predetermined corridor width.
  • the corridor width is defined by touch by the user, and therefore the radius of the circles around said nodes is made equal to the user-specified corridor width and the width of the rectangles around said segments is also made equal to the user-specified corridor width.
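One way to realise the corridor as the union of node circles and segment rectangles is a point-membership test, sketched below. It interprets the corridor width as the offset either side of the line, which is an assumption on our part; all names are illustrative:

```python
import math

def point_in_corridor(p, nodes, width: float) -> bool:
    """True if p lies inside the corridor: the union of circles of radius
    `width` around every node and rectangles of half-width `width` around
    every segment between consecutive nodes."""
    # Circle test around each node.
    if any(math.hypot(p[0] - n[0], p[1] - n[1]) <= width for n in nodes):
        return True
    # Rectangle test around each segment: perpendicular distance to the
    # segment, restricted to the segment's extent (ends are covered by
    # the node circles).
    for (x1, y1), (x2, y2) in zip(nodes, nodes[1:]):
        dx, dy = x2 - x1, y2 - y1
        length_sq = dx * dx + dy * dy
        if length_sq == 0:
            continue
        t = ((p[0] - x1) * dx + (p[1] - y1) * dy) / length_sq
        if 0.0 <= t <= 1.0:
            px, py = x1 + t * dx, y1 + t * dy
            if math.hypot(p[0] - px, p[1] - py) <= width:
                return True
    return False
```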
  • a said reselection by a user of a previously defined entity is performed through the means of the entity having one or more key nodes, whereby operations specific to the whole entity, such as movement, deletion or the addition of data, are performed.
  • said operation on a node is taken from the list including movement, deletion, labelling, addition of data and definition as a key node.
  • said movement operation on the node is by the user dragging the node around within the perimeters of the multi-touch enabled input device without any background map or image being scrolled.
  • said movement of the node is by the user maintaining a touch within a predetermined distance of a perimeter of the touch screen, thereby causing the node to stay at the position of the touch, but any background map or image being scrolled in the opposite direction of said perimeter.
  • the two nodes are equated as being the same, and the new single node inherits the properties of said existing node.
  • said addition of data includes information taken from the list of start date, end date, elevation above sea level, planned altitude, depth below sea level, and free text information.
  • said deletion operation removes the node, and also a point of definition associated with a node.
  • the method is for interpreting multiple, concurrent user touches on the touch screen device to create and edit shapes on the display of said touch screen device, wherein the method further comprises recognizing multiple, concurrent user touches to the touch screen device, interpreting said user touches as node positions and associated node motions on the screen display of said touch screen, interpreting said node positions, and said node motions to determine a geometric shape to be drawn on the touch screen display, retaining recognition and information of said shape persistently after the user touches to the touch screen device have ceased, allowing reselection by a user of a previously defined shape for operation on that geometric entity, and allowing reselection by a user of any node of a previously defined shape for operation on that node.
  • the number of said concurrent user touches is interpreted as two, and one of the nodes produced by these remains substantially motionless for a predetermined length of time while the other node initially moves and subsequently also remains substantially motionless for a predetermined length of time, thereby resulting in the drawing of a circle centered on the initially stationary node and passing through the initially moving node.
  • the number of said concurrent user touches is interpreted as two, and both of the nodes produced by said concurrent user touches initially move and subsequently remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a rectangle determined by the positions of the two nodes at diagonally opposite corners.
  • the two concurrent user touches are initially detected as a single node due to proximity of said user touches to each other before the movement of the two resultant nodes apart.
  • the number of said concurrent user touches is interpreted as three and the nodes produced by these remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a triangle between the three nodes such that each node position becomes an apex of the triangle.
  • the number of said concurrent user touches is interpreted as four and the nodes produced by these remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a quadrilateral shape between the four nodes such that each node position becomes a corner of the quadrilateral.
  • the number of said concurrent user touches is interpreted as five and the nodes produced by these remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a pentagon between the five nodes such that each node position becomes a vertex of the pentagon.
  • the number of said concurrent user touches is interpreted as greater than five and the nodes produced by these remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a polygon between the plurality of nodes such that each node position becomes a vertex of said polygon.
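Taken together, the gesture claims above amount to a dispatch on the number of concurrent stationary touches, and for two touches on which of them moved; a schematic sketch (with invented names, not from the specification) might look like this:

```python
def shape_for_gesture(points, moved):
    """points: final touch positions; moved: per-touch flags saying whether
    that touch travelled before settling. Returns the entity the claims
    associate with the gesture."""
    n = len(points)
    if n == 1:
        return "point of definition"
    if n == 2:
        if not any(moved):
            return "line segment"   # two concurrent stationary touches
        if moved.count(True) == 1:
            return "circle"         # centred on the still node, through the mover
        return "rectangle"          # both moved: diagonally opposite corners
    if n == 3:
        return "triangle"           # each node becomes an apex
    if n == 4:
        return "quadrilateral"      # each node becomes a corner
    if n == 5:
        return "pentagon"           # each node becomes a vertex
    return "polygon"                # more than five touches
```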
  • reselection of the shape is via the user touching a node or side of the shape for a predetermined period of time.
  • said operation on said geometric entity is movement of the whole geometric entity with all nodes being moved together with respect to a virtual area or background so that the nodes are not moved in relation to one another.
  • said operation on said geometric entity is the addition of user-defined information into predetermined fields to categorise, label or define parameters relating to the geometric entity in the area of application.
  • said operation on said geometric entity is the deletion of the geometric entity.
  • said operation on said geometric entity is an independent moving of two component nodes of said geometric entity along two orthogonal axes of said touch screen module such that a two dimensional stretching or compressing of said geometric entity occurs proportional to the movement of the two nodes in each axis.
  • said two dimensional stretching or compressing is of two nodes on the circumference of a circle geometric entity such that an ellipse shape is created from the original circle.
  • one or more sub-nodes is automatically created at intervals along the sides of a shape, allowing the user to identify, select and move any said sub-node whereby said sub-node becomes a new node of said shape and a new side is added to said shape.
  • the number of sides in a geometric shape is decreased by user selection of a node of said shape and a subsequent deletion operation on said selected node occurs, whereby nodes previously connected to the deleted node are directly joined.
  • said geometric shape becomes the boundary within said touch screen module of an area comprising a two dimensional image, map or surface having its own coordinate system such that the bounding node positions of said areas on the display of said touch screen correspond to the coordinates of said image, map or surface.
  • said geometric shape is created on top of an existing two dimensional image, map or surface having its own coordinate system such that the bounding node positions of said shape define coordinates of said two dimensional image, map or surface displayed on said touch screen module, and a subsequent pan and zoom operation is performed either to the specific area of said image, map or surface defined by said geometric shape or to an area centered on and including said specific area which also shows additional surrounding area due to differences in shape or aspect ratio of the said shape and the available screen area.
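The combined pan and zoom described here can be sketched as a fit-to-bounding-box calculation: centre the view on the drawn shape and choose the largest zoom that still shows the whole shape, so that extra surrounding area appears along whichever axis the aspect ratios differ. Names and units are illustrative:

```python
def pan_zoom_to_shape(shape_bbox, screen_w: float, screen_h: float):
    """Return (centre_x, centre_y, zoom) for a combined pan and zoom to the
    area defined by the shape's bounding box (min_x, min_y, max_x, max_y).
    Zoom is screen pixels per map unit, limited by the tighter screen axis."""
    min_x, min_y, max_x, max_y = shape_bbox
    cx, cy = (min_x + max_x) / 2, (min_y + max_y) / 2
    w = max(max_x - min_x, 1e-9)   # guard against degenerate shapes
    h = max(max_y - min_y, 1e-9)
    zoom = min(screen_w / w, screen_h / h)
    return cx, cy, zoom
```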
  • said geometric shape becomes the boundary within said touch screen module of a new window which runs a software application independent from any main or background software application, and independent from a software application running in any other window.
  • selection options for applications to run in said new window are presented to the user in a menu or list adjacent to the new window after creation of the new window.
  • selection options for applications to run in said new window are presented to the user via icons appearing inside the new window after creation of the new window.
  • an application to run in said new window is selected by the user prior to creation of the new window.
  • a tangible computer readable medium storing instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 46.
  • a distance measurement and display system graphical user interface for touch screen devices with a mapping, navigation or image background, comprising a detection module configured to detect two concurrent user touches to a touch screen that permit a user to input two points of definition on a background map or image between which it is desired to know the distance, a measurement module configured to calculate the representative distance between the two concurrent touches including scaling and conversion to the measurement units and axes of the background map or image, and a display unit configured to display the calculated representative distance between the two concurrent touches to the user of the touch screen device.
  • a windowing system for touch screen devices comprising a multiple independent window display and interaction module that permits a user to view concurrently a plurality of computer applications in a plurality of different windows of a plurality of different shapes and sizes, a selection module for identifying which of the plurality of said multiple independent windows is to be the subject of a user defined operation, a geometric shape detection module that is configured to define the shape, size and boundary of a new window conveniently, and a user selection module configured to permit a user to select an application for said new window.
  • an apparatus comprising a touch screen module incorporating a touch panel adapted to receiving user input in the form of multi-touch shape gestures including finger touches and finger movements, and a display surface adapted to present point of definition, line, route and corridor information to the user, a control module which is operatively connected to said touch screen module to determine node and point of definition positions from said finger touches, to determine node motions and touch sequence from said finger movements, to recognize a line or route segment from combinations of said node positions and touch sequences, to create multi-segment lines and routes from individual segments by node position equivalence detection, to create multi-segment lines and routes from detection of latent node selection and movement on line and route segments, to detect a selection touch to a pre-existing entity from the list including point of definition, line segment, route segment, multi-segment line and multi-segment route, to control the editing of said pre-existing entity, and to generate a continuous graphical image including said node positions and said entities for display on the touch screen module.
  • the apparatus is selected from the group consisting of a mobile telephone with a touch screen, a tablet computer with a touch screen, a satellite navigation device with a touch screen, an electronic book reader with a touch screen, a television with a touch screen, a desktop computer with a touch screen, a notebook computer with a touch screen, a touch screen display which interacts with medical and scientific image display equipment and a workstation computer of the type used in command and control operations centers such as air traffic control centers, but having a touch screen.
  • the node, point of definition, line segment, route segment, multi-segment line, multi-segment route and corridor information presented to the user includes symbols and lines currently detected or selected and those recalled from memory, previously defined, or received from a remote database, device or application.
  • a communications module is incorporated, adapted to transfer node, point of definition, line segment, route segment, multi-segment line, multi-segment route and corridor information, including node position and entity type, to and from other devices, networks and databases.
  • the control module is configured to accept said external information from said communications module and pass the information to the touch screen module for display to the user.
  • the control module is configured to pass said locally created information to the communications module for communication to other devices, networks and databases.
  • entities recognised by said control module from the said detected node positions, touch sequences and node motions include nodes, points of definition, line segments, route segments, multi-segment lines, multi- segment routes and corridors.
  • the said editing of pre-existing entities includes the movement of points of definition, the movement of entire entities, the deletion of entire entities, the stretching of lines by the movement of their individual nodes, the editing of a corridor width, the creation of multi-segment lines and routes by joining segments at common nodes, and the addition of a new node between two existing nodes of an entity.
  • the said nodes and points of definition recognised by said control module represent locations on a two dimensional image, map or surface having its own coordinate system which are readable by said control module from the memory module.
  • the display surface is adapted to present node and shape information to the user.
  • the control module which is configured to determine node positions from said finger touches, to determine node motions from said finger movements, to recognize a geometric shape from combinations of said node positions and node motions, to recognize an application associated with said geometric shape, to detect a selection touch to a pre-existing said shape, to control the editing of said selected shape, and to generate a continuous graphical image including said node positions and plurality of said geometric shapes for display on the touch screen module.
  • the memory module being configured to store from and provide to said control module a logical element selected from the group consisting of operating systems, system data for said operating systems, applications which can be executed by the control module, data for said applications, node data, shape data, area data and windows data.
  • the apparatus is selected from the group consisting of a mobile telephone with a touch screen, a tablet computer with a touch screen, a satellite navigation device with a touch screen, an electronic book reader with a touch screen, a television with a touch screen, a desktop computer with a touch screen, a notebook computer with a touch screen and a workstation computer of the type used in command and control operations centers such as air traffic control centers, but having a touch screen.
  • the node, shape, area and window information presented to the user includes node symbols, shapes, areas and windows currently detected or selected.
  • the node, shape, area and window information presented to the user includes node symbols, shapes, areas and windows from previous user touches, recalled from memory, previously defined, or received from a remote database, device or application.
  • the apparatus comprises a communications module which is configured to transfer node, shape, area and window information, including node position and shape type, to and from other devices, networks and databases.
  • the control module is configured to accept external node, shape, area and window data from said communications module and pass them to the touch screen module for display to the user.
  • the control module is configured to pass locally created nodes, shapes, areas and windows to the communications module for communication to other devices, networks and databases.
  • said geometric shapes recognised by said control module from the detected node positions and node movements include circles, rectangles, triangles, quadrilaterals, pentagons and polygons with greater than five vertices.
  • the edits to selected shapes include the transformation of a selected circle into an ellipse.
  • the edits to selected shapes include the two dimensional stretching of a selected shape in the axes determined by node movements.
  • the edits to selected shapes, areas and windows include the creation of shapes, areas and windows with one side more than when selected, by the addition of a new node between two existing nodes of a shape.
  • the node positions of a re-selected geometric shape, area or window are moved by the node being touched with a finger which is moved over the surface of said touch screen module wherein the node moves accordingly, and the said geometric shape, area or window is modified accordingly.
  • said geometric shapes are re-selected by a non-moving user touch of a constituent node of said geometric shape for a period of less than three seconds, after which the whole geometric shape is moved accordingly by subsequent movement of said constituent node.
  • the said geometric shapes recognised by said control module represent areas on a two dimensional image, map or surface having its own coordinate system wherein the bounding node positions of said areas on the display of said touch screen correspond to the coordinates of said image, map or surface.
  • the said geometric shapes recognised by said control module represent window boundaries on the display of said touch screen such that a plurality of independent computer programs or applications are run concurrently on said display.
  • said geometric shapes recognised by said control module represent area boundaries on the display of said touch screen such that a combined pan and zoom operation is performed to the specified said area boundary.
  • An embodiment of the invention seeks to define and manage lines and shapes rapidly and in a user-friendly manner using multi-touch gestures. Such means are produced through the creation and manipulation of nodes on a multi-touch enabled surface.
  • the nodes are interpreted as being lines or shapes according to the gestures used.
  • the nodes, lines and shapes take on a meaning relevant to the operating system or application processing them, such as the creation of geographic points, routes or areas on a map, respectively.
  • the node-based aspect also gives the advantage of sharing data in one embodiment such as the remote transmission of geometric information between different devices programmed or enabled to process or interpret the information, for example for remote drawing over a communications network.
  • Various applications of one or more embodiments include sharing planned wilderness hiking routes, defining flight plans, remote drawing, co-ordinating search and rescue, maritime navigation and defining unique geographic areas of interest for real estate searches of a remote server.
  • a geographic area defined or visible in this way by various embodiments will be advantageous to real estate buyers and sellers, farmers, pilots, property developers, navigators, air traffic controllers and emergency co-ordinators, for example.
  • embodiments allowing various means of defining the application within a newly-defined window on a touch screen device have been designed. These include methods of defining a window first, and then defining the application within it, and also drawing the window using multi-touch gestures once an application has been launched. One, or a combination of these methods, means that a multi-window display with truly independent applications running and visible concurrently, can for the first time be used on touch screen devices. Especially (but not exclusively) for larger touch screen devices such as tablet computers and touch screen personal computers, it will revolutionise the user interface with the device in the same way that Microsoft Windows replaced single DOS based applications on the personal computer.
  • multiple independent nodes on a mapping application are created by touch gestures on a touch screen, which correspond to multiple independent geographic points on the Earth's surface.
  • the nodes can be labelled by the user, moved by the user without the background map moving, and have attributes added by the user or application.
  • Multi-touch selections on the touch screen with two touches together are interpreted as line segments, and with two touches in rapid sequence are interpreted as route segments with direction implied by the order of touches.
  • the lines may exist for the duration of the creating touches or indefinitely, and may display distance or vector difference between the two touches, to the user. Line segments may be divided into several segments by the selection and movement of hidden (latent) nodes along existing line segments.
  • Line and route segments can be defined to exist together, and these can be joined to make complex, multi-segment lines and routes by the joining of the end nodes of line and route segments.
  • Lines and routes can be made into a two dimensional corridor by the definition of a width either side of a line or route segment, or of a composite, multi-segment line or route.
  • gestures for node, line, route and corridor definition and management are entirely new in one embodiment, while for others the gestures used are adapted from existing gestures, but used for an entirely new purpose or with a new human computer interaction interface.
  • the points, lines, routes and corridors can be easily manipulated and edited, and the lines, routes and corridors can be efficiently stretched and re-sized.
  • the use of key nodes permits operations on a composite entity made up of multiple line or route segments, such as the movement or deletion of a complex route.
  • the node-based aspect also gives the advantage of sharing data in one embodiment such as the remote transmission of geometric information between different devices programmed or enabled to process or interpret the information, for example for passing planned routes over a communications network.
  • FIG. 1a represents the definition of a node by the user of a multi-touch enabled device.
  • FIG. 2a is a potential result on a mapping application of a multi-touch defined point.
  • FIG. 3Aa shows a flow-chart of a method to create and select a node using multi-touch.
  • FIG. 3Ba shows a flow chart of normal node and margin node movements.
  • FIG. 3Ca shows normal node movement.
  • FIG. 3Da shows margin node movement
  • FIG. 4a demonstrates potential information related to a multi-touch defined point.
  • FIG. 5a represents the definition of a line segment with a multi-touch enabled device.
  • FIG. 6a shows a result on a mapping application of a multi-touch defined line segment.
  • FIG. 7a shows a flow-chart of a method to detect a multi-touch defined line segment.
  • FIG. 8a highlights potential information related to a multi-touch defined line segment.
  • FIG. 9a demonstrates how 'sub-nodes' can be used to divide a line segment in two.
  • FIG. 10a illustrates how a line can be defined from multiple line segments.
  • FIG. 11a shows how a line can be used as a route (with directionality).
  • FIG. 12a represents the formation of a corridor from a line.
  • FIG. 13a represents the definition of a circle using a multi-touch enabled device.
  • FIG. 14a is a potential result on a mapping application of a multi-touch defined circle.
  • FIG. 15a shows a flow-chart of a method to define a multi-touch defined circle.
  • FIG. 16a demonstrates potential information related to a multi-point defined circle.
  • FIG. 17a represents the definition of a rectangle by the user of a multi-touch enabled device.
  • FIG. 18a is a potential use of a multi-touch defined rectangle to define a window on a touch-screen device for a parallel application.
  • FIG. 19a shows a flow-chart of a method to detect a multi-touch defined rectangle.
  • FIG. 20a demonstrates potential information related to node-based polygons defined by multi-touch gestures.
  • FIG. 21a represents the definition of a triangle (and by analogy, a quadrilateral) by the user of a multi-touch enabled device.
  • FIG. 22a is a demonstration of a multi-touch defined triangle in a drawing application, which is also directly analogous to definition of quadrilateral shapes.
  • FIG. 23a shows a flow-chart of a method to detect a multi-touch defined triangle or quadrilateral shape.
  • FIG. 24a represents the definition of a pentagon using a multi-touch enabled device.
  • FIG. 25a is a potential result on a Real Estate application of a multi-touch defined pentagon.
  • FIG. 26a shows a flow-chart of a method to detect a multi-touch defined pentagon.
  • FIG. 27a demonstrates how a user can create a hexagon from a pentagon.
  • FIG. 28Aa illustrates how route lines can be reversed in directionality.
  • FIG. 28Ba illustrates how a closed area can be converted between a route and area, and between a closed line and an area.
  • FIG. 29a is a modular view of a multi-touch device capable of defining, editing and displaying node-based points, lines and areas.
  • FIG. 30a demonstrates how a network of multi-touch enabled devices, touchscreen devices and databases can be used together to share node-based data between each other.
  • FIG. 1b represents the definition of a circle using a touch screen device according to an embodiment of the invention.
  • FIG. 2b is a potential result on a mapping application of a multi-touch defined circle according to embodiments of the invention.
  • FIG. 3b shows a flow-chart of a method to define a multi-touch defined circle according to embodiments of the invention.
  • FIG. 4Ab demonstrates how cardinal nodes on a created circle can be used to stretch or compress the shape in different axes, permitting the creation of an ellipse according to embodiments of the invention.
  • FIG. 4Bb illustrates the result of the creation of an ellipse from a circle according to embodiments of the invention.
  • FIG. 5b shows a flow-chart of a method to create a stretched shape such as an ellipse according to embodiments of the invention.
  • FIG. 6b represents the definition of a rectangle by the user of a touch screen device according to embodiments of the invention.
  • FIG. 7b is a potential use of a multi-touch defined rectangle to define an area on a touch screen device map for a combined pan and zoom operation to an area of interest according to embodiments of the invention.
  • FIG. 8b shows a flow-chart of a method to detect a multi-touch defined rectangle according to embodiments of the invention.
  • FIG. 8Ab shows a flow-chart of a method to provide a combined pan and zoom on a map following a multi-touch defined rectangle definition gesture.
  • FIG. 9b represents the means of definition of a triangle by the user of a touch screen device according to embodiments of the invention.
  • FIG. 10b is a demonstration of a multi-touch defined triangle in a drawing application according to embodiments of the invention.
  • FIG. 11b shows a flow-chart of a method to detect a multi-touch defined triangle according to embodiments of the invention.
  • FIG. 12b represents the means of definition of a quadrilateral shape on a touch screen device according to embodiments of the invention.
  • FIG. 13b is an illustration of a multi-touch defined quadrilateral shape in a drawing application according to embodiments of the invention.
  • FIG. 14b shows a flow-chart of a method to detect a multi-touch defined quadrilateral shape according to embodiments of the invention.
  • FIG. 15b represents the definition of a pentagon using a touch screen device according to embodiments of the invention.
  • FIG. 16b illustrates a potential result on a real estate application of a multi- touch defined pentagon according to embodiments of the invention.
  • FIG. 17b shows a flow-chart of a method to detect a multi-touch defined pentagon according to embodiments of the invention.
  • FIG. 18b represents the definition of a polygon of more than five sides and five nodes, using a touch screen device according to embodiments of the invention.
  • FIG. 19b demonstrates a hexagon created from a pentagon according to embodiments of the invention.
  • FIG. 20b shows a flow-chart of a method to detect a multi-touch defined polygon of six or more sides and nodes according to embodiments of the invention.
  • FIG. 21b illustrates how a node of a multi-touch defined shape, area or window may have operations performed on it by a user according to an embodiment of the invention.
  • FIG. 22b demonstrates how a node of a multi-touch defined shape, area or window may be deleted, thus rendering the shape, area or window with fewer sides and nodes.
  • FIG. 23Ab illustrates the nodes detection step in creating a rectangular shape or area which will ultimately result in a new window-first window according to an embodiment of the invention.
  • FIG. 23Bb illustrates the shape sizing and positioning step in creating a rectangular shape or area which will ultimately result in a new window-first window according to an embodiment of the invention.
  • FIG. 23Cb illustrates the selection of application step for a newly created window-first rectangular window according to an embodiment of the invention.
  • FIG. 23Db illustrates the completion of a multi-touch defined window-first rectangular window on a multi-touch enabled device according to embodiments of the invention.
  • FIG. 24Ab illustrates an alternative selection of application step for a newly created window-first circular window according to an embodiment of the invention.
  • FIG. 24Bb illustrates the completion of the alternative multi-touch defined window-first circular window according to embodiments of the invention.
  • FIG. 25Ab illustrates the application selection stage for creating an application-first window according to prior art.
  • FIG. 25Bb illustrates the default window shape size and position appearance stage for creating an application-first window according to embodiments of the invention.
  • FIG. 25Cb illustrates the sizing and positioning stage for creating an application-first window according to embodiments of the invention.
  • FIG. 25Db illustrates the completion of a multi-touch defined application-first window according to embodiments of the invention.
  • FIG. 26b illustrates a multi-touch device display containing many different windows of various shapes, running several different applications.
  • FIG. 27b shows a flow-chart defining how windows may be created and defined on a multi-touch device.
  • FIG. 28b is a modular view of a touch screen device capable of defining, editing and displaying node-based areas, shapes and windows according to embodiments of the invention.
  • FIG. 29b demonstrates how a network of multi-touch enabled devices, touch screen devices, workstations, networks and databases can be used together to share node-based area, shape and window data between each other according to embodiments of the invention.
  • FIG. 1c represents the definition of a node or point of definition by the user of a multi-touch enabled device according to an embodiment of the invention.
  • FIG. 2c is a potential result on a mapping application of a multi-touch defined point of definition according to an embodiment of the invention.
  • FIG. 3c shows how a node on the touch screen, representing a point of definition, may be moved according to an embodiment of the invention.
  • FIG. 4c shows how a point of definition on a map is moved in sympathy with the node being moved on the touch screen according to an embodiment of the invention.
  • FIG. 5Ac shows a flow-chart of a method to create and select a node or point of definition using multi-touch according to embodiments of the invention.
  • FIG. 5Bc shows a flow chart of normal node and margin node movements according to embodiments of the invention.
  • FIG. 5Cc shows normal node movement according to an embodiment of the invention.
  • FIG. 5Dc shows margin node movement according to an embodiment of the invention.
  • FIG. 6c demonstrates potential information related to a touch defined point of definition on a map according to embodiments of the invention.
  • FIG. 7c represents the definition of a line segment on a touch screen device according to embodiments of the invention.
  • FIG. 8c shows a result on a mapping application of a multi-touch defined line segment according to an embodiment of the invention.
  • FIG. 9c shows a flow-chart of a method to detect a multi-touch defined line segment according to embodiments of the invention.
  • FIG. 10c highlights potential information related to a multi-touch defined line segment according to an embodiment of the invention.
  • FIG. 11c demonstrates how latent nodes can be used to divide a line segment in two according to embodiments of the invention.
  • FIG. 12c presents a flow-chart of how latent nodes can be used with fractional or space division to divide a line segment into two or more segments according to embodiments of the invention.
  • FIG. 13c demonstrates how the act of creating a line segment can also be used to display representative or actual distances between the touch nodes, according to embodiments of the invention.
  • FIG. 14c illustrates how a sequenced tapping of a touch screen can be used to define direction of travel, and therefore also a vector or route according to embodiments of the invention.
  • FIG. 15c indicates how, when defining a route or line in a node-based fashion, the vector difference between the nodes can also be displayed, according to embodiments of the invention.
  • FIG. 16c presents a flow-chart defining how sequenced touches result in a route or vector line, and how it is determined whether to display distance between the nodes, according to embodiments of the invention.
  • FIG. 17c illustrates how a composite, multi-segment line can be defined from multiple line segments according to embodiments of the invention.
  • FIG. 18c shows how a line can be used as a route, with directionality, according to an embodiment of the invention.
  • FIG. 19c presents a flow-chart defining how a composite multi-segment line or route can be formed from joining nodes of other lines or routes together, according to embodiments of the invention.
  • FIG. 20Ac shows how a node on a line or route can be deleted according to an embodiment of the invention.
  • FIG. 20Bc indicates a resulting composite, multi-segment line after the deletion of a node, according to an embodiment of the invention.
  • FIG. 21c represents the formation of a corridor from a line or route according to an embodiment of the invention.
  • FIG. 22Ac depicts a multi-segment route according to an embodiment of the invention.
  • FIG. 22Bc shows a multi-segment route with rectangles and a circle to show the creation of a corridor area, according to an embodiment of the invention.
  • FIG. 22Cc illustrates a complete corridor over a multi-segment route according to an embodiment of the invention.
  • FIG. 22Dc represents a process for the creation of a corridor area around a multi-segment line or route according to an embodiment of the invention.
  • FIG. 23Ac illustrates how route lines can be converted into non-route lines and vice versa, according to embodiments of the invention.
  • FIG. 23Bc illustrates how route lines can be reversed in directionality according to an embodiment of the invention.
  • FIG. 23Cc illustrates how a closed area can be converted between a route and area, and between a closed line and an area according to an embodiment of the invention.
  • FIG. 24c is a modular view of a multi-touch device capable of defining, editing and displaying node-based points, lines, routes and corridors according to embodiments of the invention.
  • FIG. 25c demonstrates how a network of multi-touch enabled devices, touchscreen devices and databases can be used together to share node-based points, lines, routes and corridors between each other according to embodiments of the invention.
  • One area which has not yet been addressed by multi-touch gestures is that of node-based point, line and shape definition via a multi-touch surface, and yet there is considerable use which this could afford users of mobile touch-screens and even large scale touch-sensitive surfaces.
  • the use of one or more fingers in contact with a touch-sensitive area can be used to create nodes, lines, rectangles, circles, triangles, quadrilaterals and pentagons. From these primitive entities, greater-sided polygons, lines and corridors can be quickly created, and any of these lines and shapes can be manipulated through the movement of the nodes defining them.
  • Various combinations of the following node, line and shape definitions, manipulations and edits can be used in an embodiment.
  • a prolonged touch at a given location has also been used to define a geographic location on a map.
  • a prolonged touch is typically between 0.1 seconds and 3.0 seconds.
  • a node would be created in this manner; a node being an entity which can be viewed and manipulated as a geometric point on a multi-touch enabled device, but which represents another logical or physical entity.
  • An example of a node is a small circle centered on a specific screen pixel on a map, which represents a particular latitude/longitude.
  • FIG. 1a illustrates the creation and selection of a node on a touch-screen surface 102, via a prolonged touch with a single touch implement 104 - in this case a finger.
  • the creation of a node also selects that node, which allows movement of the node as indicated by the motion arrow 106.
  • If the touch implement that creates or selects the node is moved while still maintaining contact with the touch-screen, the node will be moved along with the touch implement.
  • The result of a node creation and movement is shown in FIG. 2a.
  • the created node representation 204, on the multi-touch enabled touch-screen device 202 is shown.
  • the direction arrow 208 represents movement of the node, equal in direction and distance on the touch-screen to the touch implement motion.
  • the selection of a node is similar to the creation of a node except that it requires the existence of a node at the location of the touch (or within a number of pixels of the touch). Therefore touching a node on the touch-screen and maintaining the touch will select that node for an operation, including moving the node around the screen.
  • a node is only required to be present in order to be selected, and therefore the selection even of invisible nodes is possible.
  • FIG. 2a shows a node on a map representing a geographic coordinate on a map which has been given a label 206 by a user.
  • Margin-based node movement is possible where the area over which nodes may be placed is greater than the area shown by the screen.
  • a selected node will initially be moved in the direction of the finger controlling it, without the background being panned. When the finger and the node beneath it come within a certain margin of the edge of the screen, the node will not move further across or off the screen. Instead, a scroll of the background will occur in the opposite direction to the previous direction of travel across the screen by the node. The effect will be to move the node further along the background in the desired direction. In this case the direction of movement of the background will be opposite to a panning multi-touch operation in the same direction.
  • FIG. 3Aa demonstrates how to create the functionality of node definition, selection and movement on the touch-screen.
  • Process box 302 is a background task which monitors for a multi-touch operation; when a multi-touch operation is detected, the decision logic at 304 detects the specific multi-touch input of a touch device such as a finger touching the screen for a duration appropriate to the application (for example one second). If this node event is detected there is another decision point 306 which determines whether there is an existing node at the location of the touch on the screen. If there is no existing node, the node creation process 310 is commanded, which creates a node at the location being touched, and then selects the point which has just been created.
  • If there is an existing node at, or close to (as applicable to the application), the detected node multi-touch, that node will be selected, as shown in 308. Whether the node was just selected by the node multi-touch, or created and selected, the user can move the node freely as summarised by process 312, and further elaborated in the process description of FIG. 3Ba.
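A minimal sketch of decision point 306 and processes 308/310, with invented names and example thresholds (the one-second hold comes from the text; the selection radius is our assumption):

```python
import math
from dataclasses import dataclass

HOLD_TIME = 1.0        # example hold duration from the text (one second)
SELECT_RADIUS = 20.0   # pixels; "at or close to" the touch (our assumption)

@dataclass
class Node:
    pos: tuple
    selected: bool = False

def on_node_event(touch_pos, nodes):
    """Decision 306 and processes 308/310 from FIG. 3Aa: select an existing
    node near the touch, or create and select a new one."""
    for node in nodes:
        if math.dist(touch_pos, node.pos) <= SELECT_RADIUS:
            node.selected = True                    # existing node: select (308)
            return node
    node = Node(pos=touch_pos, selected=True)       # create and select (310)
    nodes.append(node)
    return node
```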
  • the node position on the touch-screen will always track the position of the finger performing the multi-touch, as shown by process 318. However, a decision 320 as to whether to perform normal node movement or margin-based node movement depends on whether the node is within a screen margin.
  • a screen margin may be used - although it is not necessarily visible - in the situation where the background to a node (such as a map) occupies a larger area than can be seen on the touch-screen.
  • the node remains under the controlling finger, but the background moves in the opposite direction to the margin as described by 324. Therefore if a node is moved into a defined margin area at the left of the touch-screen, the user's controlling finger may stop there, in which case the background will move to the right.
  • Margins will be relevant at the top, bottom, left and right sides of a rectangular touch-screen, although, for example, margins near the corners could act in two directions. Such scrolling of a background can occur for as long as the user's finger is in contact with the screen and within a margin.
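The normal-versus-margin movement decision (318, 320, 324) might be sketched as below; the margin size and the `background.scroll` call are invented for the example:

```python
MARGIN = 40  # pixels from each screen edge (illustrative value)

def move_node(node, finger_pos, background, screen_w, screen_h):
    """Process 318: the node always tracks the finger. Decision 320: inside
    a margin, the background scrolls opposite to the margin side (324), so
    the node progresses across the background; corners scroll in two axes."""
    node.pos = finger_pos
    x, y = finger_pos
    scroll_x = 1 if x < MARGIN else -1 if x > screen_w - MARGIN else 0
    scroll_y = 1 if y < MARGIN else -1 if y > screen_h - MARGIN else 0
    if scroll_x or scroll_y:
        # e.g. finger in the left margin -> background moves to the right
        background.scroll(scroll_x, scroll_y)
```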
  • Decision logic 314 determines whether any other operation is performed on the node after movement; an immediate time-out occurs if the node-controlling finger is removed from the touch-screen, in which case the node stays at the position it was last at (where the finger was removed) and is deselected. However, if the controlling finger is detected as staying in the same position, but still in contact with the touch-screen for a certain time (for example two seconds), a user interface will be brought up to enable the user to assign additional information for the node, as shown in process 316.
  • FIG. 3Ca shows normal node movement on a touch-screen surface 102 (in this case a satellite navigation system), with 328 representing the position of a user's finger, which is evidently outside the margin 330.
  • the node is moved with the finger, for example as shown by arrow 326.
  • FIG. 3Da shows margin-based node movement, where finger position 328 is within the margin 330 - in this case the top margin.
  • Arrow 332 represents the movement or scrolling of the background as a consequence of the presence and position of the finger controlling the node being within the margin.
  • FIG. 4a shows some of the information which could be attributed to a node after creation and selection - particularly a node representing a geographic location on a map.
  • Latitude and longitude would be important for a location node - this would be received from a mapping or geographic model application once the position on a touch-screen has been established.
  • a start date (e.g. for a rendezvous) and a finish date could be useful for geographic nodes.
  • elevation or altitude, perhaps with minimum and maximum elevation/altitude, would allow a three dimensional location definition. Therefore the altitude of a surveying point on a mountain could be usefully defined, an altitude above ground could be defined, or a depth below the sea could be added to latitude and longitude data.
  • a name would also be useful - especially when sharing the node on a network, for a shared reference by those users with permissions to see specific nodes.
  • Certain information if required, could be attributed to a node by the operating system, such as the user and time at which a node was created. Miscellaneous information or notes about a location could also be added by a user.
  • visual information such as node icon and label color, size and shape could be defined, or these could be defined or defaulted by the application itself.
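Collecting the attributes listed above into a single record, a hypothetical node structure might look like the following; every field name is ours, chosen to mirror the text:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GeoNode:
    latitude: float
    longitude: float
    name: Optional[str] = None           # shared reference on a network
    start_date: Optional[str] = None     # e.g. a rendezvous time
    finish_date: Optional[str] = None
    elevation_m: Optional[float] = None  # altitude, or negative for depth
    min_elevation_m: Optional[float] = None
    max_elevation_m: Optional[float] = None
    created_by: Optional[str] = None     # attributed by the operating system
    created_at: Optional[str] = None
    notes: str = ""                      # miscellaneous user information
    icon_color: str = "default"          # visual preference or app default
```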
  • Line segments - with reference to FIG. 5a - are defined with the use of two fingers 502 (in this case a thumb and middle finger) touching a touch-screen 102 together, and for a minimum duration (for example 2 seconds) without being moved.
  • This action will create or select two nodes, at the locations of the two touches on the touch-screen, between which will be drawn a line which may be straight, or otherwise, as appropriate to the application. Of the two nodes created, one will always be determined as the key node, and indicated to the user as the key node.
  • the key node for a line - or any node-based shape with more than one node - allows the selection of the whole shape, and operations on that shape - for example movement of the shape with all of its nodes, instead of the movement of just one node.
  • node 604 is marked as the key node, although other ways can be used to indicate the key node from other nodes (the shape nodes), including shape, contrast, color or fill.
  • The creation method of a line segment by a touch-screen device user is shown in FIG. 7a.
  • User inputs to the touch-screen will be monitored for multi-touch node gestures by process 302.
  • the logic of 702 will determine whether two fingers are touching the touch-screen and remaining still for greater than a minimum duration (for example 1.5 seconds), and if so the process 704 will create a line.
  • Process 704 will create a line firstly by selecting the two nodes specified by the finger positions. If a node already exists at a finger position (or within a defined radius), the node will be selected. If there is no pre-existing node at the position, a node will be created at the given screen position and the node will be selected.
  • a line will be drawn on the touch-screen between the two selected nodes.
  • the line will be straight, but there are various possibilities with regard to line type, which may for example be an arc, a spline, an arrow or other common line type.
  • the allocation of the key node can vary according to application and user defaults and preferences, for instance the first touch to be made during creation of the line, or the highest and furthest left on the touch-screen.
  • line sub-nodes will automatically be created for the purpose of line division as described below.
  • the logic of 706 will detect whether the two fingers remain on the created nodes for a minimum time-out period after the creation of the line. If not (for instance the fingers are removed immediately upon the drawing of the line) the line will be completed without any additional user information added to the line at that time. If the fingers do still remain at the time-out, process 708 will allow the user to add additional information for the line via a user interface.
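  • A minimal Python sketch of the line-creation logic of processes 702 and 704 might look as follows (the snap radius is an assumed value; the hold time is the example duration given above):

      import math

      HOLD_SECONDS = 1.5     # example minimum stationary duration
      SNAP_RADIUS = 20.0     # assumed radius for re-using an existing node

      def select_or_create(nodes, pos):
          """Re-use a node within SNAP_RADIUS of pos, else create one."""
          for n in nodes:
              if math.dist(n, pos) <= SNAP_RADIUS:
                  return n
          nodes.append(pos)
          return pos

      def try_create_line(touches, hold_time, nodes):
          """Two stationary touches held long enough create a line."""
          if len(touches) != 2 or hold_time < HOLD_SECONDS:
              return None
          a = select_or_create(nodes, touches[0])
          b = select_or_create(nodes, touches[1])
          # Key-node allocation is application-dependent; here: the first touch.
          return {"nodes": [a, b], "key_node": a}

      nodes = [(100.0, 100.0)]
      line = try_create_line([(105.0, 103.0), (300.0, 250.0)], 2.0, nodes)
      print(line)   # the first touch snaps to the pre-existing node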
  • FIG. 8a illustrates some of the information which could be added to a line; as for node information, some information can be added automatically by the operating system, such as Creation User and Creation Time. Other information can be graphical preferences from a user or defaults.
  • Some information may be defined by the user, such as Line Segment Name and Information.
  • Other information, including node positions on screen and node position representation (such as latitude/longitude/elevation in a mapping application), will be inherited from the node data relating to the nodes at either end of the line. This is advantageous to a touch-screen device user, since if one end of a line is not in the desired location, the user can select that node and move it in the manner described for node movement.
  • Selection of a whole line will consist of touching and holding, or double-tapping the key node of the line.
  • FIG. 9a demonstrates how a line may be sub-divided via node-based multi-touch.
  • a created line will automatically have one or more sub-nodes (as indicated by 906) created along its length in addition to the line-end defining nodes. These extra line sub-nodes may be visible or invisible to the user, and may be regularly spaced or irregularly spaced. If a user selects one of these line sub-nodes (as per the selection of any node), the node can be moved relative to the line-end nodes. This will bend the line, and create two line segments out of one, sharing a common end-line node (which was previously a sub-node of the original line). New line segments created by line sub-division will have their own new sub-nodes created.
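  • The sub-division mechanism could be sketched as below - a simplified model, assuming a single mid-point sub-node per segment (all names are illustrative):

      def midpoint(p, q):
          return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

      def make_segment(a, b):
          """A line segment with one hidden mid-point sub-node."""
          return {"ends": (a, b), "sub_nodes": [midpoint(a, b)]}

      def divide_at_sub_node(segment, new_pos):
          """Dragging a sub-node promotes it to a shared end node of two new
          segments, each of which receives its own fresh sub-nodes."""
          a, b = segment["ends"]
          return [make_segment(a, new_pos), make_segment(new_pos, b)]

      seg = make_segment((0, 0), (100, 0))
      left, right = divide_at_sub_node(seg, (50, 30))   # bends the line upwards
      print(left["ends"], right["ends"])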
  • FIG. 10a illustrates a composite line 1004, created from line segments 1006.
  • a composite line could for example represent a border, boundary or road on a map, or a complex line on a drawing application.
  • FIG. 11a illustrates a composite line made up of line segments 1106, which are arrows, and therefore define directionality.
  • Such a composite line can be used to show flow or direction of travel.
  • a particular use of this type of line would be in defining a route which does not necessarily rely on existing geographic locations. This would be beneficial for example to define a planned wilderness route, journey on water or the flight plan for an aircraft.
  • FIG. 12a illustrates the use of line segments to create not only a composite line, but a corridor.
  • a corridor is a central line such as 1210 with associated parallel lines (1206 and 1208) which represent desired limits related to the central line.
  • One use for this is the definition of air corridors or sea lanes for touch-screen devices used for navigation or navigation planning.
  • Corridors can be created by the user of a touch-screen device by specifying distance offsets from a central line, as part of the user-added line information process 708 previously described.
  • the offset lines, which may be to one or both sides of the central line, are drawn by the application, under control of the user. Selection of the key node shown in FIG. 12a by 1212 will allow the selection, movement and data entry of the complete corridor.
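  • For a straight central segment, the offset lines of a corridor amount to a perpendicular offset either side of the segment, as in this illustrative sketch (function and parameter names are assumptions):

      import math

      def corridor(p, q, offset):
          """Offset lines either side of the central segment p-q at the
          user-specified perpendicular distance."""
          dx, dy = q[0] - p[0], q[1] - p[1]
          length = math.hypot(dx, dy)
          nx, ny = -dy / length, dx / length    # unit normal to the segment
          left = ((p[0] + nx * offset, p[1] + ny * offset),
                  (q[0] + nx * offset, q[1] + ny * offset))
          right = ((p[0] - nx * offset, p[1] - ny * offset),
                   (q[0] - nx * offset, q[1] - ny * offset))
          return left, right

      print(corridor((0, 0), (10, 0), 2.0))   # limits 2 units either side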
  • Frequently - such as in the examples given - corridors will not be defined by a touch-screen user, but from a central database, the nodes and related information about which will be sent over a network.
  • Navigation restricted corridors can therefore be provided centrally, which can be overlaid on a touch-screen display with local information - such as GPS position and planned route of the local user.
  • the key is the use of nodes to represent the required information exchanged between users and data sources. If a single node of a line is selected and deleted, a line consisting of only two points will disappear, but leave the remaining node as a singularity point.
  • For a line of more than two nodes, the deleted node will disappear, but leave a line made up of the remaining nodes; neighbouring nodes will be joined if an intermediate node is deleted. If a key node is deleted, another node will become the key node, using a rule applicable to the application, such as the closest node to the deleted node becoming the key node.
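  • These deletion rules might be sketched as follows, using the closest-node rule for key-node reassignment (one possible rule, as noted above):

      import math

      def delete_node(line_nodes, key_node, victim):
          """Remove a node from a polyline. Neighbouring nodes join because
          the remaining nodes keep their order; a two-node line collapses to
          a singularity point. A deleted key node is replaced by the
          remaining node closest to it."""
          remaining = [n for n in line_nodes if n != victim]
          if len(remaining) < 2:
              return remaining, (remaining[0] if remaining else None)
          if victim == key_node:
              key_node = min(remaining, key=lambda n: math.dist(n, victim))
          return remaining, key_node

      nodes, key = delete_node([(0, 0), (50, 10), (100, 0)], (0, 0), (0, 0))
      print(nodes, key)   # key node reassigned to (50, 10)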
  • Circles can be defined via multi-touch with the use of two fingers; one finger will be static at the center of the desired circle on the touch-screen, while the other will move from the center outwards until the required radius is achieved.
  • FIG. 13a illustrates the method, in which the user is using one finger 1304 (in this case the left thumb) to define the center of the circle.
  • Another finger 1302 is in motion over touch-screen 102 and away from the thumb. The finger which is moving away can move in any direction from the center, since it represents a point on the circumference of the circle, and therefore it can move back in towards the center to make the circle smaller.
  • once the circle is the required radius, finger 1302 is removed.
  • Finger 1304 is also removed if no further operation is required on the circle; however, it can be left to retain selection of the circle to move it or add information.
  • FIG. 14a shows the result of the circle drawing operation with the circle still selected, since both the centre node 1402 and the radius node 1404 are visible.
  • the centre node and the radius node are shaped or colored differently to be able to distinguish them, and the radius node will become invisible when the circle is no longer selected.
  • the circle circumference 1406 is visible where it is on the touch-screen of the multi-touch enabled device 202.
  • a label 1408 has been given to the circle after creation.
  • some zoom in and zoom out touch-screen buttons 1410 are shown, since the multi-touch gesture defined for circle drawing is similar to the widely used zoom multi-touch gesture, and such buttons would allow zoom to be performed in addition to circle creation.
  • An application which uses the circle drawing method and apparatus defined here would not be compatible, in the same mode, with the hitherto common pinch/zoom multi-touch gesture.
  • FIG. 15a describes how a circle drawing method can be implemented. If multi-touch detection process 302 has determined that a multi-touch operation has been initiated and logic 1504 detects that one finger is still and the other is moving away from the still finger, a dynamic circle will be drawn. The circle will be centred on the center node (created by the still finger) and have a radius determined by the distance between the center node and the radius node (created by the moving finger). The center node is always defined as being the key node for the circle shape. Once both fingers are still for a set duration (for example two seconds), logic 1508 will declare a timeout, and if both fingers are still present at that event, will keep the circle selected and initiate the user-added circle information process 1512.
  • Logic block 1508 will activate logic block 1510 if only one finger is present after the timeout, and thereafter the radius can be changed until the radius node finger is removed. However, if only the center node finger is present, the circle will remain selected, and can be moved around like any singularity node described by FIG. 3Aa and FIG. 3Ba (including margin-based node movement). Circle definition and movement will be completed upon the centre node finger being removed from the touch screen, although other end events are possible such as a further timeout upon the centre node finger being still.
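  • The two-touch discrimination used by logic 1504 - and its relationship to the line and rectangle gestures described elsewhere - could be sketched like this (the movement threshold is an assumed value):

      import math

      def classify_two_touch(delta0, delta1, moved=5.0):
          """Classify a two-finger gesture from how far each touch has moved
          since it began (deltas in pixels; threshold assumed)."""
          m0 = math.hypot(*delta0) > moved
          m1 = math.hypot(*delta1) > moved
          if m0 and m1:
              return "rectangle"   # both fingers moving (decision logic 1904)
          if m0 or m1:
              return "circle"      # one still, one moving (logic 1504)
          return "line"            # both stationary (logic 702)

      def dynamic_circle(center, radius_finger):
          """Circle centred on the still finger; the radius tracks the moving
          finger. The center node is the key node of the shape."""
          return {"center": center, "radius": math.dist(center, radius_finger)}

      print(classify_two_touch((0.0, 1.0), (30.0, 40.0)))    # -> "circle"
      print(dynamic_circle((100.0, 100.0), (160.0, 180.0)))  # radius 100.0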
  • Selection of a circle for movement occurs through the touching of the centre node of a circle. This will allow its movement, and also display a radius node on the circumference of the circle, which can also be changed while the circle is selected. Double-tapping of the center node will select the circle for the addition of information as in process 1512, since the center node is also the key node.
  • Deletion of a radius node will result in the deletion of the circle, and the center node becoming a singularity node. Deletion of the whole circle will occur if either the center node is deleted, or if the shape is selected by double-tap of the center node and a deletion option being selected by the user.
  • FIG. 16a presents some examples of information which can be attributed to a circle.
  • the position of the circle on the screen (and representational position for example longitude and latitude) will automatically be attributed to the circle where applicable, and operating system information such as user and date/time can automatically be added. Similar to single nodes and lines, elevation and shape formatting information can be added. It is important to note that since the circle is node-based, information about it can be easily shared between applicable devices. Therefore circles defined by one user can - if enabled - be displayed on touch-screen devices of other users.
  • a rectangle can be defined via multi-touch with the use of two fingers, both of which will move across the touch-screen - initially away from each other.
  • each finger will create, select and control the motion of a node representing a diagonally opposite corner of the rectangle.
  • FIG. 17a shows how a rectangle is defined on a touch-screen 102.
  • the user is using one finger 1702 (in this case the left thumb) to define the position and motion of the bottom right corner node of the rectangle and another finger 1704 to define the position and motion of the top left corner node.
  • Both fingers must be in motion over the touch-screen for the touch-screen device to differentiate between rectangle creation and both the creation of a circle (one finger moving and one static) and a line (two static fingers).
  • Two fingers from different hands may be used to define rectangle corners.
  • definition of a rectangle may be made by controlling the bottom-left and top-right corner nodes, relative to the user.
  • FIG. 18a shows the result of the rectangle definition on touch-screen device 202.
  • a rectangle with a node at each of the four corners is displayed to the user; two of these nodes (1802 and 1804) are the nodes created and moved by the fingers of the user until the desired size and position of rectangle 1806 was achieved.
  • Zoom in and zoom out touch-screen buttons 1410 are shown, since the multi-touch gesture defined for rectangle drawing is similar to the widely used zoom multi-touch gesture, and such buttons would allow zoom to be performed in addition to rectangle creation.
  • the allocation of the key node can vary according to application and user defaults and preferences, for instance the first touch to be made during creation of the rectangle, or the highest and furthest left on the touch-screen. Note that although two nodes are used to create the rectangle, four nodes will be created - one for each corner - of which one will be designated as the key node. One or more sub-nodes will also be created along each side of the rectangle (similar to the creation of lines). These are for creating new nodes on the rectangle shape if selected by the user; for example after the creation of the rectangle, the user may decide to drag the middle of one side outwards to create a pentagon shape.
  • After monitoring for a multi-touch event (process 310) and detecting a multi-touch, where two fingers are detected which are both moving, and moving apart (decision logic 1904), process 1906 is initiated. This initially creates four corner nodes - one of which is the key node.
  • the logic will interpret which node is highest according to the touch-screen orientation, and designate one of the nodes covered by a finger as a top corner node; the other node under a finger will be designated a bottom corner node.
  • the process will also determine which of the corner nodes is furthest left with respect to the current touch-screen orientation, and designate it as a left node; the other node under a finger will become a right node. In this manner one node under a finger will become a top-left node or a top-right node, while the other will become a bottom-right node or bottom-left node, respectively.
  • these two nodes will be moved by the user in the same manner as for any singularity node (including normal node movement and margin-based node movement), and may even swap as top/bottom or left/right nodes.
  • the other two corner nodes (not under a finger) will move in sympathy with the other two corner nodes, for instance if the top-left and bottom-right nodes are being controlled by the user, the process will position the top-right node at the same vertical position as the top-left, and at the same horizontal position as the bottom-right node.
  • the bottom-left node will be placed at the same vertical position as the bottom-right node and at the same horizontal position as the top-left node.
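  • The sympathetic movement of the two corners not under a finger reduces to sharing coordinates with the two controlled corners, for example:

      def derived_corners(top_left, bottom_right):
          """Given the two finger-controlled corners, place the other two:
          the top-right shares its vertical position with the top-left and
          its horizontal position with the bottom-right, and vice versa."""
          (lx, ty), (rx, by) = top_left, bottom_right
          return (rx, ty), (lx, by)   # top-right, bottom-left

      print(derived_corners((10, 20), (110, 220)))   # ((110, 20), (10, 220))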
  • the rectangle nodes will continue to track the finger positions as described until there is a timeout event determined by decision logic 1908.
  • a timeout will occur immediately if both fingers are removed from the touch-screen (decision 1910), and in this case the corners of the rectangle will be where the fingers were last detected and the rectangle will become complete. If one finger is still present, then that node may be moved until the finger is removed from the touch-screen, as depicted by process 1914. If both fingers remain still for a short period (such as 2 seconds) without moving, a user interface process 1912 will be brought up to enable the user to add information about the shape created. At any time after creation (as for any shape) a double-tap of the key node of the shape will also allow shape information to be added by the user.
  • With rectangle creation using the above method, once a rectangle has been created it will be classed as a shape made up of four nodes and will not be treated specifically as a rectangle after creation. However, continued classification as a rectangle could also occur, in which case further size editing could be made by touching diagonal corner nodes at the same time.
  • FIG. 20a shows examples of information which could be recorded about a rectangle, although this information is also common to any other shape with multiple nodes (any polygon). It can be seen that much of the information potentially underlying a shape is based upon the identification of the node positions which define the shape. This may be position on the touch-screen and a reference position represented by the node (such as a geographic latitude, longitude and elevation). Some information may be created automatically by the operating system or application, such as creation date/time and user. Other elements may be format information which may be a combination of application and user selections and defaults.
  • a triangle can be defined via multi-touch with the use of three fingers, all three of which will initially remain stationary on the touch-screen.
  • a quadrilateral shape can similarly be defined via the use of four fingers remaining stationary on the touch-screen.
  • FIG. 21a shows three stationary fingers 2104 held against touch-screen 102. The three finger positions define nodes, which determine where the three corners are.
  • FIG. 22a shows a triangle 2204 resulting from the touching of the multi-touch device 202 to create the nodes 2206. Whether or not the node points are normally visible, one of the points will be designated as the key-node.
  • the multi-touch device is a drawing application which has a pre-defined triangle shape selected as denoted in selection area 2202, and in this case the nodes are not visible.
  • FIG. 23a shows the mechanism required to create multi-touch, node-based triangles and quadrilaterals, including selection.
  • Process 302 determines whether a multi-touch event has occurred, and the logic of 2304 determines whether a triangle or quadrilateral is being defined. This is established by determining whether three fingers or four fingers respectively are touching the touch-screen at the same time. If it is determined that this is the case and a minimum duration passes without the fingers moving (for example one second), a node will be created at the screen position below each finger, and these nodes will be joined to create a triangle or quadrilateral shape (a code sketch of this n-finger pattern follows below).
  • The end of time-out process 2308 (such as one second) where not all three fingers are in contact with the touch-screen, or an immediate time-out when the user removes all three fingers from the touch-screen, will result in the completion of the triangle.
  • a user interface will be presented to the user to define additional information in process 2310 - such as illustrated in FIG. 20a.
  • one or more additional (typically hidden) sub-nodes will be created along the length of each side, so that the user can efficiently change the triangle into another polygon by dragging one or more sub-nodes off the line to divide it. Also (as per all other node-based lines and shapes) if the key node of the triangle is selected by double-tap after creation as a polygon, the shape will be selected by the user for movement of the whole shape together, or for the addition or editing of information desired.
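  • The n-finger pattern shared by triangle and quadrilateral creation (and by the pentagon creation described below) could be sketched as follows; ordering the corners by angle around their centroid is an added assumption, to avoid self-intersecting sides, and is not something stated above:

      import math

      def create_polygon(touches, hold_time, min_hold=1.0):
          """n stationary fingers (n >= 3) held for min_hold seconds create
          an n-sided polygon with one corner node per finger."""
          if len(touches) < 3 or hold_time < min_hold:
              return None
          cx = sum(x for x, _ in touches) / len(touches)
          cy = sum(y for _, y in touches) / len(touches)
          nodes = sorted(touches, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
          # One possible key-node rule: the top-most, then left-most corner.
          key = min(nodes, key=lambda p: (p[1], p[0]))
          return {"nodes": nodes, "key_node": key}

      tri = create_polygon([(0, 0), (100, 0), (50, 80)], 1.2)
      print(len(tri["nodes"]), tri["key_node"])   # 3 corners; key node (0, 0)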
  • a pentagon can be defined via multi-touch with the use of five fingers, all five of which will initially remain stationary on the touch-screen.
  • FIG. 24a shows five stationary fingers 2404 held against touch-screen 102. The five finger positions define nodes, which determine where the five corners are.
  • FIG. 25a shows a pentagon resulting from the touching of the multi-touch device 202 with corners of the pentagon defined by nodes 2504, 2506, 2508, 2510 and 2512. Whether or not the node points are normally visible, one of the points will be designated as the key-node, which in FIG. 25a is marked as 2506.
  • the multi-touch device shows a real estate application with a map.
  • the shape created by the user - which in this case is a pentagon - represents the geographic area of interest for a search of properties meeting criteria already defined.
  • Property 2502 is an example of nine properties for sale which match the user's criteria, and in this case it has a banner 2514 summarizing the address and market price. Note the hidden sub-node 2516; such sub-nodes are always created when any line or polygon side is created, and there can be more than one between two nodes.
  • If sub-node 2516 is selected by the user by (in this case) touching the center of the line between node 2506 and node 2508, that sub-node will become visible (if not already visible) and can be dragged from its initial location, which will create a new node, and create a hexagon from the pentagon, as shown in FIG. 27a. It can be seen that node 2702 has become a node to replace the previous sub-node 2704. The effect of this on the real estate application is to enlarge and detail the geographic area of search, and it can be seen that one new property 2706 has appeared as relevant to the search.
  • FIG. 26a shows the mechanism required to create multi-touch, node-based pentagons, including selection.
  • Process 302 determines whether a multi-touch event has occurred, and the logic of 2604 determines whether a pentagon is being defined. This is established by determining whether five fingers are touching the touch-screen at the same time. If it is determined that this is the case and a minimum duration passes without the fingers moving (for example one second), a pentagon node will be created at the screen position below each finger, and these nodes will be joined to create a pentagon shape. The end of a time-out (such as one second) where not all five fingers are in contact with the touch-screen, or an immediate time-out when the user removes all five fingers from the touch-screen will result in the completion of the pentagon.
  • a user interface will be presented to the user to define additional information - such as illustrated in FIG. 20a.
  • If the key node of the pentagon is selected by double-tap after creation as a polygon, the shape will be selected by the user for movement of the whole shape together, or for the addition or editing of desired information in process 2610.
  • a device with a multi-touch interface is required, as indicated by 2902 on FIG. 29a.
  • the output signals from the multi-touch interface in response to user touches are fed to an Input / Output processor 2906 to interpret.
  • the processor will determine whether a multi-touch event relating to nodes, node-based lines, node-based circles or any node-based polygon has occurred. If so, the relevant event and data - such as the positions of the node(s) on the multi-touch device - will be communicated to the Central Processing Unit 2908.
  • the Central Processing Unit will perform the required calculations and processing to create shapes and perform node movements - including margin-based node movements - some of which will call on data from Memory 2910.
  • Node and shape data may be shared on a network under control of a Communications module 2912.
  • Node, line, circle and polygon information may be displayed under control of the Input / Output Processor 2906, on the Display 2904.
  • the Display module 2904 and Multi-touch interface 2902 are combined in one unit - a touch-screen on which nodes, lines and shapes appear where commanded by the user - under the user's fingertips.
  • the Display module 2904 and Multi-touch interface 2902 do not have to be the same; a multi-touch pad could be used as the input device with a conventional screen used for display to the user.
  • FIG. 30a shows how node-based point, line and area information may be exchanged between different devices and computers via a network.
  • networks combining different communications links and different servers and databases could be used, depending on the application.
  • a tablet computer 3002, a large touch-screen device 3004, a personal computer 3006 (which does not have to have a touch-screen or be multi-touch enabled), a smart phone 3008 and a satellite navigation system 3010 are shown communicating node-based point, line and area information via a network.
  • the information being provided by some or all of the devices is processed and stored at a central server with database 3012.
  • the central server will share information as requested and required by the devices' applications.
  • the link 3014 represents a one-to-one communication of node- based point, line and area information between two users with suitable apparatus, and shows that a centralised information distribution system is not necessarily required. Peer-to-peer and small clusters of users can also share information.
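  • No wire format is prescribed for this exchange; purely as an illustration, a single node could be serialised for the network like this (every field name is an assumption):

      import json

      def node_message(node_id, lat, lon, name, user):
          """One plausible message for sharing a node between devices."""
          return json.dumps({
              "type": "node",
              "id": node_id,
              "position": {"lat": lat, "lon": lon},
              "name": name,
              "created_by": user,
          })

      print(node_message(42, 51.5074, -0.1278, "Rendezvous", "user-a"))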
  • node-based definition of line segments and area in a drawing application would be an efficient means of drawing lines and shapes.
  • While there are applications and user interfaces which will draw lines and (through this) shapes directly while following a user-controlled pointer, and there are also applications coming from the mouse-type point-and-click tradition by which a shape type can be selected and then drawn using a digit like a computer mouse, there is not currently a means of drawing via nodes to represent corners directly from multi-touch gestures.
  • a second application for node-based multi-touch area definition is defining areas for a particular function on a touch-screen itself.
  • One example is a picture or window within a screen - where most of a screen is being used for one application, such as web browsing, but other user-drawn shapes, such as rectangles, independently show another application - for example a TV display, a messaging screen, or a news-feed.
  • A third use of node-based point, line and area definition is potentially in the field where a defined line or area on the touch-screen or touchpad interacts with a map or space model, to establish real, geographic coordinate-based points, lines, corridors and areas. This would bring benefit to navigation systems for personal and vehicle navigation. It would also enhance mapping and geographic information systems, including agriculture planning and real-estate management. In combination with application-specific real estate data from central databases, searches and data could be provided about locations and areas specifically defined and customised by a client using a tablet computer or smart phone.
  • Nodes can be easily selected, moved and have information added. Key nodes allow a whole entity such as a shape, line or corridor to be moved, or edited as one operation, and sub-nodes allow the division of lines or shape sides into smaller, linked lines.
  • two or more pointing devices in contact with a touch-sensitive area can be used to create rectangles, circles, triangles, quadrilaterals, pentagons and other polygons through the recognition and manipulation of the key nodes which define them. From these areas and shapes, greater-sided polygons can be quickly created. Any of the resultant entities can be used to define shapes or areas on the touch screen itself, geographic areas or zoom boundaries on an associated map which may be shared with other devices or applications, or windows which may run multiple independent applications which are visible to the user at the same time on a touch screen.
  • “finger” is used to refer to a user's finger or other touch pointing device or stylus, and “fingers” represents a plurality of these objects, including a mixture of human fingers, thumbs and artificial pointing devices.
  • “node” is taken to mean a singular two dimensional point on a two dimensional surface, plane, screen, image, map or model, which can persist in logical usage and memory after being initially defined, and can have additional data linked to it.
  • “node-based” is used to define an entity created from, or pertaining to, one or more nodes.
  • “cardinal nodes” refers to a minimum set of nodes which will completely define the size and form of a specific shape.
  • “shape” is used to describe a pure geometric form, whereas “area” is an application of a shape to a specific two dimensional space such as on a finite screen or a map which may be larger than can be seen with a finite screen space.
  • a “window” is taken to mean a shape specifically used to interact with a specific computer system or embedded device software application, which can run independently of other software applications or background tasks.
  • the term “window” is analogous to its use in conventional computer systems where graphical windows and windowing systems are typically driven by a mouse or other such controller. However windows in the context of the following description are those created and controlled with one or more fingers, and which can be any geometric shape, whereas conventional computer system windows are rectangular.
  • “touch screen” is used to mean an integrated touch input mechanism and display surface, such as that typically present on a tablet computer.
  • a circle is a symmetrical geometric shape with a constant radius, which can also represent a circular area and bound a circular window. Circles can be defined via multi-touch with the use of two fingers; one finger will be static at the center of the desired circle on the touch screen, while the other will move from the center outwards until the required radius is achieved.
  • FIG. 1b illustrates the method, in which the user is using one finger 104 (in this case the left thumb) to define the center of the circle. Another finger 106 is in motion over touch screen 102 and away from the thumb. The finger which is moving away can move in any direction from the center, since it represents a point on the circumference of the circle, and therefore it can move back in towards the center to make the circle smaller. However once the circle is the required radius, finger 106 is removed. Finger 104 is also removed if no further operation is required on the circle; however, it can be left to retain selection of the circle to move it or add information.
  • FIG. 2b shows the result of the circle drawing operation with the circle still selected, since both the centre node 203 and the radius node 204 are visible.
  • the radius node is invisible when the circle is no longer selected, but the circle circumference 206 is visible where it overlaps with the touch screen area of the multi-touch enabled device 202.
  • a label 208 has been given to the circle after creation in this example.
  • Zoom in and zoom out buttons 210 are shown, since the multi-touch gesture defined for circle drawing is similar to the widely used zoom multi-touch gestures, and such buttons would allow zoom to be performed in addition to circle creation.
  • An application which uses the circle drawing method and apparatus defined here would not be compatible, in the same mode, with the common pinch/zoom multi-touch gesture.
  • FIG. 3b describes how a circle drawing operation can be implemented. If multi-touch detection process 302 has determined that a multi-touch operation has been initiated and logic 304 detects that one finger is still and the other is moving away from the still finger, a dynamic circle will be drawn by process 306. The circle will be centered on the center node (created by the still finger) and have a radius determined by the distance between the center node and the radius node (created by the moving finger). The center node is defined as being the key node for the circle shape. Once both fingers are still for a set duration (for example two seconds), logic 308 will declare a timeout, and if both fingers are still present at that event, will keep the circle selected and initiate the user-added circle information process 312.
  • This process will present a user interface applicable to the application, for example to name the circle.
  • Logic block 308 will activate logic block 310 if only one finger is present after the timeout, and thereafter the radius can be changed until the radius node finger is removed. However, if only the center node finger is present, the circle will remain selected, and the whole circle can be moved around by following the touch path of the key node finger across the screen of the touch screen device as in process 314. Circle definition and movement will be completed upon the center node finger being removed from the touch screen, although other end events are possible in different embodiments such as a further timeout upon the center node finger being still.
  • Re-selection of a circle after creation occurs through the touching of the center node (key node) of the circle. This allows movement of the circle as previously described, and also displays one or more radius nodes on the circumference of the circle, which can also be changed while the circle is selected.
  • double-tapping of the center node will select the circle for an operation selected from a menu adjacent to the circle, such as the addition of information, circle deletion, node position editing or conversion to an ellipse.
  • the same operations are available from a persistent touch of a center node for more than a second.
  • The radius node or nodes will also be visible, and subsequent radius node selections by touch in one embodiment will allow operations on the radius node, such as movement, naming or deletion via a menu adjacent to the radius node. Deletion of a radius node will result in the deletion of the circle, although in one embodiment the center node will remain.
  • Deletion of the whole circle will occur in various embodiments if either the center node is deleted, or if the whole shape is selected by key node selection and the subsequent selection of circle deletion by the user.
  • An ellipse is a curved geometric shape which has a varying curve around its circumference, which can also represent an elliptical area and bound an elliptical window. Ellipses are created as a subsequent operation to the creation of a circle.
  • FIG. 4Ab presents an example of a circle which has been selected prior to conversion to an ellipse.
  • four radius nodes 406, 408, 410 and 412 are displayed to the user, evenly distributed around the circumference (404), along with the center of the circle (402). These nodes, which together completely define the shape, are collectively referred to as the cardinal nodes of the shape. Every shape, including a circle, has a minimum set of cardinal nodes.
  • although circles only require two cardinal nodes to be created - the center node and one radius node - in order to be able to create an ellipse from a circle, more nodes are required to be shown.
  • the opposite radius nodes (410 and 412, or 406 and 408) move together but in opposite directions - either both towards the center node or both away from the center node - when one of the pair is moved, to give a symmetrical modification.
  • the arrows shown next to nodes 412 and 406 show an example of an elongation of the shape to create an ellipse.
  • FIG. 4Bb shows the resultant ellipse from the circle elongation indicated by the arrows in FIG. 4Ab. Note that this created ellipse used the embodiment of symmetrical node manipulation, since the ellipse is symmetrical, and also because node 418 has been used to elongate the ellipse to such an extent that its opposite cardinal node has moved off the touch screen. In another embodiment of circle manipulation to create an ellipse, all radius nodes are fully independent, allowing separate skewing in the different axes.
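  • In the symmetrical-manipulation embodiment, moving one radius node is equivalent to changing one semi-axis of the shape, since the opposite node mirrors the movement. A sketch, assuming a center-plus-semi-axes representation:

      def move_radius_node(shape, axis, new_distance):
          """Symmetric cardinal-node manipulation: dragging one radius node
          moves its opposite equally in the opposite direction, so only the
          corresponding semi-axis length changes."""
          shape["rx" if axis == "x" else "ry"] = new_distance
          return shape

      circle = {"center": (0, 0), "rx": 50, "ry": 50}   # a circle: rx == ry
      ellipse = move_radius_node(circle, "x", 90)        # elongate along x
      print(ellipse)   # {'center': (0, 0), 'rx': 90, 'ry': 50}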
  • FIG. 5b describes how an already-created shape can be modified via manipulation of its cardinal nodes.
  • Continuous shape selection monitoring 502 occurs until there is recognition 504 that a node-based shape has been selected, for instance by the user touching or double-tapping the key node.
  • a process 506 displays all the cardinal nodes to the user where the nodes are within the touch screen display area, and allows any of the cardinal nodes to be selected and moved.
  • Decision logic 508 detects any touches made to the cardinal nodes, which in the shown embodiment incorporates a timeout limit.
  • the shape changing process 510 modifies shape and geometry according to the movement of cardinal nodes, such as the elongation of a circle to create an ellipse.
  • process 512 terminates the shape modification process, which in one embodiment results in the cardinal nodes being made invisible to the user and the shape being de-selected. In another embodiment, the shape is not de-selected until another shape is selected.
  • a rectangle is a four-sided geometric shape containing four right angles, which can also represent a rectangular area and bound a rectangular window.
  • a rectangle can be defined via multi-touch with the use of two fingers, both of which will move across the touch screen - initially away from each other in one embodiment. During the rectangle definition, each finger will create, select and control the motion of a node representing a diagonally opposite corner of the rectangle. Node movement, and therefore creation, is with respect to the borders of a touch screen or other multi-touch input device, and is relative to the associated axes.
  • FIG. 6b shows how a rectangle is defined on a touch screen 102.
  • the user is using one finger 602 (in this case the left thumb) to define the position and motion of the bottom right corner node of the rectangle and another finger 604 to define the position and motion of the top left corner node. Both fingers must be in motion over the touch screen for the touch screen device to differentiate between rectangle creation and the creation of a circle (one finger moving and one static). Two fingers from different hands may be used to define rectangle corners. Also definition of a rectangle may be made by controlling the bottom-left and top-right corner nodes, relative to the user.
  • FIG. 7b shows the result of the rectangle definition on touch screen device 202.
  • a rectangle with a node at each of the four corners is displayed to the user; two of these nodes (702 and 704) are the nodes created and moved by the fingers of the user until the desired size and position of rectangle 706 was achieved.
  • Zoom in and zoom out touch screen buttons 708 are shown, since this would be a way to zoom in and zoom out which is compatible with the embodiment described. That is to say that the rectangle creation method being described is incompatible with the widely used pinch and zoom multi-touch gesture in the same mode or application since the touch screen device logic would not be able to distinguish between whether the user wanted to zoom or define a rectangle.
  • the allocation of the key node of a rectangle can vary according to application and user defaults and preferences; in one embodiment the highest and furthest left on the touch screen with respect to the user becomes the key node. Note that although two nodes are used to create the rectangle, four nodes will be created - one for each corner. In one embodiment a center node will also be created at the geometrical center of the rectangle which may also be designated as the key node. In one embodiment, one or more sub-nodes will also be created along each side of the rectangle for the purpose of creating new nodes on the rectangle shape if selected by the user; for example after the creation of the rectangle, the user may decide to drag the middle of one side outwards to create a pentagon shape.
  • the creation of a rectangle would enable the user to create shapes on drawing applications, define areas on a mapping or geographic modelling application, and can be used to define specific screen areas or windows on a touch screen, such as for a picture-in-a-picture window or secondary application.
  • After monitoring for a multi-touch event (process 302) and detecting a multi-touch, where two fingers are detected which are both moving, and moving apart (decision logic 804), process 806 is initiated. This initially creates four corner nodes. The logic will interpret which node is highest according to the touch screen orientation, and designate one of the nodes covered by a finger as a top corner node; the other node under a finger will be designated a bottom corner node. The process will also determine which of the corner nodes is furthest left with respect to the current touch screen orientation, and designate it as a left node; the other node under a finger will become a right node.
  • In this manner one node under a finger will become a top-left node or a top-right node, while the other will become a bottom-right node or bottom-left node, respectively.
  • these two nodes will be moved by the user until they are at the final desired positions to define a rectangle, and may even swap as top/bottom or left/right nodes.
  • the other two corner nodes (not under a finger) will move in sympathy with the other two corner nodes, for instance if the top-left and bottom-right nodes are being controlled by the user, the process will position the top-right node at the same vertical position as the top-left, and at the same horizontal position as the bottom-right node.
  • the bottom-left node will be placed at the same vertical position as the bottom-right node and at the same horizontal position as the top-left node.
  • the rectangle nodes will continue to track the finger positions as described until there is a timeout event determined by decision logic 808. A timeout will occur immediately if both fingers are removed from the touch screen (decision 810), and in this case the corners of the rectangle will be where the fingers were last detected and the rectangle will become complete. If one finger is still present in one embodiment, then that node may be moved until the finger is removed from the touch screen, as depicted by process 814. If both fingers remain still for a short period (such as 2 seconds) without moving, a user interface process 812 will be brought up to enable the user to add information about the shape created.
  • the rectangle definition completion process 819 concerns the defining nodes being put into memory, including the finger-defined nodes, the other corner nodes, a center node (where applicable) and mid-point nodes along the rectangle sides (where applicable).
  • a key node will also be defined.
  • a double-tap of the key node of the shape will allow shape information to be added by the user, as well as allowing the movement of corner nodes, or the whole shape.
  • a decision block 816 recognises that the rectangle definition is on an application such as a map for which a combined pan and zoom operation is valid.
  • the area defined will form the border of a part of the existing map or view which will be magnified in a subsequent process 818 defined by FIG. 8Ab.
  • the rectangle area 706 created will form the basis of a new zoomed view.
  • FIG. 8Ab shows how automatic pan and zoom by the definition of a rectangle can be achieved. Initially there is a calculation 820 of the geometric center point of the newly created rectangle.
  • Where the display is a map or an earth model for which every point on the display represents a latitude/longitude coordinate, the map or earth model latitude and longitude for the corner nodes and the center node are retrieved so that the subsequent pan and zoom operations directly reference geographic points.
  • Another step 822 calculates the aspect ratio of the newly created rectangle, and then decision block 824 compares it with that of the whole display area (window or screen). If the aspect ratio of the newly created rectangle is greater than that of the original display area then the new view calculation 826 will represent the full width of the rectangle, with extra area added at top and bottom accordingly. If however the aspect ratio of the newly created rectangle is less than or equal to that of the whole display area, task 828 will calculate a new view representing the maximum height, with additional area shown to the left and right.
  • a completion task 830 centers the new view at the center of the display area, which effectively implements an automatic pan operation. The same task also creates the display over the whole display area according to the previously calculated area derived from the aspect ratio, and this effectively implements an automatic zoom operation.
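  • The aspect-ratio logic of blocks 822 to 830 amounts to fitting the drawn rectangle inside a view that matches the display's aspect ratio, centred on the rectangle's centre; for example:

      def zoom_view(rect_w, rect_h, rect_center, disp_w, disp_h):
          """New view after rectangle-based pan and zoom: keep the whole
          rectangle visible while matching the display aspect ratio."""
          rect_ar, disp_ar = rect_w / rect_h, disp_w / disp_h
          if rect_ar > disp_ar:
              view_w = rect_w               # full rectangle width,
              view_h = rect_w / disp_ar     # extra area at top and bottom
          else:
              view_h = rect_h               # full rectangle height,
              view_w = rect_h * disp_ar     # extra area at left and right
          return {"center": rect_center, "width": view_w, "height": view_h}

      # A wide rectangle on a 4:3 display gains extra height above and below.
      print(zoom_view(400, 100, (500, 300), 800, 600))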
  • Subsequent pan and zoom operations using rectangle definition on a touch screen may be performed to iteratively zoom in on detailed information in a display.
  • a triangle is a three-sided, closed geometric shape, which can also represent a triangular area and bound a triangular window.
  • a triangle can be defined via multi-touch with the use of three fingers, all three of which will initially remain stationary on the touch screen.
  • FIG. 9b shows three stationary fingers 904 held against touch screen 102. The three finger positions define nodes, which determine where the three corners are.
  • FIG. 10b shows a triangle 1004 resulting from the touching of the multi-touch device 202 to create the nodes 1006. Whether or not the node points are normally visible, in one embodiment one of the points will be designated as the key-node.
  • the multi-touch device is a drawing application which has a pre-defined triangle shape selected as denoted in selection area 1002, and the nodes are not visible.
  • Since the application or operating system of the multi-touch device is pre-programmed to recognise three fingers - where stationary for a defined time - as the initiating event for creating a triangle, a pre-selection of shape type is not required.
  • the process of FIG. 11b shows the mechanism required to create multi-touch, node-based triangles, including selection.
  • Process 302 determines whether a multi-touch event has occurred, and the logic of 1104 determines whether a triangle is being defined. This is established by determining whether three fingers are touching the touch screen at the same time.
  • If it is determined that this is the case and a minimum duration passes without the fingers moving (for example one second), a triangle node will be created at the screen position below each finger, and the subsequent task 1106 will join the nodes to create a triangle shape.
  • decision logic 1108 will determine whether the three fingers are still present after an additional timeout which I contemplate to be approximately one second. If the fingers are still present, the user will be requested to add additional information for the triangle, such as a name or color, prior to completion of the triangle shape.
  • one or more additional sub-nodes will be created along the length of each side of the triangle and made visible, so that the user can change the triangle into a greater-sided polygon by dragging one or more sub-nodes off a side line to divide it.
  • re-selection of the triangle is achieved by the user double- tapping the triangle's key node. In another embodiment, the touching of any side or node of the triangle will select the whole shape.
  • the triangle can be moved, with the whole shape following the user finger. Re-selection also allows the addition or editing of information desired, such as name or color.
  • a single node of a previously created triangle is selected by touching an apex of the triangle, and the node is highlighted and visible to the user.
  • the node is moved by following the user's finger touch on the touch screen. This operation alters the shape of the triangle accordingly.
  • a user interface is presented to the user allowing additional information to be attributed to that node, including the labelling of the node and the setting of visibility of the node.
  • a user interface is presented to the user allowing the deletion of the selected node and the subsequent modification of the triangle into a straight line between the remaining two defining nodes.
  • a quadrilateral is a four-sided geometric shape which does not have to contain right angles, which can also represent a quadrilateral area and bound a quadrilateral window.
  • a quadrilateral shape is defined in a similar manner to the triangle, but via the use of four fingers remaining stationary on the touch screen at the same time for a period I contemplate to be approximately one second. Unlike for the definition of a rectangle, a quadrilateral does not take screen borders or orientation into account.
  • FIG. 12b shows four stationary fingers 1204 held against touch screen 102. The four finger positions define nodes, which determine where the four corners are.
  • FIG. 13b shows a quadrilateral resulting from the touching of the multi-touch device 202 to create the nodes 1306.
  • the multi-touch device is a drawing application which has a pre-defined quadrilateral shape selected where the nodes are not visible.
  • Since the application or operating system of the multi-touch device is pre-programmed to recognise four fingers - where stationary for a defined time - as the initiating event for creating a quadrilateral, a pre-selection of shape type is not required.
  • the process of FIG. 14b shows the mechanism required to create multi-touch, node-based quadrilaterals, including selection.
  • Process 302 determines whether a multi-touch event has occurred, and the logic of 1404 determines whether a quadrilateral is being defined. This is established by determining whether four fingers are touching the touch screen at the same time. If it is determined that this is the case and a minimum duration passes without the fingers moving (for example one second), a quadrilateral node will be created at the screen position below each finger, and the subsequent task 1406 will join the nodes to create a quadrilateral shape. In one embodiment decision logic 1408 will determine whether the four fingers are still present after an additional timeout which I contemplate to be approximately one second. If the fingers are still present, the user will be requested to add additional information for the quadrilateral, such as a name or color, prior to completion of the quadrilateral shape.
  • one or more additional sub-nodes will be created along the length of each side of the shape and made visible, so that the user can change the quadrilateral into a greater-sided polygon by dragging one or more sub-nodes off the existing side line to divide it.
  • re-selection of the quadrilateral is achieved by the user double-tapping the quadrilateral's key node.
  • the touching of any side or node of the quadrilateral will select the whole shape.
  • the quadrilateral can be moved, with the whole shape following the user finger. Re-selection also allows the addition or editing of information desired, such as shape name or shape color.
  • a single node of a previously created quadrilateral is selected by touching a corner of the quadrilateral, and the node is highlighted and visible to the user.
  • the node is moved by following the user's finger touch on the touch screen. This operation alters the shape of the quadrilateral accordingly.
  • a user interface is presented to the user allowing additional information to be attributed to that node, including the labelling of the node and the setting of visibility of the node.
  • a user interface is presented to the user allowing the deletion of the selected node and the subsequent modification of the quadrilateral into a triangle drawn between the remaining three defining nodes.
  • a pentagon is a five-sided, closed geometric shape, which can also represent a pentagonal area and bound a pentagonal window.
  • a pentagon can be defined via multi-touch with the use of five fingers, all five of which will initially remain stationary on the touch screen.
  • FIG. 15b shows five stationary fingers 1504 held against touch screen 102. The five finger positions define nodes, which determine where the five corners are.
  • FIG. 16b shows a pentagon resulting from the touching of the multi-touch device 202 with corners of the pentagon defined by nodes 1604, 1606, 1608, 1610 and 1612. In one embodiment, whether or not the node points are normally visible, one of the points will be designated as the key-node.
  • the multi-touch device shows a real estate application with a map.
  • the shape created by the user - which in this case is a pentagon - represents the geographic area of interest for a search of properties meeting criteria already defined.
  • Property 1602 is an example of nine properties for sale which match the user's criteria, and in this case it has a banner 1614 summarizing the address and market price.
  • Note the hidden sub-node 1616 which is an example of sub-nodes created in various embodiments of pentagon definition.
  • If sub-node 1616 is selected by the user by (in this embodiment) touching the center of the line between node 1606 and node 1608, that sub-node will become visible (if not already visible) and can be dragged from its initial location, which will create a new node, and create a hexagon from the pentagon, as shown in FIG. 19b. It can be seen that node 1902 has become a node to replace the previous sub-node 1616. The effect of this on the real estate application is to enlarge and detail the geographic area of search, and it can be seen that one new property 1906 has appeared as relevant to the search.
  • Process 302 determines whether a multi-touch event has occurred, and the logic of 1704 determines whether a pentagon is being defined. This is established by determining whether five fingers are touching the touch screen at the same time. If it is determined that this is the case and a minimum duration passes without the fingers moving (for example one second), a pentagon node will be created at the screen position below each finger, and these nodes will be joined to create a pentagon shape, as shown by task 1706.
  • additional information includes options to provide a name to the pentagon and to define the color of the pentagon.
  • one or more additional sub-nodes will be created along the length of each side of the shape and made visible, so that the user can change the pentagon into a greater-sided polygon by dragging one or more sub-nodes off the existing side line to divide it.
  • re-selection of the pentagon is achieved by the user double-tapping the pentagon's key node. In another embodiment, the touching of any side or node of the pentagon will select the whole shape.
  • the pentagon can be moved, with the whole shape following the user finger. Re-selection also allows the addition or editing of information desired, such as shape name or shape color.
  • a single node of a previously created pentagon is selected by touching a corner of the pentagon, and the node is highlighted and visible to the user.
  • the node is moved by following the user's finger touch on the touch screen. This operation alters the shape of the pentagon accordingly.
  • a user interface is presented to the user allowing additional information to be attributed to that node, including the labelling of the node and the setting of visibility of the node.
  • a user interface is presented to the user allowing the deletion of the selected node and the subsequent modification of the pentagon into a quadrilateral drawn between the remaining four defining nodes.
  • a hexagon is a closed six-sided geometric shape, which can also represent a hexagonal area and bound a hexagonal window.
  • a polygon is a closed multi-sided geometric shape, which can also represent a polygonal area and bound a polygonal window.
  • a hexagon can be defined via multi-touch with the use of six fingers, and more generally a polygon with greater than six sides can be defined with the equivalent number of fingers remaining stationary on the touch screen for a period of approximately one or two seconds.
  • FIG. 18b shows five stationary fingers 1804 from one hand held against touch screen 102, and another finger from another hand doing the same. The six or more finger positions define nodes, which determine where the six or more corners are.
  • FIG. 19b shows a resulting polygon produced on multi-touch device 202.
  • Process 302 determines whether a multi-touch event has occurred, and the logic of 2004 determines whether a polygon is being defined. This is established by determining whether six or more fingers are touching the touch screen at the same time. If it is determined that this is the case and a minimum duration passes without the fingers moving (for example one second), a polygon node will be created at the screen position below each finger, and these nodes will be joined to create a polygon shape with an equivalent number of sides, represented by task 2006.
  • additional information includes options to provide a name to the polygon and to define the color of the polygon.
  • one or more additional sub-nodes will be created along the length of each side of the shape and made visible, so that the user can change the polygon into a greater-sided polygon by dragging one or more sub-nodes off the existing side line to divide it.
  • re-selection of the polygon is achieved by the user double- tapping the polygon's key node. In another embodiment, the touching of any side or node of the polygon will select the whole shape.
  • the polygon can be moved, with the whole shape following the user finger. Re-selection also allows the addition or editing of information desired, such as shape name or shape color.
  • a single node of a previously created polygon is selected by touching a corner of the polygon, and the node is highlighted and visible to the user.
  • the node is moved by following the user's finger touch on the touch screen. This operation alters the shape of the polygon accordingly.
  • a user interface is presented to the user allowing additional information to be attributed to that node, including the labelling of the node and the editing of the icon for that node.
  • This embodiment is illustrated in FIG. 21b, where finger 2104 has selected a node, and a drop-down menu has appeared, allowing various options on the selected node.
  • a user interface is presented to the user allowing the deletion of the selected node and the subsequent modification of the polygon into a polygon of one less number of sides, drawn between the remaining defining nodes.
  • This embodiment is illustrated in FIG. 22b, whereby the left-most node 2202 which was present in FIG. 21b has been deleted, resulting in a polygon of one less side.
  • For node-based shape and area definition, a node-based shape is initiated as illustrated in FIG. 23Ab.
  • finger 2302 and finger 2304 initially touch at point 2306 on the multi-touch input area of multi-touch device 202.
  • the type, position and dimensions of the node-based shape are then completed as illustrated in FIG. 23Bb.
  • a rectangle is created by finger 2308 and finger 2310 drawing apart from each other to define four corner nodes, of which node 2312 is the top-right node.
  • a user interface in the form of a menu list is displayed to the user, as shown in FIG. 23Cb.
  • the menu list 2316 appears adjacent to, or in front of the shape 2314 which was created.
  • a user finger 2318 selects the specific application which is to be run in the window from the options in menu list 2316.
  • the selected application is run in the newly created window 2320.
  • a window for the running of software applications can be created from any of the node-based shapes, including circles, ellipses, rectangles, triangles, quadrilaterals, pentagons, hexagons and greater-sided polygons.
  • the node-based shape is first created and then the possible application options to run in the window appear inside the shape which will become the window, as shown in FIG. 24Ab (rather than the possible application options being presented on a list menu).
  • the created shape 2402 is filled with application icons (two of which are labelled 2404) representing all the applications which can be run in the window.
  • a different approach to the application of node-based shapes and areas to the creation and editing of windows on a touch screen display is characterised by embodiments in which the application is selected first, and the shape, position and size of the window are defined afterwards.
  • the application is selected by the user as shown in FIG. 25Ab.
  • the choice of application is determined from list menu 2504 appearing within the touch screen area 2502.
  • User finger 2506 selects an application from list menu 2504.
  • the means of application selection can be a different method, such as selection from a group of application icons.
  • the chosen application will then appear in a window 2508 of preset or default position, shape and size on the touch screen for that application, as shown in FIG. 25Bb.
  • the calculator application defaults to a rectangular window in the shown position and size.
  • the nodes of the window (one of which is indicated as node 2510) are shown to the user, and these nodes may be moved by user touches.
  • FIG. 25Cb shows the top-left node 2512 and the bottom-right node 2514 of a rectangular window being moved apart by simultaneous user touches. In this case, since all four corner nodes of a rectangle are linked, the other nodes including the top-right node 2516 also move accordingly.
  • the movement of the corner nodes apart designates a stretching of the window, and changing of the nodes occurs until a timeout of approximately two seconds after the last node is touched.
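The linked movement of rectangle corner nodes described above (FIG. 25Cb) amounts to deriving the two untouched corners from the two dragged ones. A minimal sketch, assuming (x, y) tuples for node positions:

```python
def stretch_rectangle(top_left, bottom_right):
    """Given the two corner nodes being dragged apart (as in FIG. 25Cb),
    recompute all four linked corners of a node-based rectangular window."""
    (x0, y0), (x1, y1) = top_left, bottom_right
    top_right = (x1, y0)       # the linked corner (node 2516) follows automatically
    bottom_left = (x0, y1)
    return [top_left, top_right, bottom_right, bottom_left]

# Example: dragging the corners apart stretches the whole window.
print(stretch_rectangle((10, 10), (310, 210)))
```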
  • the resultant window 2518 is shown in FIG. 25Db.
  • Multiple node-based windows can co-exist as shown in FIG. 26b.
  • the individual windows can be selected and moved, with a more recently created or selected window appearing in front of previously created or selected windows, as shown by more recently selected calculator application window 2602 appearing in front of previously created video conference application window 2608.
  • Multiple window shapes containing multiple application types and instances can be visible at the same time, such as triangular clock application window 2612, real estate map application 2604 and internet messaging application window 2606.
  • Windows containing groups of application icons for selection by the user as shown by window 2610 can also appear with other windows.
  • any individual window can be stretched, moved, re-shaped or deleted.
  • a window can be active, and display data to the user, in a similar way to conventional computer windowing systems controlled by a mouse or similar device.
  • Decision block 2702 ascertains whether an application-first selection has been made (selection by the user of a desired application). If it has been, decision block 2704 determines whether a default or option is set which only allows whole-screen applications. If this is the case, the selected application will run full-screen, as typically occurs in current, prior art smartphone and tablet applications, and no windowing environment will be initiated. If the whole-screen option has not been set, default window creation process 2708 will create a window of the default shape, size and position for the selected application.
  • Application creation task 2710 populates the default window with the selected application, which is followed by customisation task 2712 which allows the user to modify window size and shape by the movement of nodes. Finally for this method, positioning task 2714 allows the new window to be moved by the user to the desired position.
  • If decision block 2702 does not detect an application-first event and decision block 2706 does not detect a windows-first event, the touch screen device will continue monitoring for either of these events.
  • If a windows-first event is detected by decision block 2706, window creation task 2718 will be initiated.
  • the window creation task will create a shape according to the nodes and gestures defined by the user.
  • Decision logic 2720 will then decide the method by which the user will define the application which will run in the new window, according to options and defaults. Either a list menu task 2722 or an icon selection task 2724 will be initiated to prompt the user to select an application for the window.
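The application-first and windows-first branches of FIG. 27b can be summarised as an event loop. The sketch below is illustrative only; every helper function is a hypothetical placeholder standing in for the named task or decision block:

```python
# Placeholder actions standing in for the tasks named in FIG. 27b.
def run_full_screen(app):            print(f"run {app} full-screen")
def create_default_window(app):      return {"app": app, "shape": "default"}
def create_window_from_nodes(nodes): return {"nodes": nodes, "app": None}
def choose_app(window):              return "calculator"   # menu 2722 or icons 2724
def populate(window, app):           window["app"] = app   # task 2710
def customise_by_nodes(window):      pass                  # task 2712
def position_by_user(window):        pass                  # task 2714

def window_environment_loop(events, whole_screen_only=False):
    for kind, payload in events:
        if kind == "app_first":                          # decision block 2702
            if whole_screen_only:                        # decision block 2704
                run_full_screen(payload)
            else:
                window = create_default_window(payload)  # process 2708
                populate(window, payload)
                customise_by_nodes(window)
                position_by_user(window)
        elif kind == "windows_first":                    # decision block 2706
            window = create_window_from_nodes(payload)   # task 2718
            populate(window, choose_app(window))         # logic 2720 -> 2722/2724
        # any other event: keep monitoring (the loop simply continues)

window_environment_loop([("app_first", "calculator"),
                         ("windows_first", [(0, 0), (100, 80)])])
```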
  • In order to detect nodes, and to create shapes, areas and windows from those nodes, a touch screen module is required, as indicated by 2802 on FIG. 28b.
  • the output signals from the touch screen module in response to user touches are fed to a control module 2804 to interpret.
  • the control module will determine whether a multi-touch event relating to nodes, node-based shapes, node-based areas or node-based windows has occurred. If so, the control module will process the information to create or modify the relevant entity. Node, shape, area or window data for storage will be routed to the memory module 2812, with the memory module also serving as the source of these data to the control module where required by the application 2818 or operating system 2814 running which requires the information.
  • the communications module 2810 sends or receives the shape, area or windows node data, and supplementary information associated with that entity. This information is passed to or from the control module which may also route the data to or from the touch screen module or the memory module.
  • FIG. 29b shows how node-based shape, area and window information may be exchanged between different devices and computers via a network. Several networks combining different communications links and different servers and databases could also be used, depending on the application.
  • a tablet computer 2902, a large touch screen device (such as a touch screen television) 2904, a personal computer or workstation 2906 (which does not have to have a touch screen or be multi-touch enabled), a smartphone 2908 and a satellite navigation system 2910 are shown communicating node-based shape, area and window information via a network.
  • the information being provided by some or all of the devices is processed and stored at central servers or databases 2912.
  • the central servers or databases will share information as requested and required by the devices' applications and operating systems, including node-based shape, area and window information, for example a circular area on a map.
  • the link 2914 represents a one-to-one communication of node-based shape, area and window information between two users with suitable apparatus, and shows that a centralized information distribution system is not necessarily required. Peer-to-peer and small clusters of users can also share node-based shape, area and window information.
  • Another real benefit of node-based area definition is realised where a defined area on the touch screen interacts with a map, space model or image, to establish real, geographic or coordinate-based areas. This would bring benefit to navigation systems for personal and vehicle navigation. It would also enhance mapping and geographic information systems, including agriculture planning and real-estate management. In combination with application-specific real estate data from central databases, searches and data could be provided about locations and areas specifically defined and customised by a client using a tablet computer or smartphone.
  • a third application for node-based multi-touch window definition is defining areas for a particular function on a touch screen itself.
  • One example is a picture or window within a screen where most of the screen is being used for one application - such as web browsing - while other user-drawn shapes, such as rectangles, independently show another application - for example a TV display, a messaging screen, or a news-feed.
  • Useable windows of different shapes become available, allowing operating systems and users to work with windows other than rectangles for the first time. There is also, for the first time, a way to create a window before defining the application which will run in it.
  • node-based multi-touch operations defined can be used to create and control a whole windowing environment on touch screen and other multi-touch enabled devices. Following creation, windows can be selected, moved, stretched and deleted under intuitive user direction and efficient operating system control.
  • the node-based method lends itself to the efficient communication of shape, area and window data.
  • One area which has not yet been served by efficient multi-touch gestures is node-based point, line, route or corridor definition via a multi-touch surface, yet such gestures could afford considerable use to users of mobile touch-screens and even large-scale touch-sensitive surfaces.
  • the contact of one finger with a touch-sensitive area is termed a node, and one node on the touch screen can define location and information relating to a point of definition on a background such as a map or image.
  • two touch screen nodes can be used to create line segments, and route segments on a background.
  • multi-segment, composite, lines, routes and corridors can be created, and any primitive or composite entity can be manipulated through the movement of the nodes defining them.
  • Various combinations of the following node-based point, line, route and corridor definitions, manipulations and edits can be used in an embodiment.
NODE-BASED POINT DEFINITION, SELECTION & MOVEMENT
  • The definition of a point on a touch screen device is prior art, especially when implemented as a single touch held for a short duration.
  • In a mapping service such as Google Maps or Apple Maps, such a touch results in a point of definition, marked with an information bubble or pin symbol.
  • a prolonged touch (typically between 0.3 seconds and 1.0 seconds) to a touch-screen device would also create a point of definition on a map, similar to the prior art method of Google Maps and Apple Maps implemented on touch screens, except that the point of definition would be marked with a symbol.
  • the symbol would be a filled, colored shape such as a square.
  • a node refers to a persistent defined touch point on the touch screen, which may be used to produce various entities, including points of definition, on a background application such as a map or image application. Therefore when used for defining points of definition, nodes are used in the creation of them, and effectively one node entity exists for one point of definition entity.
  • nodes are not specific to points of definition, since nodes are used in the creation of other entities. For example two nodes are used in the definition of a line segment, and therefore two nodes are associated with every line segment. Multiple nodes can exist on a touch screen concurrently.
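One plausible data model for the node and entity relationships just described is sketched below. The field names are assumptions, since the source does not prescribe a representation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A persistent defined touch point (a node), as described above."""
    x: float                            # screen position of the touch
    y: float
    label: Optional[str] = None         # optional user-given label
    latitude: Optional[float] = None    # filled in by a mapping background
    longitude: Optional[float] = None
    visible: bool = True                # nodes may be visible or invisible

@dataclass
class LineSegment:
    """Two nodes are associated with every line segment."""
    start: Node
    end: Node

# One node entity per point of definition; two nodes per line segment.
point = Node(120, 240, label="Rendezvous")
segment = LineSegment(Node(10, 10), Node(200, 150))
```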
  • FIG. 1c illustrates the creation and selection of a node and point of definition on a touch-screen surface 102, via a prolonged touch with a single touch implement 104 - in this case a finger.
  • FIG. 2c shows the result of a point of definition node creation touch on touch screen device 202, with the created point represented by square 204 in the position on the map or background image at which it was created. In one embodiment no label will be given to the created point.
  • a means to label the point is automatically given to the user, such as a virtual keyboard being provided on the touch screen display.
  • the application will provide an automatically generated number or label upon creation.
  • a longer duration touch than required for node creation, of approximately two seconds, will initiate a means to label the point of definition.
  • the selection of an existing point of definition will enable the user to be able to perform a labelling or re-naming operation on that point.
  • Legend 206 is an example of a point of definition label defined by the user.
  • the creation of a node-based point of definition also selects that node for immediate movement of the point of definition as indicated in FIG. 3c by the motion arrow 304.
  • the node and therefore associated point of definition will be moved along the surface of the touch screen 102 with the finger. Therefore in one embodiment, when a selected point of definition is moved, no panning of the screen or underlying map typically occurs; the node and associated point of definition move rather than the background, with the exception of node-based margin panning described below.
  • The result of a node-based point of definition creation and movement is shown in FIG. 4c.
  • the created node-based point of definition representation 404, on the touch-screen device 202 is shown.
  • the direction arrow 402 represents movement of the point of definition, equal in direction and distance on the touch-screen to the causal finger motion.
  • the selection of a node-based point of definition previously created is similar to the selection during creation of a node-based point of definition except that it requires the existence of a node-based point of definition at the location of the touch (or within a number of pixels of the touch). Therefore touching a node-based point of definition on the touch-screen and continuing to touch it will select that point of definition for an operation including moving the node around the touch screen.
  • the node-based point of definition is only required to be present to be able to be selected, and therefore the selection - even of invisible points - is possible.
  • the appearance of the selected point of definition is changed to denote that it is selected.
  • In another embodiment the contrast of a selected point of definition is inverted to denote selection, such as that shown in 404, and in a third embodiment the color of a node-based point of definition changes once selected.
  • FIG. 5Ac demonstrates a process for implementing node-based definition, selection and movement of points of definition on the touch-screen.
  • Process box 502 is a background task which monitors for a multi-touch operation; when a multi-touch operation is detected, the decision logic at 504 detects the specific multi-touch input of a touch device such as a finger touching the screen for a duration appropriate with the application (for example one second). If this node event is detected there is another decision point 506 which determines whether there is an existing node at the location of the touch on the screen. If there is no existing node, the node-based point of definition creation process 510 is initiated, which creates a node at the location being touched, and then selects the point of definition which has just been created.
  • If there is an existing node at, or close to, the detected node multi-touch, that node will be selected, as shown in 508. Whether the node was just selected by the node multi-touch, or created and selected in a combined operation, the user can move the node-based point of definition freely as summarised by process 512, and further elaborated in the process description of FIG. 5Bc.
  • the node-based point of definition position on the touch-screen will repeatedly track the position of the finger performing the touch as shown by process 518. However a decision 520 as to whether to perform normal node movement or margin-based node movement depends on whether the node, and associated point of definition, is within a screen margin.
  • a screen margin may be used - although it is not necessarily visible - in the situation where the background to a node, such as a map, occupies a larger area than can be seen on the touch-screen.
  • the node-based point of definition remains under the controlling finger, but the background moves in the opposite direction to the margin as described by 524. Therefore if a node-based point of definition is moved into a defined margin area of the left of the touch-screen, the user's controlling finger may stop there, in which case the background will move to the right.
  • margins will be relevant at the top, bottom, left and right sides of a rectangular touch-screen, although margins near the corners could, for example, act in two directions.
  • Such scrolling of a background can occur for as long as the user's finger is in contact with the screen and within a margin. If the finger is no longer in a margin, but still in contact with the touch-screen, normal node-based point of definition motion 522 will occur, with the node following the finger.
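Normal and margin-based node movement (processes 518 to 524) can be sketched as follows. The margin depth and pan step are assumed values, and the Background class is a hypothetical stand-in for the underlying map or image:

```python
MARGIN_PX = 40   # assumed margin depth; the text leaves the size to the application

class Background:
    """Stand-in for the map or image behind the nodes."""
    def __init__(self):
        self.offset_x = 0
        self.offset_y = 0
    def pan(self, dx, dy):
        self.offset_x += dx
        self.offset_y += dy

def move_node(node, finger_xy, screen_w, screen_h, background):
    """Decision 520: normal movement (522) vs margin-based movement (524)."""
    x, y = finger_xy
    dx = dy = 0
    if x < MARGIN_PX:               dx = +1   # finger in left margin: pan right
    elif x > screen_w - MARGIN_PX:  dx = -1   # right margin: pan left
    if y < MARGIN_PX:               dy = +1   # top margin: pan down
    elif y > screen_h - MARGIN_PX:  dy = -1   # bottom margin: pan up
    if dx or dy:
        background.pan(dx, dy)      # 524: background scrolls under the held node
    node["x"], node["y"] = x, y     # in both cases the node tracks the finger (518)

bg = Background()
node = {"x": 100, "y": 100}
move_node(node, (5, 100), 320, 480, bg)   # finger in left margin: background pans right
print(node, bg.offset_x, bg.offset_y)
```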
  • Decision logic 514 on FIG. 5Ac determines whether any other operation is performed on the node-based point of definition after movement; an immediate time-out occurs if the node controlling finger is removed from the touch-screen, in which case the node-based point of definition stays at the position it was last at - where the finger was removed from - and is deselected.
  • a user interface will be brought up to enable the user to assign additional information for the point of definition, as shown in process 516.
  • FIG. 5Cc shows normal node movement on a touch-screen surface 102 - in this case a satellite navigation system - with 528 representing the position of a user's finger, which is evidently outside the margin 530.
  • the node, and associated point of definition if applicable, is moved with the finger, for example as shown by arrow 526, without the background being panned.
  • FIG. 5Dc shows margin-based node movement, where finger position 528 is within the margin 530 - in this case the top margin.
  • Arrow 532 represents the movement or scrolling of the background as a consequence of the presence and position of the finger controlling the node being within the margin - in the opposite direction to the previous direction of travel across the screen by the node.
  • Margin-based node movement is possible where the area for which nodes are relevant is greater than the area shown by the screen. The effect will be to move the node further along the background in the desired direction. In this case the direction of movement of the background is opposite to the panning that would occur in, for example, Apple Maps when attempting to move a point of definition pin with the same gesture.
  • various embodiments relate to the movement of nodes underlying lines, routes and corridors. Therefore for example the movement of the end node of a multi-segment line can also use normal and margin-based node movement as described in FIGs. 5Ac, 5Bc, 5Cc and 5Dc.
  • FIG. 6c shows some of the information which could be attributed to a node after creation and selection - particularly a node representing a geographic location on a map.
  • Latitude and longitude would be important for a location node - this would be received from a mapping or geographic model application once the position on a touch-screen has been established.
  • start date and finish date could be useful for determining relevance.
  • elevation or altitude - perhaps with minimum and maximum elevation/altitude would allow a three dimensional location definition. Therefore the altitude of a surveying point on a mountain could be usefully defined, an altitude above ground could be defined, or a depth below the sea could be added to latitude and longitude data.
  • a name would also be useful - especially when sharing the node on a network, for a shared reference by those users with permissions to see specific nodes.
  • Certain information if required, could be attributed to a node by the operating system, such as the user and time at which a node was created. Miscellaneous information or notes about a location could also be added by a user.
  • visual information such as node icon and label color, size and shape could be defined, or these could be defined or defaulted by the application itself.
  • any information desired to be entered by the user would be available on a menu or form following a normal node selection of approximately 0.5 seconds. In another embodiment a menu or form would be presented to the user to complete following an extra-long selection period of more than one second.
  • FIG. 6c also shows that nodes and node data can be shared across a communication network, and that a node created on one device could be viewed either as a point of interest or an editable node on other devices.
  • Line segments - with reference to FIG. 7c - are defined with the use of two fingers; in this case a thumb 704 and middle finger 702 touching a touchscreen 102 together, and for a minimum duration (for example one second) without being moved.
  • This action will create or select two nodes, at the locations of the two touches on the touch-screen, between which will be drawn a line segment which in one embodiment will be straight, continuous, blue and thick.
  • Other embodiments relevant for different applications would produce line segment combinations of wavy, saw-tooth or straight type, with combinations of continuous, dotted or dashed style, in one of various common colors and thicknesses.
  • one of the two nodes created will be determined as the key node, and indicated to the user as the key node by a specific appearance.
  • Once a key node for a line segment is defined, selection of the whole line segment, and operations on that line segment, are possible by selecting the key node first - for example movement of the line segment with both of its nodes, instead of the movement of just one node.
  • node 804 is marked as the key node, although other means can be used to distinguish a key node from other nodes, including shape, contrast, color or fill.
  • The creation method of a line segment by a touch-screen device user is shown in FIG. 9c.
  • User inputs to the touch-screen will be monitored for multi-touch node gestures by process 502.
  • the logic of 902 will determine whether two fingers are touching the touch-screen and remaining still for greater than a minimum duration (for example 1 second), and if so the process 904 will create a line.
  • Process 904 will create a line firstly by selecting the two nodes specified by the finger positions. If a node already exists at a finger position (or within a defined radius), the node will be selected. If there is no pre-existing node at the position, a node will be created at the given screen position and the node will be selected.
  • a line segment will be drawn on the touch-screen between the two selected nodes.
  • the line will be straight, but there are various possibilities with regard to line type, which may for example be a wave, a saw-tooth, an arc, a spline, or other common line type, and of typical thickness and color possibilities found in drawing applications.
  • the allocation of a key node can vary according to application and user defaults and preferences, for instance the first touch to be made during creation of the line, or the highest and furthest left on the touchscreen.
  • line segment latent nodes may be automatically created for the purpose of line division as described later with the assistance of FIG. 11c.
  • the logic of 906 will detect whether the two fingers remain on the created nodes for a minimum time-out period after the creation of the line segment. If not (for instance the fingers are removed immediately upon the drawing of the line segment) in one embodiment the line will be completed without any additional user information added to the line at that time. In another embodiment the line segment will disappear if a minimum time-out period is not met with the creating fingers remaining substantially still. If the fingers do remain substantially still for an additional period following the timeout, for example 0.5 seconds, process 908 will allow the user to add additional information for the line segment via a user interface.
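Processes 902 to 906 suggest a create-or-select routine for the two nodes plus a stillness check. A minimal sketch, with an assumed snap radius for reusing existing nodes and dictionaries standing in for node and segment records:

```python
import math

NODE_SNAP_RADIUS_PX = 24   # assumed: reuse an existing node within this radius
HOLD_SECONDS = 1.0         # example minimum still duration from the text

def select_or_create_node(nodes, x, y):
    """Process 904, first step: reuse a nearby existing node, else create one."""
    for node in nodes:
        if math.hypot(node["x"] - x, node["y"] - y) <= NODE_SNAP_RADIUS_PX:
            return node
    node = {"x": x, "y": y}
    nodes.append(node)
    return node

def create_line_segment(nodes, touch_a, touch_b, still_duration):
    """Create a line segment between two stationary touches (logic 902 + 904)."""
    if still_duration < HOLD_SECONDS:
        return None                      # fingers did not stay still long enough
    a = select_or_create_node(nodes, *touch_a)
    b = select_or_create_node(nodes, *touch_b)
    # The first touch becomes the key node here; other allocations are possible,
    # as noted above (e.g. the highest and furthest left node).
    return {"start": a, "end": b, "key_node": a,
            "style": {"type": "straight", "color": "blue", "thick": True}}

nodes = []
segment = create_line_segment(nodes, (50, 60), (220, 140), still_duration=1.2)
print(segment)
```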
  • FIG. 10c illustrates some of the information which could be added to, and relevant to a line segment.
  • information can be added automatically by the operating system, such as Creation User and Creation Time.
  • Other information in various embodiments can be graphical preferences from a user, or from default values.
  • some information may be defined by the user, such as Line Segment Name and Information.
  • Other information including node positions on screen and node position representation (such as latitude/longitude/elevation in a mapping application) will be inherited from the node data relating to the nodes either end of the line segment. This is advantageous to a touch-screen device user, since if one end of a line segment is not in the desired location, the user can select that node and move it, with the effect that the line segment will be stretched, contracted or rotated in accordance with the motion of the node.
  • FIG. 13c denotes the touching of a multi-touch enabled touch screen device 202 with two fingers 1306 which are held still for a minimum amount of time as for the normal node-based creation of a line.
  • a representative distance 1302 is displayed next to line 1304, stating the straight-line horizontal distance between the points on the map defined by the nodes.
  • the displayed distance is calculated by performing typical distance-calculating navigational algorithms on the latitudes and longitudes represented by the nodes, which allows accurate real round-earth and great-circle calculations to be used.
  • the displayed distance is calculated by multiplying the calculated touch screen distance by the known scale of the map or image to the screen representation.
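Both distance embodiments can be illustrated briefly. The haversine formula below is one standard choice of distance-calculating navigational algorithm; the source does not name a specific formula, so this is an assumption:

```python
import math

EARTH_RADIUS_M = 6371000.0   # mean earth radius; adequate for a display read-out

def great_circle_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance, in metres, between two node positions
    expressed as latitudes and longitudes in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def scaled_screen_distance(p1, p2, metres_per_pixel):
    """The alternative embodiment: screen distance multiplied by the map scale."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) * metres_per_pixel

print(round(great_circle_m(51.5, -0.12, 48.85, 2.35)))   # London to Paris, ~343 km
print(scaled_screen_distance((100, 100), (400, 500), metres_per_pixel=50))
```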
  • the distance only remains while the fingers are in contact with the touch screen, and disappears when one or both fingers are removed, although in other embodiments the distance can persist.
  • In another embodiment no line is displayed, and only the distance measurement is shown to the user.
  • The distance measurement and display method shown in FIG. 13c can be used in applications other than electronic mapping; it can be used with earth or other planet models such as Google Earth, and with image backgrounds such as satellite imagery, x-ray images, architectural plans, geographic information system images, radio telescope images, photographs, video feeds and electron microscopy.
  • the distance calculated and displayed in one embodiment represents an angle or degree of arc rather than a scalar distance, which for example could be used in astronomy. Therefore this technology is of potential use to town planners, engineers, architects, microbiologists, planet scientists, radiographers, meteorologists, farmers, real estate agents, navigators and astronomers, to name a few, who are equipped with a suitable multi-touch enabled touch screen device.
  • a route segment may be defined. Route segment definition is similar to line segment definition, except that the direction or vector is defined by the order of finger touches and the creation order of the nodes.
  • FIG. 14c shows the creation of a route segment. Finger 1404 is touched to the touch screen and remains substantially motionless for a certain time before finger 1402 touches the touch screen. A typical value in use is estimated at approximately 0.5 seconds between touches.
  • an arrow 1406 will be drawn from the node under the first touch 1404 in the direction of, and up to, the node under the second touch 1402.
  • the arrow direction is in the direction of the first touch instead of the second touch, and other embodiments provide for the use of multiple line types, styles, thicknesses and colors as described for line definition, including the drawing of a normal line without an arrowhead.
  • scalar distances can be displayed next to a route segment as in FIG. 13c.
  • Since a route segment gives direction as well as quantity, vector quantities or differences can be displayed to the user as shown in FIG. 15c.
  • a horizontal value difference of touch screen pixels 1508 and a vertical value difference of screen pixels 1510 are shown between the node under touch 1504 and the node under touch 1502. Since the direction of the vector is known, the horizontal x difference value and the vertical y difference value have polarity or direction, so that in effect the node under touch 1504 is a temporary origin, and the node under touch 1502 is a vector relative to the former.
  • the axis values of quantities 1508 and 1510 are Eastings (distances to East or West) and Northings (distances to North or South) respectively in a mapping application.
  • the two dimensions for which distance is shown are based on two orthogonal axes relevant to an image.
  • the information presented is an angle and distance between the nodes using measurement quantities and axes appropriate to the scale and application.
  • FIG. 16c shows a method of creating a route or vector segment, and differentiating it from a line or other multi-touch gesture.
  • Activity 1602 monitors for the first touch and decision logic 1604 determines whether a second touch occurs within a specific time window. In this example a window of between 0.2 and 0.8 seconds is defined; more generally it is anticipated that the time between the two touches will have a minimum value of, for example, 0.1 seconds and a maximum value of, for example, two seconds for a route to be recognised, although other durations are possible.
  • Activity 1606 creates the vector line itself, for example a straight, black, thick arrow from the node under the first touch to the node under the second touch.
  • Decision logic 1608 determines whether vector distance is required to be displayed, and if this is the case it is displayed via process 1612. If not, finger presence after the drawing of the route is used to decide in 1610 whether user information is required to be added in activity 1614. Additional information which may be added in by the user includes name of segment and free text describing its significance.
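The timing window of decision logic 1604 separates routes from lines and from unrelated gestures. A minimal sketch using the example values from the text:

```python
MIN_GAP_S, MAX_GAP_S = 0.2, 0.8   # example window from the text; other durations work

def classify_two_touches(t_first, t_second):
    """Decision logic 1604: a route (vector) requires an ordered pair of touches
    whose separation falls inside the recognition window; near-simultaneous
    touches fall through to line-segment handling instead."""
    gap = t_second - t_first
    if MIN_GAP_S <= gap <= MAX_GAP_S:
        return "route"        # activity 1606 draws the arrow from first to second
    if gap < MIN_GAP_S:
        return "line"         # effectively concurrent: candidate line segment
    return "other"            # too slow: treat as two independent gestures

print(classify_two_touches(0.0, 0.5))   # -> 'route'
print(classify_two_touches(0.0, 0.05))  # -> 'line'
```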
  • FIG. 11c demonstrates how a line may be sub-divided via node-based multi-touch.
  • a created line or route segment will automatically have one or more latent nodes (as indicated by 1106) created along its length in addition to the line-end defining nodes.
  • these extra latent nodes may be visible or invisible to the user, and may be regularly spaced or irregularly spaced. If a user selects one of these latent nodes (as per the selection of any node, with a short touch typically of between 0.5 seconds and 1.0 second), the latent node can be moved relative to the line-end nodes 1104. This will bend the line, and create two line segments out of one, sharing a common end-line node (which was previously a latent node of the original line). In one embodiment new line segments created by line subdivision will have their own new latent nodes created.
  • FIG. 12c illustrates a process by which node-based line division may be performed, which will result in a multi-segment line or route.
  • Decision logic 1202 identifies whether the use of latent nodes is valid with current defaults and user selections.
  • a second decision logic 1204 determines which method to use. In the process shown two methods are possible depending on default or pre-definition by the user.
  • process 1206 becomes active and divides a line or route segment into N equal parts with N-1 equally spaced latent nodes, where N is a predefined integer value. For example if N has the value of 2, any line or route segment created will have one latent node half way between the two end nodes.
  • In the second method, where a latent node spacing interval is predefined, a latent node will be placed at each such interval along the segment, starting at one end node.
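Both latent-node placement methods can be sketched directly; the fixed-interval variant below is an interpretation of the bullet above:

```python
import math

def latent_nodes_equal_parts(a, b, n):
    """Process 1206: divide the segment from a to b into n equal parts,
    yielding n - 1 equally spaced latent nodes between the end nodes."""
    return [(a[0] + (b[0] - a[0]) * i / n, a[1] + (b[1] - a[1]) * i / n)
            for i in range(1, n)]

def latent_nodes_fixed_interval(a, b, interval_px):
    """Alternative method: place a latent node at every interval_px along the
    segment, starting from end node a. (If the length is an exact multiple of
    the interval, the last latent node would coincide with b.)"""
    length = math.hypot(b[0] - a[0], b[1] - a[1])
    steps = int(length // interval_px)
    return [(a[0] + (b[0] - a[0]) * (i * interval_px) / length,
             a[1] + (b[1] - a[1]) * (i * interval_px) / length)
            for i in range(1, steps + 1)]

print(latent_nodes_equal_parts((0, 0), (100, 0), 2))      # one midpoint latent node
print(latent_nodes_fixed_interval((0, 0), (100, 0), 30))  # nodes at 30, 60, 90 px
```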
  • FIG. 17c illustrates a composite line created from line segments 1708, 1710 and 1706.
  • Line segments 1708 and 1710 are joined by a common node 1704
  • line segments 1710 and 1706 are joined by a common node 1702.
  • Such a composite line could for example represent a border, boundary or road on a map, or a complex line in a drawing application. Since non-end nodes of a composite line are common between two line segments, the selection and moving of a node such as 1704 would result in a change of angle, and possibly length, of both line segments 1708 and 1710.
  • FIG. 18c shows a composite route created through the combining of multiple line segments 1802.
  • Such a composite line can be used to show flow or direction of travel.
  • a particular use of this type of line would be in defining a route which does not necessarily rely on existing geographic locations. This would be beneficial for example to define a planned wilderness route, journey on water or the flight plan for an aircraft.
  • FIG. 19c shows a method for the joining of different line or route segments to make a composite line or route.
  • Decision logic 1902 verifies whether a new segment has been created, and if so, decision logic 1904 determines whether either or both of the end nodes correspond with existing nodes. Even if the user desires to exactly match the position of an existing node, it is unlikely that the same central pixel on the touch screen can be selected, especially with fingers rather than a stylus or precision pointing device. Therefore said logic will accept close approximations to the position of an existing node as identical, and those nodes will be merged as defined in process 1906.
  • In one embodiment the positions of the original node and new node will be averaged, so that the new joining node of the two segments lies halfway between the precise centers of the two nodes.
  • the joining node position is taken to be the position of the original node, and in yet another embodiment the joining node position is taken to be the position of the new node.
  • Where there is user-defined or default data attached to the nodes, for example the altitude above sea level of the point represented by the node, this will be merged.
  • the data associated with the original node is used to populate the data fields of the joining node.
  • newer non-default data such as creation date from the new node will over-write the equivalent data of the existing node.
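The merge behaviour of process 1906 and the surrounding data rules can be sketched as follows; the join radius and the dictionary field names are assumptions:

```python
import math

JOIN_RADIUS_PX = 22   # assumed: roughly a fingertip's touch radius, per the text

def maybe_merge(existing, new):
    """Process 1906: merge a new end node into a close existing node.

    Position is averaged (one of the embodiments above); data fields come from
    the existing node, except newer non-default fields, which overwrite.
    """
    if math.hypot(existing["x"] - new["x"], existing["y"] - new["y"]) > JOIN_RADIUS_PX:
        return None                               # too far apart: keep both nodes
    merged = dict(existing)                       # start from the original's data
    merged["x"] = (existing["x"] + new["x"]) / 2  # halfway between node centers
    merged["y"] = (existing["y"] + new["y"]) / 2
    for key, value in new.items():
        if key not in ("x", "y") and value is not None and key not in existing:
            merged[key] = value                   # fill fields only the new node has
    if new.get("created") is not None:
        merged["created"] = new["created"]        # newer non-default data overwrites
    return merged

a = {"x": 100, "y": 100, "label": "A", "created": "2014-04-01"}
b = {"x": 110, "y": 108, "created": "2014-04-10"}
print(maybe_merge(a, b))   # merged node at (105, 104), label kept, date updated
```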
  • Process 1908 allows a key node for a composite line or route to be determined where neither or both segments have a key node already, since in one embodiment a composite, multi-segment line or route may not have more than one key node.
  • the key node of the original line segment, route segment, composite line or composite route is retained as the key node for the new composite line or route.
  • With this node joining method, it is not just new nodes on line or route segments which may be joined to existing nodes, but also two nodes of existing segments, where the user has selected an end node from a segment and moved it into close proximity with an existing end node of another segment or an existing point node.
  • the closeness of the centers of nodes when deciding whether they are to be joined depends somewhat on the application. However it is anticipated in general that nodes would not be joined unless the touch areas under a typical finger touch overlapped with an equivalent radius from the second node.
  • One operation is the deletion of a node, as shown in FIG. 20Ac, although the operation can also be performed on nodes which are not part of a composite line, composite route, line segment or route segment.
  • a user selects a node using a finger 2004.
  • a long press of more than approximately one second results in a node operation menu, one of whose operations is deletion of the node.
  • Upon selection of the delete operation, the node will be deleted and the symbol representing the node removed, as shown by area 2006 on FIG. 20Bc. If the node is an end node of a composite line or route, the end segment of the line or route which incorporated the node will be deleted also.
  • the segments attached to the deleted node will be deleted.
  • the composite line or route would be reformed by the joining together of the nodes either side of the deleted node with a new line or route segment.
  • the deletion of a node would result in two entities, such as a node and a composite line, with no automatic replacement or substitution of line segments.
  • the whole segment and both nodes will be deleted.
  • the node not deleted will remain as a point of designation node. If a key node is deleted, in one embodiment another node will be made the key node. In another embodiment the whole entity that the key node represents will be deleted.
  • any node can be defined as a key node by menu selection, which in one embodiment replaces the existing key node in that function.
  • a labelling means such as a virtual keyboard on the touch screen becomes active.
  • Another option for selection in one embodiment illustrated in FIG. 23Ac and FIG. 23Cc is the conversion of a line segment 2304 or composite line 2308 into a route segment 2302 or composite route 2310 respectively, and vice versa.
  • an option for selection is the means to reverse the direction of a composite route 2306. The said reversal means is also applicable to route segments in various embodiments.
  • FIG. 21c illustrates the use of line segments to create not only a composite line, but a corridor.
  • a corridor is a central composite line or composite route such as 2110, with associated parallel composite lines or composite routes, illustrated in FIG. 21c by 2106 and 2108, which represent desired limits related to the central line.
  • One use for this is the definition of air corridors or sea lanes for touch-screen devices used for navigation or navigation planning.
  • the end of a corridor will be a semi-circle centered on the end-node.
  • Corridors can be created by the user of a touch-screen device for a line segment by specifying distance offsets from a central line, as part of the user-added line information process 908 previously described in FIG. 9c.
  • a distance offset can be defined for a route segment as part of the user-added route segment information process 1614 described in FIG. 16c.
  • In one embodiment the corridor is offset equally on both sides of the existing line or route segment.
  • selection of the whole multi-segment line or route is first performed. In one embodiment selection is achieved by the selection of the key node of the multi- segment line or route. In another embodiment, selection is achieved by a long press of over approximately one second anywhere on the line or route. In a third embodiment selection is achieved by the long press of over approximately one second of any node on the multi-segment line or route.
  • the touch-screen device will present one or more options to the user, including the option to create a corridor. In one embodiment the user will subsequently provide numerical input to the touch-screen device representing a width of corridor.
  • a default or pre-selected value will automatically be used as the corridor width.
  • an active symbol is placed over the key node, or all nodes of the multi-segment entity. When a finger touch is made to an active symbol, the active symbol can be moved away from the node it is over for the user to indicate the width of the corridor required.
  • a dynamic circle will be created centered on the node and with radius defined by the active symbol to visually feed back to the user what the width of the corridor will be.
  • the active symbol and circle will disappear upon the user removing their finger.
  • the user's finger must remain substantially motionless for approximately 0.5 seconds before the corridor width is finalised and the active symbol is removed.
  • a corridor will be drawn around the central multi-segment line or route in accordance with the selected width.
  • the corridor area will be calculated by the union of segment rectangle area as shown for one segment by 2210 in FIG. 22Bc, and node circle area as shown for one node by 2208.
  • Segment rectangle area is the union of all areas made up of rectangles with length given by individual segment lengths and width given by the selected width.
  • Node circle area is defined by circles with a radius of the selected width, for all nodes in the multi-segment line.
  • the addition of node circles for the calculation of corridor area eliminates discontinuities of corridor shape shown by 2212 in FIG. 22Bc.
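Geometrically, the union of node circles (2208) and segment rectangles (2210) of a given width is exactly the set of points within that width of some segment of the line, so corridor membership can be tested with a point-to-segment distance; the semicircular corridor ends fall out naturally. A sketch, assuming (x, y) tuples:

```python
import math

def dist_point_to_segment(p, a, b):
    """Distance from point p to the segment a-b (all (x, y) tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)      # degenerate zero-length segment
    # Clamp the projection onto the segment; clamping gives the end circles.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def in_corridor(p, polyline, width):
    """True if p lies inside the corridor of the given width around the
    multi-segment line, i.e. inside the union of segment rectangles (2210)
    and node circles (2208)."""
    return any(dist_point_to_segment(p, a, b) <= width
               for a, b in zip(polyline, polyline[1:]))

route = [(0, 0), (100, 0), (150, 60)]
print(in_corridor((50, 20), route, width=25))    # -> True
print(in_corridor((50, 40), route, width=25))    # -> False
```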
  • the border of the final corridor area calculated will be displayed around the original multi-segment line or route, as shown by 2214 in FIG. 22Cc.
  • corridors are not only created by the user of a touch screen device, but can also be defined on a remote computer or touch screen device and communicated to another computer or touch screen device for display.
  • the use of nodes facilitates communication of corridors since little data is required to be transmitted to define a corridor.
  • Navigation restricted corridors can therefore be provided centrally, which can be overlaid on a touch-screen display with local information - such as GPS position and planned route of the local user.
  • the key is the use of nodes to represent the required information between users and data sources.
  • In order to detect nodes, define points of definition, and create lines, routes and corridors from nodes, a touch screen module is required, as indicated by 2402 on FIG. 24c.
  • the output signals from the touch screen module in response to user touches are fed to a control module 2404 to interpret.
  • the control module will determine whether a multi-touch event relating to nodes, node-based lines, node-based routes or node-based corridors has occurred. If so, the control module will process the information to create or modify the relevant entity. Node, line, route or corridor data for storage will be routed to the memory module 2412, with the memory module also serving as the source of these data to the control module where required by an application 2418 or operating system 2414 running which requires the information.
  • the communications module 2410 sends or receives the point of definition, line, route or corridor node data, and supplementary information associated with that entity. This information is passed to or from the control module which may also route the data to or from the touch screen module or the memory module.
  • FIG. 25c shows how node-based point of definition, line, route and corridor information may be exchanged between different devices and computers via a network.
  • networks combining different communications links and different servers and databases could be used, depending on the application.
  • a tablet computer 2502, a large touch screen device (such as a touch screen television) 2504, a personal computer or workstation 2506 (which does not have to have a touch screen or be multi-touch enabled), a smartphone 2508 and a satellite navigation system 2510 are shown communicating node-based point of definition, line, route and corridor information via a network.
  • the information being provided by some or all of the devices is processed and stored at central servers or databases 2512.
  • the central servers or databases will share information as requested and required by the devices' applications and operating systems, including node-based point of definition, line, route and corridor information, for example a corridor area on a map.
  • the link 2514 represents a one-to-one communication of node-based point of definition, line, route and corridor information between two users with suitable apparatus, and shows that a centralized information distribution system is not necessarily required. Peer-to-peer and small clusters of users can also share node-based entity information.
ADVANTAGES
  • a way of defining multiple points of definition by the user is provided, which means that places of relevance to her may be defined just by a touch at the applicable screen location over a background such as a map. Furthermore, those points of definition may be named as desired, remembered by the touch screen device, and shared with friends, social networks or databases. Points of definition could include favourite shops, parking spaces currently vacant and rendezvous points. Current mapping applications typically allow only one user-defined point or pin, which is not customizable or storable and may not be labelled.
  • routes can also be defined easily with two taps, which show direction as well as routing. Route segments may be quickly defined and joined by touch to create a composite route for navigators and pilots. Since routes - like all node-based entities - are easy to define and repeat, they are easily communicated via a communication network, which could have advantages for example in the remote filing of flight plans.
  • Corridors, which are two-dimensional extensions to lines and routes, are also provided.
  • Corridors are easy to create by touch and user selection, and have application in navigation and control. Corridors can be defined centrally, for example by air traffic control on a touch screen, and communicated to pilots.
  • The same two-touch method of defining lines lends itself to defining two points between which the distance is desired, with that distance displayed to the touch screen device user. The distance can either be the screen distance, for example in horizontal and vertical pixels for a programmer, or the distance which the touches represent on a background image or map. Therefore the method is useful for navigators to assess distance between two points.
  • Other examples include use by radiographers to determine the size of bone fractures from an image on the touch screen, and by air traffic control to determine following distances between two aircraft by touching their symbols on a radar display on a touch screen workstation.
  • the node-based method lends itself to the efficient communication of the said entities.

Abstract

A method and system are presented which will detect combinations of user touches on a touch screen device as nodes, and will create points, lines, shapes, areas and windows from these nodes. Where the device has a communications capability, the locally defined points, lines, shapes, areas and windows can be shared with remote users and databases, and similarly points, lines, shapes, areas and windows created by others can be viewed on a local display. The method and system are of particular benefit to drawing applications, windows definition on touch screen devices, real estate management, navigation, and exploitation of mapping resources.

Description

Systems and Methods for Interacting with a Touch Screen
The numbers and types of touch-screen devices are rapidly increasing, and to a large extent these devices are mobile devices, such as mobile telephones, tablet computers, electronic readers and satellite navigation systems. The user interface to some of these devices has evolved into the common use of certain gestures with fingers or thumbs in order to perform selection, panning, scrolling and zooming functions. These gestures are often collectively referred to as Multi-touch, since they rely on more than one digit being in contact with a touch-sensitive surface at a time, or the same digit being used to tap the surface more than once.
Certain multi-touch gestures are now so common as to have become de-facto standards, for example pinch, zoom and pan on any mapping application in, for example, Apple's iOS and Google's Android. Other gestures have been proposed, for example in U.S. Pat. No. 7,840,912 to Elias et al. (2010), which outlines a method for implementing a gesture dictionary, with examples of specific gestures which can be variously interpreted to perform functions such as save or ungroup. However there is a lack of multi-touch gestures to define a line, shape or area. U.S. Pat. No. 0254,797 to Adamson et al. (2011) addresses the subject of multi-shape, multi-touch gestures, but this is at the level of identification of finger and non-finger touches by area patterns, and does not consider shape or area as a definition or input technique for the use of applications on a touch-screen.
In the prior art, there is no method to create, edit or define an application for unlimited windows using current touch screen device operating systems. Similarly there is no mechanism for defining flexible, user-defined window positions or sizes, nor windows which can overlap. Although equivalent mechanisms exist for workstation, desktop and laptop computers as defined over decades by the well known products of IBM, Microsoft and Apple for example, and underlying technology from the likes of AT&T Bell Laboratories (see Pike, 1985, U.S. Pat. No. 4,555,775), these are based on the user using non-touch user interfaces such as a computer mouse, or multi-touch equivalents of mouse operations, such as scroll, select and run; there has not been a graphical language for windowing on touch screens and other multi-touch enabled devices.
According to one aspect of the present invention, there is provided a method for interpreting user touches on a touch screen device to create and edit points of definition, lines, routes and corridors on the display of said touch screen device, comprising recognizing single and double, concurrent user touches to the touch screen device, interpreting said user touches as node positions, node touch sequences and associated node motions on the screen display of said touch screen, interpreting said node positions, said node touch sequences and said node motions to determine the point, line segment or route segment entities to be drawn on the touch screen display, retaining recognition and information of said entities persistently after said user touches to the touch screen device have ceased, allowing reselection by a user of a previously defined entity for operation on that entity, and allowing reselection by a user of any node of a previously defined entity for operation on that node.
Preferably, the number of said concurrent user touches is interpreted as one, and the node produced by said concurrent user touch remains substantially motionless for a predetermined length of creation time, thereby resulting in the creation of a point of definition and the drawing of a symbol on the touch screen to represent said point to the user. Conveniently, said user touch remains substantially motionless for an additional predetermined length of time after said creation time, thereby resulting in a means being provided to the user for adding and viewing alphanumeric name or identification information to the said point of definition.
Advantageously, the number of said concurrent user touches is interpreted as two, and the nodes produced by said concurrent user touches remain substantially motionless for a predetermined length of creation time, thereby resulting in the creation of a line segment and the drawing of a line on the touch screen between positions of the two user touches. Preferably, the said drawn line has a predetermined style, color and thickness.
Conveniently, one or more latent nodes is automatically created at intervals along a line segment, allowing the user to identify, select and move any latent node whereby said latent node becomes a new node of the line segment which thereby becomes a multi-segment line.
Advantageously, a node from one line segment is moved so that it is substantially at the same location on the touch screen as a second node of a different line segment, thereby resulting in the merging of the two nodes and the creation of a multi-segment line.
Preferably, the number of said user touches is interpreted as two and there is a detected said node touch sequence with the time between the first touch and the second touch being within a predetermined time value of each other, thereby resulting in the creation of a route segment and the drawing of an arrow from the point of the first touch in the direction of the second touch on the touch screen using a predetermined style, color and thickness.
Conveniently, one or more latent nodes is automatically created at intervals along a route segment, allowing the user to identify, select and move any latent node whereby said latent node becomes a new node of the route segment which thereby becomes a multi-segment route. Advantageously, a node from one route segment is moved so that it is substantially at the same location on the touch screen as a second node of a different route segment, thereby resulting in the merging of the two nodes and the creation of a multi-segment route.
Preferably, the number of said concurrent user touches is interpreted as two, and the nodes produced by said concurrent user touches remain substantially motionless for a predetermined length of creation time, thereby resulting in the display of actual distance between the two user touches on the touch screen, to the user.
Conveniently, the number of said concurrent user touches is interpreted as two, and the nodes produced by said concurrent user touches remain substantially motionless for a predetermined length of creation time, thereby resulting in the display to the user of representative distance between the two node points created by the two user touches on the underlying map or image, taking into account the scaling of said underlying map or image. Advantageously, the number of said user touches is interpreted as two and there is a detected said node touch sequence with the time between the first touch and the second touch being within a predetermined time value of each other, thereby resulting in the display of screen vector distance between the two user touches on the touch screen, to the user.
Preferably, the number of said user touches is interpreted as two and there is a detected said node touch sequence with the time between the first touch and the second touch being within a predetermined time value, thereby resulting in the display to the user of the representative two dimensional vector distance between the two node points created by the two user touches on the underlying map or image, taking into account the scaling of said underlying map or image. Conveniently, a said reselection by a user of a previously defined entity is performed and said operation on said entity is selected as corridor creation, whereby a bounded area around said entity is calculated and displayed to the user on the touch screen, defined by the logical union of circle areas around all nodes of said entity and rectangle areas around all line or route segments of said entity.
Advantageously, the corridor width is predetermined and therefore the radius of the circles around said nodes is made equal to the predetermined corridor width and the width of the rectangles around said segments is also made equal to the predetermined corridor width.
Preferably, the corridor width is defined by touch by the user, and therefore the radius of the circles around said nodes is made equal to the user-specified corridor width and the width of the rectangles around said segments is also made equal to the user-specified corridor width.
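A minimal sketch of the claimed corridor construction follows, implemented as a hit test: a point is inside the corridor if it falls within a circle around any node or within the rectangle around any segment. The half-width interpretation of "corridor width" and all names are assumptions made for illustration.

```python
import math


def point_in_corridor(p, nodes, width):
    """Union of circles (radius `width`) around every node and rectangles
    (half-width `width`) around every segment of a multi-segment line or
    route, per the corridor definition above."""
    px, py = p
    # Circles around the nodes cover the joints and the two end caps.
    for nx, ny in nodes:
        if math.hypot(px - nx, py - ny) <= width:
            return True
    # Rectangles around the segments: perpendicular distance to each
    # segment's centre line, limited to the segment's extent.
    for (x1, y1), (x2, y2) in zip(nodes, nodes[1:]):
        dx, dy = x2 - x1, y2 - y1
        length2 = dx * dx + dy * dy
        if length2 == 0:
            continue
        t = ((px - x1) * dx + (py - y1) * dy) / length2
        if 0.0 <= t <= 1.0:
            perp = abs((px - x1) * dy - (py - y1) * dx) / math.sqrt(length2)
            if perp <= width:
                return True
    return False
```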
Conveniently, a said reselection by a user of a previously defined entity is performed through the means of the entity having one or more key nodes whereby operations specific to the whole entity, such as the movement, deletion or addition of data, are performed.
Advantageously, said operation on a node is taken from the list including movement, deletion, labelling, addition of data and definition as a key node.
Preferably, said movement operation on the node is by the user dragging the node around within the perimeters of the multi-touch enabled input device without any background map or image being scrolled.
Conveniently, said movement of the node is by the user maintaining a touch within a predetermined distance of a perimeter of the touch screen, thereby causing the node to stay at the position of the touch, but any background map or image being scrolled in the opposite direction of said perimeter.
Advantageously, if the geometric location coordinates of the moved node become substantially the same as the geometric location coordinates of an existing node, the two nodes are equated as being the same, and the new single node inherits the properties of said existing node.
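The node-equivalence behaviour just described can be sketched as a snap-and-merge step run after each drag; the snap radius, the dictionary representation of a node and the `segments` field are all hypothetical.

```python
import math

MERGE_RADIUS_PX = 12  # assumed snap distance for "substantially the same" coordinates


def merge_if_coincident(moved, nodes, radius=MERGE_RADIUS_PX):
    """If a moved node lands on an existing node, equate the two: the
    surviving node keeps the existing node's properties and takes over
    the moved node's segment attachments, joining the two entities."""
    for node in nodes:
        if node is moved:
            continue
        if math.hypot(node["x"] - moved["x"], node["y"] - moved["y"]) <= radius:
            node["segments"].extend(moved["segments"])  # re-attach geometry
            nodes.remove(moved)
            return node  # the pre-existing node's properties are inherited
    return moved  # no coincident node; the moved node stands alone
```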
Preferably, said addition of data includes information taken from the list of start date, end date, elevation above sea level, planned altitude, depth below sea level, and free text information.
Conveniently, said deletion operation removes the node, and also any point of definition associated with the node.
Advantageously, the method is for interpreting multiple, concurrent user touches on the touch screen device to create and edit shapes on the display of said touch screen device, wherein the method further comprises recognizing multiple, concurrent user touches to the touch screen device, interpreting said user touches as node positions and associated node motions on the screen display of said touch screen, interpreting said node positions, and said node motions to determine a geometric shape to be drawn on the touch screen display, retaining recognition and information of said shape persistently after the user touches to the touch screen device have ceased, allowing reselection by a user of a previously defined shape for operation on that geometric entity, and allowing reselection by a user of any node of a previously defined shape for operation on that node.
Preferably, the number of said concurrent user touches is interpreted as two, and one of the nodes produced by these remains substantially motionless for a predetermined length of time while the other node initially moves and subsequently also remains substantially motionless for a predetermined length of time, thereby resulting in the drawing of a circle centered on the initially stationary node and passing through the initially moving node.
Conveniently, the number of said concurrent user touches is interpreted as two, and both of the nodes produced by said concurrent user touches initially move and subsequently remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a rectangle determined by the positions of the two nodes at diagonally opposite corners. Advantageously, the two concurrent user touches are initially detected as a single node due to proximity of said user touches to each other before the movement of the two resultant nodes apart.
Preferably, the number of said concurrent user touches is interpreted as three and the nodes produced by these remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a triangle between the three nodes such that each node position becomes an apex of the triangle. Conveniently, the number of said concurrent user touches is interpreted as four and the nodes produced by these remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a quadrilateral shape between the four nodes such that each node position becomes a corner of the quadrilateral.
Advantageously, the number of said concurrent user touches is interpreted as five and the nodes produced by these remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a pentagon between the five nodes such that each node position becomes a vertex of the pentagon. Preferably, the number of said concurrent user touches is interpreted as greater than five and the nodes produced by these remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a polygon between the plurality of nodes such that each node position becomes a vertex of said polygon.
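A compact sketch of the touch-count-to-shape mapping follows. Ordering the vertices by angle about the centroid, so that the drawn outline does not self-intersect, is an implementation choice assumed here and not something the text prescribes.

```python
import math


def shape_from_touches(touches):
    """Map n motionless concurrent touches to an n-vertex shape:
    3 -> triangle, 4 -> quadrilateral, 5 -> pentagon, >5 -> polygon."""
    names = {3: "triangle", 4: "quadrilateral", 5: "pentagon"}
    cx = sum(x for x, _ in touches) / len(touches)
    cy = sum(y for _, y in touches) / len(touches)
    # Sort vertices by bearing from the centroid to get a simple outline.
    ordered = sorted(touches, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    return names.get(len(touches), "polygon"), ordered
```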
Conveniently, reselection of the shape is via the user touching a node or side of the shape for a predetermined period of time. Advantageously, said operation on said geometric entity is movement of the whole geometric entity with all nodes being moved together with respect to a virtual area or background so that the nodes are not moved in relation to one another. Preferably, said operation on said geometric entity is the addition of user-defined information into predetermined fields to categorise, label or define parameters relating to the geometric entity in the area of application.
Conveniently, said operation on said geometric entity is the deletion of the geometric entity.
Advantageously, said operation on said geometric entity is an independent moving of two component nodes of said geometric entity along two orthogonal axes of said touch screen module such that a two dimensional stretching or compressing of said geometric entity occurs proportional to the movement of the two nodes in each axis.
Preferably, said two dimensional stretching or compressing is of two nodes on the circumference of a circle geometric entity such that an ellipse shape is created from the original circle. Conveniently, one or more sub-nodes is automatically created at intervals along the sides of a shape, allowing the user to identify, select and move any said sub-node whereby said sub-node becomes a new node of said shape and a new side is added to said shape.
Advantageously, the number of sides in a geometric shape is decreased by user selection of a node of said shape and a subsequent deletion operation on said selected node occurs, whereby nodes previously connected to the deleted node are directly joined.
Preferably, said geometric shape becomes the boundary within said touch screen module of an area comprising a two dimensional image, map or surface having its own coordinate system such that the bounding node positions of said areas on the display of said touch screen correspond to the coordinates of said image, map or surface.
Conveniently, said geometric shape is created on top of an existing two dimensional image, map or surface having its own coordinate system such that the bounding node positions of said shape define coordinates of said two dimensional image, map or surface displayed on said touch screen module, and a subsequent pan and zoom operation is performed either to the specific area of said image, map or surface defined by said geometric shape or to an area centered on and including said specific area which also shows additional surrounding area due to differences in shape or aspect ratio of the said shape and the available screen area.
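The combined pan and zoom described here amounts to centring the view on the drawn shape and choosing the largest zoom at which the whole shape still fits, which is why surrounding area appears when the shape's aspect ratio differs from the screen's. A hedged sketch, with assumed names and coordinate conventions:

```python
def fit_view(shape_bounds, screen_w, screen_h):
    """Return the viewport centre and zoom for a combined pan and zoom to
    a drawn shape. `shape_bounds` is (min_x, min_y, max_x, max_y) in map
    coordinates; the limiting axis sets the zoom, so the other axis shows
    additional surrounding area."""
    min_x, min_y, max_x, max_y = shape_bounds
    centre = ((min_x + max_x) / 2.0, (min_y + max_y) / 2.0)
    width = max(max_x - min_x, 1e-9)   # guard degenerate shapes
    height = max(max_y - min_y, 1e-9)
    zoom = min(screen_w / width, screen_h / height)  # pixels per map unit
    return centre, zoom
```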
Advantageously, said geometric shape becomes the boundary within said touch screen module of a new window which runs a software application independent from any main or background software application, and independent from a software application running in any other window. Preferably, selection options for applications to run in said new window are presented to the user in a menu or list adjacent to the new window after creation of the new window. Conveniently, selection options for applications to run in said new window are presented to the user via icons appearing inside the new window after creation of the new window.
Advantageously, an application to run in said new window is selected by the user prior to creation of the new window.
According to another aspect of the present invention, there is provided a tangible computer readable medium storing instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 46.
According to a further aspect of the present invention, there is provided a distance measurement and display system graphical user interface for touch screen devices with a mapping, navigation or image background, comprising a detection module configured to detect two concurrent user touches to a touch screen that permit a user to input two points of definition on a background map or image between which it is desired to know the distance, a measurement module configured to calculate the representative distance between the two concurrent touches including scaling and conversion to the measurement units and axes of the background map or image, and a display unit configured to display the calculated representative distance between the two concurrent touches to the user of the touch screen device.
According to a yet further aspect of the present invention, there is provided a windowing system for touch screen devices, comprising a multiple independent window display and interaction module that permits a user to view concurrently a plurality of computer applications in a plurality of different windows of a plurality of different shapes and sizes, a selection module for identifying which of the plurality of said multiple independent windows is to be the subject of a user defined operation, a geometric shape detection module that is configured to define the shape, size and boundary of a new window, and a user selection module configured to permit a user to select an application for said new window.
According to a further aspect of the present invention, there is provided an apparatus, comprising a touch screen module incorporating a touch panel adapted to receive user input in the form of multi-touch shape gestures including finger touches and finger movements, and a display surface adapted to present point of definition, line, route and corridor information to the user, a control module which is operatively connected to said touch screen module to determine node and point of definition positions from said finger touches, to determine node motions and touch sequence from said finger movements, to recognize a line or route segment from combinations of said node positions and touch sequences, to create multi-segment lines and routes from individual segments by node position equivalence detection, to create multi-segment lines and routes from detection of latent node selection and movement on line and route segments, to detect a selection touch to a pre-existing entity from the list including point of definition, line segment, route segment, multi-segment line and multi-segment route, to control the editing of said pre-existing entity, and to generate a continuous graphical image including said node positions and plurality of said pre-existing entities for display on the touch screen module, a memory module logically connected to said control module which is able to store from and provide to said control module a logical element selected from the group consisting of operating systems, system data for said operating systems, applications which can be executed by the control module, data for said applications, node data, point of definition data, line segment data, route segment data, multi-segment line data, multi-segment route data and corridor data. Preferably, the apparatus is selected from the group consisting of a mobile telephone with a touch screen, a tablet computer with a touch screen, a satellite navigation device with a touch screen, an electronic book reader with a touch screen, a television with a touch screen, a desktop computer with a touch screen, a notebook computer with a touch screen, a touch screen display which interacts with medical and scientific image display equipment and a workstation computer of the type used in command and control operations centers such as air traffic control centers, but having a touch screen.
Conveniently, the node, point of definition, line segment, route segment, multi-segment line, multi-segment route and corridor information presented to the user includes symbols and lines currently detected or selected and those recalled from memory, previously defined, or received from a remote database, device or application.
Advantageously, a communications module is incorporated, adapted to transfer node, point of definition, line segment, route segment, multi-segment line, multi-segment route and corridor information, including node position and entity type, to and from other devices, networks and databases.
Preferably, the control module is configured to accept external said information from said communications module and pass the information to the touch screen module for display to the user.
Conveniently, the control module is configured to pass locally created said information to the communications module for communication to other devices, networks and databases. Advantageously, the entities recognised by said control module from the said detected node positions, touch sequences and node motions include nodes, points of definition, line segments, route segments, multi-segment lines, multi-segment routes and corridors.
Preferably, the said editing of pre-existing entities includes the movement of points of definition, the movement of entire entities, the deletion of entire entities, the stretching of lines by the movement of their individual nodes, the editing of a corridor width, the creation of multi-segment lines and routes by joining segments at common nodes, and the addition of a new node between two existing nodes of an entity.
Conveniently, the said nodes and points of definition recognised by said control module represent locations on a two dimensional image, map or surface having its own coordinate system which are readable by said control module from the memory module.
Advantageously, the display surface is adapted to present node and shape information to the user, the control module being configured to determine node positions from said finger touches, to determine node motions from said finger movements, to recognize a geometric shape from combinations of said node positions and node motions, to recognize an application associated with said geometric shape, to detect a selection touch to a pre-existing said shape, to control the editing of said selected shape, and to generate a continuous graphical image including said node positions and plurality of said geometric shapes for display on the touch screen module, the memory module being configured to store from and provide to said control module a logical element selected from the group consisting of operating systems, system data for said operating systems, applications which can be executed by the control module, data for said applications, node data, shape data, area data and windows data. Preferably, the apparatus is selected from the group consisting of a mobile telephone with a touch screen, a tablet computer with a touch screen, a satellite navigation device with a touch screen, an electronic book reader with a touch screen, a television with a touch screen, a desktop computer with a touch screen, a notebook computer with a touch screen and a workstation computer of the type used in command and control operations centers such as air traffic control centers, but having a touch screen.
Conveniently, the node, shape, area and window information presented to the user includes node symbols, shapes, areas and windows currently detected or selected. Advantageously, the node, shape, area and window information presented to the user includes node symbols, shapes, areas and windows from previous user touches, recalled from memory, previously defined, or received from a remote database, device or application. Preferably, the apparatus comprises a communications module which is configured to transfer node, shape, area and window information, including node position and shape type, to and from other devices, networks and databases. Conveniently, the control module is configured to accept external node, shape, area and window data from said communications module and pass them to the touch screen module for display to the user.
Advantageously, the control module is configured to pass locally created nodes, shapes, areas and windows to the communications module for communication to other devices, networks and databases.
Preferably, said geometric shapes recognised by said control module from the detected node positions and node movements include circles, rectangles, triangles, quadrilaterals, pentagons and polygons with greater than five vertices. Conveniently, the edits to selected shapes include the transformation of a selected circle into an ellipse.
Advantageously, the edits to selected shapes include the two dimensional stretching of a selected shape in the axes determined by node movements.
Preferably, the edits to selected shapes, areas and windows include the creation of shapes, areas and windows with one more side than when selected, by the addition of a new node between two existing nodes of a shape.
Conveniently, the node positions of a re-selected geometric shape, area or window are moved by the node being touched with a finger which is moved over the surface of said touch screen module wherein the node moves accordingly, and the said geometric shape, area or window is modified accordingly.
Advantageously, said geometric shapes are re-selected by a non-moving user touch of a constituent node of said geometric shape for a period of less than three seconds, after which the whole geometric shape is moved accordingly by subsequent movement of said constituent node.
Preferably, the said geometric shapes recognised by said control module represent areas on a two dimensional image, map or surface having its own coordinate system wherein the bounding node positions of said areas on the display of said touch screen correspond to the coordinates of said image, map or surface.
Conveniently, the said geometric shapes recognised by said control module represent window boundaries on the display of said touch screen such that a plurality of independent computer programs or applications are run concurrently on said display. Advantageously, said geometric shapes recognised by said control module represent area boundaries on the display of said touch screen such that a combined pan and zoom operation is performed to the specified said area boundary.
An embodiment of the invention seeks to define and manage lines and shapes rapidly and in a user-friendly manner using multi-touch gestures. Such means are produced through the creation and manipulation of nodes on a multi-touch enabled surface. In one embodiment the nodes are interpreted as being lines or shapes according to the gestures used. The nodes, lines and shapes take on a meaning relevant to the operating system or application processing them, such as the creation of geographic points, routes or areas on a map, respectively.
Being node-based, once defined the points, lines and areas can be easily manipulated and edited, and the lines and areas can be efficiently stretched and re-sized. The node-based aspect also gives the advantage of sharing data in one embodiment such as the remote transmission of geometric information between different devices programmed or enabled to process or interpret the information, for example for remote drawing over a communications network.
Various applications of one or more embodiments include sharing planned wilderness hiking routes, defining flight plans, remote drawing, co-ordinating search and rescue, maritime navigation and defining unique geographic areas of interest for real estate searches of a remote server.
SUMMARY
Currently there is no means of defining and managing shapes or areas rapidly and in a user-friendly manner using multi-touch gestures on a touch screen. Similarly there is no means of creating user-defined windows on a multi-touch enabled device, or defining the application to run in those windows. I have produced a means of defining, editing and communicating shapes, areas and windows of various geometric forms through the creation and manipulation of nodes on a multi-touch enabled surface. One application of shape drawing on a touch screen device for various embodiments is the definition or visibility of areas on a map, whereby the shape created over a map or earth model application is directly related to a real geographic area represented by the map or earth model. This is possible primarily due to the node-based nature of the system, whereby for example a positional node on a touch screen or other multi-touch enabled device equates to a geographic node on the surface of (or above) the earth and vice versa. A geographic area defined or visible in this way by various embodiments will be advantageous to real estate buyers and sellers, farmers, pilots, property developers, navigators, air traffic controllers and emergency co-ordinators, for example.
In addition to various embodiments providing an easy means of defining the shape, size and position of windows on the display of a touch screen device, embodiments allowing various means of defining the application within a newly-defined window on a touch screen device have been designed. These include methods of defining a window first, and then defining the application within it, and also drawing the window using multi-touch gestures once an application has been launched. One of these methods, or a combination of them, means that a multi-window display with truly independent applications running and visible concurrently can for the first time be used on touch screen devices. Especially (but not exclusively) for larger touch screen devices such as tablet computers and touch screen personal computers, it will revolutionise the user interface with the device in the same way that Microsoft Windows replaced single DOS-based applications on the personal computer. However the transformation will have even more potential due to the different shapes of windows which various embodiments allow, including circular, rectangular, triangular and pentagonal windows. Being node-based, for various embodiments once defined the areas, shapes and windows can be easily manipulated and edited, such as being stretched and re-sized. The node-based aspect also gives the advantage of being able to easily share area, shape and window data from a touch screen device with other devices, networks and databases.
In one embodiment multiple independent nodes on a mapping application are created by touch gestures on a touch screen, which correspond to multiple independent geographic points on the Earth's surface. The nodes can be labelled by the user, moved by the user without the background map moving, and have attributes added by the user or application. Multi-touch selections on the touch screen with two touches together are interpreted as line segments, and with two touches in rapid sequence are interpreted as route segments with direction implied by the order of touches. The lines may exist for the duration of the creating touches or indefinitely, and may display distance or vector difference between the two touches to the user. Line segments may be divided into several segments by the selection and movement of hidden (latent) nodes along existing line segments. Multiple line and route segments can be defined to exist together, and these can be joined to make complex, multi-segment lines and routes by the joining of the end nodes of line and route segments. Lines and routes can be made into a two dimensional corridor by the definition of a width relevant either side of a line or route segment, or a composite, multi-segment line or route.
Some of the gestures for node, line, route and corridor definition and management are entirely new in one embodiment, while for others the gestures used are adapted from existing gestures, but used for an entirely new purpose or with a new human computer interaction interface.
Being node-based, once defined the points, lines, routes and corridors can be easily manipulated and edited, and the lines, routes and corridors can be efficiently stretched and re-sized. In some embodiments the use of key nodes permits operations on a composite entity made up of multiple line or route segments, such as the movement or deletion of a complex route. The node-based aspect also gives the advantage of sharing data in one embodiment such as the remote transmission of geometric information between different devices programmed or enabled to process or interpret the information, for example for passing planned routes over a communications network.
Various applications of one or more embodiments include sharing planned wilderness hiking routes, defining flight plans, remote drawing, defining boundaries in real estate, maritime navigation and getting representative distance information between two points on an x-ray or electron microscope image by touching the two required points on a touch screen.

So that the present invention may be more readily understood, embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1a represents the definition of a node by the user of a multi-touch enabled device.
FIG. 2a is a potential result on a mapping application of a multi-touch defined point.
FIG. 3Aa shows a flow-chart of a method to create and select a node using multi-touch.
FIG. 3Ba shows a flow-chart of normal node and margin node movements.
FIG. 3Ca shows normal node movement.
FIG. 3Da shows margin node movement.
FIG. 4a demonstrates potential information related to a multi-touch defined point.
FIG. 5a represents the definition of a line segment with a multi-touch enabled device.
FIG. 6a shows a result on a mapping application of a multi-touch defined line segment.
FIG. 7a shows a flow-chart of a method to detect a multi-touch defined line segment.
FIG. 8a highlights potential information related to a multi-touch defined line segment.
FIG. 9a demonstrates how 'sub-nodes' can be used to divide a line segment in two.
FIG. 10a illustrates how a line can be defined from multiple line segments.
FIG. 11a shows how a line can be used as a route (with directionality).
FIG. 12a represents the formation of a corridor from a line.
FIG. 13a represents the definition of a circle using a multi-touch enabled device.
FIG. 14a is a potential result on a mapping application of a multi-touch defined circle.
FIG. 15a shows a flow-chart of a method to define a multi-touch defined circle.
FIG. 16a demonstrates potential information related to a multi-point defined circle.
FIG. 17a represents the definition of a rectangle by the user of a multi-touch enabled device.
FIG. 18a is a potential use of a multi-touch defined rectangle to define a window on a touch-screen device for a parallel application.
FIG. 19a shows a flow-chart of a method to detect a multi-touch defined rectangle.
FIG. 20a demonstrates potential information related to node-based polygons defined by multi-touch gestures.
FIG. 21a represents the definition of a triangle (and by analogy, a quadrilateral) by the user of a multi-touch enabled device.
FIG. 22a is a demonstration of a multi-touch defined triangle in a drawing application, which is also directly analogous to definition of quadrilateral shapes.
FIG. 23a shows a flow-chart of a method to detect a multi-touch defined triangle or quadrilateral shape.
FIG. 24a represents the definition of a pentagon using a multi-touch enabled device.
FIG. 25a is a potential result on a Real Estate application of a multi-touch defined pentagon.
FIG. 26a shows a flow-chart of a method to detect a multi-touch defined pentagon.
FIG. 27a demonstrates how a user can create a hexagon from a pentagon.
FIG. 28Aa illustrates how route lines can be reversed in directionality.
FIG. 28Ba illustrates how a closed area can be converted between a route and area, and between a closed line and an area.
FIG. 29a is a modular view of a multi-touch device capable of defining, editing and displaying node-based points, lines and areas.
FIG. 30a demonstrates how a network of multi-touch enabled devices, touchscreen devices and databases can be used together to share node-based data between each other.
FIG. 1b represents the definition of a circle using a touch screen device according to an embodiment of the invention.
FIG. 2b is a potential result on a mapping application of a multi-touch defined circle according to embodiments of the invention.
FIG. 3b shows a flow-chart of a method to define a multi-touch defined circle according to embodiments of the invention.
FIG. 4Ab demonstrates how cardinal nodes on a created circle can be used to stretch or compress the shape in different axes, permitting the creation of an ellipse according to embodiments of the invention.
FIG. 4Bb illustrates the result of the creation of an ellipse from a circle according to embodiments of the invention.
FIG. 5b shows a flow-chart of a method to create a stretched shape such as an ellipse according to embodiments of the invention.
FIG. 6b represents the definition of a rectangle by the user of a touch screen device according to embodiments of the invention.
FIG. 7b is a potential use of a multi-touch defined rectangle to define an area on a touch screen device map for a combined pan and zoom operation to an area of interest according to embodiments of the invention.
FIG. 8b shows a flow-chart of a method to detect a multi-touch defined rectangle according to embodiments of the invention.
FIG. 8Ab shows a flow-chart of a method to provide a combined pan and zoom on a map following a multi-touch defined rectangle definition gesture.
FIG. 9b represents the means of definition of a triangle by the user of a touch screen device according to embodiments of the invention.
FIG. 10b is a demonstration of a multi-touch defined triangle in a drawing application according to embodiments of the invention.
FIG. 11b shows a flow-chart of a method to detect a multi-touch defined triangle according to embodiments of the invention.
FIG. 12b represents the means of definition of a quadrilateral shape on a touch screen device according to embodiments of the invention.
FIG. 13b is an illustration of a multi-touch defined quadrilateral shape in a drawing application according to embodiments of the invention.
FIG. 14b shows a flow-chart of a method to detect a multi-touch defined quadrilateral shape according to embodiments of the invention.
FIG. 15b represents the definition of a pentagon using a touch screen device according to embodiments of the invention.
FIG. 16b illustrates a potential result on a real estate application of a multi-touch defined pentagon according to embodiments of the invention.
FIG. 17b shows a flow-chart of a method to detect a multi-touch defined pentagon according to embodiments of the invention.
FIG. 18b represents the definition of a polygon of more than five sides and five nodes, using a touch screen device according to embodiments of the invention.
FIG. 19b demonstrates a hexagon created from a pentagon according to embodiments of the invention.
FIG. 20b shows a flow-chart of a method to detect a multi-touch defined polygon of six or more sides and nodes according to embodiments of the invention.
FIG. 21b illustrates how a node of a multi-touch defined shape, area or window may have operations performed on it by a user according to an embodiment of the invention.
FIG. 22b demonstrates how a node of a multi-touch defined shape, area or window may be deleted, thus rendering the shape, area or window with fewer sides and nodes.
FIG. 23Ab illustrates the nodes detection step in creating a rectangular shape or area which will ultimately result in a new window-first window according to an embodiment of the invention.
FIG. 23Bb illustrates the shape sizing and positioning step in creating a rectangular shape or area which will ultimately result in a new window-first window according to an embodiment of the invention.
FIG. 23Cb illustrates the selection of application step for a newly created window-first rectangular window according to an embodiment of the invention.
FIG. 23Db illustrates the completion of a multi-touch defined window-first rectangular window on a multi-touch enabled device according to embodiments of the invention.
FIG. 24Ab illustrates an alternative selection of application step for a newly created window-first circular window according to an embodiment of the invention.
FIG. 24Bb illustrates the completion of the alternative multi-touch defined window-first circular window according to embodiments of the invention.
FIG. 25Ab illustrates the application selection stage for creating an application- first window according to prior art.
FIG. 25Bb illustrates the default window shape size and position appearance stage for creating an application-first window according to embodiments of the invention.
FIG. 25Cb illustrates the sizing and positioning stage for creating an application-first window according to embodiments of the invention.
FIG. 25Db illustrates the completion of a multi-touch defined application-first window according to embodiments of the invention.
FIG. 26b illustrates a multi-touch device display containing many different windows of various shapes, running several different applications.
FIG. 27b shows a flow-chart defining how windows may be created and defined on a multi-touch device.
FIG. 28b is a modular view of a touch screen device capable of defining, editing and displaying node-based areas, shapes and windows according to embodiments of the invention.
FIG. 29b demonstrates how a network of multi-touch enabled devices, touch screen devices, workstations, networks and databases can be used together to share node-based area, shape and window data between each other according to embodiments of the invention.
FIG. 1c represents the definition of a node or point of definition by the user of a multi-touch enabled device according to an embodiment of the invention.
FIG. 2c is a potential result on a mapping application of a multi-touch defined point of definition according to an embodiment of the invention.
FIG. 3c shows how a node on the touch screen, representing a point of definition may be moved according to an embodiment of the invention.
FIG. 4c shows how a point of definition on a map is moved in sympathy with the node being moved on the touch screen according to an embodiment of the invention.
FIG. 5Ac shows a flow-chart of a method to create and select a node or point of definition using multi-touch according to embodiments of the invention.
FIG. 5Bc shows a flow chart of normal node and margin node movements according to embodiments of the invention.
FIG. 5Cc shows normal node movement according to an embodiment of the invention.
FIG. 5Dc shows margin node movement according to an embodiment of the invention.
FIG. 6c demonstrates potential information related to a touch defined point of definition on a map according to embodiments of the invention.
FIG. 7c represents the definition of a line segment on a touch screen device according to embodiments of the invention.
FIG. 8c shows a result on a mapping application of a multi-touch defined line segment according to an embodiment of the invention.
FIG. 9c shows a flow-chart of a method to detect a multi-touch defined line segment according to embodiments of the invention.
FIG. 10c highlights potential information related to a multi-touch defined line segment according to an embodiment of the invention.
FIG. 11c demonstrates how latent nodes can be used to divide a line segment in two according to embodiments of the invention.
FIG. 12c presents a flow-chart of how latent nodes can be used with fractional or space division to divide a line segment into two or more segments according to embodiments of the invention.
FIG. 13c demonstrates how the act of creating a line segment can also be used to display representative or actual distances between the touch nodes, according to embodiments of the invention.
FIG. 14c illustrates how a sequenced tapping of a touch screen can be used to define direction of travel, and therefore also a vector or route according to embodiments of the invention.
FIG. 15c indicates how when defining a route or line in a node based fashion, a display of vector difference between the nodes can also be displayed according to embodiments of the invention.
FIG. 16c presents a flow-chart defining how sequenced touches result in a route or vector line, and how it is determined whether to display distance between the nodes, according to embodiments of the invention.
FIG. 17c illustrates how a composite, multi-segment line can be defined from multiple line segments according to embodiments of the invention.
FIG. 18c shows how a line can be used as a route, with directionality, according to an embodiment of the invention.
FIG. 19c presents a flow-chart defining how a composite multi-segment line or route can be formed from joining nodes of other lines or routes together, according to embodiments of the invention.
FIG. 20Ac shows how a node on a line or route can be deleted according to an embodiment of the invention.
FIG. 20Bc indicates a resulting composite, multi-segment line after the deletion of a node, according to an embodiment of the invention.
FIG. 21c represents the formation of a corridor from a line or route according to an embodiment of the invention.
FIG. 22Ac depicts a multi-segment route according to an embodiment of the invention.
FIG. 22Bc shows a multi-segment route with rectangles and a circle to show the creation of a corridor area, according to an embodiment of the invention.
FIG. 22Cc illustrates a complete corridor over a multi-segment route according to an embodiment of the invention.
FIG. 22Dc represents a process for the creation of a corridor area around a multi-segment line or route according to an embodiment of the invention.
FIG. 23Ac illustrates how route lines can be converted into non-route lines and vice versa, according to embodiments of the invention.
FIG. 23Bc illustrates how route lines can be reversed in directionality according to an embodiment of the invention.
FIG. 23Cc illustrates how a closed area can be converted between a route and area, and between a closed line and an area according to an embodiment of the invention.
FIG. 24c is a modular view of a multi-touch device capable of defining, editing and displaying node-based points, lines, routes and corridors according to embodiments of the invention.
FIG. 25c demonstrates how a network of multi-touch enabled devices, touchscreen devices and databases can be used together to share node-based points, lines, routes and corridors between each other according to embodiments of the invention.

DETAILED DESCRIPTION & OPERATION - FIRST EMBODIMENT
One area which has not yet been addressed by multi-touch gestures is that of node-based point, line and shape definition via a multi-touch surface, and yet there is considerable use which this could afford users of mobile touch-screens and even large scale touch-sensitive surfaces. The use of one or more fingers in contact with a touch-sensitive area can be used to create nodes, lines, rectangles, circles, triangles, quadrilaterals and pentagons. From these primitive entities, greater-sided polygons, lines and corridors can be quickly created, and any of these lines and shapes can be manipulated through the movement of the nodes defining them. Various combinations of the following node, line and shape definitions, manipulations and edits can be used in an embodiment.
NODE DEFINITION, SELECTION & MOVEMENT
The definition of a point on a multi-touch enabled device is prior art, especially when implemented as a single tap multi-touch operation, and it is used for instance for detection of selected options and re-location of a computer mouse pointer. A prolonged touch at a given location has also been used to define a geographic location on a map. In this embodiment a prolonged touch (typically between 0.1 seconds and 3.0 seconds) to a touch-screen multi-touch enabled device is a required precursor to much of the functionality described, and a node would be created in this manner; a node being an entity which can be viewed and manipulated as a geometric point on a multi-touch enabled device, but which represents another logical or physical entity. An example of a node is a small circle centered on a specific screen pixel on a map, which represents a particular latitude/longitude.
The manipulation of a node also departs from existing prior art due to the means by which it can be repositioned. FIG. 1a illustrates the creation and selection of a node on a touch-screen surface 102, via a prolonged touch with a single touch implement 104 - in this case a finger. The creation of a node also selects that node, which allows movement of the node as indicated by the motion arrow 106. When the touch implement that creates or selects the node is moved while still maintaining contact with the touch-screen, the node will be moved along with the touch implement. Therefore when a selected node is moved, no panning of the screen or underlying map typically occurs; the node will move and not the background to the node, with the exception of node-based panning described below. The result of a node creation and movement is shown in FIG. 2a. The created node representation 204, on the multi-touch enabled touch-screen device 202 is shown. The direction arrow 208 represents movement of the node, equal in direction and distance on the touch-screen to the touch implement motion.
The selection of a node is similar to creation of a node except that it requires the existence of a node shown at the location of the touch (or within a number of pixels of the touch). Therefore touching a node on the touch-screen and remaining touching it will select that node for an operation including moving the node around the screen. The node is only required to be present to be able to be selected, and therefore the selection - even of invisible nodes - is possible.
Operations possible on a node once created and selected (and the user interface for those operations) will be specific to the application using the multi- touch operations, but is likely to include the input of data about the node. FIG. 2a shows a node on a map representing a geographic coordinate on a map which has been given a label 206 by a user.
Margin-based node movement is possible where the relevant area for which nodes are relevant is greater than the area shown by the screen. In this case although a selected node will initially be moved in the direction of the finger controlling it, without the background being panned, when the finger and node beneath it get within a certain margin of the edge of the screen, the node will not move further across or off the screen. Instead, a scroll of the background will occur in the opposite direction to the previous direction of travel across the screen by the node. The effect will be to move the node further along the background in the desired direction. In this case the direction of movement of the background will be opposite to a panning multi-touch operation in the same direction.
FIG. 3Aa demonstrates how to create the functionality of node definition, selection and movement on the touch-screen. Process box 302 is a background task which monitors for a multi-touch operation; when a multi-touch operation is detected, the decision logic at 304 detects the specific multi-touch input of a touch device such as a finger touching the screen for a duration appropriate to the application (for example one second). If this node event is detected there is another decision point 306 which determines whether there is an existing node at the location of the touch on the screen. If there is no existing node, the node creation process 310 is commanded, which creates a node at the location being touched, and then selects the point which has just been created. If there is a node at, or close to (as applicable to the application), the detected node multi-touch, that node will be selected, as shown in 308. Whether the node was just selected by the node multi-touch, or created and selected, the user can move the node freely as summarised by process 312, and further elaborated in the process description of FIG. 3Ba.

The node position on the touch-screen will always track the position of the finger performing the multi-touch as shown by process 318. However a decision 320 as to whether to perform normal node movement or margin-based node movement depends on whether the node is within a screen margin. A screen margin may be used - although it is not necessarily visible - in the situation where the background to a node (such as a map) occupies a larger area than can be seen on the touch-screen. In this case the node remains under the controlling finger, but the background moves in the opposite direction to the margin as described by 324. Therefore if a node is moved into a defined margin area at the left of the touch-screen, the user's controlling finger may stop there, in which case the background will move to the right. Typically, margins will be relevant at the top, bottom, left and right sides of a rectangular touch-screen, although for example margins near the corners could act in two directions. Such scrolling of a background can occur for as long as the user's finger is in contact with the screen and within a margin. If the finger is no longer in a margin, but still in contact with the touch-screen, normal node motion 322 will occur, with the node following the finger.

Decision logic 314 (on FIG. 3Aa) determines whether any other operation is performed on the node after movement; an immediate time-out occurs if the node controlling finger is removed from the touch-screen, in which case the node stays at the position it was last at (where the finger was removed from) and is deselected. However if the controlling finger is detected as staying in the same position, but still in contact with the touch-screen for a certain time - for example two seconds - a user interface will be brought up to enable the user to assign additional information for the node, as shown in process 316.
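The two drag regimes of FIGS. 3Ba-3Da can be sketched as follows; the margin width, scroll step and the `background.scroll` interface are assumptions for illustration only.

```python
MARGIN_PX = 40     # assumed width of the (possibly invisible) screen margin
SCROLL_STEP = 8    # assumed background scroll increment per update


def drag_node(node, finger_x, finger_y, screen_w, screen_h, background):
    """The node always tracks the finger (process 318). If the finger is
    inside a margin, the background scrolls in the opposite direction
    (324); otherwise normal node movement occurs (322)."""
    node["x"], node["y"] = finger_x, finger_y
    dx = dy = 0
    if finger_x < MARGIN_PX:
        dx = SCROLL_STEP            # left margin: background moves right
    elif finger_x > screen_w - MARGIN_PX:
        dx = -SCROLL_STEP           # right margin: background moves left
    if finger_y < MARGIN_PX:
        dy = SCROLL_STEP            # top margin: background moves down
    elif finger_y > screen_h - MARGIN_PX:
        dy = -SCROLL_STEP           # bottom margin: background moves up
    if dx or dy:
        background.scroll(dx, dy)   # hypothetical map/background interface
```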
FIG. 3Ca shows normal node movement on a touch-screen surface 102 (in this case a satellite navigation system), with 328 representing the position of a user's finger, which is evidently outside the margin 330. The node is moved with the finger, for example as shown by arrow 326. By contrast, FIG. 3Da shows margin-based node movement, where finger position 328 is within the margin 330 - in this case the top margin. Arrow 332 represents the movement or scrolling of the background as a consequence of the presence and position of the finger controlling the node being within the margin.
FIG. 4a shows some of the information which could be attributed to a node after creation and selection - particularly a node representing a geographic location on a map. Latitude and longitude would be important for a location node - this would be received from a mapping or geographic model application once the position on a touch-screen has been established. Also for geographic nodes, start date (e.g. rendezvous) and finish date could be useful. Significantly, elevation or altitude - perhaps with minimum and maximum elevation/altitude - would allow a three dimensional location definition. Therefore the altitude of a surveying point on a mountain could be usefully defined, an altitude above ground could be defined, or a depth below the sea could be added to latitude and longitude data. A name would also be useful - especially when sharing the node on a network, for a shared reference by those users with permissions to see specific nodes. Certain information, if required, could be attributed to a node by the operating system, such as the user and time at which a node was created. Miscellaneous information or notes about a location could also be added by a user. Finally visual information, such as node icon and label color, size and shape could be defined, or these could be defined or defaulted by the application itself.
LINE AND CORRIDOR DEFINITION

Line segments - with reference to FIG. 5a - are defined with the use of two fingers 502 (in this case a thumb and middle finger) touching a touch-screen 102 together, and for a minimum duration (for example 2 seconds) without being moved. This action will create or select two nodes, at the locations of the two touches on the touch-screen, between which will be drawn a line which may be straight, or otherwise, as appropriate to the application. Of the two nodes created one will always be determined as the key node, and indicated to the user as the key node. The key node for a line - or any node-based shape with more than one node - allows the selection of the whole shape, and operations on that shape - for example movement of the shape with all of its nodes, instead of the movement of just one node. The straight line shown as 606 on FIG. 6a between position nodes 602 and 604, on the touch-screen device 202, constitutes a line segment which can be further enhanced by the user, as illustrated in FIG. 8a, 9a, 10a, 11a and 12a. In this case node 604 is marked as the key node, although other ways can be used to indicate the key node from other nodes (the shape nodes), including shape, contrast, color or fill.
The creation method of a line segment by a touch-screen device user is shown in FIG. 7a. User inputs to the touch-screen will be monitored for multi-touch node gestures by process 302. The logic of 702 will determine whether two fingers are touching the touch-screen and remaining still for greater than a minimum duration (for example 1.5 seconds), and if so the process 704 will create a line. Process 704 will create a line firstly by selecting the two nodes specified by the finger positions. If a node already exists at a finger position (or within a defined radius), the node will be selected. If there is no pre-existing node at the position, a node will be created at the given screen position and the node will be selected. A line will be drawn on the touch-screen between the two selected nodes. Typically the line will be straight, but there are various possibilities with regard to line type, which may for example be an arc, a spline, an arrow or other common line type. The allocation of the key node can vary according to application and user defaults and preferences, for instance the first touch to be made during creation of the line, or the highest and furthest left on the touch-screen.
According to the line type, line sub-nodes will automatically be created for the purpose of line division as described below. The logic of 706 will detect whether the two fingers remain on the created nodes for a minimum time-out period after the creation of the line. If not (for instance the fingers are removed immediately upon the drawing of the line) the line will be completed without any additional user information added to the line at that time. If the fingers do still remain at the time-out, process 708 will allow the user to add additional information for the line via a user interface. FIG. 8a illustrates some of the information which could be added to a line; as for node information, some information can be added automatically by the operating system, such as Creation User and Creation Time. Other information can be graphical preferences from a user or defaults. Some information may be defined by the user, such as Line Segment Name and Information. Other information including node positions on screen, node position representation (such as latitude/longitude/elevation in a mapping application) will be inherited from the node data relating to the nodes either end of the line. This is advantageous to a touch-screen device user, since if one end of a line is not in the desired location, the user can select that node and move it in the manner described for node movement.
Selection of a whole line will consist of touching and holding, or double-tapping the key node of the line.
FIG. 9a demonstrates how a line may be sub-divided via node-based multi- touch. A created line will automatically have one or more sub-nodes (as indicated by 906) created along its length in addition to the line-end defining nodes. These extra line sub-nodes may be visible or invisible to the user, and may be regularly spaced or irregularly spaced. If a user selects one of these line sub-nodes (as per the selection of any node), the node can be moved relative to the line-end nodes. This will bend the line, and create two line segments out of one, sharing a common end-line node (which was previously a sub-node of the original line). New line segments created by line sub-division will have their own new sub-nodes created.
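A short sketch of automatic sub-node creation follows; even spacing is used here, though the text notes that spacing may also be irregular, and the node count is an arbitrary assumption.

```python
def line_subnodes(p1, p2, count=3):
    """Create `count` evenly spaced sub-nodes along the segment p1-p2.
    Selecting and moving one of them bends the line, creating two
    segments that share the moved node as a common end node."""
    (x1, y1), (x2, y2) = p1, p2
    return [
        (x1 + (x2 - x1) * i / (count + 1), y1 + (y2 - y1) * i / (count + 1))
        for i in range(1, count + 1)
    ]
```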
FIG. 10a illustrates a composite line 1004, created from line segments 1006. Such a composite line could for example represent a border, boundary or road on a map, or a complex line on a drawing application.
FIG. 11a illustrates a composite line made up of line segments 1106, which are arrows, and therefore define directionality. Such a composite line can be used to show flow or direction of travel. A particular use of this type of line would be in defining a route which does not necessarily rely on existing geographic locations. This would be beneficial for example to define a planned wilderness route, journey on water or the flight plan for an aircraft.
FIG. 12a illustrates the use of line segments to create not only a composite line, but a corridor. A corridor is a central line such as 1210 with associated parallel lines (1206 and 1208) which represent desired limits related to the central line. One use for this is the definition of air corridors or sea lanes for touch-screen devices used for navigation or navigation planning. Corridors can be created by the user of a touch-screen device by specifying distance offsets from a central line, as part of the user-added line information process 708 previously described. The offset lines which may be to one or both sides of the central line, are drawn by the application, under control of the user. Selection of the key node shown in FIG. 12a by 1212 will allow the selection, movement and data entry of the complete corridor. Frequently - such as in the examples given - corridors will not be defined by a touch-screen user, but from a central database, the nodes and related information about which will be sent over a network. Navigation restricted corridors can therefore be provided centrally, which can be overlaid on a touch-screen display with local information - such as GPS position and planned route of the local user. The key is the use of nodes to represent the required information between users and data sources.

If a single node of a line is selected and deleted, a line consisting of only two points will disappear, but leave the remaining node as a singularity point. For a line of more than two nodes, the individual node will disappear, but leave a line made up of the remaining nodes; neighbouring nodes will be joined if an intermediate node is deleted. If a key node is deleted, another node will become the key node, using a rule applicable to the application, such as the closest node to the deleted node becomes the key node.
CIRCLE DEFINITION, SELECTION & MOVEMENT
Circles can be defined via multi-touch with the use of two fingers; one finger will be static at the center of the desired circle on the touch-screen, while the other will move from the center outwards until the required radius is achieved. FIG. 13a illustrates the method, in which the user is using one finger 1304 (in this case the left thumb) to define the center of the circle. Another finger 1302 is in motion over touch-screen 102 and away from the thumb. The finger which is moving away can move in any direction from the center, since it represents a point on the circumference of the circle, and it can therefore move back in towards the center to make the circle smaller. Once the circle has the required radius, finger 1302 is removed. Finger 1304 is also removed if no further operation is required on the circle; however, it can be left to retain selection of the circle, in order to move it or add information.
FIG. 14a shows the result of the circle drawing operation with the circle still selected, since both the center node 1402 and the radius node 1404 are visible. In this example of implementation of the circle drawing method the center node and the radius node are shaped or colored differently so that they can be distinguished, and the radius node will become invisible when the circle is no longer selected. The circle circumference 1406 is visible where it is on the touch-screen of the multi-touch enabled device 202. A label 1408 has been given to the circle after creation. In this case some zoom in and zoom out touch-screen buttons 1410 are shown, since the multi-touch gesture defined for circle drawing is similar to the widely used zoom multi-touch gesture, and such buttons would allow zoom to be performed in addition to circle creation. An application which uses the circle drawing method and apparatus defined here would not be compatible, in the same mode, with the commonly used pinch/zoom multi-touch gesture.
FIG. 15a describes how a circle drawing method can be implemented. If multi-touch detection process 302 has determined that a multi-touch operation has been initiated and logic 1504 detects that one finger is still and the other is moving away from the still finger, a dynamic circle will be drawn. The circle will be centered on the center node (created by the still finger) and have a radius determined by the distance between the center node and the radius node (created by the moving finger). The center node is always defined as being the key node for the circle shape. Once both fingers are still for a set duration (for example two seconds), logic 1508 will declare a timeout, and if both fingers are still present at that event, will keep the circle selected and initiate the user-added circle information process 1512. This process will present a user interface applicable to the application, for example to name the circle. Logic block 1508 will activate logic block 1510 if only one finger is present after the timeout, and thereafter the radius can be changed until the radius node finger is removed. However, if only the center node finger is present, the circle will remain selected, and can be moved around like any singularity node described by FIG. 3Aa and FIG. 3Ba (including margin-based node movement). Circle definition and movement will be completed upon the center node finger being removed from the touch screen, although other end events are possible, such as a further timeout upon the center node finger being still.
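The decision made by logic 1504 - one still finger, one moving finger - could be approximated as in the sketch below. The stillness tolerance and the use of complete touch tracks are illustrative assumptions, not taken from the figures.

    import math

    STILL_TOLERANCE = 8.0    # pixels a "still" finger may drift (illustrative)

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def detect_circle_gesture(track1, track2):
        """Given two touch tracks (lists of (x, y) samples), return
        (center, radius) if exactly one finger is still and the other is
        moving, otherwise None (both still suggests a line; both moving,
        a rectangle)."""
        moved1 = dist(track1[0], track1[-1])
        moved2 = dist(track2[0], track2[-1])
        if moved1 <= STILL_TOLERANCE < moved2:
            still, moving = track1, track2
        elif moved2 <= STILL_TOLERANCE < moved1:
            still, moving = track2, track1
        else:
            return None
        center = still[-1]                         # center node (key node)
        return center, dist(center, moving[-1])    # radius to the radius node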
Selection of a circle for movement occurs through the touching of the center node of the circle. This will allow its movement, and will also display a radius node on the circumference of the circle; the radius can also be changed while the circle is selected. Double-tapping of the center node will select the circle for the addition of information as in process 1512, since the center node is also the key node.
Deletion of a radius node will result in the deletion of the circle, and the center node becoming a singularity node. Deletion of the whole circle will occur if either the center node is deleted, or if the shape is selected by double-tap of the center node and a deletion option being selected by the user.
FIG. 16a presents some examples of information which can be attributed to a circle. The position of the circle on the screen (and representational position for example longitude and latitude) will automatically be attributed to the circle where applicable, and operating system information such as user and date/time can automatically be added. Similar to single nodes and lines, elevation and shape formatting information can be added. It is important to note that since the circle is node-based, information about it can be easily shared between applicable devices. Therefore circles defined by one user can - if enabled - be displayed on touch-screen devices of other users.
RECTANGLE DEFINITION
A rectangle can be defined via multi-touch with the use of two fingers, both of which will move across the touch-screen - initially away from each other. During the rectangle definition, each finger will create, select and control the motion of a node representing a diagonally opposite corner of the rectangle. FIG. 17a shows how a rectangle is defined on a touch-screen 102. The user is using one finger 1702 (in this case the left thumb) to define the position and motion of the bottom right corner node of the rectangle and another finger 1704 to define the position and motion of the top left corner node. Both fingers must be in motion over the touch-screen for the touch-screen device to differentiate rectangle creation from both the creation of a circle (one finger moving and one static) and the creation of a line (two static fingers). Two fingers from different hands may be used to define rectangle corners. Definition of a rectangle may also be made by controlling the bottom-left and top-right corner nodes, relative to the user.
FIG. 18a shows the result of the rectangle definition on touch-screen device 202. A rectangle with a node at each of the four corners is displayed to the user; two of these nodes (1802 and 1804) are the nodes created and moved by the fingers of the user until the desired size and position of rectangle 1806 was achieved. Zoom in and zoom out touch-screen buttons 1410 are shown, since the multi-touch gesture defined for rectangle drawing is similar to the widely used zoom multi-touch gesture, and such buttons would allow zoom to be performed in addition to rectangle creation.
The allocation of the key node can vary according to application and user defaults and preferences, for instance the first touch to be made during creation of the rectangle, or the node highest and furthest left on the touch-screen. Note that although two nodes are used to create the rectangle, four nodes will be created - one for each corner - of which one will be designated as the key node. One or more sub-nodes will also be created along each side of the rectangle (similar to the creation of lines). These are for creating new nodes on the rectangle shape if selected by the user; for example, after the creation of the rectangle, the user may decide to drag the middle of one side outwards to create a pentagon shape.
The creation of a rectangle would enable the user to create shapes on drawing applications, define areas on a mapping or geographic modelling application, and can be used to define specific screen areas or windows on a touch-screen, such as for a picture-in-a-picture window or secondary application. In order to implement the creation of a rectangle, a process has been defined in FIG. 19a. After monitoring for a multi-touch event (process 310) and detecting a multi-touch, where two fingers are detected which are both moving, and moving apart (decision logic 1904), process 1906 is initiated. This initially creates four corner nodes - one of which is the key node. The logic will interpret which node is highest according to the touch-screen orientation, and designate one of the nodes covered by a finger as a top corner node; the other node under a finger will be designated a bottom corner node. The process will also determine which of the corner nodes is furthest left with respect to the current touch-screen orientation, and designate it as a left node; the other node under a finger will become a right node. In this manner one node under a finger will become a top-left node or a top-right node, while the other will become a bottom-right node or bottom-left node, respectively. During the creation of a rectangle these two nodes will be moved by the user in the same manner as for any singularity node (including normal node movement and margin-based node movement), and may even swap as top/bottom or left/right nodes. The other two corner nodes (not under a finger) will move in sympathy with the two finger-controlled corner nodes; for instance, if the top-left and bottom-right nodes are being controlled by the user, the process will position the top-right node at the same vertical position as the top-left node, and at the same horizontal position as the bottom-right node. Similarly the bottom-left node will be placed at the same vertical position as the bottom-right node and at the same horizontal position as the top-left node. The rectangle nodes will continue to track the finger positions as described until there is a timeout event determined by decision logic 1908. A timeout will occur immediately if both fingers are removed from the touch-screen (decision 1910), and in this case the corners of the rectangle will be where the fingers were last detected and the rectangle will become complete. If one finger is still present, then that node may be moved until the finger is removed from the touch-screen, as depicted by process 1914. If both fingers remain still for a short period (such as 2 seconds) without moving, a user interface process 1912 will be brought up to enable the user to add information about the shape created. At any time after creation (as for any shape) a double-tap of the key node of the shape will also allow shape information to be added by the user.
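The corner designation and sympathetic movement of process 1906 could be approximated by recomputing the four corners from the two finger positions on every touch sample, as in this sketch; screen coordinates are assumed to grow rightwards and downwards.

    def rectangle_corners(finger_a, finger_b):
        """Designate top/bottom and left/right corners from the two
        finger-controlled diagonal nodes and derive the two sympathetic
        corners. Re-running this as the fingers move lets the nodes swap
        top/bottom or left/right naturally."""
        (xa, ya), (xb, yb) = finger_a, finger_b
        left, right = min(xa, xb), max(xa, xb)
        top, bottom = min(ya, yb), max(ya, yb)   # smaller y is higher on screen
        return {"top_left": (left, top), "top_right": (right, top),
                "bottom_right": (right, bottom), "bottom_left": (left, bottom)}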
In one implementation of rectangle creation using the above method, once a rectangle has been created it will be classed as a shape made up of four nodes and will not be treated specifically as a rectangle after creation. However continued classification as a rectangle could also occur in which case further size editing could be made by touching diagonal corner nodes at the same time.
FIG. 20a shows examples of information which could be recorded about a rectangle, although this information is also common to any other shape with multiple nodes (any polygon). It can be seen that much of the information potentially underlying a shape is based upon the identification of the node positions which define the shape. This may be position on the touch-screen and a reference position represented by the node (such as a geographic latitude, longitude and elevation). Some information may be created automatically by the operating system or application, such as creation date/time and user. Other elements may be format information which may be a combination of application and user selections and defaults.
TRIANGLE AND QUADRILATERAL DEFINITION
A triangle can be defined via multi-touch with the use of three fingers, all three of which will initially remain stationary on the touch-screen. A quadrilateral shape can similarly be defined via the use of four fingers remaining stationary on the touch-screen. FIG. 21a shows three stationary fingers 2104 held against touch-screen 102. The three finger positions define nodes, which determine where the three corners are. FIG. 22a shows a triangle 2204 resulting from the touching of the multi-touch device 202 to create the nodes 2206. Whether or not the node points are normally visible, one of the points will be designated as the key node. In this example of use the multi-touch device is a drawing application which has a pre-defined triangle shape selected as denoted in selection area 2202, and in this case the nodes are not visible. However if the application or operating system of the multi-touch device is pre-programmed to recognise three fingers - where stationary for a defined time - as the initiating event for creating a triangle, a pre-selection of shape type would not be required. The process of FIG. 23a shows the mechanism required to create multi-touch, node-based triangles and quadrilaterals, including selection. Process 302 determines whether a multi-touch event has occurred, and the logic of 2304 determines whether a triangle or quadrilateral is being defined. This is established by determining whether three fingers or four fingers respectively are touching the touch-screen at the same time. If it is determined that this is the case and a minimum duration passes without the fingers moving (for example one second), a corner node will be created at the screen position below each finger, and these nodes will be joined to create the triangle or quadrilateral shape.
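The stationary-finger test of logic 2304 generalises to any number of fingers. A sketch under assumed tolerances follows, where each touch carries its first contact point, its latest point and how long it has been held; all thresholds are illustrative.

    import math

    def stationary_polygon(touches, min_duration=1.0, tolerance=8.0):
        """Return one corner node per finger if all fingers have stayed
        within `tolerance` pixels of first contact for `min_duration`
        seconds: three fingers yield a triangle, four a quadrilateral.
        `touches` is a list of (first_xy, latest_xy, seconds_held)."""
        if len(touches) < 3:
            return None
        for first, latest, held in touches:
            drift = math.hypot(latest[0] - first[0], latest[1] - first[1])
            if drift > tolerance or held < min_duration:
                return None
        return [latest for _, latest, _ in touches]   # the polygon's nodes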
The end of a time-out process 2308 (such as one second) where not all three fingers are in contact with the touch-screen, or an immediate time-out when the user removes all three fingers from the touch-screen will result in the completion of the triangle.
If all three fingers are detected at the end of the time-out, a user interface will be presented to the user to define additional information in process 2310 - such as illustrated in FIG. 20a.
As for line definition and rectangle definition, one or more additional (typically hidden) sub-nodes will be created along the length of each side, so that the user can efficiently change the triangle into another polygon by dragging one or more sub-nodes off the line to divide it. Also (as per all other node-based lines and shapes) if the key node of the triangle is selected by double-tap after creation as a polygon, the shape will be selected by the user for movement of the whole shape together, or for the addition or editing of information as desired.
PENTAGON DEFINITION
A pentagon can be defined via multi-touch with the use of five fingers, all five of which will initially remain stationary on the touch-screen. FIG. 24a shows five stationary fingers 2404 held against touch-screen 102. The five finger positions define nodes, which determine where the five corners are. FIG. 25a shows a pentagon resulting from the touching of the multi-touch device 202 with corners of the pentagon defined by nodes 2504, 2506, 2508, 2510 and 2512. Whether or not the node points are normally visible, one of the points will be designated as the key node, which in FIG. 25a is marked as 2506. In this example the multi-touch device shows a real estate application with a map. The shape created by the user - which in this case is a pentagon - represents the geographic area of interest for a search of properties meeting criteria already defined. Property 2502 is an example of nine properties for sale which match the user's criteria, and in this case it has a banner 2514 summarizing the address and market price. Note the hidden sub-node 2516; such sub-nodes are always created when any line or polygon side is created, and there can be more than one between two nodes. If sub-node 2516 is selected by the user by (in this case) touching the center of the line between node 2506 and node 2508, that sub-node will become visible (if not already visible) and can be dragged from its initial location, which will create a new node, and create a hexagon from the pentagon, as shown in FIG. 27a. It can be seen that node 2702 has become a node to replace the previous sub-node 2704. The effect of this on the real estate application is to enlarge and detail the geographic area of search, and it can be seen that one new property 2706 has appeared as relevant to the search. In both FIG. 25a and FIG. 27a, the zoom in/out graphics 1410 are shown as a means to zoom in and zoom out for map scale without invoking circle or rectangle shape drawings. The process of FIG. 26a shows the mechanism required to create multi-touch, node-based pentagons, including selection. Process 302 determines whether a multi-touch event has occurred, and the logic of 2604 determines whether a pentagon is being defined. This is established by determining whether five fingers are touching the touch-screen at the same time. If it is determined that this is the case and a minimum duration passes without the fingers moving (for example one second), a pentagon node will be created at the screen position below each finger, and these nodes will be joined to create a pentagon shape. The end of a time-out (such as one second) where not all five fingers are in contact with the touch-screen, or an immediate time-out when the user removes all five fingers from the touch-screen, will result in the completion of the pentagon.
If all five fingers are detected at the end of the time-out process 2608, a user interface will be presented to the user to define additional information - such as illustrated in FIG. 20a.
If (as per all other node-based lines and shapes) the key node of the pentagon is selected by double-tap after creation as a polygon, the shape will be selected by the user for movement of the whole shape together, or for the addition or editing of information as desired in process 2610.
LINE DIRECTION AND CONVERSION
Where a line has been defined with nodes and has been given direction, as shown with arrows between the nodes as in FIG. 28Aa, after selection of the whole line the direction of the line can be reversed by the user where a suitable user interface is provided; the arrow shown by 2802 can be seen as having different directionality before and after reversal. Similarly, as shown by 2808 in FIG. 28Ba, where a line has more than two nodes and the line is closed - where every node is both a beginning and an end of the line - as well as reversing line direction, the user can convert the line into a polygon 2804 using the same nodes. If such a line has directionality before conversion, it will not have directionality as a polygon. A polygon can be converted into a line 2806, which may have directionality. However polygons created from nodes of a closed line will always use all nodes as perimeter nodes: there will not be crossing of lines as shown in 2808.
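Assuming a line is held as an ordered list of nodes, with a closed line repeating its first node at the end, the reversal and conversion just described reduce to the following sketch; the representation is illustrative only.

    def reverse_line(nodes):
        """Reverse the direction of a directed line."""
        return list(reversed(nodes))

    def closed_line_to_polygon(nodes):
        """Convert a closed line into a polygon using all nodes as
        perimeter nodes; any directionality is dropped along with the
        duplicate closing node."""
        if nodes[0] != nodes[-1]:
            raise ValueError("line is not closed")
        return nodes[:-1]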
APPARATUS DETAILED DESCRIPTION & OPERATION
In order to create moveable nodes, and to create lines and shapes from those nodes, a device with a multi-touch interface is required, as indicated by 2902 on FIG. 29a. The output signals from the multi-touch interface in response to user touches are fed to an Input / Output Processor 2906 to interpret. The processor will determine whether a multi-touch event relating to nodes, node-based lines, node-based circles or any node-based polygon has occurred. If so, the relevant event and data - such as the positions of the node(s) on the multi-touch device - will be communicated to the Central Processing Unit 2908. The Central Processing Unit will perform the required calculations and processing to create shapes and perform node movements - including margin-based node movements - some of which will call on data from Memory 2910. Node and shape data may be shared on a network under control of a Communications module 2912. Node, line, circle and polygon information may be displayed under control of the Input / Output Processor 2906, on the Display 2904. In one embodiment the Display module 2904 and Multi-touch interface 2902 are combined in one unit - a touch-screen on which nodes, lines and shapes appear where commanded by the user - under the user's fingertips. However the Display module 2904 and Multi-touch interface 2902 do not have to be the same; a multi-touch pad could be used as the input device with a conventional screen used for display to the user.
FIG. 30a shows how node-based point, line and area information may be exchanged between different devices and computers via a network. However, several networks combining different communications links and different servers and databases could be used, depending on the application.
A tablet computer 3002, a large touch-screen device 3004, a personal computer 3006 (which does not have to have a touch-screen or be multi-touch enabled), a smart phone 3008 and a satellite navigation system 3010 are shown communicating node-based point, line and area information via a network. The information being provided by some or all of the devices is processed and stored at a central server with database 3012. The central server will share information as requested and required by the devices' applications. The link 3014 represents a one-to-one communication of node-based point, line and area information between two users with suitable apparatus, and shows that a centralised information distribution system is not necessarily required. Peer-to-peer and small clusters of users can also share information.
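Because every entity reduces to nodes plus attached information, the data exchanged in FIG. 30a can be compact. The JSON message below is a purely illustrative format, not one defined by this description.

    import json

    def encode_entity(kind, nodes, info=None):
        """Serialise a node-based point, line or area for transmission;
        the receiving device redraws the entity from its nodes."""
        return json.dumps({
            "kind": kind,        # e.g. "node", "line", "corridor", "polygon"
            "nodes": nodes,      # e.g. [[lat, lon], ...]
            "info": info or {},
        })

    message = encode_entity("line", [[51.5, -0.1], [48.9, 2.3]],
                            {"name": "planned route"})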
ADVANTAGES
Various applications of this new human interface to touch-device technology are foreseen.
Firstly, the node-based definition of line segments and areas in a drawing application would be an efficient means of drawing lines and shapes. There are applications and user interfaces which will draw lines (and through them, shapes) directly while following a user-controlled pointer, and there are also applications from the mouse-type point-and-click tradition by which a shape type can be selected and then drawn using a digit like a computer mouse. However, there is not currently a means of drawing via nodes which represent corners, directly from multi-touch gestures.
A second application for node-based multi-touch area definition is defining areas for a particular function on the touch-screen itself. Of particular note is the quick and easy definition of a picture or window within a screen: for example, where most of a screen is being used for one application - such as web browsing - there could be other user-drawn shapes, such as rectangles, showing another application independently - for example a TV display, a messaging screen or a news-feed.
Another real benefit of node-based point, line and area definition potentially lies where a defined line or area on the touch-screen or touchpad interacts with a map or space model, to establish real, geographic coordinate-based points, lines, corridors and areas. This would bring benefit to systems for personal and vehicle navigation. It would also enhance mapping and geographic information systems, including agriculture planning and real-estate management. In combination with application-specific real estate data from central databases, searches and data could be provided about locations and areas specifically defined and customised by a client using a tablet computer or smart phone.
All of the above applications are underpinned by the efficient and easy to use method of node definition via multi-touch. Nodes can be easily selected, moved and have information added. Key nodes allow a whole entity such as a shape, line or corridor to be moved, or edited as one operation, and sub-nodes allow the division of lines or shape sides into smaller, linked lines.
Although the description above contains several specificities, these should not be construed as limiting the scope of the embodiment, but as examples of several embodiments. For example the range of devices applicable is greater than tablet computers and smart phones.
Thus the scope of the embodiments should be determined by the claims appended and their legal implications, rather than by the examples supplied.
DETAILED DESCRIPTION OF FURTHER EMBODIMENTS
The use of two or more pointing devices in contact with a touch-sensitive area can be used to create rectangles, circles, triangles, quadrilaterals, pentagons and other polygons through the recognition and manipulation of the key nodes which define them. From these areas and shapes, greater-sided polygons can be quickly created. Any of the resultant entities can be used to define shapes or areas on the touch screen itself, geographic areas or zoom boundaries on an associated map which may be shared with other devices or applications, or windows which may run multiple independent applications which are visible to the user at the same time on a touch screen.
In the following description of specification and operation the term "finger" is used to refer to a user's finger or other touch pointing device or stylus, and "fingers" represents a plurality of these objects, including a mixture of human fingers, thumbs and artificial pointing devices. The term "node" is taken to mean a singular two dimensional point on a two dimensional surface, plane, screen, image, map or model, which can persist in logical usage and memory after being initially defined, and can have additional data linked to it. The term "node-based" is used to define an entity created from, or pertaining to, one or more nodes. The term "cardinal nodes" refers to a minimum set of nodes which will completely define the size and form of a specific shape. The term "shape" is used to describe a pure geometric form, whereas "area" is an application of a shape to a specific two dimensional space such as on a finite screen or a map which may be larger than can be seen with a finite screen space. A "window" is taken to mean a shape specifically used to interact with a specific computer system or embedded device software application, which can run independently of other software applications or background tasks. The term "window" is analogous to its use in conventional computer systems where graphical windows and windowing systems are typically driven by a mouse or other such controller. However windows in the context of the following description are those created and controlled with one or more fingers, and which can be any geometric shape, whereas conventional computer system windows are rectangular. The term "touch screen" is used to mean an integrated touch input mechanism and display surface, such as that typically present on a tablet computer.
Various combinations of the following shape, area and window operations can be used in various embodiments.
NODE-BASED CIRCLE SPECIFICATION AND OPERATION
A circle is a symmetrical geometric shape with a constant radius, which can also represent a circular area and bound a circular window. Circles can be defined via multi-touch with the use of two fingers; one finger will be static at the center of the desired circle on the touch screen, while the other will move from the center outwards until the required radius is achieved. FIG. 1b illustrates the method, in which the user is using one finger 104 (in this case the left thumb) to define the center of the circle. Another finger 106 is in motion over touch screen 102 and away from the thumb. The finger which is moving away can move in any direction from the center, since it represents a point on the circumference of the circle, and it can therefore move back in towards the center to make the circle smaller. Once the circle has the required radius, finger 106 is removed. Finger 104 is also removed if no further operation is required on the circle; however, it can be left to retain selection of the circle, in order to move it or add information.
FIG. 2b shows the result of the circle drawing operation with the circle still selected, since both the center node 203 and the radius node 204 are visible. In this example of implementation of the circle drawing method there are additional radius nodes 209 automatically created, and the center node and the radius nodes are shaped or colored differently to be able to distinguish them from each other. In one embodiment of the circle drawing operation the radius node is invisible when the circle is no longer selected, but the circle circumference 206 is visible where it overlaps with the touch screen area of the multi-touch enabled device 202. A label 208 has been given to the circle after creation in this example. In this case some zoom in and zoom out touch screen buttons 210 are shown, since the multi-touch gesture defined for circle drawing is similar to the widely used zoom multi-touch gestures, and such buttons would allow zoom to be performed in addition to circle creation. An application which uses the circle drawing method and apparatus defined here would not be compatible, in the same mode, with the common pinch/zoom multi-touch gesture.
FIG. 3b describes how a circle drawing operation can be implemented. If multi-touch detection process 302 has determined that a multi-touch operation has been initiated and logic 304 detects that one finger is still and the other is moving away from the still finger, a dynamic circle will be drawn by process 306. The circle will be centered on the center node (created by the still finger) and have a radius determined by the distance between the center node and the radius node (created by the moving finger). The center node is defined as being the key node for the circle shape. Once both fingers are still for a set duration (for example two seconds), logic 308 will declare a timeout, and if both fingers are still present at that event, will keep the circle selected and initiate the user-added circle information process 312. This process will present a user interface applicable to the application, for example to name the circle. Logic block 308 will activate logic block 310 if only one finger is present after the timeout, and thereafter the radius can be changed until the radius node finger is removed. However, if only the center node finger is present, the circle will remain selected, and the whole circle can be moved around by following the touch path of the key node finger across the screen of the touch screen device as in process 314. Circle definition and movement will be completed upon the center node finger being removed from the touch screen, although other end events are possible in different embodiments, such as a further timeout upon the center node finger being still.
Re-selection of a circle after creation, occurs through the touching of the center node (key node) of the circle. This allows movement of the circle as previously described, and also displays one or more radius nodes on the circumference of the circle, which can also be changed while the circle is selected. In one embodiment, double-tapping of the center node will select the circle for an operation selected from a menu adjacent to the circle; such as for the addition of information, circle deletion, node position editing or conversion to an ellipse. In another embodiment, the same operations are available from a persistent touch of a center node for more than a second.
Once a circle is selected or re-selected after creation, the radius node or nodes will also be visible, and subsequent radius node selections by touch in one embodiment will allow operations on the radius node, such as movement, naming or deletion via a menu adjacent to the radius node. Deletion of a radius node will result in the deletion of the circle, although in one embodiment the center node will remain.
Deletion of the whole circle will occur in various embodiments if either the center node is deleted, or if the whole shape is selected by key node selection and the subsequent selection of circle deletion by the user.
NODE-BASED ELLIPSE SPECIFICATION AND OPERATION
An ellipse is a curved geometric shape which has a varying curve around its circumference, which can also represent an elliptical area and bound an elliptical window. Ellipses are created as a subsequent operation to the creation of a circle. FIG. 4Ab presents an example of a circle which has been selected prior to conversion to an ellipse. In the embodiment shown, four radius nodes 406, 408, 410 and 412 are displayed to the user, evenly distributed around the circumference (404), along with the center of the circle (402). These nodes, which together completely define the shape, are collectively referred to as the cardinal nodes of the shape. Every shape, including the circle, has a minimum set of cardinal nodes. Although a circle only requires two cardinal nodes to be created - the center node and one radius node - more nodes are required to be shown in order to be able to create an ellipse from it. In this embodiment the opposite radius nodes (410 and 412, or 406 and 408) move together but in opposite directions - either both towards the center node or both away from the center node - when one of the pair is moved, to give a symmetrical modification. The arrows shown next to nodes 412 and 406 show an example of an elongation of the shape to create an ellipse.
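The paired movement of opposite radius nodes could be implemented as in this sketch: moving one cardinal node displaces its opposite by the same amount in the opposite direction, keeping the shape symmetrical about the center node. The function name is illustrative.

    def move_radius_node_symmetric(node, opposite, drag_to):
        """Move one radius node to `drag_to` and mirror the displacement
        on the opposite cardinal node, so both move together either
        towards or away from the center."""
        dx, dy = drag_to[0] - node[0], drag_to[1] - node[1]
        return drag_to, (opposite[0] - dx, opposite[1] - dy)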
FIG. 4Bb shows the resultant ellipse from the circle elongation indicated by the arrows in FIG. 4Ab. Note that this created ellipse used the embodiment of symmetrical node manipulation, since the ellipse is symmetrical, and also because node 418 has been used to elongate the ellipse to such an extent that its opposite cardinal node has moved off the touch screen. In another embodiment of circle manipulation to create an ellipse, all radius nodes are fully independent, allowing separate skewing in the different axes.
FIG. 5b describes how an already-created shape can be modified via manipulation of its cardinal nodes. Continuous shape selection monitoring 502 occurs until there is recognition 504 that a node-based shape has been selected, for instance by the user touching or double-tapping the key node. In this case a process 506 displays all the cardinal nodes to the user where the nodes are within the touch screen display area, and allows any of the cardinal nodes to be selected and moved. Decision logic 508 detects any touches made to the cardinal nodes, which in the shown embodiment incorporates a timeout limit. The shape changing process 510 modifies shape and geometry according to the movement of cardinal nodes, such as the elongation of a circle to create an ellipse. Following completion of shape changing, determined by a timeout on no nodes being moved, process 512 terminates the shape modification process, which in one embodiment results in the cardinal nodes being made invisible to the user and the shape being de-selected. In another embodiment, the shape is not de-selected until another shape is selected.
NODE-BASED RECTANGLE SPECIFICATION AND OPERATION
A rectangle is a four-sided geometric shape containing four right angles, which can also represent a rectangular area and bound a rectangular window. A rectangle can be defined via multi-touch with the use of two fingers, both of which will move across the touch screen - initially away from each other in one embodiment. During the rectangle definition, each finger will create, select and control the motion of a node representing a diagonally opposite corner of the rectangle. Node movement, and therefore creation, is with respect to the borders of a touch screen or other multi-touch input device, and is relative to the associated axes. FIG. 6b shows how a rectangle is defined on a touch screen 102. The user is using one finger 602 (in this case the left thumb) to define the position and motion of the bottom right corner node of the rectangle and another finger 604 to define the position and motion of the top left corner node. Both fingers must be in motion over the touch screen for the touch screen device to differentiate between rectangle creation and the creation of a circle (one finger moving and one static). Two fingers from different hands may be used to define rectangle corners. Definition of a rectangle may also be made by controlling the bottom-left and top-right corner nodes, relative to the user.
FIG. 7b shows the result of the rectangle definition on touch screen device 202. A rectangle with a node at each of the four corners is displayed to the user; two of these nodes (702 and 704) are the nodes created and moved by the fingers of the user until the desired size and position of rectangle 706 was achieved. Zoom in and zoom out touch screen buttons 708 are shown, since this would be a way to zoom in and zoom out which is compatible with the embodiment described. That is to say that the rectangle creation method being described is incompatible with the widely used pinch and zoom multi-touch gesture in the same mode or application since the touch screen device logic would not be able to distinguish between whether the user wanted to zoom or define a rectangle. The allocation of the key node of a rectangle can vary according to application and user defaults and preferences; in one embodiment the highest and furthest left on the touch screen with respect to the user becomes the key node. Note that although two nodes are used to create the rectangle, four nodes will be created - one for each corner. In one embodiment a center node will also be created at the geometrical center of the rectangle which may also be designated as the key node. In one embodiment, one or more sub-nodes will also be created along each side of the rectangle for the purpose of creating new nodes on the rectangle shape if selected by the user; for example after the creation of the rectangle, the user may decide to drag the middle of one side outwards to create a pentagon shape.
The creation of a rectangle would enable the user to create shapes on drawing applications, define areas on a mapping or geographic modelling application, and can be used to define specific screen areas or windows on a touch screen, such as for a picture-in-a-picture window or secondary application.
In order to implement the creation of a rectangle, a process has been defined in FIG. 8b. After monitoring for a multi-touch event (process 302) and detecting a multi-touch, where two fingers are detected which are both moving, and moving apart (decision logic 804), process 806 is initiated. This initially creates four corner nodes. The logic will interpret which node is highest according to the touch screen orientation, and designate one of the nodes covered by a finger as a top corner node; the other node under a finger will be designated a bottom corner node. The process will also determine which of the corner nodes is furthest left with respect to the current touch screen orientation, and designate it as a left node; the other node under a finger will become a right node. In this manner one node under a finger will become a top-left node or a top-right node, while the other will become a bottom-right node or bottom-left node, respectively. During the creation of a rectangle these two nodes will be moved by the user until they are at the final desired positions to define a rectangle, and may even swap as top/bottom or left/right nodes. The other two corner nodes (not under a finger) will move in sympathy with the two finger-controlled corner nodes; for instance, if the top-left and bottom-right nodes are being controlled by the user, the process will position the top-right node at the same vertical position as the top-left node, and at the same horizontal position as the bottom-right node. Similarly the bottom-left node will be placed at the same vertical position as the bottom-right node and at the same horizontal position as the top-left node. The rectangle nodes will continue to track the finger positions as described until there is a timeout event determined by decision logic 808. A timeout will occur immediately if both fingers are removed from the touch screen (decision 810), and in this case the corners of the rectangle will be where the fingers were last detected and the rectangle will become complete. If one finger is still present in one embodiment, then that node may be moved until the finger is removed from the touch screen, as depicted by process 814. If both fingers remain still for a short period (such as 2 seconds) without moving, a user interface process 812 will be brought up to enable the user to add information about the shape created.
The rectangle definition completion process 819 concerns the defining nodes being put into memory, including the finger-defined nodes, the other corner nodes, a center node (where applicable) and mid-point nodes along the rectangle sides (where applicable). A key node will also be defined. At any time after creation (as for any shape) a double-tap of the key node of the shape will allow shape information to be added by the user, as well as allowing the movement of corner nodes, or the whole shape.
A different process from the standard completion process will occur where a decision block 816 recognises that the rectangle definition is on an application such as a map for which a combined pan and zoom operation is valid. In this case the area defined will define the border of a part of the existing map or view which will be magnified in a subsequent process 818 defined by FIG. 8Ab. In the combined pan and zoom implementation of rectangle creation using the above method, once a rectangle has been created on a map or other relevant view which can be expanded, such as shown in FIG. 7b, the rectangle area 706 created will form the basis of a new zoomed view. If the ratio of the width to height (the aspect ratio) of rectangle 706 is exactly the same as that of the screen, then the content of rectangle 706 will become the complete display of the touch screen, magnified or zoomed to an equivalent amount. FIG. 8Ab shows how the process of the automatic pan and zoom by the definition of a rectangle can be achieved. Initially there is a calculation 820 of the geometric center point of the newly created rectangle. In one embodiment where the display is a map or an earth model for which every point on the display represents a latitude/longitude coordinate, the map or earth model latitude and longitude for the corner nodes and the center node are retrieved so that the subsequent pan and zoom operations directly reference geographic points. Another step 822 calculates the aspect ratio of the newly created rectangle, and then decision block 824 compares it with that of the whole display area (window or screen). If the aspect ratio of the newly created rectangle is greater than that of the original display area then the new view calculation 826 will represent the full width of the rectangle, with extra area added at top and bottom accordingly. If however the aspect ratio of the newly created rectangle is less than or equal to that of the whole display area, task 828 will calculate a new view representing the maximum height, with additional area shown to the left and right. A completion task 830 centers the new view at the center of the display area, which effectively implements an automatic pan operation. The same task also creates the display over the whole display area according to the previously calculated area derived from the aspect ratio, and this effectively implements an automatic zoom operation.
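One way to express the aspect-ratio comparison of blocks 822 to 828 is the following sketch; it returns the dimensions of the new view, which task 830 would then center on the rectangle's center point. The variable names are illustrative.

    def zoomed_view_size(rect_w, rect_h, screen_w, screen_h):
        """If the new rectangle is proportionally wider than the display,
        show its full width and add area top and bottom; otherwise show
        its full height and add area left and right. The returned view
        always has the display's aspect ratio and contains the rectangle."""
        screen_aspect = screen_w / screen_h
        if rect_w / rect_h > screen_aspect:
            return rect_w, rect_w / screen_aspect     # pad vertically
        return rect_h * screen_aspect, rect_h         # pad horizontally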
Subsequent pan and zoom operations using rectangle definition on a touch screen may be performed to iteratively zoom in on detailed information in a display.
NODE-BASED TRIANGLE SPECIFICATION AND OPERATION
A triangle is a three-sided, closed geometric shape, which can also represent a triangular area and bound a triangular window. A triangle can be defined via multi-touch with the use of three fingers, all three of which will initially remain stationary on the touch screen. FIG. 9b shows three stationary fingers 904 held against touch screen 102. The three finger positions define nodes, which determine where the three corners are. FIG. 10b shows a triangle 1004 resulting from the touching of the multi-touch device 202 to create the nodes 1006. Whether or not the node points are normally visible, in one embodiment one of the points will be designated as the key node. In the embodiment shown the multi-touch device is a drawing application which has a pre-defined triangle shape selected as denoted in selection area 1002, and the nodes are not visible. However in another embodiment where the application or operating system of the multi-touch device is pre-programmed to recognise three fingers - where stationary for a defined time - as the initiating event for creating a triangle, a pre-selection of shape type is not required. The process of FIG. 11b shows the mechanism required to create multi-touch, node-based triangles, including selection. Process 302 determines whether a multi-touch event has occurred, and the logic of 1104 determines whether a triangle is being defined. This is established by determining whether three fingers are touching the touch screen at the same time. If it is determined that this is the case and a minimum duration passes without the fingers moving (for example one second), a triangle node will be created at the screen position below each finger, and the subsequent task 1106 will join the nodes to create a triangle shape. In one embodiment decision logic 1108 will determine whether the three fingers are still present after an additional timeout which I contemplate to be approximately one second. If the fingers are still present, the user will be requested to add additional information for the triangle, such as a name or color, prior to completion of the triangle shape.
In one embodiment, upon creation or subsequent re-selection of the triangle, one or more additional sub-nodes will be created along the length of each side of the triangle and made visible, so that the user can change the triangle into a greater-sided polygon by dragging one or more sub-nodes off a side line to divide it.
In one embodiment re-selection of the triangle is achieved by the user double- tapping the triangle's key node. In another embodiment, the touching of any side or node of the triangle will select the whole shape.
In various embodiments, once re-selected, the triangle can be moved, with the whole shape following the user finger. Re-selection also allows the addition or editing of information desired, such as name or color. In various embodiments, a single node of a previously created triangle is selected by touching an apex of the triangle, and the node is highlighted and visible to the user.
In one embodiment, once a single node of the triangle is selected, the node is moved by following the user's finger touch on the touch screen. This operation alters the shape of the triangle accordingly.
In one embodiment, once a single node of the triangle is selected, a user interface is presented to the user allowing additional information to be attributed to that node, including the labelling of the node and the setting of visibility of the node.
In one embodiment, once a single node of the triangle is selected, a user interface is presented to the user allowing the deletion of the selected node and the subsequent modification of the triangle into a straight line between the remaining two defining nodes.
NODE-BASED QUADRILATERAL SPECIFICATION AND OPERATION
A quadrilateral is a four-sided geometric shape which does not have to contain right angles, which can also represent a quadrilateral area and bound a quadrilateral window. A quadrilateral shape is defined in a similar manner to the triangle, but via the use of four fingers remaining stationary on the touch screen at the same time for a period I contemplate to be approximately one second. Unlike for the definition of a rectangle, a quadrilateral does not take screen borders or orientation into account. FIG. 12b shows four stationary fingers 1204 held against touch screen 102. The four finger positions define nodes, which determine where the four corners are. FIG. 13b shows a quadrilateral resulting from the touching of the multi-touch device 202 to create the nodes 1306. In one embodiment, whether or not the node points are normally visible, one of the points will be designated as the key node. In the embodiment shown in FIG. 13b the multi-touch device is a drawing application which has a pre-defined quadrilateral shape selected where the nodes are not visible. However in another embodiment where the application or operating system of the multi-touch device is pre-programmed to recognise four fingers - where stationary for a defined time - as the initiating event for creating a quadrilateral, a pre-selection of shape type is not required. The process of FIG. 14b shows the mechanism required to create multi-touch, node-based quadrilaterals, including selection. Process 302 determines whether a multi-touch event has occurred, and the logic of 1404 determines whether a quadrilateral is being defined. This is established by determining whether four fingers are touching the touch screen at the same time. If it is determined that this is the case and a minimum duration passes without the fingers moving (for example one second), a quadrilateral node will be created at the screen position below each finger, and the subsequent task 1406 will join the nodes to create a quadrilateral shape. In one embodiment decision logic 1408 will determine whether the four fingers are still present after an additional timeout which I contemplate to be approximately one second. If the fingers are still present, the user will be requested to add additional information for the quadrilateral, such as a name or color, prior to completion of the quadrilateral shape.
In one embodiment, upon creation or subsequent re-selection of the quadrilateral, one or more additional sub-nodes will be created along the length of each side of the shape and made visible, so that the user can change the quadrilateral into a greater-sided polygon by dragging one or more sub-nodes off the existing side line to divide it.
In one embodiment re-selection of the quadrilateral is achieved by the user double-tapping the quadrilateral's key node. In another embodiment, the touching of any side or node of the quadrilateral will select the whole shape. In various embodiments, once re-selected, the quadrilateral can be moved, with the whole shape following the user finger. Re-selection also allows the addition or editing of information desired, such as shape name or shape color. In various embodiments, a single node of a previously created quadrilateral is selected by touching a corner of the quadrilateral, and the node is highlighted and visible to the user.
In one embodiment, once a single node of the quadrilateral is selected, the node is moved by following the user's finger touch on the touch screen. This operation alters the shape of the quadrilateral accordingly. In one embodiment, once a single node of the quadrilateral is selected, a user interface is presented to the user allowing additional information to be attributed to that node, including the labelling of the node and the setting of visibility of the node.
In one embodiment, once a single node of the quadrilateral is selected, a user interface is presented to the user allowing the deletion of the selected node and the subsequent modification of the quadrilateral into a triangle drawn between the remaining three defining nodes.
NODE-BASED PENTAGON SPECIFICATION AND OPERATION
A pentagon is a five-sided, closed geometric shape, which can also represent a pentagonal area and bound a pentagonal window. A pentagon can be defined via multi-touch with the use of five fingers, all five of which will initially remain stationary on the touch screen. FIG. 15b shows five stationary fingers 1504 held against touch screen 102. The five finger positions define nodes, which determine where the five corners are. FIG. 16b shows a pentagon resulting from the touching of the multi-touch device 202 with corners of the pentagon defined by nodes 1604, 1606, 1608, 1610 and 1612. In one embodiment, whether or not the node points are normally visible, one of the points will be designated as the key node, which in the embodiment shown in FIG. 16b is marked as a non-filled square 1606. In the embodiment illustrated in FIG. 16b the multi-touch device shows a real estate application with a map. The shape created by the user - which in this case is a pentagon - represents the geographic area of interest for a search of properties meeting criteria already defined. Property 1602 is an example of nine properties for sale which match the user's criteria, and in this case it has a banner 1614 summarizing the address and market price. Note the hidden sub-node 1616 which is an example of sub-nodes created in various embodiments of pentagon definition. If sub-node 1616 is selected by the user by (in this embodiment) touching the center of the line between node 1606 and node 1608, that sub-node will become visible (if not already visible) and can be dragged from its initial location, which will create a new node, and create a hexagon from the pentagon, as shown in FIG. 19b. It can be seen that node 1902 has become a node to replace the previous sub-node 1616. The effect of this on the real estate application is to enlarge and detail the geographic area of search, and it can be seen that one new property 1906 has appeared as relevant to the search.
The process of FIG. 17b shows the mechanism required to create multi-touch, node-based pentagons, including selection. Process 302 determines whether a multi-touch event has occurred, and the logic of 1704 determines whether a pentagon is being defined. This is established by determining whether five fingers are touching the touch screen at the same time. If it is determined that this is the case and a minimum duration passes without the fingers moving (for example one second), a pentagon node will be created at the screen position below each finger, and these nodes will be joined to create a pentagon shape, as shown by task 1706.
The end of a further time-out (such as one second) and decision logic 1708 will determine whether the shape is completed immediately (five fingers no longer in contact with the touch screen), or whether additional information is requested 1710 when all five fingers are detected after the time-out. In one embodiment, additional information includes options to provide a name to the pentagon and to define the color of the pentagon.
In one embodiment, upon creation or subsequent re-selection of the pentagon, one or more additional sub-nodes will be created along the length of each side of the shape and made visible, so that the user can change the pentagon into a greater-sided polygon by dragging one or more sub-nodes off the existing side line to divide it.
In one embodiment re-selection of the pentagon is achieved by the user double-tapping the pentagon's key node. In another embodiment, the touching of any side or node of the pentagon will select the whole shape.
In various embodiments, once re-selected, the pentagon can be moved, with the whole shape following the user finger. Re-selection also allows the addition or editing of information desired, such as shape name or shape color. In various embodiments, a single node of a previously created pentagon is selected by touching a corner of the pentagon, and the node is highlighted and visible to the user.
In one embodiment, once a single node of the pentagon is selected, the node is moved by following the user's finger touch on the touch screen. This operation alters the shape of the pentagon accordingly.
In one embodiment, once a single node of the pentagon is selected, a user interface is presented to the user allowing additional information to be attributed to that node, including the labelling of the node and the setting of visibility of the node.
In one embodiment, once a single node of the pentagon is selected, a user interface is presented to the user allowing the deletion of the selected node and the subsequent modification of the pentagon into a quadrilateral drawn between the remaining four defining nodes.
NODE-BASED HEXAGONS & POLYGONS SPECIFICATION AND OPERATION
A hexagon is a closed six-sided geometric shape, which can also represent a hexagonal area and bound a hexagonal window. Generally a polygon is a closed multi-sided geometric shape, which can also represent a polygonal area and bound a polygonal window. A hexagon can be defined via multi-touch with the use of six fingers, and more generally a polygon with greater than six sides can be defined with the equivalent number of fingers remaining stationary on the touch screen for a period of approximately one or two seconds. FIG. 18b shows five stationary fingers 1804 from one hand held against touch screen 102, and another finger from another hand doing the same. The six or more finger positions define nodes, which determine where the six or more corners are. FIG. 19b shows a resulting polygon produced on multi-touch device 202.
The process of FIG. 20b shows the mechanism required to create multi-touch, node-based polygons, including selection. Process 302 determines whether a multi-touch event has occurred, and the logic of 2004 determines whether a polygon is being defined. This is established by determining whether six or more fingers are touching the touch screen at the same time. If it is determined that this is the case and a minimum duration passes without the fingers moving (for example one second), a polygon node will be created at the screen position below each finger, and these nodes will be joined to create a polygon shape with an equivalent number of sides, represented by task 2006. The end of a further time-out (such as one second) and decision logic 2008 will determine whether the shape is completed immediately (fingers no longer in contact with the touch screen), or whether additional information is requested 2010 when all fingers are still detected after the time-out. In one embodiment, additional information includes options to provide a name to the polygon and to define the color of the polygon.
In one embodiment, upon creation or subsequent re-selection of the polygon, one or more additional sub-nodes will be created along the length of each side of the shape and made visible, so that the user can change the polygon into a greater-sided polygon by dragging one or more sub-nodes off the existing side line to divide it.
In one embodiment re-selection of the polygon is achieved by the user double-tapping the polygon's key node. In another embodiment, the touching of any side or node of the polygon will select the whole shape.
In various embodiments, once re-selected, the polygon can be moved, with the whole shape following the user's finger. Re-selection also allows the addition or editing of desired information, such as shape name or shape color.
In various embodiments, a single node of a previously created polygon is selected by touching a corner of the polygon, and the node is highlighted and visible to the user.
In one embodiment, once a single node of the polygon is selected, the node is moved by following the user's finger touch on the touch screen. This operation alters the shape of the polygon accordingly.
In one embodiment, once a single node of the polygon is selected, a user interface is presented to the user allowing additional information to be attributed to that node, including the labelling of the node and the editing of the icon for that node. This embodiment is illustrated in FIG. 21b, where finger 2104 has selected a node, and a drop-down menu has appeared, allowing various options on the selected node.
In one embodiment, once a single node of the polygon is selected, a user interface is presented to the user allowing the deletion of the selected node and the subsequent modification of the polygon into a polygon with one fewer side, drawn between the remaining defining nodes. This embodiment is illustrated in FIG. 22b, whereby the left-most node 2202 which was present in FIG. 21b has been deleted, resulting in a polygon with one fewer side.
NODE-BASED WINDOW CREATION WITH SHAPE DEFINITION BEFORE APPLICATION - SPECIFICATION AND OPERATION
There are various possibilities with regard to the application of the previously described node-based shapes and areas to the creation and editing of independent windows on a touch screen display. In one embodiment of node-based shape and area definition, a node-based shape is initiated as illustrated in FIG. 23Ab. In one embodiment finger 2302 and finger 2304 initially touch at point 2306 on the multi-touch input area of multi-touch device 202. The type, position and dimensions of the node-based shape are then completed as illustrated in FIG. 23Bb. In the case shown, a rectangle is created by finger 2308 and finger 2310 drawing apart from each other to define four corner nodes, of which node 2312 is the top-right node. Once the shape has been defined, a user interface in the form of a menu list is displayed to the user, as shown in FIG. 23Cb. The menu list 2316 appears adjacent to, or in front of, the shape 2314 which was created. A user finger 2318 selects the specific application which is to be run in the window from the options in menu list 2316. Finally, as shown in FIG. 23Db, the selected application is run in the newly created window 2320. Although the creation of a rectangular window is shown in FIG. 23Ab, FIG. 23Bb, FIG. 23Cb and FIG. 23Db, it is anticipated that a window for the running of software applications can be created from any of the node-based shapes, including circles, ellipses, rectangles, triangles, quadrilaterals, pentagons, hexagons and greater-sided polygons.

In another embodiment of node-based shapes being used to create windows, the node-based shape is first created and then the possible application options to run in the window appear inside the shape which will become the window, as shown in FIG. 24Ab (rather than the possible application options being presented on a list menu). The created shape 2402 is filled with application icons (two of which are labelled 2404) representing all the applications which can be run in the window. Once a user finger 2406 selects an application icon by touching it, that application will start within shape 2402 as shown in FIG. 24Bb. The new window 2408 is now running the selected application 2410, which is independent of any application running in another window or on the touch screen background 2412.
NODE-BASED WINDOW CREATION WITH APPLICATION DEFINITION BEFORE SHAPE - SPECIFICATION AND OPERATION
A different approach to the application of node-based shapes and areas to the creation and editing of windows on a touch screen display is characterised by embodiments in which the application is selected first, and the shape, position and size of the window is defined afterwards.
In one embodiment the application is selected by the user as shown in FIG. 25Ab. In the embodiment shown in FIG. 25Ab the choice of application is determined from list menu 2504 appearing within the touch screen area 2502. User finger 2506 selects an application from list menu 2504. However it is anticipated that a different selection method can be used, such as selection from a group of application icons. The chosen application will then appear in a window 2508 of preset or default position, shape and size on the touch screen for that application, as shown in FIG. 25Bb. In the embodiment illustrated in FIG. 25Bb, the calculator application defaults to a rectangular window in the shown position and size. However once the application window appears, each node of the window (one of which is indicated as node 2510) is shown to the user, and these nodes may be moved by user touches. FIG. 25Cb shows the top-left node 2512 and the bottom-right node 2514 of a rectangular window being moved apart by simultaneous user touches. In this case, since all four corner nodes of a rectangle are linked, the other nodes including the top-right node 2516 also move accordingly. In the illustration of FIG. 25Cb the movement of the corner nodes apart designates a stretching of the window, and changing of the nodes occurs until a timeout of approximately two seconds after the last node is touched. The resultant window 2518 is shown in FIG. 25Db.
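As a minimal sketch of the linked-corner behaviour of FIG. 25Cb, the following Python function derives all four corner nodes of a rectangular window from the two corners being dragged; the data layout is an assumption for illustration only.

    def update_rectangle(dragged_a, dragged_b):
        """Return the four linked corner nodes of a rectangular window given
        the two corners under the user's fingers (in any order)."""
        (x1, y1), (x2, y2) = dragged_a, dragged_b
        left, right = min(x1, x2), max(x1, x2)
        top, bottom = min(y1, y2), max(y1, y2)
        return {
            "top_left":     (left, top),
            "top_right":    (right, top),      # e.g. node 2516 follows along
            "bottom_left":  (left, bottom),
            "bottom_right": (right, bottom),
        }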
NODE-BASED MULTI-WINDOW TOUCH SCREEN ENVIRONMENT - SPECIFICATION AND OPERATION
Practical embodiments for the creation of windows running software applications have been disclosed. However, the environment in which the windows can run on a touch screen device is also disclosed. In one embodiment, multiple node-based windows can co-exist as shown in FIG. 26b. The individual windows can be selected and moved, with a more recently created or selected window appearing in front of previously created or selected windows, as shown by the more recently selected calculator application window 2602 appearing in front of the previously created video conference application window 2608. Multiple window shapes containing multiple application types and instances can be visible at the same time, such as triangular clock application window 2612, real estate map application 2604 and internet messaging application window 2606. Windows containing groups of application icons for selection by the user, as shown by window 2610, can also appear with other windows. Upon selection, any individual window can be stretched, moved, re-shaped or deleted. However even if not currently selected, a window can be active and display data to the user, in a similar way to conventional computer windowing systems controlled by a mouse or similar device.
The method to create the window applications possible within the windowing environment is depicted in FIG. 27b. Decision block 2702 ascertains whether an application-first selection has been made (selection by the user of a desired application). If it has been, decision block 2704 determines whether a default or option is set which only allows whole-screen applications. If this is the case, the selected application will run full-screen, as typically occurs in current, prior art smartphone and tablet applications, and no windowing environment will be initiated. If the whole-screen option has not been set, default window creation process 2708 will create a window of the default shape, size and position for the selected application. Application creation task 2710 populates the default window with the selected application, which is followed by customisation task 2712, which allows the user to modify window size and shape by the movement of nodes. Finally for this method, positioning task 2714 allows the new window to be moved by the user to the desired position.
Where decision block 2702 does not detect an application-first event, and decision block 2706 does not detect a windows-first event, the touch screen device will continue monitoring for either of these events. If decision block 2706 does detect a windows-first event, such as the detection of a node-based shape being drawn on the touch screen, window creation task 2718 will be initiated. The window creation task will create a shape according to the nodes and gestures defined by the user. Decision logic 2720 will then decide the method by which the user will define the application which will run in the new window, according to options and defaults. Either a list menu task 2722 or an icon selection task 2724 will be initiated to prompt the user to select an application for the window.
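The dispatch logic of FIG. 27b could be sketched as the following Python loop; poll_event, ui and config are hypothetical placeholders standing in for whatever event source, user interface layer and settings store an implementation provides.

    def window_environment_loop(poll_event, ui, config):
        """Monitor for application-first and windows-first events and route
        them through the tasks of FIG. 27b."""
        while True:
            event = poll_event()
            if event.kind == "application_first":              # block 2702
                if config.whole_screen_only:                   # block 2704
                    ui.run_fullscreen(event.application)
                else:
                    win = ui.create_default_window(event.application)  # 2708
                    ui.populate(win, event.application)                # 2710
                    ui.allow_node_resize(win)                          # 2712
                    ui.allow_positioning(win)                          # 2714
            elif event.kind == "window_first":                 # block 2706
                win = ui.create_window_from_nodes(event.nodes)         # 2718
                if config.app_picker == "menu":                # logic 2720
                    ui.show_list_menu(win)                             # 2722
                else:
                    ui.show_icon_grid(win)                             # 2724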
APPARATUS DETAILED DESCRIPTION AND OPERATION
In order to detect nodes, and to create shapes, areas and windows from those nodes, a touch screen module is required, as indicated by 2802 on FIG. 28b. The output signals from the touch screen module in response to user touches are fed to a control module 2804 to interpret. The control module will determine whether a multi-touch event relating to nodes, node-based shapes, node-based areas or node-based windows has occurred. If so, the control module will process the information to create or modify the relevant entity. Node, shape, area or window data for storage will be routed to the memory module 2812, with the memory module also serving as the source of these data to the control module where required by a running application 2818 or operating system 2814. Where an application or operating system requires an interface with remote devices, networks, servers or databases, the communications module 2810 sends or receives the shape, area or window node data, and supplementary information associated with that entity. This information is passed to or from the control module, which may also route the data to or from the touch screen module or the memory module. FIG. 29b shows how node-based shape, area and window information may be exchanged between different devices and computers via a network. However several networks combining different communications links and different servers and databases could be used, depending on the application.
A tablet computer 2902, a large touch screen device (such as a touch screen television) 2904, a personal computer or workstation 2906 (which does not have to have a touch screen or be multi-touch enabled), a smartphone 2908 and a satellite navigation system 2910 are shown communicating node-based shape, area and window information via a network. The information being provided by some or all of the devices is processed and stored at central servers or databases 2912. The central servers or databases will share information as requested and required by the devices' applications and operating systems, including node-based shape, area and window information, for example a circular area on a map. The link 2914 represents a one-to-one communication of node-based shape, area and window information between two users with suitable apparatus, and shows that a centralized information distribution system is not necessarily required. Peer-to-peer and small clusters of users can also share node-based shape, area and window information.
ADVANTAGES
Various applications of this new human interface to touch-device technology are foreseen.
Firstly, the node-based definition of various shapes in a drawing application would be an efficient and intuitive means of drawing these shapes. There are applications and user interfaces which will draw lines and (through this) shapes directly while following a user-controlled pointer. There are also applications coming from the mouse-type point and click tradition by which a shape type can be selected, and then drawn using a mechanism like a computer mouse. However there is not currently a means of drawing shapes directly from multi-touch gestures using the touched nodes to represent corners and other vital parts of those shapes.
Another real benefit of the node-based area definition is realised where a defined area on the touch screen interacts with a map, space model or image, to establish real, geographic or coordinate-based areas. This would bring benefit to navigation systems for personal and vehicle navigation. It would also enhance mapping and geographic information systems, including agriculture planning and real-estate management. In combination with application-specific real estate data from central databases, searches and data could be provided about locations and areas specifically defined and customised by a client using a tablet computer or smartphone.
A third application for node-based multi-touch window definition is defining areas for a particular function on a touch screen itself. Of particular note is the quick and easy definition of a picture, or window within a screen, for example where most of a screen is being used for one application - such as web browsing - but other user-drawn shapes, such as rectangles, are showing another application independently - for example a TV display, a messaging screen, or a news-feed. Useable windows of different shapes are available for the first time, allowing operating systems and users to employ windows other than rectangles. Also for the first time there is a way to create a window before defining the application which will run in it. Apart from the use of node-based operations to create windows on a display, the node-based multi-touch operations defined herein can be used to create and control a whole windowing environment on touch screen and other multi-touch enabled devices. Following creation, windows can be selected, moved, stretched and deleted under intuitive user direction and efficient operating system control.
As well as making it simple to define shapes, areas and windows using user-touched nodes, the node-based method lends itself to the efficient communication of shape, area and window data. The sending over a communication channel of only a few key nodes which completely define a whole geometric entity, without sending a complete record of the shape, allows simple and low bandwidth communication of such data.
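To make the bandwidth point concrete, a node-defined shape can be serialised as little more than its corner coordinates. The JSON layout below is purely illustrative; no particular wire format is specified by the embodiments.

    import json

    pentagon = {
        "type": "polygon",
        "name": "Paddock 7",                  # example user-supplied label
        "nodes": [(51.5074, -0.1278), (51.5080, -0.1265), (51.5071, -0.1250),
                  (51.5060, -0.1262), (51.5063, -0.1280)],   # five corners
    }
    payload = json.dumps(pentagon)            # tens of bytes, not an image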
Although the description above contains several specificities, these should not be construed as limiting the scope of the embodiment, but as examples of several embodiments. For example the range of devices applicable is greater than tablet computers and smartphones, and finger operation referred to includes operation by a pointing device such as a stylus.
Thus the scope of the embodiments should be determined by the claims appended and their legal implications, rather than by the examples supplied.
DETAILED DESCRIPTION & OPERATION - FURTHER EMBODIMENTS

One area which has not yet been served by efficient multi-touch gestures is that of node-based point, line, route or corridor definition via a multi-touch surface, and yet there is considerable use which this could afford users of mobile touch-screens and even large scale touch-sensitive surfaces. The contact point of one finger with a touch-sensitive area is termed a node, and one node on the touch screen can define location and information relating to a point of definition on a background such as a map or image. Similarly two touch screen nodes can be used to create line segments and route segments on a background. From these primitive entities, multi-segment composite lines, routes and corridors can be created, and any primitive or composite entity can be manipulated through the movement of the nodes defining them. Various combinations of the following node-based point, line, route and corridor definitions, manipulations and edits can be used in an embodiment.

NODE-BASED POINT DEFINITION, SELECTION & MOVEMENT
The definition of a point on a touch screen device is prior art, especially when implemented as a single touch, held for a short duration. Such an operation on a mapping service such as Google Maps and Apple Maps results in a point of definition, marked with an information bubble or pin symbol. I term the created point a point of definition and not a point of interest since the existence and location of the point is defined by the user, at will, anywhere on a map or geographic application; a point of interest is normally taken to mean a pre-existing geographic or commercial entity, the symbol or information of which may be normally hidden, that can be displayed to the user on request or when satisfying a search request. In one embodiment a prolonged touch (typically between 0.3 seconds and 1.0 seconds) to a touch-screen device would also create a point of definition on a map, similar to the prior art method of Google Maps and Apple Maps implemented on touch screens, except that the point of definition would be marked with a symbol. In one embodiment the symbol would be a filled, colored shape such as a square. As a note of clarification a node refers to a persistent defined touch point on the touch screen, which may be used to produce various entities, including points of definition, on a background application such as a map or image application. Therefore when used for defining points of definition, nodes are used in the creation of them, and effectively one node entity exists for one point of definition entity. However nodes are not specific to points of definition, since nodes are used in the creation of other entities. For example two nodes are used in the definition of a line segment, and therefore two nodes are associated with every line segment. Multiple nodes can exist on a touch screen concurrently.
FIG. 1c illustrates the creation and selection of a node and point of definition on a touch-screen surface 102, via a prolonged touch with a single touch implement 104 - in this case a finger. The use of one or more fingers is assumed during the remainder of the detailed description and operation section, although other touch implements can be used such as a stylus or gloves adapted to the purpose of multi-touch gesturing on touch screens. FIG. 2c shows the result of a point of definition node creation touch on touch screen device 202, with the created point represented by square 204 in the position on the map or background image at which it was created. In one embodiment no label will be given to the created point. In a second embodiment, directly after the creation of a point of definition a means to label the point is automatically given to the user, such as a virtual keyboard being provided on the touch screen display. In a third embodiment the application will provide an automatically generated number or label upon creation. In a fourth embodiment, a longer duration touch than required for node creation, of approximately two seconds, will initiate a means to label the point of definition. In a further embodiment, which is not mutually exclusive to the point labelling embodiments described above, the selection of an existing point of definition will enable the user to perform a labelling or re-naming operation on that point. Legend 206 is an example of a point of definition label defined by the user. One of the fundamental differences between prior art points of definition and node-based points of definition is that multiple node-based points of definition can exist concurrently, and be visible to the touch screen user at the same time, as depicted by additional points 208. For prior art points of definition such as those of Google Maps or Apple Maps, only one point of definition is permitted. Trying to define a second one will result in the original point being deleted, and the new point being created at the new position instead.
The creation of a node-based point of definition also selects that node for immediate movement of the point of definition, as indicated in FIG. 3c by the motion arrow 304. When the finger that creates or selects the node is moved while still maintaining contact with the touch-screen, the node and therefore associated point of definition will be moved along the surface of the touch screen 102 with the finger. Therefore in one embodiment when a selected point of definition is moved, no panning of the screen or underlying map typically occurs; the node and associated point of definition will move and not the background to the point of definition, with the exception of node-based panning described below. The result of a node-based point of definition creation and movement is shown in FIG. 4c. The created node-based point of definition representation 404 on the touch-screen device 202 is shown. The direction arrow 402 represents movement of the point of definition, equal in direction and distance on the touch-screen to the causal finger motion.
The selection of a node-based point of definition previously created is similar to the selection during creation of a node-based point of definition, except that it requires the existence of a node-based point of definition at the location of the touch (or within a number of pixels of the touch). Therefore touching a node-based point of definition on the touch-screen and continuing to touch it will select that point of definition for an operation, including moving the node around the touch screen. The node-based point of definition need only be present to be selectable, and therefore the selection - even of invisible points - is possible. In one embodiment the appearance of the selected point of definition is changed to denote that it is selected. In a second embodiment the contrast of a selected point of definition is inverted to denote a selected point such as that shown in 404, and in a third embodiment the color of a node-based point of definition changes once selected.
FIG. 5Ac demonstrates how to create the functionality of node-based definition, selection and movement on the touch-screen for points of definition. Process box 502 is a background task which monitors for a multi-touch operation; when a multi-touch operation is detected, the decision logic at 504 detects the specific multi-touch input of a touch device such as a finger touching the screen for a duration appropriate to the application (for example one second). If this node event is detected there is another decision point 506 which determines whether there is an existing node at the location of the touch on the screen. If there is no existing node, the node-based point of definition creation process 510 is initiated, which creates a node at the location being touched, and then selects the point of definition which has just been created. If there is a node at, or close to, the detected node multi-touch, that node will be selected, as shown in 508. Whether the node was just selected by the node multi-touch, or created and selected in a combined operation, the user can move the node-based point of definition freely as summarised by process 512, and further elaborated in the process description of FIG. 5Bc. The node-based point of definition position on the touch-screen will repeatedly track the position of the finger performing the touch as shown by process 518. However a decision 520 as to whether to perform normal node movement or margin-based node movement depends on whether the node, and associated point of definition, is within a screen margin. A screen margin may be used - although it is not necessarily visible - in the situation where the background to a node, such as a map, occupies a larger area than can be seen on the touch-screen. In this case the node-based point of definition remains under the controlling finger, but the background moves in the opposite direction to the margin as described by 524. Therefore if a node-based point of definition is moved into a defined margin area at the left of the touch-screen, the user's controlling finger may stop there, in which case the background will move to the right. Typically, margins will be relevant at the top, bottom, left and right sides of a rectangular touch-screen, although for example margins near the corners could act in two directions. Such scrolling of a background can occur for as long as the user's finger is in contact with the screen and within a margin. If the finger is no longer in a margin, but still in contact with the touch-screen, normal node-based point of definition motion 522 will occur, with the node following the finger. Decision logic 514 on FIG. 5Ac determines whether any other operation is performed on the node-based point of definition after movement; an immediate time-out occurs if the node controlling finger is removed from the touch-screen, in which case the node-based point of definition stays at the position it was last at - where the finger was removed from - and is deselected. However if the controlling finger is detected as staying in the same position, but still in contact with the touch-screen for a certain time - for example two seconds - a user interface will be brought up to enable the user to assign additional information for the point of definition, as shown in process 516.
FIG. 5Cc shows normal node movement on a touch-screen surface 102 - in this case a satellite navigation system - with 528 representing the position of a user's finger, which is evidently outside the margin 530. The node, and associated point of definition if applicable, is moved with the finger, for example as shown by arrow 526, without the background being panned. By contrast, FIG. 5Dc shows margin-based node movement, where finger position 528 is within the margin 530 - in this case the top margin. Arrow 532 represents the movement or scrolling of the background as a consequence of the finger controlling the node being present within the margin - in the opposite direction to the previous direction of travel across the screen by the node. Margin-based node movement is possible where the area for which nodes are relevant is greater than the area shown by the screen. The effect will be to move the node further along the background in the desired direction. In this case the direction of movement of the background will be opposite to a panning multi-touch operation in the same direction that would happen with the attempted movement of a point of definition pin, in Apple Maps, for example. Note that although the above functionality concentrates on node movement with respect to an associated point of definition, various embodiments relate to the movement of nodes underlying lines, routes and corridors. Therefore for example the movement of the end node of a multi-segment line can also use normal and margin-based node movement as described in FIGs. 5Ac, 5Bc, 5Cc and 5Dc.
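A minimal Python sketch of the decision between normal and margin-based node movement follows; the margin width and pan step are illustrative values, and pan_background stands in for whatever map-scrolling call the host application provides.

    MARGIN_PX = 40        # illustrative margin width
    PAN_STEP_PX = 8       # illustrative background scroll per update

    def move_node(node, finger_xy, screen_w, screen_h, pan_background):
        """Track the finger with the node; when the finger sits inside a
        margin, scroll the background as well (FIGs. 5Cc and 5Dc)."""
        x, y = finger_xy
        node["x"], node["y"] = x, y       # the node always follows the finger
        dx = dy = 0
        if x < MARGIN_PX:
            dx = PAN_STEP_PX              # left margin: background moves right
        elif x > screen_w - MARGIN_PX:
            dx = -PAN_STEP_PX             # right margin: background moves left
        if y < MARGIN_PX:
            dy = PAN_STEP_PX              # top margin: background moves down
        elif y > screen_h - MARGIN_PX:
            dy = -PAN_STEP_PX
        if dx or dy:                      # corner margins set both directions
            pan_background(dx, dy)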
FIG. 6c shows some of the information which could be attributed to a node after creation and selection - particularly a node representing a geographic location on a map. Latitude and longitude would be important for a location node - this would be received from a mapping or geographic model application once the position on a touch-screen has been established. Also for geographic nodes, start date and finish date could be useful for determining relevance. Significantly, elevation or altitude - perhaps with minimum and maximum elevation/altitude - would allow a three dimensional location definition. Therefore the altitude of a surveying point on a mountain could be usefully defined, an altitude above ground could be defined, or a depth below the sea could be added to latitude and longitude data. A name would also be useful - especially when sharing the node on a network - for a shared reference by those users with permissions to see specific nodes. Certain information, if required, could be attributed to a node by the operating system, such as the user and time at which a node was created. Miscellaneous information or notes about a location could also be added by a user. Finally visual information, such as node icon and label color, size and shape, could be defined, or these could be defined or defaulted by the application itself. In one embodiment any information desired to be entered by the user would be available on a menu or form following normal selection of the node of approximately 0.5 seconds. In another embodiment a menu or form would be presented to the user to complete following an extra-long selection period of more than one second. In some embodiments data associated with a node would have default values used if the user did not specify values. FIG. 6c also shows that nodes and node data can be shared across a communication network, and that a node created on one device could be viewed either as a point of interest or an editable node on other devices.
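A possible record for the node attributes discussed above is sketched below in Python; the field names, types and defaults are assumptions for illustration rather than a prescribed schema.

    import datetime
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Node:
        latitude: float
        longitude: float
        elevation_m: Optional[float] = None         # supports 3-D definition
        min_elevation_m: Optional[float] = None
        max_elevation_m: Optional[float] = None
        name: str = ""                              # shared network reference
        start_date: Optional[datetime.date] = None  # relevance window
        finish_date: Optional[datetime.date] = None
        notes: str = ""                             # miscellaneous user notes
        icon_color: str = "blue"                    # visual defaults
        created_by: str = ""                        # set by operating system
        created_at: datetime.datetime = field(
            default_factory=datetime.datetime.utcnow)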
LINE SEGMENT DEFINITION AND DISTANCE DISPLAY
Line segments - with reference to FIG. 7c - are defined with the use of two fingers; in this case a thumb 704 and middle finger 702 touching a touch-screen 102 together, and for a minimum duration (for example one second) without being moved. This action will create or select two nodes, at the locations of the two touches on the touch-screen, between which will be drawn a line segment, which in one embodiment will be straight, continuous, blue and thick. Other embodiments relevant for different applications would produce line segment combinations of wavy, saw-tooth or straight type, with combinations of continuous, dotted or dashed style, with one of various common colors and thicknesses. In one embodiment, one of the two nodes created will be determined as the key node, and indicated to the user as the key node by a specific appearance. Where a key node for a line segment is defined, the selection of the whole line segment, and operations on that line segment, are possible by selection of the key node first - for example movement of the line segment with both of its nodes, instead of the movement of just one node. The straight line segment shown as 806 on FIG. 8c between position nodes 802 and 804, on the touch-screen device 202, constitutes a line segment which can be further enhanced by the user, as illustrated in subsequent figures and paragraphs. In this case node 804 is marked as the key node, although other ways can be used to distinguish a key node from other nodes, including shape, contrast, color or fill.
The creation method of a line segment by a touch-screen device user is shown in FIG. 9c. User inputs to the touch-screen will be monitored for multi-touch node gestures by process 502. The logic of 902 will determine whether two fingers are touching the touch-screen and remaining still for greater than a minimum duration (for example 1 second), and if so process 904 will create a line. Process 904 will create a line firstly by selecting the two nodes specified by the finger positions. If a node already exists at a finger position (or within a defined radius), the node will be selected. If there is no pre-existing node at the position, a node will be created at the given screen position and the node will be selected. A line segment will be drawn on the touch-screen between the two selected nodes. Typically the line will be straight, but there are various possibilities with regard to line type, which may for example be a wave, a saw-tooth, an arc, a spline, or other common line type, and of typical thickness and color possibilities found in drawing applications. If a key node is required by an application, the allocation of a key node can vary according to application and user defaults and preferences, for instance the first touch to be made during creation of the line, or the highest and furthest left on the touch-screen.
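The reuse-or-create rule of process 904 could be sketched as follows in Python; the capture radius and the dictionary-based node representation are illustrative assumptions.

    import math

    CAPTURE_RADIUS_PX = 30     # illustrative "within a defined radius" value

    def find_or_create_node(nodes, touch_xy):
        """Return the existing node under the touch if one lies within the
        capture radius; otherwise create a node at the touch position."""
        tx, ty = touch_xy
        for node in nodes:
            if math.hypot(node["x"] - tx, node["y"] - ty) <= CAPTURE_RADIUS_PX:
                return node                    # select the pre-existing node
        node = {"x": tx, "y": ty}
        nodes.append(node)
        return node

    def create_line_segment(nodes, touch_a, touch_b):
        a = find_or_create_node(nodes, touch_a)
        b = find_or_create_node(nodes, touch_b)
        return {"type": "line", "nodes": (a, b),
                "key_node": a}                 # e.g. first touch as key node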
According to the application, line segment latent nodes may be automatically created for the purpose of line division, as described later with the assistance of FIG. 11c. The logic of 906 will detect whether the two fingers remain on the created nodes for a minimum time-out period after the creation of the line segment. If not (for instance the fingers are removed immediately upon the drawing of the line segment), in one embodiment the line will be completed without any additional user information added to the line at that time. In another embodiment the line segment will disappear if a minimum time-out period is not met with the creating fingers remaining substantially still. If the fingers do remain substantially still for an additional period following the time-out, for example 0.5 seconds, process 908 will allow the user to add additional information for the line segment via a user interface. Selection of a whole line segment is possible after the initial creation of the line segment; in one embodiment this will consist of touching and holding the line segment anywhere on its length (although this embodiment is not compatible with the use of latent nodes for line division). Other embodiments will select a whole line segment from the touching and holding, or double-tapping, of the key node of the line segment. Yet another embodiment will allow the selection of a whole line segment from a menu option once either of the end nodes of the line segment is selected. FIG. 10c illustrates some of the information which could be added to, and is relevant to, a line segment. In one embodiment, information can be added automatically by the operating system, such as Creation User and Creation Time. Other information in various embodiments can be graphical preferences from a user, or from default values. In various embodiments some information may be defined by the user, such as Line Segment Name and Information. Other information, including node positions on screen and node position representation (such as latitude/longitude/elevation in a mapping application), will be inherited from the node data relating to the nodes at either end of the line segment. This is advantageous to a touch-screen device user, since if one end of a line segment is not in the desired location, the user can select that node and move it, with the effect that the line segment will be stretched, contracted or rotated in accordance with the motion of the node.
Apart from, or in addition to, the creation of a line segment between two nodes, the same touching and holding of two node points on a touch screen can be used to show distance or difference between the selected nodes. FIG. 13c denotes the touching of a multi-touch enabled touch screen device 202 with two fingers 1306 which are held still for a minimum amount of time, as for the normal node-based creation of a line. However in addition to line 1304 being created between the nodes under fingers 1306, a representative distance 1302 is displayed next to line 1304, which states the straight line horizontal distance between the points on the map defined by the nodes. In one embodiment the displayed distance is calculated by performing typical distance calculating navigational algorithms on the latitudes and longitudes represented by the nodes, which allows accurate real round-earth and great-circle calculations to be used. In another embodiment the displayed distance is calculated by multiplying the calculated touch screen distance by the known scale of the map or image to the screen representation. In one embodiment the distance only remains while the fingers are in contact with the touch screen, and disappears when one or both fingers are removed, although in other embodiments the distance can persist. In another embodiment no line is displayed, and only the distance measurement is shown to the user. The distance measurement and display method shown in FIG. 13c can be used in applications other than electronic mapping; it can be used with earth or other planet models such as Google Earth, and can be used with image backgrounds such as satellite imagery, x-ray images, architectural plans, geographic information system images, radio telescopes, photographs, video feeds and electron microscopy. The distance calculated and displayed in one embodiment represents an angle or degree of arc rather than a scalar distance, which for example could be used in astronomy. Therefore this technology is of potential use to town planners, engineers, architects, microbiologists, planet scientists, radiographers, meteorologists, farmers, real estate agents, navigators and astronomers, to name a few, who are equipped with a suitable multi-touch enabled touch screen device.
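The two distance calculations described above can be sketched as follows; the haversine form of the great-circle distance is a standard navigational algorithm, and the function names and the mean earth radius constant are choices made for the sketch.

    import math

    EARTH_RADIUS_M = 6371000.0     # mean earth radius

    def great_circle_m(lat1, lon1, lat2, lon2):
        """Haversine great-circle distance in metres between two nodes,
        given their latitudes and longitudes in degrees."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    def scaled_screen_m(x1, y1, x2, y2, metres_per_pixel):
        """Alternative method: on-screen distance times the known map scale."""
        return math.hypot(x2 - x1, y2 - y1) * metres_per_pixel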
ROUTE SEGMENT DEFINITION AND VECTOR DISPLAY
If it is desired to show a vector or direction of travel rather than a scalar line, a route segment may be defined. Route segment definition is similar to line segment definition, except that the direction or vector is defined by the order of finger touches and the creation order of the nodes. FIG. 14c shows the creation of a route segment. Finger 1404 is touched to the touch screen and remains substantially motionless for a certain time before finger 1402 touches the touch screen. I estimate a typical value in use of approximately 0.5 seconds between touches. Once the second touch has been detected within the time window permitted, an arrow 1406 will be drawn from the node under the first touch 1404 in the direction of, and up to, the node under the second touch 1402. In one embodiment the arrow direction is towards the first touch instead of the second touch, and other embodiments provide for the use of multiple line types, styles, thicknesses and colors as described for line definition, including the drawing of a normal line without an arrowhead.
In a similar manner to how distance displays have been described as being able to be displayed next to a line segment, scalar distances can be displayed next to a route segment as in FIG. 13c. However since a route segment gives direction as well as quantity, vector quantities or differences can be displayed to the user as shown in FIG. 15c. In the example shown, a horizontal value difference of touch screen pixels 1508 and a vertical value difference of screen pixels 1510 are shown between the node under touch 1504 and the node under touch 1502. Since the direction of the vector is known, the horizontal x difference value and the vertical y difference value have polarity or direction, so that in effect the node under touch 1504 is a temporary origin, and the node under touch 1502 is a vector relative to the former. One specific application of this example is for touch screen graphic designers and webpage authors in determining relative positioning on a touch screen. However the areas of application are much broader. In one embodiment the axis values of quantities 1508 and 1510 are Northings (distances to North or South) and Eastings (distances to East or West) respectively in a mapping application. In another embodiment the two dimensions for which distance is shown are based on two orthogonal axes relevant to an image. In another embodiment the information presented is an angle and distance between the nodes, using measurement quantities and axes appropriate to the scale and application.
FIG. 16c shows a method of creating a route or vector segment, and differentiating it from a line or other multi-touch gesture. Activity 1602 monitors for the first touch and decision logic 1604 determines whether a second touch occurs within a specific time window. In this example a window of between 0.2 and 0.8 seconds is defined; more generally it is anticipated that the time between the two touches will have a minimum value of, for example, 0.1 seconds and a maximum value of, for example, two seconds for a route to be recognised, although other durations are possible. Activity 1606 creates the vector line itself, for example a straight, black, thick arrow from the node under the first touch to the node under the second touch. Decision logic 1608 determines whether vector distance is required to be displayed, and if this is the case it is displayed via process 1612. If not, finger presence after the drawing of the route is used to decide in 1610 whether user information is required to be added in activity 1614. Additional information which may be added by the user includes the name of the segment and free text describing its significance.
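The timing rule of decision logic 1604 could be expressed as below; the window bounds mirror the example values above, and the 'line' case stands for the near-simultaneous two-finger gesture of FIG. 9c.

    ROUTE_WINDOW_MIN_S = 0.2    # example lower bound between touches
    ROUTE_WINDOW_MAX_S = 0.8    # example upper bound between touches

    def classify_two_touches(t_first, t_second):
        """Classify a two-touch gesture from the touch-down timestamps:
        near-simultaneous touches make a line, an ordered pair inside the
        window makes a route, anything slower is treated as unrelated."""
        gap = t_second - t_first
        if gap < ROUTE_WINDOW_MIN_S:
            return "line"       # effectively simultaneous: line segment
        if gap <= ROUTE_WINDOW_MAX_S:
            return "route"      # arrow drawn from first node to second
        return None             # outside the window: no route recognised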
MULTI-SEGMENT LINE AND ROUTE DEFINITION AND EDITING

FIG. 11c demonstrates how a line may be sub-divided via node-based multi-touch. In one embodiment, a created line or route segment will automatically have one or more latent nodes (as indicated by 1106) created along its length in addition to the line-end defining nodes. In different embodiments these extra latent nodes may be visible or invisible to the user, and may be regularly spaced or irregularly spaced. If a user selects one of these latent nodes (as per the selection of any node with a short touch, typically of between 0.5 seconds and 1.0 second), the latent node can be moved relative to the line-end nodes 1104. This will bend the line, and create two line segments out of one, sharing a common end-line node (which was previously a latent node of the original line). In one embodiment new line segments created by line subdivision will have their own new latent nodes created.
FIG. 12c illustrates a process by which node-based line division may be performed, which will result in a multi-segment line or route. Decision logic 1202 identifies whether the use of latent nodes is valid with current defaults and user selections. A second decision logic 1204 determines which method to use. In the process shown two methods are possible, depending on default or pre-definition by the user. In the case of line division, process 1206 becomes active and divides a line or route segment into N equal parts with N-1 equally spaced latent nodes, where N is a predefined integer value. For example if N has the value of 2, any line or route segment created will have one latent node half way between the two end nodes. Alternatively, if a particular spacing value, such as 3 cm, has been decided as the basis for latent node spacing, a latent node will be placed at that interval, starting at one end node. In different embodiments there may be a choice between the line division method, the spacing method or irregularly-spaced latent node methods, or just one of these may be implemented. Once the number and position of latent nodes on a line or route segment has been determined, activity 1210 will monitor for a touch over one of the latent nodes. On selection, as shown in process 1212, a normal node will be created from the selected latent node, resulting in a multi-segment line or route, even if all nodes still lie in a straight line. In one embodiment of process 1212, once a latent node has been selected, a new node will not be created from the latent node unless the selected latent node is first moved.
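The two latent-node placement methods of process 1206 are sketched below for a segment with end nodes a and b, each an (x, y) pair; the helper names are illustrative.

    import math

    def latent_nodes_equal(a, b, n):
        """Line division method: N-1 equally spaced latent nodes, dividing
        the segment into N equal parts."""
        return [(a[0] + (b[0] - a[0]) * i / n,
                 a[1] + (b[1] - a[1]) * i / n) for i in range(1, n)]

    def latent_nodes_spaced(a, b, spacing):
        """Spacing method: a latent node at every `spacing` units along the
        segment, starting from end node a."""
        length = math.hypot(b[0] - a[0], b[1] - a[1])
        nodes, d = [], spacing
        while d < length:
            t = d / length
            nodes.append((a[0] + (b[0] - a[0]) * t,
                          a[1] + (b[1] - a[1]) * t))
            d += spacing
        return nodes

For example, latent_nodes_equal(a, b, 2) returns the single half-way latent node described above for N = 2.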
Another means of creating a composite, multi-segment line or route is by moving line or route segments together so that they are joined at their end nodes. FIG. 17c illustrates a composite line created from line segments 1708, 1710 and 1706. Line segments 1708 and 1710 are joined by a common node 1704, and line segments 1710 and 1706 are joined by a common node 1702. Such a composite line could for example represent a border, boundary or road on a map, or a complex line in a drawing application. Since non-end nodes of a composite line are common to two line segments, the selection and moving of, for example, node 1704 would result in a change of angle, and possibly length, of both line segments 1708 and 1710. FIG. 18c shows a composite route created through the combining of multiple line segments 1802. Such a composite line can be used to show flow or direction of travel. A particular use of this type of line would be in defining a route which does not necessarily rely on existing geographic locations. This would be beneficial for example to define a planned wilderness route, a journey on water or the flight plan for an aircraft.
FIG. 19c shows a method for the joining of different line or route segments to make a composite line or route. Decision logic 1902 verifies whether a new segment has been created, and if so, decision logic 1904 determines whether either or both of the end nodes correspond with existing nodes. Even if the user desires to exactly match the position of an existing node, it is unlikely that the same central pixel on the touch screen will be selected, especially if fingers are used rather than a stylus or precision pointing device. Therefore said logic will accept close approximations to the position of an existing node as being identical, and those nodes will be merged as defined in process 1906. In one embodiment the positions of the original node and new node will be averaged, so that the new joining node of the two segments will be halfway between the precise centers of the nodes. In another embodiment the joining node position is taken to be the position of the original node, and in yet another embodiment the joining node position is taken to be the position of the new node. In some embodiments, if there is user-defined or default data attached to the nodes, for example the altitude above sea level of the point represented by the node, this will be merged. In one embodiment the data associated with the original node is used to populate the data fields of the joining node. In another embodiment newer non-default data, such as the creation date from the new node, will over-write the equivalent data of the existing node. Process 1908 allows a key node for a composite line or route to be determined where neither or both segments have a key node already, since in one embodiment a composite, multi-segment line or route may not have more than one key node. In one embodiment the key node of the original line segment, route segment, composite line or composite route is retained as the key node for the new composite line or route.
In one embodiment of the node joining method described by FIG. 19c, it is not just new nodes on line or route segments which may be joined to existing nodes, but also two nodes of existing segments, where the user has selected an end node from a segment and moved it into close proximity to an existing end node of another segment or an existing point node. The closeness of the centers of nodes when deciding whether they are to be joined depends somewhat on the application. However it is anticipated in general that nodes would not be joined unless the touch area under a typical finger touch overlapped with an equivalent radius from the second node.
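A sketch of the proximity test and the position-averaging variant of process 1906 follows; the join radius and the dictionary node layout are assumptions for the sketch.

    JOIN_RADIUS_PX = 30    # illustrative stand-in for the overlap criterion

    def try_merge(existing, candidate):
        """Merge two nodes whose centres are close enough to be treated as
        identical; this variant averages the positions and keeps the
        original node's data fields."""
        dx = existing["x"] - candidate["x"]
        dy = existing["y"] - candidate["y"]
        if dx * dx + dy * dy > JOIN_RADIUS_PX ** 2:
            return None                          # too far apart to join
        merged = dict(existing)                  # original node's data wins
        merged["x"] = (existing["x"] + candidate["x"]) / 2   # mean position
        merged["y"] = (existing["y"] + candidate["y"]) / 2
        return merged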
Other operations are possible on multi-segment lines and routes, however those lines and routes were created. One operation is the deletion of a node, as shown in FIG. 20Ac, although the operation can also be performed on nodes which are not part of a composite line, composite route, line segment or route segment. A user selects a node using a finger 2004. A long press of more than approximately one second results in a node operation menu, one of whose operations is deletion of the node. Upon selection the node will be deleted and the symbol representing the node will be removed, as shown by area 2006 on FIG. 20Bc. If the node is an end node of a composite line or route, the end segment of the line or route which incorporated the node will be deleted also. In the case of one of the non-end nodes being deleted, the segments attached to the deleted node will be deleted. In one embodiment, shown by FIG. 20Bc, the composite line or route would be reformed by the joining together of the nodes on either side of the deleted node with a new line or route segment. In another embodiment the deletion of a node would result in two entities, such as a node and a composite line, with no automatic replacement or substitution of line segments. On the selection of deletion of a node on a line or route segment which is not connected to any other nodes, in one embodiment the whole segment and both nodes will be deleted. In a second embodiment the node not deleted will remain as a point of definition node. If a key node is deleted, in one embodiment another node will be made the key node. In another embodiment the whole entity that the key node represents will be deleted.
Other operations on lines and routes are possible, which can be performed via a menu selection as shown in FIG. 20Ac. Firstly any node can be defined as a key node by menu selection, which in one embodiment replaces the existing key node in that function. Secondly naming of a whole entity is possible after which a labelling means such as a virtual keyboard on the touch screen becomes active. Another option for selection in one embodiment illustrated in FIG. 23Ac and FIG. 23Cc is the conversion of a line segment 2304 or composite line 2308 into a route segment 2302 or composite route 2310 respectively, and vice versa. In various embodiments, as shown by FIG. 23Bc, an option for selection is the means to reverse the direction of a composite route 2306. The said reversal means is also applicable to route segments in various embodiments.
CORRIDOR DEFINITION AND OPTIONS
FIG. 21c illustrates the use of line segments to create not only a composite line, but a corridor. A corridor is a central composite line or composite route such as 2110 with associated parallel composite lines or composite routes, illustrated in FIG. 21c by 2106 and 2108, which represent desired limits related to the central line. One use for this is the definition of air corridors or sea lanes for touch-screen devices used for navigation or navigation planning. In one embodiment the end of a corridor will be a semi-circle centered on the end node. Corridors can be created by the user of a touch-screen device for a line segment by specifying distance offsets from a central line, as part of the user-added line information process 908 previously described in FIG. 9c. Similarly a distance offset can be defined for a route segment as part of the user-added route segment information process 1614 described in FIG. 16c. In one embodiment there is a single offset line or route segment on one side of the selected segment. In another embodiment a line or route segment is offset equally on both sides of the existing line or route segment.
In the case of multi-segment lines or routes, such as 2204 in FIG. 22Ac, selection of the whole multi-segment line or route is first performed. In one embodiment selection is achieved by the selection of the key node of the multi-segment line or route. In another embodiment, selection is achieved by a long press of over approximately one second anywhere on the line or route. In a third embodiment selection is achieved by the long press of over approximately one second of any node on the multi-segment line or route. After user selection of the multi-segment line or route, in one embodiment the touch-screen device will present one or more options to the user, including the option to create a corridor. In one embodiment the user will subsequently provide numerical input to the touch-screen device representing a width of corridor. In another embodiment a default or pre-selected value will automatically be used as the corridor width. In other embodiments an active symbol is placed over the key node, or all nodes of the multi-segment entity. When a finger touch is made to an active symbol, the active symbol can be moved away from the node it is over for the user to indicate the width of the corridor required. In one embodiment a dynamic circle will be created centered on the node and with radius defined by the active symbol to visually feed back to the user what the width of the corridor will be. In one embodiment the active symbol and circle will disappear upon the user removing their finger. In another embodiment the user's finger must remain substantially motionless for approximately 0.5 seconds before the corridor width is finalised and the active symbol is removed. Once a corridor width has been defined graphically or numerically, a corridor will be drawn around the central multi-segment line or route in accordance with the selected width. In one embodiment the corridor area will be calculated by the union of segment rectangle area as shown for one segment by 2210 in FIG. 22Bc, and node circle area as shown for one node by 2208. Segment rectangle area is the union of all areas made up of rectangles with length given by individual segment lengths and width given by the selected width. Node circle area is defined by circles with a radius of the selected width, for all nodes in the multi-segment line. The addition of node circles for the calculation of corridor area eliminates discontinuities of corridor shape shown by 2212 in FIG. 22Bc. In one embodiment, the border of the final corridor area calculated will be displayed around the original multi-segment line or route, as shown by 2214 in FIG. 22Cc.
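Since the corridor is the union of the segment rectangles and the node circles, a point lies inside the corridor exactly when its distance to some segment of the central line is no more than the selected width. The following Python sketch tests corridor membership on that basis; the polyline is assumed to be a list of (x, y) node positions.

    import math

    def point_segment_distance(p, a, b):
        """Distance from point p to the closed segment a-b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        abx, aby = bx - ax, by - ay
        seg_len_sq = abx * abx + aby * aby
        if seg_len_sq == 0:
            return math.hypot(px - ax, py - ay)   # degenerate segment
        t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / seg_len_sq))
        cx, cy = ax + t * abx, ay + t * aby       # closest point on segment
        return math.hypot(px - cx, py - cy)

    def in_corridor(point, polyline, width):
        """True when `point` lies within the corridor drawn around the
        multi-segment line with the selected width."""
        return any(point_segment_distance(point, polyline[i], polyline[i + 1]) <= width
                   for i in range(len(polyline) - 1))

Clamping t to the segment in point_segment_distance is what supplies the rounded node-circle ends of the union, so no separate circle test is needed.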
In several embodiments corridors are not just created by the user of a touch screen device, but are defined on a remote computer or touch screen device and communicated to a computer or touch screen device for display. The use of nodes facilitates communication of corridors since little data is required to be transmitted to define a corridor. Navigation restricted corridors can therefore be provided centrally, which can be overlaid on a touch-screen display with local information - such as GPS position and planned route of the local user. The key is the use of nodes to represent the required information between users and data sources.
APPARATUS DETAILED DESCRIPTION & OPERATION
In order to detect nodes, define points of definition, and to create lines, routes and corridors from nodes, a touch screen module is required, as indicated by 2402 on FIG. 24c. The output signals from the touch screen module in response to user touches are fed to a control module 2404 to interpret. The control module will determine whether a multi-touch event relating to nodes, node-based lines, node-based routes or node-based corridors has occurred. If so, the control module will process the information to create or modify the relevant entity. Node, line, route or corridor data for storage will be routed to the memory module 2412, with the memory module also serving as the source of these data to the control module where required by a running application 2418 or operating system 2414. Where an application or operating system requires an interface with remote devices, networks, servers or databases, the communications module 2410 sends or receives the point of definition, line, route or corridor node data, and supplementary information associated with that entity. This information is passed to or from the control module, which may also route the data to or from the touch screen module or the memory module.
FIG. 25c shows how node-based point of definition, line, route and corridor information may be exchanged between different devices and computers via a network. However several networks combining different communications links and different servers and databases could be used, depending on the application.
A tablet computer 2502, a large touch screen device (such as a touch screen television) 2504, a personal computer or workstation 2506 (which does not have to have a touch screen or be multi-touch enabled), a smartphone 2508 and a satellite navigation system 2510 are shown communicating node-based point of definition, line, route and corridor information via a network. The information being provided by some or all of the devices is processed and stored at central servers or databases 2512. The central servers or databases will share information as requested and required by the devices' applications and operating systems, including node-based point of definition, line, route and corridor information, for example a corridor area on a map. The link 2514 represents a one-to-one communication of node-based point of definition, line, route and corridor information between two users with suitable apparatus, and shows that a centralized information distribution system is not necessarily required. Peer-to-peer and small clusters of users can also share node-based entity information.

ADVANTAGES
Various applications of this new human interface for touch screen devices are foreseen.
Firstly, a way of defining multiple points of definition by the user is provided, which means that places of relevance to her may be defined just by a touch at the applicable screen location over a background such as a map. Furthermore, those points of definition may be named as desired, remembered by the touch screen device, and shared with friends, social networks or databases. Points of definition could include favourite shops, parking spaces currently vacant and rendezvous points. Current mapping applications typically allow only one user-defined point or pin, which is not customizable or storable and may not be labelled.
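By way of illustration, a point of definition of this kind could be detected from raw touch samples as sketched below; the one second creation time and ten pixel motion tolerance are assumptions based on the approximate values used elsewhere in the description.

    import math

    CREATION_TIME = 1.0      # seconds the touch must stay substantially motionless
    MOTION_TOLERANCE = 10    # pixels of drift still counted as motionless

    def detect_point_of_definition(samples):
        # samples: chronological (time, x, y) readings for a single touch.
        if not samples:
            return None
        t0, x0, y0 = samples[0]
        for t, x, y in samples:
            if math.hypot(x - x0, y - y0) > MOTION_TOLERANCE:
                return None                  # the touch moved: no point created
            if t - t0 >= CREATION_TIME:
                return (x0, y0)              # create the point of definition here
        return None

    print(detect_point_of_definition([(0.0, 200, 300), (0.6, 202, 301),
                                      (1.1, 201, 299)]))   # (200, 300)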
There is currently no easy way to draw lines on touch screen devices, and especially lines which can be joined or shared remotely. Node-based line drawing allows lines to be drawn quickly with just two user touches between the desired points. This provides an efficient means to define borders of land for agriculture and real estate, for example.
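A minimal sketch of this two-touch line creation follows; it assumes the device reports how long both touches have remained substantially motionless, and the one second creation time is an assumed threshold.

    def detect_line_segment(node_a, node_b, hold_time, creation_time=1.0):
        # node_a, node_b: (x, y) positions of two concurrent, motionless touches.
        if hold_time >= creation_time:
            return {"type": "line_segment", "nodes": [node_a, node_b]}
        return None

    print(detect_line_segment((50, 50), (400, 180), hold_time=1.2))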
As with lines, routes can be defined easily with two taps, and they convey direction as well as routing. Route segments may be quickly defined and joined by touch to create a composite route for navigators and pilots. Since routes - like all node-based entities - are easy to define and repeat, they are easily communicated via a communication network, which could have advantages, for example, in the remote filing of flight plans.
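Route-segment detection differs only in using a touch sequence rather than concurrent touches, as sketched below; the 0.5 second maximum interval between taps and the timestamped tap format are illustrative assumptions.

    def detect_route_segment(tap1, tap2, max_interval=0.5):
        # tap1, tap2: (time, x, y); the arrow runs from the first tap to the second.
        t1, x1, y1 = tap1
        t2, x2, y2 = tap2
        if 0 < t2 - t1 <= max_interval:
            return {"type": "route_segment", "from": (x1, y1), "to": (x2, y2)}
        return None

    print(detect_route_segment((0.0, 100, 100), (0.3, 260, 140)))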
A further development of composite or multi-segment lines and routes is the definition of corridors, which are two dimensional extensions to lines and routes. Corridors are easy to create by touch and user selection, and have application in navigation and control. Corridors can be defined centrally, for example by air traffic control on a touch screen, and communicated to pilots.

The same method of defining lines, with two touches, lends itself to defining two points between which the distance is desired, with the result displayed to the touch screen device user. The distance can either be the screen distance, for example in horizontal and vertical pixels for a programmer, or the distance which the touches represent on a background image or map. The method is therefore useful for navigators assessing the distance between two points. Other examples include use by radiographers to determine the size of bone fractures from an image on the touch screen, and by air traffic control to determine following distances between two aircraft by touching their symbols on a radar display on a touch screen workstation.
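The two-touch distance measurement reduces to a Euclidean calculation plus a scale conversion, as the sketch below shows; the scale factor of 3.2 metres per pixel is an illustrative assumption standing in for whatever scaling the background map supplies.

    import math

    def screen_distance(p1, p2):
        # Straight-line distance between two touch points, in pixels.
        return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

    def map_distance(p1, p2, metres_per_pixel):
        # Representative distance on the underlying map, given its scaling.
        return screen_distance(p1, p2) * metres_per_pixel

    p1, p2 = (120, 340), (520, 40)
    print(screen_distance(p1, p2))       # 500.0 pixels on screen
    print(map_distance(p1, p2, 3.2))     # 1600.0 metres on the map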
Finally, as well as making it simple to define points of definition, lines, routes and corridors using user-touched nodes, the node-based method lends itself to the efficient communication of the said entities. Sending only a few key nodes, which completely define a whole geometric entity, over a communication channel, without sending a complete record of the entity, allows simple and low bandwidth communication of such data.
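A back-of-the-envelope comparison makes the saving concrete; the figures below (eight bytes per coordinate pair, a 2000-point sampled outline for the full record) are illustrative assumptions.

    BYTES_PER_POINT = 8            # two 32-bit floats per coordinate pair

    key_nodes = 5                  # a five-node multi-segment route
    dense_outline_points = 2000    # a point-by-point record of the same entity

    print(key_nodes * BYTES_PER_POINT)             # 40 bytes as key nodes
    print(dense_outline_points * BYTES_PER_POINT)  # 16000 bytes as a full record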
Although the description above contains several specificities, these should not be construed as limiting the scope of the embodiment, but as examples of several embodiments. For example, the range of applicable devices extends beyond tablet computers and smartphones, and the finger operation referred to includes operation by a pointing device such as a stylus.
Thus the scope of the embodiments should be determined by the appended claims and their legal equivalents, rather than by the examples supplied.
When used in this specification and the claims, the terms "comprises" and "comprising" and variations thereof mean that the specified features, steps or integers are included. The terms are not to be interpreted to exclude the presence of other features, steps or components.

Claims

1. A method for interpreting user touches on a touch screen device to create and edit points of definition, lines, routes and corridors on the display of said touch screen device, comprising:
a. recognizing single and double, concurrent user touches to the touch screen device,
b. interpreting said user touches as node positions, node touch sequences and associated node motions on the screen display of said touch screen,
c. interpreting said node positions, said node touch sequences and said node motions to determine the point, line segment or route segment entities to be drawn on the touch screen display,
d. retaining recognition and information of said entities persistently after said user touches to the touch screen device have ceased,
e. allowing reselection by a user of a previously defined entity for operation on that entity, and
f. allowing reselection by a user of any node of a previously defined entity for operation on that node.
2. The method of Claim 1 wherein the number of said concurrent user touches is interpreted as one, and the node produced by said concurrent user touch remains substantially motionless for a predetermined length of creation time, thereby resulting in the creation of a point of definition and the drawing of a symbol on the touch screen to represent said point to the user.
3. The method of Claim 2 wherein said user touch remains substantially motionless for an additional predetermined length of time after said creation time, thereby resulting in a means being provided to the user for adding and viewing alphanumeric name or identification information to the said point of definition.
4. The method of Claim 1 wherein the number of said concurrent user touches is interpreted as two, and the nodes produced by said concurrent user touches remain substantially motionless for a predetermined length of creation time, thereby resulting in the creation of a line segment and the drawing of a line on the touch screen between positions of the two user touches.
5. The method of Claim 4 wherein the said drawn line has a predetermined style, color and thickness.
6. The method of Claim 4 or Claim 5 wherein one or more latent nodes is automatically created at intervals along a line segment, allowing the user to identify, select and move any latent node whereby said latent node becomes a new node of the line segment which thereby becomes a multi-segment line.
7. The method of any one of Claims 4 to 6 wherein a node from one line segment is moved so that it is substantially at the same location on the touch screen as a second node of a different line segment, thereby resulting in the merging of the two nodes and the creation of a multi-segment line.
8. The method of Claim 1 wherein the number of said user touches is interpreted as two and there is a detected said node touch sequence with the time between the first touch and the second touch being within a predetermined time value of each other, thereby resulting in the creation of a route segment and the drawing of an arrow from the point of the first touch in the direction of the second touch on the touch screen using a predetermined style, color and thickness.
9. The method of Claim 8 wherein one or more latent nodes is automatically created at intervals along a route segment, allowing the user to identify, select and move any latent node whereby said latent node becomes a new node of the route segment which thereby becomes a multi-segment route.
10. The method of Claim 8 or Claim 9 wherein a node from one route segment is moved so that it is substantially at the same location on the touch screen as a second node of a different route segment, thereby resulting in the merging of the two nodes and the creation of a multi-segment route.
11. The method of Claim 1 wherein the number of said concurrent user touches is interpreted as two, and the nodes produced by said concurrent user touches remain substantially motionless for a predetermined length of creation time, thereby resulting in the display of actual distance between the two user touches on the touch screen, to the user.
12. The method of Claim 1 wherein the number of said concurrent user touches is interpreted as two, and the nodes produced by said concurrent user touches remain substantially motionless for a predetermined length of creation time, thereby resulting in the display to the user of representative distance between the two node points created by the two user touches on the underlying map or image, taking into account the scaling of said underlying map or image.
13. The method of Claim 1 wherein the number of said user touches is interpreted as two and there is a detected said node touch sequence with the time between the first touch and the second touch being within a predetermined time value of each other, thereby resulting in the display of screen vector distance between the two user touches on the touch screen, to the user.
14. The method of Claim 1 wherein the number of said user touches is interpreted as two and there is a detected said node touch sequence with the time between the first touch and the second touch being within a predetermined time value of each other, thereby resulting in the display to the user of representative two dimensional vector distance between the two node points created by the two user touches on the underlying map or image, taking into account the scaling of said underlying map or image.
15. The method of Claim 1 wherein a said reselection by a user of a previously defined entity is performed and said operation on said entity is selected as corridor creation, whereby a bounded area around said entity is calculated and displayed to the user on the touch screen, defined by the logical union of circle area around all nodes of said entity and rectangle area around all line or route segments of said entity.
16. The method of Claim 15 wherein the corridor width is predetermined and therefore the radius of the circles around said nodes is made equal to the predetermined corridor width and the width of the rectangles around said segments is also made equal to the predetermined corridor width.
17. The method of Claim 15 wherein the corridor width is defined by touch by the user, and therefore the radius of the circles around said nodes is made equal to the user-specified corridor width and the width of the rectangles around said segments is also made equal to the user-specified corridor width.
18. The method of any one of the preceding claims wherein a said reselection by a user of a previously defined entity is performed through the means of the entity having one or more key nodes, whereby operations specific to the whole entity, such as movement, deletion or the addition of data, are performed.
19. The method of any one of the preceding claims wherein said operation on a node is taken from the list including movement, deletion, labelling, addition of data and definition as a key node.
20. The method of Claim 19 wherein said movement operation on the node is by the user dragging the node around within the perimeters of the multi-touch enabled input device without any background map or image being scrolled.
21. The method of Claim 19 wherein said movement of the node is by the user maintaining a touch within a predetermined distance of a perimeter of the touch screen, thereby causing the node to stay at the position of the touch, but any background map or image being scrolled in the opposite direction of said perimeter.
22. The method of Claim 19 wherein if the geometric location coordinates of the moved node become substantially the same as the geometric location coordinates of an existing node, the two nodes are equated as being the same, and the new single node inherits the properties of said existing node.
23. The method of Claim 19 wherein said addition of data includes information taken from the list of start date, end date, elevation above sea level, planned altitude, depth below sea level, and free text information.
24. The method of Claim 19 wherein said deletion operation removes the node, and also any point of definition associated with the node.
25. The method of Claim 1, wherein the method is for interpreting multiple, concurrent user touches on the touch screen device to create and edit shapes on the display of said touch screen device, wherein the method further comprises:
a. recognizing multiple, concurrent user touches to the touch screen device,
b. interpreting said user touches as node positions and associated node motions on the screen display of said touch screen,
c. interpreting said node positions, and said node motions to determine a geometric shape to be drawn on the touch screen display,
d. retaining recognition and information of said shape persistently after the user touches to the touch screen device have ceased,
e. allowing reselection by a user of a previously defined shape for operation on that geometric entity, and
f. allowing reselection by a user of any node of a previously defined shape for operation on that node.
26. The method of Claim 25 wherein the number of said concurrent user touches is interpreted as two, and one of the nodes produced by these remains substantially motionless for a predetermined length of time while the other node initially moves and subsequently also remains substantially motionless for a predetermined length of time, thereby resulting in the drawing of a circle centered on the initially stationary node and passing through the initially moving node.
27. The method of Claim 25 wherein the number of said concurrent user touches is interpreted as two, and both of the nodes produced by said concurrent user touches initially move and subsequently remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a rectangle determined by the positions of the two nodes at diagonally opposite corners.
28. The method of Claim 27 wherein the two concurrent user touches are initially detected as a single node due to proximity of said user touches to each other before the movement of the two resultant nodes apart.
29. The method of Claim 25 wherein the number of said concurrent user touches is interpreted as three and the nodes produced by these remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a triangle between the three nodes such that each node position becomes an apex of the triangle.
30. The method of Claim 25 wherein the number of said concurrent user touches is interpreted as four and the nodes produced by these remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a quadrilateral shape between the four nodes such that each node position becomes a corner of the quadrilateral.
31. The method of Claim 25 wherein the number of said concurrent user touches is interpreted as five and the nodes produced by these remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a pentagon between the five nodes such that each node position becomes a vertex of the pentagon.
32. The method of Claim 25 wherein the number of said concurrent user touches is interpreted as greater than five and the nodes produced by these remain substantially motionless for a predetermined length of time, thereby resulting in the drawing of a polygon between the plurality of nodes such that each node position becomes a vertex of said polygon.
33. The method of any one of Claims 25 to 32 wherein reselection of the shape is via the user touching a node or side of the shape for a predetermined period of time.
34. The method of Claim 25 wherein said operation on said geometric entity is movement of the whole geometric entity with all nodes being moved together with respect to a virtual area or background so that the nodes are not moved in relation to one another.
35. The method of Claim 25 wherein said operation on said geometric entity is the addition of user-defined information into predetermined fields to categorise, label or define parameters relating to the geometric entity in the area of application.
36. The method of Claim 25 wherein said operation on said geometric entity is the deletion of the geometric entity.
37. The method of Claim 25 wherein said operation on said geometric entity is an independent moving of two component nodes of said geometric entity along two orthogonal axes of said touch screen module such that a two dimensional stretching or compressing of said geometric entity occurs proportional to the movement of the two nodes in each axis.
38. The method of Claim 37 wherein said two dimensional stretching or compressing is of two nodes on the circumference of a circle geometric entity such that an ellipse shape is created from the original circle.
39. The method of any one of Claims 25 to 38 wherein one or more sub-nodes is automatically created at intervals along the sides of a shape, allowing the user to identify, select and move any said sub-node whereby said sub-node becomes a new node of said shape and a new side is added to said shape.
40. The method of any one of Claims 25 to 39 wherein the number of sides in a geometric shape is decreased by user selection of a node of said shape and a subsequent deletion operation on said selected node occurs, whereby nodes previously connected to the deleted node are directly joined.
41 . The method of any one of Claims 25 to 40 wherein said geometric shape becomes the boundary within said touch screen module of an area comprising a two dimensional image, map or surface having its own coordinate system such that the bounding node positions of said areas on the display of said touch screen correspond to the coordinates of said image, map or surface.
42. The method of any one of Claims 25 to 40 wherein said geometric shape is created on top of an existing two dimensional image, map or surface having its own coordinate system such that the bounding node positions of said shape define coordinates of said two dimensional image, map or surface displayed on said touch screen module, and a subsequent pan and zoom operation is performed either to the specific area of said image, map or surface defined by said geometric shape or to an area centered on and including said specific area which also shows additional surrounding area due to differences in shape or aspect ratio of the said shape and the available screen area.
43. The method of any one of Claims 25 to 40 wherein said geometric shape becomes the boundary within said touch screen module of a new window which runs a software application independent from any main or background software application, and independent from a software application running in any other window.
44. The method of Claim 43 wherein selection options for applications to run in said new window are presented to the user in a menu or list adjacent to the new window after creation of the new window.
45. The method of Claim 43 wherein selection options for applications to run in said new window are presented to the user via icons appearing inside the new window after creation of the new window.
46. The method of Claim 43 wherein an application to run in said new window is selected by the user prior to creation of the new window.
47. A computer program comprising computer program code adapted to cause a computer to perform all of the steps of any preceding claim when executed by the computer.
48. A tangible computer readable medium storing a computer program according to claim 47.
49. A distance measurement and display system graphical user interface for touch screen devices with a mapping, navigation or image background, comprising:
a. a detection module configured to detect two concurrent user touches to a touch screen that permit a user to input two points of definition on a background map or image between which it is desired to know the distance,
b. a measurement module configured to calculate the representative distance between the two concurrent touches including scaling and conversion to the measurement units and axes of the background map or image, and
c. a display unit configured to display the calculated representative distance between the two concurrent touches to the user of the touch screen device.
50. A windowing system for touch screen devices, comprising:
a. a multiple independent window display and interaction module that permits a user to view concurrently a plurality of computer applications in a plurality of different windows of a plurality of different shapes and sizes,
b. a selection module for identifying which of the plurality of said multiple independent windows is to be the subject of a user-defined operation,
c. a geometric shape detection module that is configured to define the shape, size and boundary of a new window conveniently, and
d. a user selection module configured to permit a user to select an application for said new window.
51. An apparatus, comprising:
a. a touch screen module incorporating a touch panel adapted to receive user input in the form of multi-touch shape gestures including finger touches and finger movements, and a display surface adapted to present point of definition, line, route and corridor information to the user,
b. a control module which is operatively connected to said touch screen module to determine node and point of definition positions from said finger touches, to determine node motions and touch sequence from said finger movements, to recognize a line or route segment from combinations of said node positions and touch sequences, to create multi-segment lines and routes from individual segments by node position equivalence detection, to create multi-segment lines and routes from detection of latent node selection and movement on line and route segments, to detect a selection touch to a preexisting entity from the list including point of definition, line segment, route segment, multi-segment line and multi-segment route, to control the editing of said pre-existing entity, and to generate a continuous graphical image including said node positions and plurality of said pre-existing entities for display on the touch screen module,
c. a memory module logically connected to said control module which is able to store from and provide to said control module a logical element selected from the group consisting of operating systems, system data for said operating systems, applications which can be executed by the control module, data for said applications, node data, point of definition data, line segment data, route segment data, multi-segment line data, multi-segment route data and corridor data.
52. The apparatus of Claim 51, wherein the apparatus is selected from the group consisting of a mobile telephone with a touch screen, a tablet computer with a touch screen, a satellite navigation device with a touch screen, an electronic book reader with a touch screen, a television with a touch screen, a desktop computer with a touch screen, a notebook computer with a touch screen, a touch screen display which interacts with medical and scientific image display equipment and a workstation computer of the type used in command and control operations centers such as air traffic control centers, but having a touch screen.
53. The apparatus of Claim 51, wherein the node, point of definition, line segment, route segment, multi-segment line, multi-segment route and corridor information presented to the user includes symbols and lines currently detected or selected and those recalled from memory, previously defined, or received from a remote database, device or application.
54. The apparatus of Claim 51, wherein a communications module is incorporated, adapted to the transfer of node, point of definition, line segment, route segment, multi-segment line, multi-segment route and corridor information, including node position and entity type, to and from other devices, networks and databases.
55. The apparatus of Claim 54, wherein the control module is configured to accept external said information from said communications module and pass the information to the touch screen module for display to the user.
56. The apparatus of Claim 54, wherein the control module is configured to pass locally created said information to the communications module for communication to other devices, networks and databases.
57. The apparatus of any one of Claims 51 to 56, wherein the entities recognised by said control module from the said detected node positions, touch sequences and node motions include nodes, points of definition, line segments, route segments, multi-segment lines, multi-segment routes and corridors.
58. The apparatus of any one of Claims 51 to 57, wherein the said editing of pre-existing entities includes the movement of points of definition, the movement of entire entities, the deletion of entire entities, the stretching of lines by the movement of their individual nodes, the editing of a corridor width, the creation of multi-segment lines and routes by joining segments at common nodes, and the addition of a new node between two existing nodes of an entity.
59. The apparatus of any one of Claims 51 to 58, wherein the said nodes and points of definition recognised by said control module represent locations on a two dimensional image, map or surface having its own coordinate system which are readable by said control module from the memory module.
60. An apparatus according to Claim 51, wherein the display surface is adapted to present node and shape information to the user, the control module being configured to determine node positions from said finger touches, to determine node motions from said finger movements, to recognize a geometric shape from combinations of said node positions and node motions, to recognize an application associated with said geometric shape, to detect a selection touch to a pre-existing said shape, to control the editing of said selected shape, and to generate a continuous graphical image including said node positions and plurality of said geometric shapes for display on the touch screen module, the memory module being configured to store from and provide to said control module a logical element selected from the group consisting of operating systems, system data for said operating systems, applications which can be executed by the control module, data for said applications, node data, shape data, area data and windows data.
61. The apparatus of Claim 58, wherein the apparatus is selected from the group consisting of a mobile telephone with a touch screen, a tablet computer with a touch screen, a satellite navigation device with a touch screen, an electronic book reader with a touch screen, a television with a touch screen, a desktop computer with a touch screen, a notebook computer with a touch screen and a workstation computer of the type used in command and control operations centers such as air traffic control centers, but having a touch screen.
62. The apparatus of Claim 60 or Claim 61, wherein the node, shape, area and window information presented to the user includes node symbols, shapes, areas and windows currently detected or selected.
63. The apparatus of Claim 60 or Claim 61, wherein the node, shape, area and window information presented to the user includes node symbols, shapes, areas and windows from previous user touches, recalled from memory, previously defined, or received from a remote database, device or application.
64. The apparatus of any one of Claims 60 to 63, wherein the apparatus comprises a communications module which is configured to transfer node, shape, area and window information, including node position and shape type, to and from other devices, networks and databases.
65. The apparatus of Claim 64, wherein the control module is configured to accept external node, shape, area and window data from said communications module and pass them to the touch screen module for display to the user.
66. The apparatus of Claim 64, wherein the control module is configured to pass locally created nodes, shapes, areas and windows to the communications module for communication to other devices, networks and databases.
67. The apparatus of any one of Claims 60 to 66, wherein said geometric shapes recognised by said control module from the detected node positions and node movements include circles, rectangles, triangles, quadrilaterals, pentagons and polygons with greater than five vertices.
68. The apparatus of any one of Claims 60 to 67, wherein the edits to selected shapes include the transformation of a selected circle into an ellipse.
69. The apparatus of any one of Claims 60 to 68, wherein the edits to selected shapes include the two dimensional stretching of a selected shape in the axes determined by node movements.
70. The apparatus of any one of Claims 60 to 69, wherein the edits to selected shapes, areas and windows include the creation of shapes, areas and windows with one side more than when selected, by the addition of a new node between two existing nodes of a shape.
71. The apparatus of any one of Claims 60 to 70, wherein the node positions of a re-selected geometric shape, area or window are moved by the node being touched with a finger which is moved over the surface of said touch screen module wherein the node moves accordingly, and the said geometric shape, area or window is modified accordingly.
72. The apparatus of any one of Claims 60 to 71, wherein said geometric shapes are re-selected by a non-moving user touch of a constituent node of said geometric shape for a period of less than three seconds, after which the whole geometric shape is moved accordingly by subsequent movement of said constituent node.
73. The apparatus of any one of Claims 60 to 72, wherein the said geometric shapes recognised by said control module represent areas on a two dimensional image, map or surface having its own coordinate system wherein the bounding node positions of said areas on the display of said touch screen correspond to the coordinates of said image, map or surface.
74. The apparatus of any one of Claims 60 to 73, wherein the said geometric shapes recognised by said control module represent window boundaries on the display of said touch screen such that a plurality of independent computer programs or applications are run concurrently on said display.
75. The apparatus of any one of Claims 60 to 74, wherein said geometric shapes recognised by said control module represent area boundaries on the display of said touch screen such that a combined pan and zoom operation is performed to the specified said area boundary.