WO2020240164A1 - Methods and apparatus for processing user interaction data for movement of gui object - Google Patents

Methods and apparatus for processing user interaction data for movement of gui object Download PDF

Info

Publication number
WO2020240164A1
Authority
WO
WIPO (PCT)
Prior art keywords
gui
interaction
location
target
user
Prior art date
Application number
PCT/GB2020/051261
Other languages
French (fr)
Inventor
Ian Masters
Original Assignee
Flick Games, Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Flick Games, Ltd filed Critical Flick Games, Ltd
Publication of WO2020240164A1 publication Critical patent/WO2020240164A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0486Drag-and-drop

Definitions

  • This invention is directed to methods and apparatus for processing user interaction data for movement of a graphical user interface (GUI) object in a two-dimensional GUI.
  • The interaction, for example by a user touching the GUI location of the GUI object on the GUI of the touchscreen device, can include movement of the GUI object by the user, for example movement of the GUI object across the GUI while the user contacts the GUI object, known as dragging.
  • Other user interaction gestures for GUI interfaces are known to the art.
  • Many such gesture and movement paradigms are inefficient from the point of view of user interaction, and also for computational processing requirements. Dragging user interface objects from one interface location to another requires processing power for tracking the movement for the entire journey. The user interaction also requires the time for the user to drag the object for the entire journey.
  • In certain types of user interface (e.g. larger interfaces, increasingly larger portable devices), such interactions can be inefficient as the user may need to use more than one interaction or interaction type to complete a movement (for example, using both hands, or more than one digit), particularly for users less able to use their/both hands.
  • Other user interface systems are able to trigger movement of GUI objects.
  • However, these can be highly inaccurate; objects may be moved without correctly identifying the destination for the object to be moved to, again causing inefficiency and excessive computational processing.
  • In some methods, the interface response is not sufficiently informative for the user to determine whether movement is triggered as intended.
  • In other cases, known methods are too prescriptive or restrictive, for example requiring too many parameters or conditions in order to trigger movement of an object, again causing inefficiency.
  • the present invention aims to address these problems and provide improvements upon the known devices and methods.
  • one embodiment of a first aspect of the invention can provide a method of processing user interaction data for movement of a graphical user interface (GUI) object in a two-dimensional GUI, comprising: for a user interaction with a GUI object: detecting at least one contact event for a user interaction with the GUI object; and, following a displacement associated with the user interaction, detecting a release event for a cessation of the user interaction with the GUI object; obtaining timing data and GUI location data for each of: the at least one contact event; and the release event; using the obtained timing and GUI location data to determine an interaction velocity for the GUI object; and using the determined interaction velocity to determine a target GUI location for movement of the GUI object.
  • the velocity determined for the GUI object may be the (GUI) object velocity.
  • the target determined, selected, nominated or proposed may be one of any of the theoretically possible positions on the GUI for the GUI object.
  • the target may additionally/instead be approximated or estimated.
  • the displacement associated with the user interaction may be a movement of the GUI object by the user, when interacting with the object.
  • the location data may include a first GUI location associated with the contact event, and a second GUI location associated with the release event.
  • the step of using the determined interaction velocity to determine a target GUI location comprises: obtaining a list of a plurality of candidate target GUI locations; and using the determined interaction velocity to select the target GUI location from the candidate target GUI locations.
  • the list may be a set of candidates, and may be pre-determined (prior to use of the GUI by the user).
  • the step of using the determined interaction velocity to determine a target GUI location comprises: using the determined interaction velocity to estimate an interaction intent GUI location.
  • the estimated interaction intent GUI location may be designated itself as the determined/selected target GUI location.
  • the method comprises selecting from the list a candidate target GUI location associated with the interaction intent GUI location.
  • the step of selecting from the list comprises selecting one of: a candidate target GUI location having a shortest distance from the interaction intent GUI location; a candidate target GUI location within a defined neighbourhood region containing the interaction intent GUI location; and a candidate target GUI location within a circular sector region, having a predetermined central angle centred on a line between a release event location and the interaction intent GUI location.
  • The defined neighbourhood region may be a region defined at a predetermined distance from or around the interaction intent GUI location, for example a circle having a predetermined radius.
  • the method comprises selecting a candidate target GUI location within a region containing the release event location, for example a circle region having a pre- determined radius.
  • the timing data comprises a time for each respective event
  • the GUI location data comprises a GUI location for each respective event
  • the step of using the obtained timing and GUI location data to determine an interaction velocity comprises: comparing a contact event time and a release event time to determine an interaction time difference; comparing a contact event GUI location and a release event GUI location to determine an interaction distance; and using the interaction time difference and the interaction distance to determine the interaction velocity.
  • the step of using the determined interaction velocity to determine a target GUI location comprises: obtaining a direction component of the interaction velocity for a direction of the movement of the GUI object; and applying a scaling factor to a speed component of the interaction velocity to determine a distance for the movement.
  • the scaling factor may be a simple multiple of the vector of the interaction velocity. This multiple determines a distance “travelled” at the speed component.
  • the method comprises generating the scaling factor by modelling a physical force acting on the GUI object in a GUI environment.
  • the method comprises using a GUI location result from the obtained direction and distance for the movement as the estimated interaction intent GUI location.
  • the method comprises, following selection of the target GUI location, moving the GUI object to the target GUI location.
  • the method comprises generating a path for displaying on the GUI to the user the movement of the GUI object along the path to the target GUI location.
  • the method comprises using the interaction intent GUI location as a control point for generating a parametric curve for the path.
  • the path generated may not follow a one-dimensional line between the release event location and the target GUI location; the visual feedback to the user may be more likely to prompt timely corrective action by the user if the path is visually appropriate from the point of view of the user for the representation of the objects and their environment in the GUI. In this case, a curved path is generated for this improved visual feedback.
  • the step of detecting at least one contact event for a user interaction with the GUI object comprises detecting at least two contact events for the user interaction.
  • the steps of obtaining and using the timing data and GUI location data comprise: obtaining timing data and GUI location data for each of: the at least two contact events; and the release event; and using the obtained timing and GUI location data to determine a rate of change of the interaction velocity for the GUI object.
  • the method comprises using GUI location data for each of: the at least two contact events; and the release event, to generate a non-linear route between the events, and using the generated route to project a direction for the movement of the GUI object.
  • One embodiment of another aspect of the invention can provide a method of processing user interaction data for movement of a graphical user interface (GUI) object in a two- dimensional GUI, comprising: for a user interaction with a GUI object: detecting at least one contact event for a user interaction with the GUI object; and, following a GUI object
  • GUI location data for each of: the at least one contact event; and the release event; using the obtained GUI location data to determine an interaction direction for the GUI object; and using the determined interaction direction to determine a target GUI location for movement of the GUI object.
  • the method comprises using the determined interaction direction and a predetermined distance parameter to determine a target GUI location for movement of the GUI object.
  • the predetermined distance parameter is a predetermined speed.
  • the method comprises using the obtained GUI location data to determine an interaction distance for the GUI object, wherein the predetermined distance parameter is a predetermined distance scaling parameter. It is therefore possible in some embodiments to use standard parameters, factors, scaling parameters or the like in place of a measurement of the velocity, distance, speed and the like of the interaction; instead a standard parameter can give an approximation of the target, given a user interaction direction.
  • One embodiment of another aspect of the invention can provide apparatus for processing user interaction data for movement of a graphical user interface (GUI) object in a two- dimensional GUI, the apparatus comprising: a processor; and a memory, the apparatus being configured, under control of the processor, to execute instructions stored in the memory to: for a user interaction with a GUI object: detect at least one contact event for a user interaction with the GUI object; and, following a displacement associated with the user interaction, detect a release event for a cessation of the user interaction with the GUI object; obtain timing data and GUI location data for each of: the at least one contact event; and the release event; use the obtained timing and GUI location data to determine an interaction velocity for the GUI object; and use the determined interaction velocity to determine a target GUI location for movement of the GUI object.
  • an initial contacted state in which the apparatus is operable to detect a contact event during a user interaction in which the user contacts the GUI, may be contrasted with a following un-contacted state, in which a release event has been detected, and in which the apparatus is configured to then determine the target GUI location.
  • One embodiment of another aspect of the invention can provide a method for a user to execute a flick gesture to trigger the movement of a GUI object to an intended target location, comprising: facilitating a user touch of the GUI object; registering a plurality of touch events associated with a user selection of the GUI object; registering a release event for the user releasing the touch; registering a user flick gesture comprising touch events, including position, occurring over multiple time intervals, followed by the user releasing their touch; calculating a velocity for the flick gesture; and extrapolating the velocity to calculate an intent point location, to approximate the intention of the user.
  • One embodiment of another aspect of the invention can provide a method of processing user interaction data, comprising: for a user interaction with a GUI object: detecting at least one contact event for a user interaction with the GUI object; and detecting a release event for a cessation of the user interaction with the GUI object; obtaining timing data and GUI location data for each of: the at least one contact event; and the release event; using the obtained timing and GUI location data to determine an interaction velocity for the GUI object; and using the determined interaction velocity to determine a target GUI location.
  • Processors and/or controllers may comprise one or more computational processors, and/or control elements having one or more electronic processors.
  • Uses of the terms “processor” or “controller” herein should therefore be considered to refer either to a single processor, controller or control element, or to pluralities of the same; which pluralities may operate in concert to provide the functions described.
  • individual and/or separate functions of the processor(s) or controller(s) may be hosted by or undertaken in different control units, processors or controllers.
  • a suitable set of instructions may be provided which, when executed, cause said control unit or computational device to implement the techniques specified herein.
  • the set of instructions may suitably be embedded in said one or more electronic processors.
  • the set of instructions may be provided as software to be executed on said computational device.
  • Embodiments of the invention and the functionality thereof may be incorporated into or implemented in software, stored in a memory of a device having a user interface; for example, a desktop or laptop computer, a tablet, or a portable user device.
  • the software may be embodied in or incorporated in a software application (app).
  • the display of the device may be used as the user interface; for example, a portable user device having a display with a touchscreen interface.
  • Figure 1a is a flow chart illustrating steps of a method of processing user interaction data for movement of a GUI object, according to an embodiment of the invention;
  • Figure 1b is a schematic diagram illustrating an interaction used to calculate a vector and an intent point, according to an embodiment of the invention;
  • Figure 1c is a flow chart illustrating steps of a method of processing user interaction data for movement of a GUI object, according to an embodiment of the invention;
  • Figure 1d is a schematic diagram illustrating a plurality of contact events and a release event for a GUI object, according to an embodiment of the invention;
  • Figures 2 to 4 are schematic diagrams illustrating methods for determining an area for valid target locations, according to embodiments of the invention;
  • Figure 5 is a schematic diagram illustrating selection of an intended target location and generation of a curved path to move the GUI object, according to an embodiment of the invention;
  • Figure 6 is a diagram illustrating components of a system according to an embodiment of the invention.
  • Embodiments of the invention provide methods and apparatus for managing user interaction (gesture-based) movement of GUI (graphical user interface) objects, in particular a novel “flick” gesture software mechanic for movement of a GUI object to an intended target location. In embodiments, the system can calculate a user's intent point or intent GUI location from the interaction, subsequently moving the GUI object along a generated path to the target location, with reference to the intent point.
  • Embodiments comprise tracking the touch events of contact or press and release on a GUI object, moving it (optionally) along a path, calculating a vector representing the interaction or gesture, and extrapolating the vector to approximate an intent point which represents an estimate of the user's intended target location. There may be multiple valid target locations (or none). An additional step of calculating a valid area for the target can be used to achieve the desired targeting.
  • Embodiments of the invention provide advantages of reduced computational resource requirement, and of increased accuracy and efficiency of the user interface, as noted above.
  • the “flick” gesture will typically be a much faster interaction than dragging an object all the way to the intended target location; this will therefore require less computation, and allow the user to interact with the GUI sooner to perform the next task than would otherwise have been possible.
  • Previously considered methods would not have been able to avoid dragging the object the entire journey to the target location, as there was no consideration of how to determine the target location, other than at the end of the dragging gesture.
  • the use of interaction or object velocity to approximate the intent point will in most cases be sufficient to obtain a valid target location, thus saving further processing resources in more precise calculation or modelling.
  • Other advantages are:
  • GUI object movements can be achieved with single hand or digit gestures, even on large touchscreen devices.
  • Accuracy/reliability - previously considered methods for moving GUI objects have, for example, triggered the movement only, without determining a final location, or for instance always triggered a movement on instruction, without considering whether a final location is achieved. These are unsuitable for GUI interfaces in which the object must achieve a final location (rather than, for example, being removed or exiting the GUI, for example allowing a further attempt at movement). Others have prompted a standard GUI movement or displacement or translation or the like once the movement command is issued, again potentially not providing accuracy of movement or final location. On the other hand, other methods have limited movement unless certain restrictions or prior conditions are met - these can be unreliable, and can also produce inefficiency of interaction and resource usage.
  • the determination and use of the interaction velocity or velocity of the GUI object at release allows estimation of an (intended) target, and moreover a more accurate determination of the user intent. This is in contrast with previously considered methods, in which interaction velocity may not be considered, or used solely to trigger a standard movement, or a standard path of movement without consideration of estimating or generating a target.
  • the generation of a path for the travel of the GUI object once released (and a target and/or intent point determined) allows for instant intuitive feedback to the user, allowing quicker and more efficient correction, if necessary, of the estimation of the user's intended target.
  • Figures 1a and 1c are flow charts illustrating steps of methods (100, 150) of processing user interaction data for movement of a GUI object, according to embodiments of the invention.
  • the user wants to move a GUI object to a location without having to drag the GUI object all of the way.
  • a contact event is detected (102) for the user interaction with the GUI object.
  • the contact event will correspond to a time and a GUI location for the user interaction with the GUI object of the interface.
  • the initiation of the interaction by the user will be realised by the co-location of the user's interaction (or interactive device) with the interface itself, and a represented location on the software-generated GUI.
  • a touch event is generated or triggered when the user interacts with the screen, meaning when their finger makes contact with the screen, when their finger moves across the screen, or when their finger is released from the screen.
  • the touch event will be registered by the processing system of the device.
  • Touch events will include information about the position of each touch (point of contact) and the time it happened.
  • the touchscreen system may simply report any touch locations at each refresh of the system or screen - typically every 30th or 60th of a second (the frequency of refresh is dependent on the specific touchscreen system used).
  • These regular frequent screen status updates can also be considered touch events for our purposes as they contain all of the touch information needed.
  • the timing data obtained for it may not be timing data obtained pro-actively or specifically for that event, but rather it may be that events are simply registered or noted at each regular refresh or update time/point for the GUI, for example a screen refresh or a touch refresh.
  • the timing data for this given event will thus be the data that was available in any case, namely that this event took place at the (nth) refresh point.
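  • By way of illustration only, the following is a minimal sketch of registering contact and release events with their timing and position data, assuming a browser-style touchscreen front end using the standard Pointer Events API (pointerdown, pointermove, pointerup); the data structure and handler names are illustrative, not part of the invention as claimed.

```typescript
interface InteractionEvent {
  x: number;     // GUI location (e.g. pixels)
  y: number;
  time: number;  // timestamp in milliseconds
}

const contactEvents: InteractionEvent[] = [];
let releaseEvent: InteractionEvent | null = null;

function onPointerDown(e: PointerEvent): void {
  // First contact event: the user touches the GUI object.
  contactEvents.length = 0;
  releaseEvent = null;
  contactEvents.push({ x: e.clientX, y: e.clientY, time: e.timeStamp });
}

function onPointerMove(e: PointerEvent): void {
  // Each update while contact is maintained is recorded as a further contact
  // event, so the object's position is tracked over time.
  if (contactEvents.length > 0 && releaseEvent === null) {
    contactEvents.push({ x: e.clientX, y: e.clientY, time: e.timeStamp });
  }
}

function onPointerUp(e: PointerEvent): void {
  // Release event: cessation of the user interaction.
  releaseEvent = { x: e.clientX, y: e.clientY, time: e.timeStamp };
}

// guiObjectElement.addEventListener('pointerdown', onPointerDown);
// guiObjectElement.addEventListener('pointermove', onPointerMove);
// guiObjectElement.addEventListener('pointerup', onPointerUp);
```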
  • a different kind of user interface may be used to cause the initiation or contact event.
  • the contact event would thus be a click or similar occurrence, rather than a touch event.
  • a GUI object may be any visual representation of an object displayed on screen.
  • Examples may include: a word, a box, a circle, a button, an image, a video, an icon.
  • the GUI object(s) are generated in the GUI by the software running on the processing system of the device, and therefore the timing and location of the contact/release events can be compared with the generated location of the GUI objects by the processing system.
  • a two-dimensional interface such as a touchscreen or trackpad
  • a GUI object which is actually itself represented as a three-dimensional object, or indeed is represented (as a 3D object) in a three-dimensional environment.
  • the interface itself may also be three-dimensional, so that a further dimension is used to capture location data for the contact and release events of the GUI object.
  • a user may move an object along a z-axis as well as x- and y-axes. These may then be used to calculate interaction velocity in three-dimensions, in similar fashion to the other embodiments described herein.
  • the user interacts with (e.g. touches) the GUI object to move or displace (translate) the object across the GUI, for example by moving their finger on the touchscreen.
  • the GUI object itself may or may not move under their direction.
  • further contact or interaction events may be registered during the interaction moving the object, thus tracking its position; the contact event used in the calculation may be the first contact event, or any of the other contact events (or as described in later embodiments a plurality of them).
  • the user releases or lets go of the object, for example by removing the finger from the touchscreen, and a release event is detected on cessation of the user interaction with the GUI object (104).
  • the user has executed a movement or gesture (contact, displacement, release), in this case a “flick” gesture intended to move or throw the GUI object towards an intended target elsewhere on the GUI, usually at a location distant from the release location.
  • the flick gesture thus comprises contact events (in the case of a touchscreen, touch events) which include position or location data, occurring over multiple time intervals or updates of the screen, followed by the user releasing their touch. It may be noted that this and similar movements or gestures by the user are innately less accurate than dragging an object to a final GUI location; indeed, this is a reason that an approximation or estimation of the intent location is used (with a subsequent choice of associated final location), because by the nature of the gesture the estimated intent point may not necessarily be relied upon as the precise location intended by the user.
  • the timing and GUI location data for the events can then be obtained (106, 108), from the processing system running the software and managing the interface.
  • the timing data for the contact event (106) and the release event (108) may be stored on a timing module of the processing system of the device.
  • the GUI location data for the events i.e. the positions on the GUI at which the GUI object was disposed during the events
  • the interface locations e.g. touchscreen positions
  • corresponding to the GUI locations of the GUI object may also be recorded by the system.
  • the velocity (speed and direction) for the interaction or gesture is calculated (110) using the (at least) two events (contact, release).
  • This velocity can be designated as the interaction velocity, or the velocity of the gesture or interaction of the user with the object; alternatively, the GUI object velocity (from the interaction).
  • This interaction velocity is determined (110) from the timing and location data obtained for the (at least) two events. For example, for contact and release events at first and second time points (contact time point and release time point), the time difference can be calculated. Then for the given separate GUI locations of the two events, a GUI distance can be calculated, so that a GUI speed can be calculated. An interaction direction (on the GUI) can also be determined from the two GUI locations. Thus the components of the interaction velocity vector are determined.
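  • As a hedged illustration of this velocity calculation (assuming times in milliseconds and GUI locations in pixels; the function names are illustrative only):

```typescript
interface Vec2 { x: number; y: number; }
interface InteractionEvent { x: number; y: number; time: number; }

// Determine the interaction velocity from a contact event and a release event,
// each carrying a time (milliseconds) and a GUI location.
function interactionVelocity(contact: InteractionEvent, release: InteractionEvent): Vec2 {
  const dt = (release.time - contact.time) / 1000;  // interaction time difference, seconds
  if (dt <= 0) return { x: 0, y: 0 };               // degenerate interaction
  return {
    x: (release.x - contact.x) / dt,                // GUI units per second
    y: (release.y - contact.y) / dt,
  };
}

// The speed component is the magnitude of the velocity vector; the direction
// component is the corresponding unit vector.
function speedOf(v: Vec2): number {
  return Math.hypot(v.x, v.y);
}
```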
  • this determined interaction velocity is used to select (112) a target GUI location for the movement of the GUI object following its release by the user.
  • the target GUI location could be any of the physically/computationally possible positions on the GUI, or could be one of a given set of GUI locations, depending on the GUI paradigm.
  • the GUI location may be selected from a set of candidate GUI locations.
  • the GUI location intended by the user is estimated from the interaction.
  • the determined interaction velocity is used or extrapolated to approximate the intention of the user, as if “throwing” a physical object, to estimate (166) an interaction intent GUI location, or “intent point”.
  • the intent point represents an approximation of the user's intended target location.
  • a direction component can be used to determine an approximate direction in which the user intends the object to travel
  • a speed component can be used to determine how far the object is intended to travel across the GUI.
  • the speed may simply be used to determine whether the object is travelling “too fast” to “land” at a nearby candidate GUI location, and instead infer a distant candidate location.
  • a scalar factor may be used to determine or extrapolate the distance of travel (as shown at 6 in Figure 1b).
  • the scalar may be linked to a represented physical paradigm, for example a represented force F in Figure 1b, such as a friction component, to determine the distance to the intent point.
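  • The following sketch illustrates both options described above: a simple multiple of the interaction velocity, and a scaling derived from a modelled friction-like deceleration. The constant values are assumptions chosen for illustration, not values specified by the description.

```typescript
interface Vec2 { x: number; y: number; }

// Option 1: a simple multiple of the interaction velocity vector.
function intentPointByScaling(release: Vec2, velocity: Vec2, scale = 0.3): Vec2 {
  return { x: release.x + velocity.x * scale, y: release.y + velocity.y * scale };
}

// Option 2: a scaling derived from a modelled friction-like force. With an
// initial speed v and a constant deceleration a, the distance travelled before
// the object "comes to rest" is v^2 / (2a).
function intentPointByFriction(release: Vec2, velocity: Vec2, deceleration = 2000): Vec2 {
  const speed = Math.hypot(velocity.x, velocity.y);
  if (speed === 0) return { ...release };
  const distance = (speed * speed) / (2 * deceleration);
  return {
    x: release.x + (velocity.x / speed) * distance,
    y: release.y + (velocity.y / speed) * distance,
  };
}
```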
  • the intent point or interaction intent location is simply used as the final GUI target location at which the object finishes the movement.
  • a list of candidate target GUI locations is obtained (164), and from the list a candidate target GUI location which is associated with the interaction intent GUI location (168) is selected. For example this might be a candidate target nearest the intent point.
  • an estimation of the area the user is trying to flick the GUI object to is calculated.
  • This area may contain any number of valid (candidate) target locations (positions that the user can move the GUI object to). If more than one valid target location is present, then the best, closest or most relevant valid target location is selected, and this has then been estimated as the target location the user intended to hit, having calculated the intent point/location.
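  • A minimal sketch of choosing the closest valid candidate to the calculated intent point (the helper name and the use of Euclidean distance are illustrative assumptions):

```typescript
interface Vec2 { x: number; y: number; }

// Return the valid candidate target location closest to the intent point, or
// null if no candidates are available (the caller can then fall back to other
// selection rules, or return the object to its starting position).
function closestTarget(intent: Vec2, candidates: Vec2[]): Vec2 | null {
  let best: Vec2 | null = null;
  let bestDistance = Infinity;
  for (const candidate of candidates) {
    const d = Math.hypot(candidate.x - intent.x, candidate.y - intent.y);
    if (d < bestDistance) {
      bestDistance = d;
      best = candidate;
    }
  }
  return best;
}
```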
  • the GUI object is then moved to this target location along a path which is dependent on the desired visual effect for relevant or optimum visual feedback for the user.
  • a curved path may be plotted to indicate a motion which the user might consider visually appropriate, or a “natural motion”; the visual cues displayed during the animation of the movement of the GUI object can be beneficial in providing feedback, so that the user can correct or moderate user interaction in order to provide efficient input, and thus for the system to provide an efficient interface response to minimise resource usage. In embodiments, the movement of the GUI object to the target location may be paused before completion, in order to allow additional time for the user to correct or moderate the interaction if necessary.
  • Figure 1b is a schematic diagram illustrating an interaction used to calculate a vector and an intent point, according to an embodiment of the invention. In the specific arrangement shown in Figure 1b, a touchscreen is used as the user interface for the GUI.
  • the user “touches” a GUI object 1 on the touchscreen GUI 20 by contacting their finger on the screen at or near the position of the GUI object on the software-generated GUI.
  • the touchscreen system registers a touch event (the contact and its position) 2, and the touch event can therefore be associated with selection of or picking up of the GUI object.
  • the GUI object may then be moved (which may not be displayed or represented on the GUI) by the user maintaining contact with the screen and moving their finger, with the touchscreen system and GUI software combining to track the change of position of the GUI object.
  • the user then executes a flick gesture to throw the GUI object towards their intended target location 8.
  • The touchscreen system and GUI software register over time a sequence of points of contact (touch events) followed by a release event 3, each of which happens at a specific time.
  • These touch events describe the path 4 the user dragged on the screen.
  • a flick vector 5 can be calculated with both direction and speed (as shown in the diagram, the size of the vector being the magnitude, in this case speed), and is an estimation that can be based on two or more of the points of contact (touch events), in embodiments with more recent events having more weight in calculation of the interaction velocity.
  • the flick vector can be calculated simply based on the vector between the two points of contact, or can factor in additional points of contact with the aim of more accurately estimating the user's intention, for example to check if the user's touch is accelerating or decelerating. Once calculated, the flick vector can then be extrapolated 6 to determine the “intent point” 7, a position that can be calculated as an estimate of the user's intended target location, in lieu of knowing the user's actual intended target.
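  • As an illustration of factoring in additional points of contact, the sketch below averages the per-segment velocities with later segments weighted more heavily; the linear weighting scheme is an assumption, since the description only requires that more recent events can carry more weight:

```typescript
interface TouchSample { x: number; y: number; time: number; }  // one touch event
interface Vec2 { x: number; y: number; }

// Estimate the flick vector from a sequence of touch events ending at the
// release event, weighting later segments more heavily so that acceleration,
// deceleration or a late change of direction dominates the estimate.
function flickVector(samples: TouchSample[]): Vec2 {
  let vx = 0, vy = 0, totalWeight = 0;
  for (let i = 1; i < samples.length; i++) {
    const dt = (samples[i].time - samples[i - 1].time) / 1000;
    if (dt <= 0) continue;
    const weight = i;  // later segments count more
    vx += weight * (samples[i].x - samples[i - 1].x) / dt;
    vy += weight * (samples[i].y - samples[i - 1].y) / dt;
    totalWeight += weight;
  }
  return totalWeight > 0 ? { x: vx / totalWeight, y: vy / totalWeight } : { x: 0, y: 0 };
}
```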
  • Where the GUI objects have some physical metaphor (e.g. objects on a table or other surface with estimable physical properties), it may make sense to account for additional forces that would act on the GUI object, such as friction (with gas, liquids or surface contact) or gravity (for example represented figuratively in Figure 1b with the arrow F).
  • Figure 1 d is a schematic diagram illustrating a plurality of contact events and a release event for a GUI object, according to an embodiment of the invention. This diagram also includes schematic indications of timing and location for the respective events.
  • the “current” time is t3, at which the object 1 is being released, at GUI location C.
  • the location may comprise for example the two-dimensional co-ordinates for the interface.
  • the first and second contact events were at time t1, location A, and time t2, location B, respectively.
  • more than one contact event may be noted, recorded or triggered, as shown here.
  • the GUI object 1 is initially interacted with at a first contact event 2, and during the movement 4 of the GUI object during the user interaction, another contact event 2' is registered. Finally, the release event 3 is registered, as above.
  • the plurality of contact events 2 and 2' can be used to determine for example a rate of change of interaction velocity, or acceleration (or deceleration) of the interaction or object. This can inform the calculation of the intent point; for instance, the acceleration can be used to estimate an increased (or decreased) final speed at the release event (greater than the speed between earlier contact events).
  • calculations based on two or more contact events can be weighted to later contact events. For example, if the one or more most recent contact events change a direction from an initial direction, the changed direction can be used instead, or the direction calculated as a comparison, with a weighting on the later (changed direction) events.
  • the location of the plurality of events can be used to better evaluate the intended direction of the interaction, or whether a shaped path for the gesture was intended. For example, if the event GUI locations describe a curve, the projected or extrapolated path for the GUI object can maintain the same curve. This can be achieved by determining the series of locations for the events and generating a line or curve (e.g. a parametric curve) joining or fitting the locations, and using as a direction component (either alone or in combination with the direction component of the determined velocity vector) a direction following or projected from the generated line or curve.
  • In other embodiments, GUI location data is determined for the contact event(s) and the release event, but timing data is not acquired. A direction of the displacement for the user interaction is therefore available from the (at least two) locations, which gives an indication of the directional intent of the user.
  • a target area can then be generated in the direction of intent, in order to obtain a candidate location falling within the area (for example a circular sector as described with reference to Figure 3).
  • a standard parameter can be used in place of determining the interaction velocity. For instance, a standard speed can be assumed for all user interactions. Given the direction and this standard speed, a user intent location or intent point can be generated; the velocity is now made up of the determined direction and the standard speed parameter.
  • a standard multiple can be applied to give a distance for the intent location; for instance, it can be assumed that for an interaction distance of X, then the object will travel a distance of Y, or a multiple Z of that interaction distance. This scaled distance can then be used with the direction to find an intent location.
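  • A sketch of this direction-only variant, where the object is assumed to travel a fixed multiple of the interaction distance beyond the release point (the multiple of 3 is purely illustrative):

```typescript
interface Vec2 { x: number; y: number; }

// Direction-only variant: no timing data is used. The intent location is the
// release point extrapolated along the interaction direction by a standard
// multiple of the interaction distance.
function intentFromDirection(contact: Vec2, release: Vec2, multiple = 3): Vec2 {
  return {
    x: release.x + (release.x - contact.x) * multiple,
    y: release.y + (release.y - contact.y) * multiple,
  };
}
```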
  • Where GUI refresh points are repeated at given times, these can also be used to obtain timing data for the contact and release events.
  • Rather than timing data being sought out specifically for a given event, if events are triggered or registered at the refresh points in any case, the timing data for the events will be known and/or obtainable in any case.
  • Figures 2 to 4 are schematic diagrams illustrating methods for determining an area for valid target locations, according to embodiments of the invention. In embodiments, it may be that any GUI location is a valid target location, but in others it may be that only some feasible GUI locations are designated as valid target locations.
  • a valid area for candidate target locations 10 can for example include, individually or in combination (including intersections and/or unions of): i. as shown in Figure 2, the valid area 10 within a radius 11 of the intent point 7; ii. as shown in Figure 3, the valid area 10 within a circular sector, where:
  • the starting point 3 is the apex of the circular sector;
  • the central angle is defined by the desired or pre-determined leeway 12 either side of the vector from the starting point to the intent point 7;
  • the sector can be limited by radius (minimum and/or maximum) or extended indefinitely 13 in order to capture the desired valid target locations; iii. as shown in Figure 4, the valid area 10 within a distance range based on the flick speed, i.e. between 14 an inner radius and an outer radius.
  • an area within a radius of the starting position may be added. This may be beneficial in order to capture the intended target location when the user drags close to or even past their intended target.
  • An alternative valid area could be a channel represented by any point that falls within a fixed distance of the line described by the flick vector.
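  • The area tests of Figures 2 to 4, together with the channel alternative, might be implemented along the following lines (angles in radians; all thresholds are illustrative assumptions):

```typescript
interface Vec2 { x: number; y: number; }

const dist = (a: Vec2, b: Vec2) => Math.hypot(a.x - b.x, a.y - b.y);

// Figure 2: within a radius of the intent point.
function withinRadius(p: Vec2, intent: Vec2, radius: number): boolean {
  return dist(p, intent) <= radius;
}

// Figure 3: within a circular sector whose apex is the starting point and whose
// central angle is the leeway either side of the line to the intent point
// (radius unbounded here; a minimum/maximum radius could also be applied).
function withinSector(p: Vec2, start: Vec2, intent: Vec2, halfAngle: number): boolean {
  const toIntent = Math.atan2(intent.y - start.y, intent.x - start.x);
  const toPoint = Math.atan2(p.y - start.y, p.x - start.x);
  let diff = Math.abs(toPoint - toIntent);
  if (diff > Math.PI) diff = 2 * Math.PI - diff;
  return diff <= halfAngle;
}

// Figure 4: within a distance band (inner to outer radius) around the start,
// where the radii would be derived from the flick speed.
function withinBand(p: Vec2, start: Vec2, inner: number, outer: number): boolean {
  const d = dist(p, start);
  return d >= inner && d <= outer;
}

// Channel alternative: within a fixed perpendicular distance of the line
// described by the flick vector.
function withinChannel(p: Vec2, start: Vec2, direction: Vec2, width: number): boolean {
  const len = Math.hypot(direction.x, direction.y) || 1;
  const cross = (p.x - start.x) * (direction.y / len) - (p.y - start.y) * (direction.x / len);
  return Math.abs(cross) <= width;
}
```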
  • areas can be calculated to exclude results (invalid areas).
  • the required combination of areas for inclusion or exclusion will depend on the desired result. In embodiments, the target area(s) may be variable depending on the type of candidate location available. For example, for a given GUI with certain types of target locations in a first region, and others in a second region, the target area cast for candidates in the first region may be a different size or shape than that for the second region. For example, if a GUI object is a potential match with several targets in the first region, the candidate area may include all those targets, whereas in the second region a candidate/target area may be restricted to certain targets within the second region.
  • the system can then skip this step and move straight to evaluating which of the valid targets most closely matched the user's intent (for example, finding the target closest to the intent point).
  • the desired area for valid candidate target locations may include any number of valid target locations for the GUI object: zero, one or more. If multiple valid target locations are found, then they are evaluated to determine which most closely matches the user's intended target location. Ways to calculate the intended target location may include (individually or in combination):
  • the GUI object can be moved to it, and therefore a path for that movement can be determined, and displayed to the user for visual feedback (rather than the object simply (re)appearing in target location).
  • the path can be chosen as a direct path, or one that appears appropriate to a user's visual frame of reference (natural motion), to optimize visual feedback.
  • Figure 5 is a schematic diagram illustrating selection of an intended target location and generation of a curved path to move the GUI object, according to an embodiment of the invention.
  • the GUI object can therefore be moved along a curved path 15 to the target location.
  • a curve starting at the position of the GUI object 1, travelling initially in the direction of the flick vector 5, curving towards the intended target location 8 and ending at that position.
  • the curve can be calculated using the mathematics for a quadratic Bézier curve where either the intent point 7 or another point along the flick vector is used as the control point for the curve.
  • the choice of control point can depend on the desired motion (see below) as the control point directly controls the shape of the curve.
  • Other methods for calculating a curve can be used in place of the quadratic Bézier curve.
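  • As a sketch of the quadratic Bézier approach described above, with the release position as the start point, the selected target as the end point and the intent point as the control point:

```typescript
interface Vec2 { x: number; y: number; }

// Quadratic Bézier point: B(t) = (1-t)^2 P0 + 2(1-t)t P1 + t^2 P2, t in [0, 1],
// with P0 the release position, P1 the control point (e.g. the intent point)
// and P2 the selected target location.
function bezierPoint(p0: Vec2, control: Vec2, p2: Vec2, t: number): Vec2 {
  const u = 1 - t;
  return {
    x: u * u * p0.x + 2 * u * t * control.x + t * t * p2.x,
    y: u * u * p0.y + 2 * u * t * control.y + t * t * p2.y,
  };
}

// Usage: advance t from 0 to 1 over successive frames to move the GUI object
// along the curve, e.g. const pos = bezierPoint(releasePos, intentPt, targetPos, t);
```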
  • the shape of the curve can vary depending on a required visual effect of the motion.
  • the GUI object may be required to:
  • a curve may be defined at the point of release, with positions along the curve being generated at each time interval as the GUI object moves along it following release.
  • points along the curve can be generated in advance, and applied according to the parameters determined (for example limits can be defined for selecting an appropriate pre-defined curve).
  • the object may sometimes move slower or faster than other times, or the curve length may vary within some desired limits.
  • the curve that the GUI object travels along can be made visible to the user (being rendered on the GUI) or it can be invisible.
  • a curve may not be generated in advance of the movement, and instead a physics simulation can be used to create a similar effect.
  • a physics simulation can include forces acting on the GUI object to move it along a curve to the intended target location, and can be applied in real time (iteratively at intervals of time).
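  • A hedged sketch of such a physics-simulation alternative, applying a spring-like attraction towards the target plus damping at each time step (the stiffness and damping constants are illustrative assumptions, not values given in the description):

```typescript
interface Vec2 { x: number; y: number; }

// One simulation step: a spring-like force pulls the object towards the target
// while damping bleeds off velocity; starting from the flick velocity, this
// produces a curved, decelerating approach to the target.
function stepTowardsTarget(
  pos: Vec2, vel: Vec2, target: Vec2, dt: number,
  stiffness = 20, damping = 6
): void {
  const ax = (target.x - pos.x) * stiffness - vel.x * damping;
  const ay = (target.y - pos.y) * stiffness - vel.y * damping;
  vel.x += ax * dt;
  vel.y += ay * dt;
  pos.x += vel.x * dt;
  pos.y += vel.y * dt;
}
```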
  • the speed at which the UI object travels will depend on the required effect. Typically, the speed of motion at the start of the curve should match the speed of the flick gesture (at the release point).
  • GUI object can stop in position or be returned to its original position.
  • the motion of the movement will depend on the required effect and can be linear or use a curve as described above.
  • steps of a method according to a particular embodiment of the invention can be represented as pseudo-code, as follows:
  • Card object can be picked up and moved around freely by the user. When the card is released:
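  • The pseudo-code itself is not reproduced in this text; the following is a hedged reconstruction of the release-handling flow from the surrounding description, simplified to the intent-point radius check (the fuller fallback cascade is described later). All names and constants are illustrative assumptions.

```typescript
interface Pt { x: number; y: number; }

// When the card is released: estimate the intent point from the flick velocity,
// choose the closest valid target within an acceptance radius of it, and
// otherwise return the card to its starting position.
function onCardReleased(start: Pt, release: Pt, velocity: Pt, validTargets: Pt[]): Pt {
  const speed = Math.hypot(velocity.x, velocity.y);
  const travel = speed * 0.25;                       // simple scaling of the flick speed
  const intent: Pt = speed === 0 ? { ...release } : {
    x: release.x + (velocity.x / speed) * travel,
    y: release.y + (velocity.y / speed) * travel,
  };
  let best: Pt | null = null;
  let bestDistance = Infinity;
  for (const target of validTargets) {
    const d = Math.hypot(target.x - intent.x, target.y - intent.y);
    if (d < bestDistance) { bestDistance = d; best = target; }
  }
  const acceptanceRadius = 100;                      // assumed GUI units
  // The chosen location is then used as the end point of a curved path
  // (e.g. a quadratic Bézier with the intent point as control point).
  return best !== null && bestDistance <= acceptanceRadius ? best : start;
}
```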
  • In an example application of use of an embodiment of the invention, consider a software application where the user can drag GUI objects from their current location to multiple other target locations on screen, only some of which are valid, depending on the current state of the application and the GUI object moved.
  • the user may need to stack GUI objects in a specific order (e.g. alphabetical, numerical). Therefore, a valid target location would be a target location that would result in stacking in the correct order, and an invalid target location could be that same location if the stacking order was incorrect.
  • this invention provides a more efficient alternative: to be able to flick the GUI object towards their intended target location with the appropriate speed and have the underlying system figure out their intention (and in addition to make it move smoothly to their desired target location).
  • the system calculates the vector (direction and speed) of the flick and extrapolates (scales) this in order to estimate the intended target location of the user.
  • an area considered valid based on the user's flick gesture is then calculated; this can consist of a circular sector centred around the vector.
  • the user is able to pick up and drag a GUI representation of a computer file, e.g. an image file.
  • On screen there are multiple commands represented by boxes with words including: delete, print, email, duplicate, compress, and open. The commands are distributed across the screen so as to be well spaced out.
  • the user is able to flick the file towards the action they wish to execute.
  • the boxes all represent possible targets but not all will be valid targets depending on the current context (e.g. the file may be protected and cannot be deleted, thus the delete box's location is not valid).
  • the system then creates a list of all valid target locations.
  • the system extrapolates the flick vector to estimate the intent point, i.e. where a real object would land if flicked in this way, in the represented physical environment.
  • the system checks if any of the valid target locations are within a radius of this intent point; if there are one or more valid locations, it chooses the closest one. If that fails, then the system checks for valid target locations within a circular sector centred around the flick vector. If multiple locations are found, then it selects the one closest to the intent point.
  • If that also fails, the system checks within a radius around the starting point, again choosing the closest to the intent point if multiple options are found. If still no valid target location is found, then the target is set to be the starting point so that the object returns to its starting position.
  • a Bézier curve for the GUI object to travel along is then calculated from the starting point to the selected target location using the intent point as the control point for the Bézier curve. The GUI object is then moved along this path over time, starting at a speed approximating that which it was flicked with, thus creating a natural motion.
  • the image of the GUI object itself can be an image of an abstract icon that represents a/the command (as opposed to an image that represents a physical object such as a card or file).
  • an additional factor considered is pressure as an additional component of the touch events.
  • Some user interface devices such as touchscreens and pointing devices including some styluses, provide information to the interface control system on the pressure applied by the user during the interaction. Pressure information associated with the user interaction can then be used to inform or enhance the interaction, or the estimation of the (intended) target. For example, a parameter associated with the pressure information can be used to produce a three-dimensional vector for the interaction.
  • the system can use pressure to represent the z-axis, whereby the pressure towards the end of a "flick" gesture would reduce and can be used to simulate the object lifting up off the surface and towards the user.
  • This can be represented in the GUI itself for example in a true three dimensional environment, or by scaling the size of the GUI object to simulate 3D perspective.
  • If pressure values are included in the touch events, they can be combined with the flick vector into a three-dimensional (interaction) vector.
  • This three-dimensional vector can then be used to estimate the intended target location, for example shortening a distance (in two dimensions) which would otherwise have been predicted from a two-dimensional vector.
  • a 3D curve can be determined for the path; alternatively, the pressure values can simply be used in an extra step to simulate an approximation of depth, for example by scaling the GUI object up during the first half of the curve and scaling it back down during the second half until it lands at its destination.
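  • A minimal sketch of that depth approximation, scaling the object up over the first half of the path and back down over the second half (the maximum scale value is an illustrative assumption):

```typescript
// Scale factor for the GUI object at progress t (0 = release, 1 = landing):
// the object grows towards the midpoint of the path and shrinks back to its
// original size as it lands, approximating lift and depth.
function depthScale(t: number, maxScale = 1.3): number {
  const lift = 1 - Math.abs(2 * t - 1);  // 0 at the ends, 1 at the midpoint
  return 1 + (maxScale - 1) * lift;
}
```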
  • a player can flick a card towards a target using methods described herein, and an intent point can be generated to determine or select a GUI location target, such as a deck of cards or a specific game-play location, for instance a given “table” location.
  • an intended target can be generated from the 3D vector determined including the additional pressure information.
  • the card can be made to appear to lift up off the table, gently follow the curved path to the target point and land in place (scaling down).
  • Figure 6 is a diagram illustrating components of a system according to an embodiment of the invention. Certain of the above embodiments of the invention may be conveniently realized as a system 600 (such as a desktop or portable user device, such as a mobile phone) suitably programmed with instructions for carrying out the steps of the methods according to the invention.
  • the computing device or system may include software and/or hardware for providing functionality and features described herein.
  • the computing device or system may include one or more of logic arrays, memories, analogue circuits, digital circuits, software, firmware and processors.
  • the hardware and firmware components of the device/system may include various specialized units, circuits, software and interfaces for providing the functionality and features described herein.
  • a processing unit/processor 608 is able to implement such steps as described herein as aspects and embodiments of the invention.
  • the processor 608 may be or include one or more microprocessors, application specific integrated circuits (ASICs), programmable logic devices (PLDs) and programmable logic arrays (PLAs).
  • Interface device 604 may be a display device with an integrated user interface such as a touchscreen.
  • User input 602 is additionally available for inputting data separately from the user interface.
  • the memories 614 and/or 615 may be or include RAM, ROM, DRAM, SRAM and MRAM, and may include firmware, such as static data or fixed instructions, BIOS, system functions, configuration data, and other routines used during the operation of the computing device and/or processor.
  • the memory/memories also provide a storage area for data and instructions associated with applications and data handled by the processor 608.
  • the storage provides non-volatile, bulk or long-term storage of data or instructions in the computing device or system. Multiple storage devices may be provided or available to the computing device/system. Some of these storage devices may be external, such as network storage or cloud-based storage.
  • a processing environment 612 including the processor 608 may also include, for example: a timing module 606 for storing, processing and/or generating timing data for software events; and a GUI location module 610 for managing locations of GUI objects during generation and processing of the GUI.

Abstract

Methods and apparatus for processing user interaction data for movement of a graphical user interface (GUI) object (1) in a two-dimensional GUI (20) are disclosed. For a user interaction (2, 3) with a GUI object, at least one contact event (2) for a user interaction with the GUI object is detected and, following a displacement (4) associated with the user interaction, a release event (3) for a cessation of the user interaction with the GUI object is also detected. Timing data and GUI location data are obtained for each of the at least one contact event and the release event. The obtained timing and GUI location data are used to determine an interaction velocity (5) for the GUI object, and the determined interaction velocity is used to determine a target GUI location (8) for movement of the GUI object.

Description

METHODS AND APPARATUS FOR PROCESSING USER INTERACTION DATA FOR
MOVEMENT OF GUI OBJECT
FIELD OF THE INVENTION
This invention is directed to methods and apparatus for processing user interaction data for movement of a graphical user interface (GUI) object in a two-dimensional GUI.
BACKGROUND OF THE INVENTION
Known devices and systems with user interaction capability (such as computer systems having touchscreen devices) include a user interface, such as a graphical user interface (GUI), that allows a user to interact with GUI objects. The interaction, for example by a user touching the GUI location of the GUI object on the GUI of the touchscreen device, can include movement of the GUI object by the user, for example movement of the GUI object across the GUI while the user contacts the GUI object, known as dragging. Other user interaction gestures for GUI interfaces are known to the art.
Many such gesture and movement paradigms are inefficient from the point of view of user interaction, and also for computational processing requirements. Dragging user interface objects from one interface location to another requires processing power for tracking the movement for the entire journey. The user interaction also requires the time for the user to drag the object for the entire journey. In addition, in certain types of user interfaces (e.g. larger interfaces, increasingly larger portable devices) such interactions can be inefficient as the user may need to use more than one interaction or interaction type to complete a movement (for example, using both hands, or more than one digit), particularly for users less able to use their/both hands.
Other user interface systems are able to trigger movement of GUI objects. However, these can be highly inaccurate; objects may be moved without correctly identifying the destination for the object to be moved to, again causing inefficiency and excessive computational processing. In some methods, the interface response is not sufficiently informative for the user to determine whether movement is triggered as intended. In other cases, known methods are too prescriptive or restrictive, for example requiring too many parameters or conditions in order to trigger movement of an object, again causing inefficiency.
There is a need for more efficient and user-friendly interactions to move GUI objects, and a need for adequate visual feedback for the user.
The present invention aims to address these problems and provide improvements upon the known devices and methods.
STATEMENT OF INVENTION
Aspects and embodiments of the invention are set out in the accompanying claims.
In general terms, one embodiment of a first aspect of the invention can provide a method of processing user interaction data for movement of a graphical user interface (GUI) object in a two-dimensional GUI, comprising: for a user interaction with a GUI object: detecting at least one contact event for a user interaction with the GUI object; and, following a displacement associated with the user interaction, detecting a release event for a cessation of the user interaction with the GUI object; obtaining timing data and GUI location data for each of: the at least one contact event; and the release event; using the obtained timing and GUI location data to determine an interaction velocity for the GUI object; and using the determined interaction velocity to determine a target GUI location for movement of the GUI object.
This allows a GUI object to be moved from an initial user interaction location or region to another location on the GUI, without the user having to interact with the GUI object for the entire journey. This improves efficiency of both the user interface and user interaction with it, and use of resources, such as computational processing power. This and other methods of embodiments of the invention are also more accurate and less restrictive than previously considered methods, as the target GUI location can be reliably identified. The velocity determined for the GUI object may be the (GUI) object velocity. The target determined, selected, nominated or proposed may be one of any of the theoretically possible positions on the GUI for the GUI object. The target may additionally/instead be approximated or estimated. The displacement associated with the user interaction may be a movement of the GUI object by the user, when interacting with the object. The location data may include a first GUI location associated with the contact event, and a second GUI location associated with the release event.
Suitably, the step of using the determined interaction velocity to determine a target GUI location comprises: obtaining a list of a plurality of candidate target GUI locations; and using the determined interaction velocity to select the target GUI location from the candidate target GUI locations. In embodiments, the list may be a set of candidates, and may be pre-determined (prior to use of the GUI by the user).
In embodiments, the step of using the determined interaction velocity to determine a target GUI location comprises: using the determined interaction velocity to estimate an interaction intent GUI location. In an embodiment, the estimated interaction intent GUI location may be designated itself as the determined/selected target GUI location.
Suitably the method comprises selecting from the list a candidate target GUI location associated with the interaction intent GUI location.
In embodiments, the step of selecting from the list comprises selecting one of: a candidate target GUI location having a shortest distance from the interaction intent GUI location; a candidate target GUI location within a defined neighbourhood region containing the interaction intent GUI location; and a candidate target GUI location within a circular sector region, having a predetermined central angle centred on a line between a release event location and the interaction intent GUI location. The defined
neighbourhood region may be a region defined at a predetermined distance from or around the interaction intent GUI location, for example a circle having a predetermined radius. Optionally, the method comprises selecting a candidate target GUI location within a region containing the release event location, for example a circle region having a pre- determined radius.
Suitably, the timing data comprises a time for each respective event, and the GUI location data comprises a GUI location for each respective event, and the step of using the obtained timing and GUI location data to determine an interaction velocity comprises: comparing a contact event time and a release event time to determine an interaction time difference; comparing a contact event GUI location and a release event GUI location to determine an interaction distance; and using the interaction time difference and the interaction distance to determine the interaction velocity.
In embodiments, the step of using the determined interaction velocity to determine a target GUI location comprises: obtaining a direction component of the interaction velocity for a direction of the movement of the GUI object; and applying a scaling factor to a speed component of the interaction velocity to determine a distance for the movement. For example, the scaling factor may be a simple multiple of the vector of the interaction velocity. This multiple determines a distance “travelled” at the speed component. Thus a GUI location is determined from this distance using the direction component. Optionally, the method comprises generating the scaling factor by modelling a physical force acting on the GUI object in a GUI environment.
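By way of non-limiting illustration, the following sketch (in Python; the function and parameter names, such as scale_factor, are assumptions made for this example rather than terms taken from the description) shows one way a scaling factor might be applied to the speed component of an interaction velocity to obtain a direction and distance for the movement:

import math

def extrapolate_intent(release_x, release_y, vx, vy, scale_factor=0.5):
    # Speed component (magnitude) and direction component (unit vector)
    speed = math.hypot(vx, vy)
    if speed == 0.0:
        return release_x, release_y  # no displacement: intent is the release point
    ux, uy = vx / speed, vy / speed
    # Distance "travelled" is the speed scaled by a simple multiple
    distance = speed * scale_factor
    return release_x + ux * distance, release_y + uy * distance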
Suitably, the method comprises using a GUI location result from the obtained direction and distance for the movement as the estimated interaction intent GUI location.
In embodiments, the method comprises, following selection of the target GUI location, moving the GUI object to the target GUI location. Optionally, the method comprises generating a path for displaying on the GUI to the user the movement of the GUI object along the path to the target GUI location. Further optionally, the method comprises using the interaction intent GUI location as a control point for generating a parametric curve for the path. For example, the path generated may not follow a one-dimensional line between the release event location and the target GUI location; the visual feedback to the user may be more likely to prompt timely corrective action by the user if the path is visually appropriate from the point of view of the user for the representation of the objects and their environment in the GUI. In this case, a curved path is generated for this improved visual feedback.
Suitably, the step of detecting at least one contact event for a user interaction with the GUI object comprises detecting at least two contact events for the user interaction.
Optionally, the steps of obtaining and using the timing data and GUI location data comprise: obtaining timing data and GUI location data for each of: the at least two contact events; and the release event; and using the obtained timing and GUI location data to determine a rate of change of the interaction velocity for the GUI object.
In embodiments, the method comprises using GUI location data for each of: the at least two contact events; and the release event, to generate a non-linear route between the events, and using the generated route to project a direction for the movement of the GUI object.
One embodiment of another aspect of the invention can provide a method of processing user interaction data for movement of a graphical user interface (GUI) object in a two- dimensional GUI, comprising: for a user interaction with a GUI object: detecting at least one contact event for a user interaction with the GUI object; and, following a
displacement associated with the user interaction, detecting a release event for a cessation of the user interaction with the GUI object; obtaining GUI location data for each of: the at least one contact event; and the release event; using the obtained GUI location data to determine an interaction direction for the GUI object; and using the determined interaction direction to determine a target GUI location for movement of the GUI object.
Thus for an approximate result, timing data per se may not be required in order to find a suitable target location. In embodiments, the method comprises using the determined interaction direction and a predetermined distance parameter to determine a target GUI location for movement of the GUI object. Suitably, the predetermined distance parameter is a predetermined speed. Alternatively, the method comprises using the obtained GUI location data to determine an interaction distance for the GUI object, wherein the predetermined distance parameter is a predetermined distance scaling parameter. It is therefore possible in some embodiments to use standard parameters, factors, scaling parameters or the like in place of a measurement of the velocity, distance, speed and the like of the interaction; instead a standard parameter can give an approximation of the target, given a user interaction direction.
One embodiment of another aspect of the invention can provide apparatus for processing user interaction data for movement of a graphical user interface (GUI) object in a two- dimensional GUI, the apparatus comprising: a processor; and a memory, the apparatus being configured, under control of the processor, to execute instructions stored in the memory to: for a user interaction with a GUI object: detect at least one contact event for a user interaction with the GUI object; and, following a displacement associated with the user interaction, detect a release event for a cessation of the user interaction with the GUI object; obtain timing data and GUI location data for each of: the at least one contact event; and the release event; use the obtained timing and GUI location data to determine an interaction velocity for the GUI object; and use the determined interaction velocity to determine a target GUI location for movement of the GUI object.
For such apparatus, an initial contacted state, in which the apparatus is operable to detect a contact event during a user interaction in which the user contacts the GUI, may be contrasted with a following un-contacted state, in which a release event has been detected, and in which the apparatus is configured to then determine the target GUI location.
One embodiment of another aspect of the invention can provide a method for a user to execute a flick gesture to trigger the movement of a GUI object to an intended target location, comprising: facilitating a user touch of the GUI object; registering a plurality of touch events associated with a user selection of the GUI object; registering a release event for the user releasing the touch; registering a user flick gesture comprising touch events, including position, occurring over multiple time intervals, followed by the user releasing their touch; calculating a velocity for the flick gesture; and extrapolating the velocity to calculate an intent point location, to approximate the intention of the user.
One embodiment of another aspect of the invention can provide a method of processing user interaction data, comprising: for a user interaction with a GUI object: detecting at least one contact event for a user interaction with the GUI object; and detecting a release event for a cessation of the user interaction with the GUI object; obtaining timing data and GUI location data for each of: the at least one contact event; and the release event; using the obtained timing and GUI location data to determine an interaction velocity for the GUI object; and using the determined interaction velocity to determine a target GUI location.
Steps of the methods according to the above described aspects and embodiments may be undertaken in any order. The above aspects and embodiments may be combined to provide further aspects and embodiments of the invention.
Further aspects of the invention comprise computer programs or computer program applications which, when loaded into or run on a computer or processor, cause the computer or processor to carry out methods according to the aspects and embodiments described above.
Processor and/or controllers may comprise one or more computational processors, and/or control elements having one or more electronic processors. Uses of the term “processor” or “controller” herein should therefore be considered to refer either to a single processor, controller or control element, or to pluralities of the same; which pluralities may operate in concert to provide the functions described. Furthermore, individual and/or separate functions of the processor(s) or controller(s) may be hosted by or undertaken in different control units, processors or controllers. To configure a processor or controller, a suitable set of instructions may be provided which, when executed, cause said control unit or computational device to implement the techniques specified herein. The set of instructions may suitably be embedded in said one or more electronic processors. Alternatively, the set of instructions may be provided as software to be executed on said computational device.
Embodiments of the invention and the functionality thereof may be incorporated into or implemented in software, stored in a memory of a device having a user interface; for example, a desktop or laptop computer, a tablet, or a portable user device. The software may be embodied in or incorporated in a software application (app). The display of the device may be used as the user interface; for example, a portable user device having a display with a touchscreen interface.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will now be described by way of example with reference to the accompanying drawings, in which:
Figure 1 a is a flow chart illustrating steps of a method of processing user interaction data for movement of a GUI object, according to an embodiment of the invention;
Figure 1 b is a schematic diagram illustrating an interaction used to calculate a vector and an intent point, according to an embodiment of the invention;
Figure 1 c is a flow chart illustrating steps of a method of processing user interaction data for movement of a GUI object, according to an embodiment of the invention;
Figure 1 d is a schematic diagram illustrating a plurality of contact events and a release event for a GUI object, according to an embodiment of the invention;
Figures 2 to 4 are schematic diagrams illustrating methods for determining an area for valid target locations, according to embodiments of the invention;
Figure 5 is a schematic diagram illustrating selecting an intended target location and generation of a curved path to move the GUI object, according to an embodiment of the invention; and Figure 6 is a diagram illustrating components of a system according to an embodiment of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS
Embodiments of the invention provide methods and apparatus for managing user interaction (gesture-based) movement of GUI (graphical user interface) objects, in particular a novel “flick” gesture software mechanic for movement of a GUI object to an intended target location. In embodiments, the system can calculate a user's intent point or intent GUI location from the interaction, subsequently moving the GUI object along a generated path to the target location, with reference to the intent point.
Embodiments comprise tracking the touch events of contact (or press) and release on a GUI object, optionally moving it along a path, calculating a vector representing the interaction or gesture, and extrapolating the vector to approximate an intent point which represents an estimate of the user's intended target location. There may be multiple valid target locations (or none). An additional step of calculating a valid area for the target can be used to achieve the desired targeting.
Embodiments of the invention provide advantages of reduced computational resource requirement, and of increased accuracy and efficiency of the user interface, as noted above. For example, the “flick” gesture will typically be a much faster interaction than dragging an object all the way to the intended target location; this will therefore require less computation, and allow the user to interact with the GUI sooner to perform the next task than would otherwise have been possible. Previously considered methods would not have been able to avoid dragging the object the entire journey to the target location, as there was no consideration of how to determine the target location, other than at the end of the dragging gesture. Furthermore, in embodiments of the invention, the use of interaction or object velocity to approximate the intent point will in most cases be sufficient to obtain a valid target location, thus saving further processing resources in more precise calculation or modelling. Other advantages are:
Reachability - since the user interaction does not require, for example, dragging the GUI object all the way to the target GUI location. As user portable device screens get bigger it has become harder to reach areas of the screen with one-handed operation. In addition, GUI object movements can be achieved with single hand or digit gestures, even on large touchscreen devices.
Accuracy/reliability - previously considered methods for moving GUI objects have, for example, triggered the movement only, without determining a final location, or for instance always triggered a movement on instruction, without considering whether a final location is achieved. These are unsuitable for GUI interfaces in which the object must achieve a final location (rather than, for example, being removed or exiting the GUI, for example allowing a further attempt at movement). Others have prompted a standard GUI movement or displacement or translation or the like once the movement command is issued, again potentially not providing accuracy of movement or final location. On the other hand, other methods have only permitted movement when certain restrictions or prior conditions are met - these can be unreliable, and can also produce inefficiency of interaction and resource usage.
The determination and use of the interaction velocity or velocity of the GUI object at release allows estimation of an (intended) target, and moreover a more accurate determination of the user intent. This is in contrast with previously considered methods, in which interaction velocity may not be considered, or used solely to trigger a standard movement, or a standard path of movement without consideration of estimating or generating a target.
The generation of a path for the travel of the GUI object once released (and a target and/or intent point determined) allows for instant intuitive feedback to the user, allowing quicker and more efficient correction, if necessary, of the estimation of the user's intended target.
Figures 1 a and 1 c are flow charts illustrating steps of methods (100, 150) of processing user interaction data for movement of a GUI object, according to embodiments of the invention. Within the GUI of a software program or application, the user wants to move a GUI object to a location without having to drag the GUI object all of the way. Initially a contact event is detected (102) for the user interaction with the GUI object. The contact event will correspond to a time and a GUI location for the user interaction with the GUI object of the interface. The initiation of the interaction by the user will be realised by the co-location of the user's interaction (or interactive device) with the interface itself, and a represented location on the software-generated GUI.
For instance, in a touchscreen system a touch event is generated or triggered when the user interacts with the screen, meaning when their finger makes contact with the screen, when their finger moves across the screen, or when their finger is released from the screen. The touch event will be registered by the processing system of the device.
Touch events will include information about the position of each touch (point of contact) and the time it happened. Alternatively, the touchscreen system may simply report any touch locations at each refresh of the system or screen - typically every 30th or 60th of a second (the frequency of refresh is dependent on the specific touchscreen system used). These regular frequent screen status updates can also be considered touch events for our purposes as they contain all of the touch information needed. In other words, for a given touch or contact event, the timing data obtained for it may not be timing data obtained pro-actively or specifically for that event, but rather it may be that events are simply registered or noted at each regular refresh or update time/point for the GUI, for example a screen refresh or a touch refresh. The timing data for this given event will thus be the data that was available in any case, namely that this event took place at the (nth) refresh point. In other embodiments, a different kind of user interface may be used to cause the initiation or contact event. In embodiments without the use of a touchscreen, the contact event would thus be a click or similar occurrence, rather than a touch event.
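As a non-limiting sketch (in Python; the record and field names are assumptions made for this example), contact events noted at each refresh might be recorded as simple position-and-time records:

import time
from dataclasses import dataclass

@dataclass
class ContactEvent:
    x: float  # GUI x co-ordinate of the contact
    y: float  # GUI y co-ordinate of the contact
    t: float  # time of the event, in seconds

def register_refresh(touch_x, touch_y, history):
    # At each screen or touch refresh, the current contact location is noted;
    # the refresh time itself provides the timing data for the event.
    history.append(ContactEvent(touch_x, touch_y, time.monotonic()))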
A GUI object may be any visual representation of an object displayed on screen.
Examples may include: a word, a box, a circle, a button, an image, a video, an icon. The GUI object(s) are generated in the GUI by the software running on the processing system of the device, and therefore the timing and location of the contact/release events can be compared with the generated location of the GUI objects by the processing system.
It may also be noted here that embodiments of the invention are directed towards two- dimensional GUI objects, in two-dimensional GUIs, but that other options are envisioned in alternative embodiments. For example, a two-dimensional interface, such as a touchscreen or trackpad, may be used to control a GUI object which is actually itself represented as a three-dimensional object, or indeed is represented (as a 3D object) in a three-dimensional environment. The interface itself may also be three-dimensional, so that a further dimension is used to capture location data for the contact and release events of the GUI object. For example, in an interactive three-dimensional interface, a user may move an object along a z-axis as well as x- and y-axes. These may then be used to calculate interaction velocity in three-dimensions, in similar fashion to the other embodiments described herein.
Following detection or registering of the contact event (102), the user interacts with (e.g. touches) the GUI object to move or displace (translate) the object across the GUI, for example by moving their finger on the touchscreen. The GUI object itself may or may not move under their direction. During this movement, further contact or interaction events may be registered during the interaction moving the object, thus tracking its position; the contact event used in the calculation may be the first contact event, or any of the other contact events (or, as described in later embodiments, a plurality of them). After the displacement or movement caused by the user interaction, the user releases or lets go of the object, for example by removing the finger from the touchscreen, and a release event is detected on cessation of the user interaction with the GUI object (104).
The user has executed a movement or gesture (contact, displacement, release), in this case a “flick” gesture intended to move or throw the GUI object towards an intended target elsewhere on the GUI, usually at a location distant from the release location. The flick gesture thus comprises contact events (in the case of a touchscreen, touch events) which include position or location data, occurring over multiple time intervals or updates of the screen, followed by the user releasing their touch. It may be noted that this and similar movements or gestures by the user are innately less accurate than dragging an object to a final GUI location; indeed, this is a reason that an approximation or estimation of the intent location is used (with a subsequent choice of associated final location), because by the nature of the gesture the estimated intent point may not necessarily be relied upon as the precise location intended by the user.
The timing and GUI location data for the events can then be obtained (106, 108), from the processing system running the software and managing the interface. The timing data for the contact event (106) and the release event (108) may be stored on a timing module of the processing system of the device. The GUI location data for the events (i.e. the positions on the GUI at which the GUI object was disposed during the events) can be obtained from the system, for example from a GUI location module. The interface locations (e.g. touchscreen positions) corresponding to the GUI locations of the GUI object may also be recorded by the system.
When the object is released from the user interaction, the velocity (speed and direction) for the interaction or gesture is calculated (110) using the (at least) two events (contact, release). This velocity can be designated as the interaction velocity, or the velocity of the gesture or interaction of the user with the object; alternatively, the GUI object velocity (from the interaction). This interaction velocity is determined (110) from the timing and location data obtained for the (at least) two events. For example, for contact and release events at first and second time points (contact time point and release time point), the time difference can be calculated. Then for the given separate GUI locations of the two events, a GUI distance can be calculated, so that a GUI speed can be calculated. An interaction direction (on the GUI) can also be determined from the two GUI locations. Thus the components of the interaction velocity vector are determined.
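A minimal sketch of this calculation, assuming contact and release events carrying a position (x, y) and a time t as in the record above, might be:

import math

def interaction_velocity(contact, release):
    # Interaction time difference between the contact and release events
    dt = release.t - contact.t
    if dt <= 0.0:
        return 0.0, 0.0
    # Velocity components (GUI units per second) from the GUI distance travelled
    vx = (release.x - contact.x) / dt
    vy = (release.y - contact.y) / dt
    return vx, vy

def speed_and_direction(vx, vy):
    # Decompose the velocity into a speed component and a direction (unit vector)
    speed = math.hypot(vx, vy)
    if speed == 0.0:
        return 0.0, (0.0, 0.0)
    return speed, (vx / speed, vy / speed)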
Finally, this determined interaction velocity is used to select (112) a target GUI location for the movement of the GUI object following its release by the user. The target GUI location could be any of the physically/computationally possible positions on the GUI, or could be one of a given set of GUI locations, depending on the GUI paradigm. In embodiments, the GUI location may be selected from a set of candidate GUI locations.
In embodiments (150) of the invention, for example as embodied in Figure 1 c, the GUI location intended by the user is estimated from the interaction. For example, in an embodiment, the determined interaction velocity is used or extrapolated to approximate the intention of the user, as if “throwing” a physical object, to estimate (166) an interaction intent GUI location, or “intent point”. The intent point represents an approximation of the user's intended target location. For example, with a given velocity, a direction component can be used to determine an approximate direction in which the user intends the object to travel, and a speed component can be used to determine how far the object is intended to travel across the GUI. In a simple implementation the speed may simply be used to determine whether the object is travelling “too fast” to “land” at a nearby candidate GUI location, and instead infer a distant candidate location. In others, a scalar factor may be used to determine or extrapolate the distance of travel (as shown at 6 in Figure 1 b). In embodiments, the scalar may be linked to a represented physical paradigm, for example a represented force F in Figure 1 b, such as a friction component, to determine the distance to the intent point. In some embodiments, the intent point or interaction intent location is simply used as the final GUI target location at which the object finishes the movement. In the embodiment in Figure 1 c and others, a list of candidate target GUI locations is obtained (164), and from the list a candidate target GUI location which is associated with the interaction intent GUI location (168) is selected. For example this might be a candidate target nearest the intent point.
In other embodiments, using the GUI object's starting/release location, its calculated interaction velocity and the intent point, an estimation of the area the user is trying to flick the GUI object to is calculated. This area may contain any number of valid (candidate) target locations (positions that the user can move the GUI object to). If more than one valid target location is present then the best, closest or most relevant valid target location is selected, this being the estimate of the target location the user intended to hit, having calculated the intent point/location. The GUI object is then moved to this target location along a path which is dependent on the desired visual effect for relevant or optimum visual feedback for the user. For example a curved path may be plotted to indicate a motion which the user might consider visually appropriate, or a “natural motion”; the visual cues displayed during the animation of the movement of the GUI object can be beneficial in providing feedback, so that the user can correct or moderate user interaction in order to provide efficient input, and thus for the system to provide an efficient interface response to minimise resource usage. In embodiments, the movement of the GUI object to the target location may be paused before completion, in order to allow additional time for the user to correct or moderate the interaction if necessary.
Figure 1 b is a schematic diagram illustrating an interaction used to calculate a vector and an intent point, according to an embodiment of the invention. In the specific arrangement shown in Figure 1 b, a touchscreen is used as the user interface for the GUI. The user “touches” a GUI object 1 on the touchscreen GUI 20 by contacting their finger on the screen at or near the position of the GUI object on the software-generated GUI. The touchscreen system registers a touch event (the contact and its position) 2, and the touch event can therefore be associated with selection of or picking up of the GUI object. The GUI object may then be moved (which may not be displayed or represented on the GUI) by the user maintaining contact with the screen and moving their finger, with the touchscreen system and GUI software combining to track the change of position of the GUI object.
The user then executes a flick gesture to throw the GUI object towards their intended target location 8. The result is that the touchscreen system and GUI software register over time a sequence of points of contact (touch events) followed by a release event 3, each of which happens at a specific time. These touch events describe the path 4 the user dragged on the screen. From this path a flick vector 5 can be calculated with both direction and speed (as shown in the diagram, the size of the vector being the magnitude, in this case speed); this is an estimation that can be based on two or more of the points of contact (touch events), in embodiments with more recent events having more weight in the calculation of the interaction velocity. The flick vector can be calculated simply based on the vector between the two points of contact, or can factor in additional points of contact with the aim of more accurately estimating the user's intention, for example to check if the user's touch is accelerating or decelerating. Once calculated, the flick vector can then be extrapolated 6 to determine the “intent point” 7, a position that can be calculated as an estimate of the user's intended target location, in lieu of knowing the user's actual intended target.
To extrapolate the vector it can simply be multiplied by a fixed value, or additional factors can be taken into account, such as the visual representation that the GUI is portraying. If, for example, the GUI objects have some physical metaphor (e.g. objects on a table or other surface with estimable physical properties) then it may make sense to account for additional forces that would act on the GUI object such as friction (with gas, liquids or surface contact) or gravity (for example represented figuratively in Figure 1 b with the arrow F).
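As one possible, purely illustrative way of accounting for such a force, a constant-deceleration friction model can be used to extrapolate the flick vector to a stopping point; the friction value below is an assumed constant chosen for this sketch:

import math

def intent_point_with_friction(release_x, release_y, vx, vy, friction=800.0):
    # Constant-deceleration model: stopping distance = speed^2 / (2 * friction)
    speed = math.hypot(vx, vy)
    if speed == 0.0:
        return release_x, release_y
    distance = (speed * speed) / (2.0 * friction)
    ux, uy = vx / speed, vy / speed
    return release_x + ux * distance, release_y + uy * distance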
Figure 1 d is a schematic diagram illustrating a plurality of contact events and a release event for a GUI object, according to an embodiment of the invention. This diagram also includes schematic indications of timing and location for the respective events. The “current” time is t3, at which the object 1 is being released, at GUI location C. The location may comprise for example the two-dimensional co-ordinates for the interface. The first and second contact events were at time t1, location A, and time t2, location B, respectively.
In embodiments more than one contact event may be noted, recorded or triggered, as shown here. The GUI object 1 is initially interacted with at a first contact event 2, and during the movement 4 of the GUI object during the user interaction, another contact event 2' is registered. Finally, the release event 3 is registered, as above.
The plurality of contact events 2 and 2' (and further contact events, in embodiments) can be used to determine for example a rate of change of interaction velocity, or acceleration (or deceleration) of the interaction or object. This can inform the calculation of the intent point; for instance, the acceleration can be used to estimate an increased (or decreased) final speed at the release event (greater than the speed between earlier contact events).
In embodiments, calculations based on two or more contact events (in addition to the release event) can be weighted to later contact events. For example, if the one or more most recent contact events change a direction from an initial direction, the changed direction can be used instead, or the direction calculated as a comparison, with a weighting on the later (changed direction) events.
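A sketch of one such weighting scheme follows; the linearly increasing weights are an assumption made for illustration, and the event records are assumed to carry a position (x, y) and a time t:

def weighted_velocity(events):
    # events: position/time records in time order, ending with the release event.
    # Later segments receive a higher weight so that recent motion dominates,
    # reflecting any acceleration or change of direction late in the gesture.
    total_weight = 0.0
    vx = vy = 0.0
    for i in range(1, len(events)):
        a, b = events[i - 1], events[i]
        dt = b.t - a.t
        if dt <= 0.0:
            continue
        weight = float(i)  # simple linearly increasing weight (an assumption)
        vx += weight * (b.x - a.x) / dt
        vy += weight * (b.y - a.y) / dt
        total_weight += weight
    if total_weight == 0.0:
        return 0.0, 0.0
    return vx / total_weight, vy / total_weight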
In addition, should the contact events and the release event describe a shape other than a straight line, the location of the plurality of events can be used to better evaluate the intended direction of the interaction, or whether a shaped path for the gesture was intended. For example, if the event GUI locations describe a curve, the projected or extrapolated path for the GUI object can maintain the same curve. This can be achieved by determining the series of locations for the events and generating a line or curve (e.g. a parametric curve) joining or fitting the locations, and using as a direction component (either alone or in combination with the direction component of the determined velocity vector) a direction following or projected from the generated line or curve. It is possible in some embodiments to use standard parameters, factors, scaling parameters or the like in place of a measurement of the velocity, distance, speed and the like of the interaction; instead a standard parameter can give an approximation of the target, given a user interaction direction. For example, in an embodiment, GUI location data is determined for the contact event(s) and the release event, but timing data is not acquired. Therefore a direction of the displacement for the user interaction is available from the (at least two) locations, which gives an indication of the directional intent of the user. At a minimum, a target area can then be generated in the direction of intent, in order to obtain a candidate location falling within the area (for example a circular sector as described with reference to Figure 3).
In cases where this minimum may not be sufficient (for instance if too many candidate targets are found in such an area), a standard parameter can be used in place of determining the interaction velocity. For instance, a standard speed can be assumed for all user interactions. Given the direction and this standard speed, a user intent location or intent point can be generated; the velocity is now made up of the determined direction and the standard speed parameter.
In another embodiment, given a distance measured between the event locations, a standard multiple can be applied to give a distance for the intent location; for instance, it can be assumed that for an interaction distance of X, then the object will travel a distance of Y, or a multiple Z of that interaction distance. This scaled distance can then be used with the direction to find an intent location.
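For example, a sketch of such an approximation is given below; the distance multiple of 3.0 is an arbitrary assumed value, and the events are assumed to carry (x, y) positions:

import math

def intent_from_direction(contact, release, distance_multiple=3.0):
    # Distance actually dragged by the user during the interaction
    dx, dy = release.x - contact.x, release.y - contact.y
    dragged = math.hypot(dx, dy)
    if dragged == 0.0:
        return release.x, release.y
    # Assume the object travels a fixed multiple of the dragged distance
    travel = dragged * distance_multiple
    ux, uy = dx / dragged, dy / dragged
    return release.x + ux * travel, release.y + uy * travel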
In alternative embodiments, as noted above, since GUI refresh points are repeated at given times, these can also be used to obtain timing data for the contact and release events. Thus, rather than timing data being sought out specifically for a given event, if events are triggered or registered at the refresh points in any case, the timing data for the events will be known and/or obtainable in any case. Figures 2 to 4 are schematic diagrams illustrating methods for determining an area for valid target locations, according to embodiments of the invention. In embodiments, it may be that any GUI location is a valid target location, but in others it may be that only some feasible GUI locations are designated as valid target locations. This can be done for example by using a restricted area or distance from the intent point, outside which locations can be designated not valid. It may be noted that in some embodiments, there may be further restrictions on valid target locations in the paradigm of the given GUI; for example some GUI locations may be designated invalid for certain GUI objects, whether or not that location is inside or outside a target area.
Using the position of the GUI object 1 (starting/release point) in combination with the intent point 7 and the vector 5 between them, there are various ways to calculate a valid area for candidate target locations 10, which can for example include individually or in combination (including intersections and/or unions of):
i. as shown in Figure 2, the valid area 10 within a radius 11 of the intent point 7;
ii. as shown in Figure 3, the valid area 10 within a circular sector, where: the starting point 3 is the apex of the circular sector; the central angle is defined by the desired or pre-determined leeway 12 either side of the vector from the starting point to the intent point 7; and the sector can be limited by radius (minimum and/or maximum) or extended indefinitely 13 in order to capture the desired valid target locations;
iii. as shown in Figure 4, the valid area 10 within a distance range based on the flick speed, i.e. between 14 an inner radius and outer radius.
Additionally, an area within a radius of the starting position may be added. This may be beneficial in order to capture the intended target location when the user drags close to or even past their intended target. An alternative valid area could be a channel represented by any point that falls within a fixed distance of the line described by the flick vector. Alternatively, areas can be calculated to exclude results (invalid areas). The required combination of areas for inclusion or exclusion will depend on the desired result. In embodiments, the target area(s) may be variable depending on the type of candidate location available. For example, for a given GUI with certain types of target locations in a first region, and others in a second region, the target area cast for candidates in the first region may be a different size or shape than that for the second region. For example, if a GUI object is a potential match with several targets in the first region, the candidate area may include all those targets, whereas in the second region a candidate/target area may be restricted to certain targets within the second region.
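The following sketch illustrates the kinds of geometric checks described above for deciding whether a candidate target location falls within a valid area; the helper functions and their arguments are illustrative assumptions rather than a definitive implementation:

import math

def within_radius(px, py, cx, cy, radius):
    # Candidate (px, py) lies within a circle of the given radius around (cx, cy)
    return math.hypot(px - cx, py - cy) <= radius

def within_sector(px, py, apex_x, apex_y, dir_x, dir_y, half_angle_deg):
    # Candidate lies within the circular sector whose apex is the release point
    # and whose centre line follows the flick direction (dir_x, dir_y)
    to_px, to_py = px - apex_x, py - apex_y
    dist = math.hypot(to_px, to_py)
    dir_len = math.hypot(dir_x, dir_y)
    if dist == 0.0 or dir_len == 0.0:
        return True
    cos_angle = (to_px * dir_x + to_py * dir_y) / (dist * dir_len)
    cos_angle = max(-1.0, min(1.0, cos_angle))
    return math.degrees(math.acos(cos_angle)) <= half_angle_deg

def within_distance_range(px, py, cx, cy, inner_radius, outer_radius):
    # Candidate lies between an inner and outer radius around (cx, cy)
    d = math.hypot(px - cx, py - cy)
    return inner_radius <= d <= outer_radius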
In some scenarios, it may be possible to obtain a result that all targets are assumed valid, so it may not be necessary to cull possible targets through the area calculation process. The system can then skip this step and move straight to evaluating which of the valid targets most closely matched the user's intent (for example, finding the target closest to the intent point).
Once the desired area for valid candidate target locations is defined, it may include any number of valid target locations for the GUI object: zero, one or more. If multiple valid target locations are found then they are evaluated to determine which most closely matches the user's intended target location. Ways to calculate the intended target location may include, individually or in combination (a brief sketch of both checks follows the list below):
· The closest target to the intent point.
· The target closest to the line described by the flick vector.
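As a non-limiting sketch (assuming candidates are simple (x, y) pairs), these two evaluations might be expressed as:

import math

def closest_to_intent(candidates, intent_x, intent_y):
    # Select the valid target location nearest the intent point
    return min(candidates,
               key=lambda c: math.hypot(c[0] - intent_x, c[1] - intent_y))

def distance_to_flick_line(px, py, start_x, start_y, vx, vy):
    # Perpendicular distance from a candidate to the line described by the flick vector
    length = math.hypot(vx, vy)
    if length == 0.0:
        return math.hypot(px - start_x, py - start_y)
    return abs((px - start_x) * vy - (py - start_y) * vx) / length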
Once the intended target location has been calculated, the GUI object can be moved to it, and therefore a path for that movement can be determined, and displayed to the user for visual feedback (rather than the object simply (re)appearing in target location). The path can be chosen as a direct path, or one that appears appropriate to a user's visual frame of reference (natural motion), to optimize visual feedback. An example of path
generation is shown in Figure 5, a schematic diagram illustrating selecting an intended target location and generation of a curved path to move the GUI object, according to an embodiment of the invention.
It is likely that the user's intended target location 8 will not lie exactly in the direction of the flick, and will be to one side. The GUI object can therefore be moved along a curved path 15 to the target location.
In a specific arrangement, we can define a curve starting at the position of the GUI object 1 , travelling initially in the direction of the flick vector 5, curving towards the intended target location 8 and ending at that position. The curve can be calculated using the mathematics for a quadratic Bézier curve where either the intent point 7 or another point along the flick vector is used as the control point for the curve. The choice of control point can depend on the desired motion (see below) as the control point directly controls the shape of the curve. Other methods for calculating a curve can be used in place of the quadratic Bézier curve.
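A minimal sketch of such a quadratic Bézier path, with the intent point used as the control point, is given below; the number of steps is an arbitrary assumption, and points are simple (x, y) pairs:

def quadratic_bezier(p0, p1, p2, t):
    # Standard quadratic Bezier: B(t) = (1-t)^2 * P0 + 2(1-t)t * P1 + t^2 * P2
    u = 1.0 - t
    x = u * u * p0[0] + 2.0 * u * t * p1[0] + t * t * p2[0]
    y = u * u * p0[1] + 2.0 * u * t * p1[1] + t * t * p2[1]
    return x, y

def curved_path(release_point, intent_point, target_point, steps=30):
    # The intent point (or another point along the flick vector) acts as the control point
    return [quadratic_bezier(release_point, intent_point, target_point, i / steps)
            for i in range(steps + 1)]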
The shape of the curve can vary depending on a required visual effect of the motion. Depending on the use case the GUI object may be required to:
· Orbit the target location before coming to rest.
· Move past the target before swinging back to it.
· Minimise curvature so the GUI object travels to the target point efficiently.
In a specific arrangement, a curve may be defined at the point of release, with positions along the curve being generated at each time interval as the GUI object moves along it following release. Alternatively, points along the curve can be generated in advance, and applied according to the parameters determined (for example limits can be defined for selecting an appropriate pre-defined curve).
In order to make the curved path and motion visually optimal for visual feedback to the user, it may be desirable to add a degree of randomness. For example the object may sometimes move slower or faster than other times, or the curve length may vary within some desired limits.
The curve that the GUI object travels along can be made visible to the user (being rendered on the GUI) or it can be invisible.
In an alternative arrangement, a curve may not be generated in advance of the movement, and instead a physics simulation can be used to create a similar effect. A physics simulation can include forces acting on the GUI object to move it along a curve to the intended target location, and can be applied in real time (iteratively at intervals of time).
The speed at which the UI object travels will depend on the required effect. Typically, the speed of motion at the start of the curve should match the speed of the flick gesture (at the release point).
Note that if there are no valid target locations then the GUI object can stop in position or be returned to its original position. The motion of the movement will depend on the required effect and can be linear or use a curve as described above.
In a specific example, in this case a video game having a GUI, and containing playing cards as GUI objects, steps of a method according to a particular embodiment of the invention can be represented as pseudo-code, as follows:
“Card” object can be picked up and moved around freely by the user. When the card is released:
- Store the current velocity of the card
- Determine a list of all valid target destinations within the scene which the card is permitted to land on
- Determine a single final target destination from the list:
  - Using the stored velocity, project an "intent point" where the card would land naturally
  - Check whether a valid target destination is within a fixed radius of the intent point
  - If multiple targets are within this radius, select the one closest to the intent point
  - If no targets are found within the intent point radius, project an infinite circular sector with a fixed angle from the card's release point in the direction of the stored velocity
  - Check whether a valid target destination lies within the sector
  - If multiple targets lie within this sector, select the one closest to the centre line of the sector
  - If no targets are found within the sector, check if any of the targets are within a radius around the release point of the card itself
  - If no target has been found, set the target to be the card's origin point
- Generate a curved Bézier path from the card's release point to the target destination:
  - Path start point = card's release point
  - Path end point = target destination
  - Path control point to determine curvature of path = card's release point + card's stored velocity
- Move card along generated path
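A possible runnable sketch of the above pseudo-code is given below (in Python); the radii, sector angle and velocity scale are assumed tunable values chosen for this example rather than figures taken from the description, and the returned intent point can then be supplied as the control point when generating the Bézier path of the earlier sketch:

import math

def select_target(release, velocity, valid_targets,
                  intent_radius=120.0, sector_half_angle=30.0, origin_radius=80.0,
                  velocity_scale=0.4, origin=None):
    # release, velocity and targets are (x, y) tuples; returns (target, intent point).
    rx, ry = release
    vx, vy = velocity
    speed = math.hypot(vx, vy)
    # Project the intent point where the card would land "naturally"
    ix, iy = rx + vx * velocity_scale, ry + vy * velocity_scale

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # 1. Targets within a fixed radius of the intent point
    near_intent = [t for t in valid_targets if dist(t, (ix, iy)) <= intent_radius]
    if near_intent:
        return min(near_intent, key=lambda t: dist(t, (ix, iy))), (ix, iy)

    # 2. Targets within an infinite circular sector from the release point
    if speed > 0.0:
        in_sector = []
        for t in valid_targets:
            tx, ty = t[0] - rx, t[1] - ry
            d = math.hypot(tx, ty)
            if d == 0.0:
                continue
            cos_a = max(-1.0, min(1.0, (tx * vx + ty * vy) / (d * speed)))
            if math.degrees(math.acos(cos_a)) <= sector_half_angle:
                in_sector.append(t)
        if in_sector:
            # Select the target closest to the centre line of the sector
            return min(in_sector,
                       key=lambda t: abs((t[0] - rx) * vy - (t[1] - ry) * vx) / speed), (ix, iy)

    # 3. Targets within a radius of the release point itself
    near_release = [t for t in valid_targets if dist(t, release) <= origin_radius]
    if near_release:
        return min(near_release, key=lambda t: dist(t, (ix, iy))), (ix, iy)

    # 4. Fall back to the card's origin point
    return (origin if origin is not None else release), (ix, iy)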
In an example application of use of an embodiment of the invention, consider a software application where the user can drag GUI objects from their current location to multiple other target locations on screen, only some of which are valid, depending on the current state of the application and GUI object moved. In this example the user may need to stack GUI objects in a specific order (e.g. alphabetical, numerical). Therefore, a valid target location would be a target location that would result in stacking in the correct order, and an invalid target location could be that same location if the stacking order was incorrect. Typically a user is able to drag the GUI object to the target location but this invention provides a more efficient alternative: to be able to flick the GUI object towards their intended target location with the appropriate speed and have the underlying system figure out their intention (and in addition to make it move smoothly to their desired target location). When the flick gesture is released the system calculates the vector (direction and speed) of the flick and extrapolates (scales) this in order to estimate the intended target location of the user. Further we can calculate an area we consider valid based on the user's flick gesture. This can consist of a circular sector centred around the vector. These techniques allow the system to eliminate locations not in this area so the user's intentions can be better estimated. Should multiple valid target locations fall within the valid area then the distance from the intent point or angle deviation from the centre of the circle sector can be used to decide which best matches the user's intent.
In another example application of use in an embodiment of the invention, the user is able to pick up and drag a GUI representation of a computer file, e.g. an image file. On screen there are multiple commands represented by boxes with words including: delete, print, email, duplicate, compress, and open. The commands are distributed across the screen so as to be well spaced out. The user is able to flick the file towards the action they wish to execute. The boxes all represent possible targets but not all will be valid targets depending on the current context (e.g. the file may be protected and cannot be deleted, thus the delete box's location is not valid).
In this example when the GUI object, the file, is flicked and the touch released, the velocity of the file across the GUI is calculated. The system then creates a list of all valid target locations. The system extrapolates the flick vector to estimate the intent point, i.e. where a real object would land if flicked in this way, in the represented physical environment. The system checks if any of the valid target locations are within a radius of this intent point; if there are one or more valid locations it chooses the closest one. If that fails then the system checks for valid target locations within a circular sector centred around the flick vector. If multiples are found then it selects the one closest to the intent point. If still no valid target locations are found then the system checks within a radius around the starting point, again choosing the closest to the intent point if multiple options are found. If still no valid target location is found then the target is set to be the starting point so that the object returns to its starting position. A Bézier curve for the GUI object to travel along is then calculated from the starting point to the selected target location using the intent point as the control point for the Bézier curve. The GUI object is then moved along this path over time, starting at a speed approximating that which it was flicked with, thus creating a natural motion.
In other similar embodiments, the image of the GUI object itself can be an image of an abstract icon that represents a/the command (as opposed to an image that represents a physical object such as a card or file).
In embodiments, an additional factor considered is pressure as an additional component of the touch events. Some user interface devices, such as touchscreens and pointing devices including some styluses, provide information to the interface control system on the pressure applied by the user during the interaction. Pressure information associated with the user interaction can then be used to inform or enhance the interaction, or the estimation of the (intended) target. For example, a parameter associated with the pressure information can be used to produce a three-dimensional vector for the interaction.
In an example implementation the system can use pressure to represent the z-axis, whereby the pressure towards the end of a "flick" gesture would reduce and can be used to simulate the object lifting up off the surface and towards the user. This can be represented in the GUI itself, for example in a true three-dimensional environment, or by scaling the size of the GUI object to simulate 3D perspective. With pressure values included in the touch event they can be combined with the flick vector into a three-dimensional (interaction) vector. This three-dimensional vector can then be used to estimate the intended target location, for example shortening a distance (in two dimensions) which would otherwise have been predicted from a two-dimensional vector. Further, a 3D curve can be determined for the path; alternatively, the pressure values can simply be used in an extra step to simulate an approximation of depth, for example by scaling the GUI object up during the first half of the curve and scaling it back down during the second half until it lands at its destination.
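As an illustrative sketch only (the pressure scaling constants are assumptions made for this example), pressure values might be folded into a three-dimensional interaction vector, with the simulated lift component used to shorten the predicted two-dimensional travel:

import math

def flick_vector_3d(vx, vy, pressure_start, pressure_end, pressure_scale=200.0):
    # Falling pressure towards the end of the gesture is treated as motion
    # along a simulated z-axis (lift away from the surface)
    vz = (pressure_start - pressure_end) * pressure_scale
    return vx, vy, vz

def scaled_intent_distance(vx, vy, vz, base_scale=0.4):
    # A larger lift component shortens the predicted two-dimensional travel,
    # compared with a prediction from the two-dimensional vector alone
    speed_2d = math.hypot(vx, vy)
    speed_3d = math.sqrt(vx * vx + vy * vy + vz * vz)
    if speed_3d == 0.0:
        return 0.0
    return speed_2d * base_scale * (speed_2d / speed_3d)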
In a specific example of a video game having a GUI, and containing playing cards as GUI objects, a player can flick a card towards a target using methods described herein, and an intent point can be generated to determine or select a GUI location target, such as a deck of cards or a specific game-play location, for instance a given “table” location. In this embodiment, an intended target can be generated from the 3D vector determined including the additional pressure information. At the release point the card can be made to appear to lift up off the table, gently follow the curved path to the target point and land in place (scaling down).
Figure 6 is a diagram illustrating components of a system according to an embodiment of the invention. Certain of the above embodiments of the invention may be conveniently realized as a system 600 (such as a desktop or portable user device, such as a mobile phone) suitably programmed with instructions for carrying out the steps of the methods according to the invention. The computing device or system may include software and/or hardware for providing functionality and features described herein.
The computing device or system may include one or more of logic arrays, memories, analogue circuits, digital circuits, software, firmware and processors. The hardware and firmware components of the device/system may include various specialized units, circuits, software and interfaces for providing the functionality and features described herein. For example, a processing unit/processor 608 is able to implement such steps as described herein as aspects and embodiments of the invention.
The processor 608 may be or include one or more microprocessors, application specific integrated circuits (ASICs), programmable logic devices (PLDs) and programmable logic arrays (PLAs). Interface device 604 may be a display device with an integrated user interface such as a touchscreen. User input 602 is additionally available for inputting data separately from the user interface.
Software applications loaded on memory 614 are executed to process the data in random access memory 615. The memories 614 and/or 615 may be or include RAM, ROM, DRAM, SRAM and MRAM, and may include firmware, such as static data or fixed instructions, BIOS, system functions, configuration data, and other routines used during the operation of the computing device and/or processor.
The memory/memories also provide a storage area for data and instructions associated with applications and data handled by the processor 608. The storage provides non-volatile, bulk or long-term storage of data or instructions in the computing device or system. Multiple storage devices may be provided or available to the computing device/system. Some of these storage devices may be external, such as network storage or cloud-based storage.
A processing environment 612 including the processor 608 may also include, for example: a timing module 606 for storing, processing and/or generating timing data for software events; and a GUI location module 610 for managing locations of GUI objects during generation and processing of the GUI.
It will be appreciated by those skilled in the art that the invention has been described by way of example only, and that a variety of alternative approaches may be adopted without departing from the scope of the invention, as defined by the appended claims.

Claims

1. A method of processing user interaction data for movement of a graphical user interface (GUI) object in a two-dimensional GUI, comprising:
for a user interaction with a GUI object: detecting at least one contact event for a user interaction with the GUI object; and, following a displacement associated with the user interaction, detecting a release event for a cessation of the user interaction with the GUI object;
obtaining timing data and GUI location data for each of: the at least one contact event; and the release event;
using the obtained timing and GUI location data to determine an interaction velocity for the GUI object; and
using the determined interaction velocity to determine a target GUI location for movement of the GUI object.
2. A method according to Claim 1, wherein the step of using the determined interaction velocity to determine a target GUI location comprises:
obtaining a list of a plurality of candidate target GUI locations; and using the determined interaction velocity to select the target GUI location from the candidate target GUI locations.
3. A method according to Claim 1 or Claim 2, wherein the step of using the determined interaction velocity to determine a target GUI location comprises: using the determined interaction velocity to estimate an interaction intent GUI location.
4. A method according to Claim 3 as dependent on Claim 2, wherein the step of using the determined interaction velocity to determine a target GUI location comprises:
selecting from the list of candidate target GUI locations a candidate target GUI location associated with the interaction intent GUI location.
5. A method according to Claim 4, wherein the step of selecting from the list comprises selecting one of: a candidate target GUI location having a shortest distance from the interaction intent GUI location; a candidate target GUI location within a defined neighbourhood region containing the interaction intent GUI location; and a candidate target GUI location within a circular sector region, having a predetermined central angle centred on a line between a release event location and the interaction intent GUI location.
6. A method according to any preceding claim, wherein the timing data comprises a time for each respective event, and wherein the GUI location data comprises a GUI location for each respective event,
and wherein the step of using the obtained timing and GUI location data to determine an interaction velocity comprises: comparing a contact event time and a release event time to determine an interaction time difference; comparing a contact event GUI location and a release event GUI location to determine an interaction distance; and using the interaction time difference and the interaction distance to determine the interaction velocity.
7. A method according to any preceding claim, wherein the step of using the determined interaction velocity to determine a target GUI location comprises: obtaining a direction component of the interaction velocity for a direction of the movement of the GUI object; and applying a scaling factor to a speed component of the interaction velocity to determine a distance for the movement.
8. A method according to Claim 7, comprising generating the scaling factor by modelling a physical force acting on the GUI object in a GUI environment.
9. A method according to Claim 7 or Claim 8 as dependent on any of the Claims 3 to 5, comprising using a GUI location result from the obtained direction and distance for the movement as the estimated interaction intent GUI location.
10. A method according to any preceding claim, comprising, following selection of the target GUI location, moving the GUI object to the target GUI location.
11. A method according to Claim 10, comprising generating a path for displaying on the GUI to the user the movement of the GUI object along the path to the target GUI.
12. A method according to Claim 11 as dependent on Claim 3, comprising using the interaction intent GUI location as a control point for generating a parametric curve for the path.
13. A method according to any preceding claim, wherein the step of detecting at least one contact event for a user interaction with the GUI object comprises detecting at least two contact events for the user interaction.
14. A method according to Claim 13, wherein the steps of obtaining and using the timing data and GUI location data comprise:
obtaining timing data and GUI location data for each of: the at least two contact events; and the release event; and
using the obtained timing and GUI location data to determine a rate of change of the interaction velocity for the GUI object.
15. A method according to Claim 13 or Claim 14, using GUI location data for each of: the at least two contact events; and the release event, to generate a non-linear route between the events,
and using the generated route to project a direction for the movement of the GUI object.
16. A method of processing user interaction data for movement of a graphical user interface (GUI) object in a two-dimensional GUI, comprising: for a user interaction with a GUI object: detecting at least one contact event for a user interaction with the GUI object; and, following a displacement associated with the user interaction, detecting a release event for a cessation of the user interaction with the GUI object;
obtaining GUI location data for each of: the at least one contact event; and the release event;
using the obtained GUI location data to determine an interaction direction for the GUI object; and
using the determined interaction direction to determine a target GUI location for movement of the GUI object.
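An illustrative sketch of the direction-only determination of Claim 16, using GUI location data alone (no timing data); the Point shape is an assumption for illustration:

```typescript
// Illustrative sketch of Claim 16: interaction direction from GUI locations only.
interface Point { x: number; y: number; }

function interactionDirection(contact: Point, release: Point): Point | undefined {
  const dx = release.x - contact.x;
  const dy = release.y - contact.y;
  const len = Math.hypot(dx, dy);
  return len > 0 ? { x: dx / len, y: dy / len } : undefined;  // undefined if no displacement
}
```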
17. A method according to Claim 16, comprising using the determined interaction direction and a predetermined distance parameter to determine a target GUI location for movement of the GUI object.
18. A method according to Claim 17, wherein the predetermined distance parameter is a predetermined speed.
19. A method according to Claim 17, comprising using the obtained GUI location data to determine an interaction distance for the GUI object,
and wherein the predetermined distance parameter is a predetermined distance scaling parameter.
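By way of illustration of Claims 17 to 19, the determined direction may be combined either with a fixed predetermined travel distance or with the dragged distance multiplied by a predetermined scaling parameter; all constants and names below are illustrative assumptions:

```typescript
// Illustrative sketch of Claims 17 to 19: target location from direction plus a
// predetermined distance parameter.
interface Point { x: number; y: number; }
interface Dir { x: number; y: number; }   // unit direction vector

// Claims 17/18: fixed travel derived from a predetermined parameter
function targetWithFixedDistance(release: Point, dir: Dir, fixedDistancePx: number): Point {
  return { x: release.x + dir.x * fixedDistancePx, y: release.y + dir.y * fixedDistancePx };
}

// Claim 19: dragged distance multiplied by a predetermined scaling parameter
function targetWithScaledDistance(contact: Point, release: Point, dir: Dir,
                                  scale: number): Point {
  const dragged = Math.hypot(release.x - contact.x, release.y - contact.y);
  return { x: release.x + dir.x * dragged * scale, y: release.y + dir.y * dragged * scale };
}
```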
20. A computer program or computer program application comprising computer program code adapted, when loaded into or run on a computer or processor, to cause the computer or processor to carry out a method according to any preceding claim.
21. Apparatus for processing user interaction data for movement of a graphical user interface (GUI) object in a two-dimensional GUI, the apparatus comprising: a processor; and a memory,
the apparatus being configured, under control of the processor, to execute instructions stored in the memory to:
for a user interaction with a GUI object: detect at least one contact event for a user interaction with the GUI object; and, following a displacement associated with the user interaction, detect a release event for a cessation of the user interaction with the GUI object;
obtain timing data and GUI location data for each of: the at least one contact event; and the release event;
use the obtained timing and GUI location data to determine an interaction velocity for the GUI object; and
use the determined interaction velocity to determine a target GUI location for movement of the GUI object.
PCT/GB2020/051261 2019-05-24 2020-05-22 Methods and apparatus for processing user interaction data for movement of gui object WO2020240164A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB1907377.4A GB201907377D0 (en) 2019-05-24 2019-05-24 Method for managing gesture-based movement of a UI object
GB1907377.4 2019-05-24

Publications (1)

Publication Number Publication Date
WO2020240164A1 true WO2020240164A1 (en) 2020-12-03

Family

ID=67385481

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2020/051261 WO2020240164A1 (en) 2019-05-24 2020-05-22 Methods and apparatus for processing user interaction data for movement of gui object

Country Status (2)

Country Link
GB (1) GB201907377D0 (en)
WO (1) WO2020240164A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080222540A1 (en) * 2007-03-05 2008-09-11 Apple Inc. Animating thrown data objects in a project environment
US20090237363A1 (en) * 2008-03-20 2009-09-24 Microsoft Corporation Plural temporally overlapping drag and drop operations
WO2011079566A1 (en) * 2009-12-30 2011-07-07 中兴通讯股份有限公司 Method and device for controlling mobility event
US20150029231A1 (en) * 2013-07-25 2015-01-29 Fu Tai Hua Industry (Shenzhen) Co., Ltd. Method and system for rendering a sliding object
US20160216862A1 (en) * 2012-04-25 2016-07-28 Amazon Technologies, Inc. Using gestures to deliver content to predefined destinations
US20180300036A1 (en) * 2017-04-13 2018-10-18 Adobe Systems Incorporated Drop Zone Prediction for User Input Operations
WO2019217043A1 (en) * 2018-05-08 2019-11-14 Google Llc Drag gesture animation

Also Published As

Publication number Publication date
GB201907377D0 (en) 2019-07-10

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20734798; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in the European phase (Ref document number: 20734798; Country of ref document: EP; Kind code of ref document: A1)