CN111026306B - Apparatus, method and graphical user interface for touch input processing - Google Patents

Apparatus, method and graphical user interface for touch input processing

Info

Publication number
CN111026306B
Authority
CN
China
Prior art keywords
touch
user interface
application
gesture
event
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911262546.3A
Other languages
Chinese (zh)
Other versions
CN111026306A
Inventor
B·D·尼罗
C·K·托马斯
D·瓦格纳
M·T·特尼尔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from DKPA201770371A (DK179979B1)
Application filed by Apple Inc filed Critical Apple Inc
Publication of CN111026306A
Application granted
Publication of CN111026306B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0486Drag-and-drop
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04808Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen

Abstract

An electronic device, while displaying a user interface for a first application, receives, at the first application, information describing a first touch detected at a location on a touch-sensitive surface that corresponds to a respective user interface object. In accordance with a determination that the first touch satisfies predefined criteria associated with initiating a respective type of interface operation, the device initiates a respective type of user interface operation involving the respective user interface object. The first application receives an instruction to ignore the first touch with respect to user interface operations other than the respective type of interface operation, and ignores the first touch with respect to those operations. In some embodiments, the respective type of interface operation is a drag operation for dragging an object in or from a view of the first application.

Description

Apparatus, method and graphical user interface for touch input processing
The present application is a divisional application of an invention patent application having an application date of May 16, 2018, application No. 201880031229.8, entitled "apparatus, method, and graphical user interface for touch input processing".
Technical Field
This relates generally to electronic devices with touch-sensitive surfaces, including but not limited to electronic devices with touch-sensitive surfaces that use heuristics and other gesture recognition techniques to distinguish between various touch inputs including drag-and-drop gestures, long-press gestures, swipe gestures, and the like.
Background
The use of touch sensitive surfaces as input devices for computers and other electronic computing devices has increased significantly in recent years. Exemplary touch sensitive surfaces include trackpads and touch screen displays. Such surfaces are widely used for manipulating user interface objects on a display.
Exemplary manipulations include dragging and dropping objects or user interface elements, such as moving information from one application to another, or from one part of an application or data structure to another. Other manipulations include selecting user interface objects by touching them in a predefined manner, launching an application by tapping or otherwise interacting with a corresponding application launch icon or other user interface element, accessing application features by performing a swipe, drag, tap, pinch, long press, deep press (e.g., with an intensity above a threshold) and other touch inputs on respective user interface elements of a user interface of an application.
Disambiguating between various touch-based gestures, such as drag-and-drop gestures, spring-loaded gestures (e.g., opening an object to see an element of the object), swipe gestures, and long-press gestures, is very important so that the device or system performs the action corresponding to the user's intent. Given the sometimes subtle differences between touch-based gestures, such disambiguation can be challenging.
Although gesture recognizers with different event or gesture definitions have been used to aid in such disambiguation, disambiguating between touch-based drag-and-drop gestures and spring-loaded gestures (e.g., opening an object to see an element of the object), on the one hand, and swipe gestures and long-press gestures, on the other hand, continues to be challenging and sometimes cumbersome and inefficient. Such inefficiencies may involve the mechanisms for disambiguation and, once disambiguation (or at least a portion of it) has been achieved, the delivery of touch input information to various software mechanisms. Furthermore, as devices with touch-based input mechanisms provide more and more features, the number of active gesture recognizers for at least some applications or application views has grown to the point where such mechanisms are cumbersome and inefficient if only a portion of those gesture recognizers are needed to process user input.
Disclosure of Invention
Accordingly, there is a need for an electronic device having a faster, more efficient method and interface for handling drag-and-drop gestures, and corresponding drag-and-drop operations (and optionally spring-loaded operations). Such methods and interfaces optionally complement or replace conventional methods for touch input processing and gesture recognition and processing. Such methods and interfaces reduce the number, extent, and/or nature of inputs from a user and result in a more efficient human-machine interface. For battery-driven devices, such methods and interfaces may conserve power and increase the time between battery charges.
The above-described deficiencies and other problems associated with user interfaces for electronic devices having touch-sensitive surfaces may be reduced or eliminated with the disclosed devices. In some embodiments, the device is a desktop computer. In some embodiments, the device is portable (e.g., a laptop, tablet, or handheld device). In some embodiments, the device is a personal electronic device (e.g., a wearable electronic device, such as a watch). In some embodiments, the device has a touch pad. In some embodiments, the device has a touch-sensitive display (also referred to as a "touch screen" or "touch screen display"). In some embodiments, the device has a Graphical User Interface (GUI), one or more processors, memory, and one or more modules, programs or sets of instructions stored in the memory for performing a plurality of functions. In some embodiments, the user interacts with the GUI primarily through stylus and/or finger contacts and gestures on the touch-sensitive surface. In some embodiments, these functions optionally include image editing, drawing, presenting, word processing, spreadsheet making, game playing, telephone answering, video conferencing, e-mail sending and receiving, instant messaging, fitness support, digital photography, digital video recording, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
In accordance with some embodiments, the method is performed at an electronic device having a display, a touch-sensitive surface, and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface. The method comprises the following steps: a user interface for a first application that includes a plurality of user interface objects is displayed on a display, and information (e.g., one or more touch events) describing a first touch detected at a location on a touch-sensitive surface that corresponds to a respective user interface object of the plurality of user interface objects in the first application is received at the first application while the user interface for the first application is displayed. The method includes, in response to receiving information describing the first touch, and in accordance with a determination that the first touch satisfies predefined criteria associated with initiating a respective type of interface operation: initiating a respective type of user interface operation involving a respective user interface object; when the first touch continues to be detected on the touch-sensitive surface and when user interface operations involving the respective user interface object continue, an instruction is received at the first application to ignore the first touch with respect to user interface operations other than the respective type of interface operation, and the first touch is ignored by the first application with respect to user interface operations other than the respective type of interface operation. In some embodiments, the respective type of interface operation is a drag operation for dragging an object from the first application or a view of the first application; or a drag and drop operation, wherein the dragged object is dropped into a different application or application view than the application from which the dragged object was dragged; or a spring-loaded operation in which a view or container in which content may be stored is opened, or a control for a view or object is activated.
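To make the touch-cancellation behavior described above concrete, the following Swift sketch shows a drag-source view using UIKit's drag interaction API. It is an illustrative analogue of the described behavior, not the patent's internal implementation; the view class name and the dragged string are invented for the example. Once the system recognizes the drag, the touch is cancelled with respect to the view's ordinary touch handling, which is how the application "ignores" the touch for operations other than the drag.

```swift
import UIKit

// Illustrative sketch only (class name and dragged string are invented):
// a drag-source view. Once the system recognizes the drag gesture, the
// touches being delivered to this view are cancelled, so the view stops
// treating them as input for any other operation.
class DraggableLabelView: UIView, UIDragInteractionDelegate {

    override init(frame: CGRect) {
        super.init(frame: frame)
        // Attaching the interaction lets the system-supplied drag gesture
        // recognizers observe touches on this view.
        let drag = UIDragInteraction(delegate: self)
        drag.isEnabled = true   // drag may be disabled by default on some devices
        addInteraction(drag)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Provide the content to drag once the system determines that the touch
    // satisfies the drag-start criteria (e.g., a press followed by movement).
    func dragInteraction(_ interaction: UIDragInteraction,
                         itemsForBeginning session: UIDragSession) -> [UIDragItem] {
        let provider = NSItemProvider(object: "Dragged text" as NSString)
        return [UIDragItem(itemProvider: provider)]
    }

    // When the drag is recognized, the touch is cancelled with respect to
    // ordinary touch handling; this is where the view abandons any highlight
    // or selection tied to that touch.
    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesCancelled(touches, with: event)
    }
}
```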
In accordance with some embodiments, a method is performed at an electronic device with a display and a touch-sensitive surface. The method includes displaying on a display a user interface for a first application comprising a plurality of user interface objects; and, while displaying the user interface for the first application, receiving, at the first application, information describing a first touch. The first touch is detected at a location on the touch-sensitive surface that corresponds to a respective user interface object of the plurality of user interface objects in the user interface for the first application, and moves outside of an area corresponding to the user interface for the first application into an area corresponding to a user interface for a second application. The method also includes, in response to receiving the information describing the first touch: in accordance with a determination that the first touch does not satisfy the predefined criteria associated with initiating a respective type of interface operation, continuing to provide information describing the first touch to the first application even after the first touch moves outside of the area corresponding to the user interface for the first application and enters the area corresponding to the user interface for the second application. The method also includes, in accordance with a determination that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation: initiating a respective type of user interface operation involving the respective user interface object; and, while continuing to detect the first touch on the touch-sensitive surface and while continuing the user interface operation involving the respective user interface object: receiving, at the first application, an instruction to ignore the first touch with respect to user interface operations other than the respective type of interface operation; ignoring, by the first application, the first touch with respect to user interface operations other than the respective type of interface operation; providing information describing the first touch to the first application while the first touch is within the area corresponding to the user interface for the first application; and, when the first touch moves outside of the area corresponding to the user interface for the first application and enters the area corresponding to the user interface for the second application, switching to providing information describing the first touch to the second application.
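For the receiving side, the following hedged Swift sketch uses UIKit's drop interaction API as an analogue of switching delivery of touch information to the target: once the drag session moves over a view that can accept it, that view's delegate, rather than the source, receives updates about the session. The class name and the choice of plain-text (NSString) content are illustrative assumptions.

```swift
import UIKit

// Illustrative sketch only (class name and content type are assumptions):
// a drop-target view. While the drag session is over this view, updates about
// the session are delivered to this delegate rather than to the drag source.
class DropTargetView: UIView, UIDropInteractionDelegate {

    override init(frame: CGRect) {
        super.init(frame: frame)
        addInteraction(UIDropInteraction(delegate: self))
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Only accept sessions that carry plain text in this example.
    func dropInteraction(_ interaction: UIDropInteraction,
                         canHandle session: UIDropSession) -> Bool {
        return session.canLoadObjects(ofClass: NSString.self)
    }

    // Called repeatedly while the touch moves within this view's area, i.e.,
    // after delivery of drag information has switched to this target.
    func dropInteraction(_ interaction: UIDropInteraction,
                         sessionDidUpdate session: UIDropSession) -> UIDropProposal {
        return UIDropProposal(operation: .copy)
    }

    // Called when the user lifts the touch over this view.
    func dropInteraction(_ interaction: UIDropInteraction,
                         performDrop session: UIDropSession) {
        _ = session.loadObjects(ofClass: NSString.self) { strings in
            // Insert the dropped strings into this view's content here.
            _ = strings
        }
    }
}
```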
In accordance with some embodiments, the method is performed at an electronic device having a display, a touch-sensitive surface, and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface. The method comprises the following steps: displaying, on a display, a user interface comprising a plurality of views, the plurality of views comprising a first view associated with a first set of one or more standard gesture recognizers for processing touch input directed to the first view and a second view associated with a second set of one or more standard gesture recognizers for processing touch input directed to the second view, wherein one or more of the plurality of views are associated with a first interaction identifier (e.g., a drag, drop, or spring-loaded interaction identifier) indicating that the first set of one or more supplemental gesture recognizers can be added to the corresponding view. The method also includes detecting, via the touch-sensitive surface, a touch input at a location on the touch-sensitive surface that corresponds to a respective view of the plurality of views; and in response to detecting a touch input via the touch-sensitive surface: in accordance with a determination that the touch input satisfies supplemental gesture recognizer addition criteria, wherein the supplemental gesture recognizer addition criteria include a criterion that is satisfied when the respective view has the first interaction identifier, adding a first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view; and processing the touch input using one or more supplemental gesture recognizers and one or more standard gesture recognizers associated with the respective view. The method also includes, in accordance with a determination that the touch input does not satisfy the supplemental gesture recognizer addition criteria, processing the touch input at the respective view without adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view.
In accordance with some embodiments, the method is performed at an electronic device having a display, a touch-sensitive surface, and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface. The method includes displaying a user interface of an application on a display; and while displaying the user interface of the application, detecting a user input corresponding to a portion of the user interface of the application associated with a plurality of gesture recognizers, wherein: a first set of one or more standard gesture recognizers of the plurality of gesture recognizers is associated with (e.g., assigned to) the portion of the user interface by the application; a second set of one or more supplemental gesture recognizers of the plurality of gesture recognizers is associated with (e.g., assigned to) the portion of the user interface by a system process (e.g., an application-independent process); and a first failure requirement involving a first standard gesture recognizer and a first supplemental gesture recognizer is associated with (e.g., assigned to) the portion of the user interface by a system process. The method also includes, in response to detecting the user input, processing the user input in accordance with the first standard gesture recognizer, the first supplemental gesture recognizer, and the first failure requirement. For example, in some embodiments, the first set of one or more standard gesture recognizers and the second set of one or more supplemental gesture recognizers are arranged in a gesture recognition hierarchy, and the first failure requirement comprises a requirement that a first respective gesture recognizer fail in order for a second respective gesture recognizer to successfully recognize a gesture, where the first respective gesture recognizer is at a higher level in the gesture recognition hierarchy than the second respective gesture recognizer.
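As one way to picture a dynamically imposed failure requirement between an application-supplied ("standard") recognizer and a system-supplied ("supplemental") recognizer, the following Swift sketch uses UIKit's gesture-recognizer delegate hook. The coordinator class and the use of a pan recognizer as a stand-in for a drag-start recognizer are assumptions made for illustration, not the patent's implementation.

```swift
import UIKit

// Illustrative sketch (class name and the pan recognizer stand-in are
// assumptions): the application's "standard" long-press recognizer is made to
// wait until the conceptually system-supplied "supplemental" drag-start
// recognizer has failed, i.e., the drag-start recognizer sits at a higher
// level of the gesture recognition hierarchy.
class GestureCoordinator: NSObject, UIGestureRecognizerDelegate {

    // Targets/actions omitted for brevity; they would be added with addTarget(_:action:).
    let standardLongPress = UILongPressGestureRecognizer()
    let supplementalDragStart = UIPanGestureRecognizer()

    func attach(to view: UIView) {
        standardLongPress.delegate = self
        view.addGestureRecognizer(standardLongPress)
        view.addGestureRecognizer(supplementalDragStart)
    }

    // Dynamically imposed failure requirement: the long-press recognizer may
    // only succeed after the drag-start recognizer has failed.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRequireFailureOf otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return gestureRecognizer === standardLongPress
            && otherGestureRecognizer === supplementalDragStart
    }
}
```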
In accordance with some embodiments, an electronic device includes a display, a touch-sensitive surface, memory, one or more processors, one or more programs, and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface; the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for performing or causing the performance of the operations of any of the methods described herein.
According to some embodiments, a computer-readable storage medium has stored therein instructions that, when executed by an electronic device with a display, a touch-sensitive surface, and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface, cause the device to perform or cause to be performed the operations of any of the methods described herein.
According to some embodiments, a graphical user interface on an electronic device with a display, a touch-sensitive surface, optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface, a memory, and one or more processors to execute one or more programs stored in the memory includes one or more elements displayed in any of the methods described herein that are updated in response to an input, as described in any of the methods described herein.
According to some embodiments, an electronic device comprises: a display, a touch-sensitive surface, and optionally one or more sensors for detecting intensity of contacts with the touch-sensitive surface; and means for performing or causing performance of the operations of any of the methods described herein.
According to some embodiments, an information processing apparatus for use in an electronic device with a display and a touch-sensitive surface and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface comprises means for performing, or causing to be performed, operations of any method described herein.
Accordingly, electronic devices having a display, a touch-sensitive surface, and optionally one or more sensors for detecting intensity of contacts with the touch-sensitive surface are provided with faster, more efficient methods and interfaces for processing touch-based inputs, including drag-and-drop gestures, long-press gestures, and other gestures that may be efficiently recognized for the same application or application view, thereby improving the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may complement or replace conventional methods for processing touch inputs in such electronic devices.
Drawings
For a better understanding of the various described embodiments, reference should be made to the following detailed description taken in conjunction with the following drawings, wherein like reference numerals designate corresponding parts throughout the figures.
FIG. 1 is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
FIG. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
Fig. 3A is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
Fig. 3B is a block diagram illustrating exemplary components for event processing, according to some embodiments.
Fig. 3C is a block diagram illustrating exemplary classes and examples of gesture recognizers according to some embodiments.
Fig. 3D is a block diagram illustrating event information flow according to some embodiments.
FIG. 4A illustrates an exemplary user interface of an application menu on a portable multifunction device according to some embodiments.
FIG. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface separate from a display, in accordance with some embodiments.
Fig. 4C-4E illustrate examples of dynamic intensity thresholds according to some embodiments.
Fig. 5A-5R illustrate exemplary user interfaces for drag-and-drop gestures and operations and spring-loaded gestures and operations, according to some embodiments.
FIG. 6A illustrates touch event generation, phases, and cancellation, and drag event generation and phases, during execution of a drag gesture that spans from a user interface area of a first application or application view to a user interface area of a second application or application view, according to some embodiments.
FIG. 6B illustrates touch event generation and phases while a touch moves across a user interface area of a first application or application view to a user interface area of a second application or application view without the touch being separated, according to some embodiments.
FIG. 6C illustrates a mechanism for detecting a touch, detecting the start of a drag gesture in a source process, separating the touch, associating a session ID with the touch, and providing the touch to a target process, according to some embodiments.
Fig. 6D and 6E illustrate gesture recognition hierarchies before and after one or more supplemental gesture recognizers are added to an application view, according to some embodiments.
Fig. 7A-7F are flow diagrams illustrating methods of processing touch input recognized as a drag gesture, according to some embodiments.
Figs. 8A-8E are flow diagrams illustrating methods of dynamically adding a supplemental gesture recognizer to an application or application view in response to detecting a touch input when specified criteria are met, according to some embodiments.
Figs. 9A-9D are flow diagrams illustrating methods of establishing a failure dependency between a first set of standard gesture recognizers associated with a portion of a user interface of an application and a second set of supplemental gesture recognizers associated with the same portion of the user interface of the application, according to some embodiments.
Detailed Description
Many electronic devices have graphical user interfaces and touch-sensitive surfaces, whether integrated with a display or otherwise used to receive touch inputs from a user. As the number of touch-input-based gestures increases, disambiguation between the various possible gestures becomes more difficult and more important for successful and efficient use of such electronic devices. In practice, multiple gestures usable in the same user interface area may begin with the same initial touch characteristics, such as a touch that remains stationary (e.g., moves less than a predefined distance or amount) for at least a predefined initial amount of time. One mechanism that helps disambiguate between such gestures is to use gesture recognizers, each for recognizing a respective gesture or gesture type, and to establish failure requirements under which certain gesture recognizers must fail to recognize a user input before other gesture recognizers are allowed to successfully recognize that input as a particular gesture. For example, when a user interface region accepts both a long-press input for initiating a first operation and a drag input for starting a drag-and-drop operation, recognition of the long-press gesture by its gesture recognizer may be made dependent on the failure of the drag start gesture recognizer used to recognize the start of the drag-and-drop operation.
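A minimal Swift sketch of this particular failure requirement, assuming UIKit-style recognizers and using a pan recognizer as a stand-in for the drag-start recognizer, is shown below; the function and parameter names are illustrative.

```swift
import UIKit

// Minimal sketch of the failure requirement described above, assuming
// UIKit-style recognizers; the function, parameter names, and the use of a pan
// recognizer as a stand-in for the drag-start recognizer are illustrative.
func installRecognizers(on view: UIView, target: AnyObject,
                        longPressAction: Selector, dragStartAction: Selector) {
    let dragStart = UIPanGestureRecognizer(target: target, action: dragStartAction)
    let longPress = UILongPressGestureRecognizer(target: target, action: longPressAction)

    // The long-press action fires only if the touch was not recognized as the
    // start of a drag.
    longPress.require(toFail: dragStart)

    view.addGestureRecognizer(dragStart)
    view.addGestureRecognizer(longPress)
}
```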
In another aspect, certain types of gesture recognizers may be added by an application to a user interface area for the application, while supplemental gesture recognizers may be added by a system process to the same user interface area when certain criteria are met. In this manner, the number of gesture recognizers established for a particular user interface region may remain relatively low until supplemental gesture recognizers are, or may be, needed, as indicated by satisfaction of those criteria. For example, gesture recognizers for handling drag-and-drop operations may be added to the user interface area as supplemental gesture recognizers when they are or may be needed.
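The following Swift sketch illustrates the lazy-addition idea conceptually. The manager class, its method names, and the use of a long-press recognizer to approximate a drag-start recognizer are all assumptions for illustration and are not part of UIKit or of the patent's implementation.

```swift
import UIKit

// Conceptual sketch; the manager class, its method names, and the use of a
// long-press recognizer to approximate a drag-start recognizer are assumptions
// made for illustration. Supplemental recognizers are attached to a view only
// when a touch actually lands on a view that advertises drag capability.
final class SupplementalRecognizerManager {
    // Views that advertise the (conceptual) drag interaction identifier.
    private let dragCapableViews = NSHashTable<UIView>.weakObjects()
    // Views that have already received their supplemental recognizers.
    private let equippedViews = NSHashTable<UIView>.weakObjects()

    func registerDragCapable(_ view: UIView) {
        dragCapableViews.add(view)
    }

    // Called when a touch is first routed to `view`.
    func touchBegan(on view: UIView, dragStartTarget: Any, dragStartAction: Selector) {
        guard dragCapableViews.contains(view), !equippedViews.contains(view) else { return }

        // The supplemental "drag start" recognizer: a press followed by movement,
        // approximated here with a long-press recognizer that allows movement.
        let dragStart = UILongPressGestureRecognizer(target: dragStartTarget,
                                                     action: dragStartAction)
        dragStart.minimumPressDuration = 0.5
        dragStart.allowableMovement = .greatestFiniteMagnitude
        view.addGestureRecognizer(dragStart)

        equippedViews.add(view)
    }
}
```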
In another aspect, interaction identifiers are associated with applications or application views to indicate which, if any, supplemental gesture recognizers can be added to those applications or application views. For example, only applications or application views that can participate in a drag-and-drop operation are associated with an interaction identifier for drag interactions. A second, different interaction identifier may be used for drop interactions, as some application areas may be the source, but not the target, of a drag-and-drop operation, and vice versa. A third interaction identifier may be used for spring-loaded interactions, because some, but not all, application areas may include containers that can be opened, or controls that can be activated, using spring-loaded gestures. In this manner, only applications and views that allow certain types of gestures or operations, such as drag-and-drop gestures and operations or spring-loaded gestures and operations, are associated with the corresponding interaction identifiers. The use of such interaction identifiers, and the resulting control over which application areas have supplemental gesture recognizers added to them, improves the operational efficiency of electronic devices and also improves the responsiveness of such devices to user inputs by reducing the number of gesture recognizers that process such user inputs.
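As a rough analogue of interaction identifiers, the following Swift sketch tags views with a hypothetical InteractionID option set (not a UIKit type) and attaches the corresponding UIKit drag, drop, or spring-loaded interactions only to views that carry the matching flag.

```swift
import UIKit

// Rough analogue of interaction identifiers: InteractionID is a hypothetical
// option set (not a UIKit type) used to tag views, and only views carrying a
// given flag get the corresponding system interaction attached.
struct InteractionID: OptionSet {
    let rawValue: Int
    static let drag         = InteractionID(rawValue: 1 << 0)
    static let drop         = InteractionID(rawValue: 1 << 1)
    static let springLoaded = InteractionID(rawValue: 1 << 2)
}

func configure(view: UIView,
               identifiers: InteractionID,
               dragDelegate: UIDragInteractionDelegate,
               dropDelegate: UIDropInteractionDelegate) {
    if identifiers.contains(.drag) {
        // The view can be the source of a drag-and-drop operation.
        view.addInteraction(UIDragInteraction(delegate: dragDelegate))
    }
    if identifiers.contains(.drop) {
        // The view can be the target of a drag-and-drop operation.
        view.addInteraction(UIDropInteraction(delegate: dropDelegate))
    }
    if identifiers.contains(.springLoaded) {
        // Hovering a dragged item over the view can activate it.
        view.addInteraction(UISpringLoadedInteraction { _, context in
            print("Spring-loaded activation, state: \(context.state.rawValue)")
        })
    }
}
```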
Figs. 1, 2, and 3A illustrate exemplary devices; Figs. 3B-3D illustrate event and gesture recognition and touch event delivery mechanisms; and Figs. 4A-4B and 5A-5R illustrate exemplary user interfaces for drag-and-drop operations. Figs. 6A-6C illustrate touch event and drag event generation and management mechanisms. Figs. 7A-7F illustrate a flowchart of a method of processing a touch input recognized as a drag gesture. Figs. 8A-8E illustrate a flowchart of a method of dynamically adding a supplemental gesture recognizer to an application or application view. Figs. 9A-9D illustrate a flowchart of a method of establishing a failure dependency between a first set of standard gesture recognizers associated with a portion of a user interface of an application and a second set of supplemental gesture recognizers associated with the same portion of the user interface of the application. The user interfaces in Figs. 5A-5R and the mechanisms shown in Figs. 3B-3D and 6A-6C are used to explain the processes in Figs. 7A-7F, 8A-8E, and 9A-9D.
Exemplary device
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of various described embodiments. It will be apparent, however, to one skilled in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements in some cases, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact can be termed a second contact, and, similarly, a second contact can be termed a first contact, without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact unless the context clearly indicates otherwise.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" is optionally interpreted to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is optionally interpreted to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and related processes for using such devices are described herein. In some embodiments, the device is a portable communication device, such as a mobile phone, that also contains other functions, such as PDA and/or music player functions. Exemplary embodiments of portable multifunction devices include, but are not limited to, the iPhone®, iPod Touch®, and iPad® devices from Apple Inc. of Cupertino, California. Other portable electronic devices are optionally used, such as laptops or tablets with touch-sensitive surfaces (e.g., touch screen displays and/or touch pads). It should also be understood that in some embodiments, the device is not a portable communication device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touch pad).
In the following discussion, an electronic device including a display and a touch-sensitive surface is described. However, it should be understood that the electronic device optionally includes one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports various applications, such as one or more of the following: a note taking application, a drawing application, a presentation application, a word processing application, a website creation application, a disc editing application, a spreadsheet application, a gaming application, a telephony application, a video conferencing application, an email application, an instant messaging application, a fitness support application, a photo management application, a digital camera application, a digital video camera application, a Web browsing application, a digital music player application, and/or a digital video player application.
Various applications executing on the device optionally use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device are optionally adjusted and/or varied for different applications and/or within respective applications. In this way, a common physical architecture of the device (such as a touch-sensitive surface) optionally supports various applications with a user interface that is intuitive and clear to the user.
Attention is now directed to embodiments of portable devices having touch sensitive displays. FIG. 1 is a block diagram illustrating a portable multifunction device 100 with a touch-sensitive display system 112 in accordance with some embodiments. Touch-sensitive display system 112 is sometimes referred to as a "touch screen" for convenience and is sometimes referred to simply as a touch-sensitive display. Device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), a memory controller 122, one or more processing units (CPUs) 120, a peripheral interface 118, RF circuitry 108, audio circuitry 110, a speaker 111, a microphone 113, an input/output (I/O) subsystem 106, other input or control devices 116, and an external port 124. The device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more intensity sensors 165 for detecting the intensity of contacts on device 100 (e.g., a touch-sensitive surface, such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or trackpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
As used in this specification and claims, the term "haptic output" is a physical displacement of a device relative to a previous position of the device, a physical displacement of a component of the device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., a housing), or a displacement of a component relative to a center of mass of the device that is to be detected by a user with the user's sense of touch. For example, where a device or component of a device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other portion of a user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a haptic sensation corresponding to a perceived change in a physical characteristic of the device or component of the device. For example, movement of the touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is optionally interpreted by the user as a "down click" or "up click" of a physical actuation button. In some cases, the user will feel a tactile sensation, such as a "press click" or "release click," even when the physical actuation button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movement is not moving. As another example, even when there is no change in the smoothness of the touch sensitive surface, the movement of the touch sensitive surface is optionally interpreted or sensed by the user as "roughness" of the touch sensitive surface. While such interpretation of touch by a user will be limited by the user's individualized sensory perception, many sensory perceptions of touch are common to most users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., "click down," "click up," "roughness"), unless otherwise stated, the generated haptic output corresponds to a physical displacement of the device or a component thereof that would generate the sensory perception of a typical (or ordinary) user.
It should be understood that device 100 is merely one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of these components. The various components shown in fig. 1 are implemented in hardware, software, firmware, or any combination thereof, including one or more signal processing circuits and/or application specific integrated circuits.
The memory 102 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 102 by other components of device 100, such as one or more CPUs 120 and peripheral interface 118, is optionally controlled by a memory controller 122.
Peripheral interface 118 may be used to couple the input and output peripherals of the device to memory 102 and one or more CPUs 120. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in the memory 102 to perform various functions of the device 100 and to process data.
In some embodiments, peripherals interface 118, one or more CPUs 120, and memory controller 122 are optionally implemented on a single chip, such as chip 104. In some other embodiments, they are optionally implemented on separate chips.
RF (radio frequency) circuitry 108 receives and transmits RF signals, also referred to as electromagnetic signals. The RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates with networks such as the internet, also known as the World Wide Web (WWW), intranets, and/or wireless networks such as cellular telephone networks, wireless Local Area Networks (LANs), and/or Metropolitan Area Networks (MANs), and other devices via wireless communication. The wireless communication optionally uses any of a number of communication standards, protocols, and techniques, including, but not limited to, global system for mobile communications (GSM), Enhanced Data GSM Environment (EDGE), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), evolution data Only (EV-DO), HSPA +, Dual-cell HSPA (DC-HSPDA), Long Term Evolution (LTE), Near Field Communication (NFC), wideband code division multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet protocol (VoIP), Wi-MAX, electronic mail protocols (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)) Instant messaging (e.g., extensible messaging and presence protocol (XMPP), session initiation protocol for instant messaging and presence with extensions (SIMPLE), Instant Messaging and Presence Service (IMPS), and/or Short Message Service (SMS)), or any other suitable communication protocol including communication protocols not yet developed at the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. The audio circuitry 110 receives audio data from the peripheral interface 118, converts the audio data to electrical signals, and transmits the electrical signals to the speaker 111. The speaker 111 converts the electrical signals into sound waves audible to a human. The audio circuit 110 also receives electrical signals converted from sound waves by the microphone 113. The audio circuit 110 converts the electrical signals to audio data and transmits the audio data to the peripheral interface 118 for processing. Audio data is optionally retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripheral interface 118. In some embodiments, the audio circuit 110 also includes a headset jack (e.g., 212 in fig. 2). The headset jack provides an interface between the audio circuitry 110 and a removable audio input/output peripheral such as an output-only headset or a headset having both an output (e.g., a monaural headset or a binaural headset) and an input (e.g., a microphone).
The I/O subsystem 106 couples input/output peripheral devices on the device 100, such as a touch-sensitive display system 112 and other input or control devices 116, to a peripheral interface 118. The I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. One or more input controllers 160 receive electrical signals from or transmit electrical signals to other input or control devices 116. Other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels, and the like. In some alternative embodiments, one or more input controllers 160 are optionally coupled to (or not coupled to) any of: a keyboard, an infrared port, a USB port, a stylus, and/or a pointing device such as a mouse. The one or more buttons (e.g., 208 in fig. 2) optionally include an up/down button for volume control of the speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206 in fig. 2).
Touch-sensitive display system 112 provides an input interface and an output interface between the device and the user. Display controller 156 receives electrical signals from touch-sensitive display system 112 and/or transmits electrical signals to touch-sensitive display system 112. Touch-sensitive display system 112 displays visual output to a user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively "graphics"). In some embodiments, some or all of the visual output corresponds to a user interface object. As used herein, the term "affordance" is a user-interactive graphical user interface object (e.g., a graphical user interface object configured to respond to input directed at the graphical user interface object). Examples of user interactive graphical user interface objects include, but are not limited to, buttons, sliders, icons, selectable menu items, switches, hyperlinks, or other user interface controls.
Touch-sensitive display system 112 has a touch-sensitive surface, sensor, or group of sensors that accept input from a user based on haptic and/or tactile contact. Touch-sensitive display system 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch-sensitive display system 112 and convert the detected contact into interaction with user interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch-sensitive display system 112. In an exemplary embodiment, the point of contact between touch-sensitive display system 112 and the user corresponds to a user's finger or a stylus.
Touch-sensitive display system 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch-sensitive display system 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a variety of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch-sensitive display system 112. In one exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone®, iPod Touch®, and iPad® from Apple Inc. of Cupertino, California.
Touch sensitive display system 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touchscreen video resolution exceeds 400dpi (e.g., 500dpi, 800dpi, or greater). The user optionally makes contact with touch-sensitive display system 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work with finger-based contacts and gestures, which may not be as accurate as stylus-based input due to the larger contact area of the finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the action desired by the user.
In some embodiments, in addition to a touch screen, device 100 optionally includes a touch pad (not shown) for activating or deactivating particular functions. In some embodiments, the trackpad is a touch-sensitive area of the device that, unlike a touchscreen, does not display visual output. The trackpad is optionally a touch-sensitive surface separate from touch-sensitive display system 112, or an extension of the touch-sensitive surface formed by the touch screen.
The device 100 also includes a power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, Alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a Light Emitting Diode (LED)), and any other components associated with the generation, management, and distribution of power in a portable device.
The device 100 optionally further includes one or more optical sensors 164. FIG. 1 shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The one or more optical sensors 164 optionally include Charge Coupled Devices (CCDs) or Complementary Metal Oxide Semiconductor (CMOS) phototransistors. The one or more optical sensors 164 receive light projected through the one or more lenses from the environment and convert the light into data representing an image. In conjunction with imaging module 143 (also called a camera module), one or more optical sensors 164 optionally capture still images and/or video. In some embodiments, an optical sensor is located on the back of device 100 opposite touch-sensitive display system 112 on the front of the device, enabling the touch screen to be used as a viewfinder for still and/or video image capture. In some embodiments, another optical sensor is located on the front of the device to capture images of the user (e.g., for self-timer shooting, for video conferencing while the user is viewing other video conference participants on a touch screen, etc.).
Device 100 optionally further comprises one or more contact intensity sensors 165. FIG. 1 shows a contact intensity sensor coupled to an intensity sensor controller 159 in the I/O subsystem 106. The one or more contact intensity sensors 165 optionally include one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors for measuring the force (or pressure) of a contact on a touch-sensitive surface). One or more contact intensity sensors 165 receive contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some implementations, at least one contact intensity sensor is collocated with or proximate to a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100 opposite touch-sensitive display system 112, which is located on the front of device 100.
The device 100 optionally further includes one or more proximity sensors 166. Fig. 1 shows a proximity sensor 166 coupled to the peripheral interface 118. Alternatively, the proximity sensor 166 is coupled with the input controller 160 in the I/O subsystem 106. In some embodiments, the proximity sensor turns off and disables touch-sensitive display system 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).
Device 100 optionally further comprises one or more tactile output generators 167. FIG. 1 shows a haptic output generator coupled to a haptic feedback controller 161 in the I/O subsystem 106. Tactile output generator 167 optionally includes one or more electro-acoustic devices, such as speakers or other audio components, and/or electromechanical devices that convert energy into linear motion, such as motors, solenoids, electroactive polymers, piezoelectric actuators, electrostatic actuators, or other tactile output generating components (e.g., components that convert electrical signals into tactile output on the device). Tactile output generator 167 receives tactile feedback generation instructions from tactile feedback module 133 and generates tactile outputs on device 100 that can be felt by a user of device 100. In some embodiments, at least one tactile output generator is juxtaposed or adjacent to a touch-sensitive surface (e.g., touch-sensitive display system 112), and optionally generates tactile output by moving the touch-sensitive surface vertically (e.g., into/out of the surface of device 100) or laterally (e.g., back and forth in the same plane as the surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100 opposite touch-sensitive display system 112 located on the front of device 100.
Device 100 optionally also includes one or more accelerometers 168. Fig. 1 shows accelerometer 168 coupled with peripheral interface 118. Alternatively, accelerometer 168 is optionally coupled with input controller 160 in I/O subsystem 106. In some embodiments, information is displayed in a portrait view or a landscape view on the touch screen display based on analysis of data received from the one or more accelerometers. Device 100 optionally includes a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) in addition to the one or more accelerometers 168 for obtaining information about the position and orientation (e.g., portrait or landscape) of device 100.
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or set of instructions) 128, a contact/motion module (or set of instructions) 130, a graphics module (or set of instructions) 132, a haptic feedback module (or set of instructions) 133, a text input module (or set of instructions) 134, a Global Positioning System (GPS) module (or set of instructions) 135, and an application program (or set of instructions) 136. Further, in some embodiments, memory 102 stores device/global internal state 157, as shown in fig. 1 and 3A. Device/global internal state 157 includes one or more of: an active application state indicating which applications (if any) are currently active; display state indicating what applications, application views, or other information occupy various areas of touch-sensitive display system 112; sensor status, including information obtained from various sensors of the device and other input or control devices 116; and position and/or orientation information regarding the position and/or attitude of the device; and optionally other status information.
Operating system 126 (e.g., iOS, Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or embedded operating systems such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
The communication module 128 facilitates communication with other devices through one or more external ports 124 and also includes various software components for processing data received by the RF circuitry 108 and/or the external ports 124. The external port 124 (e.g., Universal Serial Bus (USB), firewire, etc.) is adapted to couple directly to other devices or indirectly via a network (e.g., the internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used in some iPod devices from Apple Inc. In some embodiments, the external port is a Lightning connector that is the same as, or similar to and/or compatible with, the Lightning connector used in some iPod devices from Apple Inc.
Contact/motion module 130 optionally detects contact with touch-sensitive display system 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a trackpad or physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to contact detection (e.g., by a finger or stylus), such as determining whether contact has occurred (e.g., detecting a finger-down event), determining the intensity of the contact (e.g., the force or pressure of the contact, or a surrogate for the force or pressure of the contact), determining whether there is movement of the contact and tracking movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether contact has ceased (e.g., detecting a finger-lift-off event or a contact break). The contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., single finger contacts or stylus contacts) or to multiple simultaneous contacts (e.g., "multi-touch"/multi-finger contacts). In some embodiments, the contact/motion module 130 and the display controller 156 detect contact on a touch pad.
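The derivation of speed (magnitude), velocity (magnitude and direction), and acceleration from a series of contact data can be illustrated with a short sketch. The Swift types and names below are hypothetical, chosen only to make the computation concrete; the contact/motion module itself is not described at the code level in this document.

```swift
import CoreGraphics
import Foundation

// Minimal sketch only: a hypothetical contact-sample type and helpers that
// derive speed, velocity, and acceleration from a series of contact data.
// These names are illustrative, not the device's internal API.
struct ContactSample {
    let position: CGPoint
    let timestamp: TimeInterval
}

enum ContactMotion {
    /// Velocity (points per second) between two consecutive samples.
    static func velocity(from a: ContactSample, to b: ContactSample) -> CGVector {
        let dt = CGFloat(max(b.timestamp - a.timestamp, .ulpOfOne))
        return CGVector(dx: (b.position.x - a.position.x) / dt,
                        dy: (b.position.y - a.position.y) / dt)
    }

    /// Speed is the magnitude (only) of the velocity vector.
    static func speed(of v: CGVector) -> CGFloat {
        (v.dx * v.dx + v.dy * v.dy).squareRoot()
    }

    /// Acceleration (change in velocity) estimated from three consecutive samples.
    static func acceleration(_ s0: ContactSample, _ s1: ContactSample,
                             _ s2: ContactSample) -> CGVector {
        let v0 = velocity(from: s0, to: s1)
        let v1 = velocity(from: s1, to: s2)
        let dt = CGFloat(max(s2.timestamp - s0.timestamp, .ulpOfOne)) / 2
        return CGVector(dx: (v1.dx - v0.dx) / dt, dy: (v1.dy - v0.dy) / dt)
    }
}
```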
The contact/motion module 130 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, the gesture is optionally detected by detecting a particular contact pattern. For example, detecting a single-finger tap gesture includes detecting a finger-down event, and then detecting a finger-up (lift-off) event at the same location (or substantially the same location) as the finger-down event (e.g., at an icon location). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, then detecting one or more finger-dragging events, and then subsequently detecting a finger-up (lift-off) event. Similarly, taps, swipes, drags, and other gestures of the stylus are optionally detected by detecting a particular contact pattern of the stylus.
In some embodiments, detecting a finger tap gesture depends on detecting the length of time between a finger-down event and a finger-up event, but is independent of the intensity of the finger contact between the finger-down event and the finger-up event. In some embodiments, a tap gesture is detected based on a determination that the length of time between the finger-down event and the finger-up event is less than a predetermined value (e.g., less than 0.1, 0.2, 0.3, 0.4, or 0.5 seconds), regardless of whether the intensity of the finger contact during the tap reaches a given intensity threshold (greater than a nominal contact detection intensity threshold), such as a light press or deep press intensity threshold. Thus, a finger tap gesture can satisfy particular input criteria that do not require the characteristic intensity of the contact to satisfy a given intensity threshold in order for the particular input criteria to be satisfied. For clarity, the finger contact in a tap gesture typically needs to satisfy a nominal contact detection intensity threshold, below which the contact is not detected, in order for the finger-down event to be detected. A similar analysis applies to detecting tap gestures by a stylus or other contact. Where the device is capable of detecting a finger or stylus contact hovering over the touch-sensitive surface, the nominal contact detection intensity threshold optionally does not correspond to physical contact between the finger or stylus and the touch-sensitive surface.
The same concept applies in a similar manner to other types of gestures. For example, a swipe gesture, a pinch gesture, a spread gesture, and/or a long press gesture can optionally be detected based on satisfying criteria that are independent of the intensity of contacts included in the gesture or that do not require one or more contacts performing the gesture to reach an intensity threshold in order to be recognized. For example, a swipe gesture is detected based on an amount of movement of one or more contacts; a pinch gesture is detected based on movement of two or more contacts toward each other; a spread gesture is detected based on movement of two or more contacts away from each other; and a long press gesture is detected based on a duration of a contact on the touch-sensitive surface with less than a threshold amount of movement. Thus, the statement that particular gesture recognition criteria do not require that the contact intensity satisfy a respective intensity threshold in order for the particular gesture recognition criteria to be satisfied means that the particular gesture recognition criteria can be satisfied when a contact in the gesture does not reach the respective intensity threshold, and can also be satisfied if one or more contacts in the gesture reach or exceed the respective intensity threshold. In some embodiments, a tap gesture is detected based on determining that a finger-down event and a finger-up event are detected within a predefined time period, regardless of whether the contact is above or below the respective intensity threshold during the predefined time period, and a swipe gesture is detected based on determining that the contact movement is greater than a predefined magnitude, even if the contact is above the respective intensity threshold at the end of the contact movement. Even in implementations where the detection of a gesture is affected by the intensity of the contacts performing the gesture (e.g., the device detects a long press more quickly when the intensity of the contact is above an intensity threshold, or the device delays detection of a tap input when the intensity of the contact is higher), the detection of these gestures does not require the contacts to reach a particular intensity threshold so long as the criteria for recognizing the gesture can be met if the contacts do not reach the particular intensity threshold (e.g., even if the amount of time it takes to recognize the gesture changes).
In some cases, contact intensity thresholds, duration thresholds, and movement thresholds are combined in a variety of different combinations in order to create heuristics for distinguishing two or more different gestures directed to the same input element or region, so that multiple different interactions with the same input element can provide a richer set of user interactions and responses. The statement that a particular set of gesture recognition criteria does not require that the intensity of the contact satisfy a respective intensity threshold in order for the particular gesture recognition criteria to be satisfied does not preclude the simultaneous evaluation of other intensity-dependent gesture recognition criteria for identifying other gestures whose criteria are met when a gesture includes a contact with an intensity above the respective intensity threshold. For example, in some cases, first gesture recognition criteria for a first gesture (which do not require that the intensity of a contact satisfy a respective intensity threshold in order for the first gesture recognition criteria to be satisfied) compete with second gesture recognition criteria for a second gesture (which depend on the contact reaching the respective intensity threshold). In such a competition, the gesture is optionally not recognized as satisfying the first gesture recognition criteria for the first gesture if the second gesture recognition criteria for the second gesture are satisfied first. For example, if the contact reaches the respective intensity threshold before the contact moves by a predefined amount of movement, a deep press gesture is detected rather than a swipe gesture. Conversely, if the contact moves by the predefined amount of movement before the contact reaches the respective intensity threshold, a swipe gesture is detected rather than a deep press gesture. Even in such cases, the first gesture recognition criteria for the first gesture still do not require that the intensity of the contact satisfy the respective intensity threshold in order to be satisfied, because if the contact remains below the respective intensity threshold until the end of the gesture (e.g., a swipe gesture with a contact that does not increase to an intensity above the respective intensity threshold), the gesture would have been recognized by the first gesture recognition criteria as a swipe gesture. As such, particular gesture recognition criteria that do not require the intensity of the contact to satisfy a respective intensity threshold in order to be satisfied will (A) in some cases ignore the intensity of the contact with respect to the intensity threshold (e.g., for a tap gesture) and/or (B) in some cases still depend on the intensity of the contact with respect to the intensity threshold, in the sense that the particular gesture recognition criteria (e.g., for a long press gesture) will fail if a competing set of intensity-dependent gesture recognition criteria (e.g., for a deep press gesture) recognizes the input as corresponding to the intensity-dependent gesture before the particular gesture recognition criteria recognize a gesture corresponding to the input (e.g., for a long press gesture that competes with a deep press gesture for recognition).
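As a concrete illustration of combining duration, movement, and intensity thresholds into a heuristic that distinguishes several gestures aimed at the same input region, the sketch below classifies a simplified input sample. The threshold values and all type and function names are assumptions made for the sketch, not values or interfaces taken from the device.

```swift
import CoreGraphics
import Foundation

// Illustrative heuristic only: distinguishing a tap, long press, swipe, and
// deep press for the same input region by combining duration, movement, and
// intensity thresholds, in the spirit of the paragraphs above.
enum RecognizedGesture { case tap, longPress, swipe, deepPress, none }

struct GestureSample {
    let duration: TimeInterval      // time since the finger-down event
    let movement: CGFloat           // total distance moved, in points
    let maxIntensity: CGFloat       // normalized contact intensity, 0.0 ... 1.0
    let ended: Bool                 // a finger-up event has been detected
}

func classify(_ s: GestureSample) -> RecognizedGesture {
    let tapDuration: TimeInterval = 0.3     // e.g., within the 0.1 to 0.5 s range above
    let movementThreshold: CGFloat = 10     // "less than a threshold amount of movement"
    let deepPressThreshold: CGFloat = 0.8   // intensity-dependent criterion

    // An intensity-dependent recognizer can win the competition first.
    if s.maxIntensity >= deepPressThreshold && s.movement < movementThreshold {
        return .deepPress
    }
    // A swipe depends on the amount of movement, not on intensity.
    if s.movement >= movementThreshold {
        return .swipe
    }
    // A tap depends on the time between finger-down and finger-up, not intensity.
    if s.ended && s.duration < tapDuration {
        return .tap
    }
    // A long press depends on duration with little movement, not intensity.
    if !s.ended && s.duration >= tapDuration {
        return .longPress
    }
    return .none
}
```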
Graphics module 132 includes various known software components for rendering and displaying graphics on touch-sensitive display system 112 or other displays, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual properties) of displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including without limitation text, web pages, icons (such as user interface objects including soft keys), digital images, videos, animations and the like.
In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. The graphic module 132 receives one or more codes for specifying a graphic to be displayed from an application program or the like, and also receives coordinate data and other graphic attribute data together if necessary, and then generates screen image data to output to the display controller 156.
Haptic feedback module 133 includes various software components for generating instructions for use by haptic output generator 167 in generating haptic outputs at one or more locations on device 100 in response to user interaction with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides a soft keyboard for entering text in various applications such as contacts 137, email 140, IM 141, browser 147, and any other application that requires text input.
The GPS module 135 determines the location of the device and provides such information for use in various applications (e.g., to the phone 138 for location-based dialing; to the camera 143 as picture/video metadata; and to applications that provide location-based services such as weather desktop widgets, local yellow pages desktop widgets, and map/navigation desktop widgets).
Application 136 optionally includes the following modules (or sets of instructions), or a subset or superset thereof:
a contacts module 137 (sometimes referred to as an address book or contact list);
a phone module 138;
a video conferencing module 139;
an email client module 140;
an Instant Messaging (IM) module 141;
fitness support module 142;
a camera module 143 for still and/or video images;
an image management module 144;
a browser module 147;
a calendar module 148;
desktop applet module 149, optionally including one or more of: a weather desktop applet 149-1, a stock market desktop applet 149-2, a calculator desktop applet 149-3, an alarm desktop applet 149-4, a dictionary desktop applet 149-5 and other desktop applets obtained by the user, and a user created desktop applet 149-6;
A desktop applet creator module 150 for forming a user-created desktop applet 149-6;
a search module 151;
a video and music player module 152, optionally consisting of a video player module and a music player module;
a notepad module 153;
a map module 154; and/or
Online video module 155.
Examples of other applications 136 optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, rendering applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
The respective software application typically has an application state, at least during execution, indicating the state of the respective software application and its components (e.g., gesture recognizer); see application internal state 321 (fig. 3B) described below.
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, contacts module 137 includes executable instructions for managing an address book or contact list (e.g., stored in memory 102 or in an application internal state 192 of contacts module 137 in memory 370), including: adding a name to the address book; deleting a name from the address book; associating a telephone number, email address, physical address, or other information with a name; associating an image with a name; sorting and ordering names; providing a telephone number and/or email address to initiate and/or facilitate communication via telephone 138, video conference 139, email 140, or instant message 141; and so on.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, phone module 138 includes executable instructions for: entering a sequence of characters corresponding to a telephone number, accessing one or more telephone numbers in the address book 137, modifying the entered telephone number, dialing a corresponding telephone number, conducting a conversation, and disconnecting or hanging up when the conversation is completed. As noted above, the wireless communication optionally uses any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact module 130, graphics module 132, text input module 134, contact list 137, and telephony module 138, video conference module 139 includes executable instructions to initiate, conduct, and terminate video conferences between the user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions for creating, sending, receiving, and managing emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and send an email with a still image or a video image captured by the camera module 143.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, instant messaging module 141 includes executable instructions for: entering a sequence of characters corresponding to an instant message, modifying previously entered characters, sending a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Messaging Service (MMS) protocol for telephone-based instant messages or using XMPP, SIMPLE, Apple Push Notification Service (APNs) or IMPS for internet-based instant messages), receiving an instant message, and viewing the received instant message. In some embodiments, the transmitted and/or received instant messages optionally include graphics, photos, audio files, video files, and/or MMS and/or other attachments supported in an Enhanced Messaging Service (EMS). As used herein, "instant messaging" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, APNs, or IMPS).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module 146, fitness support module 142 includes executable instructions for creating a workout (e.g., having time, distance, and/or calorie burning goals); communicating with fitness sensors (in sports equipment and smart watches); receiving fitness sensor data; calibrating a sensor for monitoring fitness; selecting and playing music for fitness; and displaying, storing and transmitting fitness data.
In conjunction with touch-sensitive display system 112, display controller 156, one or more optical sensors 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions to: capturing still images or video (including video streams) and storing them in the memory 102, modifying features of the still images or video, and/or deleting the still images or video from the memory 102.
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, labeling, deleting, presenting (e.g., in a digital slide or album), and storing still and/or video images.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the internet (including searching, linking to, receiving, and displaying web pages or portions thereof, and attachments and other files linked to web pages) according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions for creating, displaying, modifying, and storing calendars and data associated with calendars (e.g., calendar entries, to-do items, etc.) according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, the desktop applet module 149 is a mini-application (e.g., weather desktop applet 149-1, stock market desktop applet 149-2, calculator desktop applet 149-3, alarm clock desktop applet 149-4, and dictionary desktop applet 149-5) that is optionally downloaded and used by the user, or a mini-application created by the user (e.g., user-created desktop applet 149-6). In some embodiments, the desktop applet includes an HTML (hypertext markup language) file, a CSS (cascading style sheet) file, and a JavaScript file. In some embodiments, the desktop applet includes an XML (extensible markup language) file and a JavaScript file (e.g., Yahoo! desktop applet).
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, desktop applet creator module 150 includes executable instructions for creating a desktop applet (e.g., turning a user-specified portion of a web page into a desktop applet).
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions for searching memory 102 for text, music, sound, images, videos, and/or other files that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speakers 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow a user to download and playback recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, as well as executable instructions for displaying, rendering, or otherwise playing back video (e.g., on touch-sensitive display system 112 or on an external display wirelessly connected via external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple inc.).
In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, notepad module 153 includes executable instructions for creating and managing notepads, backlogs, and the like according to user instructions.
In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 includes executable instructions for receiving, displaying, modifying, and storing maps and data associated with maps (e.g., driving directions; data for stores and other points of interest at or near a particular location; and other location-based data) according to user instructions.
In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, audio circuit 110, speaker 111, RF circuit 108, text input module 134, email client module 140, and browser module 147, online video module 155 includes executable instructions that allow a user to access, browse, receive (e.g., by streaming and/or downloading), play back (e.g., on touch screen 112 or on an external display that is wirelessly connected or connected via external port 124), send emails with links to particular online videos, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, the link to a particular online video is sent using instant messaging module 141 instead of email client module 140.
Each of the modules and applications identified above corresponds to a set of executable instructions for performing one or more of the functions described above as well as the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 102 optionally stores a subset of the modules and data structures identified above. Further, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device in which operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a trackpad. By using a touch screen and/or trackpad as the primary input control device for operating the device 100, the number of physical input control devices (e.g., push buttons, dials, etc.) on the device 100 is optionally reduced.
The predefined set of functions performed exclusively through the touchscreen and/or trackpad optionally includes navigation between user interfaces. In some embodiments, the trackpad, when touched by a user, navigates device 100 from any user interface displayed on device 100 to a main, home, or root menu. In such embodiments, a "menu button" is implemented using a touch pad. In some other embodiments, the menu button is a physical push button or other physical input control device, rather than a touchpad.
Fig. 2 illustrates a portable multifunction device 100 with a touch screen (e.g., touch-sensitive display system 112 of fig. 1) in accordance with some embodiments. The touch screen optionally displays one or more graphics within the User Interface (UI) 200. In this embodiment, as well as other embodiments described below, a user can select one or more of these graphics by making gestures on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics will occur when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (left to right, right to left, up, and/or down), and/or a rolling of a finger (right to left, left to right, up, and/or down) that has made contact with device 100. In some implementations, or in some cases, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to the selection is a tap, a swipe gesture that swipes over the application icon optionally does not select the corresponding application.
Device 100 optionally also includes one or more physical buttons, such as a "home" button, or menu button 204. As previously described, the menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on the device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on the touch screen display.
In some embodiments, device 100 includes a touch screen display, menu buttons 204, push buttons 206 for powering the device on/off and for locking the device, one or more volume adjustment buttons 208, a Subscriber Identity Module (SIM) card slot 210, a headset jack 212, and docking/charging external port 124. Pressing the button 206 optionally serves to turn the device on/off by pressing the button and holding the button in a pressed state for a predefined time interval; locking the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or unlocking the device or initiating an unlocking process. In some embodiments, device 100 also accepts voice input through microphone 113 for activating or deactivating certain functions. Device 100 also optionally includes one or more contact intensity sensors 165 for detecting the intensity of contacts on touch-sensitive display system 112, and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
Fig. 3A is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 is a laptop, desktop, tablet, multimedia player device, navigation device, educational device (such as a child learning toy), gaming system, or control device (e.g., a home controller or industrial controller). Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. The communication bus 320 optionally includes circuitry (sometimes referred to as a chipset) that interconnects and controls communication between system components. Device 300 includes an input/output (I/O) interface 330 having a display 340, which is typically a touch screen display. I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and a touch-sensitive touchpad 355, a tactile output generator 357 (e.g., similar to tactile output generator 167 described above with reference to fig. 1) for generating tactile outputs on device 300, sensors 359 (e.g., optical sensors 164, accelerometers 168, proximity sensors 166, and/or contact intensity sensors 165 described above with reference to fig. 1). Memory 370 includes high speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory 370 optionally includes one or more storage devices located remotely from CPU 310. In some embodiments, memory 370 stores programs, modules, and data structures similar to those stored in memory 102 of portable multifunction device 100 (fig. 1), or a subset thereof. Further, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk editing module 388, and/or spreadsheet module 390, while memory 102 (FIG. 1) of portable multifunction device 100 optionally does not store these modules.
Each of the above identified elements in fig. 3A is optionally stored in one or more of the previously mentioned memory devices. Each of the identified modules corresponds to a set of instructions for performing the functions described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures identified above. Further, memory 370 optionally stores additional modules and data structures not described above.
Fig. 3B is a block diagram illustrating exemplary components for event processing (e.g., event processing component 391) according to some embodiments. In some embodiments, memory 102 (FIG. 1) includes event recognizer global method 312 and one or more applications (e.g., 133-1 to 133-3).
In some embodiments, the event recognizer global method 312 includes an event monitor 311, a hit view determination module 314, an activity event recognizer determination module 316, and an event dispatcher module 315. In some embodiments, the event recognizer global method 312 is located within an event delivery system in the operating system 126 (fig. 1). Alternatively, the event recognizer global method 312 is implemented in the corresponding application 133-1. In other embodiments, the event recognizer global method 312 is implemented as a standalone module, or part of another module stored in the memory 102, such as a contact/motion module (not shown).
Event monitor 311 receives event information from one or more sensors 359, touch-sensitive display 340, and/or one or more input devices 350, 355. The event information includes information about an event (e.g., a user touch on the touch-sensitive display 156 as part of a multi-touch gesture, or motion of the device 102) and/or a sub-event (e.g., movement of the touch across the touch-sensitive display 156). For example, event information for a touch event includes one or more of: the location of the touch and a timestamp. Similarly, event information for a swipe event includes two or more of: the location, timestamp, direction, and speed of the swipe. The sensors 359, touch-sensitive display 156, and input device 128 send event and sub-event information to event monitor 311, either directly or through a peripheral interface that retrieves and stores the event information. In some embodiments, the sensors 359 include one or more of: a proximity sensor, accelerometer, gyroscope, microphone, and camera. In some embodiments, the sensors 359 further include the input device 128 and/or the touch-sensitive display 156.
In some embodiments, event monitor 311 sends requests to sensors 116 and/or peripheral interfaces at predetermined intervals. In response, the sensor 116 and/or the peripheral interface transmit event information. In other embodiments, the sensor 116 and peripheral interface only send event information when there is a significant event (e.g., receiving an input exceeding a predetermined noise threshold and/or exceeding a predetermined duration).
The event monitor 311 receives the event information and relays the event information to the event dispatcher module 315. In some embodiments, event monitor 311 determines one or more respective applications (e.g., 133-1) to which event information is delivered. In some embodiments, event monitor 311 also determines one or more respective views 317 of one or more respective applications to which event information is to be delivered.
While a view is generally considered a window or other portion of a user interface, technically, a view is a portion of an application that manages a particular area or region of a user interface of an application. The view is typically implemented as an instance of a particular class or one of its subclasses and manages a rectangular area in the application window. Thus, a view is an object that has an associated display area or user interface portion, and also has one or more computer programs (sometimes referred to as "methods") associated with the class for which the view is an instance. The view is responsible for drawing content, handling multi-touch events and managing the layout of any child views. Drawing content involves drawing shapes, images, and text within a rectangular area of a view using various graphics techniques. The view responds to touch events in its rectangular area by using a gesture recognizer or directly processing the touch events. In the view hierarchy, the parent view is responsible for locating and resizing its child views, and this can be performed dynamically. This ability to dynamically modify the sub-views enables the views to adapt to changing conditions, such as interface rotation and animation.
Views may be considered as building blocks used by programmers or application developers to construct user interfaces for applications. Rather than using one view to present all of the content of an application, a view hierarchy is typically built using several views. Each view in the hierarchy presents a particular portion of the user interface of the application and is typically optimized for a particular type of content. For example, an application may have different views that are dedicated to rendering images, text, and other types of content.
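As an illustration of the view responsibilities described above (drawing content, responding to touches in its rectangular area, and laying out child views), the following sketch uses public UIKit API. It is an analogy for the architecture described here, not the internal implementation discussed in the text.

```swift
import UIKit

// A minimal sketch of the view responsibilities described above, expressed
// with public UIKit API for illustration. The view draws its own content,
// responds to touches in its rectangular area, and lays out a child view.
final class BadgeView: UIView {
    private let label = UILabel()

    override init(frame: CGRect) {
        super.init(frame: frame)
        label.text = "42"
        addSubview(label)   // this view is the parent of `label` in the view hierarchy
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Drawing content within the view's rectangular area.
    override func draw(_ rect: CGRect) {
        UIColor.systemBlue.setFill()
        UIBezierPath(ovalIn: bounds).fill()
    }

    // The parent view is responsible for positioning and sizing its child views.
    override func layoutSubviews() {
        super.layoutSubviews()
        label.sizeToFit()
        label.center = CGPoint(x: bounds.midX, y: bounds.midY)
    }

    // Responding to touch events directly (instead of via a gesture recognizer).
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        setNeedsDisplay()   // redraw in response to the touch
    }
}
```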
In some embodiments, the event recognizer global method 312 further includes a hit view determination module 314 and/or an activity event recognizer determination module 316.
Hit view determination module 314 (if present) provides a software program for determining the location within one or more views at which an event or sub-event occurred when touch-sensitive display 156 displays more than one view. The view consists of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with a respective application (e.g., 133-1) is a set of views 317, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which the touch is detected may correspond to a particular view within the view hierarchy of the application. For example, the lowest hierarchical view in which a touch is detected may be referred to as a hit view, and the set of events identified as correct inputs may be determined based at least in part on the hit view of the initial touch that began the touch-based gesture. Further discussion of the view hierarchy is provided below with reference to fig. 3D.
Hit view determination module 314 receives information related to events and/or sub-events. When an application has multiple views organized in a hierarchy, hit view determination module 314 identifies the hit view as the lowest view in the hierarchy that should process an event or sub-event. In most cases, the hit view is the lowest level view in which the initiating event or sub-event occurred (i.e., the first event or sub-event in the sequence of events and/or the sub-event that formed the gesture). Once a hit view is identified by the hit view determination module, the hit view typically receives all events and/or sub-events related to the same touch or input source for which it was identified as the hit view. However, the hit view is not always the only view that receives all events and/or sub-events related to the same touch or input source identified as the hit view. In other words, in some embodiments, another application (e.g., 133-2) or another view of the same application also receives at least a subset of events and/or sub-events related to the same touch or input source, even if (or whether) a hit view of the touch or input source has been determined.
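A minimal sketch of hit-view determination, assuming a simplified view tree: the routine returns the lowest (deepest) view whose rectangular area contains the touch location. ViewNode and its methods are hypothetical stand-ins, not hit view determination module 314.

```swift
import CoreGraphics

// Illustrative sketch of hit-view determination: walk the view hierarchy and
// return the deepest view whose area contains the touch location.
final class ViewNode {
    let name: String
    let frame: CGRect            // in the parent's coordinate space
    var subviews: [ViewNode] = []

    init(name: String, frame: CGRect) {
        self.name = name
        self.frame = frame
    }

    /// Returns the deepest descendant (or self) containing `point`, which is
    /// given in this view's own coordinate space; nil if the point is outside.
    func hitTest(_ point: CGPoint) -> ViewNode? {
        guard CGRect(origin: .zero, size: frame.size).contains(point) else { return nil }
        // Later (topmost) subviews are checked first.
        for subview in subviews.reversed() {
            let converted = CGPoint(x: point.x - subview.frame.origin.x,
                                    y: point.y - subview.frame.origin.y)
            if let hit = subview.hitTest(converted) {
                return hit
            }
        }
        return self
    }
}
```

Checking later subviews first reflects the common convention that topmost views take precedence; the text above does not specify an ordering, so that detail is an assumption of the sketch.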
The active event identifier determination module 316 determines which view or views within the view hierarchy should receive a particular sequence of events and/or sub-events. In some application contexts, the active event recognizer determination module 316 determines that only the hit view should receive a particular sequence of events and/or sub-events. In other application contexts, the active event recognizer determination module 316 determines that all views that include the physical location of an event or sub-event are actively involved views and thus determines that all actively involved views should receive a particular sequence of events and/or sub-events. In other application contexts, even if a touch event and/or sub-event is completely confined to an area associated with a particular view, the higher view in the hierarchy remains an actively engaged view, and thus the higher view in the hierarchy should receive a particular sequence of events and/or sub-events. Additionally or alternatively, the activity event recognizer determination module 316 determines which application(s) in the programmatic hierarchy should receive a particular sequence of events and/or sub-events. Thus, in some embodiments, the activity event recognizer determination module 316 determines that only the corresponding application in the programming hierarchy should receive a particular sequence of events and/or sub-events. In some embodiments, the activity event recognizer determination module 316 determines that a plurality of applications in a programming hierarchy should receive a particular sequence of events and/or sub-events.
As discussed further below, in some embodiments, a touch is initially considered to be attached to a particular view or set of views, but when a drag gesture is detected, the touch or touches are "detached," and as the touch or touches traverse user interface areas associated with other views, events related to the touch or touches may be delivered to those other views.
The event dispatcher module 315 dispatches event information to an event recognizer (also referred to herein as a "gesture recognizer") (e.g., event recognizer 325-1). In embodiments that include the active event recognizer determination module 316, the event dispatcher module 315 delivers the event information to the event recognizer determined by the active event recognizer determination module 316. In some embodiments, the event dispatcher module 315 stores event information in the event queue, which is retrieved by the respective event identifier 325 (or the event receiver 331) in the respective event identifier 325.
In some embodiments, the respective application (e.g., 133-1) includes an application internal state 321 that indicates one or more current application views that are displayed on the touch-sensitive display 156 when the application is active or executing. In some embodiments, the event recognizer global method 312 uses the device/global internal state 157 (fig. 1 and 3A) to determine which application(s) are currently active, and the event recognizer global method 312 uses the application internal state 321 to determine the application view 317 to which to deliver event information.
In some embodiments, the application internal state 321 includes additional information, such as one or more of the following: resume information to be used when the application 133-1 resumes execution, user interface state information indicating that information is being displayed or is ready for display by the application 133-1, a state queue for enabling a user to return to a previous state or view of the application 133-1, and a repeat/undo queue of previous actions taken by the user. In some embodiments, the application internal state 321 also includes context information/text and metadata 323.
In some embodiments, the application 133-1 includes one or more application views 317, each having corresponding instructions for processing touch events occurring within a respective view of the application's user interface (e.g., a corresponding event handler 319, sometimes referred to as a gesture handler). At least one application view 317 of the application 133-1 includes one or more event recognizers 325. Typically, the respective application view 317 includes a plurality of event recognizers 325. In other embodiments, one or more of the event recognizers 325 are part of a separate module that is a higher-level object that inherits methods and other properties, such as a user interface toolkit (not shown) or the application 133-1. In some embodiments, the respective application view 317 further includes one or more of: a data updater, an object updater, a GUI updater, and/or received event data.
The corresponding application (e.g., 133-1) also includes one or more event handlers 319. Typically, a corresponding application (e.g., 133-1) includes a plurality of event handlers 319.
The corresponding event recognizer 325-1 receives event information from the event dispatcher module 315 (either directly or indirectly through the application 133-1) and recognizes events from the event information. The event recognizer 325-1 includes an event receiver 331 and an event comparator 333.
The event information includes information about an event (e.g., touch) or sub-event (e.g., touch movement). The event information also includes additional information such as the location of the event or sub-event, depending on the event or sub-event. When an event or sub-event involves motion of a touch, the event information may also include the speed and direction of the sub-event. In some embodiments, the event comprises rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information comprises corresponding information about the current orientation of the device (also referred to as the device pose).
Event comparator 333 compares the event information to one or more predefined gesture definitions (also referred to herein as "event definitions") and, based on the comparison, determines an event or sub-event or determines or updates the state of the event or sub-event. In some embodiments, event comparator 333 includes one or more gesture definitions 335 (also referred to herein as "event definitions," as described above). Gesture definition 335 contains definitions of gestures (e.g., predefined sequences of events and/or sub-events), such as gesture 1(337-1), gesture 2(337-2), and so on. In some embodiments, sub-events in gesture definition 335 include, for example, touch start, touch end, touch move, touch cancel, and multiple touches. In one example, the definition of gesture 1(337-1) is a double click on the displayed object. For example, a double tap includes a first touch on a displayed object (touch start) at a predetermined phase of the gesture, a first lift-off (touch end) at a next predetermined phase of the gesture, a second touch on a displayed object (touch start) at a subsequent predetermined phase of the gesture, and a second lift-off (touch end) at a last predetermined phase of the gesture. In another example, the definition of gesture 2(337-2) includes dragging on a displayed object. For example, dragging includes a touch (or contact) on a displayed object, a movement of the touch on the touch-sensitive display 156, and a lift-off of the touch (touch end).
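The idea of a gesture definition as a predefined sequence of sub-events can be sketched as follows. The types below are hypothetical and ignore timing constraints (a real double-tap definition also constrains the intervals between phases); they are not the event comparator 333 or gesture definitions 335 themselves.

```swift
// Illustrative sketch: a gesture definition expressed as an expected sequence
// of sub-events, plus a matcher that tracks progress through that sequence,
// mirroring the double-tap definition described above.
enum SubEvent { case touchBegan, touchMoved, touchEnded, touchCancelled }

struct GestureDefinition {
    let name: String
    let sequence: [SubEvent]
}

final class SequenceRecognizer {
    enum State { case possible, recognized, failed }

    let definition: GestureDefinition
    private(set) var state: State = .possible
    private var index = 0

    init(definition: GestureDefinition) { self.definition = definition }

    /// Feed the next sub-event; the recognizer advances, recognizes, or fails.
    func handle(_ subEvent: SubEvent) {
        guard state == .possible else { return }
        guard subEvent == definition.sequence[index] else {
            state = .failed        // the observed sequence no longer matches the definition
            return
        }
        index += 1
        if index == definition.sequence.count {
            state = .recognized    // the full predefined sequence has been observed
        }
    }
}

// Usage: a double tap is touch start, lift-off, touch start, lift-off.
let doubleTap = GestureDefinition(name: "double tap",
                                  sequence: [.touchBegan, .touchEnded, .touchBegan, .touchEnded])
let recognizer = SequenceRecognizer(definition: doubleTap)
let observed: [SubEvent] = [.touchBegan, .touchEnded, .touchBegan, .touchEnded]
observed.forEach { recognizer.handle($0) }
// recognizer.state == .recognized
```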
In some embodiments, event recognizer 325-1 also includes information for event delivery 339. The information for the event delivery 339 includes a reference to the corresponding event handler 319. Optionally, the information for event delivery 339 includes action target pairs 354. In some embodiments, in response to recognizing a gesture (or a portion of a gesture), event information (e.g., an action message) is sent to one or more targets 383 identified by the action target pair 354 (see fig. 3D). In other embodiments, the pair of action targets is activated in response to recognizing the gesture (or a portion of the gesture).
In some embodiments, gesture definitions 335 include definitions of gestures for respective user interface objects. In some embodiments, the event comparator 333 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view that displays three user interface objects on the touch-sensitive display 156, when a touch is detected on the touch-sensitive display 156, the event comparator 333 performs a hit-test to determine which, if any, of the three user interface objects is associated with the touch (event). If each displayed object is associated with a corresponding event handler 319, the event comparator 333 uses the results of the hit test to determine which event handler 319 should be activated. For example, the event comparator 333 selects the event handler 319 associated with the event and object that triggered the hit test.
In some embodiments, the respective gesture definition 337 for the respective gesture further includes a delay action that delays delivery of the event information until it has been determined whether the sequence of events and/or sub-events corresponds to the event type of the event recognizer.
When the respective event recognizer 325-1 determines that the series of events and/or sub-events does not match any event in the gesture definition 335, the respective event recognizer 325-1 enters an event failed state, after which the respective event recognizer 325-1 ignores subsequent events and/or sub-events of the touch-based gesture. In this case, other event recognizers (if any) that remain active for the hit view continue to track and process events and/or sub-events of the ongoing touch-based gesture.
In some embodiments, when no event recognizer for the hit view remains, the event information is sent to one or more event recognizers in a higher view in the view hierarchy. Alternatively, when no event recognizer for the hit view remains, the event information is ignored. In some embodiments, when no event recognizer for the views in the view hierarchy remains, the event information is sent to one or more event recognizers at a higher programming level in the programming hierarchy. Alternatively, when no event recognizer for the views in the view hierarchy remains, the event information is ignored.
In some embodiments, the corresponding event recognizer 325-1 includes an event recognizer state 334. Event recognizer state 334 includes the state of the corresponding event recognizer 325-1.
In some embodiments, event recognizer state 334 includes recognizer metadata and attributes 349. In some embodiments, the identifier metadata and attributes 349 include one or more of: A) configurable attributes, flags, and/or lists that indicate how the event delivery system should perform event and/or sub-event delivery to the actively participating event recognizer; B) configurable attributes, tags, and/or lists that indicate how event recognizers interact with each other; C) configurable attributes, tags, and/or lists that indicate how event recognizers receive event information; D) configurable attributes, flags, and/or lists that indicate how the event recognizer recognizes gestures; E) configurable attributes, tags, and/or lists that indicate whether an event and/or sub-event is delivered to different levels in the view hierarchy; and F) a reference to the corresponding event handler 319.
In some embodiments, event recognizer state 334 includes event/touch metadata 351. Event/touch metadata 351 includes event/touch information about respective events/touches that have been detected and corresponds to respective gesture definitions 337 of gesture definitions 335. The event/touch information includes one or more of: location, timestamp, speed, direction, distance, scale (or scale change), and angle (or angle change) of the corresponding event/touch.
In some embodiments, when one or more particular events and/or sub-events of a gesture are recognized, the respective event recognizer 325 activates the event handler 319 associated with the respective event recognizer 325. In some implementations, the respective event recognizer 325 delivers event information associated with the event to the event handler 319.
The event handler 319, when activated, performs one or more of the following: create and/or update data, create and update objects, and prepare and send display information for display on the display 126 or touch-sensitive display 156.
In some embodiments, the corresponding application view 317-2 includes view metadata 341. As described above with reference to fig. 3B, the view metadata 341 includes data about the view. Optionally, the view metadata 341 includes one or more of: a stop attribute 342, a skip attribute 343, a miss skip attribute 344, an interaction identifier 347 (discussed below with respect to a dynamically added drag, drop, and spring-loaded gesture recognizer), and other view metadata 329.
In some embodiments, a first active engagement view within the view hierarchy may be configured to prevent delivery of respective sub-events to an event recognizer associated with the first active engagement view. This behavior may implement skip attribute 343. When the skip attribute is set for the application view, delivery of the corresponding sub-event is still performed for the event recognizers associated with the other actively participating views in the view hierarchy.
Alternatively, a first active engagement view within the view hierarchy may be configured to prevent delivery of respective sub-events to an event recognizer associated with the first active engagement view unless the first active engagement view is a hit view. This behavior may implement conditional miss skip attribute 344.
In some embodiments, a second active engagement view within the view hierarchy is configured to prevent delivery of respective sub-events to event recognizers associated with the second active engagement view and to event recognizers associated with ancestors of the second active engagement view. This behavior may implement stop attribute 342.
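A minimal sketch of how a delivery routine might consult the stop, skip, and no-hit skip properties when deciding which actively participating views' event recognizers receive a sub-event. The types and the ordering assumption (views listed child-to-ancestor) are a reading of the three paragraphs above, not the device's implementation.

```swift
// Illustrative sketch only: a delivery decision that consults the stop, skip,
// and "no-hit skip" properties described above.
struct ActivelyInvolvedView {
    let isHitView: Bool
    let skip: Bool          // skip property 343: never deliver to this view's recognizers
    let noHitSkip: Bool     // no-hit skip property 344: deliver only if this is the hit view
    let stop: Bool          // stop property 342: also blocks delivery to ancestor views
}

/// Returns the subset of actively involved views (ordered child-to-ancestor)
/// whose event recognizers should receive the sub-event.
func viewsToDeliverTo(_ views: [ActivelyInvolvedView]) -> [ActivelyInvolvedView] {
    var result: [ActivelyInvolvedView] = []
    for view in views {
        if view.stop {
            // Deliver neither to this view's recognizers nor to its ancestors'.
            break
        }
        if view.skip { continue }                          // skip this view only
        if view.noHitSkip && !view.isHitView { continue }  // skip unless it is the hit view
        result.append(view)
    }
    return result
}
```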
Fig. 3C is a block diagram illustrating exemplary classes and instances of a gesture recognizer (e.g., event processing component 390) according to some embodiments.
A software application (e.g., application 133-1) has one or more event recognizers 345. In some embodiments, the corresponding event recognizer (e.g., 345-2) is an event recognizer class. The corresponding event recognizer (e.g., 345-2) includes event recognizer specific code 338 (e.g., a set of instructions defining the operation of the event recognizer) and a state machine 340.
In some embodiments, application state 321 of a software application (e.g., application 133-1) includes an instance of an event recognizer. Each instance of an event recognizer is an object having a state (e.g., event recognizer state 334). "execution" of a respective event recognizer instance is accomplished by executing corresponding event recognizer specific code (e.g., 338) and updating or maintaining state 334 of event recognizer instance 365. State 334 of event recognizer instance 365 comprises state 351 of state machine 340 of the event recognizer instance.
In some embodiments, the application state 321 includes a plurality of event recognizer instances 365. The respective event recognizer instance 365 generally corresponds to an event recognizer that has been bound (also referred to as "attached") to a view of an application. In some embodiments, one or more event recognizer instances 365 are bound to respective applications in the program hierarchy without reference to any particular view of the respective applications. In some embodiments, the application state 321 includes multiple instances (e.g., 365-1 through 365-L) of a respective event recognizer (e.g., 345-2). In some embodiments, the application state 321 includes a plurality of instances 365 of the event recognizer (e.g., 345-1 to 345-R).
In some implementations, a respective instance (e.g., 365-2) of the gesture recognizer 345 includes an event recognizer state 334. As described above, in some embodiments, event recognizer state 334 includes recognizer metadata and attributes 349 and event/touch metadata 351. In some implementations, the event recognizer state 334 also includes a view hierarchy reference 336 that indicates to which view the respective instance 365-2 of the gesture recognizer 345-2 is attached.
In some embodiments, the recognizer metadata and attributes 349 include the following or a subset or superset thereof:
an exclusive flag 324;
exclusive exception list 326;
waiting list 327; when included in the event recognizer state 334 for a respective event recognizer (or gesture recognizer), the list 327 indicates a group of event recognizers (or gesture recognizers), if any, that must enter an event impossible or event cancelled state before the respective event recognizer can recognize the respective event. In other words, the listed event or gesture recognizers must fail to recognize an input or event before the event recognizer with the waiting list 327 is allowed to recognize that input or event. In effect, the listed event recognizers have a higher priority for recognizing events than the event recognizer with waiting list 327.
delayed touch start flag 328;
delayed touch end flag 330; and
touch cancel flag 332.
In some embodiments, one or more event recognizers may be adapted to delay delivery of one or more sub-events of the sequence of sub-events until after the event recognizer recognizes the event. This behavior reflects a delayed event. For example, consider a single tap gesture in a view for which multiple tap gestures are possible. In that case, the tap event becomes a "tap + delay" recognizer. In essence, when an event recognizer implements this behavior, the event recognizer delays event recognition until it is certain that the sequence of sub-events does correspond to its event definition. This behavior may be appropriate when a recipient view cannot respond properly to cancelled events. In some embodiments, an event recognizer will delay updating its event recognition status to its respective actively involved view until the event recognizer determines that the sequence of sub-events does not correspond to its event definition. Delayed touch start flag 328, delayed touch end flag 330, and touch cancel flag 332 are provided to customize sub-event delivery techniques, as well as event recognizer and view status information updates, to specific needs.
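As an informal illustration only (not part of the embodiments described above), UIKit's public gesture recognizer API exposes analogous controls: a failure requirement that makes one recognizer wait for another, and properties that delay delivery of touch-began and touch-ended events to the view. A minimal Swift sketch, assuming a hypothetical view controller with single-tap and double-tap recognizers:

    import UIKit

    class TapExampleViewController: UIViewController {   // hypothetical example class
        override func viewDidLoad() {
            super.viewDidLoad()

            let singleTap = UITapGestureRecognizer(target: self, action: #selector(handleSingleTap(_:)))
            let doubleTap = UITapGestureRecognizer(target: self, action: #selector(handleDoubleTap(_:)))
            doubleTap.numberOfTapsRequired = 2

            // Analogue of the waiting-list behavior: the single-tap recognizer cannot
            // recognize until the double-tap recognizer has failed.
            singleTap.require(toFail: doubleTap)

            // Analogues of the delayed touch start/end flags: touch-began and
            // touch-ended events are held back from the view while recognition is pending.
            doubleTap.delaysTouchesBegan = true
            doubleTap.delaysTouchesEnded = true

            view.addGestureRecognizer(singleTap)
            view.addGestureRecognizer(doubleTap)
        }

        @objc private func handleSingleTap(_ recognizer: UITapGestureRecognizer) { /* single-tap action */ }
        @objc private func handleDoubleTap(_ recognizer: UITapGestureRecognizer) { /* double-tap action */ }
    }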
In some embodiments, the recognizer metadata and attributes 349 include the following or a subset or superset thereof:
state machine state/stage 351, which indicates the state of the state machine (e.g., 340) for the respective event recognizer instance (e.g., 365-2); state machine state/stage 351 can have various state values, such as "event possible," "event recognized," "event failed," and so on, as described below; alternatively or additionally, state machine state/stage 351 can have various phase values, such as a "touch phase start" value, which can indicate that the touch data structure defines a new touch that has not been referenced by a previous touch data structure; a "touch phase move" value can indicate that the defined touch has moved from a previous position; a "touch phase stationary" value can indicate that the touch has remained in the same location; a "touch phase end" value can indicate that the touch has ended (e.g., the user has lifted his/her finger from the surface of the multi-touch display); a "touch phase cancel" value can indicate that the touch has been cancelled by the device; a cancelled touch can be a touch that was not necessarily ended by the user, but that the device has determined should be ignored; for example, the device can determine that a touch was generated inadvertently (e.g., as a result of placing a portable multi-touch enabled device in a pocket) and ignore the touch for that reason; each value of state machine state/stage 351 can be an integer (referred to herein as a "gesture recognizer state value");
action-target pairs 354, where each pair identifies a target to which the respective event recognizer instance sends a recognized-action message in response to recognizing an event or touch as a gesture, or as part of a gesture;
delegate 353, which is a reference to a corresponding delegate when a delegate is assigned to the respective event recognizer instance; when no delegate is assigned to the respective event recognizer instance, delegate 353 contains a null value; and
an enabled attribute 356, which indicates whether the respective event recognizer instance is enabled; in some embodiments, when a respective event recognizer instance is not enabled (e.g., disabled), the respective event recognizer instance does not process events or touches.
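For illustration, the per-instance state enumerated above (state machine state, action-target pairs, delegate reference, and enabled attribute) could be modeled roughly as in the following Swift sketch; the type and member names are hypothetical and are not part of the described embodiments:

    import Foundation

    // Hypothetical sketch of per-instance event recognizer state; names are illustrative only.
    enum RecognizerStateValue { case possible, began, changed, recognized, failed, cancelled }

    struct ActionTargetPair {
        let target: AnyObject        // object that receives the action message
        let action: Selector         // action message sent when the event is recognized
    }

    struct EventRecognizerInstanceState {
        var stateMachineState: RecognizerStateValue = .possible   // cf. state machine state/stage 351
        var actionTargetPairs: [ActionTargetPair] = []            // cf. action-target pairs 354
        weak var delegate: AnyObject?                             // cf. delegate 353 (nil when unassigned)
        var isEnabled: Bool = true                                // cf. enabled attribute 356
    }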
Additional information regarding the gesture recognizer, including the state and attributes of the gesture recognizer, may be found in U.S. patent application 14/290,931, filed May 29, 2014, which is incorporated herein by reference in its entirety.
Each touch data structure 361 may include various entries. In some embodiments, the touch data structure may include data corresponding to at least the touch-specific entries in event/touch metadata 351, such as the following or a subset or superset thereof:
"first touch of view" entry 362;
"per touch information" entry 363, including "timestamp" information indicating the particular time (e.g., the time of the touch) to which the touch data structure relates; optionally, the "per touch information" entry 363 includes other information, such as the location of the corresponding touch; and
optional "tap count" entry 364.
Thus, each touch data structure can define what happens to a respective touch (or other input source) at a particular time (e.g., whether the touch is stationary, moving, etc.) as well as other information associated with the touch (such as location). Thus, each touch data structure can define the state of a particular touch at a particular time. One or more touch data structures referencing the same time may be added to the touch event data structure, which may define the state of all touches that a particular view is receiving at a certain time (as described above, some touch data structures may also reference touches that have ended and are no longer being received). Over time, multiple touch event data structures may be sent to the software implementing the view to provide the software with continuous information describing touches that occurred in the view.
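A rough Swift sketch of such a touch data structure, and of a touch event data structure that aggregates the touches a view is receiving at one time, might look like the following; all type and field names are hypothetical:

    import Foundation
    import CoreGraphics

    // Hypothetical touch phases corresponding to the phase values described above.
    enum TouchPhaseValue { case began, moved, stationary, ended, cancelled }

    // Hypothetical per-touch record ("touch data structure").
    struct TouchRecord {
        var isFirstTouchForView: Bool   // cf. "first touch of view" entry 362
        var timestamp: TimeInterval     // time to which this record relates
        var location: CGPoint           // position of the touch at that time
        var phase: TouchPhaseValue      // what happened to the touch at that time
        var tapCount: Int               // cf. optional "tap count" entry 364
    }

    // Hypothetical touch event record: the state of all touches a view is receiving at one time.
    struct TouchEventRecord {
        var timestamp: TimeInterval
        var touches: [TouchRecord]
    }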
The ability to handle complex touch-based gestures (optionally including multi-touch gestures) can add complexity to various software applications. In some cases, this additional complexity may be necessary to implement advanced and desirable interface features. For example, a game may need to be able to handle multiple simultaneous touches that occur in different views, because games often require multiple buttons to be pressed at the same time, or require accelerometer data to be combined with touches on a touch-sensitive surface. However, some simpler applications and/or views do not require advanced interface features. For example, a simple soft button (i.e., a button displayed on a touch-sensitive display) may operate satisfactorily with single-touch rather than multi-touch functionality. In these cases, the underlying OS may send unnecessary or excessive touch data (e.g., multi-touch data) to a software component associated with a view that is intended to be operable only with single touches (e.g., a single touch or tap on a soft button). Because the software component may need to process this data, it may need to embody all of the complexity of a software application that handles multiple touches, even though it is associated with a view for which only single touches are relevant. This can increase the cost of software development for the device, because software components that have traditionally been easy to program in a mouse interface environment (i.e., various buttons, etc.) may be much more complex in a multi-touch environment.
To reduce the complexity of recognizing complex touch-based gestures, delegates can be used to control the behavior of event recognizers, according to some embodiments. As described below, a delegate can determine, for example, whether a corresponding event recognizer (or gesture recognizer) can receive event (e.g., touch) information; whether a corresponding event recognizer (or gesture recognizer) can transition from the initial state (e.g., the event possible state) of its state machine to another state; and/or whether a corresponding event recognizer (or gesture recognizer) can simultaneously recognize an event (e.g., a touch) as a corresponding gesture without preventing other event recognizers (or gesture recognizers) from recognizing the event, and without being prevented by other event recognizers (or gesture recognizers) from recognizing the event.
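In UIKit's public API, a gesture recognizer delegate can make decisions of exactly these three kinds. The following Swift sketch is an analogue for orientation only, not the internal delegate 353 described above; the class name is hypothetical:

    import UIKit

    class ExampleGestureDelegate: NSObject, UIGestureRecognizerDelegate {
        // Whether the recognizer receives a given touch at all.
        func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                               shouldReceive touch: UITouch) -> Bool {
            return true
        }

        // Whether the recognizer may leave its initial ("possible") state.
        func gestureRecognizerShouldBegin(_ gestureRecognizer: UIGestureRecognizer) -> Bool {
            return true
        }

        // Whether this recognizer may recognize concurrently with another recognizer.
        func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                               shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
            return false
        }
    }

    // Usage: assign an instance to a recognizer's delegate property before the gesture begins,
    // e.g., panRecognizer.delegate = exampleDelegate.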
However, it should be understood that the foregoing discussion regarding the complexity of evaluating and processing user touches on touch-sensitive surfaces also applies to all forms of user input for operating electronic device 102 with input device 128, not all of which are initiated on a touch screen. These include, for example, coordinated mouse movements and mouse button presses, with or without single or multiple keyboard presses or holds; device rotations or other movements; user movements on a touch pad, such as taps, drags, scrolls, etc.; stylus inputs; verbal instructions; detected eye movements; biometric inputs; detected physiological changes in a user; and/or any combination thereof, which may be used as inputs corresponding to events and/or sub-events that define an event to be recognized.
Turning to event information flow, fig. 3D is a block diagram illustrating event information flow according to some embodiments.
In FIG. 3D, the event dispatcher module 315 (e.g., in the operating system 118 or application software 124) receives event information (e.g., for a touch event in view 378) and sends the event information to one or more applications (e.g., application 133-1 in this example, but not application 133-2). In some embodiments, application 133-1 includes multiple views (e.g., views 374, 376, and 378, corresponding to views 317, FIG. 3B) in view hierarchy 372, and multiple gesture recognizers (379-1 through 379-3) in those views. Application 133-1 also includes one or more gesture handlers 319-i, which correspond to targets 383 (e.g., targets 383-1 and 383-2) in target-action pairs 354. In some embodiments, the event dispatcher module 315 receives hit view information from the hit view determination module 314 and sends the event information to the hit view (e.g., 378) or to event recognizers attached to the hit view (e.g., 379-1 and 379-2). Additionally or alternatively, the event dispatcher module 315 receives hit level information from the hit level determination module 352 and sends the event information to applications in the hit level (e.g., 133-1 and 133-2) or to one or more event recognizers (e.g., 379-4) in the hit level applications. Note that in the example shown in FIG. 3D, application 133-2 does not have a view that is a hit view for the received event information, and the event information is not sent to any of the gesture recognizers (e.g., gesture recognizer 379-4) of application 133-2.
In some embodiments, one of the applications that receive event information is a default application (e.g., 133-2 may be a default application). In some embodiments, only a subset of the gesture recognizers in each receiving application is allowed (or configured) to receive the event information. For example, gesture recognizer 379-3 in application 133-1 does not receive the event information in this example. The gesture recognizers that receive event information are referred to herein as receiving gesture recognizers. In FIG. 3D, receiving gesture recognizers 379-1 and 379-2 receive the event information and compare the received event information with the corresponding gesture definitions 337 (FIG. 3B) in the receiving gesture recognizers. In FIG. 3D, gesture recognizers 379-1 and 379-4 have corresponding gesture definitions, and when those definitions match the received event information, the gesture recognizer with the matching gesture definition sends a corresponding action message (e.g., 381) to the corresponding gesture handler (e.g., 319-i). In this example, the gesture handler 319-j of application 133-2 does not receive an action message corresponding to the event information dispatched by the event dispatcher module 315, because the event information was sent to a gesture recognizer in hit view 378, and hit view 378 is not part of application 133-2. In some embodiments, a gesture handler is part of an application-independent module that can be used by a plurality of different applications as an application development framework, and the gesture handler is included in the code of the application when the application is compiled with the application development framework (e.g., the gesture handler is not part of the application core that is specific to the application, but is instead part of the application that is application-independent in the sense that it can be used by a plurality of different applications). In other circumstances, such as for a touch input in a view of application 133-2, the event information would be delivered to one or more gesture recognizers of application 133-2.
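For orientation only, the path from a matching gesture recognizer to a gesture handler corresponds, in UIKit's public API, to the familiar target-action pattern; a minimal Swift sketch (the class and method names are hypothetical):

    import UIKit

    class PanHandlerViewController: UIViewController {
        override func viewDidLoad() {
            super.viewDidLoad()
            // The view controller is the target; handlePan(_:) is the gesture handler that
            // receives action messages when the recognizer matches its gesture definition.
            let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
            view.addGestureRecognizer(pan)
        }

        @objc private func handlePan(_ recognizer: UIPanGestureRecognizer) {
            let translation = recognizer.translation(in: view)
            // Respond to the recognized gesture, e.g., move content by `translation`.
            _ = translation
        }
    }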
Attention is now directed to embodiments of a user interface ("UI") optionally implemented on portable multifunction device 100.
Fig. 4A illustrates an exemplary user interface of an application menu on portable multifunction device 100 according to some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
signal strength indicator 402 for wireless communications (such as cellular signals and Wi-Fi signals);
time 404;
a Bluetooth indicator 405;
a battery status indicator 406;
tray 408 with common application icons, such as:
an icon 416 of the phone module 138 labeled "phone", the icon 416 optionally including an indicator 414 of the number of missed calls or voice messages;
an icon 418 of the email client module 140 labeled "mail", the icon 418 optionally including an indicator 410 of the number of unread emails;
an icon 420 of the browser module 147 labeled "browser"; and
an icon 422 labeled "iPod" of the video and music player module 152 (also known as iPod (trademark of Apple Inc.) module 152); and
Icons for other applications, such as:
an icon 424 of the IM module 141 labeled "message";
an icon 426 of the calendar module 148 labeled "calendar";
an icon 428 of the image management module 144 labeled "photo";
an icon 430 of the camera module 143 labeled "camera";
an icon 432 of the online video module 155 labeled "online video";
an icon 434 of the stock market desktop applet 149-2 labeled "stock market";
an icon 436 of the map module 154 labeled "map";
an icon 438 labeled "weather" of the weather desktop applet 149-1;
an icon 440 of the alarm clock desktop applet 149-4 labeled "clock";
an icon 442 labeled "fitness support" for fitness support module 142;
an icon 444 of the notepad module 153 labeled "notepad"; and
an icon 446 for setting applications or modules, which provides access to the settings of the device 100 and its various applications 136.
It should be noted that the icon labels shown in fig. 4A are merely exemplary. For example, in some embodiments, icon 422 of video and music player module 152 is labeled "music" or "music player". Other labels are optionally used for the various application icons. In some embodiments, the label of a respective application icon includes the name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 in fig. 3) having a touch-sensitive surface 451 (e.g., tablet or trackpad 355 in fig. 3A) separate from the display 450. Device 300 also optionally includes one or more contact intensity sensors (e.g., one or more of sensors 357) for detecting the intensity of contacts on touch-sensitive surface 451 and/or one or more tactile output generators 359 for generating tactile outputs for a user of device 300.
Although many of the examples that follow will be given with reference to input on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects input on a touch-sensitive surface that is separate from the display, as shown in fig. 4B. In some embodiments, the touch-sensitive surface (e.g., 451 in fig. 4B) has a major axis (e.g., 452 in fig. 4B) that corresponds to a major axis (e.g., 453 in fig. 4B) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in fig. 4B) with the touch-sensitive surface 451 at locations that correspond to corresponding locations on the display (e.g., in fig. 4B, 460 corresponds to 468 and 462 corresponds to 470). Thus, user inputs (e.g., contacts 460 and 462, and movements thereof) detected by the device on the touch-sensitive surface (e.g., 451 in FIG. 4B) are used by the device to manipulate the user interface on the display (e.g., 450 in FIG. 4B) of the multifunction device while the touch-sensitive surface is separate from the display. It should be understood that similar methods are optionally used for the other user interfaces described herein.
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, single-finger tap gestures, finger swipe gestures, etc.), it should be understood that, in some embodiments, one or more of these finger inputs are replaced by input from another input device (e.g., a mouse-based input or a stylus input). For example, a swipe gesture is optionally replaced by a mouse click (e.g., instead of a contact), followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is optionally replaced by a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detecting the contact, followed by ceasing to detect the contact). Similarly, when multiple user inputs are detected simultaneously, it should be understood that multiple computer mice are optionally used simultaneously, or a mouse and finger contacts are optionally used simultaneously.
As used herein, the term "focus selector" refers to an input element that indicates the current portion of the user interface with which the user is interacting. In some implementations that include a cursor or other position marker, the cursor acts as a "focus selector" such that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in fig. 3 or touch-sensitive surface 451 in fig. 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system 112 in fig. 1 or the touch screen in fig. 4A) that enables direct interaction with user interface elements on the touch screen display, a contact detected on the touch screen serves as a "focus selector" such that when an input (e.g., a press input by the contact) is detected at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element) on the touch screen display, the particular user interface element is adjusted in accordance with the detected input. In some implementations, the focus is moved from one area of the user interface to another area of the user interface without corresponding movement of a cursor or movement of a contact on the touch screen display (e.g., by moving the focus from one button to another using tab or arrow keys); in these implementations, the focus selector moves according to movement of the focus between different regions of the user interface. Regardless of the particular form taken by the focus selector, the focus selector is typically a user interface element (or contact on a touch screen display) that is controlled by a user to communicate a user-desired interaction with the user interface (e.g., by indicating to the device an element with which the user of the user interface desires to interact). For example, upon detection of a press input on a touch-sensitive surface (e.g., a trackpad or touchscreen), the location of a focus selector (e.g., a cursor, contact, or selection box) over a respective button will indicate that the user desires to activate the respective button (as opposed to other user interface elements shown on the device display).
As used in this specification and claims, the term "intensity" of a contact on a touch-sensitive surface is the force or pressure (force per unit area) of the contact on the touch-sensitive surface (e.g., finger contact or stylus contact), or is a substitute for (surrogate for) the force or pressure of the contact on the touch-sensitive surface. The intensity of the contact has a range of values that includes at least four different values and more typically includes hundreds of different values (e.g., at least 256). The intensity of the contact is optionally determined (or measured) using various methods and various sensors or combinations of sensors. For example, one or more force sensors below or adjacent to the touch-sensitive surface are optionally used to measure forces at different points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average or sum) to determine an estimated contact force. Similarly, the pressure sensitive tip of the stylus is optionally used to determine the pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereof, the capacitance of the touch-sensitive surface in the vicinity of the contact and/or changes thereof and/or the resistance of the touch-sensitive surface in the vicinity of the contact and/or changes thereof are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the surrogate measurement of contact force or pressure is used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the surrogate measurement). In some implementations, the substitute measurement of contact force or pressure is converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). The intensity of the contact is used as an attribute of the user input, allowing the user to access additional device functionality that the user would otherwise not have readily accessible on a smaller sized device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or physical/mechanical controls such as knobs or buttons).
In some embodiments, the contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by the user (e.g., determine whether the user has "clicked" on an icon). In some embodiments, at least a subset of the intensity thresholds are determined as a function of software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and may be adjusted without changing the physical hardware of device 100). For example, the mouse "click" threshold of the trackpad or touch screen display may be set to any one of a wide range of predefined thresholds without changing the trackpad or touch screen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more intensity thresholds in a set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting multiple intensity thresholds at once with a system-level click on an "intensity" parameter).
As used in the specification and in the claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is optionally based on a predefined number of intensity samples, or a set of intensity samples acquired during a predetermined time period (e.g., 0.05 seconds, 0.1 seconds, 0.2 seconds, 0.5 seconds, 1 second, 2 seconds, 5 seconds, 10 seconds) relative to a predefined event (e.g., after detecting the contact, before detecting liftoff of the contact, before or after detecting a start of movement of the contact, before or after detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). The characteristic intensity of the contact is optionally based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10-percent value of the intensities of the contact, a value at half the maximum of the intensities of the contact, a value at 90 percent of the maximum of the intensities of the contact, and the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether the user has performed an operation. For example, the set of one or more intensity thresholds may include a first intensity threshold and a second intensity threshold. In this example, a contact whose characteristic intensity does not exceed the first threshold results in a first operation, a contact whose characteristic intensity exceeds the first intensity threshold but does not exceed the second intensity threshold results in a second operation, and a contact whose characteristic intensity exceeds the second threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more intensity thresholds is used to determine whether to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than to determine whether to perform a first operation or a second operation.
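As a worked illustration only (not part of the described embodiments), a characteristic intensity computed as the mean of sampled intensities and mapped to one of three operations by two thresholds might look like the following Swift sketch; the function and threshold names are hypothetical:

    // Hypothetical sketch: characteristic intensity as the mean of intensity samples,
    // compared against two thresholds to choose among three operations.
    enum IntensityOperation { case first, second, third }

    func characteristicIntensity(of samples: [Double]) -> Double {
        guard !samples.isEmpty else { return 0 }
        return samples.reduce(0, +) / Double(samples.count)   // mean of the sampled intensities
    }

    func operation(for samples: [Double],
                   firstThreshold: Double,
                   secondThreshold: Double) -> IntensityOperation {
        let intensity = characteristicIntensity(of: samples)
        if intensity > secondThreshold { return .third }
        if intensity > firstThreshold { return .second }
        return .first
    }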
In some implementations, a portion of a gesture is identified for purposes of determining a characteristic intensity. For example, the touch-sensitive surface may receive a continuous swipe contact that transitions from a start location and reaches an end location (e.g., a drag gesture), at which point the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location may be based on only a portion of the continuous swipe contact, rather than the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some implementations, a smoothing algorithm may be applied to the intensities of the swipe gesture before determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining the characteristic intensity.
User interface figures described herein optionally include various intensity diagrams that show the current intensity of a contact on the touch-sensitive surface relative to one or more intensity thresholds (e.g., a contact detection intensity threshold IT0, a light press intensity threshold ITL, a deep press intensity threshold ITD (e.g., at least initially higher than IL), and/or one or more other intensity thresholds (e.g., an intensity threshold IH that is lower than IL)). This intensity diagram is typically not part of the displayed user interface, but is provided to aid in interpreting the figures. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from those typically associated with clicking a button of a physical mouse or trackpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact detection intensity threshold IT0, below which the contact is no longer detected), the device will move the focus selector in accordance with movement of the contact across the touch-sensitive surface without performing the operations associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures.
In some embodiments, the response of the device to an input detected by the device depends on criteria based on the intensity of the contact during the input. For example, for some "tap" inputs, the intensity of the contact exceeding a first intensity threshold during the input triggers a first response. In some embodiments, the response of the device to an input detected by the device depends on criteria including both the intensity of contact during the input and time-based criteria. For example, for some "deep press" inputs, the intensity of a contact that exceeds a second intensity threshold, greater than the first intensity threshold of a light press, triggers a second response during the input as long as a delay time elapses between the first intensity threshold being met and the second intensity threshold being met. The delay time is typically less than 200ms in duration (e.g., 40ms, 100ms, or 120ms, depending on the magnitude of the second intensity threshold, wherein the delay time increases as the second intensity threshold increases). This delay time helps avoid accidental deep press inputs. As another example, for some "deep press" inputs, a period of reduced sensitivity will occur after the first intensity threshold is reached. During this period of reduced sensitivity, the second intensity threshold is increased. This temporary increase in the second intensity threshold also helps to avoid accidental deep-press inputs. For other deep press inputs, the response to detecting the deep press input does not depend on time-based criteria.
In some embodiments, one or more of the input intensity thresholds and/or corresponding outputs vary based on one or more factors, such as user settings, contact motion, input timing, application execution, rate at which intensity is applied, number of simultaneous inputs, user history, environmental factors (e.g., environmental noise), focus selector position, and so forth. Exemplary factors are described in U.S. patent applications 14/399,606 and 14/624,296, which are incorporated by reference herein in their entirety.
For example, FIG. 4C illustrates a dynamic intensity threshold 480 that changes over time based in part on the intensity of touch input 476 over time. The dynamic intensity threshold 480 is the sum of two components: a first component 474 that decays over time after a predefined delay time p1 from when the touch input 476 is initially detected, and a second component 478 that trails the intensity of the touch input 476 over time. The initial high intensity threshold of the first component 474 reduces accidental triggering of a "deep press" response while still allowing an immediate "deep press" response if the touch input 476 provides sufficient intensity. The second component 478 reduces unintentional triggering of a "deep press" response caused by gradual intensity fluctuations in the touch input. In some embodiments, a "deep press" response is triggered when the touch input 476 satisfies the dynamic intensity threshold 480 (e.g., at point 481 in FIG. 4C).
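A minimal numerical sketch of such a two-component threshold follows, under the assumptions that the first component decays exponentially after the delay p1 and that the second component is a scaled, trailing copy of the recent touch intensity; the exact component definitions belong to the referenced figures and applications, not to this sketch, and all parameter names and values here are hypothetical:

    import Foundation

    // Hypothetical sketch of a dynamic intensity threshold built from two components.
    func dynamicIntensityThreshold(timeSinceTouchDown t: Double,
                                   recentIntensity: Double,
                                   initialThreshold: Double = 4.0,
                                   decayDelayP1: Double = 0.1,
                                   decayRate: Double = 8.0,
                                   trailingFraction: Double = 0.5) -> Double {
        // First component: holds its initial value until p1, then decays toward zero.
        let firstComponent = (t <= decayDelayP1)
            ? initialThreshold
            : initialThreshold * exp(-decayRate * (t - decayDelayP1))
        // Second component: trails the touch intensity so that gradual fluctuations
        // do not cross the threshold unintentionally.
        let secondComponent = trailingFraction * recentIntensity
        return firstComponent + secondComponent
    }

    // A "deep press" response is triggered when the current intensity meets the dynamic threshold.
    func isDeepPress(currentIntensity: Double, threshold: Double) -> Bool {
        return currentIntensity >= threshold
    }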
FIG. 4D illustrates another dynamic intensity threshold 486 (e.g., intensity threshold ID). FIG. 4D also shows two other intensity thresholds: a first intensity threshold IH and a second intensity threshold IL. In FIG. 4D, although the touch input 484 satisfies the first intensity threshold IH and the second intensity threshold IL prior to time p2, no response is provided until a delay time p2 has elapsed at time 482. Also in FIG. 4D, the dynamic intensity threshold 486 decays over time, with the decay starting at time 488, after a predefined delay time p1 has elapsed from time 482 (when the response associated with the second intensity threshold IL was triggered). This type of dynamic intensity threshold reduces accidental triggering of a response associated with the dynamic intensity threshold ID immediately after, or concurrently with, triggering a response associated with a lower intensity threshold (such as the first intensity threshold IH or the second intensity threshold IL).
FIG. 4E shows another dynamic intensity threshold 492 (e.g., intensity threshold ID). In FIG. 4E, a response associated with the intensity threshold IL is triggered after a delay time p2 has elapsed from the time that the touch input 490 was initially detected. Concurrently, the dynamic intensity threshold 492 decays after a predefined delay time p1 has elapsed from the time that the touch input 490 was initially detected. Thus, a decrease in the intensity of the touch input 490 after triggering the response associated with the intensity threshold IL, followed by an increase in the intensity of the touch input 490 without releasing the touch input 490, can trigger a response associated with the intensity threshold ID (e.g., at time 494), even when the intensity of the touch input 490 is below another intensity threshold (e.g., the intensity threshold IL).
An increase in the characteristic intensity of a contact from an intensity below the light press intensity threshold ITL to an intensity between the light press intensity threshold ITL and the deep press intensity threshold ITD is sometimes referred to as a "light press" input. An increase in the characteristic intensity of a contact from an intensity below the deep press intensity threshold ITD to an intensity above the deep press intensity threshold ITD is sometimes referred to as a "deep press" input. An increase in the characteristic intensity of a contact from an intensity below the contact detection intensity threshold IT0 to an intensity between the contact detection intensity threshold IT0 and the light press intensity threshold ITL is sometimes referred to as detecting contact on the touch surface. A decrease in the characteristic intensity of a contact from an intensity above the contact detection intensity threshold IT0 to an intensity below the contact detection intensity threshold IT0 is sometimes referred to as detecting liftoff of the contact from the touch surface. In some embodiments, IT0 is zero. In some embodiments, IT0 is greater than zero. In some illustrations, a shaded circle or ellipse is used to represent the intensity of a contact on the touch-sensitive surface. In some illustrations, a circle or ellipse without shading is used to represent a respective contact on the touch-sensitive surface without specifying the intensity of the respective contact.
In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting a respective press input performed with a respective contact (or contacts), wherein the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or contacts) above a press input intensity threshold. In some embodiments, the respective operation is performed in response to detecting that the intensity of the respective contact increases above the press input intensity threshold (e.g., performing the respective operation on a "down stroke" of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below the press input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press input threshold (e.g., the respective operation is performed on an "up stroke" of the respective press input).
In some embodiments, the device employs intensity hysteresis to avoid accidental input sometimes referred to as "jitter," where the device defines or selects a hysteresis intensity threshold having a predefined relationship to the press input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below a hysteresis intensity threshold corresponding to the press input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., the respective operation is performed on an "up stroke" of the respective press input). Similarly, in some embodiments, a press input is detected only when the device detects an increase in contact intensity from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press input intensity threshold and optionally a subsequent decrease in contact intensity to an intensity at or below the hysteresis intensity, and a corresponding operation is performed in response to detecting the press input (e.g., depending on the circumstances, the increase in contact intensity or the decrease in contact intensity).
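An illustrative Swift sketch of press-input detection with hysteresis follows; the detector type, thresholds, and their relationship (hysteresis at 75% of the press threshold in the usage line) are hypothetical choices used only to show the idea:

    // Hypothetical press-input detector using a lower hysteresis threshold to avoid "jitter".
    struct PressInputDetector {
        let pressThreshold: Double
        let hysteresisThreshold: Double   // e.g., 75% of the press threshold
        var pressActive = false

        // Returns true on the "down stroke" (intensity rises above the press threshold);
        // the press ends only when intensity falls below the lower hysteresis threshold.
        mutating func update(intensity: Double) -> Bool {
            if !pressActive && intensity > pressThreshold {
                pressActive = true
                return true               // press input detected
            }
            if pressActive && intensity < hysteresisThreshold {
                pressActive = false       // "up stroke": press input ends
            }
            return false
        }
    }

    // Example: var detector = PressInputDetector(pressThreshold: 1.0, hysteresisThreshold: 0.75)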
For ease of explanation, the description of operations performed in response to a press input associated with a press input intensity threshold or in response to a gesture that includes a press input is optionally triggered in response to detecting: the intensity of the contact increases above the press input intensity threshold, the intensity of the contact increases from an intensity below the hysteresis intensity threshold to an intensity above the press input intensity threshold, the intensity of the contact decreases below the press input intensity threshold, or the intensity of the contact decreases below the hysteresis intensity threshold corresponding to the press input intensity threshold. Additionally, in examples in which operations are described as being performed in response to detecting that the intensity of the contact decreases below the press input intensity threshold, the operations are optionally performed in response to detecting that the intensity of the contact decreases below a hysteresis intensity threshold that corresponds to and is less than the press input intensity threshold. As described above, in some embodiments, the triggering of these operations also depends on the time-based criteria being met (e.g., a delay time has elapsed between the first intensity threshold being met and the second intensity threshold being met).
User interface and associated process
Attention is now directed to embodiments of a user interface ("UI") and associated processes that may be implemented on an electronic device, such as portable multifunction device 100 or device 300, having a display, a touch-sensitive surface, and (optionally) one or more sensors for detecting intensity of contacts with the touch-sensitive surface.
Fig. 5A-5R illustrate exemplary user interfaces for drag-and-drop operations, according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes described below with reference to fig. 7A-7F, 8A-8E, and 9A-9D. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device having touch-sensitive display system 112. In such embodiments, the focus selector is optionally: a respective finger or stylus contact, a point of representation corresponding to the finger or stylus contact (e.g., a center of gravity of or a point associated with the respective contact), or a center of gravity of two or more contacts detected on the touch-sensitive display system 112. However, similar operations are optionally performed on a device having a display 450 and a separate touch-sensitive surface 451 in response to detecting a contact on the touch-sensitive surface 451 when the user interface shown in the figures is displayed on the display 450 along with a focus selector.
In FIGS. 5A-5R and the following description of methods 700, 800, and 900, touches or touch inputs, and movements of touches or touch inputs, are made under the control of the respective user by the placement and/or movement, on or near a touch screen (e.g., touch screen 112), touch pad (e.g., touch pad 355, also sometimes referred to as a trackpad), or other touch-sensitive surface of the electronic device, of (A) one or more fingers of the user or (B) a stylus or similar implement.
Fig. 5A shows the user interface of portable multifunction device 100 displayed on touch screen 112, with a user interface area 502 for a first application (application A) and another user interface area 504 for a second application (application B). The user interface area 502 for application A includes four objects, object 1, object 2, object 3, and object 4, and the user interface area 504 for application B includes a fifth object (object 5). In some embodiments, area 502 of application A may be a view of application A. In some embodiments, the displayed representations of object 1 through object 4 in area 502 may also be views of application A. Similarly, area 504 of application B may be a view of application B, and the displayed representation of object 5 in area 504 may also be a view of application B.
As shown on the right side of FIG. 5A, application A, or one or more of the views of application A, has interaction identifiers for drag, drop, and spring-load operations, indicating that supplemental gesture recognizers for drag, drop, and spring-load gestures can (or will) be added to one or more of the views of application A when the applicable supplemental gesture recognizer addition criteria are met. In some embodiments, interaction identifiers are stored or included in view metadata 341 (FIG. 3B) for each application view having such interaction identifiers. Also, in the example shown in FIG. 5A, no supplemental gesture recognizer has been added to any of the views of application A, nor has any gesture recognizer failure dependency been established.
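For reference only, UIKit's public drag and drop API lets an application opt a view into drag, drop, and (as shown later) spring-loading behavior by attaching interaction objects, which is loosely analogous to the interaction identifiers and supplemental gesture recognizers described here, though it is not the internal mechanism of these embodiments. A minimal Swift sketch with a hypothetical helper class:

    import UIKit

    class DragDropCoordinator: NSObject, UIDragInteractionDelegate, UIDropInteractionDelegate {
        // Opt a view into drag and drop interactions.
        func enableDragAndDrop(on view: UIView) {
            view.addInteraction(UIDragInteraction(delegate: self))
            view.addInteraction(UIDropInteraction(delegate: self))
        }

        // Provide the items to drag when a drag gesture begins on the view.
        func dragInteraction(_ interaction: UIDragInteraction,
                             itemsForBeginning session: UIDragSession) -> [UIDragItem] {
            let provider = NSItemProvider(object: "Object 4" as NSString)   // placeholder payload
            return [UIDragItem(itemProvider: provider)]
        }

        // Accept drops by proposing a copy operation while the drag hovers over the view.
        func dropInteraction(_ interaction: UIDropInteraction,
                             sessionDidUpdate session: UIDropSession) -> UIDropProposal {
            return UIDropProposal(operation: .copy)
        }

        // Handle the dropped items when the user lifts the touch over the view.
        func dropInteraction(_ interaction: UIDropInteraction,
                             performDrop session: UIDropSession) {
            _ = session.loadObjects(ofClass: NSString.self) { items in
                // Insert the dropped objects into the destination view's content.
                _ = items
            }
        }
    }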
Fig. 5B shows a first touch (touch 1) on the object 4 in the user interface shown in fig. 5A. Thus, the appearance of object 4 in user interface area 502 of application A is updated. Additionally, the focus selector 506 now overlaps object 4, indicating the location of the first touch input.
If the first touch moves laterally (e.g., to location 505 in user interface area 502) without satisfying predefined drag start recognition criteria (discussed in more detail below), the focus selector associated with the first touch moves (e.g., to location 505) without dragging the object (object 4) on which it initially landed. As discussed below with reference to FIG. 5D, this result may occur when the first touch moves laterally before it has remained at the initial touch location, without substantial lateral movement, for at least the minimum amount of time that a touch input must remain in contact with the touch-sensitive surface of the electronic device in order to meet the predefined drag start recognition criteria.
As shown on the right side of FIG. 5B, the time at which the first touch (touch 1) on object 4 is detected is t1. Application A, or its views, continues to have interaction identifiers for drag, drop, and spring-load operations. Further, in the example shown in FIG. 5B, supplemental gesture recognizers for recognizing a drag start gesture (the start of a drag gesture) and a drag add gesture (a gesture for adding an object to an existing drag operation) have been added to the application view corresponding to object 4, or to the view corresponding to user interface area 502. Further, a gesture recognizer failure dependency is established such that the Long Press (LP) gesture recognizer for the application view corresponding to object 4, or for the view of user interface area 502, depends on failure of the Drag Start (DS) gesture recognizer. In some embodiments, this failure dependency is indicated by listing the drag start gesture recognizer in the waiting list 327 (FIG. 3B) of the long press gesture recognizer.
Further discussion of interaction identifiers, supplemental gesture recognizers, the automatic or dynamic addition of supplemental gesture recognizers to an application or application view, and the automatic establishment of failure dependencies between standard gesture recognizers and supplemental gesture recognizers is provided below with respect to FIGS. 6D, 6E, 7A-7F, 8A-8E, and 9A-9D.
Fig. 5C illustrates the user interface generated when the first touch (touch 1) is recognized by a deep press (e.g., intensity-based) or long press (e.g., time-based) gesture recognizer. In this example, a set of menu options associated with object 4 is displayed in response to recognizing the first touch as a long press gesture.
As shown on the right side of FIG. 5C, the time at which the first touch (touch 1) on object 4 is recognized as a long press gesture is t1 + T2, where t1 is the time at which the first touch was first detected. In some embodiments, T2 is a threshold amount of time that a touch input must remain in contact with the touch-sensitive surface of the electronic device, without substantial lateral movement, in order to meet predefined long press recognition criteria.
Fig. 5D shows the start of the drag gesture by the first touch (touch 1). Accordingly, the object 4 moves or drags with the first touch. At this time, the first touch and the object 4 remain in the user interface area 502 of the application a.
As shown on the right side of FIG. 5D, the time at which the first touch (touch 1) on object 4 is recognized as a drag start gesture is t1 + T3, where t1 is the time at which the first touch was first detected. In some embodiments, T3 is at least the minimum amount of time that a touch input must remain in contact with the touch-sensitive surface of the electronic device, without substantial lateral movement, before lateral movement begins in order to meet the predefined drag start recognition criteria. Conversely, if movement of touch 1 is detected before the time threshold (T3) has elapsed, touch 1 moves on the display (e.g., to location 505, as shown in FIG. 5B) without initiating a drag gesture, and thus without dragging object 4.
In some implementations, after initially recognizing the touch input (e.g., as a long touch) using a standard gesture recognizer, as shown in fig. 5C, the electronic device then recognizes the touch input as a second gesture, such as a drag start gesture, using a supplemental gesture recognizer, as shown in fig. 5D. In such embodiments, T3 (fig. 5D) is a greater or longer time interval than T2 (fig. 5C).
As also shown on the right side of FIG. 5D, a session ID (equal to 1 in this example) has been assigned to the first touch, or to the object being dragged by the first touch. As described in more detail below, when a touch is recognized as a drag gesture, a session ID is assigned to the touch, and the touch becomes a separate touch. The session ID corresponds to the set of supplemental gesture recognizers, added to application A or one or more of its views, that control the processing of the first touch once it is recognized as a drag gesture. Other drag gestures (e.g., concurrent or subsequent drag gestures) will be given their own session IDs, different from the session ID of the drag that includes the first touch.
In these figures (e.g., FIGS. 5A-5R), separate touches are depicted using a different graphic (ellipses with a cross-hatched fill pattern and a darker perimeter) than regular touches (ellipses with a striped fill pattern), which are also referred to as attached touches or view-attached touches. In some embodiments, a separate touch is detached from the standard gesture recognizers of the application views or user interface areas it traverses, and therefore touch events associated with the placement and/or movement of the separate touch (sometimes referred to herein as drag events) are processed by the supplemental gesture recognizers associated with those views or user interface areas, rather than by the standard gesture recognizers associated with those views or user interface areas. When predefined criteria are met, supplemental gesture recognizers are dynamically added (e.g., by a system process) to the application or view. Touch events associated with the placement and/or movement of a regular touch (sometimes referred to herein as an attached touch or view-attached touch) are processed by the standard gesture recognizers associated with those views or user interface areas. A standard gesture recognizer is a gesture recognizer that is typically associated with, or assigned to, an application view or a user interface area of an application by the application.
In FIG. 5E, the first touch has moved into user interface area 504 of application B, dragging object 4 with it. The right side of FIG. 5E shows that application B, or one or more of the views of application B, has interaction identifiers for drop and spring-load operations, indicating that supplemental gesture recognizers for drop and spring-load gestures can (or will) be added to one or more of the views of application B when the applicable supplemental gesture recognizer addition criteria are met. Also, in the example shown in FIG. 5E, one or more supplemental gesture recognizers for the drop gesture have been added to one or more views of application B, no gesture recognizer failure dependency has yet been established, and a session ID has been assigned (or continues to be assigned) to the drag operation that includes the first touch. Other examples of supplemental gesture recognizers added to views, and other examples of gesture recognizer failure dependencies, are discussed below with respect to subsequent figures in the sequence (such as FIG. 5M).
In FIG. 5F, a two-finger touch input is detected on object 4 (e.g., two simultaneously detected touches on the same object or view in an application). In some implementations, a two-finger touch is used to initiate a drag operation without having to wait for a timeout period before initiating lateral movement of the touch input (e.g., dragging).
In FIG. 5G, the two-finger touch starts to move, and object 4 is dragged by the two-finger touch. The time in the figure is t1 + ΔT, where t1 is the time at which the first touch was first detected, and ΔT is a time period shorter than the T2 and T3 time periods described above. In some embodiments or circumstances, ΔT is close to, but greater than, zero. Movement of the two-finger touch causes the touches to be separated, and session IDs are assigned to the two touch inputs (e.g., touch 1 and touch 2). As described in more detail below, both touches are separated upon a determination that the touches satisfy predefined drag start criteria. The interaction identifiers, added gesture recognizers, and failure dependencies are the same as in FIG. 5D. Recognition of the drag gesture is indicated in FIG. 5G by a change in the appearance of the dragged object (e.g., it is drawn with a thicker perimeter), and separation of the two touches is indicated in FIG. 5G by the changed appearance of the two touches.
In FIG. 5H, the two-finger touch moves from user interface area 502 of application A into user interface area 504 of application B. When the two-finger touch enters user interface area 504 of application B, a drop gesture recognizer is added to user interface area 504. As also shown in FIG. 5H, user interface area 504 has interaction identifiers for drag, drop, and spring-loaded interactions.
FIG. 5I represents a transition that occurs from the user interface shown in FIG. 5D when a second touch (touch 2) is detected. In this example, the second touch is on an object (object 2) in user interface area 502 for application A. Because the first touch has been separated, the second touch is processed independently of the first touch unless the second input is recognized as a drag add gesture. Thus, if the second touch is not recognized as a drag add gesture, the second touch is processed using the standard gesture recognizers for user interface area 502 of application A. In this example, object 1, object 2, and object 3 are elements of a vertically scrollable list in application A. When the second touch (touch 2) moves vertically downward, as shown in FIG. 5J, the scrollable list scrolls downward by an amount corresponding to the amount of vertical movement of the second touch. As shown, the movement of the second touch need not be perfectly vertical to cause scrolling. The scrolling of the list causes an additional element (object Z) to scroll into view at the top of the displayed scrollable list.
In FIG. 5K, the second touch (touch 2) continues to be moved (e.g., by the user of device 100). In this example, the second touch moves downward and horizontally into user interface area 504 of application B. Because the second touch has not been recognized as a drag start gesture, the second touch remains attached to application A, or to one or more views of application A, even when the second touch moves into user interface area 504; therefore, touch events corresponding to the second touch continue to be delivered to application A, and in response to those touch events, the scrollable list in user interface area 502 of application A continues to scroll in accordance with the amount of vertical movement of the second touch. Continued scrolling of the scrollable list causes another additional element (object Y) to scroll into view at the top of the displayed scrollable list.
Referring again to FIG. 5I, in some embodiments, the drag add gesture is a tap or other distinctive touch gesture performed on an object available for addition to a drag operation (e.g., an object within a predefined distance of an object already participating in the drag operation, or an object designated, for example by the application in which the object is displayed, as available for addition to the drag gesture). If the second touch shown in FIG. 5I is recognized as a drag add gesture, object 2 is added to the drag operation associated with the first touch, as shown in FIG. 5L. In FIGS. 5I and 5L, time t2 corresponds to the time at which the second touch (touch 2) was first detected, and T4 is, or corresponds to, the maximum duration of a drag add gesture.
In FIG. 5L, object 2 is stacked behind object 4, and both objects (object 2 and object 4) are dragged as the first touch moves laterally on touch screen 112. In some embodiments, the drag add gesture is required to be a two-finger tap gesture rather than the one-finger tap gesture shown in FIGS. 5I and 5L, while in some other embodiments both one-finger tap gestures and two-finger tap gestures can be drag add gestures, so long as they satisfy predefined drag add criteria (e.g., requiring that the user input include a contact at a location corresponding to a user interface object that can be dragged, and requiring that the distance between the touch of the recognized drag operation (the first touch) and the second touch be no greater than a predefined distance or proximity threshold).
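A trivial Swift sketch of the proximity requirement mentioned above follows; the threshold value and function name are hypothetical:

    import CoreGraphics

    // Hypothetical check: a second touch can join an existing drag only if it lands
    // within a proximity threshold of the touch driving the drag.
    func canAddToDrag(dragTouchLocation: CGPoint,
                      candidateTouchLocation: CGPoint,
                      proximityThreshold: CGFloat = 100.0) -> Bool {
        let dx = dragTouchLocation.x - candidateTouchLocation.x
        let dy = dragTouchLocation.y - candidateTouchLocation.y
        return (dx * dx + dy * dy).squareRoot() <= proximityThreshold
    }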
In FIG. 5M, object 4 and object 2 (shown only by its shadow, stacked under object 4) have been dragged into user interface area 504 of application B. In this example, in response to the separate touch (touch 1) being dragged into user interface area 504 of application B, drop and spring-loaded supplemental gesture recognizers are added to user interface area 504 of application B, and a failure dependency is also created or added such that the long press gesture recognizer depends on failure of the spring-loaded gesture recognizer in order to recognize a long press gesture.
FIG. 5N shows the update of the user interface shown in FIG. 5K that occurs upon lift-off of the first touch after object 4 and object 2 have been dragged into user interface area 504 of application B. In FIG. 5N, after lift-off of the first touch, object 4 and object 2 are unstacked and positioned in user interface area 504 of application B. In some embodiments, once the drag operation (or drop operation) has been completed and the corresponding touch has lifted off, the supplemental gesture recognizers used to process the touch are automatically removed, and some (or all) of the gesture recognizer dependencies corresponding to the removed supplemental gesture recognizers are also removed. Thus, in this example, FIG. 5N shows a set of interaction identifiers, but the supplemental gesture recognizers listed in FIG. 5M are no longer listed in FIG. 5N, and similarly the dependence of the long press (LP) gesture recognizer on the (supplemental) spring-loaded gesture recognizer shown in FIG. 5M is no longer present in FIG. 5N.
FIG. 5O represents a transition from the user interface shown in FIG. 5I after the second touch (touch 2) is recognized as a drag gesture. This is a second drag gesture, independent of the first drag gesture of the first touch, and the corresponding touch (touch 2) is assigned a different session ID, e.g., session ID 2, than the first touch, which in this example is assigned session ID 1. Additionally, the second touch is detached when it is recognized as a drag gesture, so that the standard gesture recognizers for user interface area 502 no longer need to process touch events generated by (or for) the second touch.
FIG. 5P represents a transition from the user interface shown in FIG. 5O, in which the first touch (touch 1) and the second touch (touch 2) move in different directions, dragging object 4 with the movement of the first touch and dragging object 2 with the movement of the second touch. The first touch and the second touch are both detached touches, and each touch is used to drag a different object or set of objects than the other.
FIG. 5Q represents a transition from the user interface shown in FIG. 5E after the first touch (touch 1) has moved down to object 5 in user interface area 504. As a result of this movement, object 4 is now positioned at least partially over object 5. In this example, a supplemental spring-loaded gesture recognizer has been added to the set of gesture recognizers associated with one or more views of application B. In some embodiments, the supplemental spring-loaded gesture recognizer is added to the one or more views of application B upon detection of a triggering event or condition, such as movement of a dragged object (e.g., object 4) over another object (e.g., object 5) capable of responding to a spring-loaded gesture.
FIG. 5R represents a transition from the user interface shown in FIG. 5Q after the first touch (touch 1), or the positioning of the object dragged by the first touch (object 4) over object 5, is recognized as a spring-loaded gesture. As described in more detail below, recognition of the spring-loaded gesture is accomplished by one of the supplemental gesture recognizers added to user interface area 504 (e.g., to a view of application B) based on the spring-loaded interaction identifier and the presence in user interface area 504 of a touch that triggers the addition of one or more supplemental gesture recognizers. In response to the spring-loaded gesture, object 5, which is a folder object (sometimes referred to as a container or container object), is opened while object 4 continues to be dragged by the first touch (touch 1), as shown in FIG. 5R. Optionally, the user may continue to move the first touch in order to drag object 4 into the now-open folder corresponding to object 5, and then drop object 4 into the folder by lifting the first touch off of touch screen 112 after object 4 has been dragged into the interior of the open folder corresponding to object 5.
It should be noted that, unlike the scenario shown in FIG. 5N, in FIG. 5R the first touch continues to be in contact with touch screen 112, so a drop operation or even a further spring-loaded operation may still occur, and thus the drop and spring-loaded supplemental gesture recognizers continue to be associated with one or more views of application B.
Touch events are generated for a touch to represent the touch over time. A touch has a phase selected from a predetermined set of phases: a touch start for a respective touch indicates that the respective touch has just been detected on the touch-sensitive surface; a touch move for the respective touch indicates that the respective touch has moved on the touch-sensitive surface; a touch end for the respective touch indicates that the respective touch has ceased to be detected on the touch-sensitive surface as part of ending a gesture; and a touch cancel for the respective touch indicates that the respective touch has been determined to be an accidental touch or has otherwise been identified as a touch that should be ignored. In some implementations, the touch events are processed by one or more gesture recognizers to recognize a gesture (e.g., a tap, swipe, drag, pinch, etc.) performed by a touch detected on the touch-sensitive surface. In some embodiments, when a touch cancel event (i.e., a touch event for (or including) a touch having a phase equal to touch cancel) is issued for a respective touch, an application performing an operation based on the respective touch cancels the operation and reverts to the state prior to detection of the touch. In some embodiments, when a touch cancel event is received for a respective touch, one or more gesture recognizers that are evaluating the touch as part of a gesture are cancelled (e.g., on the assumption that cancellation of the touch makes it impossible to satisfy the gesture definition used by the gesture recognizer). In some embodiments, the application responds to a touch cancel event for a respective touch differently than to a touch end event. For example, a tap input ending with a touch cancel event (e.g., a touch event, for a touch corresponding to a tap input, having a touch cancel phase) is ignored, while a tap input ending with a touch end event (e.g., a touch event, for a touch corresponding to a tap input, having a touch end phase) is processed as a tap. As another example, when a moving contact that moves an object across the display is detected, if a touch event with a touch end phase is detected after the object has moved across the display, the object remains in the position to which it was moved, or (optionally) continues to advance with some inertia; conversely, if a touch event with a touch cancel phase is detected after the object has moved across the display, the movement of the object is reversed back to the position at which the object was displayed before the moving contact was detected. An example of a touch processed without detaching the touch is described below with reference to FIG. 6B.
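Purely as an illustration (not part of the disclosed embodiments), the following Swift sketch shows how an application might respond differently to the touch end and touch cancel phases described above, reverting an in-progress move when the touch is cancelled. The class and property names are hypothetical; only the UIResponder touch-handling overrides are standard UIKit.

```swift
import UIKit

// Hypothetical view controller illustrating the end-vs-cancel distinction:
// a cancelled touch reverses the operation, an ended touch commits it.
final class DraggableItemViewController: UIViewController {
    private let itemView = UIView()
    private var itemOriginalCenter: CGPoint = .zero

    override func viewDidLoad() {
        super.viewDidLoad()
        itemView.frame = CGRect(x: 0, y: 0, width: 80, height: 80)
        view.addSubview(itemView)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Remember where the object started so a cancelled touch can be reverted.
        itemOriginalCenter = itemView.center
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        // Move the object with the touch (touch move phase).
        itemView.center = touch.location(in: view)
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Touch end: keep the object where it was moved.
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Touch cancel: revert, as if the touch had not been detected.
        itemView.center = itemOriginalCenter
    }
}
```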
FIG. 6A illustrates touch events and drag events generated over a period of time while a touch (sometimes referred to as a touch input) moves from a user interface area for a first application (application A) to a user interface area for a second application (application B). In some embodiments, the detected touch corresponds to a contact of a user's finger, stylus, or the like on a touch-sensitive surface of the electronic device. Examples of such movement of a touch are shown in FIGS. 5B, 5D, and 5E. As represented by the "touch event" sequence in FIG. 6A, touch events are generated from when the touch is initially detected until the touch input is recognized as a drag gesture. The sequence of touch events represents a touch that evolves through a series of phases, such as start, move, and cancel. When the touch is initially detected, as in FIG. 5B, the touch is represented in a touch event having a touch start phase; when movement of the touch is detected, as in FIG. 5D, the touch is represented in touch events having a touch move phase; then, when the touch is recognized as a drag gesture, as in FIG. 5D, the touch is represented in a touch event having a touch cancel phase. Instances of touch events having touch phase values corresponding to the phase or state of the touch are delivered to application A for processing, for example to update the user interface of application A. In some implementations, multiple instances of touch events having a touch phase equal to "move" may be generated as the touch input moves.
When the touch input is recognized as a drag gesture, the touch is cancelled with respect to application A and a drag event is generated. Cancelling the touch for application A enables application A to reverse any operation performed in response to detecting the touch (e.g., when touch 1 is cancelled in FIG. 5D, due to recognition of touch 1 as part of a drag gesture, the menu of FIG. 5C, displayed in response to detecting the touch on object 4 for a threshold amount of time, ceases to be displayed) and to ignore the touch for the purpose of recognizing gestures performed by attached touches. In some embodiments, the drag event includes the touch, marked or treated as a detached touch. In some embodiments, touch events delivered to an application, such as application A, include a list of touches associated with application A or with a view of application A, and the list of touches indicates, for each listed touch, whether the listed touch is an attached touch (sometimes referred to as a view-attached touch) or a detached touch.
An attached touch delivered to application A is handled by the standard gesture recognizers associated with application A or a view of application A, as well as by any supplemental gesture recognizers associated with application A or a view of application A (e.g., gesture recognizers added to a respective view of application A due to an interaction identifier associated with the respective view). A detached touch delivered to application A, on the other hand, is processed only by the supplemental gesture recognizers associated with application A or a view of application A, and is not processed by, and is therefore ignored by, the standard gesture recognizers associated with application A or a view of application A.
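A minimal Swift sketch of this routing rule follows; the touch record and the partition of recognizers into standard and supplemental sets are modeled with hypothetical types purely for illustration.

```swift
import UIKit

// Hypothetical touch record carrying the attached/detached indication described above.
struct DeliveredTouch {
    let identifier: Int
    let location: CGPoint
    let isDetached: Bool
}

// Returns the gesture recognizers that should receive the touch: a detached touch is
// seen only by supplemental recognizers; an attached touch is seen by both sets.
func recognizers(for touch: DeliveredTouch,
                 standard: [UIGestureRecognizer],
                 supplemental: [UIGestureRecognizer]) -> [UIGestureRecognizer] {
    return touch.isDetached ? supplemental : standard + supplemental
}
```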
A touch associated with a drag event also has a series of phases, such as "enter," "in," and "exit." As the touch moves, instances of the drag event are generated over time, with the touch in the first instance of the drag event having a phase (sometimes referred to as a touch phase) equal to "enter," the touch in subsequent instances of the drag event having a phase equal to "in" (e.g., corresponding to movement within a view or application view, sometimes referred to as a user interface area), and the touch in the final instance of the drag event having a phase equal to "exit," which is generated in response to lift-off of the touch or the touch leaving the respective view.
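This phase sequence can be modeled as follows. The sketch is illustrative only; these types are not UIKit API, and the case name `within` stands in for the "in" phase because `in` is a reserved word in Swift.

```swift
import UIKit

// Illustrative model of the drag-event touch phases described above.
enum DragTouchPhase {
    case enter   // first drag-event instance: the touch is detached or enters a view
    case within  // subsequent instances: movement within the view ("in")
    case exit    // final instance: lift-off, or the touch leaving the view
}

struct DragEvent {
    let sessionID: Int
    let touchLocation: CGPoint
    let phase: DragTouchPhase
}

// A typical sequence delivered to a view as a detached touch moves across it.
let typicalSequence: [DragTouchPhase] = [.enter, .within, .within, .exit]
```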
As also shown in FIG. 6A, when a touch is detached, a touch event identifies the touch as cancelled and the touch is included in a corresponding drag event, which identifies the touch as having an "enter" phase; all of this occurs in response to recognizing the corresponding touch as a drag gesture. Note that the initial recognition of a touch as a drag gesture is sometimes referred to as recognition of a drag start gesture.
FIG. 6B is similar to FIG. 6A, but corresponds to movement of an attached touch from a starting position in the user interface area of application A to an ending position in the user interface area of application B. Because the touch is not detached, e.g., because it is not recognized as a drag gesture, only touch event instances are generated, and all touch event instances are delivered to application A, because, according to some embodiments, an attached touch is delivered to the application or application view associated with the initial touch location of the attached touch even if the touch moves outside of that application or application view. The phase of the touch in the touch events generated in this example is "start" when the touch is initially detected, "move" when movement of the touch is detected, and then "end" when lift-off of the touch is detected.
FIG. 6C schematically shows the processing of detected touches by a set of processes: a source process (process A) corresponding to a first application, such as application A; a system drag management process, which in some embodiments is performed by the contact/motion module 130 (FIGS. 1 and 3A) or a portion of the contact/motion module; a system touch management process, which corresponds to hit view determination module 314 (FIG. 3B); and a target process corresponding to a second application, such as application B. As shown in FIG. 6C, when one or more touches are detected and determined to be performing a drag gesture, the system drag management process begins a session for the one or more touches and sends a session ID assigned to the one or more touches to the source process. The system drag management process also passes information about the one or more touches to the system touch management process, which detaches the one or more touches and sends a touch cancel event or instruction to the source process. The touch cancel event or instruction causes the standard gesture recognizers of the source process to transition to a terminal state and to stop processing the one or more detached touches. Additionally, the system touch management process communicates information about the one or more detached touches to the target process when the one or more touches move into a user interface area for the second application.
FIG. 6D shows an example of a gesture recognition hierarchy 600-A before any supplemental gesture recognizers are added to a set of gesture recognizers for an application or for one or more views of an application, and FIG. 6E shows an example of a corresponding gesture recognition hierarchy 600-B after one or more supplemental gesture recognizers have been added to the set of gesture recognizers for the application or for one or more views of the application. In these examples, prior to the addition of any supplemental gesture recognizers, the standard gesture recognizers for the application or application view (i.e., the gesture recognizers for the application or application view, e.g., gesture recognizers added to the view by the application itself to support various interactions with the application) include a long press gesture recognizer (LongPress GR) for recognizing touch gestures based on the duration of the touch without substantial lateral movement (e.g., for triggering display of a menu, such as the menu shown in FIG. 5C, when a long press gesture is detected on object 4), and an intensity-based gesture recognizer (Intensity-Based GR) for recognizing a touch gesture based on the intensity of the touch meeting a predefined intensity threshold (e.g., for triggering display of an expanded view of content corresponding to object 4 when a deep-press gesture meeting the predefined intensity threshold is detected on object 4). In the example shown in FIG. 6E, the supplemental gesture recognizers added to the set of gesture recognizers for the application include a drag start gesture recognizer (DragStart GR) for recognizing a drag start gesture (e.g., for detecting a drag gesture as shown in FIG. 5D, where object 4 is dragged in response to movement of touch 1).
As shown in FIG. 6D, the gesture recognition hierarchy 600-A indicates that, before any supplemental gesture recognizers are added to the set of gesture recognizers for the application or application view, the long press gesture recognizer has a failure dependency on the intensity-based gesture recognizer. In some embodiments, the long press gesture recognizer being dependent on failure of the intensity-based gesture recognizer means that the long press gesture recognizer cannot recognize a touch input unless the intensity-based gesture recognizer fails to recognize that touch input as an intensity-based touch input (e.g., a deep press). In some embodiments, it means that the long press gesture recognizer cannot recognize the touch input when the intensity-based gesture recognizer recognizes the touch input as an intensity-based touch input (e.g., a deep press). The long press gesture recognizer is enabled to attempt to recognize the touch input when the intensity-based gesture recognizer fails to recognize the touch input. In some embodiments, this dependency is achieved by listing the intensity-based gesture recognizer in the long press gesture recognizer's waiting list 127 (FIG. 3C).
As shown in FIG. 6E, the gesture recognition hierarchy 600-B indicates that, after the supplemental gesture recognizer is added to the set of gesture recognizers for the application, the long press gesture recognizer has failure dependencies on both the intensity-based gesture recognizer and the drag start gesture recognizer, where the drag start gesture recognizer is the supplemental gesture recognizer added to the set of gesture recognizers for the application or application view. Additionally, the drag start gesture recognizer has a failure dependency on the intensity-based gesture recognizer. Thus, the long press gesture recognizer cannot recognize the touch input as a long press gesture until the intensity-based gesture recognizer fails to recognize the touch input (e.g., as a deep press) and the drag start gesture recognizer fails to recognize the touch input (e.g., as a drag gesture). Additionally, the drag start gesture recognizer cannot recognize the touch input as a drag start gesture until the intensity-based gesture recognizer fails to recognize the touch input (e.g., as a deep press). In some embodiments, these failure dependencies are achieved by listing the intensity-based gesture recognizer and the drag start gesture recognizer in the long press gesture recognizer's waiting list 127 (FIG. 3C), and by listing the intensity-based gesture recognizer in the drag start gesture recognizer's waiting list 127 (FIG. 3C).
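In public UIKit terms, failure dependencies of this kind can be expressed with UIGestureRecognizer's require(toFail:) method. The Swift sketch below reproduces the shape of the hierarchy in FIG. 6E under the assumption that intensity-based and drag start recognizers are available as UIGestureRecognizer instances (neither is a public UIKit class), so it is an analogy rather than the disclosed implementation.

```swift
import UIKit

// Sketch of the FIG. 6E failure dependencies using UIKit's require(toFail:).
func configureRecognizers(for view: UIView,
                          longPress: UILongPressGestureRecognizer,
                          intensityBased: UIGestureRecognizer,
                          dragStart: UIGestureRecognizer) {
    view.addGestureRecognizer(intensityBased)
    view.addGestureRecognizer(longPress)
    view.addGestureRecognizer(dragStart)

    // Long press cannot succeed until both the intensity-based and the
    // drag start recognizers have failed.
    longPress.require(toFail: intensityBased)
    longPress.require(toFail: dragStart)

    // Drag start cannot succeed until the intensity-based recognizer has failed.
    dragStart.require(toFail: intensityBased)
}
```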
Drag-and-drop gesture recognition
Fig. 7A-7F are flow diagrams illustrating a method 700 of processing a touch detected by an electronic device, including determining whether the touch corresponds to a drag gesture (sometimes referred to as a drag-and-drop gesture), and processing the touch according to the determination, according to some embodiments. Method 700 is performed at an electronic device (e.g., device 300 of FIG. 3 or portable multifunction device 100 of FIG. 1) having a display, a touch-sensitive surface, and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface. In some embodiments, the display is a touch screen display and the touch sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 700 are optionally combined, and/or the order of some operations is optionally changed.
As described below, the method 700 provides an intuitive way to process a touch detected by a device, which may or may not be a drag gesture. When a user wants to drag one or more objects from a first application view or area to another application view or area, the method reduces the number, scope, and/or nature of inputs from the user, thereby creating a more efficient human-machine interface. For battery-powered electronic devices, enabling a user to perform drag-and-drop operations faster and more efficiently conserves power and increases the time between battery charges.
The device displays on its display a user interface for a first application (702) that includes a plurality of user interface objects. While displaying the user interface for the first application, the device receives information describing a first touch (e.g., one or more touch events) at the first application, the first touch being detected at a location on the touch-sensitive surface that corresponds to a respective user interface object of the plurality of user interface objects in the first application (704). For example, the first touch may be touch 1 shown in fig. 5B.
In response to receiving the information describing the first touch, and in accordance with a determination (e.g., by the first application) that the first touch satisfies predefined criteria associated with initiating a respective type of interface operation (e.g., a drag-and-drop operation, also sometimes referred to as a drag operation) (706): the device initiates a respective type of user interface operation (e.g., a first user interface operation) involving a respective user interface object (708).
In some embodiments, the predefined criteria associated with initiating the respective type of interface operation is a criteria associated with initiating a drag operation, sometimes referred to as a drag criteria. In some embodiments, the drag criteria include a timing criterion (e.g., a first criterion) that is met when the user input remains on the touch-sensitive surface for at least a predefined period of time and the intensity of the detected user input remains below an intensity threshold during the predefined period of time (e.g., a long press criterion). In some embodiments, the drag criteria include a lateral displacement criterion (e.g., a second criterion) that is met when the detected lateral displacement of the user input meets (e.g., meets or exceeds) a lateral displacement threshold. In some embodiments, the drag criteria require that the timing criteria be met before the lateral displacement criteria are met (e.g., for an input that meets the drag criteria, the input must first meet the timing criteria and then meet the lateral displacement criteria).
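The drag criteria described above can be sketched in Swift as follows. The threshold values are assumptions, and the representation (in which the lateral displacement is measured only after the timing criterion has been met) is a simplification for illustration only.

```swift
import UIKit

// Hedged sketch of drag criteria: a timing criterion followed by a
// lateral displacement criterion. All threshold values are assumed.
struct DragCriteria {
    var minimumPressDuration: TimeInterval = 0.5    // assumed value
    var intensityThreshold: CGFloat = 1.0           // assumed, normalized units
    var lateralDisplacementThreshold: CGFloat = 10  // assumed, in points

    func isSatisfied(pressDuration: TimeInterval,
                     maxIntensityDuringPress: CGFloat,
                     lateralDisplacementAfterPress: CGFloat) -> Bool {
        // Timing criterion: touch held for the predefined period with intensity
        // remaining below the threshold (a long-press-like requirement).
        let timingMet = pressDuration >= minimumPressDuration
            && maxIntensityDuringPress < intensityThreshold
        // Lateral displacement criterion: evaluated on movement occurring after
        // the timing criterion has been met, matching the ordering described above.
        let displacementMet = lateralDisplacementAfterPress >= lateralDisplacementThreshold
        return timingMet && displacementMet
    }
}
```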
The method 700 further includes, while continuing to detect the first touch on the touch-sensitive surface and while user interface operations involving the respective user interface object continue (710), receiving, at the first application, an instruction to ignore the first touch with respect to a user interface operation other than the respective type of interface operation (e.g., a drag, drop, or spring-load operation) (712); and ignoring the first touch by the first application with respect to user interface operations other than the corresponding type of interface operation (714). For example, ignoring the first touch by the first application is accomplished by stopping processing of the first touch, optionally in response to a touch cancel event. Thus, even if the user interface operation involving the user interface object continues, the first touch is regarded as a cancelled touch. In some embodiments, the first application stops processing (e.g., stops receiving information about) the first touch with respect to user interface operations other than the respective type of interface operation.
In some embodiments, upon detecting the first touch and prior to determining that the first touch satisfies predefined criteria associated with initiating the respective type of interface operation, the method includes processing the touch using one or more gesture recognizers associated with user interface operations other than the respective type of interface operation. In some embodiments, initiating respective types of user interface operations involving respective user interface objects includes processing a touch using a gesture recognizer associated with the respective types of interface operations (e.g., a drag gesture recognizer for processing drag inputs on a touch-sensitive surface). In some implementations, there are multiple instances of the gesture recognizer. For example, the first touch is processed using an instance of a drag gesture recognizer.
In some embodiments, method 700 includes, prior to determining (e.g., by a system drag management process) that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation, processing the first touch as a view-attached touch event (720), and, in accordance with a determination that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation, processing the first touch as a detached touch event (e.g., by initiating the respective type of user interface operation) (722). Initially processing the touch input as a view-attached touch event provides consistency in the processing of touch inputs and enables a user to control the view to which the touch input is directed, providing a more intuitive interface for controlling a user interface. Processing the touch input as a detached touch event after determining that the touch input satisfies the predefined criteria associated with the particular operation provides flexibility in processing the touch input to perform the particular operation while enabling the device to continue to receive and process additional touch inputs as normal touches to perform additional operations. Providing more consistent and intuitive control options, while providing flexibility to receive and process multiple touch inputs at once, reduces the number of inputs required to perform operations, enhances the operability of the device, and makes the user-device interface more efficient (e.g., by helping the user provide inputs that achieve desired results more quickly and by reducing user errors when operating and interacting with the device), which additionally reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, method 700 further comprises: prior to determining (e.g., by a system drag management process) that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation, processing the first touch as a view-attached touch event (724) using one or more gesture recognizers associated with the first application (e.g., the "standard gesture recognizers" discussed above with reference to FIGS. 5D, 5I, 5O, and 6D-6E); and, in accordance with a determination that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation (726): ceasing to process the first touch as a view-attached touch event using the one or more gesture recognizers associated with the first application (728); and processing the first touch as a detached touch event (730). For example, in some embodiments, a drag gesture recognizer is used to process the detached touch. Processing view-attached touch inputs using gesture recognizers associated with the application, and ceasing to process a touch input with those gesture recognizers after determining that the touch input satisfies the predefined criteria associated with the particular operation, frees those gesture recognizers to process additional touch inputs. Providing additional capability to handle touch inputs enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide inputs that achieve desired results more quickly and by reducing user errors when operating and interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
In some embodiments, method 700 includes detecting movement of the first touch outside of an area corresponding to the user interface for the first application (732). For example, as shown in FIGS. 5D and 5E, the first touch moves from the user interface area of application A to the user interface area of application B. In such embodiments, method 700 further includes, in response to receiving the information describing the first touch (734), in accordance with a determination (e.g., by the first application) that the first touch does not satisfy the predefined criteria associated with initiating the respective type of interface operation (e.g., the first touch is not recognized as part of a drag operation during the initial portion of the gesture), continuing to provide (e.g., by a system process different from the first application) the information describing the first touch to the first application (736), even after the first touch moves outside of the area corresponding to the user interface for the first application and enters an area corresponding to the user interface for a second application. For example, in the examples shown in FIGS. 5I, 5J, and 5K, information about touch 2 continues to be provided to application A even after touch 2 moves outside of the area corresponding to the user interface for application A and enters the area corresponding to the user interface for application B.
In some embodiments, the first touch is processed using a hit view determined based on the initial contact location of the first touch, regardless of movement of the first touch (e.g., even if the first touch moves to a location on the touch-sensitive surface that corresponds to a location in the user interface that is outside of the hit view).
In such embodiments, the method 700 further includes, in response to receiving the information describing the first touch (734), in accordance with a determination that the first touch satisfies predefined criteria associated with initiating a respective type of interface operation (e.g., the first touch is identified as part of a drag operation during an initial portion of the gesture) (738): providing information describing the first touch to the first application (e.g., by a different system process than the first application) when the first touch is within an area corresponding to a user interface for the first application (740); and when the first touch moves outside of an area corresponding to a user interface for the first application and enters an area corresponding to a user interface for a second application (e.g., in place of the first application), switching to providing information describing the first touch to the second application (742). Accordingly, operations 740 and 742 correspond to a process of separating touches.
In some embodiments of performing operations 734-742, the method 700 includes, upon detecting movement of the first touch, and in accordance with a determination that the first touch satisfies predefined criteria associated with initiating a respective type of interface operation (738): one or more respective regions corresponding to respective locations of the first touch are determined, wherein the one or more respective regions are associated with one or more respective applications (744), and when the first touch is within the respective region, information describing the first touch is provided to the respective application for the respective region (746). For example, as the touch input moves along the touch-sensitive surface, the device performs repeated hit area tests to determine respective hit views for respective locations of the touch input. In some embodiments, the respective hit views are determined at predefined time intervals (sampling intervals of touch inputs). In some embodiments, the predefined time intervals occur sequentially at a rate of 10 or more time intervals per second. In some embodiments, the respective hit views correspond to different applications of the plurality of applications, such as application a and application B in fig. 5A-5R. For example, a first hit view corresponding to a first location of a first user input is in a first application (e.g., application a, fig. 5A-5R) and a second hit view corresponding to a second location of the first user input is in a second application (e.g., application B, fig. 5A-5R) different from the first application.
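For illustration only, repeated hit testing at each touch sample can be expressed with UIKit's hitTest(_:with:); the delivery callback below is hypothetical, and this sketch is not the disclosed implementation.

```swift
import UIKit

// Hedged sketch of per-sample hit-region testing for a detached touch: at each
// sample, the view under the touch's current location is determined and the
// touch information is routed to that view's application.
func routeDetachedTouchSample(window: UIWindow,
                              location: CGPoint,
                              deliver: (UIView) -> Void) {
    // hitTest(_:with:) returns the deepest view containing the point.
    if let hitView = window.hitTest(location, with: nil) {
        deliver(hitView)
    }
}
```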
In some embodiments of performing operations 734-742, the method 700 includes, in accordance with a determination that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation (738), providing information describing the first touch to the respective application for the respective region while providing information describing one or more additional touches that do not satisfy the predefined criteria associated with initiating the respective type of interface operation (748). For example, in some embodiments, information about the detached touch (e.g., the first touch) and information about the normal, view-attached touches (e.g., the one or more additional touches) are provided to the respective application in the same data structure (e.g., a data structure with one or more touch lists) that is received and processed (e.g., by the first application) in the same processing frame.
In some embodiments, method 700 further includes, while the user interface operation involving the respective user interface object continues (e.g., while the drag operation continues, e.g., as shown in FIGS. 5I and 5J), receiving information (e.g., information about one or more touch events) describing a change in a second touch (e.g., touch 2 in FIGS. 5I and 5J) on the touch-sensitive surface at a location corresponding to the first application (750); and in response to detecting the change in the second touch, processing the information describing the change in the second touch, without regard to the first touch, using one or more gesture recognizers associated with the first application (752). Note that the second touch may be detected before, at the same time as, or after the first touch is detected. In such embodiments, the second touch is unaffected by the first touch as long as it does not meet special-case criteria, such as the drag-add criteria.
In some embodiments, as user interface operations involving respective user interface objects continue, the first application receives information (e.g., one or more touch events) describing a second touch (e.g., a second touch detected before, concurrently with, or after the first touch was detected) on the touch-sensitive surface at a location corresponding to the first application; and in response to detecting the second touch, processing information describing the second touch regardless of whether the first touch satisfies predefined criteria associated with initiating a respective type of interface operation. For example, the first application processes current and/or future touches while continuing to detect the first touch as a drag operation.
In some embodiments, while the first touch continues to be detected on the touch-sensitive surface and while the user interface operation involving the respective user interface object continues (e.g., while the drag operation continues), method 700 includes receiving, at the first application, information describing a second touch that is distinct from the first touch (e.g., as shown in FIGS. 5I and 5J, the second touch is detected at a location on the touch-sensitive surface that corresponds to a second user interface object of the plurality of user interface objects in the first application; further, the second touch is detected while the first touch continues to be detected) (754). In these embodiments of method 700, in response to receiving the information describing the second touch, and in accordance with a determination (e.g., by the first application) that the second touch satisfies predefined add criteria (sometimes referred to herein as drag-add criteria) associated with the user interface operation (e.g., criteria for adding a user interface object to a drag operation) (756), method 700 includes receiving, at the first application, an instruction to ignore (e.g., stop processing) the second touch with respect to user interface operations other than the respective type of interface operation (e.g., drag, drop, or spring-loaded operations) (758); and ignoring, by the first application, the second touch with respect to user interface operations other than the respective type of interface operation (e.g., treating the second touch as a cancelled touch with respect to user interface operations other than drag and drop operations, even while the user interface operation, such as the drag operation involving the user interface object, continues) (760). In some embodiments, method 700 includes processing the second touch in accordance with the user interface operation involving the respective user interface object.
In some implementations in which method 700 includes operation 760, the predefined add criteria include a criterion that is met when a distance between the first touch (e.g., the touch corresponding to the drag operation) and the second touch (e.g., the touch for adding an object to the drag operation) is less than a predefined proximity threshold (762). Optionally, the predefined add criteria require that the distance between the first touch and the second touch be less than the predefined proximity threshold when lift-off of the second touch is detected.
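A minimal sketch of this proximity criterion follows; the default threshold value is an assumption, as the actual threshold is implementation-dependent.

```swift
import UIKit

// Illustrative check for the drag-add proximity criterion described above.
func satisfiesDragAddProximity(firstTouchLocation: CGPoint,
                               secondTouchLocation: CGPoint,
                               proximityThreshold: CGFloat = 100) -> Bool {
    let dx = firstTouchLocation.x - secondTouchLocation.x
    let dy = firstTouchLocation.y - secondTouchLocation.y
    // Euclidean distance between the two touches must be below the threshold.
    return (dx * dx + dy * dy).squareRoot() < proximityThreshold
}
```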
As briefly explained above with reference to FIGS. 5O and 5P, two separate drag operations may be performed simultaneously. In some embodiments, method 700 includes, while displaying the user interface for the first application, receiving, at the first application, information (e.g., one or more touch events) describing a third touch (e.g., different from the first touch), the third touch detected at a location on the touch-sensitive surface that corresponds to a third user interface object of the plurality of user interface objects in the first application (764). In such embodiments, method 700 further includes, in response to receiving the information describing the third touch, and in accordance with a determination (e.g., by the first application) that the third touch satisfies the predefined criteria associated with initiating the respective type of interface operation (e.g., drag, drop, or spring-load): initiating a second user interface operation of the respective type (e.g., different from the (first) user interface operation) involving the third user interface object (768).
Further, such aspects of method 700 include, while continuing to detect the third touch on the touch-sensitive surface and while the second user interface operation involving the third user interface object continues (770): receiving, at the first application, an instruction to ignore (e.g., stop processing, optionally in response to a touch cancel event) the third touch with respect to user interface operations other than the respective type of interface operation (e.g., drag, drop, or spring-loaded operations) (772); and ignoring, by the first application, the third touch with respect to user interface operations other than the respective type of interface operation (774). For example, the third touch is treated as a cancelled touch even while the user interface operation involving the third user interface object continues. In some embodiments, the third touch is detected while the first touch is detected. In some embodiments, the second user interface operation (e.g., a second drag operation) is different from the first user interface operation (e.g., a first drag operation), as shown in FIGS. 5O and 5P, where two separate drag operations are performed simultaneously.
In some embodiments, method 700 includes, in response to receiving the information describing the first touch, determining, by the first application, whether the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation (776). For example, method 700 optionally includes determining whether gesture recognition criteria associated with a respective gesture recognizer are satisfied. In some embodiments, a system process or the first application determines whether the touch should be detached. For example, referring to FIGS. 5B, 5C, and 5D, it may be determined that a touch (e.g., touch 1, initially received in FIG. 5B) satisfies the drag start criteria, in which case the touch is detached, as shown in FIG. 5D, or does not satisfy the drag start criteria, in which case the touch is processed as an attached touch, as shown in FIG. 5C. As shown in FIG. 6C, in some embodiments, whether to detach a touch is determined by a system process, such as the system touch management process, which corresponds to hit view determination module 314 (FIG. 3B). Alternatively, whether to detach the touch is determined by the application having the user interface area corresponding to the initial position of the touch.
In some embodiments, the instruction received at the first application to ignore the first touch with respect to user interface operations other than the respective type of interface operation includes a touch cancel event (778). See, for example, the touch cancel event shown in FIG. 6A and the related discussion above. In some embodiments, method 700 includes sending the instruction to ignore the first touch, with respect to user interface operations other than the respective type of user interface operation, to a plurality of applications, including the first application, that are receiving or have received information describing the first touch (e.g., any application currently monitoring, or "listening" for, the first touch). In some embodiments, the plurality of applications includes a home screen application (sometimes referred to as a "SpringBoard") that monitors touches even when the home screen application is not actively displayed.
In some embodiments, the method 700 includes, in response to receiving the touch cancel event, forgoing performance of user interface operations other than the respective type of interface operation (e.g., user interface operations involving the respective user interface object) and failing to recognize the first touch (e.g., stopping or canceling attempts at recognition) as a gesture other than one of a predefined set of gestures associated with the respective type of interface operation (780). For example, the method 700 optionally cancels or reverses user interface behaviors, such as object transformations and/or animations associated with operations other than drag-and-drop operations, and optionally sets gesture recognizers for gestures other than drag-and-drop-related gestures to a cancelled state (or other terminal state) rather than a "gesture possible" state.
In some embodiments of the method 700, initiating (708) the respective type of user interface operation involving the respective user interface object (e.g., a drag operation for dragging the respective user interface object) includes providing (782), to the first application, a unique session identifier identifying the respective type of user interface operation and the respective user interface object. For example, in FIG. 5D, a session identifier has been assigned to the touch (touch 1) that initiates a drag operation involving object 4, which is a user interface object in user interface area 502 for application A. In some embodiments, the session identifier (sometimes referred to as a "session ID") identifies the drag operation and the content involved in (dragged by) the drag operation. In some embodiments, each drag operation has a unique session identifier (e.g., concurrent but different drag operations may be distinguished based on their respective session identifiers). For example, as shown in FIGS. 5O and 5P, two separate drag operations are being performed, and each drag operation is assigned a different session ID.
In some embodiments of the method 700, the unique session identifier is associated with additional information regarding the drag operation, including one or more of: a total number of user interface objects, including the respective user interface object, involved in the respective type of user interface operation, and one or more content types of the user interface objects involved in the respective type of user interface operation (784). For example, in some embodiments, the unique session identifier includes the additional information, or the unique session identifier may be used to retrieve the information from the system drag management process or from the application in which the drag operation originated. In some embodiments, if multiple drag operations are performed simultaneously, different drag operations have different identifiers so that the drag operations can be tracked independently.
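For illustration, the per-session information described above might be represented as follows; the type and field names are hypothetical, and the uniform type identifier strings are examples only.

```swift
import Foundation

// Hypothetical representation of per-session drag information.
struct DragSessionInfo {
    let sessionID: Int          // unique per drag operation
    let draggedObjectCount: Int // total objects involved, including the original
    let contentTypes: [String]  // e.g., uniform type identifiers for dragged content
}

// Two concurrent drag operations are distinguished by their session identifiers.
let session1 = DragSessionInfo(sessionID: 1, draggedObjectCount: 2,
                               contentTypes: ["public.image"])
let session2 = DragSessionInfo(sessionID: 2, draggedObjectCount: 1,
                               contentTypes: ["public.plain-text"])
```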
In some embodiments of method 700, the respective type of user interface operation has an associated operation phase value that describes a current phase of the user interface operation, and the phase value is selected from the group consisting of: "exit," "enter," and "in" (786). The phase values of a touch in a drag event are discussed above with reference to FIG. 6A.
It should be understood that the particular order of operations that have been described in fig. 7A-7F is merely exemplary and is not intended to suggest that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., method 800 and method 900) also apply in a similar manner to method 700 described above with reference to fig. 7A-7F. For example, the use of interaction identifiers associated with different views to determine which supplemental gesture recognizers to add to the respective views, and gesture recognition fault dependencies described below with reference to method 900, may also be applicable in a manner similar to method 700. For the sake of brevity, these details are not repeated here.
The operations in the above-described information processing method are optionally implemented by running one or more functional modules in an information processing apparatus, such as a general-purpose processor (e.g., as described above with reference to fig. 1 and 3A) or a dedicated chip.
The operations described above with reference to fig. 7A-7F are optionally implemented by components depicted in fig. 1 or fig. 3A. For example, initiating user interface operations such as a drag operation or a spring-loaded operation, as well as many other operations of method 700 are optionally implemented by contact/motion module 130 or its event recognizer module 312.
Adding supplemental gesture recognizers
Fig. 8A-8E are flow diagrams illustrating a method 800 of processing a touch detected by an electronic device, including adding a supplemental gesture recognizer to a view of an application to provide specialized processing for operations such as drag operations. Method 800 is performed at an electronic device (e.g., device 300 in fig. 3 or portable multifunction device 100 in fig. 1) having a display, a touch-sensitive surface, and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface. In some embodiments, the display is a touch screen display and the touch sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 800 are optionally combined, and/or the order of some operations is optionally changed.
As described below, the method 800 provides an intuitive way of handling a touch detected by a device, which may or may not be a drag gesture. The method reduces the computational load placed on the electronic device by touch gestures and other user inputs, improves device responsiveness, and thereby reduces latency. Reducing the delay in responding to touch inputs also reduces erroneous inputs by the user, such as repeated or modified touch inputs made by the user while waiting for the device to respond to the user's earlier input. For example, when a user wants to drag one or more objects from a first application view or area to another application view or area, such improvements reduce the number, scope, and/or nature of inputs from the user, thereby creating a more efficient human-machine interface. For battery-powered electronic devices, enabling a user to perform drag-and-drop operations faster and more efficiently conserves power and increases the time between battery charges.
The device displays on its display a user interface comprising a plurality of views including a first view associated with a first set of one or more standard gesture recognizers for processing touch input directed to the first view and a second view associated with a second set of one or more standard gesture recognizers for processing touch input directed to the second view. One or more views (e.g., a first view and/or a second view) of the multiple views are associated with a first interaction identifier (e.g., a drag, drop, or spring-loaded interaction identifier) that indicates that a first set of one or more supplemental gesture recognizers can be added to the corresponding view (e.g., by an application-independent module) (802).
As shown in fig. 5A-5R, the multiple views may include a user interface area 502 for application a and a user interface area 504 for application B. Many of fig. 5A-5R indicate examples of interaction identifier sets associated with a view of application a (e.g., interface region 502) or a view of application B (e.g., interface region 504). While these examples show two views associated with drag, drop, and spring-loaded interaction identifiers, in some embodiments or instances, some views may be associated with fewer interaction identifiers (e.g., only drag and drop interaction identifiers or none of these interaction identifiers) and, in addition, some views may be associated with additional interaction identifiers (e.g., interaction identifiers for additional types of operations associated with a particular multi-touch gesture) than the examples shown in fig. 5A-5R.
FIG. 3B illustrates that the view metadata 341 for the corresponding application view 317-2 includes an interaction identifier 347. Further, the view metadata 341 for each of the plurality of views 317 has its own set of interaction identifiers. In some embodiments, the interaction identifier associated with a particular view is a flag value that indicates which interaction identifiers of a predefined set of interaction identifiers are associated with (e.g., assigned to) the particular view. In some other embodiments, the interaction identifier associated with a particular view is an identifier value (e.g., an alphanumeric or multi-bit value), and the set of interaction identifiers stored for that particular view indicates which interaction identifiers are associated with (e.g., assigned to) that particular view. When an application has multiple views, typically each view may be associated with a corresponding set of interaction identifiers, and some views may have a different set of associated interaction identifiers than other ones of the views.
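One way to picture interaction identifiers stored as per-view flag values is an option set, as in the Swift sketch below; this is purely illustrative and assumes nothing about the actual encoding used in the disclosed embodiments.

```swift
// Sketch of interaction identifiers stored as per-view flags.
struct InteractionIdentifiers: OptionSet {
    let rawValue: UInt8
    static let drag         = InteractionIdentifiers(rawValue: 1 << 0)
    static let drop         = InteractionIdentifiers(rawValue: 1 << 1)
    static let springLoaded = InteractionIdentifiers(rawValue: 1 << 2)
}

// A view that accepts drops and spring-loading but does not originate drags.
let viewInteractions: InteractionIdentifiers = [.drop, .springLoaded]
let allowsSpringLoading = viewInteractions.contains(.springLoaded)
```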
In some embodiments, the possible interactions (e.g., types of interactions) include drag interactions (e.g., objects from a view may be picked up and dragged across the user interface), drop interactions (e.g., objects dragged across the user interface may be dropped onto the view), and spring-loaded interactions (e.g., hovering over an icon of a view while dragging an object across the user interface may cause the view to open; in another example, hovering over an application icon on the home screen may cause the application itself to open, if the application icon view supports spring-loaded interaction; in another example, hovering over a folder icon may cause the folder to open; and in yet another example, hovering over an object may cause a control of the object, or a control associated with the object, to be activated or deactivated, for example by toggling the state of the control when predefined spring-loading criteria have been met).
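For context, public UIKit exposes drag, drop, and spring-loading as interaction objects attached to views. The sketch below uses those public APIs as an analogy only; it is not necessarily the internal mechanism described in this disclosure.

```swift
import UIKit

// Attaching drag, drop, and spring-loaded interactions to a view via public UIKit API.
func enableInteractions(on view: UIView,
                        dragDelegate: UIDragInteractionDelegate,
                        dropDelegate: UIDropInteractionDelegate) {
    view.addInteraction(UIDragInteraction(delegate: dragDelegate))
    view.addInteraction(UIDropInteraction(delegate: dropDelegate))
    // Spring-loading: activate (e.g., open a folder) when a drag hovers over the view.
    view.addInteraction(UISpringLoadedInteraction { _, context in
        if context.state == .activated {
            // Open the folder or container represented by this view (app-specific).
        }
    })
}
```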
Continuing with the description of method 800, method 800 further includes detecting, via the touch-sensitive surface, a touch input at a location on the touch-sensitive surface that corresponds to a respective view of the plurality of views (804); and in response to detecting the touch input via the touch-sensitive surface (806): in accordance with a determination that the touch input satisfies supplemental gesture recognizer addition criteria (while continuing to detect the touch input), wherein the supplemental gesture recognizer addition criteria include a criterion that is met when the respective view has the first interaction identifier (808): adding the first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view (810); and processing the touch input using the one or more supplemental gesture recognizers and the one or more standard gesture recognizers associated with the respective view (812). In some embodiments, the one or more supplemental gesture recognizers are attached to or associated with the respective view by the system based on a determination of whether an object in the view supports a particular interaction (as determined by the drag, drop, or spring-loaded interaction identifiers for the respective view).
The method 800 further includes, in response to detecting the touch input (806), in accordance with a determination that the touch input does not satisfy the supplemental gesture recognizer addition criteria, processing the touch input at the respective view (e.g., utilizing one or more standard gesture recognizers associated with the respective view or utilizing one or more supplemental gesture recognizers associated with other interaction identifiers) without adding a first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view (814).
In some implementations, the method 800 includes detecting a sequence of touch inputs that includes a first touch input that satisfies the supplemental gesture recognizer addition criteria and includes a second touch input received after the first touch input that does not satisfy the supplemental gesture recognizer addition criteria. The first touch input is processed according to the aforementioned add (first set of one or more supplemental gesture recognizers) (810) and process (with first set of one or more supplemental gesture recognizers) (812) operations, while the second touch input is processed according to the aforementioned process (814) (without adding the first set of one or more supplemental gesture recognizers). In some embodiments, the view receives a touch event and determines whether the view supports an interaction associated with the touch event (e.g., a drag interaction, a drop interaction, or a spring-loaded interaction). In some embodiments, the respective interaction identifier of the view specifies whether the view supports the respective interaction. For example, the view receives a touch event. If the view has a drag interaction identifier indicating that the view allows the addition of a drag gesture recognizer (e.g., by an application-independent module such as a system drag management process) and the view supports drag interactions, the drag gesture recognizer is launched and added to the view (e.g., in conjunction with the view receiving the touch event). In another example, the view receives a drag input event. If the view has a drop interaction identifier, a drop gesture recognizer is launched and added to the view. In some embodiments, if the view has a spring-loaded interaction identifier, a spring-loaded gesture recognizer is launched and added to the view. In some embodiments, a single interaction identifier may be used to specify multiple interactions supported by a view (e.g., a single interaction identifier may specify that a view supports drop interactions and spring-loaded interactions).
In some embodiments of method 800, the supplemental gesture recognizer addition criteria include a criterion that is met when the touch input is a predetermined type of touch input (e.g., a touch that begins a drag interaction, or a detached touch input for a spring-loaded interaction or a drop interaction) (820).
In some embodiments of method 800, the predetermined type of touch input is a detached touch entering the respective view (822). For example, FIGS. 5D and 5E show a touch input (touch 1) entering the view of application B, user interface area 504; FIG. 6A illustrates that the corresponding event is a drag event in which the touch (or touch input) has the phase value "enter"; and FIG. 6C shows that a touch input entering the view of application B causes the system touch management process to detach the touch, so that when a touch input (touch 1) enters the view of application B, the view receives a detached touch.
In some embodiments, the method 800 further includes detecting movement of the touch input outside the respective view (824), and in response to detecting movement of the touch input outside the respective view, and in accordance with a determination that the touch input is a separate touch exiting the respective view (826): the first set of one or more supplemental gesture recognizers associated with the first interaction identifier is removed from the respective view (828). For example, removing the one or more supplemental gesture recognizers is performed when the touch input satisfies supplemental gesture recognizer removal criteria, where the supplemental gesture recognizer removal criteria include a criterion that is satisfied when the touch input is a predetermined type of touch input, such as a split touch, that exits the respective view. Removing the one or more supplemental gesture recognizers when the removal criteria are met makes the electronic device more efficient by avoiding the use of gesture recognizers that are no longer needed to process touch inputs, thereby reducing latency in responding to touch inputs. In addition, reducing latency in responding to touch inputs reduces erroneous inputs by the user, such as repeated or modified touch inputs made while the user waits for the device to respond to earlier inputs.
In some embodiments of method 800, or in some cases, the touch input corresponds to a drag interaction (830). For example, a drag interaction is an interaction in which an object from a view is picked up and dragged across the user interface of one or more applications.
In some embodiments of method 800, the first interaction identifier is a drag interaction identifier (832). Further, the first set of one or more supplemental gesture recognizers includes a drag start gesture recognizer that includes, in order to recognize a drag start gesture: a requirement that the location of the touch input correspond to the respective view for at least a predefined period of time; and a requirement that a change in the position of the touch input satisfy a lateral displacement threshold (e.g., while continuing to detect the touch input) after the position of the touch input has corresponded to the position of the respective view for at least the predefined period of time (834).
In some such embodiments (as described above with respect to 832 and 834), the method 800 further includes, in response to recognition of the drag start gesture, the device (e.g., a system drag management process or application) performing a drag start action in the user interface indicating that the drag operation has started (836). For example, performance of the drag start action (836) may include moving the user interface object in accordance with movement of the contact, and/or animating a change in appearance of the user interface object, e.g., giving the appearance that the user interface object has lifted in the z-direction from the displayed user interface.
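As a purely illustrative sketch of such a drag start action, assuming the dragged user interface object is backed by a view named objectView, a lift effect might be animated as follows. The transform, shadow values, and duration are arbitrary choices, not values specified by the embodiments.

```swift
import UIKit

// A minimal sketch of a "lift" animation of the kind described above.
func performDragStartAction(on objectView: UIView) {
    UIView.animate(withDuration: 0.2) {
        // Slightly enlarge the view and add a shadow so it appears lifted in the z-direction.
        objectView.transform = CGAffineTransform(scaleX: 1.1, y: 1.1)
        objectView.layer.shadowColor = UIColor.black.cgColor
        objectView.layer.shadowOpacity = 0.3
        objectView.layer.shadowRadius = 8
        objectView.layer.shadowOffset = CGSize(width: 0, height: 6)
    }
}
```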
In some implementations, the drag-start gesture recognizer includes one or more of the following requirements so that the touch input meets predefined drag-start criteria:
1) a timing criterion that is met when the touch input is maintained on the touch-sensitive surface for at least a predefined period of time;
2) an intensity requirement that is met when the detected intensity of the user input remains below an intensity threshold during a predefined period of time (sometimes referred to as a long press criterion);
3) a lateral displacement criterion that is met when the detected lateral displacement of the user input meets (e.g., meets or exceeds) a lateral displacement threshold;
4) requiring that the timing criteria be met before the lateral displacement criteria (e.g., for an input that meets the drag criteria, the input must first meet the timing criteria and then meet the displacement criteria); and/or
5) A requirement that the touch input comprise a single contact, or a requirement that the touch input comprise two concurrent contacts on the touch-sensitive surface (e.g., two different contacts detected simultaneously).
One of ordinary skill in the art will readily appreciate that the drag-start gesture recognizer may include other combinations of the aforementioned requirements (e.g., a single contact with timing and lateral displacement requirements, or two contacts with lateral displacement requirements but no timing requirement, or various other such combinations); a sketch of one such combination follows.
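For illustration, the sketch below combines requirements 1), 3), 4), and a single-contact form of 5) in a custom recognizer. The threshold values, class name, and state handling are assumptions made for this sketch rather than an implementation taken from the embodiments.

```swift
import UIKit
import UIKit.UIGestureRecognizerSubclass

// Illustrative drag-start recognizer: the touch must dwell near its initial location
// for `holdDuration`, and only afterwards may lateral movement of at least
// `displacementThreshold` points begin the drag.
final class DragStartGestureRecognizer: UIGestureRecognizer {
    private let holdDuration: TimeInterval = 0.5
    private let displacementThreshold: CGFloat = 10
    private var startPoint: CGPoint = .zero
    private var startTime: TimeInterval = 0

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        guard touches.count == 1, let touch = touches.first else {
            state = .failed   // Requirement 5): a single contact in this variant.
            return
        }
        startPoint = touch.location(in: view)
        startTime = touch.timestamp
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        guard let touch = touches.first else { return }
        let point = touch.location(in: view)
        let displacement = hypot(point.x - startPoint.x, point.y - startPoint.y)
        let held = touch.timestamp - startTime >= holdDuration

        if state == .possible {
            if displacement >= displacementThreshold {
                // Requirement 4): the timing criterion must be met before the displacement criterion.
                state = held ? .began : .failed
            }
        } else if state == .began || state == .changed {
            state = .changed
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        state = (state == .began || state == .changed) ? .ended : .failed
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent) {
        state = .cancelled
    }
}
```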
In some embodiments of method 800, the first interaction identifier is a drag interaction identifier (832). Further, the first set of one or more supplemental gesture recognizers includes a drag-add gesture recognizer that includes requiring the touch input to include one or more contacts (e.g., a single contact or two different contacts detected simultaneously) whose locations correspond to respective views in order to recognize a drag-add gesture (838); and in response to recognition of the drag-add gesture, the device performs a drag-add-start action in the user interface indicating that a drag-add operation has started (840).
In some embodiments, the drag-add gesture recognizer includes a requirement that the touch input include two concurrent contacts and a requirement that the two concurrent contacts cease to be detected within a predefined period of time (e.g., a two-finger tap gesture). In some implementations, the drag-add gesture recognizer includes a requirement that a location of the touch input correspond to the respective view for at least a predefined period of time. In some other embodiments, the drag-add gesture recognizer includes requirements that the touch input include one or two concurrent contacts that cease to be detected within a predefined period of time (e.g., a one- or two-finger tap gesture), and that the one or two concurrent contacts occur within a predefined distance of a previous touch input that has been recognized as a drag gesture, or within a predefined distance of an object being dragged by the previous touch input in a drag operation. In some embodiments, the drag-add-start action includes animating the user interface object (e.g., corresponding to the touch input) as if the user interface object had been lifted in the z-direction from the displayed user interface.
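A minimal sketch of the proximity test described above follows, assuming a hypothetical DragSession type and an arbitrary proximityThreshold value; it is illustrative only and does not reproduce any particular drag-and-drop API.

```swift
import UIKit

// Illustrative proximity check for a drag-add gesture: a tap only joins an ongoing
// drag if it lands within `proximityThreshold` points of the dragged item's location.
struct DragSession {
    var currentLocation: CGPoint   // Location of the touch driving the ongoing drag.
    var draggedItems: [String]     // Identifiers of items already in the drag.
}

let proximityThreshold: CGFloat = 150   // Arbitrary illustrative value.

func handleDragAddTap(at tapLocation: CGPoint,
                      itemIdentifier: String,
                      session: inout DragSession) -> Bool {
    let dx = tapLocation.x - session.currentLocation.x
    let dy = tapLocation.y - session.currentLocation.y
    guard hypot(dx, dy) <= proximityThreshold else {
        return false   // Too far away: let the view's standard tap recognizer handle it.
    }
    session.draggedItems.append(itemIdentifier)   // Drag-add succeeds.
    return true
}
```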
In some embodiments of method 800, the touch input corresponds to a drop interaction (850). In some embodiments, a drop interaction occurs when an object that has been dragged on a user interface of one or more applications using touch input recognized as a drag gesture is dropped onto a view. Examples of drop interactions are discussed above with reference to fig. 5Q and 5R.
In some embodiments of the method 800, wherein the touch input corresponds to a drop interaction (850), the first interaction identifier is a drop interaction identifier (852), and the first set of one or more supplemental gesture recognizers includes a drop gesture recognizer that includes a requirement that a location of the touch input corresponds to a respective view when the touch input ceases to be detected (854). In such embodiments, as described with respect to 852 and 854, the method 800 includes, in response to recognition of the drop gesture, the device performing a drop action in the user interface indicating that a drop operation has begun (856). Optionally, the drop action includes animating the user interface object (e.g., the object dragged by the touch input) as if the user interface object had been lowered into the displayed user interface in the z-direction from above the displayed user interface (858).
In some embodiments of method 800, the touch input corresponds to a spring-loaded interaction (860). In some embodiments, a spring-loaded interaction occurs when, after an object has been dragged across the user interface, the dragged object hovers over an icon of a view, which causes the view to open. In another example of a spring-loaded interaction, if the application icon view supports spring-loaded interactions, hovering the dragged object over an application icon on the home screen causes the application itself to open. In another example of a spring-loaded interaction, hovering the dragged object over a folder icon causes the folder to open, and a new view of the opened folder is displayed. And in yet another example, hovering the dragged object over an object causes a control of the object, or a control associated with the object, to be activated or deactivated (e.g., by toggling the state of the control when predefined spring-loaded criteria have been met).
In some embodiments of the method 800, wherein the touch input corresponds to a spring-loaded interaction (860), the first interaction identifier is a spring-loaded interaction identifier (862), and the first set of one or more supplemental gesture recognizers includes a spring-loaded gesture recognizer that includes requiring a location of the touch input to correspond to the respective view for at least a predefined period of time in order to recognize the spring-loaded gesture (864). In such embodiments, the method 800 further includes, in response to the recognition of the spring-loaded gesture (866), the application performing a spring-loaded action that includes one or more of: activating a control (e.g., a control corresponding to the view) as if the control had been selected (e.g., tapped or clicked), and opening a container (e.g., a folder or album) in which content may be stored (868). For example, FIGS. 5Q and 5R show an example of a spring-loading action (sometimes referred to as a spring-loading operation), in which a first object (object 4) is moved over a container object (also referred to as a folder) (object 5) by a touch input (touch 1) dragging the first object. Once the spring-loaded gesture is recognized, the container object (object 5) is opened and its contents are displayed (FIG. 5R).
In some embodiments of method 800, the spring-loaded gesture recognizer further comprises requiring the velocity of the touch input on the touch-sensitive surface to be below a predefined velocity threshold (870). For example, in the example shown in fig. 5Q, the touch input (touch 1) dragging the object (object 4) has a velocity equal to or close to zero, hovering substantially over the object 5 while the user waits for the object 5 to open in response to the spring-loaded gesture.
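As an illustrative sketch, the dwell-and-velocity test described above could be organized as follows. The dwellDuration and speedThreshold values, the sampling scheme, and the SpringLoadingDetector type are assumptions made for this sketch.

```swift
import UIKit

// Illustrative spring-loading test: the dragged touch must stay over the target view
// for at least `dwellDuration` while moving slower than `speedThreshold`.
final class SpringLoadingDetector {
    private let dwellDuration: TimeInterval = 0.6
    private let speedThreshold: CGFloat = 30   // points per second (arbitrary value)
    private var hoverStart: TimeInterval?
    private var lastSample: (point: CGPoint, time: TimeInterval)?

    // Call once per touch-move sample while the dragged object is over the target view.
    // Returns true when the spring-loaded gesture should be recognized.
    func update(location: CGPoint, timestamp: TimeInterval) -> Bool {
        defer { lastSample = (location, timestamp) }
        guard let last = lastSample, timestamp > last.time else {
            hoverStart = timestamp
            return false
        }
        let distance = hypot(location.x - last.point.x, location.y - last.point.y)
        let speed = distance / CGFloat(timestamp - last.time)

        if speed >= speedThreshold {
            hoverStart = timestamp   // Moving too fast: restart the dwell timer.
            return false
        }
        return timestamp - (hoverStart ?? timestamp) >= dwellDuration
    }
}
```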
In some embodiments of the method 800, wherein the input corresponds to a spring-loaded interaction (e.g., wherein 860 and 870 are operable), the method 800 includes, in accordance with a determination that a velocity of the touch input on the touch-sensitive surface satisfies a predefined velocity threshold, changing an appearance of the respective view (872). For example, changing the appearance of the respective view includes displaying an indication that the respective view supports spring-loaded interaction (e.g., an animation of an object over which the touch input hovers, or a change in color or thickness of an outer border or perimeter of the object, or a change in background color or blur or other display characteristic of the object) prior to recognizing the spring-loaded gesture or prior to performing the spring-loaded operation. In some embodiments, as a result of the change in appearance, the respective view is displayed with a first transformation of its immediately preceding visual appearance.
In some embodiments of the method 800, a set of one or more standard gesture recognizers associated with the respective view is associated with the respective view by an application that includes the respective view, and the first set of one or more supplemental gesture recognizers is added to the respective view by a system process (e.g., a system drag management process) (872). For example, the application specifies the standard gesture recognizers associated with respective views, and the system process adds supplemental gesture recognizers to respective views in accordance with the interaction identifiers associated with the respective views and satisfaction of respective trigger conditions associated with those interaction identifiers, or alternatively, satisfaction of trigger conditions associated with the set of supplemental gesture recognizers corresponding to each of the interaction identifiers for the respective views.
In some implementations of method 800, method 800 includes detecting, via the touch-sensitive surface, a second touch input at a second location on the touch-sensitive surface that corresponds to the respective view (884); in response to detecting a second touch input via the touch-sensitive surface (886): in accordance with a determination that the second touch input satisfies supplemental gesture recognizer addition criteria, wherein the supplemental gesture recognizer addition criteria include a criterion (888) satisfied when the respective view has the first interaction identifier: adding instances of a first set of one or more supplemental gesture recognizers associated with the first interaction identifier to the respective view (890); and processing the second touch input with an instance of the one or more supplemental gesture recognizers and one or more standard gesture recognizers associated with the respective view (892). For example, the same set of operations performed on the first touch input when it meets the supplemental gesture recognizer addition criteria are performed with respect to the second touch input when it meets the supplemental gesture recognizer addition criteria. As described above, in some embodiments, a separate session identifier is associated with or assigned to each of the two touch inputs, for example as discussed above with reference to fig. 5O and 5P.
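The per-touch bookkeeping described above might be sketched as follows, with each qualifying touch receiving its own session identifier and its own instances of the supplemental gesture recognizers. The DragSessionID and PerTouchRecognizerStore names are assumptions made for this illustration.

```swift
import UIKit

// Illustrative per-touch store: each touch that meets the addition criteria gets its
// own session identifier and its own recognizer instances, which are removed when
// the session ends.
typealias DragSessionID = Int

final class PerTouchRecognizerStore {
    private var nextSessionID: DragSessionID = 1
    private var recognizersBySession: [DragSessionID: [UIGestureRecognizer]] = [:]

    // Called for each touch that satisfies the supplemental-recognizer addition criteria.
    func beginSession(on view: UIView,
                      makeRecognizers: () -> [UIGestureRecognizer]) -> DragSessionID {
        let sessionID = nextSessionID
        nextSessionID += 1
        let instances = makeRecognizers()          // Fresh instances for this touch.
        instances.forEach(view.addGestureRecognizer)
        recognizersBySession[sessionID] = instances
        return sessionID
    }

    func endSession(_ sessionID: DragSessionID, on view: UIView) {
        recognizersBySession[sessionID]?.forEach(view.removeGestureRecognizer)
        recognizersBySession[sessionID] = nil
    }
}
```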
It should be understood that the particular order of operations that have been described in fig. 8A-8E is merely exemplary and is not intended to suggest that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with reference to other methods described herein (e.g., methods 700 and 900) also apply in a similar manner to method 800 described above with reference to fig. 8A-8E. For example, processing touch inputs meeting predefined criteria as split touches, using an automatically added supplemental gesture recognizer to recognize drag, drop, and spring-loaded gestures described above with reference to method 700, and using gesture recognition fault dependencies described below with reference to method 900 may also be applicable in a manner similar to method 800. For the sake of brevity, these details are not repeated here.
The operations in the above-described information processing method are optionally implemented by running one or more functional modules in an information processing apparatus, such as a general-purpose processor (e.g., as described above with reference to fig. 1 and 3A) or a dedicated chip.
The operations described above with reference to fig. 8A-8E are optionally implemented by components depicted in fig. 1 or fig. 3A. For example, initiating a user interface operation such as a drag operation or a spring-loaded operation, as well as many other operations of the method 800 are optionally implemented by the contact/motion module 130 or its event recognizer module 312.
Automatically establishing fault dependencies between gesture recognizers
FIGS. 9A-9D are flow diagrams illustrating a method 900 of processing a touch detected by an electronic device, including establishing fault requirements between one or more supplemental gesture recognizers, which provide specialized processing for operations such as drag operations, and one or more standard gesture recognizers established by an application for processing touch input in one or more views of the application. Method 900 is performed at an electronic device (e.g., device 300 in fig. 3 or portable multifunction device 100 in fig. 1) having a display, a touch-sensitive surface, and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface. In some embodiments, the display is a touch screen display and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 900 are optionally combined, and/or the order of some operations is optionally changed.
As described below, the method 900 provides an intuitive way to process a touch detected by a device, which may or may not be a drag gesture. The method reduces the computational load imposed on the electronic device through touch gestures and other user inputs, improves device responsiveness, and thereby reduces user delays and erroneous inputs. For example, when a user wants to drag one or more objects from a first application view or area to another application view or area, such improvements reduce the number, scope, and/or nature of inputs from the user, thereby creating a more efficient human-machine interface. For battery-powered electronic devices, enabling a user to perform drag-and-drop operations faster and more efficiently saves power and increases time during battery charging.
The device displays a user interface (902) of an application on its display, and while displaying the user interface of the application, detects a user input (e.g., a touch input) corresponding to a portion of the user interface of the application that is associated with a plurality of gesture recognizers, wherein: a first set of one or more standard gesture recognizers of the plurality of gesture recognizers is associated with (e.g., assigned to) the portion of the user interface by the application; a second set of one or more supplemental gesture recognizers of the plurality of gesture recognizers is associated with (e.g., assigned to) the portion of the user interface by a system process (e.g., an application-independent process); and a first fault requirement involving a first standard gesture recognizer (e.g., in the first set of one or more standard gesture recognizers) and a first supplemental gesture recognizer (e.g., in the second set of one or more supplemental gesture recognizers) is associated with (e.g., assigned to) the portion of the user interface by a system process (e.g., an application-independent process) (904). The method 900 further includes, in response to detecting the user input, processing the user input in accordance with the first standard gesture recognizer, the first supplemental gesture recognizer, and the first fault requirement (906). As discussed above with reference to FIGS. 6D and 6E, the fault requirements between gesture recognizers may be represented by a gesture recognition hierarchy, such as hierarchy 600-A or 600-B.
In some embodiments of the method 900, the first set of one or more (e.g., application-specific) standard gesture recognizers and the second set of one or more (e.g., system process-specific) gesture recognizers are arranged in a gesture recognition hierarchy, and the first fault requirement comprises requiring the first respective gesture recognizer to fail in order for the second respective gesture recognizer to successfully recognize the gesture, wherein the first respective gesture recognizer is at a higher level in the gesture recognition hierarchy than the second respective gesture recognizer (908). For example, referring to the example shown in fig. 6E, a first respective gesture recognizer (e.g., an intensity-based gesture recognizer) is a parent of a second respective gesture recognizer (e.g., a long press gesture recognizer) in the gesture recognition hierarchy 600-B, or in other words, the second respective gesture recognizer (e.g., a long press gesture recognizer) is a child of the first respective gesture recognizer (an intensity-based gesture recognizer) in the gesture recognition hierarchy 600-B.
In some implementations, one or more gesture recognizers (e.g., system-process-specified gesture recognizers) may be inserted into a gesture recognition hierarchy of an existing set of gesture recognizers (e.g., application-specified gesture recognizers). For example, the transition from the gesture recognition hierarchy 600-A of FIG. 6D to the gesture recognition hierarchy 600-B of FIG. 6E is caused by the insertion of a drag start gesture recognizer into the gesture recognition hierarchy 600-A of FIG. 6D. In some embodiments, the gesture recognizer may be added to the gesture recognition hierarchy at a level higher than existing gesture recognizers in the hierarchy (e.g., the drag start gesture recognizer is inserted at a level higher than the long press gesture recognizer), such that processing the corresponding user input using the existing gesture recognizers requires that the added gesture recognizer not recognize the corresponding user input (e.g., in the scenario shown in FIG. 6E, the drag start gesture recognizer must fail in order for the long press gesture recognizer to recognize the corresponding user input).
In some embodiments, gesture recognizers may be added to (e.g., inserted into) the gesture recognition hierarchy at a level between two existing gesture recognizers. For example, with respect to a gesture recognition hierarchy that does not include a drag start gesture recognizer (e.g., prior to adding the drag start gesture recognizer to the gesture recognition hierarchy), processing a long press gesture requires a determination that the intensity-based gesture recognizer is unable to recognize the gesture. In some embodiments, as shown in FIG. 6E, a drag start gesture recognizer is added to the hierarchy between the intensity-based gesture recognizer and the long press gesture recognizer (e.g., the long press gesture recognizer is lower than the drag start gesture recognizer, which in turn is lower than the intensity-based gesture recognizer). In some such implementations (e.g., after adding the drag start gesture recognizer to the gesture recognition hierarchy), processing a long press gesture requires a determination that the intensity-based gesture recognizer, and then the drag start gesture recognizer, both fail to recognize the user input (e.g., the user input corresponds neither to an intensity-based gesture nor to a drag start gesture). In some embodiments, gesture recognizers may be added to the gesture recognition hierarchy at a level below existing gesture recognizers in the hierarchy, such that processing the respective user input using the added gesture recognizer requires that the existing gesture recognizers fail to recognize the respective user input (e.g., processing a drag start gesture requires that the intensity-based gesture recognizer fail to recognize the user input).
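For readers familiar with UIKit, a failure dependency of this kind can be expressed with the public UIGestureRecognizer.require(toFail:) method. The sketch below reproduces the dependencies of hierarchy 600-B; the deepPress and dragStart recognizers stand in for the intensity-based and drag start gesture recognizers and are assumed to be custom subclasses, and the sketch is illustrative rather than a description of how the system process of the embodiments establishes the dependency.

```swift
import UIKit

// Illustrative wiring of the failure dependencies of hierarchy 600-B.
func installFailureDependencies(on view: UIView,
                                deepPress: UIGestureRecognizer,          // intensity-based (application)
                                longPress: UILongPressGestureRecognizer, // standard (application)
                                dragStart: UIGestureRecognizer) {        // supplemental (system process)
    view.addGestureRecognizer(deepPress)
    view.addGestureRecognizer(longPress)
    view.addGestureRecognizer(dragStart)

    // Insert the drag-start recognizer between the two existing recognizers:
    // the drag-start recognizer waits for the intensity-based recognizer to fail,
    // and the long-press recognizer waits for the drag-start recognizer to fail.
    dragStart.require(toFail: deepPress)
    longPress.require(toFail: dragStart)
}
```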
In some embodiments of the method 900, the first fault requirement includes requiring the first supplemental gesture recognizer to fail to recognize the user input (e.g., the touch input) in order for the first standard gesture recognizer to successfully recognize the gesture (910). For example, in the example shown in FIG. 6E, the drag start gesture recognizer must fail in order for the long press gesture recognizer to succeed. In another example, not shown in FIG. 6E, the drag-add gesture recognizer must fail in order for the tap gesture recognizer to succeed.
In some embodiments of method 900, the first supplemental gesture recognizer is a drag add gesture recognizer and the first standard gesture recognizer is a tap gesture recognizer. See, for example, fig. 6E.
In some implementations of method 900, to recognize the drag-add gesture, the drag-add gesture recognizer includes a requirement that the user input include a contact at a location corresponding to a respective user interface element in the user interface, and liftoff of the contact, within a predefined period of time (e.g., T4 in FIG. 5L) (914). For example, in such embodiments, the user input recognized as the drag-add gesture includes the same elements as a tap gesture. In some embodiments, the drag-add gesture recognizer also includes a requirement that the user input include a second contact (e.g., a contact corresponding to a drag operation). In some embodiments, the drag-add gesture recognizer further includes a requirement that a distance between the contact and the second contact be less than a predefined proximity threshold. For example, a "tap" (e.g., a contact followed by liftoff) is used to add the corresponding user interface element to an ongoing drag operation (e.g., the drag operation corresponding to the second contact). Optionally, the corresponding user interface element is added to a "nearby" ongoing drag operation (e.g., within a proximity threshold). For example, the sequence of FIGS. 5I and 5L shows a drag-add gesture performed by a second touch (touch 2) on object 2, which results in adding object 2 to the drag operation corresponding to the first touch (touch 1). In FIGS. 5I and 5L, time t2 corresponds to the time at which the second touch was first detected, and T4 corresponds to the aforementioned predefined time period, which is (or corresponds to) the maximum duration of the drag-add gesture.
In some embodiments of method 900, processing the user input according to the first standard gesture recognizer, the first supplemental gesture recognizer, and the first fault requirement includes (916): in accordance with a determination (e.g., by a system process) that content associated with the respective user interface element (e.g., content of, or represented by, the respective user interface element) cannot be added to the drag operation, failing to recognize the user input using the drag-add gesture recognizer and processing the user input using the tap gesture recognizer (918); and in accordance with a determination that content associated with the respective user interface element can be added to the drag operation, recognizing the drag-add gesture and adding the content to the drag operation (920). In some embodiments, to make this determination, the system process queries the application to determine whether the application supports adding content (e.g., content associated with the respective user interface element) to the drag operation (e.g., whether an application or application view that includes the respective user interface element supports drag interactions).
In some embodiments of the method 900, the first supplemental gesture recognizer is a drag start gesture recognizer and the first standard gesture recognizer is a long press gesture recognizer (922). Such an implementation is represented by, or at least consistent with, the gesture recognizer hierarchy discussed above with reference to fig. 6E.
In some embodiments of method 900, processing the user input according to the first standard gesture recognizer, the first supplemental gesture recognizer, and the first fault requirement is performed according to a determination that the device satisfies the compact display criterion. For example, if a device (e.g., a smartphone) has a display size below a predefined threshold, the drag start gesture recognizer must fail before the long press gesture recognizer can recognize the gesture.
In some embodiments of the method 900, in accordance with a determination that the device (e.g., a tablet computer) does not satisfy the compact display criteria (926), the first fault requirement includes a requirement to cancel (e.g., subsequently ignore) successful recognition of the user input as the first gesture (e.g., a long press) by the first standard gesture recognizer in accordance with successful recognition of the user input as the second gesture (e.g., a drag start) by the first supplemental gesture recognizer (928). In such embodiments, processing (see 906) the user input in accordance with the first standard gesture recognizer, the first supplemental gesture recognizer, and the first fault requirement includes: recognizing the user input as a first gesture (e.g., a long press gesture) using a standard gesture recognizer (932), and after recognizing the user input as the first gesture using the standard gesture recognizer (934): the user input is recognized as a second gesture (e.g., a drag start gesture) using the supplemental gesture recognizer (936), and recognition of the user input as a first gesture (e.g., a cancel first (long press) gesture) using the standard gesture recognizer is ceased (938).
In some implementations, recognizing the user input as the first gesture (e.g., the long press gesture) using the standard gesture recognizer (932) includes, as shown in FIG. 5C, performing an operation in the user interface associated with the first gesture, such as displaying a menu of options in response to recognizing the long press gesture. In some implementations, ceasing to recognize the user input as the first gesture using the standard gesture recognizer (938) optionally, but typically, includes ceasing to perform the operation associated with the first gesture. For example, in the example shown in FIG. 5C, ceasing to perform the operation associated with the first gesture includes ceasing to display the menu of options that was displayed in response to recognizing the long press gesture (e.g., ceasing its display at the transition from FIG. 5C to FIG. 5D). Further, in some embodiments, recognizing the user input as the second gesture (e.g., the drag start gesture) using the supplemental gesture recognizer (936) includes performing an operation in the user interface associated with the second gesture, such as displaying an animation of a user interface object being lifted in the z-direction from the user interface to indicate the start of a drag session, in response to recognizing the drag start gesture.
In some implementations of the method 900, the first fault requirement includes requiring the first standard gesture recognizer to fail in order for the first supplemental gesture recognizer to successfully recognize the gesture (e.g., the intensity-based gesture recognizer must fail in order for the drag start gesture recognizer to succeed, as shown in the gesture recognizer hierarchy 600-B of FIG. 6E) (940).
In some embodiments of the method 900, such as embodiments consistent with the requirement 940, the first standard gesture recognizer is an intensity-based gesture recognizer (e.g., a gesture recognizer that requires the touch input to meet an intensity criterion, such as requiring a characteristic intensity of the contact to increase above an intensity threshold in order for the gesture recognizer to successfully recognize a gesture), and the first supplemental gesture recognizer is a drag start gesture recognizer (942). See the discussion of FIG. 6E above, which graphically depicts the corresponding gesture recognizer hierarchy 600-B.
In some embodiments of the method 900, the first supplemental gesture recognizer is a drag start gesture recognizer, and to recognize the drag start gesture, the drag start gesture recognizer includes: a requirement that the user input include a contact at a location corresponding to a respective user interface element in the user interface for at least a predefined period of time; and a requirement that a change in the location of the contact satisfy a lateral displacement threshold (e.g., while continuing to detect the contact) after the location of the contact has corresponded to the respective user interface element for at least the predefined period of time (944). In such embodiments, the touch input recognized as the drag start gesture initially remains at a location, typically on or over an object (e.g., touch 1 on object 4, FIG. 5D), for at least a predefined period of time before moving and meeting the lateral displacement threshold (e.g., by moving laterally by at least a threshold number of pixels).
In some embodiments of the method 900, the first supplemental gesture recognizer is a drag start gesture recognizer, and to recognize the drag start gesture, the drag start gesture recognizer includes: requiring the user input to include two concurrent contacts (e.g., two different contacts detected simultaneously, as shown in FIG. 5F) on the touch-sensitive surface at locations corresponding to respective user interface elements in the user interface (e.g., object 4 in user interface area 502, FIG. 5F); and the change in position of the two concurrent contacts is required to satisfy a lateral displacement threshold (946). In some embodiments, for a user input comprising two concurrent contacts on a user interface element, the drag start gesture recognizer does not require detection of the user input for at least a predefined period of time. In such implementations, once the two-finger input is in contact with the touch-sensitive surface, using the two-finger input enables the user to begin the lateral displacement phase of the drag gesture. In some embodiments, the drag start gesture recognizer includes a requirement that the two concurrent contacts remain within a second predefined proximity threshold of each other (e.g., the two concurrent contacts move substantially together rather than away from each other).
In some embodiments of method 900, the first supplemental gesture recognizer is a drag-add gesture recognizer, and to recognize the drag-add gesture, the drag-add gesture recognizer includes requiring the user input to include two concurrent contacts at locations corresponding to respective user interface elements in the user interface, and liftoff of the two concurrent contacts within a predefined time period (e.g., the user input is a two-finger tap gesture instead of the single-finger tap gesture discussed above with reference to FIGS. 5I and 5L) (948). In some embodiments, the drag-add gesture recognizer also includes requiring the user input to include a third contact (e.g., corresponding to a drag operation that begins before the drag-add gesture). In some implementations, the drag-add gesture recognizer also includes requiring that the distance from the third contact to the closer of the two concurrent contacts be less than a predefined proximity threshold. When these requirements are met, a two-finger drag-add gesture (e.g., two concurrent contacts) is used to add the corresponding user interface element to the ongoing drag operation (e.g., the drag operation corresponding to the third contact). Accordingly, the corresponding user interface element is added to a "nearby" ongoing drag operation (e.g., within a proximity threshold).
It should be understood that the particular order of operations that have been described in FIGS. 9A-9D is merely exemplary and is not intended to indicate that the order is the only order in which the operations may be performed. One of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein in connection with other methods described herein (e.g., methods 700 and 800) also apply in a similar manner to method 900 described above in connection with FIGS. 9A-9D. For example, the interaction identifiers described above with reference to method 800, which are associated with different views to determine which supplemental gesture recognizers to add to the respective views, and the processing of touch inputs meeting predefined criteria as separate touches using the automatically added supplemental gesture recognizers to recognize drag, drop, and spring-loaded gestures, described above with reference to method 700, may also be applicable in a manner similar to method 900. For the sake of brevity, these details are not repeated here.
The operations in the above-described information processing method are optionally implemented by running one or more functional modules in an information processing apparatus, such as a general-purpose processor (e.g., as described above with reference to fig. 1 and 3A) or a dedicated chip.
The operations described above with reference to FIGS. 9A-9D are optionally implemented by components depicted in fig. 1 or fig. 3A. For example, initiating user interface operations such as a drag operation or a spring-loaded operation, as well as many other operations of method 900, are optionally implemented by contact/motion module 130 or its event recognizer module 312.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments described, with various modifications as are suited to the particular use contemplated.

Claims (32)

1. An information processing method (700), comprising:
at an electronic device with a display and a touch-sensitive surface:
displaying (702) a user interface for a first application comprising a plurality of user interface objects on the display;
while displaying (704) the user interface for the first application, receiving information describing a first touch, wherein the first touch is detected at a location on the touch-sensitive surface that corresponds to a respective user interface object of the plurality of user interface objects in the user interface for the first application and the first touch moves outside of an area corresponding to the user interface for the first application into an area corresponding to a user interface for a second application; and
In response to receiving (734) the information describing the first touch:
in accordance with a determination that the first touch does not satisfy predefined criteria associated with initiating respective types of interface operations, providing (736), to the first application, information describing the first touch when the first touch is within the area corresponding to the user interface for the first application and describing the first touch when the first touch moves outside of the area corresponding to the user interface for the first application and into the area corresponding to the user interface for the second application; and is
In accordance with a determination (706) that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation:
initiating (708) the respective type of user interface operation involving the respective user interface object; and
while continuing to detect the first touch on the touch-sensitive surface (710) and while continuing the user interface operation involving the respective user interface object:
receiving (712), at the first application, an instruction to ignore the first touch with respect to a user interface operation other than the respective type of interface operation;
Ignoring (714), by the first application, the first touch with respect to user interface operations other than the respective type of interface operation, including ceasing processing, by the first application, the first touch with respect to user interface operations other than the respective type of interface operation;
providing (740), to the first application, information describing the first touch when the first touch is within the area corresponding to the user interface for the first application; and is
Switching to (742) providing information to the second application describing the first touch when the first touch moves outside of the area corresponding to the user interface for the first application and into the area corresponding to the user interface for the second application.
2. The method of claim 1, comprising:
processing (720) the first touch as a view-attached touch event prior to determining that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation; and is
In accordance with a determination that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation, processing (722) the first touch as a split touch event.
3. The method of claim 1, further comprising:
prior to determining that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation, processing (724), using one or more gesture recognizers associated with the first application, the first touch as a view-attached touch event; and is
In accordance with a determination (726) that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation:
ceasing (728) processing the first touch as a view-attached touch event using the one or more gesture recognizers associated with the first application; and
processing (730) the first touch as a split touch event.
4. The method of any of claims 1 to 3, comprising:
receiving (750) information describing a change in a second touch on the touch-sensitive surface at a location corresponding to the first application while the user interface operation involving the respective user interface object continues; and
in response to detecting the change in the second touch, processing (752) the information describing the change in the second touch without regard to the first touch using one or more gesture recognizers associated with the first application.
5. The method of any of claims 1-3, including, upon detecting movement of the first touch, and in accordance with a determination that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation:
determining (744) one or more respective regions corresponding to respective locations of the first touch, wherein the one or more respective regions are associated with one or more respective applications; and
when the first touch is within a respective area, providing (746) information describing the first touch to the respective application for the respective area.
6. The method of any of claims 1-3, including, in accordance with a determination that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation, providing (748), to a respective application for a respective area, the information describing the first touch while providing information describing one or more additional touches that do not satisfy the predefined criteria associated with initiating the respective type of interface operation.
7. The method of claim 1, comprising:
while continuing to detect the first touch on the touch-sensitive surface (710) and while continuing the user interface operation involving the respective user interface object:
Receiving (754), at the first application, information describing a second touch different from the first touch;
in response to (756) receiving the information describing the second touch:
in accordance with a determination that the second touch satisfies predefined add criteria associated with the user interface operation:
receiving (758), at the first application, an instruction to ignore the second touch with respect to a user interface operation other than the respective type of interface operation; and
ignoring (760), by the first application, the second touch with respect to user interface operations other than the respective type of interface operation.
8. The method of claim 7, wherein the predefined addition criteria include (762) a criterion that is met when a distance between the first touch and the second touch is less than a predefined proximity threshold.
9. The method of any of claims 1-3, 7, and 8, comprising:
while displaying the user interface for the first application, receiving (764), at the first application, information describing a third touch detected on the touch-sensitive surface at a location corresponding to a third user interface object of the plurality of user interface objects in the first application;
In response to receiving (766) the information describing the third touch:
in accordance with a determination that the third touch satisfies the predefined criteria associated with initiating the respective type of interface operation:
initiating (768) a second user interface operation of the respective type involving the third user interface object;
while continuing to detect the third touch on the touch-sensitive surface (770) and while continuing the second user interface operation involving the third user interface object:
receiving (772), at the first application, an instruction to ignore the third touch with respect to user interface operations other than the respective type of interface operation; and
ignoring (774), by the first application, the third touch with respect to user interface operations other than the respective type of interface operation.
10. The method of any of claims 1-3, 7, and 8, including, in response to (776) receiving the information describing the first touch, determining, by the first application, whether the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation.
11. The method of any of claims 1-3, 7, and 8, wherein the instruction received (778) at the first application to ignore the first touch with respect to a user interface operation other than the respective type of interface operation comprises a touch cancel event.
12. The method of claim 11, comprising, in response to (780) receiving the touch cancel event, forgoing performance of user interface operations other than the respective type of interface operation and failing to recognize the first touch as a gesture other than one of a predefined set of gestures associated with the respective type of interface operation.
13. The method according to any of claims 1-3, 7 and 8, wherein initiating the respective type of user interface operation involving the respective user interface object comprises providing (782) a unique session identifier to the first application, the unique session identifier identifying the respective type of user interface operation and the respective user interface object.
14. The method of claim 13, wherein the unique session identifier (784) is associated with additional information regarding a drag operation, the additional information including one or more of: a total number of user interface objects involved in the respective type of the user interface operation that include the respective user interface object, and one or more content types of user interface objects involved in the respective type of the user interface operation.
15. The method of any of claims 1-3, 7, and 8, wherein the respective type of the user interface operation (786) has an associated operation phase value that describes a current phase of the user interface operation, and wherein the phase value is selected from the group consisting of: exiting, entering and in.
16. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display, a touch-sensitive surface, cause the device to perform the method of any of claims 1-15.
17. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-15.
18. An information processing apparatus for use in an electronic device with a display and a touch-sensitive surface, comprising:
Means for displaying (702) a user interface for a first application on the display comprising a plurality of user interface objects;
means, operative while displaying (704) the user interface for the first application, for receiving information describing a first touch, wherein the first touch is detected at a location on the touch-sensitive surface that corresponds to a respective user interface object of the plurality of user interface objects in the user interface for the first application and the first touch moves outside of an area corresponding to the user interface for the first application into an area corresponding to a user interface for a second application; and
means, operative in response to receiving (734) the information describing the first touch, for:
in accordance with a determination that the first touch does not satisfy predefined criteria associated with initiating respective types of interface operations, providing (736), to the first application, information describing the first touch when the first touch is within the area corresponding to the user interface for the first application and describing the first touch when the first touch moves outside of the area corresponding to the user interface for the first application and into the area corresponding to the user interface for the second application; and is
In accordance with a determination (706) that the first touch satisfies predefined criteria associated with initiating the respective type of interface operation, the apparatus is to perform operations comprising:
initiating (708) the respective type of user interface operation involving the respective user interface object; and
while continuing to detect the first touch on the touch-sensitive surface (710) and while continuing the user interface operation involving the respective user interface object:
receiving (712), at the first application, an instruction to ignore the first touch with respect to a user interface operation other than the respective type of interface operation;
ignoring (714), by the first application, the first touch with respect to user interface operations other than the respective type of interface operation, including ceasing processing, by the first application, the first touch with respect to user interface operations other than the respective type of interface operation;
providing (740), to the first application, information describing the first touch when the first touch is within the area corresponding to the user interface for the first application; and is
Switching to (742) providing information to the second application describing the first touch when the first touch moves outside of the area corresponding to the user interface for the first application and into the area corresponding to the user interface for the second application.
19. The information processing apparatus according to claim 18, comprising:
means, operative prior to determining that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation, for processing (720) the first touch as a view-attached touch event; and is
Means, operative in accordance with a determination that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation, for processing (722) the first touch as a split touch event.
20. The information processing apparatus according to claim 18, further comprising:
means, operative prior to determining that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation, for processing (724) the first touch as a view-attached touch event using one or more gesture recognizers associated with the first application; and is
In accordance with a determination (726) that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation, an apparatus for performing operations comprising:
ceasing (728) processing the first touch as a view-attached touch event using the one or more gesture recognizers associated with the first application; and
processing (730) the first touch as a split touch event.
21. The information processing apparatus according to any one of claims 18 to 20, comprising:
means, operative when the user interface operation involving the respective user interface object continues, for receiving (750) information describing a change in a second touch on the touch-sensitive surface at a location corresponding to the first application; and
means, operative in response to detecting the change in the second touch, for processing (752), without regard to the first touch, the information describing the change in the second touch using one or more gesture recognizers associated with the first application.
22. The information processing apparatus according to any one of claims 18 to 20, comprising means, operative upon detection of movement of the first touch and in accordance with a determination that the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation, for:
Determining (744) one or more respective regions corresponding to respective locations of the first touch, wherein the one or more respective regions are associated with one or more respective applications; and
when the first touch is within a respective area, providing (746) information describing the first touch to the respective application for the respective area.
23. The information processing apparatus according to any one of claims 18 to 20, comprising means, operative in accordance with a determination that the first touch meets the predefined criteria associated with initiating the respective type of interface operation, for providing (748), to a respective application for a respective area, the information describing the first touch while providing information describing one or more additional touches that do not meet the predefined criteria associated with initiating the respective type of interface operation.
24. The information processing apparatus according to claim 18, comprising:
means, operative when the first touch continues to be detected on the touch-sensitive surface (710) and when the user interface operation involving the respective user interface object continues, for:
Receiving (754), at the first application, information describing a second touch different from the first touch;
in response to (756) receiving the information describing the second touch:
in accordance with a determination that the second touch satisfies predefined add criteria associated with the user interface operation:
receiving (758), at the first application, an instruction to ignore the second touch with respect to a user interface operation other than the respective type of interface operation; and
ignoring (760), by the first application, the second touch with respect to user interface operations other than the respective type of interface operation.
25. The information processing apparatus of claim 24, wherein the predefined addition criteria include (762) a criterion that is met when a distance between the first touch and the second touch is less than a predefined proximity threshold.
26. The information processing apparatus according to any one of claims 18 to 20, 24, and 25, comprising:
means, operative while displaying the user interface for the first application, for receiving (764), at the first application, information describing a third touch detected on the touch-sensitive surface at a location corresponding to a third user interface object of the plurality of user interface objects in the first application;
Means, operative in response to (766) receiving the information describing the third touch, for:
in accordance with a determination that the third touch satisfies the predefined criteria associated with initiating the respective type of interface operation:
initiating (768) a second user interface operation of the respective type involving the third user interface object;
while continuing to detect the third touch on the touch-sensitive surface (770) and while continuing the second user interface operation involving the third user interface object:
receiving (772), at the first application, an instruction to ignore the third touch with respect to user interface operations other than the respective type of interface operation; and
ignoring (774), by the first application, the third touch with respect to user interface operations other than the respective type of interface operation.
27. The information processing apparatus according to any of claims 18-20, 24 and 25, comprising means, operative in response to (776) receiving the information describing the first touch, for determining, by the first application, whether the first touch satisfies the predefined criteria associated with initiating the respective type of interface operation.
28. The information processing apparatus according to any one of claims 18-20, 24, and 25, wherein the instruction received (778) at the first application to ignore the first touch with respect to a user interface operation other than the respective type of interface operation comprises a touch cancel event.
29. The information processing apparatus of claim 28, comprising means, operative in response to (780) receiving the touch cancel event, for forgoing performance of user interface operations other than the respective type of interface operation and forgoing recognition of the first touch as a gesture other than one of a predefined set of gestures associated with the respective type of interface operation.
30. The information processing apparatus according to any of claims 18-20, 24 and 25, wherein initiating the respective type of user interface operation involving the respective user interface object comprises providing (782) a unique session identifier to the first application, the unique session identifier identifying the respective type of user interface operation and the respective user interface object.
31. The information processing apparatus of claim 30, wherein the unique session identifier (784) is associated with additional information regarding a drag operation, the additional information including one or more of: a total number of user interface objects involved in the respective type of the user interface operation that include the respective user interface object, and one or more content types of user interface objects involved in the respective type of the user interface operation.
32. The information processing apparatus of any of claims 18-20, 24, and 25, wherein the respective type of the user interface operation (786) has an associated operation phase value that describes a current phase of the user interface operation, and wherein the phase value is selected from the group consisting of: exit, enter, and within.
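
The add criteria recited in claims 24 and 25 can be pictured with a short, purely illustrative sketch. Nothing below is taken from the patent: the TouchPoint type, the satisfiesAddCriteria function, and the 100-point threshold are assumed names and values, used only to show the kind of distance comparison the claims describe (a second touch joins the ongoing operation when it lands within a predefined proximity of the first touch).

```swift
import Foundation

/// Hypothetical model of a touch location on the touch-sensitive surface.
struct TouchPoint {
    var x: Double
    var y: Double
}

/// Illustrative "add criteria": a second touch may join the ongoing
/// drag-style operation only if it lands close enough to the first touch.
/// The threshold value is an arbitrary placeholder, not taken from the patent.
func satisfiesAddCriteria(first: TouchPoint,
                          second: TouchPoint,
                          proximityThreshold: Double = 100.0) -> Bool {
    // Euclidean distance between the two touch locations (claim 25).
    let distance = hypot(second.x - first.x, second.y - first.y)
    return distance < proximityThreshold
}

// A touch 60 points away satisfies the criteria; one 250 points away does not.
let anchorTouch = TouchPoint(x: 100, y: 100)
print(satisfiesAddCriteria(first: anchorTouch, second: TouchPoint(x: 160, y: 100))) // true
print(satisfiesAddCriteria(first: anchorTouch, second: TouchPoint(x: 350, y: 100))) // false
```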
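
Claims 28 and 29 describe the instruction to ignore a touch arriving as a touch cancel event, after which the application neither performs other user interface operations with that touch nor recognizes it as any other gesture. The sketch below is a minimal, hypothetical model of that behavior; TouchEvent, TouchDispatcher, and the integer touch identifiers are assumptions for illustration, not APIs from the patent or from UIKit.

```swift
/// Hypothetical touch events as seen by the first application; the touch
/// cancel event of claim 28 is modeled as the `.cancelled` case.
enum TouchEvent {
    case moved(touchID: Int)
    case ended(touchID: Int)
    case cancelled(touchID: Int)
}

/// Hypothetical application-side dispatcher. Once a touch is cancelled for the
/// application (because a system-level operation has claimed it), the
/// application forgoes other user interface operations for that touch and no
/// longer tries to recognize it as any other gesture (claim 29).
final class TouchDispatcher {
    private var ignoredTouchIDs: Set<Int> = []

    func handle(_ event: TouchEvent) {
        switch event {
        case .cancelled(let id):
            // Remember the touch so it is ignored for other operations.
            ignoredTouchIDs.insert(id)
        case .moved(let id), .ended(let id):
            guard !ignoredTouchIDs.contains(id) else { return }
            // ... ordinary gesture recognition for touches the application still owns ...
        }
    }
}
```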
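
Claims 30 and 31 associate a unique session identifier with the operation, together with the number of involved objects and their content types. A minimal sketch of such a record follows; DragSessionInfo and its field names are assumed for illustration, and the content-type strings are ordinary uniform type identifier strings chosen as examples.

```swift
import Foundation

/// Hypothetical record of the additional information claim 31 associates with
/// the unique session identifier of claim 30. Field names are illustrative.
struct DragSessionInfo {
    let sessionID: UUID          // unique session identifier handed to the application
    let operationKind: String    // the respective type of user interface operation
    let itemCount: Int           // total number of user interface objects involved
    let contentTypes: [String]   // content types of the involved objects
}

// Example session describing a drag of three objects with two content types.
let session = DragSessionInfo(sessionID: UUID(),
                              operationKind: "drag",
                              itemCount: 3,
                              contentTypes: ["public.image", "public.plain-text"])
print(session.sessionID, session.itemCount)
```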
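
Claim 32 attaches an operation phase value to the operation. One hedged way to picture it is as a small enumeration whose cases mirror the claim language; OperationPhase and its case names are assumptions for illustration only, not an actual UIKit type.

```swift
/// Hypothetical phase value mirroring the wording of claim 32: the operation
/// reports whether it is entering, within, or exiting a region of interest.
enum OperationPhase: String {
    case enter   // the operation enters a region
    case within  // the operation is currently inside the region
    case exit    // the operation leaves the region
}

// The current phase value changes as the operation proceeds.
var phase = OperationPhase.enter
phase = .within
phase = .exit
print(phase.rawValue) // "exit"
```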
CN201911262546.3A 2017-05-16 2018-05-16 Apparatus, method and graphical user interface for touch input processing Active CN111026306B (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US201762507096P 2017-05-16 2017-05-16
US62/507,096 2017-05-16
DKPA201770371 2017-05-24
DKPA201770371A DK179979B1 (en) 2017-05-16 2017-05-24 Devices, methods, and graphical user interfaces for touch input processing
US15/917,467 2018-03-09
US15/917,467 US10409477B2 (en) 2017-05-16 2018-03-09 Devices, methods, and graphical user interfaces for touch input processing
CN201880031229.8A CN110651242B (en) 2017-05-16 2018-05-16 Apparatus, method and graphical user interface for touch input processing

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201880031229.8A Division CN110651242B (en) 2017-05-16 2018-05-16 Apparatus, method and graphical user interface for touch input processing

Publications (2)

Publication Number Publication Date
CN111026306A CN111026306A (en) 2020-04-17
CN111026306B true CN111026306B (en) 2021-02-02

Family

ID=64274873

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201880031229.8A Active CN110651242B (en) 2017-05-16 2018-05-16 Apparatus, method and graphical user interface for touch input processing
CN201911262546.3A Active CN111026306B (en) 2017-05-16 2018-05-16 Apparatus, method and graphical user interface for touch input processing

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201880031229.8A Active CN110651242B (en) 2017-05-16 2018-05-16 Apparatus, method and graphical user interface for touch input processing

Country Status (3)

Country Link
EP (1) EP3593235A1 (en)
CN (2) CN110651242B (en)
WO (1) WO2018213414A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10877660B2 (en) * 2018-06-03 2020-12-29 Apple Inc. Devices and methods for processing inputs using gesture recognizers
DE102019003997A1 (en) * 2019-06-07 2020-12-10 Drägerwerk AG & Co. KGaA Input system and method for controlling an electromedical device

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102246065B1 (en) * 2005-03-04 2021-04-29 Apple Inc. Multi-functional hand-held device
US8285499B2 (en) * 2009-03-16 2012-10-09 Apple Inc. Event recognition
US9684521B2 (en) * 2010-01-26 2017-06-20 Apple Inc. Systems having discrete and continuous gesture recognizers
US9250788B2 (en) * 2009-03-18 2016-02-02 IdentifyMine, Inc. Gesture handlers of a gesture engine
US8698845B2 (en) * 2010-01-06 2014-04-15 Apple Inc. Device, method, and graphical user interface with interactive popup views
US9519356B2 (en) * 2010-02-04 2016-12-13 Microsoft Technology Licensing, Llc Link gestures
US20120133579A1 (en) * 2010-11-30 2012-05-31 Microsoft Corporation Gesture recognition management
EP3734405A1 (en) * 2011-02-10 2020-11-04 Samsung Electronics Co., Ltd. Portable device comprising a touch-screen display, and method for controlling same
US20130038552A1 (en) * 2011-08-08 2013-02-14 Xtreme Labs Inc. Method and system for enhancing use of touch screen enabled devices
US20130346912A1 (en) * 2012-06-20 2013-12-26 Alexander Buening Method And System To Launch And Manage An Application On A Computer System Having A Touch Panel Input Device
KR102203885B1 (en) * 2013-04-26 2021-01-15 삼성전자주식회사 User terminal device and control method thereof
US10684740B2 (en) * 2013-11-04 2020-06-16 Facebook, Inc. Intervention conditions
US20150153897A1 (en) * 2013-12-03 2015-06-04 Microsoft Corporation User interface adaptation from an input source identifier change
US20160062636A1 (en) * 2014-09-02 2016-03-03 Lg Electronics Inc. Mobile terminal and control method thereof
US10048856B2 (en) * 2014-12-30 2018-08-14 Microsoft Technology Licensing, Llc Configuring a user interface based on an experience mode transition
US10095396B2 (en) * 2015-03-08 2018-10-09 Apple Inc. Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object
US10101882B2 (en) * 2015-06-05 2018-10-16 Apple Inc. Movement between multiple views
US9961239B2 (en) * 2015-06-07 2018-05-01 Apple Inc. Touch accommodation options
US9880735B2 (en) * 2015-08-10 2018-01-30 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US20170046058A1 (en) * 2015-08-10 2017-02-16 Apple Inc. Devices, Methods, and Graphical User Interfaces for Adjusting User Interface Objects

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5564112A (en) * 1993-10-14 1996-10-08 Xerox Corporation System and method for generating place holders to temporarily suspend execution of a selected command
WO2016196042A1 (en) * 2015-06-05 2016-12-08 Apple Inc. Devices and methods for processing touch inputs over multiple regions of a touch-sensitive surface

Also Published As

Publication number Publication date
WO2018213414A1 (en) 2018-11-22
EP3593235A1 (en) 2020-01-15
CN111026306A (en) 2020-04-17
CN110651242B (en) 2023-07-11
CN110651242A (en) 2020-01-03

Similar Documents

Publication Publication Date Title
US11556201B2 (en) Device, method, and user interface for processing intensity of touch contacts
US11747975B2 (en) Devices, methods, and graphical user interfaces for touch input processing
US10976917B2 (en) Devices and methods for interacting with an application switching user interface
US20240069716A1 (en) Devices and Methods for Interacting with an Application Switching User Interface
CN112166402A (en) Apparatus and method for processing input using gesture recognizer
CN111026306B (en) Apparatus, method and graphical user interface for touch input processing
US10540071B2 (en) Device, method, and graphical user interface for displaying a zoomed-in view of a user interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant